Step-by-Step Installation of Oracle RAC on AIX


Step-By-Step Installation of RAC on IBM AIX (RS/6000)

Purpose: This document provides the reader with step-by-step instructions on how to install a cluster, install Oracle Real Application Clusters (RAC) and start a cluster database on IBM AIX with HACMP/ES (CRM) 4.4.x. For additional explanation or information on any of these steps, please see the references. This note does not cover the IBM SP2 platform.

Disclaimer: If there are errors or issues prior to step 3.3, please contact IBM Support. The information contained here is as accurate as possible at the time of writing.

· 1. Configuring the Cluster Hardware
  o 1.1 Minimal Hardware List / System Requirements
    § 1.1.1 Hardware
    § 1.1.2 Software
    § 1.1.3 Patches
  o 1.2 Installing Disk Arrays
  o 1.3 Installing Cluster Interconnect and Public Network Hardware
· 2. Creating a Cluster
  o 2.1 HACMP/ES Software Installation
  o 2.2 Configuring Cluster Topology
  o 2.3 Synchronizing Cluster Topology
  o 2.4 Configuring Cluster Resources
    § 2.4.1 Create Volume Groups to Be Shared Concurrently on One Node
    § 2.4.2 Create Shared Raw Logical Volumes
    § 2.4.3 Import the Volume Group on to the Other Nodes
    § 2.4.4 Add a Concurrent Cluster Resource Group
    § 2.4.5 Configure the Concurrent Cluster Resource Group
    § 2.4.6 Creating Parallel Filesystems (GPFS)
  o 2.5 Synchronizing Cluster Resources
  o 2.6 Join Nodes Into the Cluster
  o 2.7 Basic Cluster Administration
· 3. Preparing for the Installation of RAC
  o 3.1 Configure the Shared Disks and UNIX Preinstallation Tasks
    § 3.1.1 Configure the Shared Disks
    § 3.1.2 UNIX Preinstallation Tasks
  o 3.2 Using the Oracle Universal Installer for Real Application Clusters
  o 3.3 Create a RAC Database Using the Oracle Database Configuration Assistant
· 4. Administering Real Application Clusters Instances
· 5. References

1. Configuring the Cluster Hardware

1.1 Minimal Hardware List / System Requirements

For a two-node cluster the following would be a minimum recommended hardware list. Check the RAC / IBM AIX certification matrix for updates on currently supported hardware and software.

1.1.1 Hardware
· IBM servers: two IBM servers capable of running AIX 4.3.3 or 5L, 64-bit.

· For IBM or third-party storage products, cluster interconnects, public networks, switch options, and memory, swap and CPU requirements, consult the operating system vendor or hardware vendor.
· Memory, swap and CPU requirements:
  o Each server must have a minimum of 512 MB of memory, and at least 1 GB of swap space or twice the physical memory, whichever is greater.
    To determine system memory:  $ /usr/sbin/lsattr -E -l sys0 -a realmem
    To determine swap space:     $ /usr/sbin/lsps -a
  o 64-bit processors are required.

1.1.2 Software
· When using IBM AIX 4.3.3:
  o HACMP/ES CRM 4.4.x
  o Only raw logical volumes (raw devices) are supported for database files
  o Oracle Server Enterprise Edition 9i Release 1 (9.0.1) or 9i Release 2 (9.2.0)
· When using IBM AIX 5.1 (5L):
  o For database files residing on raw logical volumes: HACMP/ES CRM 4.4.x
  o For database files residing on the Parallel Filesystem (GPFS): HACMP/ES 4.4.x (HACMP/CRM is not required), GPFS 1.5, IBM patch PTF12 and IBM patch IY34917, or IBM patch PTF13
  o Oracle Server Enterprise Edition 9i Release 2 (9.2.0)
  o Oracle Server Enterprise Edition 9i for AIX 4.3.3 and 5L are in separate CD packs and include Real Application Clusters (RAC)

1.1.3 Patches
The IBM cluster nodes might require patches in the following areas:
· IBM AIX operating environment patches
· Storage firmware patches or microcode updates
Patching considerations:
· Make sure all cluster nodes have the same patch levels
· Do not install any firmware-related patches without qualified assistance
· Always obtain the most current patch information
· Read all patch readme notes carefully
· For a list of required operating system patches, check the source in Note 211537.1 and contact IBM Corporation for additional patch requirements.
To see all currently installed patches, use the following command:
% /usr/sbin/instfix -i
To verify installation of a specific patch, use:
% /usr/sbin/instfix -ivk <APAR>
e.g.: % /usr/sbin/instfix -ivk IY30927
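The per-patch check above can also be scripted. The following is a minimal ksh sketch that loops over a list of APAR numbers and reports any that are missing; the APAR list shown is a placeholder only and should be built from Note 211537.1 and IBM's patch requirements for your AIX level, and it assumes that instfix returns a non-zero exit status when a fix is not fully installed.

#!/bin/ksh
# check_apars.sh - report required AIX APARs that are not yet installed.
# APAR list below is illustrative only; replace it with your own list.
APARS="IY30927 IY34917"

for apar in $APARS
do
    # instfix -ik normally exits non-zero when the fix is not installed
    if /usr/sbin/instfix -ik $apar > /dev/null 2>&1
    then
        echo "$apar : installed"
    else
        echo "$apar : MISSING"
    fi
done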

1.2 Installing Disk Arrays
Follow the procedures for an initial installation of the disk enclosures or arrays prior to installing the IBM AIX operating system environment and the HACMP software. Perform this procedure in conjunction with the procedures in the HACMP for AIX 4.x.1 Installation Guide and your server hardware manual.

1.3 Installing Cluster Interconnect and Public Network Hardware
The cluster interconnect and public network interfaces do not need to be configured prior to the HACMP installation, but they must be configured and available before the cluster can be configured. Install host adapters in your cluster nodes; for the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware. Install the transport cables (and optionally, transport junctions), depending on how many nodes are in your cluster:
· A cluster with more than two nodes requires two cluster transport junctions. These transport junctions are Ethernet-based switches (customer-supplied).
You install the cluster software and configure the interconnect after you have installed all other hardware.

2. Creating a Cluster

2.1 IBM HACMP/ES Software Installation
The HACMP/ES 4.x.x installation and configuration process is completed in several major steps. The general process is:
· Install hardware
· Install the IBM AIX operating system software

· Install the latest IBM AIX maintenance level and required patches
· Install HACMP/ES 4.x.x on each node
· Install HACMP/ES required patches
· Configure the cluster topology
· Synchronize the cluster topology
· Configure cluster resources
· Synchronize cluster resources

Follow the instructions in the HACMP for AIX 4.x.x Installation Guide for detailed instructions on installing the required HACMP packages. The required/suggested packages include the following:
· cluster.adt.es.client.demos
· cluster.adt.es.client.include
· cluster.adt.es.server.demos
· cluster.clvm.rte            HACMP for AIX Concurrent Access
· cluster.cspoc.cmds          HACMP CSPOC Commands
· cluster.cspoc.dsh           HACMP CSPOC dsh and perl
· cluster.cspoc.rte           HACMP CSPOC Runtime Commands
· cluster.es.client.lib       ES Client Libraries
· cluster.es.client.rte       ES Client Runtime
· cluster.es.client.utils     ES Client Utilities
· cluster.es.clvm.rte         ES for AIX Concurrent Access
· cluster.es.cspoc.cmds       ES CSPOC Commands
· cluster.es.cspoc.dsh        ES CSPOC dsh and perl
· cluster.es.cspoc.rte        ES CSPOC Runtime Commands
· cluster.es.hc.rte           ES HC Daemon
· cluster.es.server.diag      ES Server Diags
· cluster.es.server.events    ES Server Events
· cluster.es.server.rte       ES Base Server Runtime
· cluster.es.server.utils     ES Server Utilities
· cluster.hc.rte              HACMP HC Daemon
· cluster.msg.en_US.cspoc     HACMP CSPOC Messages - U.S. English
· cluster.msg.en_US.es.client
· cluster.msg.en_US.es.server
· cluster.msg.en_US.haview    HACMP HAView Messages - U.S. English
· cluster.vsm.es              ES VSM Configuration Utility

· cluster.man.en_US.client.data
· cluster.man.en_US.cspoc.data
· cluster.man.en_US.es.data   ES man pages - U.S. English
· cluster.man.en_US.server.data
· rsct.basic.hacmp            RS/6000 Cluster Technology
· rsct.basic.rte              RS/6000 Cluster Technology
· rsct.basic.sp               RS/6000 Cluster Technology
· rsct.clients.hacmp          RS/6000 Cluster Technology
· rsct.clients.rte            RS/6000 Cluster Technology
· rsct.clients.sp             RS/6000 Cluster Technology

You can verify the installed HACMP software with the "clverify" command:
# /usr/sbin/cluster/diag/clverify
At the "clverify>" prompt enter "software". At the "clverify.software>" prompt enter "lpp". You should see a message similar to:
Checking AIX files for HACMP for AIX-specific modifications...
* /etc/inittab not configured for HACMP for AIX.
If IP Address Takeover is configured, or the Cluster Manager is to be started on boot, then /etc/inittab must contain the proper HACMP for AIX entries.
Command completed.
--------- Hit Return To Continue ---------
Contact IBM Support if there were any failure messages or problems executing the "clverify" command.

2.2 Configuring the Cluster Topology
Use the "smit hacmp" command:
# smit hacmp
Note: the following is a generic HACMP configuration to be used as an example only. See the HACMP installation and planning documentation for specific examples. All questions concerning the configuration of your cluster should be directed to IBM Support. This configuration does not include an example of an IP takeover network. "smit" fastpaths are used to navigate the "smit hacmp" configuration menus; each one of these configuration screens is reachable from "smit hacmp". All configuration is done from one node and then synchronized to the other participating nodes.

Add the cluster definition:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Cluster -> Add a Cluster Definition
FastPath: # smit cm_config_cluster.add
Add a Cluster Definition. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
** NOTE: Cluster Manager MUST BE RESTARTED in order for changes to be acknowledged **
  Cluster ID     [0]
* Cluster Name   [cluster1]
The "Cluster ID" and "Cluster Name" are arbitrary. The "Cluster ID" must be a valid number between 0 and 99999, and the "Cluster Name" can be any alpha string up to 32 characters in length.

Configuring nodes:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Nodes -> Add Cluster Nodes
FastPath: # smit cm_config_nodes.add
Add Cluster Nodes. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Node Names   [node1 node2]
"Node Names" should be the hostnames of the nodes. They must be alphanumeric and contain no more than 32 characters.

All nodes participating in the cluster must be entered on this screen, separated by a space.

Next to be configured are the network adapters. This example uses two Ethernet adapters on each node, as well as one RS232 serial port on each node for heartbeat:

Node    Address        IP Label (/etc/hosts)   Type
node1   192.168.0.1    node1srvc               service
node1   192.168.1.1    node1stby               standby
node1   /dev/tty0                              serial
node2   192.168.0.2    node2srvc               service
node2   192.168.1.2    node2stby               standby
node2   /dev/tty0                              serial
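Each adapter IP label used in the HACMP configuration must resolve through /etc/hosts on every node. A minimal /etc/hosts fragment matching the example addresses in the table above might look like the following sketch (the hostname aliases are an assumption; adapt them to your naming standards):

# /etc/hosts - example entries for the two-node cluster above
192.168.0.1   node1srvc   node1
192.168.1.1   node1stby
192.168.0.2   node2srvc   node2
192.168.1.2   node2stby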

The following screens show the configuration settings needed to configure the above networks into the cluster configuration:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Adapters -> Add an Adapter
FastPath: # smit cm_config_adapters.add
Add an Adapter. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label          [node1srvc]
* Network Type              [ether]
* Network Name              [ipa]
* Network Attribute         public
* Adapter Function          service
  Adapter Identifier        []
  Adapter Hardware Address  []
  Node Name                 [node1]
It is important to note that the "Adapter IP Label" must match what is in the "/etc/hosts" file; otherwise the adapter will not map to a valid IP address and the cluster will not synchronize. The "Network Name" is an arbitrary name for the network configuration. All the adapters in this ether configuration should have the same "Network Name". This name is used to determine which adapters will be used in the event of an adapter failure.

Add an Adapter. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label          [node1stby]
* Network Type              [ether]
* Network Name              [ipa]
* Network Attribute         public
* Adapter Function          standby
  Adapter Identifier        []
  Adapter Hardware Address  []
  Node Name                 [node1]

Add an Adapter. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label          [node2srvc]
* Network Type              [ether]
* Network Name              [ipa]
* Network Attribute         public
* Adapter Function          service
  Adapter Identifier        []
  Adapter Hardware Address  []
  Node Name                 [node2]

Add an Adapter. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label          [node2stby]
* Network Type              [ether]
* Network Name              [ipa]
* Network Attribute         public
* Adapter Function          standby
  Adapter Identifier        []
  Adapter Hardware Address  []
  Node Name                 [node2]

The following is the serial configuration:

Add an Adapter. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label          [node1_tty]
* Network Type              [rs232]
* Network Name              [serial]
* Network Attribute         serial
* Adapter Function          service
  Adapter Identifier        [/dev/tty0]
  Adapter Hardware Address  []
  Node Name                 [node1]

Add an Adapter. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label          [node2_tty]
* Network Type              [rs232]
* Network Name              [serial]
* Network Attribute         serial
* Adapter Function          service
  Adapter Identifier        [/dev/tty0]
  Adapter Hardware Address  []
  Node Name                 [node2]

Since this serial connection is not on the same network as the Ethernet adapters, its "Network Name" is different. The same serial network name is used for the tty adapter on both nodes.

Use "smit mktty" to configure the rs232 adapters:
# smit mktty

Add a TTY. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
  TTY type                                     tty
  TTY interface                                rs232
  Description                                  Asynchronous Terminal
  Parent adapter                               sa0
* PORT number                                  [0]
  Enable LOGIN                                 disable
  BAUD rate                                    [9600]
  PARITY                                       [none]
  BITS per character                           [8]
  Number of STOP BITS                          [1]
  TIME before advancing to next port setting   [0]
  TERMINAL type                                [dumb]
  FLOW CONTROL to be used                      [xon]
[MORE...31]
Be sure that "Enable LOGIN" is set to the default of "disable". The "PORT number" value determines the device name: if you define the port number as "0" the device will be "/dev/tty0".

2.3 Synchronizing the Cluster Topology
After the topology is configured it needs to be synchronized. The synchronization performs topology sanity checks and pushes the configuration data to each of the nodes in the cluster configuration. For the synchronization to work, user equivalence must be configured for the root user. There are several ways to do this. One way is to create a ".rhosts" file in the "/" directory on each node. Example of a ".rhosts" file:
node1 root
node2 root
Be sure the permissions on the "/.rhosts" file are 600:
# chmod 600 /.rhosts
Use a remote command such as "rcp" to test remote command execution:
from node1: # rcp /etc/group node2:/tmp
from node2: # rcp /etc/group node1:/tmp
View your IBM operating system documentation for more information, or contact IBM Support if you have problems establishing user equivalence for the root user.
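Before attempting the synchronization it can save time to confirm root equivalence against every node in one pass. The following ksh sketch assumes the node names node1 and node2 from the example topology and uses a simple remote command as the test:

#!/bin/ksh
# check_equiv.sh - verify that root can run remote commands on every
# cluster node without a password prompt (required for synchronization).
NODES="node1 node2"

for node in $NODES
do
    if rsh $node date > /dev/null 2>&1
    then
        echo "$node : root equivalence OK"
    else
        echo "$node : root equivalence FAILED - check /.rhosts"
    fi
done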

Smit HACMP -> Cluster Configuration -> Cluster Topology -> Synchronize Cluster Topology
FastPath: # smit configchk.dialog
Synchronize Cluster Topology. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
  Ignore Cluster Verification Errors?   [No]
* Emulate or Actual?                    [Actual]
Note: only the local node's default configuration files keep the changes you make for topology DARE emulation. Once you run your emulation, to restore the original configuration rather than running an actual DARE, run the SMIT command "Restore System Default Configuration from Active Configuration." We recommend that you make a snapshot before running an emulation, just in case uncontrolled cluster events happen during emulation.
NOTE: if the Cluster Manager is active on this node, synchronizing the cluster topology will cause the Cluster Manager to make the changes take effect once the synchronization has successfully completed.
[BOTTOM]

2.4 Configuring Cluster Resources
In a RAC configuration only one resource group is required. This resource group is a concurrent group for the shared volume group. The following are the steps to add a concurrent resource group for a shared volume group. First, there needs to be a volume group that is shared between the nodes.

Shared logical volume manager, shared concurrent disks (no VSD): the two instances of the same cluster database have concurrent access to the same external disks. This is real concurrent access, not a shared one as in the VSD environment. Because several instances access the same files and data at the same time, locks have to be managed.

These locks, at the CLVM layer (including the memory cache), are managed by HACMP.

1) Check that the target disks are physically linked to the two machines of the cluster and seen by both. Type the lspv command on both machines. Note: the hdisk number can be different, depending on the other node's disk configuration. Use the second field of the lspv output (the PVID) to be sure you are dealing with the same physical disk on the two hosts. Although hdisk inconsistency may not be a problem, IBM suggests using ghost disks to ensure that hdisk numbers match between the nodes. Contact IBM for further information on this topic.

2.4.1 Create volume groups to be shared concurrently on one node
# smit vg
Select "Add a Volume Group". Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
  VOLUME GROUP name                                [oracle_vg]
  Physical partition SIZE in megabytes             32
* PHYSICAL VOLUME names                            [hdisk5]
  Activate volume group AUTOMATICALLY at system restart?   no
  Volume Group MAJOR NUMBER                        [57]
  Create VG Concurrent Capable?                    yes
  Auto-varyon in Concurrent Mode?                  no
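For reference, a roughly equivalent command-line invocation for the smit screen above is sketched below. The flag meanings are assumptions based on the standard AIX mkvg command and should be verified against the documentation for your AIX and HACMP levels before use.

#!/bin/ksh
# Sketch only: create the concurrent-capable volume group from the
# command line instead of smit. Verify the flags on your AIX level.
#   -y  volume group name       -s  partition size in MB
#   -C  concurrent capable      -V  major number (free on all nodes)
#   -n  do not activate automatically at system restart
mkvg -y oracle_vg -C -V 57 -s 32 -n hdisk5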

The "PHYSICAL VOLUME names" must be physical disks that are shared between the nodes. We do not want the volume group automatically activated at system startup, because HACMP activates it. Likewise, "Auto-varyon in Concurrent Mode?" should be set to "no" because HACMP varies it on in concurrent mode. You must choose the major number explicitly, to be sure the volume group has the same major number on all the nodes (attention: before choosing this number, you must be sure it is free on all the nodes). To check the major numbers already defined, type:
% ls -al /dev/*
crw-rw----   1 root   system   57, 0 Aug 02 13:39 /dev/oracle_vg
Here the major number for the oracle_vg volume group is 57. Ensure that 57 is available on all the other nodes and is not used by another device; if it is free, use the same number on all nodes. On this volume group, create all the logical volumes and file systems you need for the cluster database.

2.4.2 Create Shared Raw Logical Volumes
Create the shared raw logical volumes if you are not using GPFS (see section 2.4.6 for details about GPFS):

mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_cntrl2_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_system_400m' -w'n' -s'n' -r'n' usupport_vg 13 hdisk5
mklv -y'db_name_users_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_drsys_90m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_tools_12m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_temp_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_undotbs1_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_undotbs2_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_log11_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log12_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log21_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log22_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_indx_70m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_cwmlite_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_example_160m' -w'n' -s'n' -r'n' usupport_vg 5 hdisk5
mklv -y'db_name_oemrepo_20m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_spfile_5m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_srvmconf_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5

Substitute your database name in place of the "db_name" value (and your own shared volume group name in place of "usupport_vg"). When the volume group was created, a partition size of 32 megabytes was used. The second-to-last field is the number of partitions that make up the logical volume, so for example if "db_name_cntrl1_110m" needs to be 110 megabytes we would need 4 partitions. The raw devices are created in the "/dev" directory, and it is the character devices that will be used. The command "mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5" creates two device files: /dev/db_name_cntrl1_110m and /dev/rdb_name_cntrl1_110m.
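Because every raw logical volume above is created with the same mklv options, the list can also be driven from a small table of name and partition-count pairs. The following ksh sketch simply reuses the names, counts, volume group and disk already shown; it is an illustration only and should be adjusted to your own layout before running.

#!/bin/ksh
# make_raw_lvs.sh - create the shared raw logical volumes listed above.
# Each entry is "<logical volume name>:<number of 32 MB partitions>".
VG=usupport_vg
DISK=hdisk5
LVS="db_name_cntrl1_110m:4 db_name_cntrl2_110m:4 db_name_system_400m:13 \
     db_name_users_120m:4 db_name_drsys_90m:3 db_name_tools_12m:1 \
     db_name_temp_100m:4 db_name_undotbs1_312m:10 db_name_undotbs2_312m:10 \
     db_name_log11_120m:4 db_name_log12_120m:4 db_name_log21_120m:4 \
     db_name_log22_120m:4 db_name_indx_70m:3 db_name_cwmlite_100m:4 \
     db_name_example_160m:5 db_name_oemrepo_20m:1 db_name_spfile_5m:1 \
     db_name_srvmconf_100m:4"

for entry in $LVS
do
    name=${entry%:*}      # logical volume name
    parts=${entry#*:}     # number of physical partitions
    mklv -y "$name" -w n -s n -r n $VG $parts $DISK
done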

Change the permissions (ownership) on the character devices so that the software owner owns them:
# chown oracle:dba /dev/rdb_name*

2.4.3 Import the Volume Group on to the Other Nodes
Use "importvg" to import the oracle_vg volume group on all of the other nodes. On the first machine, type:
% varyoffvg oracle_vg
On the other nodes, import the definition of the volume group using "smit vg" and select "Import a Volume Group". Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
  VOLUME GROUP name                       [oracle_vg]
* PHYSICAL VOLUME name                    [hdisk5]
  Volume Group MAJOR NUMBER               [57]
  Make this VG Concurrent Capable?        no
  Make default varyon of VG Concurrent?   no
It is possible that the physical volume name (hdisk) could be different on each node. Check the PVID of the disk using "lspv", and be sure to pick the hdisk that has the same PVID as the disk used to create the volume group on the first node. Also make sure the same major number is used; this number must be free on all the nodes. The "Make default varyon of VG Concurrent?" option should be set to "no". The volume group was created concurrent capable, so the option "Make this VG Concurrent Capable?" can be left at "no".

The command-line sequence for importing the volume group on another node, after varying it off on the node where it was originally created, would be:
% importvg -V <major number> -y <vg name> hdisk<#>
% chvg -a n <vg name>
% varyoffvg <vg name>
After importing the volume group onto each node, be sure to change the ownership of the character devices to the software owner:
# chown oracle:dba /dev/rdb_name*

2.4.4 Add a Concurrent Cluster Resource Group
The shared resource in this example is "oracle_vg". To create the concurrent resource group that will manage "oracle_vg", do the following:
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Define Resource Groups -> Add a Resource Group
FastPath: # smit cm_add_grp
Add a Resource Group. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Group Name        [shared_vg]
* Node Relationship          concurrent
* Participating Node Names   [node1 node2]
The "Resource Group Name" is arbitrary and is used when selecting the resource group for configuration. Because we are configuring a shared resource, the "Node Relationship" is "concurrent", meaning a group of nodes that will share the resource. "Participating Node Names" is a space-separated list of the nodes that will be sharing the resource.

2.4.5 Configure the Concurrent Cluster Resource Group
Once the resource group is added it can be configured with:
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Change/Show Resources for a Resource Group

FastPath: # smit cm_cfg_res.select
Configure Resources for a Resource Group. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
  Resource Group Name                         concurrent_group
  Node Relationship                           concurrent
  Participating Node Names                    opcbaix1 opcbaix2
  Service IP label                            []
  Filesystems                                 []
  Filesystems Consistency Check               fsck
  Filesystems Recovery Method                 sequential
  Filesystems to Export                       []
  Filesystems to NFS mount                    []
  Volume Groups                               []
  Concurrent Volume Groups                    [oracle_vg]
  Raw Disk PVIDs                              [00041486eb90ebb7]
  AIX Connections Services                    []
  AIX Fast Connect Services                   []
  Application Servers                         []
  Highly Available Communication Links        []
  Miscellaneous Data                          []
  Inactive Takeover Activated                 false
  9333 Disk Fencing Activated                 false
  SSA Disk Fencing Activated                  false
  Filesystems mounted before IP configured    false
[BOTTOM]
Note that the settings for "Resource Group Name", "Node Relationship" and "Participating Node Names" come from the data entered in the previous menu. "Concurrent Volume Groups" needs to be a pre-created volume group on shared storage. The "Raw Disk PVIDs" are the physical volume IDs for each of the disks that make up the "Concurrent Volume Groups". It is important to note that you can have one resource group manage multiple concurrent resources; in that case, separate each volume group name with a space, and the "Raw Disk PVIDs" field becomes a space-delimited list of all the physical volume IDs that make up the concurrent volume group list. Alternatively, each volume group can be configured in its own concurrent resource group.

2.4.6 Creating Parallel Filesystems (GPFS)
With AIX 5.1 (5L) you can also place your files on GPFS (raw logical volumes are not a requirement of GPFS). In this case, create GPFS filesystems capable of holding all required database files, control files and log files.

2.5 Synchronizing the Cluster Resources
After configuring the resource group, a resource synchronization is needed.
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Synchronize Cluster Resources
FastPath: # smit clsyncnode.dialog
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
  Ignore Cluster Verification Errors?   [No]
  Un/Configure Cluster Resources?       [Yes]
* Emulate or Actual?                    [Actual]

Note: only the local node's default configuration files keep the changes you make for resource DARE emulation. Once you run your emulation, to restore the original configuration rather than running an actual DARE, run the SMIT command "Restore System Default Configuration from Active Configuration." We recommend that you make a snapshot before running an emulation, just in case uncontrolled cluster events happen during emulation.
[BOTTOM]
Just keep the defaults.

2.6 Joining Nodes Into the Cluster
After the cluster topology and resources are configured, the nodes can join the cluster. It is important to start one node at a time unless you are using C-SPOC (Cluster Single Point of Control). For more information on using C-SPOC, consult IBM's HACMP documentation; the use of C-SPOC is not covered in this document. Start cluster services by doing the following:
Smit HACMP -> Cluster Services -> Start Cluster Services
FastPath: # smit clstart.dialog
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Start now, on system restart or both    now
  BROADCAST message at startup?           false
  Startup Cluster Lock Services?          false
  Startup Cluster Information Daemon?     true
Setting "Start now, on system restart or both" to "now" will start the HACMP daemons immediately.

"restart" will update "/etc/inittab" with an entry to start the daemons at reboot, and "both" will do exactly that: update "/etc/inittab" and start the daemons immediately. "BROADCAST message at startup?" can be either "true" or "false"; if set to "true" a wall-type message will be displayed when the node is joining the cluster. "Startup Cluster Lock Services?" should be set to "false" for a RAC configuration; setting this parameter to "true" will not prevent the cluster from working, but the added daemon is not used. If "clstat" is going to be used to monitor the cluster, the "Startup Cluster Information Daemon?" option needs to be set to "true" when starting cluster services. When the node has joined the cluster, its "/tmp/hacmp.out" file will show an event completion similar to the following, and the other nodes will report the successful join in their own "/tmp/hacmp.out" files:
May 23 09:34:11 EVENT COMPLETED: node_up_complete node1

2.7 Basic Cluster Administration
The "/tmp/hacmp.out" file is the best place to look for cluster information. "clstat" can also be used to verify cluster health. The "clstat" program can take a while to update with the latest cluster information and at times does not work at all; also, you must have the "Startup Cluster Information Daemon?" option set to "true" when starting cluster services. Use the following command to start "clstat":
# /usr/es/sbin/cluster/clstat

clstat - HACMP for AIX Cluster Status Monitor
---------------------------------------------
Cluster: cluster1 (0)          Tue Jul  2 08:38:06 EDT 2002
State: UP
Node: node1    State: UP
   Interface: node1 (0)   Address: 192.168.0.1   State: UP
Node: node2    State: UP
   Interface: node2 (0)   Address: 192.168.0.2   State: UP

One other way to check the cluster status is by querying the "snmpd" daemon with "snmpinfo":
# /usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs -v clusterSubState.0
This should return "32":
clusterSubState.0 = 32
If other values are returned from any node, consult your IBM HACMP documentation or contact IBM Support. You can get a quick view of the HACMP-specific daemons with:
Smit HACMP -> Cluster Services -> Show Cluster Services
COMMAND STATUS
Command: OK   stdout: yes   stderr: no
Before command completion, additional instructions may appear below.
Subsystem      Group     PID     Status
clstrmgrES     cluster   22000   active
clinfoES       cluster   21394   active
clsmuxpdES     cluster   14342   active
cllockdES      lock              inoperative
clresmgrdES              29720   active
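Because the cluster can take a little while to stabilize after a node joins, it is convenient to wait until the SNMP cluster substate reports 32 (stable) before starting Oracle instances. The following ksh sketch simply polls the snmpinfo query shown above; the timeout values are arbitrary examples.

#!/bin/ksh
# wait_stable.sh - poll the HACMP cluster substate until it reports 32
# (stable), checking every 10 seconds for up to 5 minutes.
i=0
while [ $i -lt 30 ]
do
    state=`/usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs \
           -v clusterSubState.0 2>/dev/null | awk '{print $NF}'`
    if [ "$state" = "32" ]
    then
        echo "Cluster substate is 32 (stable)"
        exit 0
    fi
    sleep 10
    i=`expr $i + 1`
done
echo "Cluster did not reach substate 32 - check /tmp/hacmp.out"
exit 1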

Starting and Stopping Cluster Nodes
To join nodes into the cluster use:
Smit HACMP -> Cluster Services -> Start Cluster Services
See section 2.6 for more information on joining a node into the cluster. Use the following to evict (stop) a node from the cluster:
Smit HACMP -> Cluster Services -> Stop Cluster Services
FastPath: # smit clstop.dialog
Stop Cluster Services. Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Stop now, on system restart or both    now
  BROADCAST cluster shutdown?            true
* Shutdown mode                          graceful
  (graceful, graceful with takeover, forced)
See section 2.6 "Joining Nodes Into the Cluster" for an explanation of "Stop now, on system restart or both" and "BROADCAST cluster shutdown?". The "Shutdown mode" determines whether or not resources move between nodes when a shutdown occurs. "forced" is new with HACMP 4.4.1 and will leave applications that are controlled by HACMP events running when the shutdown occurs. "graceful" will bring everything down, but cascading and rotating resources are not switched, whereas with "graceful with takeover" these resources are switched at shutdown.

Log Files for HACMP/ES
All cluster reconfiguration information during cluster startup and shutdown goes into "/tmp/hacmp.out".

3.0 Preparing for the Installation of RAC
The Real Application Clusters installation process includes the following major tasks:
1. Configure the shared disks and UNIX preinstallation tasks.
2. Run the Oracle Universal Installer to install the Oracle9i Enterprise Edition and the Oracle9i Real Application Clusters software.
3. Create and configure your database.

3.1 Configure the Shared Disks and UNIX Preinstallation Tasks

3.1.1 Configure the shared disks
Real Application Clusters requires that all instances be able to access a set of unformatted devices on a shared disk subsystem if GPFS is not being used. These shared disks are also referred to as raw devices. If your platform supports an Oracle-certified cluster file system, you can instead store the files that Real Application Clusters requires directly on the cluster file system. Note: if you are using the Parallel Filesystem (GPFS), you can store the files that Real Application Clusters requires directly on the cluster file system.
The Oracle instances in Real Application Clusters write data onto the raw devices to update the control file, the server parameter file, each datafile, and each online redo log file; all instances in the cluster share these files. The Oracle instances in the RAC configuration write information to raw devices defined for:
· the control files
· the spfile.ora
· each datafile
· each online redo log file
· Server Manager (SRVM) configuration information
It is therefore necessary to define raw devices for each of these categories of file. The Oracle Database Configuration Assistant (DBCA) will create a seed database expecting the following configuration:

Raw Volume                               File Size     Sample File Name
SYSTEM tablespace                        400 MB        db_name_raw_system_400m
USERS tablespace                         120 MB        db_name_raw_users_120m
TEMP tablespace                          100 MB        db_name_raw_temp_100m
UNDOTBS tablespace (one per instance)    312 MB        db_name_raw_undotbsx_312m
CWMLITE tablespace                       100 MB        db_name_raw_cwmlite_100m
EXAMPLE tablespace                       160 MB        db_name_raw_example_160m
OEMREPO tablespace                       20 MB         db_name_raw_oemrepo_20m
INDX tablespace                          70 MB         db_name_raw_indx_70m
TOOLS tablespace                         12 MB         db_name_raw_tools_12m
DRSYS tablespace                         90 MB         db_name_raw_drsys_90m
First control file                       110 MB        db_name_raw_controlfile1_110m
Second control file                      110 MB        db_name_raw_controlfile2_110m
Two online redo log files per instance   120 MB x 2    db_name_thread_lognumber_120m
spfile.ora                               5 MB          db_name_raw_spfile_5m
srvmconfig                               100 MB        db_name_raw_srvmconf_100m

Note: Automatic Undo Management requires an undo tablespace per instance; you would therefore require a minimum of two such tablespaces as described above. By following the naming convention shown in the table, raw partitions are identified with the database and the raw volume type (the data contained in the raw volume); the raw volume size is also identified using this method. Note: in the sample names listed in the table, the string db_name should be replaced with the actual database name, thread is the thread number of the instance, and lognumber is the log number within a thread.

On the node from which you run the Oracle Universal Installer, create an ASCII file identifying the raw volume objects as shown above. The DBCA requires that these objects exist during installation and database creation. When creating the ASCII file content for the objects, name them using the format database_object=raw_device_file_path, separating the database objects from the paths with equals (=) signs as shown in the example below:

system1=/dev/rdb_name_system_400m
spfile1=/dev/rdb_name_spfile_5m
users1=/dev/rdb_name_users_120m
temp1=/dev/rdb_name_temp_100m
undotbs1=/dev/rdb_name_undotbs1_312m
undotbs2=/dev/rdb_name_undotbs2_312m
example1=/dev/rdb_name_example_160m
cwmlite1=/dev/rdb_name_cwmlite_100m
indx1=/dev/rdb_name_indx_70m
tools1=/dev/rdb_name_tools_12m
drsys1=/dev/rdb_name_drsys_90m
control1=/dev/rdb_name_cntrl1_110m
control2=/dev/rdb_name_cntrl2_110m
redo1_1=/dev/rdb_name_log11_120m
redo1_2=/dev/rdb_name_log12_120m
redo2_1=/dev/rdb_name_log21_120m
redo2_2=/dev/rdb_name_log22_120m

You must specify that Oracle should use this file to determine the raw device volume names by setting the following environment variable, where filename is the name of the ASCII file that contains the entries shown in the example above:
csh:                setenv DBCA_RAW_CONFIG filename
ksh, bash or sh:    DBCA_RAW_CONFIG=filename; export DBCA_RAW_CONFIG
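Before handing the mapping file to the DBCA it is worth confirming that every raw device it references actually exists as a character device and is owned by the oracle user. The following ksh sketch assumes the object=raw_device format shown above and an example file location; adjust both to your environment.

#!/bin/ksh
# check_raw_config.sh - verify the raw devices listed in the DBCA
# mapping file exist as character devices (check the ownership shown).
CFG=${DBCA_RAW_CONFIG:-/home/oracle/dbca_raw_config}   # assumed location

awk -F= 'NF == 2 {print $2}' "$CFG" | while read device
do
    if [ -c "$device" ]
    then
        ls -l "$device"
    else
        echo "MISSING raw device: $device"
    fi
done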

3.1.2 UNIX Preinstallation Steps
Note: in addition, you can run the InstallPrep.sh script provided in Note 189256.1, which catches most UNIX environment problems. After configuring the raw volumes, perform the following steps as the root user prior to installation.

Add the oracle user
· Make sure you have an OSDBA group defined in the /etc/group file on all nodes of your cluster. To designate an OSDBA group name and group number and an OSOPER group during installation, these group names must be identical on all nodes of your UNIX cluster that will be part of the Real Application Clusters database. The default UNIX group name for the OSDBA and OSOPER groups is dba. There also needs to be an oinstall group, which the software owner should have as its primary group. A typical pair of entries would therefore look like the following:
dba::101:oracle
oinstall::102:root,oracle
The following is an example of the command used to create the "dba" group with a group ID of 101:
# mkgroup -'a' id='101' users='oracle' dba

· Create an oracle account on each node so that the account:
  o is a member of the OSDBA group (dba in this example)
  o has oinstall as its primary group
  o is used only to install and update Oracle software
  o has write permissions on remote directories
The following is an example of the smit dialog used to create the "oracle" user:
Smit -> Security & Users -> Users -> Add a User
FastPath: # smit mkuser
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* User NAME                      [oracle]
  User ID                        [101]
  ADMINISTRATIVE USER?           false
  Primary GROUP                  [oinstall]
  Group SET                      []
  ADMINISTRATIVE GROUPS          []
  ROLES                          []
  Another user can SU TO USER?   true
  SU GROUPS                      [ALL]
  HOME directory                 [/home/oracle]
  Initial PROGRAM                []
  EXPIRATION date (MMDDhhmmyy)   [0]
Note that the primary group is not "dba"; the use of "oinstall" is optional but recommended. For more information on the use of the "oinstall" group, see the Oracle9i Installation Guide Release 2 (9.2.0) for UNIX Systems (AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel and Sun SPARC Solaris) documentation.
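The smit screen above can also be expressed as a single mkuser command. The sketch below uses the example values already shown; the attribute names are the standard AIX user-administration attributes and the command should be adapted to your site standards and run as root on each node.

# Create the oracle software owner with oinstall as its primary group
# and dba as a secondary group (example values only).
mkuser id=101 pgrp=oinstall groups=dba,oinstall home=/home/oracle oracle

# Set an initial password and verify the attributes that were applied.
passwd oracle
lsuser -a id pgrp groups home oracle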

· Create a mount point directory on each node to serve as the top of your Oracle software directory structure, so that:
  o the name of the mount point on each node is identical to that on the initial node
  o the oracle account has read, write, and execute privileges
· On the node from which you will run the Oracle Universal Installer, set up user equivalence by adding entries for all nodes in the cluster, including the local node, to the .rhosts file of the oracle account, or to the /etc/hosts.equiv file.
· As the oracle account user, check for user equivalence for the oracle account by performing a remote login (rlogin) to each node in the cluster.
· If you are prompted for a password, you have not given the oracle account the same attributes on all nodes. You must correct this, because the Oracle Universal Installer cannot use the rcp command to copy Oracle products to the remote nodes' directories without user equivalence.

Establish system environment variables
· Set a local bin directory, such as /usr/local/bin or /opt/bin, in the user's PATH. It is necessary to have execute permissions on this directory.
· Set the DISPLAY variable to point to the IP address or name, X server, and screen of the system from which you will run the OUI.
· Set a temporary directory path for TMPDIR, with at least 20 MB of free space, to which the OUI has write permission.

Establish Oracle environment variables
Set the following Oracle environment variables:

Environment Variable   Suggested Value
ORACLE_BASE            e.g. /u01/app/oracle
ORACLE_HOME            e.g. /u01/app/oracle/product/901
ORACLE_TERM            xterm
NLS_LANG               AMERICAN_AMERICA.UTF8 (for example)
ORA_NLS33              $ORACLE_HOME/ocommon/nls/admin/data
PATH                   should contain $ORACLE_HOME/bin
CLASSPATH              $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
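These variables are typically set in the oracle user's profile so they are in place both for the installer and for later administration. The fragment below is only a sketch of one possible ~/.profile layout; the paths, display name and release directory (901 versus 920) are examples and must match your own installation.

# ~/.profile fragment for the oracle user (example values only)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/901
export ORACLE_TERM=xterm
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$ORACLE_HOME/bin:/usr/local/bin:$PATH
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp                   # needs at least 20 MB free
export DISPLAY=workstation:0.0       # X server that will display the OUI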

· Create the directory /var/opt/oracle and set its ownership to the oracle user.
· Verify the existence of the file /opt/SUNWcluster/bin/lkmgr. This is used by the OUI to indicate that the installation is being performed on a cluster.

Note: there is a verification script, InstallPrep.sh, available which may be downloaded and run prior to the installation of Oracle Real Application Clusters. This script verifies that the system is configured correctly according to the Installation Guide. The output of the script reports any further tasks that need to be performed before successfully installing the Oracle 9.x DataServer (RDBMS). The script performs the following verifications:
- ORACLE_HOME directory verification
- UNIX user/umask verification
- UNIX group verification
- Memory/swap verification
- TMP space verification
- Real Application Clusters option verification
- UNIX kernel verification

A sample run looks like the following:
./InstallPrep.sh
You are currently logged on as oracle
Is oracle the unix user that will be installing Oracle Software? y or n
y
Enter the unix group that will be used during the installation
Default: dba
dba
Enter the location where you will be installing Oracle
Default: /u01/app/oracle/product/oracle9i
/u01/app/oracle/product/9.2.0.1
Your Operating System is AIX
Gathering information... please wait
Checking Unix User ... User Test Passed
Checking Unix Umask ... Umask Test Passed
Checking Unix Group ... Unix Group Test Passed
Checking Memory & Swap ... Memory Test Passed
/tmp Test Passed

Checking for a cluster ... AIX Cluster test
Cluster has been detected
You have 2 cluster members configured and 2 are currently up
No cluster warnings detected
Processing kernel parameters ... please wait
Running Kernel Parameter Report ...
Check the report for Kernel parameter verification
Completed.
/tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before attempting to install the Oracle Database Software.

3.2 Using the Oracle Universal Installer for Real Application Clusters
Follow these procedures to use the Oracle Universal Installer to install the Oracle Enterprise Edition and the Real Application Clusters software. Oracle9i is supplied on multiple CD-ROM disks; during the installation process it is necessary to switch between the CD-ROMs, and the OUI manages the switching between CDs. Check the RAC / IBM AIX certification matrix for the latest supported combinations. To install the Oracle software, perform the following:
· Log in as the root user and mount the first CD-ROM, if installing from CD-ROM:
# mount -rv cdrfs /dev/cd0 /cdrom
· Execute the "rootpre.sh" shell script from the CD-ROM mount point, or from the location of Disk1 if installing from a disk stage. See the Oracle9i Installation Guide Release 2 (9.2.0) for UNIX Systems (AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel and Sun SPARC Solaris) documentation for more information on creating disk stages.
# /<location_of_install_media>/rootpre.sh
· Log in as the oracle user and execute "runInstaller". If you experience problems starting the runInstaller, consult Oracle Support.
$ /<location_of_install_media>/runInstaller
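Putting those steps together, a typical launch sequence looks like the sketch below. The CD-ROM mount point and the X display name are examples only and must match your environment.

# (root) mount the first CD-ROM and run the pre-installation script
mount -rv cdrfs /dev/cd0 /cdrom
/cdrom/rootpre.sh

# (oracle, in a separate session) point DISPLAY at your X server and
# start the installer from the mount point
export DISPLAY=workstation:0.0
/cdrom/runInstaller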

· At the OUI Welcome screen, click Next.
· A prompt will appear for the Inventory Location (if this is the first time that OUI has been run on this system). This is the base directory into which OUI will install files. The Oracle Inventory definition can be found in the file /etc/oraInst.loc. Click OK.
· Verify the UNIX group name of the user who controls the installation of the Oracle9i software. If an instruction to run /tmp/orainstRoot.sh appears, the pre-installation steps were not completed successfully. Typically, the /var/opt/oracle directory does not exist or is not writable by oracle. Run /tmp/orainstRoot.sh to correct this, forcing the Oracle Inventory files, and others, to be written to the ORACLE_HOME directory. Once again, this screen only appears the first time Oracle9i products are installed on the system. Click Next.
· The File Locations window will appear. Do NOT change the Source field. The Destination field defaults to the ORACLE_HOME environment variable. Click Next.
· Select the products to install. In this example, select the Oracle9i Server, then click Next.
· Select the installation type. Choose the Enterprise Edition option. The selection on this screen refers to the installation operation, not the database configuration; the next screen allows a customized database configuration to be chosen. Click Next.
· Select the configuration type. In this example choose the Advanced Configuration, as this option provides a database that you can customize and configures the selected server products. Select Customized and click Next.
· Select the other nodes on to which the Oracle RDBMS software will be installed. It is not necessary to select the node on which the OUI is currently running. Click Next.

· Identify the raw partition into which the Oracle9i Real Application Clusters (RAC) configuration information will be written. It is recommended that this raw partition is a minimum of 100 MB in size.
· An option to Upgrade or Migrate an existing database is presented. Do NOT select the radio button. The Oracle Migration utility is not able to upgrade a RAC database, and will raise errors if selected to do so.
· The Summary screen will be presented. Confirm that the RAC database software will be installed and then click Install. The OUI will install the Oracle9i software on the local node, and then copy this information to the other nodes selected.
· Once Install is selected, the OUI installs the Oracle RAC software on the local node and then copies the software to the other nodes selected earlier. This will take some time. During the installation process, the OUI does not display messages indicating that components are being installed on other nodes; I/O activity may be the only indication that the process is continuing.

3.3 Create a RAC Database using the Oracle Database Configuration Assistant
The Oracle Database Configuration Assistant (DBCA) will create a database for you (for an example of manual database creation, see Database Creation in Oracle9i RAC). The DBCA creates your database using the Optimal Flexible Architecture (OFA); this means the DBCA creates your database files, including the default server parameter file, using standard file naming and file placement practices. The primary phases of DBCA processing are:
· Verify that you correctly configured the shared disks for each tablespace (for non-cluster file system platforms)
· Create the database
· Configure the Oracle network services
· Start the database instances and listeners

Oracle Corporation recommends that you use the DBCA to create your database, because the DBCA's preconfigured databases optimize your environment to take advantage of Oracle9i features such as the server parameter file and automatic undo management. The DBCA also enables you to define arbitrary tablespaces as part of the database creation process, so even if you have datafile requirements that differ from those offered in one of the DBCA templates, use the DBCA. You can also execute user-specified scripts as part of the database creation process. The DBCA and the Oracle Net Configuration Assistant also accurately configure your Real Application Clusters environment for various Oracle high availability features and cluster administration tools.
DBCA will launch as part of the installation process, but it can also be run manually by executing the dbca command from the $ORACLE_HOME/bin directory on UNIX platforms.
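If you launch the assistant manually rather than letting the installer do it, remember to export the raw device mapping file first (see section 3.1.1). A sketch as the oracle user follows; the file name and display are examples only.

# Launch the DBCA manually as the oracle user (example values only)
export DBCA_RAW_CONFIG=/home/oracle/dbca_raw_config
export DISPLAY=workstation:0.0
$ORACLE_HOME/bin/dbca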

· The RAC Welcome page displays. Choose the Oracle Cluster Database option and select Next.
· The Operations page is displayed. Choose the option Create a Database and click Next.
· The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next. If nodes are missing from the Node Selection page, perform clusterware diagnostics by executing the $ORACLE_HOME/bin/lsnodes -v command and analyzing its output. Refer to your vendor's clusterware documentation if the output indicates that your clusterware is not properly installed. Resolve the problem and then restart the DBCA.
· The Database Templates page is displayed. The templates other than New Database include datafiles. Choose New Database and then click Next. The Show Details button provides information on the template selected.
· DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID). The Global Database Name is typically of the form name.domain, for example mydb.us.oracle.com, while the SID is used to uniquely identify an instance (DBCA should insert a suggested SID, equivalent to name1 where name was entered in the Database Name field). In the RAC case the SID specified will be used as a prefix for the instance number; for example, MYDB would become MYDB1 and MYDB2 for instances 1 and 2 respectively.
· The Database Options page is displayed. Select the options you wish to configure and then choose Next. Note: if you did not choose New Database from the Database Templates page, you will not see this screen.
· The Additional Database Configurations button displays additional database features. Make sure both are checked and click OK.
· Select the connection options desired from the Database Connection Options page. Note: if you did not choose New Database from the Database Templates page, you will not see this screen. Click Next.
· DBCA now displays the Initialization Parameters page. This page comprises a number of tabbed fields. Modify the Memory settings if desired and then select the File Locations tab to update information on the initialization parameter file name and location. Then click Next.
· The option Create persistent initialization parameter file is selected by default. If you have a cluster file system, enter a file system name; otherwise a raw device name for the location of the server parameter file (spfile) must be entered.
· The File Location Variables button displays variable information. Click OK.

· The All Initialization Parameters button displays the Initialization Parameters dialog box. This box presents values for all initialization parameters and indicates, through the Included (Y/N) check box, whether each is to be included in the spfile to be created. Instance-specific parameters have an instance value in the Instance column. Complete entries in the All Initialization Parameters page and select Close. Note: there are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization Parameters page are complete and select Next.
· DBCA now displays the Database Storage window. This page allows you to enter file names for each tablespace in your database.
· The file names are displayed in the Datafiles folder, but are entered by selecting the Tablespaces icon and then selecting the tablespace object from the expanded tree. Any names displayed here can be changed. A configuration file can be used, pointed to by the environment variable DBCA_RAW_CONFIG (see section 3.1.1). Complete the database storage information and click Next.
· The Database Creation Options page is displayed. Ensure that the option Create Database is checked and click Finish.
· The DBCA Summary window is displayed. Review this information and then click OK.
· Once the Summary screen is closed using the OK option, DBCA begins to create the database according to the values specified. A new database now exists. It can be accessed via Oracle SQL*Plus or other applications designed to work with an Oracle RAC database.

4.0 Administering Real Application Clusters Instances
Oracle Corporation recommends that you use SRVCTL to administer your Real Application Clusters database environment. SRVCTL manages configuration information that is used by several Oracle tools; for example, Oracle Enterprise Manager and the Intelligent Agent use the configuration information that SRVCTL generates to discover and monitor nodes in your cluster. Before using SRVCTL, ensure that your Global Services Daemon (GSD) is running after you configure your database. To use SRVCTL, you must have already created the configuration information for the database that you want to administer, either by using the Oracle Database Configuration Assistant (DBCA) or by using the srvctl add command as described below.
If this is the first Oracle9i database created on this cluster, then you must initialize the clusterwide SRVM configuration. First, create or edit the file /var/opt/oracle/srvConfig.loc and add the entry srvconfig_loc=path_name, where the path name is a small cluster-shared raw volume, e.g.:
$ vi /var/opt/oracle/srvConfig.loc
srvconfig_loc=/dev/rrac_srvconfig_100m
Then execute the following command to initialize this raw volume (note: this cannot be run while the GSD is running; prior to 9i Release 2 you will need to kill the .../jre/1.1.8/bin/... process to stop the GSD from running, and from 9i Release 2 use the gsdctl stop command):
$ srvconfig -init
The first time you use the SRVCTL utility to create the configuration, start the Global Services Daemon (GSD) on all nodes so that SRVCTL can access your cluster's configuration information. Then execute the srvctl add command so that Real Application Clusters knows what instances belong to your cluster, using the following syntax.
For Oracle RAC v9.0.1:
$ gsd
Successfully started the daemon on the local node
$ srvctl add db -p db_name -o oracle_home
Then, for each instance, enter the command from either node:

$ srvctl add instance -p db_name -i sid -n node
To display the configuration details for, for example, databases racdb1 and racdb2 on nodes racnode1 and racnode2 with instances racinst1 and racinst2, run:
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1
racnode1 racinst1
racnode2 racinst2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1
Examples of starting and stopping RAC follow:
$ srvctl start -p racdb1
Instance successfully started on node: racnode2
Listeners successfully started on node: racnode2
Instance successfully started on node: racnode1
Listeners successfully started on node: racnode1
$ srvctl stop -p racdb2
Instance successfully stopped on node: racnode2
Instance successfully stopped on node: racnode1
Listener successfully stopped on node: racnode2
Listener successfully stopped on node: racnode1
$ srvctl stop -p racdb1 -i racinst2 -s inst
Instance successfully stopped on node: racnode2
$ srvctl stop -p racdb1 -s inst
Instance successfully stopped on node: racnode2
Instance successfully stopped on node: racnode1

For Oracle RAC v9.2.0:
$ gsdctl start
Successfully started the daemon on the local node
$ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile]
Then, for each instance, enter the command:
$ srvctl add instance -d db_name -i sid -n node
To display the configuration details for, for example, databases racdb1 and racdb2 on nodes racnode1 and racnode2 with instances racinst1 and racinst2, run:
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1 /u01/app/oracle/product/9.2.0.1
$ srvctl status database -d racdb1
Instance racinst1 is running on node racnode1
Instance racinst2 is running on node racnode2
Examples of starting and stopping RAC follow:
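(The commands below are a sketch assuming the standard 9.2 srvctl syntax; the exact messages returned will vary, so verify against the srvctl documentation for your release.)
$ srvctl start database -d racdb1
$ srvctl stop database -d racdb1
$ srvctl start instance -d racdb1 -i racinst2
$ srvctl stop instance -d racdb1 -i racinst2
$ gsdctl stop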

