Foreword
Position
This article is aimed at readers who already have some Linux and Oracle background and a basic familiarity with RAC. It is meant as a reference manual rather than an installation wizard, so it does not walk through the installation steps in detail, but it does give detailed solutions to installation errors.
Coverage
Applies to Red Hat AS 2.1 and AS 3.0 systems
Covers single-node and multi-node installation
Covers upgrading from 9201 to 9204 as well as installing 9204 directly
Covers the installation differences among an ordinary file system (single machine), the OCFS file system, raw devices, and the NFS network file system
Chapter 1. RAC mechanism
RAC originated from the OPS (Oracle Parallel Server) of version 8. OPS/RAC was designed from the start for high availability of the system and the application: one database is opened concurrently by one or (more typically) several Oracle instances. RAC improves on early OPS in many areas, especially inter-node communication and node management. While a RAC is running, every node can be used independently and the application load is balanced across them. If an accident such as a node failure occurs, the workload fails over to the surviving nodes, ensuring 24*7 high availability of the database.
A RAC database must be built on a shared disk device. OPS supported only raw devices, while RAC also supports ordinary file systems (for single-machine simulation), OCFS, raw devices, and NFS network file systems. Because RAC is multiple instances opening one database, each node has its own redo thread, so backup and recovery need some special handling.
However, RAC by itself does not provide disaster recovery: damage to the shared disk device, natural disasters, and similar events cause losses that RAC cannot repair, so RAC is generally combined with other disaster-recovery components, such as RAC plus Data Guard.
Replacing the Distributed Lock Manager (DLM) of previous versions, the Global Cache Service (GCS) and the Global Enqueue Service (GES) are now responsible for RAC management. The GCS synchronization layer allows each instance to access the database independently, while instance-level consistency and lock resources are managed through the database kernel. Most of this work is carried out by several dedicated background processes:
LMON (Lock Monitor Process), the lock monitor process
Monitors the global resources of the whole RAC, manages instance and process failure, and handles recovery for the Global Cache Service and Global Enqueue Service; LMON also provides what are known as Cluster Group Services (CGS).
LMSn (Global Cache Service Processes), the global cache service processes
The LMSn processes handle Global Cache Service messages on behalf of remote nodes. They service the requests received from remote nodes' Global Cache Service and ensure read consistency across instances: LMSn builds a consistent-read image of a block and sends it to the instance on the remote node.
LMD (Global Enqueue Service Daemon), the global enqueue service daemon
The LMD agent process manages Global Enqueue Service resources; it handles resource requests from remote nodes and performs deadlock detection.
Chapter 2. System Requirements for Installing RAC on Linux
2.1 Kernel Requirements
For AS 2.1, the kernel must be 2.4.9-e.16 or later, for example:
[oracle@dbrac oracle]$ uname -a
Linux dbrac 2.4.9-e.37enterprise #1 SMP Mon Jan 26 11:20:59 EST 2004 i686 unknown
For AS 3.0 there is no special kernel requirement; the kernel information usually looks like this:
[root@dbrac oracle]# uname -a
Linux dbrac 2.4.21-4.ELsmp #1 SMP Fri Oct 3 17:52:56 EDT 2003 i686 i686 i386 GNU/Linux
2.2 binutils Requirements
binutils 2.11.90.0.8-12 or later is required, for example:
On the 2.1 release:
[oracle@dbrac oracle]$ rpm -qa | grep -i binutils
binutils-2.11.90.0.8-12
On the 3.0 release:
[root@dbrac oracle]# rpm -qa | grep -i binutils
binutils-2.14.90.0.4-26
2.3 Shared Disk Requirements
For a single-node installation, a local hard disk with an ordinary file system is sufficient.
For a multi-node installation, a shared disk system is required; it can be raw devices, the OCFS file system, the NFS network file system, and the like.
Chapter 3. Preparation before installation
3.1 Adjusting the Linux Kernel Parameters
Add the following to /etc/sysctl.conf:
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
fs.file-max = 65535
kernel.sem = 500 64000 100 128
kernel.shmmax = 2147483648
These values may vary from one environment to another; the specific meaning of each value is not described further here.
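To apply the new settings without rebooting, a minimal sketch:
su - root
# sysctl -p                  # reload /etc/sysctl.conf
# sysctl -n kernel.shmmax    # spot-check a single value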
3.2 Loading System Status Check Module
This module is included in AS 2.1 kernels 2.4.9-e.16 and later and does not need to be installed separately. It replaces the watchdog used by the 9201 release of the database, so watchdog no longer needs to be configured; if the kernel is too old, upgrade the kernel.
You can check whether the module is present as follows:
$ find /lib/modules -name "hangcheck-timer.o"
/lib/modules/2.4.9-e.37enterprise/kernel/drivers/char/hangcheck-timer.o
Load the module and check the log information:
#su - root
# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# grep hangcheck /var/log/messages | tail -1
Add the following under /etc/rc.local:
#!/bin/sh
touch /var/lock/subsys/local
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
or
#su - root
# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modules.conf
Either way, the module will be loaded automatically after the system is restarted.
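To confirm the module is actually loaded after a reboot, a quick check (a sketch; the exact log text varies by kernel build):
# lsmod | grep hangcheck
# grep hangcheck /var/log/messages | tail -2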
3.3 Determining and Configuring the Nodes
If RAC will be simulated on a single node, the /etc/hosts file can look like this:
[root@dbrac root]# more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost
10.0.29.162 dbrac
Here dbrac is the machine name of the host, consistent with hostname and with the contents of /etc/sysconfig/network.
If RAC is installed on multiple nodes, the /etc/hosts file can look like this:
[oracle@db205 oracle]$ more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost
192.168.168.205 dbrac1
192.168.168.206 dbrac2
192.168.0.205 dbrac1-eth1
192.168.0.206 dbrac2-eth1
These entries define the public and private node names. The public name is the IP address configured on NIC 1 and is the channel external applications connect to; the private name is the IP address configured on NIC 2 and is dedicated to communication between the nodes.
3.4 Creating Oracle User and Group
#groupadd dba
# useradd oracle -g dba
#passwd oracle
3.5 Setting Node Environment Variables
For AS 3.0, be sure to set the following parameter:
export LD_ASSUME_KERNEL=2.4.1
The following variables are common to both platforms:
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/ora920
export ORACLE_TERM=xterm
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH
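To make these settings persistent, a minimal sketch is to append them to the oracle user's login profile (assuming bash is the login shell and that ORACLE_SID follows the instance names used later in this article):
su - oracle
$ cat >> ~/.bash_profile <<'EOF'
export LD_ASSUME_KERNEL=2.4.1        # AS 3.0 only
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/ora920
export ORACLE_SID=rac1               # rac2 on the second node
export PATH=$PATH:$ORACLE_HOME/bin
EOF
$ . ~/.bash_profile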
3.6 Preparing the Directory Structure
su - oracle
$ cd $ORACLE_BASE
$ mkdir -p admin/rac          # stores the configuration files
$ cd admin/rac
$ mkdir bdump cdump udump recreatedblog
$ cd $ORACLE_BASE
$ mkdir -p oradata/rac        # stores the data files
Note: if there are multiple nodes, the above operations must be completed on every node; on a single node they only need to be done once.
Chapter 4. Determining the Shared Disk Device
4.1 Installing RAC on a Single-Machine File System
An ordinary file system such as ext2 or ext3 can be used to simulate RAC on a single machine.
Assume we set aside the /u01 partition as an ext3 file system. First, create a new partition:
#fdisk /dev/sda
Assuming the new partition is /dev/sda6, format it. On AS 2.1:
# mkfs.ext2 -j /dev/sda6
The -j option formats it as ext3. On AS 3.0, you can call the mkfs.ext3 command directly:
# mkfs.ext3 /dev/sda6
Then create the mount point:
#mkdir /u01; chmod 777 /u01
Grant ownership to the oracle user:
#chown oracle:dba /u01
Mount the partition:
#mount -t ext3 /dev/sda6 /u01
To mount it automatically at startup, add the following to /etc/fstab:
/dev/sda6    /u01    ext3    defaults    1 1
df should then show information similar to:
/dev/sda6   17820972   2860164  14055548  17% /u01
4.2 OCFS File System: Single-Machine Simulation and Multi-Node Shared Installation
OCFS is the Oracle Cluster File System and is suitable for both single-node and multi-node RAC installations. Before installing, download the latest packages from
http://oss.oracle.com
and make sure the packages match the running kernel. For the AS 2.1 enterprise kernel, download:
ocfs-2.4.9-e-enterprise-1.0.10-1.i686.rpm
ocfs-support-1.0.10-1.i386.rpm
ocfs-tools-1.0.10-1.i386.rpm
The required kernel version is 2.4.9-e.12 or later.
For the AS 3.0 SMP kernel, download:
ocfs-2.4.21-EL-smp-1.0.10-1.i686.rpm
ocfs-support-1.0.10-1.i386.rpm
ocfs-tools-1.0.10-1.i386.rpm
Install the packages with rpm:
# rpm -ivh ocfs*
Check whether the installation succeeded:
# rpm -qa | grep -i ocfs
Check whether the service was registered:
# chkconfig --list | grep ocfs
ocfs    0:off   1:off   2:on    3:on    4:on    5:on    6:off
Configure the /etc/ocfs.conf file; the result should look like this:
# Ensure this file exists in /etc directory #
node_name = dbrac
ip_address = 10.0.29.162
ip_port = 7000
comm_voting = 1
Then run ocfs_uid_gen -c to generate the GUID, after which the file looks like this:
[root@dbrac root]# more /etc/ocfs.conf
node_name = dbrac
ip_address = 10.0.29.162
ip_port = 7000
comm_voting = 1
guid = 7f2311e5dabe42fbcd86000d56bac410
If the NIC is changed, ocfs_uid_gen -c must be re-run to regenerate the GUID.
Finally, load OCFS to start it for Oracle Cluster Manager. This command only needs to be run once per node after installation; at system startup the OCFS volumes are mounted automatically from /etc/fstab.
su - root
# /sbin/load_ocfs
Note: all of the steps above must be performed on every node (on a single node, just once). The following operations are performed on one node only.
To use the OCFS file system, we first create two partitions: one for the quorum file and the shared configuration (SRVM) file, and one for the database itself, including control files, data files, log files, archived logs, the server parameter file, and so on.
# fdisk /dev/sdb     # create /dev/sdb1 and /dev/sdb5
Then create the mount points:
mkdir /shared; chmod 777 /shared
mkdir /ocfs01; chmod 777 /ocfs01
Now format the partitions:
# mkfs.ocfs -b 128 -C -g 500 -u 500 -L ocfs01 -m /ocfs01 -p 0775 /dev/sdb5
where -g and -u take the group ID and user ID of the oracle user.
The meaning of each parameter is as follows:
-F  force formatting of an existing OCFS partition
-b  block size in KB; must be a multiple of the Oracle block size, Oracle suggests 128
-L  volume label
-m  mount point (here /ocfs01)
-u  UID of the owner of the root path (here oracle)
-g  GID of the group owning the root path (here dba)
-p  permissions of the root path
Now we can mount the partitions:
#service ocfs start      # not needed if the service is already running
#mount -t ocfs /dev/sdb1 /shared
#mount -t ocfs /dev/sdb5 /ocfs01
You can also add the following entries to /etc/fstab so that they are mounted automatically when the system starts:
/dev/sdb1    /shared    ocfs    _netdev    0 0
/dev/sdb5    /ocfs01    ocfs    _netdev    0 0
df then shows information similar to:
/dev/sdb1    1026144     24288   1001856   3% /shared
/dev/sdb5   34529760   1153120   3337640   4% /ocfs01
Once all the steps above are finished, it is recommended to reboot once so that every node recognizes the shared devices.
4.3 Raw Devices
First, a series of partitions must be created. Note that each device cannot have more than 15 partitions, and Linux cannot exceed 255 raw devices.
Raw devices are generally used on shared disk systems. They can be bound as follows:
#su - root
raw /dev/raw/raw1 /dev/sda2     # used for the cluster manager quorum file
raw /dev/raw/raw2 /dev/sda3     # used for the shared configuration file for srvctl
# /dev/sda4: used for the extended partition which starts as /dev/sda5
raw /dev/raw/raw3 /dev/sda5     # spfileorcl.ora
raw /dev/raw/raw4 /dev/sda6     # control01.ctl
raw /dev/raw/raw5 /dev/sda7     # control02.ctl
raw /dev/raw/raw6 /dev/sda8     # index01.dbf
raw /dev/raw/raw7 /dev/sda9     # system01.dbf
raw /dev/raw/raw8 /dev/sda10    # temp01.dbf
raw /dev/raw/raw9 /dev/sda11    # tools01.dbf
raw /dev/raw/raw10 /dev/sda12   # undotbs01.dbf
raw /dev/raw/raw11 /dev/sda13   # undotbs02.dbf
raw /dev/raw/raw12 /dev/sda14   # undotbs03.dbf
raw /dev/raw/raw13 /dev/sda15   # users01.dbf
raw /dev/raw/raw14 /dev/sdb5    # redo01.log (group #1 thread #1)
raw /dev/raw/raw15 /dev/sdb6    # redo02.log (group #2 thread #1)
raw /dev/raw/raw16 /dev/sdb7    # redo03.log (group #3 thread #2)
raw /dev/raw/raw17 /dev/sdb8    # orcl_redo2_2.log (group #4 thread #2)
raw /dev/raw/raw18 /dev/sdb9    # orcl_redo3_1.log (group #5 thread #3)
raw /dev/raw/raw19 /dev/sdb10   # orcl_redo3_2.log (group #6 thread #3)
To check the bindings, use:
su - root
raw -qa
or query a single binding in the same way, for example:
raw -q /dev/raw/raw1
To restore the bindings at boot, either write the above commands into /etc/rc.local, or add them to the rawdevices file under /etc/sysconfig, for example:
# more rawdevices
/dev/raw/raw1 /dev/sda2
/dev/raw/raw2 /dev/sda3
......
To grant the oracle user access to the raw devices, run the following script, where n is the number of raw partitions:
su - root
for i in `seq 1 n`
do
chmod 660 /dev/raw/raw$i
chown oracle:dba /dev/raw/raw$i
done
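For example, with the 19 bindings above the loop would run with n replaced by 19; a quick check of the result (a sketch):
su - root
# for i in `seq 1 19`; do chmod 660 /dev/raw/raw$i; chown oracle:dba /dev/raw/raw$i; done
# ls -l /dev/raw/raw1       # should show crw-rw----  owned by oracle:dba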
Create soft links as follows, so that the raw devices can then be used like ordinary file names:
su - oracle
ln -s /dev/raw/raw1 /var/opt/oracle/oradata/orcl/CMQuorumFile
ln -s /dev/raw/raw2 /var/opt/oracle/oradata/orcl/SharedSrvctlConfigFile
ln -s /dev/raw/raw3 /var/opt/oracle/oradata/orcl/spfileorcl.ora
......
Note: except for the partitioning itself, these steps must be completed on every node.
4.4 Others, Such as the NFS Network File System
Make sure the nfs and nfslock services are started.
Mount the file system as follows:
mount -t nfs -o rw,hard,nointr,tcp,noac,vers=3,timeo=600,rsize=32768,wsize=32768 10.0.29.152:/vol/vol1/fas250 /netapp
You can also put the entry in /etc/fstab, similar to the above.
Everything else is similar to OCFS and needs no additional description.
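A possible /etc/fstab entry for the same mount (a sketch using the options shown above; the export path and mount point are examples):
10.0.29.152:/vol/vol1/fas250  /netapp  nfs  rw,hard,nointr,tcp,noac,vers=3,timeo=600,rsize=32768,wsize=32768  0 0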
Chapter 5. Install OCM (Oracle Cluster Manager)
5.1 Generating the CM Management (Quorum) File
For a single-node file-system installation, the quorum file can be simulated as follows:
su - oracle
$ dd if=/dev/zero of=/u01/oracle/oradata/rac/racquorumdisk bs=1024 count=1024
For multi-node OCFS or raw devices, use dd in the same way to generate the corresponding file on the already-prepared shared disk device; 1 MB is enough.
5.2 Installing the OCM Management Software
1. For 9201 for Linux, install the 9201 OCM (the last item of the installation options), then upgrade it to 9204.
2. For 9204 for Linux, select the 9204 OCM installation directly.
3. If installing on AS 3.0, do the following before installation.
First relink gcc:
su - root
mv /usr/bin/gcc /usr/bin/gcc323
ln -s /usr/bin/gcc296 /usr/bin/gcc
mv /usr/bin/g++ /usr/bin/g++323   # if g++ doesn't exist, then gcc-c++ was not installed
ln -s /usr/bin/g++296 /usr/bin/g++
Then apply patch 3006854, which can be downloaded from
http://metalink.oracle.com; see the patch readme for more information.
su - root
# unzip p3006854_9204_linux.zip
Archive: p3006854_9204_linux.zip
creating: 3006854/
inflating: 3006854/rhel3_pre_install.sh
inflating: 3006854/readme.txt
# cd 3006854
# sh rhel3_pre_install.sh
Applying patch...
Patch successfully applied
If the graphical installer is rejected by the local X server, remember to allow it with:
$ xhost +<machine name or IP>
When the installer asks for the public and private node names, enter names consistent with the contents of /etc/hosts; if you do not enter them here, they can also be configured later.
When it asks for the quorum disk partition, enter the file we generated earlier (this, too, can be configured later if skipped):
/u01/oracle/oradata/rac/racquorumdisk
5.3 Configuring the OCM Files
1. The cmcfg.ora configuration file
[oracle@appc2 admin]$ cp cmcfg.ora.tmp cmcfg.ora
The contents of the configuration file look like this:
[oracle@appc2 admin]$ more cmcfg.ora
HeartBeat=15000
ClusterName=Oracle Cluster Manager, version 9i
PollInterval=1000
MissCount=210
PrivateNodeNames=dbrac
PublicNodeNames=dbrac
ServicePort=9998
# WatchdogSafetyMargin=5000
# WatchdogTimerMargin=60000
HostName=dbrac
CmDiskFile=/home/oracle/oradata/rac/racquorumdisk
Because we no longer use the watchdog to monitor the system but hangcheck-timer instead, the two Watchdog lines must be commented out and the following line added:
KernelModuleName=hangcheck-timer
The above is an example for a single-node RAC. The node names and the quorum file name must be filled in as required; because this is a single node, the public node list and the private node list each contain only one entry.
For a multi-node RAC, the public and private node lists should look like this:
PrivateNodeNames=dbrac1-eth1 dbrac2-eth1
PublicNodeNames=dbrac1 dbrac2
Here the private node names are the addresses configured on NIC 2 and are used exclusively for direct communication between the two nodes, while the public node names are the addresses configured on NIC 1 and are used by external connections to the database.
2. The ocmargs.ora configuration file
Comment out the watchdogd line in $ORACLE_HOME/oracm/admin/ocmargs.ora:
more $ORACLE_HOME/oracm/admin/ocmargs.ora
# Sample configuration file $ORACLE_HOME/oracm/admin/ocmargs.ora
# watchdogd
oracm
norestart 1800
3. The ocmstart.sh startup file
Comment out the watchdogd-related lines below in $ORACLE_HOME/oracm/bin/ocmstart.sh:
# watchdogd's default log file
# WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log
# watchdogd's default backup file
# WATCHDOGD_BAK_FILE=$ORACLE_HOME/oracm/log/wdd.log.bak
# Get arguments
# watchdogd_args=`grep '^watchdogd' $OCMARGS_FILE | \
#     sed -e 's+^watchdogd *++'`
# Check watchdogd's existence
# if watchdogd status | grep 'Watchdog daemon active' > /dev/null
# then
#   echo 'ocmstart.sh: Error: watchdogd is already running'
#   exit 1
# fi
# Backup the old watchdogd log
# if test -r $WATCHDOGD_LOG_FILE
# then
#   mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE
# fi
# Startup watchdogd
# echo watchdogd $watchdogd_args
# watchdogd $watchdogd_args
5.4 Starting OCM
$ cd $ORACLE_HOME/oracm/bin
$ su
# ./ocmstart.sh
Use ps -ef | grep oracm to check that the oracm processes are running, and look for new log files under the $ORACLE_HOME/oracm/log directory.
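For example (a sketch; the exact log file names can vary by OCM version):
# ps -ef | grep oracm | grep -v grep       # several oracm processes should be listed
# tail -20 $ORACLE_HOME/oracm/log/cm.log   # recent cluster manager messages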
Chapter 6. Installing Oracle Software
6.1 Enabling rsh
On multiple nodes rsh must be enabled (on a single node it does not matter), because the Oracle software installed on one node has to be copied to the others. After enabling rsh, check whether the iptables firewall is open; it is best to turn the firewall off.
su - root
chkconfig rsh on
chkconfig rlogin on
service xinetd reload
Configure remote permissions in /etc/hosts.equiv:
$ more /etc/hosts.equiv
dbrac1 oracle
dbrac2 oracle
dbrac1-eth1 oracle
dbrac2-eth1 oracle
Test whether rsh works properly: on node 1, read a file on the remote node:
[oracle@dbrac1 admin]$ rsh dbrac2 cat /etc/hosts.equiv
dbrac1 oracle
dbrac2 oracle
dbrac1-eth1 oracle
dbrac2-eth1 oracle
If the contents come back, rsh from node 1 to node 2 works; similarly, read node 1's file from node 2 to verify the other direction.
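A quick two-way check over both the public and private names (a sketch; the node names follow the /etc/hosts example above):
su - oracle
$ for node in dbrac1 dbrac2 dbrac1-eth1 dbrac2-eth1
> do
>   rsh $node hostname   # each call should print the remote host name without prompting for a password
> done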
6.2 Installing the Software
If rsh is set up correctly, the software only needs to be installed on one node; it is copied to the second node during installation (or can be copied afterwards).
The installation process itself is not described in detail, but a few points deserve attention:
1. At the beginning of the installation, pay attention to selecting the nodes, and verify that the Cluster components are selected; unneeded components such as OEM and HTTP Server can be removed.
2. Installation on AS 2.1 should present no problems. When installing 9201 on AS 3.0, errors on ins_oemagent.mk (fixed by patch 3119415) and ins_ctx.mk (fixed in the 9204 patch set) may appear during the relink phase; they can be ignored and will be fixed by the patches below.
3. If installing 9201 and then upgrading to 9204, remember to upgrade the OUI first and then run the runInstaller under $ORACLE_HOME/bin; if installing 9204 directly, just run its own installer.
Before running the 9204 upgrade installer, do the following (this is unique to the RAC upgrade):
su - oracle
cd $ORACLE_BASE/oui/bin/linux
ln -s libclntsh.so.9.0 libclntsh.so
4. When upgrading from 9201 to 9204 on AS 3.0, if an ins_oemagent.mk error is encountered, ignore it; it is fixed by the patches below.
Patches 3119415 and 2617419 fix the ins_oemagent.mk error described above.
su - oracle
$ cp p2617419_220_generic.zip /tmp
$ cd /tmp
$ unzip p2617419_220_generic.zip
Before applying 3119415, make sure the fuser command is available, then apply the patch:
su - oracle
$ unzip p3119415_9204_linux.zip
$ cd 3119415
$ export PATH=$PATH:/tmp/OPatch
$ export PATH=$PATH:/sbin    # because fuser is under /sbin
$ which opatch
/tmp/OPatch/opatch
$ opatch apply
5. Finally, note that at this stage we only install the software; do not create a database.
6.3 First Test of the Shared Configuration File
After installation, create the configuration file:
su - root
# mkdir -p /var/opt/oracle
# touch /var/opt/oracle/srvconfig.loc
# chown oracle:dba /var/opt/oracle/srvconfig.loc
# chmod 755 /var/opt/oracle/srvconfig.loc
Add the srvconfig_loc parameter inside srvconfig.loc, as follows:
srvconfig_loc=/u01/oracle/oradata/rac/srvconfig.dbf
Create the srvconfig.dbf file. On a shared device it must be created on the shared device, for example on the OCFS file system or on a raw partition, in which case the file name above will differ accordingly.
su - oracle
$ touch /u01/oracle/oradata/rac/srvconfig.dbf
Initialize the configuration file:
$ srvconfig -init
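If the shared configuration lives on a raw device instead of a file, a possible variant is the following sketch (/dev/raw/raw2 follows the raw-binding example in section 4.3 and the size is illustrative):
su - root
# dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=100   # zero out the shared configuration slice
Then point srvconfig.loc at the raw device and initialize it as the oracle user:
srvconfig_loc=/dev/raw/raw2
$ srvconfig -init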
Chapter 7. Creating a Database
7.1 Preparing the Parameter File
*.log_buffer=626688
*.compatible='9.2.0.0.0'
*.control_files='/u01/oracle/oradata/rac/control01.ctl','/u01/oracle/oradata/rac/control02.ctl','/u01/oracle/oradata/rac/control03.ctl'
*.core_dump_dest='/u01/oracle/admin/rac/cdump'
*.user_dump_dest='/u01/oracle/admin/rac/udump'
*.background_dump_dest='/u01/oracle/admin/rac/bdump'
*.db_block_size=8192
*.db_cache_size=250549376
*.db_file_multiblock_read_count=16
*.db_name='rac'
*.fast_start_mttr_target=300
*.hash_join_enabled=true
*.job_queue_processes=2
*.large_pool_size=3145728
*.pga_aggregate_target=51200000
*.processes=100
*.remote_login_passwordfile='exclusive'
*.sga_max_size=600000000
*.shared_pool_size=31457280
*.timed_statistics=true
*.undo_management='auto'
*.undo_retention=10800
*.session_cached_cursors=200
# Note: the following parameters are the ones required by the cluster
*.cluster_database=true
*.cluster_database_instances=2
rac1.instance_name='rac1'
rac2.instance_name='rac2'
rac1.instance_number=1
rac2.instance_number=2
*.service_names='rac'
rac1.thread=1
rac2.thread=2
rac1.local_listener='(address=(protocol=tcp)(host=dbrac)(port=1521))'
rac1.remote_listener='(address=(protocol=tcp)(host=dbrac)(port=1522))'
rac2.local_listener='(address=(protocol=tcp)(host=dbrac)(port=1522))'
rac2.remote_listener='(address=(protocol=tcp)(host=dbrac)(port=1521))'
rac1.undo_tablespace=undotbs1
rac2.undo_tablespace=undotbs2
Note that local_listener and remote_listener above point at the same host only because this is a single-node simulated RAC, where the two instances are distinguished by port. On a multi-node RAC, remote_listener points at the other node's host and port; these parameters exist mainly for load balancing and failover. A multi-node configuration might look like this:
rac1.local_listener='(address=(protocol=tcp)(host=dbrac1)(port=1521))'
rac1.remote_listener='(address=(protocol=tcp)(host=dbrac2)(port=1521))'
rac2.local_listener='(address=(protocol=tcp)(host=dbrac2)(port=1521))'
rac2.remote_listener='(address=(protocol=tcp)(host=dbrac1)(port=1521))'
The parameter file above can be placed on the shared device so that all node instances can read it, or each node's default parameter file can contain a pointer (ifile) to the common parameter file, for example:
ifile=/u01/oracle/ora920/dbs/init.ora
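For example, the node-local parameter file on the first node might contain nothing but that pointer (a sketch; the file name follows the ORACLE_SID used later in this chapter):
$ cat $ORACLE_HOME/dbs/initrac1.ora
ifile=/u01/oracle/ora920/dbs/init.ora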
7.2 Creating a password file
Unless this is a single node, complete this step on each node:
$ export ORACLE_SID=rac1
$ orapwd file=orapwrac1 password=piner entries=5
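On the second node the equivalent step would be (a sketch; the password file normally lives under $ORACLE_HOME/dbs):
$ export ORACLE_SID=rac2
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwrac2 password=piner entries=5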
7.3 Creating the Database
This is done on one node.
Run root.sh.
Start the instance to NOMOUNT:
sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount pfile=<parameter file name>
SQL> create database rac
maxinstances 3
maxloghistory 1
maxlogfiles 10
maxlogmembers 3
maxdatafiles 100
datafile '/u01/oracle/oradata/rac/system01.dbf' size 250m reuse autoextend on next 10240k maxsize unlimited
extent management local
default temporary tablespace temp tempfile '/u01/oracle/oradata/rac/temp01.dbf' size 100m reuse autoextend on next 1024k maxsize unlimited
undo tablespace undotbs1 datafile '/u01/oracle/oradata/rac/undotbs1_01.dbf' size 200m reuse autoextend on next 5120k maxsize unlimited
character set ZHS16GBK
national character set AL16UTF16
logfile group 1 ('/u01/oracle/oradata/rac/redo01.log') size 102400k,
group 2 ('/u01/oracle/oradata/rac/redo02.log') size 102400k,
group 3 ('/u01/oracle/oradata/rac/redo03.log') size 102400k;
Note the file paths above: they may differ depending on the shared device used. If the database is on raw devices, specify exact file sizes and do not use autoextend.
7.4 Creating the Data Dictionary
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
Create the cluster-specific views:
SQL> @?/rdbms/admin/catclust.sql
The above is done on one node.
Optional components:
@?/rdbms/admin/catexp7.sql
@?/rdbms/admin/catblock.sql
@?/rdbms/admin/catoctk.sql
@?/rdbms/admin/owminst.plb
Chapter 8. Starting the second node instance
8.1 Preparing the Second Node's Redo and Undo
On the first node:
SQL> shutdown immediate
SQL> startup mount pfile=<parameter file name>
SQL> alter database add logfile thread 2
  2  group 4 ('/u01/oracle/oradata/rac/redo04.log') size 10240k,
  3  group 5 ('/u01/oracle/oradata/rac/redo05.log') size 10240k,
  4  group 6 ('/u01/oracle/oradata/rac/redo06.log') size 10240k;
SQL> alter database open;
SQL> alter database enable public thread 2;
SQL> create undo tablespace undotbs2 datafile
  2  '/u01/oracle/oradata/rac/undotbs2_01.dbf' size 200m;
Tablespace created.
8.2 Starting the Second Instance
If this is a single node, open another terminal session:
su - oracle
$ export ORACLE_SID=rac2
$ sqlplus "/ as sysdba"
SQL> startup pfile=<parameter file name>
The pfile here is the shared pfile.
If it is a multi-node setup, go to the other node and perform the equivalent steps there.
8.3 Verifying RAC
SQL> select thread#, status, enabled from gv$thread;
   THREAD# STATUS ENABLED
---------- ------ --------
         1 OPEN   PUBLIC
         2 OPEN   PUBLIC
         1 OPEN   PUBLIC
         2 OPEN   PUBLIC
SQL> select instance_number, instance_name, status, host_name from gv$instance;
INSTANCE_NUMBER INSTANCE_NAME STATUS HOST_NAME
--------------- ------------- ------ ---------
              1 rac1          OPEN   dbrac1
              2 rac2          OPEN   dbrac2
Chapter 9. Testing and Using RAC
9.1 Configuring the Listeners
LISTENER1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac)(PORT = 1521))
      )
    )
  )
LISTENER2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac)(PORT = 1522))
      )
    )
  )
The above is the single-node simulated RAC configuration: the two instances are distinguished by different ports. On a multi-node RAC, each node configures its own listener accordingly.
Start the listeners. The status should look similar to the following; otherwise, check each instance's local_listener and remote_listener parameters.
$ lsnrctl status
LSNRCTL for Linux: Version 9.2.0.4.0 - Production on 29-MAY-2004 10:38:08
Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.
Connecting to (DESCRIPTION=(PROTOCOL=TCP)(HOST=192.168.168.205)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 9.2.0.4.0 - Production
Start Date                25-MAY-2004 01:27:14
Uptime                    4 days 9 hr. 10 min. 54 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   /u01/oracle/ora920/network/admin/listener.ora
Listener Log File         /u01/oracle/ora920/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.168.205)(PORT=1521)))
Services Summary...
Service "rac" has 2 instance(s).
  Instance "rac1", status READY, has 1 handler(s) for this service...
  Instance "rac2", status READY, has 1 handler(s) for this service...
The command completed successfully
9.2 Configuring the Local Naming (tnsnames.ora)
RAC =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
    )
  )
RAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
      (INSTANCE_NAME = rac1)
    )
  )
RAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac)(PORT = 1522))
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
      (INSTANCE_NAME = rac2)
    )
  )
The above is the tnsnames.ora configuration for a single node; for multiple nodes, only the host names and ports need to change.
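For reference, a two-node version might look like this (a sketch; the host names dbrac1/dbrac2 follow the /etc/hosts example above, with both listeners on port 1521):
RAC =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbrac2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
    )
  )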
9.3 Load Balancing Test
[oracle@dbtest admin]$ more test.sh
#!/bin/sh
sqlplus "test/test@rac" <<EOF
select instance_name from v\$instance;
exit
EOF
[oracle@dbtest admin]$ ./test.sh
SQL*Plus: Release 9.2.0.4.0 - Production on Sat May 29 10:50:08 2004
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
SQL>
INSTANCE_NAME
----------------
rac2
[oracle@dbtest admin]$ ./test.sh
SQL*Plus: Release 9.2.0.4.0 - Production on Sat May 29 10:50:08 2004
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
SQL>
INSTANCE_NAME
----------------
rac1
9.4 Failover Test
tnsnames.ora needs to be modified as follows:
RAC =
  (DESCRIPTION =
    # (ENABLE = BROKEN)
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbtest)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbtest)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )
Pay attention to the FAILOVER_MODE setting.
SQL> connect test/test@rac
SQL> select instance_number, instance_name from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- -------------
              2 rac2
Now shut down instance rac2 and run the query again:
SQL> select instance_number, instance_name from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- -------------
              1 rac1
The session has failed over to rac1.
9.5 Changing to Archive Log Mode in a RAC Environment
1. Stop all nodes.
2. Modify the init file: *.cluster_database=false (the archiving parameters can be set at the same time; see the sketch after this list).
3. Make the change on one node:
startup mount;
alter database archivelog;
SQL> archive log list;
SQL> alter database open;
4. Restore *.cluster_database=true
5. Start all nodes.
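A hedged sketch of the archiving parameters that might be added to the shared init file in step 2 (the destination paths are examples only and should point at storage each node can write to):
*.log_archive_start=true
rac1.log_archive_dest_1='location=/u01/oracle/arch/rac1'
rac2.log_archive_dest_1='location=/u01/oracle/arch/rac2'
*.log_archive_format='rac_%t_%s.arc'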
Chapter 10. Converting a Single-Node Database to RAC
First, assume that the database software was installed with the Cluster option and that the OCM is already installed.
10.1 Modifying the Parameter File
Add parameters such as the following (a sketch of the matching parameters for the second instance follows this list):
*.cluster_database=true
*.cluster_database_instances=2
*.undo_management=auto
rac1.undo_tablespace=undotbs
rac1.instance_name=rac1
rac1.instance_number=1
rac1.thread=1
rac1.local_listener=listener_rac1
rac1.remote_listener=listener_rac2
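The second instance needs the matching instance-specific parameters; a sketch (assuming the second instance is called rac2 and follows the same naming pattern):
rac2.undo_tablespace=undotbs2
rac2.instance_name=rac2
rac2.instance_number=2
rac2.thread=2
rac2.local_listener=listener_rac2
rac2.remote_listener=listener_rac1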
10.2 Creating the Cluster Views
Run $ORACLE_HOME/rdbms/admin/catclust.sql
10.3 Re-Creating the Control Files
The goal is to change MAXINSTANCES from 1 to the number of nodes defined.
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup mount
SQL> alter database backup controlfile to trace;
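From here, the usual approach is to edit the generated trace file in udump into a CREATE CONTROLFILE script with a larger MAXINSTANCES and run it; a heavily abbreviated sketch (the script name and the elided clauses are placeholders taken from the trace, not literal values):
SQL> shutdown immediate
-- edit the trace so that it contains, for example:
--   CREATE CONTROLFILE REUSE DATABASE "RAC" NORESETLOGS ...
--     MAXINSTANCES 3
--     ...
SQL> startup nomount
SQL> @recreate_controlfile.sql
SQL> alter database open;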
10.4 Creating the Second Instance's Redo and Undo
Start the first instance, then:
SQL> alter database
add logfile thread 2
group 3 ('/dev/rac/redo2_01_100.dbf') size 100m,
group 4 ('/dev/rac/redo2_02_100.dbf') size 100m;
alter database enable public thread 2;
SQL> create undo tablespace undotbs2 datafile '/dev/rac/undotbs_02_210.dbf' size 200m;
Finally, install the software on the second node, set the environment variables, and start the instance.
Summary
1. Described RAC's operating principles and mechanisms.
2. Described the prerequisites for RAC on Linux, such as kernel and software requirements.
3. Described the various storage devices and file systems, such as raw devices and OCFS.
4. Described how to install the Cluster Manager software on the different platforms.
5. Described how to install the database software on the different platforms.
6. Described how to create a RAC database manually and start multiple instances.
7. Introduced some of RAC's characteristics and management methods.