Here is the procedure I followed to install Oracle 11g RAC on RHEL 5.0; I thought I would share it with everyone interested in checking out Oracle11g RAC on Linux. Most of the pre-installation steps should work for Oracle10g as well. This configuration is not for production systems; it is good for testing purposes only.
I used my two existing Intel-based systems for the purpose, and in all it took about 8 hours to get the whole thing working (including the OS install, as a third bootable operating system, the first being Windows XP and the second being RHEL 4.0 with 10gR2 RAC). That is much better than my initial installation experiences with Oracle10g RAC.
Please note that I am not using Firewire or a third server here to provision the shared storage, but NFS; the details follow.
Hardware details:
- Node 1: coke.pinnacle.com (Intel Pentium 4 2.80GHz, 2GB RAM, 80GB and 160GB hard drives, 2 NICs)
- Node 2: pepsi.pinnacle.com (Intel Pentium 4 2.80GHz, 2GB RAM, 160GB hard drive, 2 NICs)
RHEL 5.0 Installation:
I performed a complete/full installation, basically everything, so that all the RPMs are included. Kernel version used: 2.6.18-8.el5xen.
Installation Steps:
Here are the high-level installation steps:
- Check OS/package requirements
- Configure the public and private networks
- Create the oracle user and dba group
- Set up SSH for the oracle user account between the two nodes
- Set up shared storage using NFS (details follow)
- Oracle11g CRS installation
- Oracle11g RDBMS software installation
- Oracle11g RAC database creation
The details.......
1. Configure Operating System for Oracle11g RAC Install (Both nodes)
The following RPMs are required/recommended for Oracle11g RAC installation on RHEL 5.0.
- binutils-2.17.50.0.6-2.el5
- compat-libstdc++-33-3.2.3-61
- elfutils-libelf-0.97-5
- elfutils-libelf-devel-0.125 – elfutils-libelf-devel-0.125-3.el5 was installed by default
- glibc-2.5-12
- glibc-common-2.5-12
- glibc-devel-2.5-12
- gcc-4.1.1-52
- gcc-c++-4.1.1-52
- libaio-0.3.106
- libaio-devel-0.3.106 – not installed by default
- libgcc-4.1.1-52
- libstdc++-4.1.1
- libstdc++-devel-3.4.3-22.1 – libstdc++-devel-4.1.1-52.el5 was installed by default.
- make-3.81-1.1
- sysstat-7.0.0 – not installed by default
- unixODBC-2.2.11 – not installed by default
- unixODBC-devel-2.2.11 – not installed by default
Apart from the four RPMs noted above as missing, everything was installed and available by default (maybe because I did a complete install). I installed the following 4 missing RPMs from the RHEL 5.0 CDs (using rpm -ivh; see the sketch after this list):
- libaio-devel-0.3.106
- sysstat-7.0.0
- unixODBC-2.2.11
- unixODBC-devel-2.2.11
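To save flipping through CDs, here is a quick way to verify the whole list and install the stragglers in one shot (a sketch; rpm -q prints a "not installed" line for anything missing, and the rpm -ivh filenames assume the RPMs have been copied into the current directory):
[root@coke ~]# rpm -q binutils compat-libstdc++-33 elfutils-libelf \
    elfutils-libelf-devel glibc glibc-common glibc-devel gcc gcc-c++ \
    libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
    sysstat unixODBC unixODBC-devel | grep "not installed"
[root@coke ~]# rpm -ivh libaio-devel-0.3.106*.rpm sysstat-7.0.0*.rpm \
    unixODBC-2.2.11*.rpm unixODBC-devel-2.2.11*.rpm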
Kernel Parameters:
- Add the following lines to /etc/sysctl.conf:
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
- Run sysctl -p to load the values from /etc/sysctl.conf, as shown below.
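sysctl -p echoes each setting as it applies it, and individual values can be spot-checked afterwards with sysctl <key> (a quick sanity check):
[root@coke ~]# sysctl -p
[root@coke ~]# sysctl kernel.shmmax kernel.sem
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128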
CVUQDISK Package:
- Install the cvuqdisk package – located in the CRS software distribution (/clusterware/rpm):
[root@coke rpm]# export CVUQDISK_GRP=dba
[root@coke rpm]# rpm -ivh cvuqdisk-1.0.1-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
[root@coke rpm]# pwd
/share/Oracle-Linux-Software/11g/clusterware/rpm
[root@coke rpm]#
- Here is the information from the /etc/hosts file. Its contents are the same on both nodes.
$ cat /etc/hosts
#----Entries for Node2 (pepsi)
192.168.1.160 pepsi.pinnacle.com pepsi
192.168.1.151 pepsi-vip.pinnacle.com pepsi-vip
172.1.1.160 pepsi-priv.pinnacle.com pepsi-priv
#---Entries for Node1 (coke)
192.168.1.150 coke.pinnacle.com coke
192.168.1.161 coke-vip.pinnacle.com coke-vip
172.1.1.150 coke-priv.pinnacle.com coke-priv
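Before moving on, it is worth pinging all the public and private names from both nodes to confirm that /etc/hosts and the NICs line up (the -vip addresses will not answer yet; CRS brings them up during the install):
$ ping -c 2 coke
$ ping -c 2 coke-priv
$ ping -c 2 pepsi
$ ping -c 2 pepsi-priv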
Node 1:
[root@coke ~]# ifconfig -a eth0
eth0 Link encap:Ethernet HWaddr 00:16:76:4B:79:F1
inet addr:192.168.1.150 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:76ff:fe4b:79f1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3507 errors:0 dropped:0 overruns:0 frame:0
TX packets:1675 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:665921 (650.3 KiB) TX bytes:188926 (184.4 KiB)
[root@coke ~]# ifconfig -a eth1
eth1 Link encap:Ethernet HWaddr 00:09:5B:E1:CD:DD
inet addr:172.1.1.150 Bcast:172.1.255.255 Mask:255.255.0.0
inet6 addr: fe80::209:5bff:fee1:cddd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:310738 errors:0 dropped:0 overruns:0 frame:0
TX packets:254557 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:141179047 (134.6 MiB) TX bytes:345107330 (329.1 MiB)
Node 2:
[root@pepsi ~]# ifconfig -a eth0
eth0 Link encap:Ethernet HWaddr 00:16:76:38:D8:C5
inet addr:192.168.1.160 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:76ff:fe38:d8c5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:419 errors:0 dropped:0 overruns:0 frame:0
TX packets:244 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:80282 (78.4 KiB) TX bytes:27401 (26.7 KiB)
[root@pepsi ~]# ifconfig -a eth1
eth1 Link encap:Ethernet HWaddr 00:09:5B:BB:D8:F1
inet addr:172.1.1.160 Bcast:172.1.255.255 Mask:255.255.0.0
inet6 addr: fe80::209:5bff:febb:d8f1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:361934 errors:0 dropped:0 overruns:0 frame:0
TX packets:326328 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:366491792 (349.5 MiB) TX bytes:148740977 (141.8 MiB)
Interrupt:17 Base address:0xe000
2. Configure SSH between the two nodes
While logged in as oracle, perform the following on each node of the RAC:
- cd $HOME
- mkdir ~/.ssh
- chmod 700 ~/.ssh
- /usr/bin/ssh-keygen -t rsa
- /usr/bin/ssh-keygen -t dsa
On Node 1:
- cd $HOME/.ssh
- cat id_rsa.pub >> authorized_keys
- cat id_dsa.pub >> authorized_keys
- Copy the authorized_keys file to node 2: scp authorized_keys node2:/opt/oracle/.ssh
On Node 2:
- cd $HOME/.ssh
- cat id_rsa.pub >> authorized_keys
- cat id_dsa.pub >> authorized_keys
- Copy the now-complete authorized_keys file back to node 1: scp authorized_keys node1:/opt/oracle/.ssh
- Now ssh between all the nodes, including the -priv hostnames, accepting the host-key prompts on the first connection. Then verify that ssh no longer prompts for a password (on both nodes):
$ ssh coke.pinnacle.com date
Sun Aug 12 08:41:42 CDT 2007
$ ssh pepsi.pinnacle.com date
Sun Aug 12 08:42:23 CDT 2007
$ ssh coke-priv date
Sun Aug 12 08:42:45 CDT 2007
$ ssh pepsi-priv date
Sun Aug 12 08:43:22 CDT 2007
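If the OUI later complains during its own SSH checks even though the tests above pass, loading the keys into an agent in the shell that launches the installer usually helps (standard OpenSSH tooling):
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add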
3. Configuring the Shared Storage
This is one of the critical components required for RAC. There are a couple of storage options available for setting up a test environment at home, such as Firewire, NAS, and NFS. I have installed Oracle RAC with both Oracle9i and Oracle10g using Firewire, and this time around I wanted to see if I could use NFS instead. Please note that this is for testing and experimentation only and is not suitable for any kind of production use. Usually we would need a separate server/node to host the NFS file systems, but instead of using a third server I configured one of the nodes to host them. Here is the procedure:
On Node 1 (coke):
1. Create three partitions on the local hard drive, 10GB each:
[root@coke ~]# fdisk /dev/sda
The number of cylinders for this disk is set to 9726.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2550 20482843+ c W95 FAT32 (LBA)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 2
First cylinder (2551-9726, default 2551):
Using default value 2551
Last cylinder or +size or +sizeM or +sizeK (2551-9726, default 9726):
Using default value 9726
Command (m for help): p
Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2550 20482843+ c W95 FAT32 (LBA)
/dev/sda2 2551 9726 57641220 5 Extended
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (2551-9726, default 2551):
Using default value 2551
Last cylinder or +size or +sizeM or +sizeK (2551-9726, default 9726): +10G
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (3768-9726, default 3768):
Using default value 3768
Last cylinder or +size or +sizeM or +sizeK (3768-9726, default 9726): +10G
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (4985-9726, default 4985):
Using default value 4985
Last cylinder or +size or +sizeM or +sizeK (4985-9726, default 9726): +10G
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2550 20482843+ c W95 FAT32 (LBA)
/dev/sda2 2551 9726 57641220 5 Extended
/dev/sda5 2551 3767 9775521 83 Linux
/dev/sda6 3768 4984 9775521 83 Linux
/dev/sda7 4985 6201 9775521 83 Linux
[root@coke ~]# parted /dev/sda print
Model: ATA WDC WD800JD-75MS (scsi)
Disk /dev/sda: 80.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 21.0GB 21.0GB primary fat32 boot, lba
2 21.0GB 80.0GB 59.0GB extended
5 21.0GB 31.0GB 10.0GB logical
6 31.0GB 41.0GB 10.0GB logical
7 41.0GB 51.0GB 10.0GB logical
Information: Don't forget to update /etc/fstab, if necessary.
[root@coke ~]# mkfs.ext3 /dev/sda5
[root@coke ~]# mkfs.ext3 /dev/sda6
[root@coke ~]# mkfs.ext3 /dev/sda7
- Create local mount points
[root@coke ~]# mkdir /nfs1 /nfs2 /nfs3
- Label the partitions/file systems
[root@coke ~]# e2label /dev/sda5 nfs1
[root@coke ~]# e2label /dev/sda6 nfs2
[root@coke ~]# e2label /dev/sda7 nfs3
- Add the following lines to the /etc/fstab file on node 1 (coke):
LABEL=nfs1 /nfs1 ext3 defaults 1 1
LABEL=nfs2 /nfs2 ext3 defaults 1 1
LABEL=nfs3 /nfs3 ext3 defaults 1 1
[root@coke ~]# mount /nfs1
[root@coke ~]# mount /nfs2
[root@coke ~]# mount /nfs3
[root@coke ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 47611656 2938484 42215632 7% /
tmpfs 966124 0 966124 0% /dev/shm
/dev/sda5 9621848 152688 8980384 2% /nfs1
/dev/sda6 9621848 152688 8980384 2% /nfs2
/dev/sda7 9621848 152688 8980384 2% /nfs3
- Add the following entries in the /etc/exports file:
/nfs1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/nfs2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/nfs3 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
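The service restart in the next step picks these entries up; if NFS is already running, exportfs can publish and verify them without a full restart (a quick sanity check):
[root@coke ~]# exportfs -ra
[root@coke ~]# exportfs -v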
- Configure the NFS services:
[root@coke ~]# chkconfig nfs on
[root@coke ~]# service nfs restart
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
[root@coke ~]#
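A quick check from the other node confirms the exports are visible over the private network (showmount prints an "Export list" for the host; you should see all three file systems):
[root@pepsi ~]# showmount -e coke-priv
Export list for coke-priv:
/nfs1 *
/nfs2 *
/nfs3 *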
- With the above, the /nfs* file systems are available to both nodes. Perform the following steps on both nodes:
1. Create mount points for the NFS file systems (on both nodes):
# mkdir /u01 /u02 /u03
2. Add the following lines to the /etc/fstab file:
coke-priv:/nfs1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
coke-priv:/nfs2 /u02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
coke-priv:/nfs3 /u03 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
3. Mount the NFS file systems:
# mount /u01
# mount /u02
# mount /u03
4. Now all three file systems are shared between the two nodes and ready for use.
Here is how it looks on Node 1:
$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 47611656 7029084 38125032 16% /
tmpfs 966124 270188 695936 28% /dev/shm
/dev/sda5 9621848 1925276 7207796 22% /nfs1
/dev/sda6 9621848 152700 8980372 2% /nfs2
/dev/sda7 9621848 152696 8980376 2% /nfs3
172.1.1.150:/nfs1 9621856 1925280 7207808 22% /u01
172.1.1.150:/nfs2 9621856 152704 8980384 2% /u02
172.1.1.150:/nfs3 9621856 152704 8980384 2% /u03
On Node 2:
$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda5 44639012 7245820 35089072 18% /
tmpfs 477484 266084 211400 56% /dev/shm
coke-priv:/nfs1 9621856 1925280 7207808 22% /u01
coke-priv:/nfs2 9621856 152704 8980384 2% /u02
coke-priv:/nfs3 9621856 152704 8980384 2% /u03
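Since Oracle is fussy about the NFS mount options (hard mounts and actimeo=0 in particular), it is worth verifying the kernel actually applied them; /proc/mounts shows the options in effect:
$ mount | grep /u0
$ grep nfs /proc/mounts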
- Change the owner of the file systems /u01, /u02, and /u03 to oracle:dba, as shown below.
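For example, as root on either node (with no_root_squash in the exports, the ownership change propagates through NFS to both nodes):
[root@coke ~]# chown -R oracle:dba /u01 /u02 /u03
[root@coke ~]# ls -ld /u01 /u02 /u03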
4. CRS Installation
- Set up the environment variables for the oracle user (ORACLE_BASE, PATH, etc.); a sample profile sketch follows the CVU check below.
- With this, all the basic pre-installation tasks are taken care of and we are ready to invoke the Oracle11g OUI from the CRS software distribution. But before that, let's run the CVU:
$ ./runcluvfy.sh stage -pre crsinst -n coke,pepsi
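For reference, here is a minimal sketch of the oracle user's ~/.bash_profile additions. The CRS home matches the path root.sh uses below; db_1 is an assumed name for the RDBMS home that gets created later during the database software install:
# Sketch only: adjust ORACLE_BASE and the home paths to your layout.
export ORACLE_BASE=/opt/oracle
export ORA_CRS_HOME=$ORACLE_BASE/product/11.1.0/crs
# Assumed name; this home does not exist until the RDBMS install.
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH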
Once the CVU checks pass, invoke the OUI (runInstaller) from the clusterware directory and complete the interview; at the end, the installer prompts you to run the orainstRoot.sh and root.sh scripts on both nodes:
[root@coke ~]# /opt/oracle/oraInventory/orainstRoot.sh
Changing permissions of /opt/oracle/oraInventory to 770.
Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete
[root@coke ~]# /opt/oracle/product/11.1.0/crs/root.sh
WARNING: directory '/opt/oracle/product/11.1.0' is not owned by root
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/opt/oracle/product/11.1.0' is not owned by root. Changing owner to root
The directory '/opt/oracle/product' is not owned by root. Changing owner to root
The directory '/opt/oracle' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: coke coke-priv coke
node 2: pepsi pepsi-priv pepsi
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u01/oradb/ocr/vote
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
coke
Cluster Synchronization Services is inactive on these nodes.
pepsi
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
[root@pepsi ~]# /opt/oracle/oraInventory/orainstRoot.sh
Changing permissions of /opt/oracle/oraInventory to 770.
Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete
[root@pepsi ~]# /opt/oracle/product/11.1.0/crs/root.sh
WARNING: directory '/opt/oracle/product/11.1.0' is not owned by root
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/opt/oracle/product/11.1.0' is not owned by root. Changing owner to root
The directory '/opt/oracle/product' is not owned by root. Changing owner to root
The directory '/opt/oracle' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: coke coke-priv coke
node 2: pepsi pepsi-priv pepsi
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
coke
pepsi
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
[root@pepsi ~]#
$ crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
$
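Two more quick checks are worth running (both utilities live in the CRS home's bin directory): crs_stat -t prints a one-screen summary of the VIP/GSD/ONS resources that vipca just created, and srvctl reports the nodeapps per node:
$ crs_stat -t
$ srvctl status nodeapps -n coke
$ srvctl status nodeapps -n pepsi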
We now have Oracle 11g CRS up and running.
Note that you must always bring up Node 1 (coke) first and then Node 2, since Node 1 hosts the NFS file systems required by Node 2.
More to follow......
Thanks to Tim Hall (oracle-base) for sharing his thoughts on the NFS share/mount options.