These documents are obsolete; please use the Nethence Documentation instead.
Installing Oracle RAC 11gR2 on RHEL5 nodes
Part 1: network configuration and a clustered logical volume
An Oracle RAC cluster is installed in two parts: Oracle Grid Infrastructure (which includes Oracle Clusterware) and the database software (RDBMS). Both Grid and RDBMS only need to be installed from one node; the installer deploys them to all of the nodes.
You will need (at least) two nodes and shared storage (iSCSI here). Each node needs at least two network interfaces: one for the network services and one for the heartbeat between the nodes. Each node needs at least 1.5 GB (1,536 MB or 1,572,864 KB) of memory and 1.5 times that amount of swap space.
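The 1.5x swap rule above can be expressed as a tiny helper ('required_swap_kb' is a hypothetical name for illustration, not part of any installer),

```shell
# Hypothetical helper: the required swap (in KB) is 1.5x the RAM size (in KB).
required_swap_kb() {
	echo $(( $1 * 3 / 2 ))
}

# 1,572,864 KB (1.5 GB) of RAM requires at least 2,359,296 KB of swap.
required_swap_kb 1572864
```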
See the RAC Technologies Matrix for Linux Platforms. Unfortunately, GFS is not directly certified by Oracle for 11gR2, but it is somewhat tolerated, as long as the support requests about it go to Red Hat. In that case you will need an additional Unbreakable Linux support license even though you're using RHEL. See Metalink notes 220970.1 (search for 'GFS') and 329530.1.
Also check the virtualization requirements in case you are running the nodes as virtual machines.
I am using two RHEL5 nodes and one RHEL5 iSCSI server. I did a standard installation in text mode with the local clock set to UTC and the timezone to Europe/Paris. I then disabled the firewall and SELinux (disabled or permissive), either at firstboot or with 'system-config-securitylevel-tui'. The network has been configured by hand (/etc/sysconfig/...). The first network interface (ONBOOT is implicit),
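The original file content is not shown here; a minimal sketch of what /etc/sysconfig/network-scripts/ifcfg-eth0 could look like on rac1 (the device name and netmask are assumptions; the address comes from the /etc/hosts table below),

```
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.122.101
NETMASK=255.255.255.0
```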
the heartbeat network interface,
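Similarly, a sketch of /etc/sysconfig/network-scripts/ifcfg-eth1 for the heartbeat interface on rac1 (eth1 and the netmask are assumptions),

```
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.55.0.1
NETMASK=255.255.255.0
```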
note. Change the IP address accordingly on each node...
The default gateway should only be defined in the /etc/sysconfig/network file, along with the host's name.
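A sketch of /etc/sysconfig/network on rac1 (the gateway address is an assumption, taken from the virtual network host below),

```
NETWORKING=yes
HOSTNAME=rac1.example.local
GATEWAY=192.168.122.1
```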
The '/etc/hosts' static resolver should be the same on all the nodes. You can also deploy it on the iSCSI server if you want to, but remove the irrelevant heartbeat hosts there.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# Virtual network
192.168.122.1 m5a m5a.example.local
192.168.122.100 iscsi iscsi.example.local
192.168.122.101 rac1 rac1.example.local
192.168.122.102 rac2 rac2.example.local
10.55.0.1 rac1ha rac1ha.example.local
10.55.0.2 rac2ha rac2ha.example.local
I also added those packages and configured NTP, which may be mandatory for any cluster. As for 'gpm': it is already included in @base, but I'm used to proceeding like this for standard Red Hat post-installations,
yum install gpm screen ntp
service ntpd start
chkconfig ntpd on
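To verify that NTP is actually synchronizing against its peers (the peer list depends on your /etc/ntp.conf),

```shell
ntpq -p
```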
Accessing the iSCSI LUN
Configure access to the iSCSI LUN on all nodes,
mv iscsid.conf iscsid.conf.dist
sed -e '
' iscsid.conf.dist > iscsid.conf
service iscsid start
chkconfig iscsid on
iscsiadm -m discovery -t st -p iscsi
iscsiadm -m node -T iqn.2012-12.local.example:iscsi.target1 -p iscsi --login
note. Change the iSCSI server (here 'iscsi') and LUN (here iqn.2012-12.local.example:iscsi.target1) accordingly.
iscsiadm -m session
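Once the session is established, the LUN shows up as a new local SCSI disk. To spot the device name (it may not be /dev/sda on your system),

```shell
fdisk -l
ls -l /dev/disk/by-path/ | grep -i iscsi
```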
Configuring a shared logical volume with the Red Hat Cluster Suite
Install the needed packages and optimize LVM on all nodes,
yum groupinstall Clustering "Cluster Storage"
Configure the cluster on the first node (rac1),
yum install xorg-x11-xauth (relogin with X11 forwarding enabled)
system-config-cluster (never mind the warning about cman not being started)
add cluster nodes
save (/etc/cluster/cluster.conf is fine) and exit
and copy the cluster configuration to the other nodes,
scp /etc/cluster/cluster.conf rac2:/etc/cluster/
Start the cluster services on all nodes,
service cman start (at the same time on all nodes)
chkconfig cman on
Note. If the cman service doesn't start and hangs at "Starting fencing..." (in case you configured no fencing), just enable the services and reboot the nodes.
We use LVM on top of the shared disk (it's scalable and you don't depend on the hard disk drive order in fstab). Optimize LVM on all the nodes,
mv lvm.conf lvm.conf.dist
sed -e '
' lvm.conf.dist | tee lvm.conf.dist.clean > lvm.conf
#diff -u lvm.conf.dist.clean lvm.conf
Note. --enable-cluster modifies locking_type value from 1 to 3.
Note. I’m unsure if “fallback_to_clustered_locking” has to be disabled.
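For reference, the clustered-locking setting in /etc/lvm/lvm.conf should end up reading (only the relevant line is shown),

```
# 3 = clustered locking through clvmd
locking_type = 3
```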
Enable it (clustered mode) on all nodes,
service clvmd start
chkconfig clvmd on
On one node only, partition the shared disk and configure a logical volume (be CAREFUL: the path to the shared disk may not be /dev/sda on your system!),
fdisk /dev/sda
  o (empty partition table)
  n (new primary partition spanning the disk)
  t, 8e (Linux LVM)
  w
pvcreate /dev/sda1
vgcreate vgrac /dev/sda1
vgdisplay vgrac | grep 'Total PE'
#vgchange -ay vgrac
lvcreate vgrac --extents <Total PE> --name lvrac.a
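Check the resulting clustered volume group and logical volume,

```shell
vgs vgrac
lvs vgrac
```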
Make sure that the shared logical volume can be accessed by the oracle user (so the installer will see the disks to configure ASM), on all nodes,
chown oracle:dba /dev/mapper/vgrac-lvrac.a
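Note that device-file ownership under /dev/mapper is usually not persistent across reboots on RHEL5; a simple (assumed) workaround is to reapply the chown at boot time from /etc/rc.local on every node,

```shell
# Assumed workaround: re-set the ownership on every boot.
echo 'chown oracle:dba /dev/mapper/vgrac-lvrac.a' >> /etc/rc.local
```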
Ready to go for the shared logical volume
Check that everything is fine,
service cman status
service clvmd status
Note. To manually fence a node, use,
#fence_ack_manual -n rac2
Make sure all the cluster services are enabled on the nodes and, if necessary, restart them,
chkconfig cman on
chkconfig clvmd on
chkconfig gfs2 off
chkconfig rgmanager off
Last update: Jan 19, 2013