
Installing Oracle RAC 11gR2 on RHEL5 nodes
Part 1: network configuration and a clustered logical volume
 
Introduction
An Oracle RAC cluster is installed in two parts: Oracle Grid Infrastructure (which includes Oracle Clusterware) and the database software (RDBMS). Both Grid and the RDBMS only need to be installed from one node to be deployed on all of the nodes.
 
Hardware requirements
You will need (at least) two nodes and shared storage (iSCSI is used here). Each node needs at least two network interfaces: one for the network services and one for the heartbeat between the nodes. Each node also needs at least 1.5 GB (1,536 MB or 1,572,864 KB) of memory and 1.5 times that amount of swap space.
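A quick way to check both figures on a node (standard /proc entries, nothing Oracle-specific; the values are reported in kB, so MemTotal should show at least 1572864 kB),
grep -E 'MemTotal|SwapTotal' /proc/meminfo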
 
Storage requirements
See the RAC Technologies Matrix for Linux Platforms. Unfortunately, GFS is not directly certified by Oracle for 11gR2, but it is more or less tolerated, as long as the support requests about it go to Red Hat. In that case you will need an additional Unbreakable Linux support license even though you are running RHEL. See Metalink notes 220970.1 (search for 'GFS') and 329530.1.
 
Virtualization requirements (if applicable)
See Supported Virtualization and Partitioning Technologies for Oracle Database and RAC Product Releases if you want to virtualize your cluster.
 
Base configuration
I am using two RHEL5 nodes and one RHEL5 iSCSI server. I did a standard installation in text mode, with the hardware clock set to UTC and the timezone set to Europe/Paris. I then disabled the firewall and SELinux (disabled or permissive), either at firstboot or with 'system-config-securitylevel-tui'. The network has been configured by hand (/etc/sysconfig/...). The first interface (ONBOOT is implicit),
DEVICE=eth0
IPADDR=192.168.122.101
NETMASK=255.255.255.0
the heartbeat network interface,
DEVICE=eth1
IPADDR=10.55.0.1
NETMASK=255.255.255.0
note. Change the IP addresses accordingly on each node.
The default gateway should be defined only in the /etc/sysconfig/network file, along with the host name.
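For example on rac1 (a sketch only; it assumes that 192.168.122.1, the 'm5a' host, acts as the default gateway of the virtual network),
NETWORKING=yes
HOSTNAME=rac1.example.local
# assuming m5a (192.168.122.1) is the gateway of the virtual network
GATEWAY=192.168.122.1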
 
The '/etc/hosts' static resolver should be identical on all the nodes. You can also use it on the iSCSI server if you want to, but remove the irrelevant heartbeat entries there.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

# Virtual network
192.168.122.1 m5a m5a.example.local
192.168.122.100 iscsi iscsi.example.local
192.168.122.101 rac1 rac1.example.local
192.168.122.102 rac2 rac2.example.local

# Heartbeat
10.55.0.1 rac1ha rac1ha.example.local
10.55.0.2 rac2ha rac2ha.example.local
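To quickly check that name resolution and connectivity are fine, a trivial loop from one of the RAC nodes,
for h in iscsi rac1 rac2 rac1ha rac2ha; do ping -c 1 $h; done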
 
I also added a few packages and configured NTP, which is pretty much mandatory for any cluster. As for 'gpm', it is already included in @base, but I am used to proceeding like this for standard Red Hat post-installations,
yum install gpm screen ntp
service ntpd start
chkconfig ntpd on
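Once ntpd has been running for a few minutes, check that it actually synchronizes against its servers (the asterisk marks the currently selected peer),
ntpq -p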
 
Accessing the iSCSI LUN
Configure access to the iSCSI LUN on all nodes,
cd /etc/iscsi/
mv iscsid.conf iscsid.conf.dist
sed -e '
/^[[:space:]]*$/d;
/^[[:space:]]*#/d;
' iscsid.conf.dist > iscsid.conf
service iscsid start
chkconfig iscsid on
iscsiadm -m discovery -t st -p iscsi
iscsiadm -m node -T iqn.2012-12.local.example:iscsi.target1 -p iscsi --login
note. Change the iSCSI server (here 'iscsi') and the target IQN (here iqn.2012-12.local.example:iscsi.target1) accordingly.
and verify,
iscsiadm -m session
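The LUN should now show up as an additional SCSI disk on every node; a quick way to spot it,
cat /proc/partitions
dmesg | tail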
 
Configuring a shared logical volume with Red Hat Cluster Suite
Install the needed packages on all nodes,
yum groupinstall Clustering "Cluster Storage"
 
Configure the cluster on the first node (rac1),
yum install xorg-x11-xauth (relogin with X11 forwarding enabled)
system-config-cluster (never mind the warning about cman not being started)
add cluster nodes
save (/etc/cluster/cluster.conf is fine) and exit
and copy the cluster configuration to the other nodes,
cd /etc/cluster/
scp cluster.conf rac2:/etc/cluster/
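For reference, the resulting /etc/cluster/cluster.conf for a two-node cluster looks roughly like this (a sketch only; the cluster and node names depend on what you entered in system-config-cluster, the heartbeat names are used here so that cluster traffic stays on the dedicated interface, and no fencing devices are defined),
<?xml version="1.0"?>
<!-- illustrative sketch only: adjust the cluster/node names and add real fencing -->
<cluster name="rac" config_version="1">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="rac1ha" nodeid="1" votes="1"/>
    <clusternode name="rac2ha" nodeid="2" votes="1"/>
  </clusternodes>
  <fencedevices/>
</cluster>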
 
Start the cluster services on all nodes,
service cman start (at the same time on all nodes)
chkconfig cman on
Note. If the cman service does not start and hangs at "Starting fencing..." (which can happen when no fencing is configured), just enable the services and reboot the nodes.
Note. There is no need for the "rgmanager" service for a file system cluster. It is only required to cluster services like http or mysql.
 
LVM is used here (it is scalable and you do not have to worry about the hard disk drive order in fstab). Clean up the LVM configuration and enable cluster locking on all the nodes,
cd /etc/lvm/
mv lvm.conf lvm.conf.dist
sed '
/^[[:space:]]*$/d;
/^[[:space:]]*#/d;
' lvm.conf.dist | tee lvm.conf.dist.clean > lvm.conf
lvmconf --enable-cluster
#diff -u lvm.conf.dist.clean lvm.conf
Note. --enable-cluster modifies locking_type value from 1 to 3.
Note. I’m unsure if “fallback_to_clustered_locking” has to be disabled.
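To double-check the locking type before starting the daemon (we are still in /etc/lvm and the stripped-down file is easy to read; the value should now be 3),
grep locking_type lvm.conf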
Then start the clustered LVM daemon,
service clvmd start
chkconfig clvmd on
 
On one node only, configure a logical volume (be CAREFUL: the path to the shared disk may not be /dev/sda on your system!),
fdisk -l
fdisk /dev/sda 
    o (empty partition table)
    n (new)
    p (primary)
    ENTER
    ENTER
    t (type)
    8e (Linux LVM)
    p (print)
    w (write)    
pvcreate /dev/sda1 
vgcreate vgrac /dev/sda1 
vgdisplay vgrac | grep 'Total PE'
#vgchange -ay vgrac
lvcreate vgrac --extents <Total PE> --name lvrac.a
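note. Instead of copying the 'Total PE' value by hand, it can be fetched inline (a sketch that assumes vgdisplay's default English output), or you can simply allocate all the free extents,
lvcreate --extents $(vgdisplay vgrac | awk '/Total PE/ {print $3}') --name lvrac.a vgrac
#lvcreate --extents 100%FREE --name lvrac.a vgrac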
 
Make sure that the shared logical volume can be accessed by the oracle user (so that the installer will see the disks to configure ASM), on all nodes,
cd /dev/mapper/
chown oracle:dba vgrac-lvrac.a
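and verify (the device should now be owned by oracle:dba),
ls -l /dev/mapper/vgrac-lvrac.a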
 
The shared logical volume is now ready to go
Check that everything is fine,
service cman status
service clvmd status
cman_tool nodes
Note. To manually fence a node, use,
#fence_ack_manual -n rac2
 
Troubleshooting
Make sure the required cluster services are enabled on all the nodes and, if needed, reboot them,
chkconfig cman on
chkconfig clvmd on
chkconfig gfs2 off 
chkconfig rgmanager off 
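The cluster daemons log through syslog, so when something hangs, the system log on each node is the first place to look,
tail -f /var/log/messages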
 

Last update: Jan 19, 2013