
Warning: those guides are mostly obsolete, please have a look at the new documentation.


Oracle 11g release 2 RAC installation
This guide is based on an existing one; we're just using a simpler shared storage configuration: one mount point instead of four, served over NFS. You can also use OCFS2 or GFS if you prefer, both are supported.
In brief, we are going to:
- prepare the system for the two nodes
- install Oracle Grid Infrastructure (mandatory for RAC)
- install Oracle 11g as RAC (Real Application Cluster)
Requirements
- two network interfaces per node
- enough memory (at least 1536 MB for grid) and twice that amount of swap space
- enough disk for swap and local storage (and the NFS share on rac1 in this case)
- Redhat Enterprise Linux version 5.x
- Oracle Database 11g Release 2 Grid Infrastructure for Linux x86-64
- Oracle Database 11g Release 2 for Linux x86-64
System configuration
- twice the swap space (3072MB)
- assuming a mount point on /u01 for local storage
- (assuming a mount point on /mnt/share on first node for nfs)
Network configuration
In this guide we're using a default gateway on the private network for testing purposes. It should normally be located on the public network.
On both nodes and on the dom0 (if applicable), add entries along these lines to /etc/hosts,
x.x.x.x dom0.example.local dom0
x.x.x.x rac1.example.local rac1
x.x.x.x rac2.example.local rac2
x.x.x.x rac1-vip.example.local rac1-vip
x.x.x.x rac2-vip.example.local rac2-vip
x.x.x.x rac-scan.example.local rac-scan
x.x.x.x gwpriv.example.local gwpriv
192.168.0.XXX dom0priv.example.local dom0priv
192.168.0.XXX rac1priv.example.local rac1priv
192.168.0.XXX rac2priv.example.local rac2priv
note. change the x.x.x.x public addresses and the 192.168.0.XXX private addresses accordingly
From now on, you should be able to connect to the nodes remotely.
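For example, assuming root SSH access is allowed, check from the dom0 or your workstation,
ssh root@rac1.example.local hostname
ssh root@rac2.example.local hostname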
Package requirements (both nodes)
Make sure these are installed; any that are missing will be reported as "package ... is not installed" by the following command,
rpm -q \
binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel \
| grep ^package
You will also need those,
rpm -q \
unzip \
xorg-x11-utils \
xorg-x11-apps \
pdksh \
smartmontools \
| grep ^package
note. 'ksh' isn't mandatory but 'pdksh' is.
note. smartctl is needed by grid's install script.
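If some of them are missing, you can install them with yum, assuming your RHEL channels or local repositories are set up, for example,
yum install libaio-devel unixODBC unixODBC-devel sysstat elfutils-libelf-devel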
System preparation (both nodes)
Tweak the kernel parameters,
cd /etc/
mv sysctl.conf sysctl.conf.dist
sed '/^#/d; /^$/d;' sysctl.conf.dist > sysctl.conf
echo '' >> sysctl.conf
add and apply,
cat >> sysctl.conf <<EOF9
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF9
sysctl -p
Tweak the file limits,
cd /etc/security/
cat >> limits.conf <<EOF9
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF9
Enable the limits,
cd /etc/pam.d/
cat >> login <<EOF9
session required pam_limits.so
EOF9
Configure NTP,
vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid" (add -x for slew corrections only)
service ntpd restart
chkconfig ntpd on
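You can check that ntpd is synchronizing against its servers,
ntpq -p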
Create the groups and users,
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
note. as we're using NFS, it is important to use the same UIDs and GIDs on all nodes.
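To confirm they match, run this on each node and compare the output,
id oracle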
NFS share
We're making a share on rac1 and mounting it remotely on both nodes (for testing purposes; otherwise use a separate NAS). Make sure you've got the nfs-utils package installed and portmap enabled on both nodes,
rpm -q nfs-utils
service portmap restart
chkconfig portmap on
On rac1
Prepare an additional mount point for the NFS share, here on a Xen guest (xvdc disk),
mkfs.ext3 /dev/xvdc
mkdir /mnt/shared
cd /etc/
cat >> fstab <<EOF9
/dev/xvdc /mnt/shared ext3 defaults 1 2
EOF9
mount /mnt/shared
Configure the NFS share,
cat > exports <<EOF9
/mnt/shared *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
EOF9
and enable it,
service portmap restart
service nfs restart
chkconfig portmap on
chkconfig nfs on
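You can check from rac2 that the export is visible (showmount comes with nfs-utils),
showmount -e rac1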
On both nodes
Mount the nfs share (assuming /u01 exists as a mount point already),
mkdir /u01/shared
cd /etc/
cat >> fstab <<EOF9
rac1:/mnt/shared /u01/shared nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
EOF9
mount /u01/shared
find /u01
chown -R oracle:oinstall /u01
find /u01 -type d -exec chmod 770 {} \;
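To make sure the oracle user can write to the shared area from both nodes (write_test is just an arbitrary test file),
su - oracle -c 'touch /u01/shared/write_test'   (on rac1)
su - oracle -c 'ls -l /u01/shared/write_test'   (on rac2)
su - oracle -c 'rm /u01/shared/write_test'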
Ready to go
Login as oracle user on both nodes with X11 forwarding enabled and configure your environment,
cat > .vimrc <<EOF9
syntax off
EOF9
cat > .bash_profile <<EOF9
source \$HOME/.bashrc
EOF9
vi .bashrc
PS1='${HOSTNAME%%\.*}> '
ORACLE_HOSTNAME=rac1.example.local; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
TMP=/tmp; export TMP
ulimit -u 16384 -n 65536
note. I will add ORACLE_HOME and edit the PATH later on.
note. Change ORACLE_HOSTNAME accordingly.
source .bash_profile
Grid infrastructure installation
On the node of your choice, upload and extract the grid infrastructure archive (you can remove the zip file afterwards to save space),
make sure X11 forwarding is enabled and launch the installer,
cd grid/
./runInstaller
Installation option > Install and configure grid infrastructure for a cluster
Installation type > Advanced installation
Product languages > English (already selected on the right)
Grid plug and play > Cluster name: rac
Grid plug and play > SCAN name: rac-scan.example.local
Cluster node information > add rac2.example.local / rac2-vip.example.local
Cluster node information > SSH connectivity... > OS password: PASSWORD
Cluster node information > SSH connectivity... > click Setup
Network interface usage > (here public / private)
Storage option > Shared file system
OCR storage > External redundancy: /u01/shared/storage/ocr
Voting disk storage > External redundancy: /u01/shared/storage/vdsk
Failure isolation > (Do not use intelligent platform management interface)
Operating system groups > dba for all (unless you're using asm) -- continue? yes
Installation location > Oracle base: /u01/app/oracle
Installation location > Software location: /u01/shared/11.2.0/grid
Create inventory > /u01/app/oraInventory
note. change PASSWORD accordingly
All prerequisites should be fine. Save the response file, and optionally clean it up, so you can reuse it for future grid installations,
cd ~/
mv grid.rsp grid.rsp.dist
sed '/^#/d; /^$/d;' grid.rsp.dist > grid.rsp
You'll then be prompted by the installer to execute a couple of scripts as root on both nodes,
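These scripts typically live under the inventory and grid software locations chosen above; the installer prints the exact paths. Run them as root on rac1 first, then on rac2,
/u01/app/oraInventory/orainstRoot.sh
/u01/shared/11.2.0/grid/root.sh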
note. never mind the "ADVM/ACFS is not supported" message, we're not using ASM (ADVM stands for Automatic Storage Management Dynamic Volume Manager; ACFS stands for Automatic Storage Management Cluster File System).
note. never mind the Oracle Cluster Verification Utility error below; it shows up because rac-scan should normally be resolved through DNS round-robin,
INFO: Checking name resolution setup for "rac-scan.example.local"...
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.example.local"
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.example.local" (IP address: failed
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.example.local"
INFO: Verification of SCAN VIP and Listener setup failed
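For reference, a proper setup has the SCAN name resolve to two or three addresses served round-robin by DNS rather than a single /etc/hosts entry. A hypothetical BIND zone snippet (placeholder addresses),
rac-scan.example.local. IN A 192.168.1.101
rac-scan.example.local. IN A 192.168.1.102
rac-scan.example.local. IN A 192.168.1.103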
Database 11g release 2 installation
Then upload and extract the two 11g Release 2 database archives on the same node (you can remove the zip files afterwards),
make sure X11 forwarding is enabled and launch the installer,
cd database/
./runInstaller
Installation option > Install database software only
Grid options > Real application clusters database installation: rac1, rac2
Grid options > SSH connectivity > OS password: PASSWORD
Grid options > SSH connectivity > click Setup
Product languages > English
Database edition > Enterprise edition
Installation location > Oracle base: (default defined by env) /u01/app/oracle
Installation location > Software location: (default) /u01/app/oracle/product/11.2.0/dbhome_1
Operating system groups > (default) OSDBA: dba
Operating system groups > (default) OSOPER: oinstall
note. change PASSWORD accordingly
All prerequisites should be fine. Save the response file, and optionally clean it up, so you can reuse it for future database installations,
cd ~/
mv db.rsp db.rsp.dist
sed '/^#/d; /^$/d;' db.rsp.dist > db.rsp
You'll then be prompted by the installer to execute a script as root on both nodes,
note. there is no need to overwrite the existing files when the script asks.
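The script is typically located under the software location chosen above; run it as root on rac1 first, then on rac2,
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh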
You can now add ORACLE_HOME and edit your PATH on both nodes,
cd ~/
vi .bashrc
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH; export PATH
source .bashrc
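You can check that the new environment is picked up, for example,
which sqlplus
sqlplus -V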
Fix the permissions again, as root this time,
cd /u01/
chown oracle:oinstall shared
chmod 770 shared
Now when creating a database with dbca, choose,
Oracle real application clusters database 
Use common location for all database files: /u01/shared/oradata
Specify flash recovery area: /u01/shared/flash_recovery_area
Init parameters > Memory / Typical: ...
Init parameters > Character sets / use unicode
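Once the database is created, you can verify the clusterware and database status from either node; replace DBNAME with the global database name you chose in dbca,
/u01/shared/11.2.0/grid/bin/crsctl check cluster -all
srvctl status database -d DBNAME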
In case you're experiencing errors during installation,
cd /u01/app/oraInventory/logs/
ls -ltr
If you get this error when trying to install 11g release 2 RAC,
[INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.
==> Prior to installing Oracle RAC, you must create a valid cluster. This is done by deploying Grid Infrastructure software, which will allow configuration of Oracle Clusterware and Automatic Storage Management.
If you get this error while installing grid,
INS-41519: One or more voting disk locations are not valid
==> make sure the containing folder exists, has the right permissions, and is accessible from all nodes.

(obsolete, see the new doc)