
Oracle 11g release 2 RAC installation
 
http://pbraun.nethence.com/oracle/oracle11g_rac.html
http://pbraun.nethence.com/oracle/oracle11g_rac_xen.html
 
 
Introduction
This guide is based on the one provided on oracle-base.com (http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnLinuxUsingNFS.php). We're just using a simpler shared storage configuration: a single NFS mount point instead of four. You could also use OCFS2 or GFS if you like, both are supported as well (http://www.oracle.com/technetwork/database/clustering/tech-generic-linux-new-086754.html).
 
In brief, we are going to:
- prepare the system for the two nodes
- install Oracle Grid Infrastructure (mandatory for RAC)
- install Oracle 11g as RAC (Real Application Cluster)
 
 
Requirements
Hardware
- two network interfaces per node
- enough memory (at least 1536MB for grid) and twice as much swap space
- enough disk for swap and local storage (and the NFS share on rac1 in this case)
 
Software
- Redhat Enterprise Linux version 5.x
- Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux x86-64
- Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86-64
 
System configuration
- swap space twice the memory size (3072MB)
- assuming a mount point on /u01 for local storage
- (assuming a mount point on /mnt/shared on the first node for NFS)
 
 
Network configuration
In this guide we're using a default gateway on the private network for testing purposes. It should normally be located on the public network.
 
rac1.example.local
eth0: 192.168.2.101
eth1: 192.168.0.101
gw: 192.168.0.1
 
rac2.example.local
eth0: 192.168.2.102
eth1: 192.168.0.102
gw: 192.168.0.1
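 
On RHEL those settings live in the usual ifcfg files; as a sketch, eth0 on rac1 could look like this (a /24 netmask is assumed here),
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- public interface on rac1
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.2.101
NETMASK=255.255.255.0
ONBOOT=yes
note. same idea for eth1 (private) and on rac2; the default gateway goes into /etc/sysconfig/network (GATEWAY=192.168.0.1 in this setup).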
 
On both nodes and on the dom0 (if applicable), add this to /etc/hosts,
192.168.2.100 dom0.example.local dom0
192.168.2.101 rac1.example.local rac1
192.168.2.102 rac2.example.local rac2
192.168.2.111 rac1-vip.example.local rac1-vip
192.168.2.112 rac2-vip.example.local rac2-vip
192.168.2.201 rac-scan.example.local rac-scan
 
192.168.0.1 gwpriv.example.local gwpriv
192.168.0.XXX dom0priv.example.local dom0priv
192.168.0.101 rac1priv.example.local rac1priv
192.168.0.102 rac2priv.example.local rac2priv
note. change the dom0priv ip accordingly
 
From now on, you should be able to connect to the nodes remotely.
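As a quick sanity check from each node (the VIP and SCAN addresses won't answer yet, Clusterware brings those up later),
# ping only the node addresses; the VIPs/SCAN come up after the grid install
for h in rac1 rac2 rac1priv rac2priv; do
    ping -c 1 $h >/dev/null 2>&1 && echo "$h: ok" || echo "$h: unreachable"
done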
 
 
Package requirements (both nodes)
Make sure these are installed,
rpm -q \
binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel \
| grep ^package
 
You will also need these,
rpm -q \
unzip \
xorg-x11-utils \
xorg-x11-apps \
pdksh \
smartmontools \
| grep ^package
note. 'ksh' isn't mandatory but 'pdksh' is.
note. smartctl is needed by grid's root.sh install script.
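If some of them are missing, something along these lines should pull them in (assuming the RHEL media or a yum repository is configured; 'ksh' is left out since it conflicts with 'pdksh' on RHEL 5),
yum install -y binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel \
    libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel \
    unzip xorg-x11-utils xorg-x11-apps pdksh smartmontools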
 
System preparation (both nodes)
Tweak the kernel parameters,
cd /etc/
mv sysctl.conf sysctl.conf.dist
sed '/^#/d; /^$/d;' sysctl.conf.dist > sysctl.conf
echo '' >> sysctl.conf
add and apply,
cat >> sysctl.conf <<EOF9
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF9
sysctl -p
 
Tweak the file limits,
cd /etc/security/
cat >> limits.conf <<EOF9
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF9
 
Enable the limits,
cd /etc/pam.d/
cat >> login <<EOF9
session required pam_limits.so
EOF9
 
Configure NTP,
vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid" (the added -x flag makes ntpd slew the clock instead of stepping it)
service ntpd restart
chkconfig ntpd on
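Check that the daemon actually synchronizes against a peer,
ntpq -p
note. one of the listed peers should eventually get a '*' in front of it.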
 
Create the groups and users,
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
Note. as we're using NFS, using common UIDs and GIDs on all nodes is important.
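To verify, log in as oracle on each node (a fresh login, so the pam limits apply) and check,
id
ulimit -Hu
ulimit -Hn
note. 'id' must report the same UID/GID on both nodes; the hard limits should show 16384 and 65536 respectively.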
 
 
NFS share
We make the share on rac1 and mount it on both nodes (for testing purposes; otherwise use a separate NAS). Make sure you've got the nfs-utils package installed and portmap enabled on both nodes,
rpm -q nfs-utils
service portmap restart
chkconfig portmap on
 
On rac1
Prepare an additional mount point for the NFS share, here on a Xen guest (xvdc disk),
mkfs.ext3 /dev/xvdc
mkdir /mnt/shared
cd /etc/
cat >> fstab <<EOF9
/dev/xvdc /mnt/shared ext3 defaults 1 2
EOF9
mount /mnt/shared
 
Configure the NFS share (we're still in /etc/),
cat > exports <<EOF9
/mnt/shared *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
EOF9
and enable it,
service portmap restart
service nfs restart
chkconfig portmap on
chkconfig nfs on
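From rac2 you can already check that the export is visible,
showmount -e rac1
note. /mnt/shared should show up in the export list.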
 
On both nodes
Mount the NFS share (assuming /u01 already exists as a mount point),
mkdir /u01/shared
cd /etc/
cat >> fstab <<EOF9
rac1:/mnt/shared /u01/shared nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
EOF9
mount /u01/shared
find /u01
chown -R oracle:oinstall /u01
find /u01 -type d -exec chmod 770 {} \;
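Make sure oracle can actually write there from both nodes, for example (the 'writetest' file name is arbitrary),
su - oracle -c 'touch /u01/shared/writetest && rm /u01/shared/writetest && echo write ok'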
 
 
Ready to go
Log in as the oracle user on both nodes with X11 forwarding enabled and configure your environment,
cat > .vimrc <<EOF9
syntax off
EOF9
 
cat > .bash_profile <<EOF9
source \$HOME/.bashrc
EOF9
 
vi .bashrc
like,
PS1='${HOSTNAME%%\.*}> '
 
ORACLE_HOSTNAME=rac1.example.local; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
 
ulimit -u 16384 -n 65536
note. We will add ORACLE_HOME and edit the PATH later on.
note. Change ORACLE_HOSTNAME accordingly.
apply,
source .bash_profile
 
 
Grid infrastructure installation
On the node of your choice, upload and extract the Grid Infrastructure archive,
unzip linux.x64_11gR2_grid.zip
rm -f linux.x64_11gR2_grid.zip
make sure X11 forwarding is enabled and launch the installer,
cd grid/
./runInstaller
choose,
Installation option > Install and configure grid infrastructure for a cluster
Installation type > Advanced installation
Product languages > English (already selected on the right)
Grid plug and play > Cluster name: rac
Grid plug and play > SCAN name: rac-scan.example.local
Cluster node information > add rac2.example.local / rac2-vip.example.local
Cluster node information > SSH connectivity... > OS password: PASSWORD
Cluster node information > SSH connectivity... > click Setup
Network interface usage > (here 192.168.2.0 public / 192.168.0.0 private)
Storage option > Shared file system
OCR storage > External redundancy: /u01/shared/storage/ocr
Voting disk storage > External redundancy: /u01/shared/storage/vdsk
Failure isolation > (Do not use intelligent platform management interface)
Operating system groups > dba for all (unless you're using asm) -- continue? yes
Installation location > Oracle base: /u01/app/oracle
Installation location > Software location: /u01/shared/11.2.0/grid
Create inventory > /u01/app/oraInventory
note. change PASSWORD accordingly
All prerequisites should be fine. During the installation, save the response file, and optionally clean it up for later (unattended) reuse,
cd ~/
mv grid.rsp grid.rsp.dist
sed '/^#/d; /^$/d;' grid.rsp.dist > grid.rsp
You'll then be prompted by the installer to execute these scripts as root, on both nodes,
/u01/app/oraInventory/orainstRoot.sh
/u01/shared/11.2.0/grid/root.sh
note. never mind the "ADVM/ACFS is not supported" message, we're not using ASM (ADVM stands for ASM Dynamic Volume Manager, ACFS for ASM Cluster File System).
note. never mind the Oracle Cluster Verification Utility error below: rac-scan should normally be resolved through DNS round-robin, which we aren't providing here.
INFO: Checking name resolution setup for "rac-scan.example.local"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.example.local"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.example.local" (IP address: 192.168.2.201) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.example.local"
INFO: Verification of SCAN VIP and Listener setup failed
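For reference, on a production setup the SCAN name would resolve through DNS round-robin, typically three A records in the zone file, something like this (the .202 and .203 addresses are made up for the example),
rac-scan    IN A 192.168.2.201
rac-scan    IN A 192.168.2.202
rac-scan    IN A 192.168.2.203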
 
 
Database 11g release 2 installation
Then upload and extract the 11g Release 2 database archives,
unzip linux.x64_11gR2_database_1of2.zip
rm -f linux.x64_11gR2_database_1of2.zip
unzip linux.x64_11gR2_database_2of2.zip
rm -f linux.x64_11gR2_database_2of2.zip
make sure X11 forwarding is enabled and launch the installer,
cd database/
./runInstaller
choose,
Installation option > Install database software only
Grid options > Real application clusters database installation: rac1, rac2
Grid options > SSH connectivity > OS password: PASSWORD
Grid options > SSH connectivity > click Setup
Product languages > English
Database edition > Enterprise edition
Installation location > Oracle base: (default defined by env) /u01/app/oracle
Installation location > Software location: (default) /u01/app/oracle/product/11.2.0/dbhome_1
Operating system groups > (default) OSDBA: dba
Operating system groups > (default) OSOPER: oinstall
note. change PASSWORD accordingly
All prerequisites should be fine. During the installation, save the response file, and optionally clean it up for later reuse, just as for grid,
cd ~/
mv db.rsp db.rsp.dist
sed '/^#/d; /^$/d;' db.rsp.dist > db.rsp
You'll then be prompted by the installer to execute this script as root, on both nodes,
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh
note. no need to overwrite these files (the dbhome, oraenv and coraenv scripts were already installed by grid's root.sh).
 
You can now add ORACLE_HOME and edit your PATH on both nodes,
cd ~/
vi .bashrc
add,
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH; export PATH
apply,
source .bashrc
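A quick check that everything is in place (crsctl lives in the grid home, which we didn't put in the PATH),
sqlplus -V
/u01/shared/11.2.0/grid/bin/crsctl check crs
note. crsctl should report the High Availability Services, CRS, CSS and EVM daemons as online.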
 
 
Usage
Fix the permissions again, as root just this once,
cd /u01/
chown oracle:oinstall shared
chmod 770 shared
 
Now when creating a database with dbca, choose,
Oracle real application clusters database 
Use common location for all database files: /u01/shared/oradata
Specify flash recovery area: /u01/shared/flash_recovery_area
Init parameters > Memory / Typical: ...
Init parameters > Character sets / use unicode
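Once dbca is done, check that an instance is running on each node; replace RACDB with the database name you chose,
srvctl status database -d RACDB
srvctl status nodeapps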
 
 
Troubleshooting
In case you're experiencing errors during installation,
cd /u01/app/oraInventory/logs/
ls -ltr
 
If you get this error when trying to install 11g release 2 RAC,
[INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.
==> Prior to installing Oracle RAC, you must create a valid cluster. This is done by deploying Grid Infrastructure software, which will allow configuration of Oracle Clusterware and Automatic Storage Management.
ref http://www.error-code.org.uk/view.asp?e=ORACLE-INS-35354
 
If you get this error while installing grid,
INS-41519: One or more voting disk locations are not valid
==> make sure the existing containing folder has the right permissions and is accessible from all nodes.
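The cluster verification utility shipped in the grid staging directory can help diagnosing shared storage accessibility, e.g. (assuming the storage path used in this guide),
cd grid/
./runcluvfy.sh comp ssa -n rac1,rac2 -s /u01/shared/storage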
 
 
References
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnLinuxUsingNFS.php
http://blog.ronnyegner-consulting.de/2009/09/14/oracle-11g-release-2-install-guide-%E2%80%93-grid-infrastructure-installation/
http://www.oracle.com/technetwork/database/clustering/tech-generic-linux-new-086754.html
http://www.oracle.com/technetwork/database/clustering/downloads/index.html
 
10g
http://www.puschitz.com/InstallingOracle10gRAC.shtml
http://download.oracle.com/docs/cd/B19306_01/rac.102/b28759/install.htm
 
