

Warning: those guides are mostly obsolete, please have a look at the new documentation.


Installing Oracle RAC 11gR2 on RHEL5 nodes
Part 3: Oracle Grid and RDBMS
 
Installing Oracle Grid
Upload the Oracle Grid and Database (RDBMS) archives to one of the nodes. These are the archive files from Oracle E-delivery, but the ones available on ITRC should work just as well,
V17530-01_1of2.zip (Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86-64)
V17530-01_2of2.zip
V17531-01.zip (Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux x86-64)
Note. Just for reference here are the 32-bit archive file names,
V17489-01_1of2.zip (Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86)
V17489-01_2of2.zip
V17490-01 (Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux x86)
 
Login as oracle user (with X11 forwarding enabled if you are connecting remotely),
unzip V17531-01.zip
rm -f V17531-01.zip
cd grid/
./runInstaller
Note. Do not place the archive in a directory whose path contains spaces, otherwise you may get this error,
./runInstaller: line 106: /home/oracle/Oracle: No such file or directory
 
Proceed with the installation e.g. choose,
1/8: ...for a Cluster (first choice)
2/8: typical installation (first choice)
3/8: add nodes (rac2/rac2-vip). Click "SSH connectivity", enter the password and click Setup. Click "Identify network interfaces" and verify that the heartbeat network is marked as "Private".
3/8: ASM
[...]
6/9: run the script to fix the kernel parameters (the 11gR2 installer automates this, so you might as well use that facility),
  cd /tmp/CVU_11.2.0.1.0_oracle/
  ./runfixup.sh
and proceed with the installation.
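The "SSH connectivity" button at step 3/8 sets up user equivalence between the nodes for you. If it fails, the same thing can be done by hand; a sketch, assuming the 'oracle' user and the rac1/rac2 node names from this guide,

```shell
# Manual SSH user equivalence for the 'oracle' user (fallback for the
# installer's "SSH connectivity" button; the node names are an assumption).
KEYFILE="$HOME/.ssh/id_rsa"
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
# generate a passphrase-less key pair only if none exists yet
[ -f "$KEYFILE" ] || ssh-keygen -t rsa -N '' -f "$KEYFILE" -q
# then push the public key to every node, e.g.,
#   ssh-copy-id oracle@rac1
#   ssh-copy-id oracle@rac2
```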
 
During the installation you can follow the logs. The terminal you launched the installer from tells you which file to read, e.g.,
You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2012-12-31_02-40-08PM.log
Note. This step (or possibly the one just after it) takes quite some time,
INFO: Copying Oracle home '/u01/app/11.2.0/grid' to remote nodes 'rac1,rac2'.
 
Note. On CentOS the installer warns that ADVM/ACFS is not supported,
ADVM/ACFS is not supported on centos-release-5-8.el5.centos
Once everything is done cleanup your home as oracle user,
cd ~/
rm -rf grid/
 
Ready to go for Grid
Check that everything is fine as root,
cd ~/
service clvmd status
cd /u01/app/11.2.0/grid/bin/
./ocrcheck
and as 'oracle' user,
cluvfy stage -pre dbinst -n rac1,rac2
#cluvfy stage -pre dbinst -fixup -n rac1,rac2 -osdba dba -verbose
 
Troubleshooting for Grid installation
If you get this error while executing the 'root.sh' script at the end of the installation,
CRS is already configured on this node for crshome=0
Cannot configure two CRS instances on the same cluster.
Please deconfigure before proceeding with the configuration of new home.
then deconfigure the Oracle HA service (this won't work without adding '-force'),
cd /u01/app/11.2.0/grid/
crs/install/roothas.pl -delete -force
Ref. https://forums.oracle.com/forums/thread.jspa?threadID=976757
 
If you get this error while executing the 'root.sh' script at the end of the installation,
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
then, as root, stop the Oracle HA service and deconfigure it (won't work without adding '-force'),
cd /u01/app/11.2.0/grid/bin/
./crsctl stop crs -f
cd /u01/app/11.2.0/grid/
crs/install/roothas.pl -delete -force
Ref. https://forums.oracle.com/forums/thread.jspa?threadID=2284551
 
In case you messed up the installation process, here is how to start from scratch again: log out all sessions of the oracle user and, as root,
cd /
rm -rf u01/
cd etc/
rm -rf oracle/ oraInst.loc oratab
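The cleanup steps above can also be wrapped in a small function that takes the target root as an argument, which makes it possible to double-check before wiping the real system (the paths are the ones used throughout this guide),

```shell
# DANGER: removes the whole Oracle installation under the given root.
# A sketch of the start-from-scratch cleanup steps above; normally the
# root argument is simply /.
wipe_oracle_install() {
    root=${1:?usage: wipe_oracle_install <root, normally />}
    rm -rf "$root/u01"
    rm -rf "$root/etc/oracle"
    rm -f  "$root/etc/oraInst.loc" "$root/etc/oratab"
}
# e.g., after logging out every oracle session, as root:
#   wipe_oracle_install /
```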
 
Installing the Oracle RDBMS
Now comes the easy part. Are you still logged in as the oracle user with X11 forwarding enabled (in case you are in a remote session)? Good, keep typing there. Add these variables to the 'oracle' user's environment before installing the RDBMS binaries (ORACLE_BASE and the other variables were defined previously),
cd ~/
vi .bashrc
add,
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export CLASSPATH
Note. I tried without LD_LIBRARY_PATH and CLASSPATH and got an error when the installer ran the linking step of the 'ins_rdbms.mk' makefile,
Exception String: Error in invoking target 'all_no_orcl' of makefile '/u01/app/oracle/product/11.2.0/db_1/rdbms/lib/ins_rdbms.mk'. See '/u01/app/oraInventory/logs/installActions2012-12-31_05-50-34PM.log' for details.
apply,
  source .bashrc
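Given the error above, it is worth checking that the variables are really in place before launching the installer. A small sanity-check function, using the variable names from the .bashrc snippet above,

```shell
# Sanity-check the oracle environment before launching the RDBMS
# installer (variable names from the .bashrc snippet above).
check_oracle_env() {
    ok=0
    for v in ORACLE_BASE ORACLE_HOME LD_LIBRARY_PATH CLASSPATH; do
        # indirect expansion: fetch the value of the variable named in $v
        eval val=\$$v
        if [ -z "$val" ]; then
            echo "WARNING: $v is not set" >&2
            ok=1
        fi
    done
    return $ok
}
# e.g.:  check_oracle_env && ./runInstaller
```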

Extract and proceed with the installation of the RDBMS,
unzip V17530-01_1of2.zip
rm -f V17530-01_1of2.zip
unzip V17530-01_2of2.zip
rm -f V17530-01_2of2.zip
cd database/
./runInstaller
run the scripts,
  cd /u01/app/oracle/product/11.2.0/db_1/
  ./root.sh
Note. The installer installs the binaries on all the nodes for you. So you only need to run this on one node.
Note. At some point the installer takes quite a while before the root scripts pop-up window shows up. I am not sure whether the delay comes from copying the binaries to the other nodes (it is even longer than for the grid binaries) or from saving the cluster inventory. Be patient once you are this far in the logs,
  cd $ORACLE_BASE/oraInventory/logs/
  ls -ltr
  tail -f <installActions file>
  INFO: Copying Oracle home '/u01/app/oracle/product/11.2.0/db_1' to remote nodes 'rac2'.
because just after that, you will see,
INFO: Saving Cluster Inventory
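To follow the most recent install log without typing the timestamped file name by hand, a small helper can pick it for you (the log directory is the one used throughout this guide; adjust it if your oraInventory lives elsewhere),

```shell
# Print the most recently modified installActions log in the given
# directory, so it can be fed straight to tail -f.
latest_install_log() {
    ls -t "$1"/installActions*.log 2>/dev/null | head -1
}
# e.g., on a node from this guide:
#   tail -f "$(latest_install_log /u01/app/oraInventory/logs)"
```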

You can now clean things up as user 'oracle',
  cd ~/
  rm -rf database/
Checking everything
(in progress...)
./raccheck -v 

Make sure ASM works fine,
  asmcmd lsdg
  #asmca

Ready to go -- Create a clustered relational database
Create a RAC database as 'oracle' user,
  dbca
choose,
  Create Database
  Custom Database
  Global Database Name and SID: dbname
  (optionally provide an SMTP server and email address for Enterprise Manager's alerts, or just disable them)
  (you should keep automatic maintenance tasks enabled)

As 'oracle' user, show the current status and configuration of the clustered "dbname" database,
srvctl status database -d dbname
srvctl config database -d dbname
sqlplus / as sysdba
select inst_name from v$active_instances;

References
Oracle Database 11g Release 2 RAC On Linux Using NFS (http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-linux-using-nfs.php)
Real Application Clusters (http://www.orafaq.com/wiki/Real_Application_Clusters)
Configuration Example - Oracle HA on Cluster Suite (https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Configuration_Example_-_Oracle_HA_on_Cluster_Suite/)
