This document is obsolete -- see http://doc.nethence.com/ instead
IBM GPFS configuration on RHEL5
Introduction
We're installing a two-node GPFS cluster: gpfs1 and gpfs2. Both are RHEL5 systems accessing a shared disk as '/dev/sdb'. We're not using the GPFS client/server feature, just two NSD servers.
Installation
On each node, make sure you've got those packages installed,
rpm -q \
libstdc++ \
compat-libstdc++-296 \
compat-libstdc++-33 \
libXp \
imake \
gcc-c++ \
kernel \
kernel-headers \
kernel-devel \
xorg-x11-xauth
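if any of them are missing, and assuming the RHEL repositories are reachable from the nodes, something like this should pull them in,
yum install compat-libstdc++-296 compat-libstdc++-33 libXp imake gcc-c++ kernel-headers kernel-devel xorg-x11-xauth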
On each node, make sure the nodes resolve and are able to log in as root to one another, including themselves,
cat /etc/hosts
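for example, with hypothetical addresses, /etc/hosts would contain entries like,
192.168.0.1 gpfs1
192.168.0.2 gpfs2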
ssh-keygen -t dsa -P ''
copy/paste the public key from each node,
cat .ssh/id_dsa.pub
into the same authorized_keys2 file on all the nodes,
vi ~/.ssh/authorized_keys2
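alternatively, assuming password authentication is still enabled on the nodes, ssh-copy-id can push the key for you (note it appends to ~/.ssh/authorized_keys rather than authorized_keys2),
ssh-copy-id -i ~/.ssh/id_dsa.pub root@gpfs1
ssh-copy-id -i ~/.ssh/id_dsa.pub root@gpfs2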
check that the nodes can connect to each other, including themselves,
ssh gpfs1
ssh gpfs2
On each node, extract and install IBM Java,
./gpfs_install-3.2.1-0_i386 --text-only
rpm -ivh /usr/lpp/mmfs/3.2/ibm-java2-i386-jre-5.0-4.0.i386.rpm
extract again and install the GPFS RPMs,
./gpfs_install-3.2.1-0_i386 --text-only
rpm -ivh /usr/lpp/mmfs/3.2/gpfs*.rpm
On each node, get the latest GPFS update (http://www14.software.ibm.com/webapp/set2/sas/f/gpfs/download/home.html) and install it,
mkdir /usr/lpp/mmfs/3.2.1-13
tar xvzf gpfs-3.2.1-13.i386.update.tar.gz -C /usr/lpp/mmfs/3.2.1-13
rpm -Uvh /usr/lpp/mmfs/3.2.1-13/*.rpm
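check the installed GPFS packages and versions, e.g.,
rpm -qa | grep gpfs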
On each node, prepare the portability layer build,
#mv /etc/redhat-release /etc/redhat-release.dist
#echo 'Red Hat Enterprise Linux Server release 5.3 (Tikanga)' > /etc/redhat-release
cd /usr/lpp/mmfs/src
export SHARKCLONEROOT=/usr/lpp/mmfs/src
rm config/site.mcr
make Autoconfig
check those values in the configuration,
grep ^LINUX_DISTRIBUTION config/site.mcr
grep 'define LINUX_DISTRIBUTION_LEVEL' config/site.mcr
grep 'define LINUX_KERNEL_VERSION' config/site.mcr
Note. "2061899" for kernel "2.6.18-128.1.10.el5"
On each node, build it,
make clean
make World
make InstallImages
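assuming the default install location, the portability layer kernel modules should now show up in the kernel's extra directory, e.g.,
ls /lib/modules/$(uname -r)/extra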
On each node, edit the PATH,
vi ~/.bashrc
add this line,
PATH=$PATH:/usr/lpp/mmfs/bin
apply,
source ~/.bashrc
On some node, create the cluster,
mmcrcluster -N gpfs1:quorum,gpfs2:quorum -p gpfs1 -s gpfs2 -r /usr/bin/ssh -R /usr/bin/scp
Note. gpfs1 as primary configuration server, gpfs2 as secondary
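to verify the cluster definition, e.g.,
mmlscluster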
On some node, start the cluster on all the nodes,
mmstartup -a
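check that the GPFS daemons are active on both nodes, e.g.,
mmgetstate -a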
On some node, create the NSD,
vi /etc/diskdef.txt
like,
/dev/sdb:gpfs1,gpfs2::::
apply,
mmcrnsd -F /etc/diskdef.txt
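note that mmcrnsd rewrites the descriptor file with the generated NSD names, which is why the same file is reused by mmcrfs below; check the NSDs, e.g.,
mmlsnsd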
On some node, create the filesystem,
mmcrfs gpfs1 -F /etc/diskdef.txt -A yes -T /gpfs
Note. '-A yes' for automount
Note. check for changes in '/etc/fstab'
On some node, mount /gpfs on all the nodes,
mmmount /gpfs -a
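check the mount on both nodes, e.g.,
df -h /gpfs
mmlsmount all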
On some node, check you've got access to the GUI,
/etc/init.d/gpfsgui start
Note. if you need to change the default ports, edit those files and change "80" and "443" to the ports you want,
#vi /usr/lpp/mmfs/gui/conf/config.properties
#vi /usr/lpp/mmfs/gui/conf/webcontainer.properties
wait a few seconds (Java is starting) and go to the node's GUI URL,
https://gpfs2/ibm/console/
On each node, you can now disable the GUI to save some RAM,
/etc/init.d/gpfsgui stop
chkconfig gpfsgui off
and make sure gpfs is enabled everywhere,
chkconfig --list | grep gpfs
Note. also make sure the shared disk shows up at boot.
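e.g. check that the shared disk is seen by the kernel after a reboot,
grep sdb /proc/partitions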
Usage
For troubleshooting, watch the logs there,
tail -F /var/log/messages | grep 'mmfs:'
On some node, to start the cluster and mount the file system on all the nodes,
mmstartup -a
mmmount /gpfs -a
Note. "mmshutdown" to stop the cluster.
Show cluster information,
mmlscluster
#mmlsconfig
#mmlsnode
#mmlsmgr
Show file systems and mounts,
#mmlsnsd
#mmlsdisk gpfs1
mmlsmount all
Show file system options,
mmlsfs gpfs1 -a
To disable automount,
mmchfs gpfs1 -A no
To re-enable automount,
mmchfs gpfs1 -A yes
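check the automount attribute, e.g.,
mmlsfs gpfs1 -A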
References
Install and configure General Parallel File System (GPFS) on xSeries : http://www.ibm.com/developerworks/eserver/library/es-gpfs/
General Parallel File System (GPFS) : http://www.ibm.com/developerworks/wikis/display/hpccentral/General+Parallel+File+System+(GPFS)
Managing File Systems : http://www.ibm.com/developerworks/wikis/display/hpccentral/Managing+File+Systems
GPFS V3.1 Problem Determination Guide : http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfs31/bl1pdg1117.html
GPFS : http://csngwinfo.in2p3.fr/mediawiki/index.php/GPFS
GPFS V3.2 and GPFS V3.1 FAQs : http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html