
Warning: those guides are mostly obsolete, please have a look at the new documentation.


IBM GPFS configuration on RHEL5
We're installing a two-node GPFS cluster: gpfs1 and gpfs2. Both are RHEL5 systems accessing a shared disk as '/dev/sdb'. We're not using the GPFS client/server feature: just two nodes accessing the NSD directly.
On each node, make sure you've got those packages installed,
rpm -q \
libstdc++ \
compat-libstdc++-296 \
compat-libstdc++-33 \
libXp \
imake \
gcc-c++ \
kernel \
kernel-headers \
kernel-devel
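Rather than reading the rpm -q output by eye, a small loop can report only what is missing. This is a sketch; the package list is the one above, and the query command is parameterized only so the loop is easy to exercise:

```shell
#!/bin/sh
# Sketch: report which of the required packages are missing.
# Package list taken from the guide above.
PKGS="libstdc++ compat-libstdc++-296 compat-libstdc++-33 libXp imake \
gcc-c++ kernel kernel-headers kernel-devel"

missing_pkgs() {
    # $1 = the query command, e.g. "rpm -q"; parameterized so the
    # loop itself can be tried without rpm
    for p in $PKGS; do
        $1 "$p" >/dev/null 2>&1 || echo "$p"
    done
}

# On a real node: missing_pkgs "rpm -q"   (prints nothing when all are there)
```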
On each node, make sure the node names resolve and that root can log in to each node, including itself,
cat /etc/hosts
ssh-keygen -t dsa -P ''
copy/paste the public keys from each node,
cat .ssh/
into a single authorized_keys2 file, identical on all the nodes,
vi ~/.ssh/authorized_keys2
check that each node can connect to every node, including itself,
ssh gpfs1
ssh gpfs2
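The pairwise checks above can be looped over instead of run by hand. A sketch, assuming the two-node setup from this guide (BatchMode makes ssh fail rather than prompt if the key exchange didn't work):

```shell
#!/bin/sh
# Sketch: verify passwordless SSH to every node, including this one.
# NODES matches the two-node setup from this guide.
NODES="gpfs1 gpfs2"

check_ssh() {
    # $1 = the ssh-like command; parameterized so the loop can be
    # exercised without real nodes
    rc=0
    for n in $NODES; do
        $1 -o BatchMode=yes "$n" true >/dev/null 2>&1 \
            || { echo "cannot reach $n"; rc=1; }
    done
    return $rc
}

# On a real node: check_ssh ssh
```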
On each node, extract and install IBM Java,
./gpfs_install-3.2.1-0_i386 --text-only
rpm -ivh /usr/lpp/mmfs/3.2/ibm-java2-i386-jre-5.0-4.0.i386.rpm
extract again and install the GPFS RPMs,
./gpfs_install-3.2.1-0_i386 --text-only
rpm -ivh /usr/lpp/mmfs/3.2/gpfs*.rpm
On each node, get the latest GPFS update (3.2.1-13 here) and install it,
mkdir /usr/lpp/mmfs/3.2.1-13
tar xvzf gpfs-3.2.1-13.i386.update.tar.gz -C /usr/lpp/mmfs/3.2.1-13
rpm -Uvh /usr/lpp/mmfs/3.2.1-13/*.rpm
On each node, prepare the portability layer build,
#mv /etc/redhat-release /etc/redhat-release.dist
#echo 'Red Hat Enterprise Linux Server release 5.3 (Tikanga)' > /etc/redhat-release
cd /usr/lpp/mmfs/src
export SHARKCLONEROOT=/usr/lpp/mmfs/src
rm config/site.mcr
make Autoconfig
check these values in the configuration,
grep ^LINUX_DISTRIBUTION config/site.mcr
grep 'define LINUX_DISTRIBUTION_LEVEL' config/site.mcr
grep 'define LINUX_KERNEL_VERSION' config/site.mcr
Note. LINUX_KERNEL_VERSION is "2061899" for kernel "2.6.18-128.1.10.el5".
On each node, build it,
make clean
make World
make InstallImages
On each node, add the GPFS binaries to the PATH,
vi ~/.bashrc
add this line (the GPFS commands live in /usr/lpp/mmfs/bin),
export PATH=$PATH:/usr/lpp/mmfs/bin
then reload the file,
source ~/.bashrc
On some node, create the cluster,
mmcrcluster -N gpfs1:quorum,gpfs2:quorum -p gpfs1 -s gpfs2 -r /usr/bin/ssh -R /usr/bin/scp
Note. gpfs1 as primary configuration server, gpfs2 as secondary
On some node, start the cluster on all the nodes,
mmstartup -a
On some node, create the NSD,
vi /etc/diskdef.txt
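The descriptor file's content isn't shown above. As an illustration only, a single direct-attached disk on GPFS 3.x might be described like this (fields are DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool; the server fields stay empty since both nodes see the disk directly, and the name 'nsd1' is an assumption -- check the mmcrnsd man page for the exact format of your release):

```
/dev/sdb:::dataAndMetadata:-1:nsd1:
```

mmcrnsd rewrites this file, commenting out the processed descriptors and substituting the NSD names, which is why the same file can be passed to mmcrfs below.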
mmcrnsd -F /etc/diskdef.txt
On some node, create the filesystem,
mmcrfs gpfs1 -F /etc/diskdef.txt -A yes -T /gpfs
Note. '-A yes' for automount
Note. check for the new entry in '/etc/fstab'
On some node, mount /gpfs on all the nodes,
mmmount /gpfs -a
On some node, check you've got access to the GUI,
/etc/init.d/gpfsgui start
Note. if you need to change the default ports, edit these files and change "80" and "443" to the ports you want,
#vi /usr/lpp/mmfs/gui/conf/
#vi /usr/lpp/mmfs/gui/conf/
wait a few seconds (starting JAVA...) and go to node's GUI URL,
https://gpfs2/ibm/console/
On each node, you can now disable the GUI to save some RAM,
/etc/init.d/gpfsgui stop
chkconfig gpfsgui off
and make sure the gpfs service is enabled everywhere,
chkconfig --list | grep gpfs
Note. also make sure the shared disk shows up at boot.
For troubleshooting, watch the logs,
tail -F /var/log/messages | grep 'mmfs:'
On some node, to start the cluster and mount the file system on all the nodes,
mmstartup -a
mmmount /gpfs -a
Note. "mmshutdown" to stop the cluster.
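The start-and-mount sequence above can be kept as one small script. A sketch; mmstartup and mmmount are GPFS's own commands, and the optional prefix argument exists only so the sequence can be dry-run:

```shell
#!/bin/sh
# Sketch: start the cluster and mount /gpfs on all nodes in one step.
# Assumes the GPFS commands are on the PATH (/usr/lpp/mmfs/bin).
cluster_up() {
    # $1 = optional command prefix, e.g. "echo" for a dry run
    $1 mmstartup -a || return 1
    $1 mmmount /gpfs -a
}

# Real run:  cluster_up
# Dry run:   cluster_up echo
```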
Show cluster information,
mmlscluster
Show file systems and mounts,
#mmlsdisk gpfs1
mmlsmount all
show file system options,
mmlsfs gpfs1 -a
To disable automount,
mmchfs gpfs1 -A no
to re-enable automount,
mmchfs gpfs1 -A yes
