This document is obsolete -- see http://doc.nethence.com/ instead

Red Hat GFS2 configuration 

 

 

Introduction 

GFS2 is fine for large files; it's not recommended for lots of small files (Red Hat supports it still...) 

Note. the procedures hereafter need to be applied on all the nodes unless specified otherwise. 

 

 

Prerequisites 

Make sure the nodes can resolve each other's names, 

vi /etc/hosts
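
e.g. (the addresses and names here are hypothetical, adapt to your network), 

192.168.0.11    node1
192.168.0.12    node2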

 

Make sure you've got NTP configured and running, 

rpm -q ntp
#vi /etc/ntp.conf (change the server if needed)
service ntpd start
chkconfig ntpd on
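
Consistent time across the nodes matters for the cluster stack; to check that a node is actually synchronized, query its NTP peers, 

ntpq -p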

 

 

Installation 

Install the needed package groups, 

yum groupinstall Clustering
yum groupinstall "Cluster Storage"

 

Note the LVM configuration change made by the package installation, 

cd /etc/lvm
diff lvm.conf.lvmconfold lvm.conf

gives, 

>     library_dir = "/usr/lib"

 

Also, you need to tweak lvm.conf a little to adapt the LVM2 tools for CLVM, 

cd /etc/lvm
cp lvm.conf lvm.conf.dist
sed '
/^[[:space:]]*$/d;
/^[[:space:]]*#/d;
' lvm.conf.dist > lvm.conf
vi lvm.conf

change, 

locking_type = 3
fallback_to_local_locking = 0

Note. locking_type 3 enables built-in clustered locking 

Note. setting fallback_to_local_locking to 0 doesn't prevent LVM from working when another node is down 
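
To double-check the resulting values after the edit, 

grep -E '^[[:space:]]*(locking_type|fallback_to_local_locking)' /etc/lvm/lvm.conf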

 

Restart the clustered LVM daemon just to be sure the changes are immediately applied, 

service clvmd restart

 

 

Configuration 

Setup the cluster, 

system-config-cluster

then e.g., 

Cluster nodes > Add a cluster node > node1 / votes: 1
Cluster nodes > Add a cluster node > node2 / votes: 1
File > Save (OK)
File > Quit

Note. consider using a quorum disk (recommended!), as sketched below 
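
A minimal quorum disk sketch, assuming a small shared LUN is available as /dev/sdc with label 'qdisk' (both hypothetical, adapt to your storage). From one node, initialize the disk, 

mkqdisk -c /dev/sdc -l qdisk

declare it in cluster.conf, inside the <cluster> element, 

<quorumd interval="1" tko="10" votes="1" label="qdisk"/>

then on all nodes, 

service qdiskd start
chkconfig qdiskd on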

 

For a two-node cluster w/o quorum disk and w/o fencing you get, 

cd /etc/cluster
cat cluster.conf

e.g., 

<?xml version="1.0" ?>
<cluster config_version="2" name="clustername">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="node2" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>

Note the '<cman expected_votes="1" two_node="1"/>' directive. In the RHEL4 cluster suite you had to insert this kind of line manually for a two-node cluster, otherwise it wouldn't run. 

Note. w/o fencing enabled you may see entries like these in the logs, 

Jul 16 00:40:22 node2 fenced[1926]: fencing node "node1"
Jul 16 00:40:22 node2 fenced[1926]: fence "node1" failed
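
To get rid of those, declare a real fence device. A minimal sketch assuming IPMI-capable nodes -- the device name, address and credentials are hypothetical, adapt the agent to your hardware, 

<clusternode name="node1" nodeid="1" votes="1">
        <fence>
                <method name="1">
                        <device name="ipmi-node1"/>
                </method>
        </fence>
</clusternode>

<fencedevices>
        <!-- hypothetical device, adapt agent and credentials -->
        <fencedevice agent="fence_ipmilan" name="ipmi-node1" ipaddr="192.168.0.21" login="admin" passwd="secret"/>
</fencedevices>

then bump config_version and propagate with, 

ccs_tool update /etc/cluster/cluster.conf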

 

You can now start the daemons on ALL NODES; cluster startup works best when you proceed on all the nodes at the same time, 

service cman start
service gfs2 start
service clvmd start
#service rgmanager start

Don't forget to enable those at boot, 

chkconfig cman on
chkconfig gfs2 on
chkconfig clvmd on
#chkconfig rgmanager on

 

 

Make a GFS2 filesystem 

From only one node, create a CLVM logical volume, 

pvcreate /dev/dm-0
vgcreate vgcheck /dev/dm-0
lvcreate -L 10000 -n lvcheck vgcheck

Note. lvm.conf has been adapted for CLVM, see the 'Installation' chapter 

Note. FYI it's possible to prevent vgcreate from making the VG available to the cluster, hence visible to one node only, with the clustered flag, 

-c n
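
e.g. (hypothetical VG and device names), 

vgcreate -c n vglocal /dev/sdb1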

Now look for the volume from the other nodes w/o rescanning, 

vgdisplay -v

Note. no need to 'vgchange -ay' either, thanks to CLVM 

 

From only one node, make the filesystem, 

#man mkfs.gfs2
mkfs.gfs2 -p lock_dlm -t clustername:lvcheck -j 2 /dev/vgcheck/lvcheck

Note. use the lock_dlm protocol for a shared disk storage configuration 

Note. clustername must match the one defined in cluster.conf 

Note. lvcheck (or any other name) is the lock table name distinguishing this filesystem 

Note. -j 2 creates two journals, one for each node that will mount the filesystem 
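
Note. if you add a node later, the filesystem will need one more journal; gfs2_jadd works against the mounted filesystem (mounted on /mnt/gfs below), e.g., 

gfs2_jadd -j 1 /mnt/gfs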

 

Mount it on all nodes and check, 

cd /mnt
mkdir -p gfs
mount.gfs2 /dev/vgcheck/lvcheck gfs
cd gfs
#echo ok > check
cat check

Note. mount.gfs2 doesn't tell whether the filesystem is clean or not, hence the fsck before mounting when doing disaster recovery, 

#fsck.gfs2 -y /dev/vgcheck/lvcheck

 

To mount the fs automatically at boot, 

cd /etc
vi fstab

optionally add (commented out here, see the note below), 

#/dev/vgcheck/lvcheck /mnt/gfs gfs2 defaults 0 0

Note. you might prefer to use a job scheduler or some cluster solution to enable/disable one node's services cleanly, w/o using fstab. Also, mount.gfs2 doesn't tell whether the filesystem is clean or not. 
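
Note. if you do use fstab, the noatime option avoids an access-time write (and the cluster-wide locking that goes with it) on every read, which Red Hat recommends for GFS2 when applications don't need access times, e.g., 

#/dev/vgcheck/lvcheck /mnt/gfs gfs2 noatime 0 0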

 

 

Usage 

View the running nodes, 

cman_tool nodes
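
Also check quorum state and vote counts, 

cman_tool status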

 

If using manual fencing, to acknowledge that a failed node has been fenced by hand, 

fence_ack_manual -n node2
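
To actually trigger fencing of a node through its configured fence devices (assuming fencing is set up, see the 'Configuration' chapter), there's also, 

fence_node node2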

 

 

Misc 

Note. to disable GFS2 completely, unmount the filesystems, clean up fstab and, 

service cman stop
service gfs2 stop
service clvmd stop
#service rgmanager stop

 

service cman status
service gfs2 status
service clvmd status
#service rgmanager status

 

chkconfig cman off
chkconfig gfs2 off
chkconfig clvmd off
#chkconfig rgmanager off

 

#rm -rf /etc/cluster

 

#cd /etc/lvm
#mv lvm.conf lvm.conf.cluster
#cp lvm.conf.lvmconfold lvm.conf

 

 

Troubleshooting 

Note. if you get this error when trying to mount, 

mount.gfs2: can't connect to gfs_controld: Connection refused

start the cman service 

 

Note. if you get this error when trying to start cman, 

/usr/sbin/cman_tool: ccsd is not running

most likely /etc/cluster/cluster.conf is missing or invalid: ccsd is launched by the cman init script and won't stay up without a readable cluster configuration; fix the file and start cman again 

 

 

References 

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Logical_Volume_Manager/LVM_examples.html 

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Logical_Volume_Manager/LVM_Cluster_Overview.html 

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.5/html/Global_File_System_2/index.html 

http://securfox.wordpress.com/2009/08/11/how-to-setup-gfs/ 

http://roala.blogspot.com/2007/12/centos-5-gfs.html 

http://kairo.wordpress.com/2009/04/09/gfs-gfs2-sharing-filesystems-multiple-servers-nodes/ 

http://gcharriere.com/blog/?tag=gfs2 

http://sourceware.org/cluster/doc/usage.txt