This is an obsolete document -- see http://doc.nethence.com/ instead.

OCFS2 configuration 

 

 

Introduction 

OCFS2 is probably the fastest shared-disk filesystem when it comes to read speeds. 

Note. the procedures hereafter need to be applied on all the nodes unless specified otherwise. 

 

 

Prerequisites 

Make sure the nodes can resolve each other's hostnames, 

vi /etc/hosts
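
e.g. for a two-node cluster (hypothetical addresses and names, adjust to your own network),

10.0.0.1 node1
10.0.0.2 node2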

 

Make sure those packages are installed, 

rpm -q \
vte \
pygtk2
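
Note. those two packages are needed by the ocfs2console GUI; if one is missing, install it, e.g.,

yum install vte pygtk2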

 

 

Installation 

To download the right OCFS2 and OCFS2 tools versions (http://oss.oracle.com/projects/ocfs2/files/ & http://oss.oracle.com/projects/ocfs2-tools/files/), first check your exact kernel version, 

uname -r
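
Note. the ocfs2 kernel module package name embeds the exact kernel version, so it has to match -- for the example packages below, uname -r would return,

2.6.18-164.el5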

then install them, e.g. (depending on your kernel and OCFS2 versions), 

rpm -ivh \
ocfs2-2.6.18-164.el5-1.4.7-1.el5.i686.rpm \
ocfs2-tools-1.4.4-1.el5.i386.rpm \
ocfs2console-1.4.4-1.el5.i386.rpm

 

 

SSH without a password 

Configure passwordless SSH from root to root between the nodes, 

cd ~/
mkdir -p .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t dsa -P '' -f id_dsa > ~/id_dsa.fingerprint

note. if using a passphrase, you will have to use ssh-agent and ssh-add 

copy/paste id_dsa.pub's content, 

#cp id_dsa.pub authorized_keys2
#vi authorized_keys2
cat id_dsa.pub
cd ..

into a common file you'll have to deploy on all the nodes, 

scp ~/.ssh/authorized_keys2 node2:~/.ssh
...
chmod 600 ~/.ssh/authorized_keys2
...
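
Note. with several nodes, a small loop saves some typing, e.g. (hypothetical node names, adjust to your own),

for node in node2 node3; do
  scp ~/.ssh/authorized_keys2 $node:~/.ssh/
  ssh $node "chmod 600 ~/.ssh/authorized_keys2"
done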

and make a test connection from each node to one another (and to itself, preferably) to make ~/.ssh/known_hosts, 

ssh node1
ssh node2

 

 

Configuration 

On one node only, populate the configuration onto all the nodes (the commented commands below help verify that X forwarding works for the GUI), 

#rpm -q xorg-x11-xauth
#xclock
ocfs2console

then 

Cluster > Configure Nodes
Node configuration > Add > node1 / ... / 7777
Node configuration > Add > node2 / ... / 7777
...
Cluster > Propagate Configuration
File > Quit

check the cluster configuration (propagated on all nodes), 

cat /etc/ocfs2/cluster.conf
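
which should look something like this (a two-node sketch with hypothetical addresses; ocfs2 is the default cluster name),

node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = node2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2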

 

On each node (already done on the one you configured the cluster on in the first place, but applying it again doesn't hurt), enable OCFS2, 

/etc/init.d/o2cb enable
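
Note. to check that the cluster stack actually came online, the init script also has a status subcommand,

/etc/init.d/o2cb status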

Note. you are then able to stop and start it. Also make sure it's started at boot time, 

#/etc/init.d/o2cb stop
#/etc/init.d/o2cb start
#chkconfig o2cb on
chkconfig --list | grep o2cb

Note. there is also the ocfs2 init script, which takes care of mounting the OCFS2 volumes listed in fstab at boot, 

#/etc/init.d/ocfs2 stop
#/etc/init.d/ocfs2 start
#chkconfig ocfs2 on
chkconfig --list | grep ocfs

 

 

Make an OCFS2 filesystem 

Prepare the filesystem, 

mkdir -p /mnt/ocfs
#on only one node : mkfs.ocfs2 /dev/dm-0
mount -t ocfs2 -o datavolume /dev/dm-0 /mnt/ocfs

note. for convenience we are not using any partition or LVM here. If you want LVM, use the CLVM part of the GFS2 guide on this website. 

note. example options for mkfs: -b 4K (block size), -C 32K (cluster size), -N 4 (number of node slots) 

note. add -L to the mkfs and mount commands if you want to use labels instead of device paths 
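
e.g. putting the notes above together (mkfs on one node only, as above; hypothetical label name, and the label-based mount assumes blkid can resolve it),

mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsdata /dev/dm-0
mount -t ocfs2 -o datavolume -L ocfsdata /mnt/ocfs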

check among the nodes, 

cd /mnt/ocfs
#on only one node : echo ok > check
cat check

 

Automate mounts at startup, 

umount /mnt/ocfs
vi /etc/fstab

add the following line (_netdev makes sure the mount waits until the network and cluster stack are up at boot), 

/dev/dm-0 /mnt/ocfs ocfs2 _netdev,datavolume,nointr 0 0

check, 

mount -a
df -h

 

 

Misc 

Note. in case you need to fsck, 

fsck.ocfs2 -y /dev/dm-0

 

Note. to disable OCFS2 completely, unmount the volumes, remove the entries from fstab and disable the daemons on all nodes, 

service o2cb stop
service ocfs2 stop
chkconfig o2cb off
chkconfig ocfs2 off

 

 

Troubleshooting 

Note. if you get this error message when trying to start o2cb, 

ls: /config: No such file or directory

it's simply because you didn't enable it first (service o2cb enable). 

 

Note. if you get this error when running mkfs.ocfs2, 

/dev/dm-0 is apparently in use by the system; will not make a ocfs2 volume here!

it's probably because a mount point or an LVM physical volume is still using the device.