This document is obsolete -- see http://doc.nethence.com/ instead

Veritas Storage Foundation Cluster File System v4.1 installation 

 

 

Introduction 

A robust and well-respected shared-disk filesystem. Hopefully Symantec won't mess with its code. 

 

 

Download 

A 4.1 release is still available from Symantec's trialware page (http://www.symantec.com/business/storage-foundation-cluster-file-system): 

- Storage Foundation and HA Solutions, RHEL4 for 32-bit, v4.1 

==> Q17083H.sf_ha.cd1.4.1.00.10.rhel4_i686.tar.gz (425 MB) 

Note. RHEL4 i686 only 

as well as its fourth Maintenance Pack, 

- Storage Foundation and HA Solutions, RHEL4, v4.1MP4 - MP only - 32bit and Itanium 

==> sf_ha.4.1.40.00.rhel4.tar.gz (578 MB) 

Note. RHEL4 i686, ia64 and x86_64 

Note. contains only MP4, not the main installer 

 

Besides, you might also be interested in these: 

- Storage Foundation and HA Solutions 5.1 for Linux Red Hat 

==> VRTS_SF_HA_Solutions_5.1_RHEL.tar.gz (263 MB) 

Note. RHEL5 x86_64 only 

- Storage Foundation and HA Solutions RHEL4/5, v5.0MP4, Part 1 of 2 

Storage Foundation and HA Solutions RHEL4/5, v5.0MP4, Part 2 of 2 

==> VRTS_SF_HA_Solutions_5.0_MP4_RHEL.tar.gzaa (1.4 GB) 

==> VRTS_SF_HA_Solutions_5.0_MP4_RHEL.tar.gzab (1.3 GB) 

Note. RHEL4 i686, RHEL4 x86_64, RHEL5 i686, RHEL5 x86_64, RHEL5 ppc64 

Note. contains the main installer + MP4 

Note. you'll have to concatenate the split archive, 

cat \
VRTS_SF_HA_Solutions_5.0_MP4_RHEL.tar.gzaa \
VRTS_SF_HA_Solutions_5.0_MP4_RHEL.tar.gzab \
> VRTS_SF_HA_Solutions_5.0_MP4_RHEL.tar.gz
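If you want to rehearse that reassembly on harmless data first, the same cat trick works on any split gzip archive. Everything below uses a scratch directory and dummy file names, not the real Veritas parts:

```shell
# rehearse split-archive reassembly in a scratch directory
workdir=$(mktemp -d)

# build a small dummy tar.gz standing in for the real distribution archive
mkdir "$workdir/payload"
echo "dummy contents" > "$workdir/payload/file.txt"
tar czf "$workdir/original.tar.gz" -C "$workdir" payload

# split it into tiny chunks named like the Veritas parts (...aa, ...ab)
split -b 64 "$workdir/original.tar.gz" "$workdir/original.tar.gz"

# reassemble by concatenating the chunks in order (the shell sorts the glob)
cat "$workdir"/original.tar.gz?? > "$workdir/reassembled.tar.gz"

# the reassembled archive must be byte-identical to the original
cmp "$workdir/original.tar.gz" "$workdir/reassembled.tar.gz" && echo "reassembly OK"
```

The glob expands in lexicographic order, which is exactly the order split wrote the chunks in.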

 

And this one, 

- VCS Management Console, Linux, v5.5.1 (374 MB) 

 

 

Prerequisites 

Generally : 

- shared disks among the nodes; a SAN seems to be mandatory, as software iSCSI doesn't work (more about that below) 

- two or more NICs. At least one of them will be used for the heartbeat 

note. only configure your admin interface; the heartbeat interface gets configured by the VRTS cluster 

note. put the heartbeat interface(s) into a dedicated VLAN 

- at least two nodes 

 

In this guide we're proceeding with "Storage Foundation and HA Solutions, RHEL4 for 32-bit, v4.1". The prerequisites are therefore pretty strict: 

- RHEL4u1 x32 

- 2.6.9-11.EL kernel for x32 

- 2.6.9-11.ELsmp kernel for Xeon x32/x64 

- 2.6.9-11.ELhugemem kernel for Opteron x32/x64 

note. athlon/duron/phenom aren't supported 

note. seems to work even with CentOS/RHEL4u7, hence with another kernel version 

- memory : 512MB 

- disk : 155 to 525MB per node depending on SF products and optional packages 

 

Also, make sure you've configured : 

- hostname resolution among the nodes; fix /etc/hosts on every node, e.g., 

127.0.0.1       localhost.localdomain   localhost
192.168.0.254   gw
192.168.0.245   vrts1           vrts1.example.net
192.168.0.246   vrts2           vrts2.example.net
192.168.0.252   iscsi           iscsi.example.net

- SSH without a password among the nodes 
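Passwordless SSH usually means generating a passphrase-less key and pushing it to the other nodes with ssh-copy-id. The key generation below is real but targets a scratch path; the node names and root user are assumptions matching this guide:

```shell
# generate a passphrase-less RSA key (scratch path for illustration;
# use ~/.ssh/id_rsa on a real node)
keyfile=$(mktemp -d)/id_rsa
ssh-keygen -t rsa -N '' -q -f "$keyfile"

# then push the public key to each of the other nodes, e.g.:
#   ssh-copy-id -i "$keyfile.pub" root@vrts2
# and verify the login no longer prompts for a password:
#   ssh root@vrts2 true && echo "passwordless SSH OK"

ls -l "$keyfile" "$keyfile.pub"
```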

 

Note. if you're using CentOS or something, fix redhat-release on all the nodes, 

cd /etc
mv redhat-release redhat-release.dist
vi redhat-release

like, 

Red Hat Enterprise Linux AS release 4 (Nahant Update 1)

otherwise some RPMs won't get installed, 

Cannot detect host linux distribution. Aborting.
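A quick grep can confirm the file now carries a string the installer accepts. The snippet below runs against a scratch copy, and the accepted-string pattern is an assumption based on the RHEL4 release file format; point release_file at /etc/redhat-release on a real node:

```shell
# check that redhat-release looks like a genuine RHEL4 release file
# (scratch copy for illustration; use /etc/redhat-release for real)
release_file=$(mktemp)
echo "Red Hat Enterprise Linux AS release 4 (Nahant Update 1)" > "$release_file"

if grep -q '^Red Hat Enterprise Linux AS release 4' "$release_file"; then
    echo "redhat-release OK for the installer"
else
    echo "fix $release_file or the installer will abort" >&2
fi
```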

 

Note. the i686 version wants an Intel, Xeon or Opteron processor; make sure you're not running on a regular AMD desktop CPU, 

grep ^vendor /proc/cpuinfo

otherwise the VRTSvxvm-common-4.1.00.10-GA_RHEL4 RPM won't install and you'll get, 

This package is not built for athlon processors. Exiting.
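That check can be scripted. The model-name heuristic below is only an assumption inferred from the error message above, demonstrated on two sample cpuinfo files rather than the live /proc/cpuinfo:

```shell
# flag cpuinfo files that mention CPU models the i686 RPMs reject
# (heuristic assumed from the "not built for athlon" error above)
check_cpu() {
    # $1: path to a cpuinfo-style file
    if grep -qi 'athlon\|duron' "$1"; then
        echo "unsupported"
    else
        echo "ok"
    fi
}

# sample inputs standing in for /proc/cpuinfo on two machines
xeon=$(mktemp);   printf 'vendor_id : GenuineIntel\nmodel name : Intel(R) Xeon(TM)\n' > "$xeon"
athlon=$(mktemp); printf 'vendor_id : AuthenticAMD\nmodel name : AMD Athlon(tm) XP\n' > "$athlon"

check_cpu "$xeon"      # ok
check_cpu "$athlon"    # unsupported
```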

 

 

Installation 

Extract the archive and proceed, 

mkdir -p 4.1.00.10
tar xzf Q17083H.sf_ha.cd1.4.1.00.10.rhel4_i686.tar.gz -C 4.1.00.10
cd 4.1.00.10/rhel4_i686
./installer

choose, 

P) Perform a Preinstallation Check
4)  VERITAS Storage Foundation Cluster File System
system names : vrts1 vrts2
2)  Storage Foundation Cluster File System HA

run the installer again, 

./installer

and choose e.g., 

I) Install/Upgrade a Product
4)  VERITAS Storage Foundation Cluster File System
system names : vrts1 vrts2
SFCFS license key for vrts1: IZZE-3NGO-FUWU-R4KB-OILG-FGOP-8PP
more ? n
SFCFS license key for vrts2: IZZE-3NGO-FUWU-R4KB-OILG-FGOP-8PP
more ? n
1)  Install all of the optional rpms

note. based on this example permanent license key, you can also find the product-specific keys by looking at the install log 

and proceed, 

Will you be configuring I/O fencing after SFCFS install ? y
Configure SFCFS now ? n
Install simultaneously ? y

note. we'll configure SFCFS later 

note. if there are RPM install errors, they'll be identical on all nodes anyway, so just proceed simultaneously. 

note. if some package fails to install, try it manually to check again for errors, 

cd ~/4.1.00.10/rhel4_i686
rpm -Uvh --nodeps storage_foundation_cluster_file_system/rpms/...

note. you'll find some logs in there, 

/opt/VRTS/install/logs
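A simple grep over that directory spots failed packages quickly. The log file and its contents below are made up for illustration; on a real node, point logdir at /opt/VRTS/install/logs:

```shell
# scan installer logs for errors (scratch log for illustration;
# use /opt/VRTS/install/logs on a real node)
logdir=$(mktemp -d)
cat > "$logdir/installsfcfs.log" <<'EOF'
Installing VRTSvxfs ... Done
Installing VRTSvxvm ... Failed
EOF

grep -riE 'error|fail' "$logdir" || echo "no errors logged"
```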

note. if someday you need to automate all this, 

#./installvcs -responsefile

 

Configure SFCFS, 

cd /opt/VRTS/install
./installsfcfs -configure

then, 

stop SFCFS processes ? y
cluster name : e.g. sfcfscl
cluster id : e.g. 5000
private heartbeat link : eth1
second private heartbeat link ? n
low priority heartbeat link : eth0 (same as admin network)
same NICs for private heartbeat links ? y
Cluster Volume Manager cluster reconfiguration timeout (sec): 200
enclosure-based naming scheme ? n (for now)

note. see the system logs on the nodes if VCS doesn't start, 

tail -500 -F /var/log/messages

once everything starts, 

set up the default disk group ? y
one disk group name for all ? y
default disk group : e.g. group0

 

 

Post-installation 

On all the nodes, add some Veritas binaries to your path, 

cat >> ~/.bashrc <<EOF9
PATH=/opt/VRTS/bin:/opt/VRTSvcs/bin:/opt/VRTSob/bin:\$PATH
export PATH
alias ha='hastatus -summary'
alias vx='vxdctl -c mode'
alias cfs='cfscluster status'
EOF9
source ~/.bashrc

 

Check the VRTS init script is in place, 

ls -l /etc/init.d/vxvm-recover

Note. hot-relocation is enabled by default 

 

Check that the vx modules, especially vxfs, are loaded, 

lsmod | grep vx

 

Check LLT (Low Latency Transport) configuration, 

#cat /etc/llthosts
#cat /etc/llttab
lltstat -nvv | more
lltstat -p

 

Check GAB (Group membership and Atomic Broadcast) configuration, 

#cat /etc/gabtab
gabconfig -a

 

Check the cluster configuration, 

cat /etc/VRTSvcs/conf/config/main.cf

 

Check that shared disk group activation is enabled on all nodes, 

cat /etc/default/vxdg

should give, 

default_activation_mode=shared-write
enable_activation=true

 

 

I/O Fencing 

Note that although the nodes are up, the cluster isn't: 

hastatus -summary

I/O fencing needs to be either configured or disabled. 

 

You may simply disable I/O fencing for VxFS testing convenience, 

echo 'vxfen_mode=disabled' > /etc/vxfenmode
/etc/init.d/vxfen stop
/etc/init.d/vxfen start

Note. otherwise you would have to bring up three small (>=1 MB), SCSI-3 PR (Persistent Reservation) capable LUNs and configure fencing. The fencing LUNs may be checked with, 

#/opt/VRTSvcs/vxfen/bin/vxfentsthdw

 

 

VEA administration 

Download and install the win32 client from the server onto some Windows desktop, 

4.1.00.10/rhel4_i686/windows/VRTSobgui.msi

 

Start the relevant daemon on the server, 

#vxsvcctrl status
vxsvcctrl start

 

You can now connect from your win32 client to VEA. 

 

Note. you may also enable ISP (Intelligent Storage Provisioning), using VRTSalloc. Restart the VEA daemon for the change to take effect. 

 

 

Maintenance Pack 4 

update the damn thing 

Extract and install Maintenance Pack 4 for SFCFS 4.1, 

mkdir 4.1.40.00
tar xzf sf_ha.4.1.40.00.rhel4.tar.gz -C 4.1.40.00
cd 4.1.40.00/rhel4_i686
./installmp

The interactive installer is self-explanatory; just follow the steps. When the update has finished, either restart all installed SF product services or simply reboot the nodes. 

 

 

References 

http://sfdoccentral.symantec.com/ 

http://sfdoccentral.symantec.com/Storage_Foundation_HA_41_Linux.html