September 16, 2009

GFS Cluster on VMware vSphere (RHEL | CentOS 5)

First of all, to allow the VMs to access shared block devices, a new SCSI controller must be created on each VM. The controller must be configured with physical SCSI bus sharing so that the VMs sharing the disks can be distributed across different ESX hosts. Raw device mappings (RDM) are recommended for the shared disks. Keep in mind that HA, DRS and snapshot features are disabled for VMs that have shared disks.
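As a rough sketch, the relevant .vmx settings for such a controller look like the following (the controller number scsi1, the disk slot and the file name are assumptions for illustration; these settings are normally applied through the vSphere Client rather than by editing the file):

```
# Shared SCSI controller (scsi1 is an assumed controller number)
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"

# RDM disk attached to the shared controller (file name is hypothetical)
scsi1:0.present = "TRUE"
scsi1:0.fileName = "shared-disk-rdm.vmdk"
scsi1:0.deviceType = "scsi-hardDisk"
```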


Install the clustering packages on the VMs
# yum groupinstall Clustering Cluster\ Storage -y
Install vmware-ng for fencing
# tar xvzf vmware-ng.tar.gz
# cd vmware-ng/agent-itself/
# cp -r * /
# vmware-config-tools.pl
Install VMware-VIPerl
# tar xvzf VMware-VIPerl-<version>.tar.gz
# cd vmware-viperl-distrib/
# yum install -y openssl-devel
# ./vmware-install.pl
Create the cluster
# ccs_tool create <cluster_name>
For each vm, run (use a unique node ID for each node; node IDs must not repeat across the cluster)
# ccs_tool addfence -C <vmname>_vmware_ng fence_vmware_ng ipaddr=<vcenter_name> login=<vcenter_user> passwd=<password>
# ccs_tool addnode -C <vmname> -n <node_id> -v 1 -f <vmname>_vmware_ng
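The ccs_tool commands above populate /etc/cluster/cluster.conf. For a hypothetical two-node cluster named gfscluster with VMs node1 and node2, the resulting file would look roughly like this (all names, the vCenter address, the password and the config version are placeholders):

```
<?xml version="1.0"?>
<cluster name="gfscluster" config_version="3">
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1">
      <fence>
        <method name="single">
          <device name="node1_vmware_ng"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2" votes="1">
      <fence>
        <method name="single">
          <device name="node2_vmware_ng"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="node1_vmware_ng" agent="fence_vmware_ng"
                 ipaddr="vcenter.example.com" login="vcuser" passwd="secret"/>
    <fencedevice name="node2_vmware_ng" agent="fence_vmware_ng"
                 ipaddr="vcenter.example.com" login="vcuser" passwd="secret"/>
  </fencedevices>
</cluster>
```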
# service cman start
# service clvmd start
# service gfs start
# service gfs2 start
# chkconfig cman on
# chkconfig clvmd on
# chkconfig gfs on
# chkconfig gfs2 on
Note that cman must be running before clvmd starts, since clvmd depends on the cluster manager.
On the first node run
# lvmconf --enable-cluster
# pvcreate /dev/sd<x> /dev/sd<y>
# vgcreate <vgname> /dev/sd<x> /dev/sd<y>
# lvcreate -n <lvname> -L <size> <vgname>
# gfs_mkfs -j 8 -p lock_dlm -t <cluster_name>:<gfs_vol_name> /dev/<vgname>/<lvname>
The -j option sets the number of journals; it must be at least the number of nodes that will mount the file system.
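The new file system can then be mounted on every node. A minimal sketch, assuming a hypothetical mount point /mnt/gfs (the volume group and logical volume names are the placeholders used above):

```
# mkdir -p /mnt/gfs
# mount -t gfs /dev/<vgname>/<lvname> /mnt/gfs

# /etc/fstab entry so the gfs init script mounts it at boot
/dev/<vgname>/<lvname>  /mnt/gfs  gfs  defaults  0 0
```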
On each node, run
# service ricci start
# chkconfig ricci on
On the management node, run (luci, the Conga web interface, only needs to run on one node; luci_admin init must be run while luci is stopped)
# luci_admin init
# service luci start
# chkconfig luci on
The luci web interface is then available at https://<management_node>:8084.
