Saturday 14 May 2011

Configure a high-availability cluster, using either physical or virtual systems

#Install luci and ricci on cluster nodes
[root@localhost ~]# yum install luci ricci -y
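
On each cluster node, the ricci agent also has to be running before luci can manage it. A minimal sketch, assuming the default service names:

#Enable and start ricci on the cluster nodes
[root@localhost ~]# chkconfig ricci on
[root@localhost ~]# service ricci start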

#Initialize luci
[root@localhost ~]# luci_admin init

[root@localhost ~]# service luci restart

Go to https://localhost:8084 to manage the cluster

#Open the ports required for the cluster
luci - 8084
ricci - 11111
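
For example, if the default iptables firewall is active, the ports can be opened like this (a sketch; adjust to your own firewall policy):

#Allow luci and ricci traffic
[root@localhost ~]# iptables -I INPUT -p tcp --dport 8084 -j ACCEPT
[root@localhost ~]# iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
[root@localhost ~]# service iptables save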

Setting up Xen fencing in RHEL 5

Introduction:
The Xen fence agent is a virtual power fence device. As with power fencing, when a Xen guest cluster node becomes unresponsive, the other nodes in the cluster contact the Xen host through a multicast address to fence the unresponsive node. For added reliability the Xen host must be part of a cluster and use a supported hardware power fence device; for the simplest configuration the Xen host can be in a single-node cluster.

There are two basic parts to the Xen fencing agent: fence_xvm and fence_xvmd. fence_xvm is like other fencing agents: it is called by the cluster when a node needs to be fenced, and it contains the logic to talk to the fence device. fence_xvmd is a daemon process that runs on the Xen host; it accepts fence requests from the nodes, then forcibly reboots the unresponsive node. There are several manual steps that must be completed after the cluster is initially configured to enable the Xen fence agent (a sketch of these follows the prerequisites below). In later updates to RHEL 5 and RHEL 4, GUI tools will be added to ease configuration of the Xen fence agent.

Prerequisites:
* The Xen host must be part of a cluster that is separate from the cluster formed by the Xen guests. The Xen host must be running RHEL 5. A single-node cluster is acceptable; however, this configuration should only be used for testing, as reliability will be nonexistent. All the services started by the cman init script must be running.
* Xen guest cluster: the cluster must be initially configured with the node definitions. You can use system-config-cluster or Conga to configure the cluster. It is recommended that cluster service configuration wait until the fence agent is successfully configured.
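
A rough sketch of those manual steps, assuming a guest cluster node named guest1 and the default key location (check the fence_xvm and fence_xvmd man pages for your release, as the details vary):

#On the Xen host: generate the shared key and copy it to every guest cluster node
[root@localhost ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4096 count=1
[root@localhost ~]# scp /etc/cluster/fence_xvm.key guest1:/etc/cluster/

#fence_xvmd must be running on the Xen host (in RHEL 5 the cman init script starts it when <fence_xvmd/> is present in the host's cluster.conf)
#From a guest cluster node: manually fence a guest to test the setup
[root@localhost ~]# fence_xvm -H guest1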


Tuesday 10 May 2011

Manage software RAID and LVM

#create raid 1
[root@localhost ~]# mdadm --create /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdb2

#create raid 1 with write-intent bitmap
[root@localhost ~]# mdadm --create /dev/md1 -l 1 -n 2 --bitmap=internal /dev/sdb3 /dev/sdb5
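
To confirm the arrays are assembled and syncing, the usual checks are:

#Check array status
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# mdadm --detail /dev/md0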

#Create mdadm.conf
[root@localhost ~]# mdadm --examine --scan > /etc/mdadm.conf
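
mdmonitor (started below) only sends alerts if mdadm.conf says where to send them; a minimal addition, assuming alerts should go to root:

#Add a mail recipient for mdmonitor alerts
[root@localhost ~]# echo "MAILADDR root" >> /etc/mdadm.conf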

#start the monitor service
[root@localhost dev]# service mdmonitor start
#turn the autostart on
[root@localhost dev]# chkconfig mdmonitor on
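
The LVM half of this topic is not shown above; a minimal sketch of layering LVM on the new array (the volume group, logical volume names, and sizes are only examples):

#Use the RAID device as an LVM physical volume
[root@localhost ~]# pvcreate /dev/md0
[root@localhost ~]# vgcreate vg_raid /dev/md0
[root@localhost ~]# lvcreate -L 500M -n lv_data vg_raid
[root@localhost ~]# mkfs.ext3 /dev/vg_raid/lv_data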

Saturday 7 May 2011

Configure iSCSI targets and initiators

TARGET SYSTEM

[root@localhost ~]# yum install scsi-target-utils -y

[root@localhost ~]# chkconfig tgtd on

[root@localhost ~]# service tgtd start

#Create a target
[root@localhost ~]# tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2011-05.com.example:disk1

#Add a disk to the target
[root@localhost ~]# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/sdb1
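
With tgtd, a new target typically accepts no initiators until one is bound to it; a hedged example that opens it to all initiators (restrict this to specific addresses in practice):

#Allow initiators to connect to the target
[root@localhost ~]# tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL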

#Make the config persistent
[root@localhost ~]# tgt-admin --dump > /etc/tgt/targets.conf

#List the targets on local system
[root@localhost ~]# tgt-admin -s

REMOTE SYSTEM (INITIATOR)
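
Before discovery will work, the initiator tools need to be installed and running (assuming they are not already present):

#Install and start the iSCSI initiator
[root@localhost ~]# yum install iscsi-initiator-utils -y
[root@localhost ~]# chkconfig iscsi on
[root@localhost ~]# service iscsi start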

#List targets from remote system
[root@localhost ~]# iscsiadm -m discovery -t sendtargets -p 172.16.101.132

#Login to target
[root@localhost iscsi]# iscsiadm -m node -T iqn.2011-05.com.example:disk1 -p 172.16.101.132:3260,1 -l
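
After the login the new LUN shows up as a local disk. To check it, and to make the login automatic at boot if node.startup is not already set that way (same target and portal as above):

#Verify the new disk and make the login persistent across reboots
[root@localhost iscsi]# fdisk -l
[root@localhost iscsi]# iscsiadm -m node -T iqn.2011-05.com.example:disk1 -p 172.16.101.132:3260,1 --op update -n node.startup -v automatic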