Tuesday 16 August 2011

A failover domain provides service fail-over between a defined subset of the cluster nodes.



Adding a Failover Domain
To add a failover domain, follow the steps in this section. The procedure starts at the
cluster-specific page, which you navigate to from Choose a cluster to administer on the
cluster tab.
1. At the detailed menu for the cluster (below the clusters menu), click Failover Domains. This
displays the existing failover domains with their related services, along with the failover domain
menu items: Add a Failover Domain and Configure a Failover Domain.

2. Click Add a Failover Domain. This displays the Add a Failover Domain page.

3. At the Add a Failover Domain page, specify a failover domain name in the Failover Domain
Name text box.
Note
The name should be descriptive enough to distinguish its purpose relative to other names
used in your cluster.

4. To enable setting failover priority of the members in the failover domain, click the Prioritized
checkbox. With Prioritized checked, you can set the Priority value for each node selected
as a member of the failover domain.

5. To restrict failover to members in this failover domain, click the checkbox next to Restrict failover
to this domain's members. With Restrict failover to this domain's members checked,
services assigned to this failover domain fail over only to nodes in this failover domain.

6. To specify that a node does not fail back in this failover domain, click the checkbox next to Do not
fail back services in this domain. With Do not fail back services in this domain checked, if a
service fails over from a preferred node, the service does not fail back to the original node once it
has recovered.

7. Configure members for this failover domain. Under Failover domain membership, click the
Member checkbox for each node that is to be a member of the failover domain. If Prioritized is
checked, set the priority in the Priority text box for each member of the failover domain.

8. Click Submit. A progress page is displayed, followed by the Failover Domain Form page. That
page shows the newly added failover domain and lists it in the cluster menu to the left, under
Domain. (A sketch of the resulting cluster.conf entry follows the procedure.)
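
Behind the scenes luci writes the domain into /etc/cluster/cluster.conf. A minimal sketch of what the generated entry can look like, assuming a hypothetical domain named example_domain with two member nodes and with Prioritized, Restrict failover and Do not fail back all enabled:

<failoverdomains>
        <failoverdomain name="example_domain" ordered="1" restricted="1" nofailback="1">
                <failoverdomainnode name="node1.example.com" priority="1"/>
                <failoverdomainnode name="node2.example.com" priority="2"/>
        </failoverdomain>
</failoverdomains>

The node names and priorities here are placeholders; luci maintains this section itself, so there is no need to edit the file by hand.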

GFS2 supports file system quotas

To enable quota enforcement, mount the file system with the quota=on option:

mount -o quota=on BlockDevice MountPoint

For example:
mount -o quota=on /dev/vg01/lvol0 /mygfs2
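
To make quota enforcement persistent across reboots, the same option can be set in /etc/fstab; a sketch, assuming the device and mount point from the example above:

/dev/vg01/lvol0  /mygfs2  gfs2  defaults,quota=on  0 0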

Setting Quotas, Hard Limit
gfs2_quota limit -u User -l Size -f MountPoint
gfs2_quota limit -g Group -l Size -f MountPoint
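
For example, to give a hypothetical user bert a hard limit of 1024 MB on the /mygfs2 file system (gfs2_quota takes the size in megabytes by default):

gfs2_quota limit -u bert -l 1024 -f /mygfs2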

Setting Quotas, Warn Limit
gfs2_quota warn -u User -l Size -f MountPoint
gfs2_quota warn -g Group -l Size -f MountPoint
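
Continuing the same hypothetical example, a warn limit a little below the hard limit:

gfs2_quota warn -u bert -l 900 -f /mygfs2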

Synchronizing Quota Information
gfs2_quota sync -f MountPoint

This example synchronizes the quota information from the node it is run on to file system /mygfs2.
gfs2_quota sync -f /mygfs2
This example changes the default time period between regular quota-file updates to one hour (3600 seconds) for file system /mygfs2 on a single node.
gfs2_tool settune /mygfs2 quota_quantum 3600
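
Current limits and usage can be inspected with the list and get actions of gfs2_quota; for example, with the same hypothetical user and mount point as above:

gfs2_quota list -f /mygfs2
gfs2_quota get -u bert -f /mygfs2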


Sunday 14 August 2011

Custom udev for iscsi

Create a rule that creates symlinks /dev/iscsi[1-9] pointing to the corresponding /dev/sda[1-9] partitions.

/etc/udev/rules.d/75-iscsi_sda.rules

# keep the whole rule on a single line
KERNEL=="sda[1-9]", PROGRAM=="/sbin/scsi_id -g -s /block/sda/sda%n", RESULT=="GUID", SYMLINK+="iscsi%n"

Replace GUID with the output of scsi_id -g -s /block/sda.
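
A rough way to apply and verify the rule, assuming a RHEL 5-era udev (on newer releases the equivalent would be udevadm control --reload-rules followed by udevadm trigger):

scsi_id -g -s /block/sda        # note the GUID to put in the RESULT match
start_udev                      # re-read the rules and replay events
ls -l /dev/iscsi*               # the new symlinks should show up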

Thursday 11 August 2011

Configure SNMP to provide cluster monitoring

[root@localhost ~]# yum install cluster-snmp



Edit /etc/snmp/snmpd.conf and add the following lines (on a 64-bit system the module may live under /usr/lib64/cluster-snmp/ instead):

dlmod RedHatCluster /usr/lib/cluster-snmp/libClusterMonitorSnmp.so
view systemview included REDHAT-CLUSTER-MIB:RedHatCluster
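
snmpd only reads snmpd.conf at startup, so restart it (and enable it at boot) before testing:

service snmpd restart
chkconfig snmpd on

The cluster subtree can then be walked to confirm the module is loaded: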
snmpwalk -v 2c -c public node1.example.com REDHAT-CLUSTER-MIB::RedHatCluster