Saturday 24 January 2015

RHEL 7 -- 2 node GFS2

Scenario

  • RHEL Workstation and Hypervisor
  • RHEL Workstation VM on private NAT network
  • Share an LVM volume between both workstations without using a network file server to slow things down.


Prepare “Resilient Storage” Channel

The “Resilient Storage” channel is only available on the RHEL 7 Server base channel. I’m going to clone the “Resilient Storage” channel on my local Satellite server to the RHEL 7 Workstation base channel so my two workstations have access to the necessary packages. I may not have Red Hat support from this point forward ;-)

I then subscribed both RHEL 7 Workstation systems to the cloned rhel-x86_64-server-rs-7 channel.
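
For the record, a system can be pointed at the channel from the command line as well as through the Satellite web UI. This assumes the clone keeps the original channel label; rhn-channel prompts for Satellite credentials if they are not supplied:

# rhn-channel --add --channel=rhel-x86_64-server-rs-7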

Install Software on each Node

On each node in the cluster, install the GFS2 and clustered LVM packages, along with the Red Hat High Availability Add-On software packages and all available fence agents:

GFS2

# yum install lvm2-cluster gfs2-utils

Clustering:

# yum install pcs fence-agents-all

Shared Block Device

Create the block device to be shared between both workstations. In my case this is done on the laptop/hypervisor:

# lvcreate -n gfs2_docs -L 50G laptop500
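
The VM can't use the LV until it sees it as a block device of its own. A minimal sketch with virsh, assuming the VM's libvirt domain is named rhel7desk and that vdb is a free target on it; shareable mode is what allows both the hypervisor and the VM to write to the same disk, and the disk's cache should also be set to none in the domain XML so writes aren't buffered on the host:

# virsh attach-disk rhel7desk /dev/laptop500/gfs2_docs vdb --mode shareable --persistent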

Format the new block device. I am pre-creating three journals, as I'm expecting this to be so successful that I will have a third VM using the same share before too long :-D

# mkfs.gfs2 -p lock_dlm -t laptop:docs -j 3 /dev/laptop500/gfs2_docs
/dev/laptop500/gfs2_docs is a symbolic link to /dev/dm-10
This will destroy any data on /dev/dm-10
Are you sure you want to proceed? [y/n]y
Device:                    /dev/laptop500/gfs2_docs
Block size:                4096
Device size:               50.00 GB (13107200 blocks)
Filesystem size:           50.00 GB (13107198 blocks)
Journals:                  3
Resource groups:           200
Locking protocol:          "lock_dlm"
Lock table:                "laptop:docs"
UUID:                      0fb834e7-0f8c-d4b6-f0fb-193b56b8299a
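
If that third VM ever arrives with a friend, more journals can be added later with gfs2_jadd, which runs against the mounted file system rather than the block device. For example, to add one extra journal once the file system is mounted at /home/drew/docs:

# gfs2_jadd -j 1 /home/drew/docs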

Configure Clustering

Initial Steps for all Nodes

Allow the clustering service to be accessed through the firewall:

# firewall-cmd --permanent --add-service=high-availability
success
# firewall-cmd --add-service=high-availability
success

Set the same password for clustering administration on each node:

# passwd hacluster
Changing password for user hacluster.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

Enable the pcsd daemon:

# systemctl start pcsd.service
# systemctl enable pcsd.service
ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service'
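
pcsd listens on TCP port 2224, so a quick sanity check that it is actually up and reachable:

# ss -tln | grep 2224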

Steps for One Node Only (any node)

Authenticate the hacluster user for each node in the cluster:

# pcs cluster auth reilly.spurrier.net.au rhel7desk.spurrier.net.au
Username: hacluster
Password: 
reilly.spurrier.net.au: Authorized
rhel7desk.spurrier.net.au: Authorized

Create the Cluster:

# pcs cluster setup --start --name laptop reilly.spurrier.net.au rhel7desk.spurrier.net.au
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
reilly.spurrier.net.au: Succeeded
reilly.spurrier.net.au: Starting Cluster...
rhel7desk.spurrier.net.au: Succeeded
rhel7desk.spurrier.net.au: Starting Cluster...

Enable Cluster services on all nodes at boot time:

# pcs cluster enable --all

Check the Cluster’s status:

# pcs cluster status
Cluster Status:
 Last updated: Fri Feb 13 23:43:11 2015
 Last change: Fri Feb 13 23:40:21 2015 via crmd on reilly.spurrier.net.au
 Stack: corosync
 Current DC: reilly.spurrier.net.au (1) - partition WITHOUT quorum
 Version: 1.1.10-32.el7_0.1-368c726
 2 Nodes configured
 0 Resources configured

PCSD Status:
  reilly.spurrier.net.au: Online
  rhel7desk.spurrier.net.au: Online
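
corosync has its own view of cluster membership, which is worth comparing against what pcs reports. On either node, corosync-cfgtool prints the local ring status and should report no faults:

# corosync-cfgtool -s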

Cluster Fencing Configuration

Maybe this is where the whole project comes to an end. Both the GFS2 manual and the High Availability Add-On administration manual warn that fencing must be enabled. As the cluster is my laptop and a desktop VM running on it, I don't want them fencing each other!

I will ignore Fencing Devices for now and see how bad things get.
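
In practice, ignoring fencing means telling Pacemaker not to expect a fence device at all, otherwise it will refuse to start resources. Disabling stonith like this is unsupported and risks file system corruption on a split brain, but that is the trade-off I'm making on a single laptop:

# pcs property set stonith-enabled=false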

Configure Cluster for GFS2

Steps for One Node Only (any node)

Set the global Pacemaker property no-quorum-policy to freeze:

# pcs property set no-quorum-policy=freeze
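
pcs can confirm the property took effect; pcs property list shows every property that differs from its default:

# pcs property list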

Set up a dlm resource. This is a required dependency for clvmd and GFS2. I have changed the on-fail attribute to restart instead of the recommended value of fence.

# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=restart clone interleave=true ordered=true

Steps for All Nodes

Enable clustered locking:

# /sbin/lvmconf --enable-cluster
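
This just rewrites /etc/lvm/lvm.conf, so the result can be checked directly; a locking_type of 3 means LVM is using clustered locking through clvmd:

# grep locking_type /etc/lvm/lvm.conf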

Steps for One Node Only (any node)

Set up clvmd as a cluster resource. I have changed the on-fail attribute to restart instead of the recommended value of fence.

# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=restart clone interleave=true ordered=true

Set up the clvmd and dlm dependency and start-up order. clvmd must start after dlm and must run on the same node as dlm:

# pcs constraint order start dlm-clone then clvmd-clone
Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)
# pcs constraint colocation add clvmd-clone with dlm-clone

Configure a clusterfs resource. I have changed the on-fail attribute to restart instead of the recommended value of fence.

# pcs resource create clusterfs Filesystem device="/dev/laptop500/gfs2_docs" directory="/home/drew/docs" fstype="gfs2" "options=noatime" op monitor interval=10s on-fail=restart clone interleave=true

Verify that GFS2 is mounted as expected:

# mount | grep docs
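
If all is well the output should look something like this; the exact device-mapper name and option list will vary:

/dev/mapper/laptop500-gfs2_docs on /home/drew/docs type gfs2 (rw,noatime)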

Set up the GFS2 and clvmd dependency and start-up order. clusterfs must start after clvmd and must run on the same node as clvmd:

# pcs constraint order start clvmd-clone then clusterfs-clone
# pcs constraint colocation add clusterfs-clone with clvmd-clone
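
All of the ordering and colocation rules can be reviewed in one place:

# pcs constraint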

Mounting GFS2 File System

This step is not required, as the clusterfs resource already mounts the file system on every node, but for reference it can also be mounted manually:

# mount -t gfs2 -o noatime /dev/laptop500/gfs2_docs /mnt/docs

Written with StackEdit.
