Saturday 16 July 2016

GlusterFS 3.8 on Fedora 24

References:
- http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/

Environment:
Originally I ran all the GlusterFS servers as qemu/kvm VMs, one on each of the small physical computers. The external USB3 docks used to attach multiple bare SATA disks proved unreliable, and the old Intel NUC had lousy disk performance over USB3: no heavy interrupts, just high I/O wait. I had a $42 (auction) HP Desktop (SFF) with 4 internal SATA ports sitting there, and since it only has 2GB RAM I installed the GlusterFS server directly on the physical machine rather than in a VM.

  • 1 subnet, all servers bridged
  • 3 physical servers:
    • Aldi Laptop (8GB RAM, 3.5TB disk – 2 internal SATA, 2 external USB3, Intel(R) Core(TM) i3-2310M CPU @ 2.10GHz)
    • JW mini-ITX (4GB RAM, 3.2TB disk – 2 internal SATA, AMD Athlon(tm) II X3 400e)
    • gfs2: HP Desktop (2GB RAM, 2.6TB disk – 4 internal SATA, Intel(R) Core(TM)2 Quad CPU Q9505 @ 2.83GHz)
  • 2 virtual servers:
    • gfs1: (2GB RAM, 10GB(vda), 3TB(vdb), 2 virtual CPUs)
    • gfs3: (2GB RAM, 10GB(vda), 3TB(vdb), 2 virtual CPUs)

With hindsight, and after successfully rebuilding the gfs1 and gfs2 nodes a few times over the weekend due to XFS filesystems damaged by disappearing USB disks, I will advise:

  1. Be very fussy about your USB3-to-SATA docks!
  2. Go physical GlusterFS servers all the way, or go home!
  3. GlusterFS is awesome! I think the stress of rebuilding one node while maintaining a test load broke the second set of USB3 disks, but everything recovered! (Thank heavens the disks did not flip/flop; it was a clean death.)

I will rebuild my gfs1 and gfs3 servers as physical machines in good time, probably when I buy bigger disks, as 10 disks for ~9TB (raw) is messy and noisy. I am happy that the cluster has been running for about 48 hours without issues, and under load for much of that time.

Finishing Initial Operating System Setup

Configure the network to use Static IPs:

dnf install NetworkManager-tui
nmtui
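nmtui walks through this interactively; the same result can be scripted with nmcli. A quick sketch, assuming a connection named eth0 and illustrative addresses:

nmcli con mod eth0 ipv4.method manual ipv4.addresses 192.168.1.21/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli con up eth0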

Register with the local IPA service:

dnf install freeipa-client
ipa-client-install --enable-dns-updates --mkhomedir --ssh-trust-dns

As much as it pains me, the firewall will simply be disabled so that all traffic is allowed. I can revisit this later:

systemctl disable firewalld
systemctl stop firewalld
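For when I do revisit it, a minimal sketch of opening only the GlusterFS ports instead of disabling the firewall; the brick port range is an assumption based on the defaults (24007-24008 for management, one port per brick starting at 49152):

firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload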

Update and restart the server:

dnf update
reboot

Install GlusterFS

dnf install glusterfs-server
systemctl enable glusterd
systemctl start glusterd
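
Check that the daemon is running and confirm the installed version:

systemctl status glusterd
gluster --version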

Create the GlusterFS Cluster

From gfs1:

gluster peer probe gfs2
gluster peer probe gfs3

From gfs2 or gfs3:

gluster peer probe gfs1

Check that the nodes can see each other:

gluster peer status

Prepare Each Brick on Each GlusterFS Node

As my GlusterFS nodes were originally all virtual machines for maximum flexibility, I have given each node a second virtual disk which is used directly for the brick's filesystem: no partitions or volumes. Back at the hypervisor, the brick's device is an LVM2 logical volume created with "--type striped -i 4 -I 128", so there is no redundancy within each brick. Striping across multiple disks multiplies I/O performance by the number of disks; don't just make a "linear" volume because it is easy and the default. A sketch of the hypervisor-side command follows.
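
A minimal sketch of creating that striped logical volume on the hypervisor, assuming a volume group named vg_bricks built from four physical disks (the names and size are illustrative):

lvcreate --type striped -i 4 -I 128 -L 3T -n gfs1_brick vg_bricks

Then, on each node, create and mount the brick filesystem: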

mkfs.xfs -L gfs1_br1 /dev/vdb
mkdir -p /data/glusterfs/gfsVol01/brick01
echo '/dev/vdb  /data/glusterfs/gfsVol01/brick01    xfs noatime 0 0' >> /etc/fstab
mount /data/glusterfs/gfsVol01/brick01

Configure a Volume

STOP – all nodes and all bricks must be configured before proceeding to create a GlusterFS Volume.

gluster peer status
gluster volume create gv0 replica 3 gfs1:/data/glusterfs/gfsVol01/brick01/brick gfs2:/data/glusterfs/gfsVol01/brick01/brick gfs3:/data/glusterfs/gfsVol01/brick01/brick
gluster volume start gv0
gluster volume info
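
Once the volume is started, it is worth confirming that every brick process is online:

gluster volume status gv0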

Test the Volume

For the test, just mount the volume on one of the nodes and start copying files into it:

mount -t glusterfs gfs1:/gv0 /mnt
cp -av /var /mnt

Check that the files are being replicated to the brick on each of the other nodes:

find /data/glusterfs/gfsVol01/brick01/brick -ls
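
Since gv0 is a 3-way replica, each node's brick should end up holding the same files. A quick sanity check is to compare file counts across the nodes (the count includes GlusterFS's internal .glusterfs metadata, which is also identical on healthy replicas):

find /data/glusterfs/gfsVol01/brick01/brick -type f | wc -l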

Add a Client

This is the same as mounting the volume for testing on one of the server nodes:

mkdir -p /mnt/gv0
mount -t glusterfs gfs1:/gv0 /mnt/gv0
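
To make the client mount survive a reboot, and to let the client fetch the volume layout from another node when gfs1 is down, an fstab entry along these lines should work (a sketch; backup-volfile-servers is the standard FUSE mount option for fallback servers in 3.8):

echo 'gfs1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=gfs2:gfs3  0 0' >> /etc/fstab
mount /mnt/gv0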

Written and Published with StackEdit.
