Wednesday, 27 December 2017

Configure Bash Prompt for Git

The git-prompt.sh script is shipped with the git package. Add the following lines to your ~/.bashrc file:


source /usr/share/git-core/contrib/completion/git-prompt.sh
export GIT_PS1_SHOWDIRTYSTATE=true
export GIT_PS1_SHOWUNTRACKEDFILES=true
export PS1='[\u@\h \W$(declare -F __git_ps1 &>/dev/null && __git_ps1 " (%s)")]\$ '

For a fancy coloured prompt, try this:

export PS1='\[\033[0;32m\]✔\[\033[0;0m\] \[\033[0;33m\]\w\[\033[0;0m\] [\[\033[0;35m\]${GIT_BRANCH}\[\033[0;0m\]|\[\033[0;34m\]✚ 10\[\033[0;0m\]\[\033[0;36m\]…26\[\033[0;0m\]\[\033[0;0m\]] \n\[\033[0;37m\]$(date +%H:%M)\[\033[0;0m\] $ '
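
The fancy prompt above references ${GIT_BRANCH}, which git-prompt.sh does not set on its own. A minimal sketch for populating it before each prompt, assuming git-prompt.sh is sourced as above:

export PROMPT_COMMAND='GIT_BRANCH=$(declare -F __git_ps1 &>/dev/null && __git_ps1 "%s")'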

Monday, 31 July 2017

Reverse SSH Tunnelling


So you have console access on a protected (NAT-ed) network and that host can reach the whole Internet. You can use a reverse SSH tunnel to get back to a full-featured and fast terminal window instead of the slow graphical console.

In my case I am doing lab exercises for online Linux training in a course-provided lab. The lab has Internet access, which is great, so you can even back up your lab exercises to GitHub, for example. However, working through a graphical desktop for terminal work is not desirable: there is lag, special keys are not mapped, and keystrokes get dropped. I could not SSH into the lab environment directly, so I used a reverse tunnel.


My public SSH server:
  1. only allows access via SSH key. I specifically used an "ed25519" key because the public key is really short and easier to copy out of the console, especially if you have to type it;
  2. operates on a non-standard port.

On the  Network Protected Client

Create a port forward on the loopback interface of the public server. Every user with access to the public server can now connect back to the SSH daemon on the protected client. They still have to authenticate, but in my case "student" is not a good password, so be careful.

ssh [-i <identity_file>] [-p <public_port>] -R <local_port>:localhost:22 <user>@<public_server>
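
For example, from the protected client (the host names, ports and key file below are illustrative assumptions, not values from the lab):

ssh -i ~/.ssh/id_ed25519 -p 2222 -R 2022:localhost:22 tunneluser@public.example.com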

Common Server Accessible by Both Parties

Identify a public server that both your workstation and the network protected client can access via SSH. Enable compression on the inner SSH session, as it is the one that sees the raw text and can therefore achieve maximum compression. There is no point enabling compression on the (outer) reverse tunnel as it only sees the inner, already encrypted SSH session.

ssh [-C] -p <local_port> <protected_user>@localhost
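
Continuing the illustrative example above, from my workstation I would SSH to the public server first and then connect back through the forwarded port (names and ports are assumptions):

ssh -p 2222 tunneluser@public.example.com
ssh -C -p 2022 student@localhost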

Friday, 17 February 2017

dokuwiki on Fedora 25 with Docker



Install Docker

dnf config-manager --add-repo https://docs.docker.com/engine/installation/linux/repo_files/fedora/docker.repo
dnf makecache fast
dnf install docker-engine
systemctl start docker
systemctl enable docker
docker run hello-world

Install dokuwiki

docker search dokuwiki
docker run --name dokuwiki-data --entrypoint /bin/echo istepanov/dokuwiki Data-only container for dokuwiki.
docker run -d -p 8000:80 --name dokuwiki --volumes-from dokuwiki-data istepanov/dokuwiki
docker container list

Auto-start dokuwiki

Create the dokuwiki service file:

cat >/etc/systemd/system/docker-dokuwiki_server.service <<EOT
[Unit]
Description=DokuWiki Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run -p 8000:80 --name dokuwiki --volumes-from dokuwiki-data istepanov/dokuwiki
ExecStop=/usr/bin/docker stop dokuwiki
ExecStopPost=/usr/bin/docker rm -f dokuwiki

[Install]
WantedBy=default.target
EOT

chown root:root /etc/systemd/system/docker-dokuwiki_server.service
chmod 0644 /etc/systemd/system/docker-dokuwiki_server.service
restorecon -v /etc/systemd/system/docker-dokuwiki_server.service

Enable the dokuwiki service:

systemctl daemon-reload
systemctl start docker-dokuwiki_server.service
systemctl enable docker-dokuwiki_server.service

Use It

Browse to: http://localhost:8000/doku.php?id=start
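
To check it quickly from the command line instead (adjust the port if you changed the -p mapping):

curl -I 'http://localhost:8000/doku.php?id=start'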

Backup

Manual backup of the dokuwiki-data container:

docker container exec dokuwiki /bin/tar -cvjf - /var/dokuwiki-storage > /tmp/dokuwiki-data-$(date +%Y%m%d).tar.bz2

Note: only these folders are backed up:
* data/pages/
* data/meta/
* data/media/
* data/media_attic/
* data/media_meta/
* data/attic/
* conf/
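
Restoring such a backup is roughly the reverse; a sketch only (the archive name is a placeholder, and you should test this before relying on it):

docker container exec -i dokuwiki /bin/tar -xvjf - -C / < /tmp/dokuwiki-data-<date>.tar.bz2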

Written with StackEdit.

Thursday, 17 November 2016

rsync via SSH proxy

Tested between Fedora 24 (source) and RHEL 7 (destination).
  • The same username is used at both the proxy host and the destination host.
  • The "nc" syntax for host and port varies with your Linux distribution.
  • Compression is turned off for the intermediate proxy_host leg and turned on for the end-to-end connection with dest_host.
  • To make ssh agent forwarding work, remember to:
    • allow agent forwarding (ForwardAgent) from your ssh client on the source_host (/etc/ssh/ssh_config),
    • allow agent forwarding (AllowAgentForwarding) in sshd on the proxy_host (/etc/ssh/sshd_config) AND restart sshd.
rsync -avP -e 'ssh -o "ProxyCommand ssh <proxy_host> exec nc %h %p 2>/dev/null"' <user>@<dest_host>:<remote_path> <local_path>
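
As a concrete illustration (all host and path names here are made up):

rsync -avP -e 'ssh -o "ProxyCommand ssh bastion.example.com exec nc %h %p 2>/dev/null"' alice@internal.example.com:/var/log/ /tmp/internal-logs/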


If you configure ~/.ssh/config then you can dramatically shorten the above command:
Host <dest_host_nickname>
        User                    <username>
        GSSAPIAuthentication    no
        Hostname                <dest_host as known by the proxy_host>
        Compression             yes
        ForwardAgent            yes
        ProxyCommand ssh <proxy_host> exec nc %h %p


...the same rsync command becomes:
rsync -avP <user>@<dest_host_nickname>:<remote_path> <local_path>


Naturally this means you can also SSH straight to the final destination with the same ~/.ssh/config block:
ssh  <user>@<dest_host_nickname>



Code blocks were created by http://markup.su/highlighter/ and pasted into this post while in HTML mode.

Saturday, 16 July 2016

GlusterFS 3.8 on Fedora 24


References:
- http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/

Environment:
Originally I had all the GlusterFS servers as qemu/kvm VMs on each of the small physical computers. External USB3 docks for attaching multiple bare SATA disks proved to be unreliable, and the old Intel NUC had lousy disk performance over USB3: no heavy interrupts, just high I/O wait. I had a $42 (auction) HP Desktop (SFF) with 4xSATA ports sitting there. Since the HP desktop only has 2GB RAM, I installed the GlusterFS server directly on the physical machine.

  • 1 subnet, all servers bridged
  • 3 physical servers:
    • Aldi Laptop (8GB RAM, 3.5TB disk – 2 internal SATA, 2 external USB3, Intel(R) Core(TM) i3-2310M CPU @ 2.10GHz)
    • JW mini-ITX (4GB RAM, 3.2TB disk – 2 internal SATA, AMD Athlon(tm) II X3 400e)
    • gfs2: HP Desktop (2GB RAM, 2.6TB disk – 4 internal SATA, Intel(R) Core(TM)2 Quad CPU Q9505 @ 2.83GHz)
  • 2 virtual servers
    • gfs1: (2GB RAM, 10GB(vda), 3TB(vdb), 2 virtual CPUs)
    • gfs3: (2GB RAM, 10GB(vda), 3TB(vdb), 2 virtual CPUs)

With hindsight, and after successfully rebuilding the gfs1 and gfs2 nodes a few times over the weekend due to XFS filesystems damaged by USB disks disappearing, I will advise:

  1. Be very fussy about your USB3 to SATA docks!
  2. Go physical GlusterFS servers all the way, or go home!
  3. GlusterFS is awesome! I think the stress of rebuilding one node while maintaining a test load broke the second set of USB3 disks, but everything recovered! (Thank heavens the disks did not flip/flop; it was a clean death.)

I will rebuild my gfs1 and gfs3 servers as physicals in good time, probably when I buy bigger disks, as 10 disks for ~9TB (raw) is messy and noisy. I am happy that it has been running for about 48 hours without issues, under load for much of that time.

Finishing Initial Operating System Setup

Configure the network to use Static IPs:

dnf install NetworkManager-tui
nmtui

Register with the local IPA service:

dnf install freeipa-client
ipa-client-install  --enable-dns-updates --mkhomedir --ssh-trust-dns

As much as it pains me, the firewall will be configured to allow all traffic. I can revisit this later:

systemctl disable firewalld

Update and restart the server:

dnf update
reboot

Install GlusterFS

dnf  install glusterfs-server
systemctl enable glusterd
systemctl start glusterd

Create the GlusterFS Cluster

From gfs1:

gluster peer probe gfs2
gluster peer probe gfs3

From gfs2 or gfs3:

gluster peer probe gfs1

Check that the nodes can see each other:

gluster peer status

Prepare Each Brick on Each GlusterFS Node

As all my GlusterFS nodes are virtual machines for maximum flexibility, I have given each node a second virtual disk which is used directly for the brick's filesystem: no partitions or volumes. Back at the hypervisor the brick's device is an LVM2 logical volume configured with "--type striped -i 4 -I 128", so there is no redundancy within each brick. Striping across multiple disks multiplies I/O performance by the number of disks. Don't just make a "linear" volume because it is easy and the default.
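
On the hypervisor, creating such a striped backing volume might look like this (a sketch only; the volume group name, size and logical volume name are assumptions):

lvcreate --type striped -i 4 -I 128 -L 3T -n gfs1_vdb vg_bricks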

mkfs.xfs -L gfs1_br1 /dev/vdb
mkdir -p /data/glusterfs/gfsVol01/brick01
echo '/dev/vdb  /data/glusterfs/gfsVol01/brick01    xfs noatime 0 0' >> /etc/fstab
mount /data/glusterfs/gfsVol01/brick01

Configure a Volume

STOP – all nodes and all bricks must be configured before proceeding to create a GlusterFS Volume.

gluster peer status
gluster volume create gv0 replica 3 gfs1:/data/glusterfs/gfsVol01/brick01/brick gfs2:/data/glusterfs/gfsVol01/brick01/brick gfs3:/data/glusterfs/gfsVol01/brick01/brick
gluster volume start gv0
gluster volume info

Test the Volume

For the test just mount the volume on one of the nodes and start copying files into it:

mount -t glusterfs gfs1:/gv0 /mnt
cp -av /var /mnt

Check the files are being replicated on each of the other nodes:

find /data/glusterfs/gfsVol01/brick01/brick -ls

Add a Client

Same as mounting the volume for testing on the server:

mount -t glusterfs gfs1:/gv0 /mnt/gv0

Written and Published with StackEdit.

Saturday, 2 July 2016

iSCSI Example on RHEL 7.2


Servers:
- iSCSI Target: akoya.spurrier.net.au – 192.168.1.26
- iSCSI Initiator: nuc1.spurrier.net.au – 192.168.1.27

Install iSCSI Target and Configure the Firewall

On each server:

subscription-manager register
subscription-manager attach --pool=8a85f98153dab2f00153dea83bf25daf
subscription-manager repos --enable rhel-7-server-rpms
yum clean all
yum repolist
yum groupinstall base
yum update
systemctl reboot

yum install firewalld
systemctl start firewalld.service
systemctl status firewalld.service
systemctl enable firewalld.service
firewall-cmd --permanent --add-service iscsi-target
firewall-cmd --reload

iSCSI Target – akoya.spurrier.net.au – 192.168.1.26

Ultimately I plan to try Gluster on this node, but for now I want to practice with iSCSI, so I created the future Gluster brick below and used it as the backend for an iSCSI target instead. Sorry for any confusion.

Clean up “sdb” to use for the iSCSI target’s storage backend. (It had a Fedora install on it.)

pvs
fdisk -l /dev/sda
fdisk -l /dev/sdb
vgs  
lvs  
mount
# unmount any file systems on the disk to be repurposed.
pvs  
vgscan
vgchange -an fedora
pvremove /dev/sdb2 --force --force
# delete all partitions of the sdb disk.
fdisk /dev/sdb
pvcreate /dev/sdb
pvs  
vgcreate akoya_gfs1 /dev/sdb
lvcreate -l 100%FREE -n brick1 akoya_gfs1

We have an unformatted block device at /dev/akoya_gfs1/brick1. Get the iSCSI Initiator Name for the Target's ACL before you begin. (See the first step in the next server's block.) Let's make an iSCSI Target:

ll /dev/akoya_gfs1/
firewall-cmd --permanent --add-service iscsi-target
firewall-cmd --reload
yum install targetcli
systemctl start target
systemctl enable target
targetcli
    cd backstores/block
    create name=brick1 dev=/dev/akoya_gfs1/brick1
    cd ../../iscsi
    delete iqn.2003-01.org.linux-iscsi.akoya.x8664:sn.5bdc844021fa
    create iqn.2016-06.net.spurrier.akoya:gfs1-brick1
    cd iqn.2016-06.net.spurrier.akoya:gfs1-brick1/tpg1/luns
    create /backstores/block/brick1
    cd ../acls
    create iqn.1994-05.com.redhat:ff8456a7e3e0
    exit
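
To double-check the result you can dump the whole target configuration:

targetcli ls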

iSCSI Initiator – nuc1.spurrier.net.au – 192.168.1.27

Check the iSCSI Initiator Name for adding to the ACL on the iSCSI Target (see above, iqn.1994-05.com.redhat:ff8456a7e3e0).

cat /etc/iscsi/initiatorname.iscsi 
    InitiatorName=iqn.1994-05.com.redhat:ff8456a7e3e0

Connect to the iSCSI Target:

yum install iscsi-initiator-utils
iscsiadm -m discovery -t st -p akoya
    192.168.1.26:3260,1 iqn.2016-06.net.spurrier.akoya:gfs1-brick1

iscsiadm -m node -T iqn.2016-06.net.spurrier.akoya:gfs1-brick1 -l

Confirm the iSCSI disk is “sdb”, format and mount:

ll /dev/sd*
fdisk -l /dev/sdb
fdisk -l /dev/sda
mkfs.xfs -L akoya_gfs1 /dev/sdb
mount /dev/sdb /mnt
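
If you want the filesystem mounted at boot, an fstab entry using the label created above and the _netdev option (so mounting waits for the network and the iSCSI login) would look something like this (the mount point is an assumption):

echo 'LABEL=akoya_gfs1  /mnt  xfs  _netdev  0 0' >> /etc/fstab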

Test by copying the root file system (/) to the mounted iSCSI volume:

df -h
mkdir /mnt/backup-nuc1
rsync -avPx / /mnt/backup-nuc1/
sync

(I threw in the "sync" because the root file system was so small (3GB) that the writes were cached in RAM (16GB) and I had not seen any meaningful traffic in iotop. Yes, the transfer rate was at my full 1Gb Ethernet speed.)

Written with StackEdit.

IPA with Replica on RHEL 7.2


Registering Clients to IPA

yum install ipa-client
KRB5_TRACE=/dev/stdout ipa-client-install  --enable-dns-updates --mkhomedir --ssh-trust-dns --force-join
systemctl reboot

Building the IPA Cluster

Servers:
- IPA master: akoya.spurrier.net.au – 192.168.1.26
- IPA replica: nuc1.spurrier.net.au – 192.168.1.27

Install IPA and Configure the Firewall

On each server:

subscription-manager register
subscription-manager attach --pool=8a85f98153dab2f00153dea83bf25daf
subscription-manager repos --enable rhel-7-server-extras-rpms --enable rhel-7-server-rpms
yum clean all
yum repolist
yum groupinstall base
yum update
systemctl reboot
    
yum install firewalld
systemctl start firewalld.service
systemctl status firewalld.service
systemctl enable firewalld.service
firewall-cmd --permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,464/tcp,53/tcp,88/udp,464/udp,53/udp,123/udp}
firewall-cmd --reload

yum install ipa-server bind bind-dyndb-ldap ipa-server-dns

On akoya.spurrier.net.au – 192.168.1.26

hostname
ipa-server-install -r SPURRIER.NET.AU -n spurrier.net.au  --setup-dns --forwarder=192.168.1.1 --mkhomedir --ip-address=192.168.1.26 --ssh-trust-dns
kinit admin
/usr/sbin/ipa-replica-conncheck --replica nuc1.spurrier.net.au
ipa-replica-prepare nuc1.spurrier.net.au --ip-address 192.168.1.27
scp /var/lib/ipa/replica-info-nuc1.spurrier.net.au.gpg root@nuc1:

On nuc1.spurrier.net.au – 192.168.1.27

hostname
ipa-replica-conncheck --master akoya.spurrier.net.au
ipa-replica-install --mkhomedir --ip-address=192.168.1.27 --ssh-trust-dns --setup-dns --forwarder=192.168.1.1 /root/replica-info-nuc1.spurrier.net.au.gpg
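
Once the replica install finishes, a quick sanity check from either server might look like this (a sketch; run after obtaining a Kerberos ticket):

kinit admin
ipa-replica-manage list
ipa host-find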