Saturday, 16 July 2016

GlusterFS 3.8 on Fedora 24

References:
- http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/

Environment:
Originally I had all the GlusterFS servers running as qemu/kvm VMs on each of the small physical computers. External USB3 docks for attaching multiple bare SATA disks proved to be unreliable, and the old Intel NUC had lousy disk performance over USB3 (no heavy interrupts, just high I/O wait). I had a $42 (auction) HP Desktop (SFF) with 4 SATA ports sitting there. Since the HP desktop only has 2GB RAM, I installed the GlusterFS server directly on the physical machine.

  • 1 subnet, all servers bridged
  • 3 physical servers:
    • Aldi Laptop (8GB RAM, 3.5TB disk – 2 internal SATA, 2 external USB3, Intel(R) Core(TM) i3-2310M CPU @ 2.10GHz)
    • JW mini-ITX (4GB RAM, 3.2TB disk – 2 internal SATA, AMD Athlon(tm) II X3 400e)
    • gfs2: HP Desktop (2GB RAM, 2.6TB disk – 4 internal SATA, Intel(R) Core(TM)2 Quad CPU Q9505 @ 2.83GHz)
  • 2 virtual servers
    • gfs1: (2GB RAM, 10GB(vda), 3TB(vdb), 2 virtual CPUs)
    • gfs3: (2GB RAM, 10GB(vda), 3TB(vdb), 2 virtual CPUs)

With hindsight, and after successfully rebuilding the gfs1 and gfs2 nodes a few times over the weekend due to XFS filesystems damaged by disappearing USB disks, I will advise:

  1. Be very fussy about your USB3-to-SATA docks!
  2. Go physical GlusterFS servers all the way, or go home!
  3. GlusterFS is awesome! I think the stress of rebuilding one node while maintaining a test load broke the second set of USB3 disks, but everything recovered! (Thank heavens the disks did not flip/flop; it was a clean death.)

I will rebuild my gfs1 and gfs3 servers as physicals in good time. Probably when I buy bigger disks as 10 disks for ~9TB (raw) is messy and noisy. I am happy that it has been running about 48 hours without issues and under load much of that time.

Finishing Initial Operating System Setup

Configure the network to use Static IPs:

dnf install NetworkManager-tui
nmtui

Register with the local IPA service:

dnf install freeipa-client
ipa-client-install  --enable-dns-updates --mkhomedir --ssh-trust-dns

As much as it pains me, the firewall will be configured to allow all traffic. I can revisit this later:

systemctl stop firewalld
systemctl disable firewalld
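If you would rather keep firewalld running, a rough sketch of the rules GlusterFS needs is below. The port numbers are the standard GlusterFS defaults (24007 for glusterd management, 24008 for RDMA, and one port per brick from 49152 up); adjust the brick range to the number of bricks per node.

```shell
# Open the standard GlusterFS ports instead of disabling firewalld.
firewall-cmd --permanent --add-port=24007-24008/tcp
# One port per brick starting at 49152; this range covers a few bricks.
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload
```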

Update and restart the server:

dnf update
reboot

Install GlusterFS

dnf install glusterfs-server
systemctl enable glusterd
systemctl start glusterd

Create the GlusterFS Cluster

From gfs1:

gluster peer probe gfs2
gluster peer probe gfs3

From gfs2 or gfs3:

gluster peer probe gfs1

Check that the nodes can see each other:

gluster peer status

Prepare Each Brick on Each GlusterFS Node

As my GlusterFS nodes are virtual machines for maximum flexibility, I have given each node a second virtual disk which is used directly for the brick’s filesystem: no partitions or volumes. Back at the hypervisor the brick’s device is an LVM2 logical volume configured with “--type striped -i 4 -I 128”, so there is no redundancy within each brick. Striping across multiple disks multiplies I/O performance by the number of disks. Don’t just make a “linear” volume because it is easy and the default.
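On the hypervisor, the striped backing volume can be sketched roughly as below. The disk and volume-group names here are hypothetical; -i 4 stripes across four physical volumes and -I 128 sets a 128KiB stripe size, matching the options quoted above.

```shell
# Hypothetical: four whole disks pooled into one VG on the hypervisor.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_bricks /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Striped LV: no redundancy, but roughly 4x the sequential I/O of one disk.
lvcreate --type striped -i 4 -I 128 -l 100%FREE -n gfs1_vdb vg_bricks
```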

mkfs.xfs -L gfs1_br1 /dev/vdb
mkdir -p /data/glusterfs/gfsVol01/brick01
echo '/dev/vdb  /data/glusterfs/gfsVol01/brick01    xfs noatime 0 0' >> /etc/fstab
mount /data/glusterfs/gfsVol01/brick01

Configure a Volume

STOP – all nodes and all bricks must be configured before proceeding to create a GlusterFS Volume.

gluster peer status
gluster volume create gv0 replica 3 gfs1:/data/glusterfs/gfsVol01/brick01/brick gfs2:/data/glusterfs/gfsVol01/brick01/brick gfs3:/data/glusterfs/gfsVol01/brick01/brick
gluster volume start gv0
gluster volume info

Test the Volume

For the test just mount the volume on one of the nodes and start copying files into it:

mount -t glusterfs gfs1:/gv0 /mnt
cp -av /var /mnt

Check the files are being replicated on each of the other nodes:

find /data/glusterfs/gfsVol01/brick01/brick -ls
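Rather than eyeballing the listings, the bricks can be compared with a quick file count across the nodes (hostnames from this setup; assumes passwordless ssh between the nodes):

```shell
# The counts should match on all three replicas once healing is idle.
for node in gfs1 gfs2 gfs3; do
  printf '%s: ' "$node"
  ssh "$node" 'find /data/glusterfs/gfsVol01/brick01/brick -type f | wc -l'
done
```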

Add a Client

Same as mounting the volume for testing on the server:

mount -t glusterfs gfs1:/gv0 /mnt/gv0

Written and Published with StackEdit.

Saturday, 2 July 2016

iSCSI Example on RHEL 7.2

Servers:
- iSCSI Target: akoya.spurrier.net.au – 192.168.1.26
- iSCSI Initiator: nuc1.spurrier.net.au – 192.168.1.27

Install iSCSI Target and Configure the Firewall

On each server:

subscription-manager register
subscription-manager attach --pool=8a85f98153dab2f00153dea83bf25daf
subscription-manager repos --enable rhel-7-server-rpms
yum clean all
yum repolist
yum groupinstall base
yum update
systemctl reboot

yum install firewalld
systemctl start firewalld.service
systemctl status firewalld.service
systemctl enable firewalld.service
firewall-cmd --permanent --add-service iscsi-target
firewall-cmd --reload

iSCSI Target – akoya.spurrier.net.au – 192.168.1.26

Ultimately I plan to try Gluster on this node, but for now I want to practice with iSCSI, so I created the future Gluster Brick below and instead used it as the backend for an iSCSI target. Sorry for any confusion.

Clean up “sdb” to use for the iSCSI target’s storage backend. (It had a Fedora install on it.)

pvs
fdisk -l /dev/sda
fdisk -l /dev/sdb
vgs  
lvs  
mount
# unmount any file systems on the disk to be repurposed.
pvs  
vgscan
vgchange -an fedora
pvremove /dev/sdb2 --force --force
# delete all partitions of the sdb disk.
fdisk /dev/sdb
pvcreate /dev/sdb
pvs  
vgcreate akoya_gfs1 /dev/sdb
lvcreate -l 100%FREE -n brick1 akoya_gfs1

We now have an unformatted block device at /dev/akoya_gfs1/brick1. Get the iSCSI Initiator Name for the Target’s ACL before you begin. (See the first step in the next server’s block.) Let’s make an iSCSI Target:

ll /dev/akoya_gfs1/
firewall-cmd --permanent --add-service iscsi-target
firewall-cmd --reload
yum install targetcli
systemctl start target
systemctl enable target
targetcli
    cd backstores/block
    create name=brick1 dev=/dev/akoya_gfs1/brick1
    cd ../../iscsi
    delete iqn.2003-01.org.linux-iscsi.akoya.x8664:sn.5bdc844021fa
    create iqn.2016-06.net.spurrier.akoya:gfs1-brick1
    cd iqn.2016-06.net.spurrier.akoya:gfs1-brick1/tpg1/luns
    create /backstores/block/brick1
    cd ../acls
    create iqn.1994-05.com.redhat:ff8456a7e3e0
    exit

iSCSI Initiator – nuc1.spurrier.net.au – 192.168.1.27

Check the iSCSI Initiator Name for adding to the ACL on the iSCSI Target (see above, iqn.1994-05.com.redhat:ff8456a7e3e0).

cat /etc/iscsi/initiatorname.iscsi 
    InitiatorName=iqn.1994-05.com.redhat:ff8456a7e3e0

Connect to the iSCSI Target:

yum install iscsi-initiator-utils
iscsiadm -m discovery -t st -p akoya
    192.168.1.26:3260,1 iqn.2016-06.net.spurrier.akoya:gfs1-brick1

iscsiadm -m node -T iqn.2016-06.net.spurrier.akoya:gfs1-brick1 -l

Confirm the iSCSI disk is “sdb”, format and mount:

ll /dev/sd*
fdisk -l /dev/sdb
fdisk -l /dev/sda
mkfs.xfs -L akoya_gfs1 /dev/sdb
mount /dev/sdb /mnt

Test by copying the root file system (/) to the mounted iSCSI volume:

df -h
mkdir /mnt/backup-nuc1
rsync -avPx / /mnt/backup-nuc1/
sync

(I threw in the “sync” because the root file system was so small (3GB) that the writes were cached in RAM (16GB), and I had not seen any meaningful traffic with iotop. Yes, the transfer rate was at my full 1Gb Ethernet speed.)
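To take the page cache out of the picture entirely when measuring, a direct-I/O write is one option (file name hypothetical; oflag=direct bypasses the cache so the rate dd reports is what actually reaches the iSCSI target):

```shell
# Write 1GiB with O_DIRECT so throughput reflects the network/disk, not RAM.
dd if=/dev/zero of=/mnt/ddtest.bin bs=1M count=1024 oflag=direct
rm /mnt/ddtest.bin
```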

Written with StackEdit.

IPA with Replica on RHEL 7.2

Registering Clients to IPA

yum install ipa-client
KRB5_TRACE=/dev/stdout ipa-client-install  --enable-dns-updates --mkhomedir --ssh-trust-dns --force-join
systemctl reboot

Building the IPA Cluster

Servers:

Install IPA and Configure the Firewall

On each server:

subscription-manager register
subscription-manager attach --pool=8a85f98153dab2f00153dea83bf25daf
subscription-manager repos --enable rhel-7-server-extras-rpms --enable rhel-7-server-rpms
yum clean all
yum repolist
yum groupinstall base
yum update
systemctl reboot
    
yum install firewalld
systemctl start firewalld.service
systemctl status firewalld.service
systemctl enable firewalld.service
firewall-cmd --permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,464/tcp,53/tcp,88/udp,464/udp,53/udp,123/udp}
firewall-cmd --reload

yum install ipa-server bind bind-dyndb-ldap ipa-server-dns

On akoya.spurrier.net.au – 192.168.1.26

hostname
ipa-server-install -r SPURRIER.NET.AU -n spurrier.net.au  --setup-dns --forwarder=192.168.1.1 --mkhomedir --ip-address=192.168.1.26 --ssh-trust-dns
kinit admin
/usr/sbin/ipa-replica-conncheck --replica nuc1.spurrier.net.au
ipa-replica-prepare nuc1.spurrier.net.au --ip-address 192.168.1.27
scp /var/lib/ipa/replica-info-nuc1.spurrier.net.au.gpg root@nuc1:

On nuc1.spurrier.net.au – 192.168.1.27

hostname
ipa-replica-conncheck --master akoya.spurrier.net.au
ipa-replica-install --mkhomedir --ip-address=192.168.1.27 --ssh-trust-dns --setup-dns --forwarder=192.168.1.1 /root/replica-info-nuc1.spurrier.net.au.gpg

Saturday, 28 November 2015

Satellite 6.1 on RHEL 6

Really, really reconsider running Satellite 6 on RHEL 6. While Red Hat technically supports it, there are just too many performance reasons not to ignore RHEL 7. Get your SOE updated to RHEL 7 first, if you have to, before going with Satellite 6.
-------

subscription-manager clean
subscription-manager register
subscription-manager list --available | tee /root/subs.avail
sed -n '/^Subscription Name:   Red Hat Satellite$/,/^Pool ID:/ p' /root/subs.avail
subscription-manager attach --pool=<satellite_pool_id>
subscription-manager release --set=6Server
subscription-manager repos --disable=*

Repositories to subscribe to for RHEL 6:
subscription-manager repos --enable rhel-6-server-rpms --enable rhel-server-rhscl-6-rpms --enable rhel-6-server-satellite-6.1-rpms

yum repolist
yum remove java*
yum update


iptables -A INPUT -m state --state NEW -p udp --dport 53 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 53 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p udp --dport 67 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p udp --dport 68 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p udp --dport 69 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 5647 -j ACCEPT \
&& iptables -A INPUT -m state --state NEW -p tcp --dport 8140 -j ACCEPT \
&& iptables-save > /etc/sysconfig/iptables

service iptables start
chkconfig iptables on


init 6

yum install katello

katello-installer --foreman-initial-organization "Spud" \
--foreman-initial-location "Private"

Could not set 'present' on ensure: 422 Unprocessable Entity at 12:/usr/share/katello-installer/modules/foreman_proxy/manifests/register.pp
Found my answer here.  It really does seem to be an issue for some KVM test rigs:
http://unixrevolution.blogspot.com.au/2015/09/satellite-6-installation-issues.html
Resolution: 
foreman-rake config -- -k idle_timeout -v 60
foreman-rake config -- -k proxy_request_timeout -v 99

katello-installer --foreman-initial-organization "Spud" \
--foreman-initial-location "Private"


Saturday, 14 November 2015

Install Red Hat Satellite 6.1 on RHEL 7.1 with Internet Access

References

KVM Hardware

  •  3 cores of an i7 @2.9GHz
  • 10GB RAM (Red Hat recommends a minimum of 12GB; some play with as little as 8GB.)
  • 100GB Virt SCSI Disk ("/" root only file system layout)
Disk usage for RHEL 7 + Red Hat Satellite + following RHEL 7 Repositories = 32GB
I recommend starting with a 100GB disk if you intend to keep and use the Satellite instance.

Laptop Hardware

  • Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz
  • 16GB RAM
  • 150GB SATA Disk @7200rpm ("/" root only file system layout)

 Installing Red Hat Satellite 6.1 (with Direct Internet Access)

Fix up the Product Subscriptions: 
(I did a crazy thing and re-used a RHEL 7 Workstation installation, so I had to copy the file /etc/pki/product/69.pem from a RHEL 7 Server to the laptop/workstation, and remove the other "product" pem file for "Workstation". See Why does command 'subscription-manager list' return: "No Installed Products found" ?)
subscription-manager register
subscription-manager remove --all
subscription-manager list --available --all | sed -n '/^Subscription Name:   Red Hat Satellite$/,/^Pool ID:/ p'
subscription-manager subscribe --pool=<pool_id_Red_Hat_Satellite>
subscription-manager release --set=7Server


Disable any existing repositories:

subscription-manager repos --disable=*

Confirm all the repositories are disabled:
subscription-manager repos --list-enabled
Repositories to subscribe to for RHEL 7:
subscription-manager repos --enable rhel-7-server-extras-rpms --enable rhel-7-server-satellite-6.1-rpms --enable rhel-server-rhscl-7-rpms --enable rhel-7-server-rh-common-rpms --enable rhel-7-server-satellite-tools-6.1-rpms --enable rhel-7-server-rpms
Check that the repositories listed are the *only* ones just subscribed to:
yum repolist

I am going to ask that "java" be removed twice, as it is a common source of installation failure!
yum remove java*
yum update


Enable external entities to connect to the following *optional* services on the Red Hat Satellite server:   DNS
firewall-cmd --permanent --add-port="53/udp" --add-port="53/tcp"

Enable external entities to connect to the following *optional* services on the Red Hat Satellite server:  DHCP
firewall-cmd --permanent --add-port="67/udp" --add-port="68/udp"

Enable external entities to connect to the following *optional* services on the Red Hat Satellite server:  TFTP
firewall-cmd --permanent --add-port="69/udp"

Enable external entities to connect to the following *optional* services on the Red Hat Satellite server:  Puppet Master
firewall-cmd --permanent --add-port="8140/tcp"

Enable external entities to connect to the following services on the Red Hat Satellite server:  HTTP, HTTPS, Katello Message Router
firewall-cmd --permanent --add-port="80/tcp" --add-port="443/tcp" --add-port="5647/tcp"

firewall-cmd --reload

The following reboot is optional, but it is recommended after software updates and firewall rule changes.
systemctl reboot


yum remove java*  # Yes this is a repeat, making sure you did it!
Java will be installed as a dependency of the "katello" package.
yum install katello



Backup and edit /etc/katello-installer/answers.katello-installer.yaml
cp /etc/katello-installer/answers.katello-installer.yaml /etc/katello-installer/answers.katello-installer.yaml_$(date '+%y%m%d%H%M')

Change the values for "initial_organization" and "initial_location" to something meaningful to your installation in the file /etc/katello-installer/answers.katello-installer.yaml
...
  foreman:
...
    initial_organization: "Default Organization"
...
    initial_location: "Default Location"
...

For a "simple" installation of Red Hat Satellite without DNS, DHCP and TFTP run "katello-installer" without options or changes to the default answers file.  "katello-installer" can be re-run again to add any of those components at a later time.
katello-installer
Save the initial password for the admin user.

Secure "elasticsearch" to only be accessible by the users "foreman" and "root".
firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -o lo -p tcp -m tcp --dport 9200 -m owner --uid-owner foreman -j ACCEPT \
&& firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 0 -o lo -p tcp -m tcp --dport 9200 -m owner --uid-owner foreman -j ACCEPT \
&& firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -o lo -p tcp -m tcp --dport 9200 -m owner --uid-owner root -j ACCEPT \
&& firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 0 -o lo -p tcp -m tcp --dport 9200 -m owner --uid-owner root -j ACCEPT \
&& firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -o lo -p tcp -m tcp --dport 9200 -j DROP \
&& firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 1 -o lo -p tcp -m tcp --dport 9200 -j DROP
firewall-cmd --reload

First Login

Log into the Red Hat Satellite URL with the credentials printed at the end of the "katello-installer" run. Save the admin user's password or set a new password via:
Admin User -> My Account -> User.  Enter the new password at the "Password" and "Verify" fields and then click the "Submit" button.

Load a Satellite Manifest

Content -> Red Hat Subscriptions -> [Manage Manifest ->] Actions -> Upload New Manifest -> Browse -> Upload

Initial Red Hat Content Sync

Content -> Red Hat Repositories -> RPMs -> Red Hat Enterprise Linux Server -> Red Hat Satellite Tools 6.1 (for RHEL 7 Server) (RPMS) -> Enabled (tick)

Content -> Sync Status -> Expand All -> Red Hat Satellite Tools 6.1 for RHEL 7 Server RPMs x86_64 (tick) -> Synchronize Now

Wait for it to download the content to make sure there are no issues with Satellite reaching back to Red Hat's Content Delivery Network.

Base RHEL Content Sync

Content -> Red Hat Repositories -> RPMs -> Red Hat Enterprise Linux Server -> Red Hat Enterprise Linux 7 Server (RPMS) -> Red Hat Enterprise Linux 7 Server RPMs X86_64 7Server -> Enabled (tick)
Consider enabling the repositories for:
  • Extras
  • Fastrack
  • Optional
  • Optional Fastrack
  • RH Common
  • Supplementary
 Content -> Sync Status -> Select All -> Synchronize Now

Configuration Note

There is no single path to follow to configure Satellite 6. Many of the resources depend on other resources, yet you must define each resource one at a time. For example, an Organization has many Subnets and a Subnet can be associated with many Organizations.

It gets a lot more complicated than that. Some resources, when you initially define them, only show a few fields to complete. Later, when viewing that same resource, you may find it has many tabs of configurable items.

Don't beat yourself up for not configuring in an optimum order; there probably isn't one. You will find, as you configure Satellite, that you have to revisit some resources to add additional configuration information. Satellite allows for this, and it is just the nature of circular relationships.

Sanity Check

Before you ever run katello-installer again, for example to add other modules such as dhcp and tftp, make sure the same fully qualified host name is returned by both commands:
facter fqdn
hostname -f

It is my experience that an entry for the Satellite server's primary NIC needs to be added to /etc/hosts. Adding an alias to the 127.0.0.1 entry does not cut it ;-)
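The two name sources can be compared in one line (assumes facter is present, which the katello packages pull in):

```shell
# Both tools must agree on the FQDN before re-running katello-installer.
[ "$(facter fqdn)" = "$(hostname -f)" ] && echo "FQDN OK" || echo "FQDN MISMATCH"
```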

Configure DHCP and TFTP

Backup and edit /etc/katello-installer/answers.katello-installer.yaml
...
    tftp: true
    tftp_syslinux_root:
    tftp_syslinux_files:
    tftp_root: /var/lib/tftpboot/
    tftp_dirs:
      - /var/lib/tftpboot/pxelinux.cfg
      - /var/lib/tftpboot/boot
    tftp_servername:
...

    dhcp: true
    dhcp_listen_on: https
    dhcp_option_domain:
      - spud.did.it
    dhcp_managed: true
    dhcp_interface: enp0s25
    dhcp_gateway: "192.168.1.1"
    dhcp_range: "192.168.1.100 192.168.1.239"
    dhcp_nameservers: "192.168.1.1"
    dhcp_vendor: isc
    dhcp_config: /etc/dhcp/dhcpd.conf
    dhcp_leases: /var/lib/dhcpd/dhcpd.leases
    dhcp_key_name: ""
    dhcp_key_secret: ""

...


Update Satellite 6 configuration:
katello-installer

Checks

cat /etc/dhcp/dhcpd.conf
systemctl status dhcpd
cat /etc/xinetd.d/tftp
find /var/lib/tftpboot/
systemctl status xinetd

Import Subnets

Infrastructure -> Capsules -> (drop-down list for the Satellite server) -> Import subnets
Fill in any missing details about the subnet.

IPAM == IP Address Management

You can select one of three possible IPAM modes:
  • DHCP - will manage the IP on DHCP through assigned DHCP proxy, auto-suggested IPs come from DHCP
  • Internal DB - use internal DB to auto-suggest free IP based on other interfaces on same subnet respecting range if specified, useful mainly with static boot mode
  • None - leave IP management solely on user, no auto-suggestion

Finish Configuring the Subnet

Infrastructure ->Subnets -> (imported subnet name)

Domain Configuration

Satellite 6 considers a domain and a DNS zone to be the same thing. That is, if you are planning to manage a site where all the machines are of the form hostname.somewhere.com then the domain is somewhere.com. This allows Satellite 6 to associate a puppet variable with a domain/site and automatically append this variable to all external node requests made by machines at that site.
The fullname field is used for human readability in reports and other pages that refer to domains, and is also available as an external node parameter.

Configure a Compute Resource

Infrastructure -> Compute Resources -> New Compute Resource

Saturday, 11 July 2015

Headless firewall-config on RHEL 7.1


The following package dependencies are missing from firewall-config-0.3.9-11.el7.noarch:

  • libcanberra-gtk3-0.30-5.el7.x86_64
  • PackageKit-gtk3-module-0.8.9-11.el7.x86_64
  • NetworkManager-glib.x86_64
  • xorg-x11-fonts-ISO8859-1-100dpi
  • xorg-x11-fonts-ISO8859-1-75dpi
  • xorg-x11-fonts-misc
  • xorg-x11-fonts-Type1
  • xorg-x11-font-utils

Without these extra packages running firewall-config over "ssh -Y" fails.

Try this to get it working in one command:

yum install firewall-config xorg-x11-xauth libcanberra-gtk3-0.30-5.el7.x86_64 PackageKit-gtk3-module-0.8.9-11.el7.x86_64 NetworkManager-glib.x86_64 xorg-x11-fonts-ISO8859-1-100dpi xorg-x11-fonts-ISO8859-1-75dpi xorg-x11-fonts-misc xorg-x11-fonts-Type1 xorg-x11-font-utils

============================
The error message was:


# firewall-config

** (firewall-config:1325): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-uu77vqCHr1: Connection refused
Gtk-Message: Failed to load module "pk-gtk-module"
Gtk-Message: Failed to load module "canberra-gtk-module"
ERROR:root:Could not find any typelib for NetworkManager
Traceback (most recent call last):
  File "/usr/bin/firewall-config", line 34, in <module>
    from gi.repository import NetworkManager
ImportError: cannot import name NetworkManager
 

Thursday, 19 February 2015

Red Hat Satellite 5.6 to 5.7 Cheat Sheet

References:

Procedure

  1. Commence the download of the installation ISO for Red Hat Satellite 5.7 Installer from the Red Hat Customer Portal, Downloads section.
  2. Create the new Red Hat Satellite 5.7 Certificate and attach the required subscriptions. It may take some time for the certificate to become available, so it is good to start this early on. Don’t forget to include a Satellite entitlement, otherwise the “Download Certificate” button will never activate. You may have to wait a while and refresh the page, but mostly I find it works within a few minutes.
  3. Backup your database. Note that PostgreSQL (postmaster) takes a while to shut down:
    rhn-satellite stop
    mkdir /var/satellite/Backups/20150219-2211.dump
    db-control backup /var/satellite/Backups/20150219-2211.dump
    
  4. Consider using spacewalk-data-fsck to help clean up the Red Hat Satellite database before you start. This takes a long while!
    spacewalk-data-fsck -v -r
    
  5. Ensure your Red Hat Satellite 5.6 server is up-to-date:
    yum update
    
    Depending on what packages get updated you may want to reboot and stop Red Hat Satellite to get back to the same state:
     init 6
     rhn-satellite stop
    
  6. Check that the database schema version and the installed schema package are the same. Otherwise you may have to update the database schema:
    service postgresql start
    rhn-schema-version
    rpm -q --qf '%{version}-%{release}\n' satellite-schema
    
    If the versions are different then update the schema:
    spacewalk-schema-upgrade
    
  7. Install the software that will perform the upgrade to Red Hat Satellite 5.7. It is in the Red Hat Satellite 5.6 software channel redhat-rhn-satellite-5.6-server-x86_64-6:
    yum install rhn-upgrade
    
  8. Ensure there is more free space on the file system that will house /opt/rh/postgresql92/root/var/lib/pgsql/data than is presently consumed at /var/lib/pgsql. A minimum of 12GB is required:
        du -hs /var/lib/pgsql/
    
    Consider deleting the satsync directory contents if you require additional free space for the upgrade to occur.
    Important – /opt/rh/postgresql92/root/var/lib/pgsql/data
    Due to an updated version of the PostgreSQL Embedded Database, the database location has changed from /var/lib/pgsql/data in Red Hat Satellite 5.6 to /opt/rh/postgresql92/root/var/lib/pgsql/data in Red Hat Satellite 5.7. Make sure to allocate enough hard disk space to this location.
    rm -rf /var/cache/rhn/satsync/*
    
  9. Transfer the ISO for Red Hat Satellite 5.7 Installer to the Red Hat Satellite server.
  10. Transfer the Red Hat Satellite 5.7 certificate from RHN to the server.
  11. Mount the ISO for Red Hat Satellite 5.7 Installer:
    mount -o ro,loop /root/satellite-5.7.0-20150108-rhel-6-x86_64.iso /mnt
    
  12. Run the installer with the upgrade switch:
    Important
    Use additional options if your Red Hat Satellite is disconnected or using a Managed Database or External Database.
    cd /mnt
    ./install.pl --upgrade
    
    Accept the offer to resolve the dependencies.
    “Installing RHN packages” appears to take a really long time (many many minutes). Going to bed now :-p
    Yes it does take a long time and then “Setting up SELinux…” takes a while longer!
  13. Upgrade the database schema:
    spacewalk-schema-upgrade
    
    Check the database schema version and the installed schema package are the same. Otherwise you may have to update the database schema:
    rhn-schema-version
    rpm -q --qf '%{version}-%{release}\n' satellite-schema
    
  14. Activate the Red Hat Satellite.
    If using a connected Satellite:
    rhn-satellite-activate --rhn-cert [PATH-TO-NEW-CERT] --ignore-version-mismatch
    
    If disconnected, run:
    rhn-satellite-activate --rhn-cert [PATH-TO-NEW-CERT] --disconnected --ignore-version-mismatch
    
  15. Rebuild search indexes with the following command:
    service rhn-search cleanindex
    
  16. The upgrade process saves a backup of rhn.conf and other configuration files to /etc/sysconfig/rhn/backup-$DATE-$TIME. Refer to the backup copy of the rhn.conf file and ensure any previous custom values are set in the new Red Hat Satellite’s /etc/rhn/rhn.conf file. For example:
    debug = 3
    pam_auth_service = rhn-satellite
    
  17. Restart all Red Hat Satellite services:
    /usr/sbin/rhn-satellite restart
    

Written with StackEdit.