Proxmox, clustering and high availability for the home lab

To enable HA in Proxmox you need a way for Proxmox to move VMs between nodes without also having to move the VMs' associated disks.

This is achieved by using shared storage that all of the cluster nodes can access.
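
The HA side itself is fairly simple once the storage is sorted. As a rough sketch (the cluster name, node IP and VM ID below are placeholders, not my actual values):

    # On the first node: create the cluster
    pvecm create homelab

    # On each additional node: join the cluster using the first node's IP
    pvecm add 192.168.1.10

    # With shared storage in place, let the HA manager look after a VM
    ha-manager add vm:100 --state started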

Options I have tried include:

NFS to a (very old) D-Link NAS

Using a DNS-345 with 4x 500GB drives in RAID 5 and a single 1Gb network connection.

I enabled NFS on the NAS and allowed the cluster nodes access. After adding the share to the cluster I was able to use it to store VM and CT disks.
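
For reference, the share can be added from the command line as well as the GUI. The storage name, server address and export path here are placeholders:

    # Register the NFS export as storage for VM (images) and CT (rootdir) disks
    pvesm add nfs nas-nfs --server 192.168.1.20 \
        --export /mnt/nas/proxmox --content images,rootdir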

I was able to run about 20 Windows- and Linux-based systems as long as they didn't need heavy disk IO. Anything that did required patience to complete.

Monitoring the DNS-345, it became apparent that the limitation was the network connection; CPU and disk IO on the NAS itself were minimal.
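
For anyone wanting to confirm a similar bottleneck, a rough check (assuming shell access on both ends and that the tools are installed, which on a small NAS they may not be) is to compare raw network throughput with disk utilisation:

    # On the storage box: run an iperf3 server
    iperf3 -s

    # On a Proxmox node: measure throughput to the storage box
    iperf3 -c 192.168.1.20

    # On the storage box: watch per-disk utilisation while VMs are busy
    iostat -x 5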

The NAS itself is not upgradeable.

NFS to Unraid

Similar to the DNS-345 setup, but with a more powerful system.

A bit of background if you are not familiar: Unraid is not aimed at this sort of task and is more suited to home use as a media and file storage system. It worked well, but I also like Unraid's feature of spinning down disks during quiet times. This is possible because data is read from a single drive (unlike 'proper' RAID, where reads are spread across disks). Running VMs meant there would be constant access to any disk hosting a VM, preventing those disks from spinning down, although I could limit which disks were used in Unraid's share settings.

I also went through upgrades on this server including:

  • Changed the CPU from a Ryzen 3 to a Xeon. The Ryzen worked well, but the motherboards in my price range lacked certain features (PCIe slots, ECC memory) that were better served by a server-style board.
  • Changed the SATA add-on cards to SAS to allow the use of SAS drives. Enterprise-style drives are surprisingly cheap on eBay.
  • Emulex cards allowed 10Gb fibre connections (running IP) to Unraid.

To cut a long story short, I felt a more dedicated storage system would work better than Unraid.

iSCSI to Unraid

While looking over options I had a look at the iSCSI plugin available for Unraid. This was a great way to learn about iSCSI, as the plugin leads you through the process. I was able to make the resulting targets available to the Proxmox cluster just as you would any iSCSI drives, and along with the 10Gb fibre IP network this made for quite a quick disk backend.
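
Attaching the target on the Proxmox side is a single command per target; the storage name, portal address and IQN here are made up:

    # Make the Unraid iSCSI target visible to every node in the cluster
    pvesm add iscsi unraid-iscsi --portal 10.0.10.5 \
        --target iqn.2020-01.local.unraid:vmstore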

iSCSI to ESOS

After acquiring more dedicated hardware, including a rack-mount server with 10k SAS drives and (low-end) hardware RAID cards, I went looking for appropriate software to give Proxmox access to the storage.

I settled on ESOS (Enterprise Storage OS). This is designed to boot off a USB stick, is Linux based, and is dedicated to being a storage system. Unlike FreeNAS and its various versions, it has few features beyond those required. It does have a text-based user interface for configuration and for viewing some statistics on connections and usage.

I initially ran with QLogic fibre cards, but these would not work with Unraid and I didn't want to run two separate fibre networks. So I went with Emulex cards and iSCSI to share the disks with the Proxmox cluster.
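
The ESOS target is attached to Proxmox in the same way as the Unraid one above. To get shared, HA-friendly VM storage out of a LUN, one common approach (a sketch with placeholder names; the actual device node depends on how the LUN shows up) is to layer LVM on top and mark it shared:

    # Initialise the iSCSI LUN and create a volume group on it
    pvcreate /dev/sdX
    vgcreate vg_esos /dev/sdX

    # Expose the volume group to the whole cluster as shared LVM storage
    pvesm add lvm esos-lvm --vgname vg_esos --shared 1 --content images,rootdir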

This is easily the best disk IO out of the options I had tried in my low-budget home lab.

Laravel and Docker

Laravel is a PHP framework that makes building a web application faster (once you climb the mountain to learn it!).

Docker is the hosting environment that brings your ‘development’ and ‘production’ environments closer together.

Laradock is the glue holding them together.
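
The basic Laradock workflow looks roughly like this (which services you start and the env settings will vary per project, so treat it as a sketch):

    # Add Laradock alongside your Laravel project
    git clone https://github.com/laradock/laradock.git
    cd laradock

    # Copy the example environment file (named .env.example in newer releases)
    cp env-example .env

    # Bring up a typical stack: web server, PHP-FPM and a database
    docker-compose up -d nginx mysql

    # In your Laravel .env, point the app at the containers, e.g. DB_HOST=mysql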
