My Kubernetes Cluster Experience


Note: This post was originally posted on my personal blog. I have copied the content to this blog.

(Links to code for my environment at the bottom)

I didn’t like how my main unRAID server was a single point of failure, so I set out to build a Kubernetes cluster. Initially I used Rancher to create it, and it was very easy: simply install Docker, run a single command on the first master node, and run a different command on each worker node to join.
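
For reference, the node registration command Rancher hands you looks roughly like this (the version tag, server URL, token, and checksum below are placeholders, not my actual values):

    # Rancher 2.x custom-cluster node registration (placeholder version/server/token/checksum)
    # The first node gets all three roles; additional workers only pass --worker
    sudo docker run -d --privileged --restart=unless-stopped --net=host \
      -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
      rancher/rancher-agent:v2.4.8 \
      --server https://rancher.example.com \
      --token <registration-token> --ca-checksum <checksum> \
      --etcd --controlplane --worker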

This worked very well, and it was very useful to see cluster resources in a GUI. Creating and managing Secrets and ConfigMaps, and restarting pods, were all very simple.

I ran with a 2-node setup for quite a while (a VM running on my unRAID host as the master, and an old laptop given a second life as a worker node).

Eventually, the fragility of the setup and my need to keep messing with things got me thinking about how I could improve it. I brought in my old desktop that I had stopped gaming on (which would make younger versions of me very sad to hear, though I do still game on the Nintendo Switch) as another worker node. I configured GPU passthrough to the pods so both nodes could transcode PleX streams more efficiently. This was where things started getting interesting, as I shuffled pods around between the nodes. At this point I was running primarily PleX and Bitcoin Core.
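
I won’t reproduce my actual manifests here, but once the right device plugin is installed, handing a GPU to a pod mostly comes down to a resource request. A minimal sketch, assuming the NVIDIA device plugin (the image and names are illustrative, not my real PleX deployment):

    # Illustrative pod requesting one GPU via the NVIDIA device plugin
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: plex-gpu-test
    spec:
      containers:
      - name: plex
        image: plexinc/pms-docker
        resources:
          limits:
            nvidia.com/gpu: 1
    EOF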

I also started using MetalLB. It’s very straightforward, and it’s really cool to see IPs get picked out of the pool and just work on my network.
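
The whole configuration is basically just an address pool. A minimal Layer 2 example in the ConfigMap style MetalLB used at the time (the address range here is made up; newer releases configure this with CRDs instead):

    # Minimal MetalLB Layer 2 pool (illustrative address range)
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250
    EOF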


Running those, it became clear I needed better storage management. I had to pin pods with persistent storage to specific nodes, because that’s where the storage was. I had a NAS, but it was already being used for a lot on my network, and its access times and transfer speeds paled in comparison to local storage.

I happened upon Longhorn and thought it was a cool idea. I had seen other, similar technologies (more mature ones, even), but I was feeling experimental. So I installed it in my cluster (thanks, Rancher app installer!).

Longhorn was deceptively easy to set up; just install and throw some storage at it. I ensured I had backups of my data and configured my workloads to use it.
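
Once Longhorn is installed, pointing a workload at it is just a PersistentVolumeClaim against the longhorn StorageClass. Something like this (the name and size are illustrative):

    # Illustrative PVC backed by Longhorn
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: longhorn
      resources:
        requests:
          storage: 80Gi
    EOF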

It proved to be a little brittle but overall reliable enough for me to trust it with data (though there are some rules you NEED to know about Longhorn):

  • Never let your Kubernetes master go offline.
  • Cordon and drain nodes (with --ignore-daemonsets) before restarting them (see the commands after this list).
  • Any loss of connectivity will trigger a rebuild of all volumes on the node.
  • Give volumes a little padding, but not too much, so you stay efficient. For example, a volume whose data takes up 60GB should be given about 80GB of space. Longhorn operates at the block level, not the filesystem level, so blocks that have ever been written stay allocated; if you give an application too much space, it can write and rewrite data until the volume looks like it’s using all of it.
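
For reference, the cordon/drain routine looks like this (the node name is just an example):

    # Before rebooting or taking a node down for maintenance (example node name)
    kubectl cordon trainman
    kubectl drain trainman --ignore-daemonsets
    # ...reboot / do maintenance...
    kubectl uncordon trainman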

It was around this time that I started experimenting with other apps, such as ElectrumX and btc-rpc-explorer. They had storage requirements too, which gave me experience with managing multiple apps and volumes in Rancher.

While I was playing with all of that, I also set up NewRelic and Terraform for my cluster. I had experience with NewRelic and wanted to get to know Terraform better. My friend recommended Atlantis, so I set that up; it was a little tricky, but now I have PR planning and applying right from GitHub whenever I want it. I also started dabbling in PagerDuty. NewRelic and PagerDuty were both configured via Terraform as much as possible (I even ended up making a dashboard in Terraform code!).

I started feeling the restrictiveness of setting up a cluster with Rancher. I wrestled with kube-proxy and kube-dns (I could only run one instance, or else my cluster’s DNS would have issues). I also had issues managing upgrades of Rancher, and the Rancher documentation generally assumed you hadn’t done what I did, since my setup was really only meant for development and demonstration purposes.

So I set out to remake my whole cluster, lol.


I decided to go the kubeadm route. The only real thinking I had to do was deciding what my subnets should be (half of the whole 192.168 space should be good, right?).
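
That ended up being roughly the following (the CIDRs here just illustrate the “half of 192.168” idea and aren’t necessarily what I actually picked):

    # Illustrative kubeadm init carving the 192.168.0.0/16 space in half
    sudo kubeadm init \
      --pod-network-cidr=192.168.0.0/17 \
      --service-cidr=192.168.128.0/17
    # then, on each worker, run the join command printed by:
    kubeadm token create --print-join-command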

Since I already had a lot of what I ran defined in code, it was pretty easy to set everything back up; I think it took me about a day. At the same time, I added a new server so I could have 3 worker nodes (though the master is still a VM, which I’ll address at the end).

I also decided to set up Flux, which was mostly straightforward. I have a repo with a few Kustomization manifests and one Helm chart, which get automatically re-applied any time there are changes.
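
Getting that going was essentially one bootstrap command pointed at the fleet-infra repo (the branch and path below are the defaults from the Flux docs, not necessarily exactly what I used; it also expects a GITHUB_TOKEN with repo access):

    # Flux v2 bootstrap against the fleet-infra repo (path/branch are the documented defaults)
    flux bootstrap github \
      --owner=tyzbit \
      --repository=fleet-infra \
      --personal \
      --branch=main \
      --path=clusters/my-cluster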

After a good amount of tweaking and playing around with things, here’s what my cluster looks like now:

  • Kubeadm configured, 3 worker nodes, 1 master (VM)
  • Longhorn with 1TB SSD on each node, replicated between them
  • NewRelic alerts and data along with PagerDuty, all configured via Terraform
  • Flux for applying what’s in source control to the cluster
  • Still have Rancher but it’s just a glorified Kubernetes dashboard now

I also run the following applications:

  • Bitcoin: The Bitcoin Core daemon.
  • BTC-RPC-Explorer: Blockchain web explorer
  • ElectrumX: Used as a Bitcoin address balance database for BTC-RPC-Explorer
  • Folding@Home: Currently running folding simulations for COVID-19 on all three workers. Makes them run pretty toasty 🙂
  • iperf3: Used to benchmark connectivity between nodes, usually scaled down
  • praqma/network-multitool: Used to get a shell in a pod when I need it for additional troubleshooting (see the example after this list)
  • NextCloud (+Cache, +DB): Might be where I migrate my NextCloud instance on unRAID to. I’m tight on Longhorn storage since I only have 1TB, and the Bitcoin blockchain currently takes up 550GB (instead of the ~390GB it should) because of the block-level learning experience I mentioned earlier.
  • PleX: My backup plex in case my main one goes down.
  • Youtube-Dl-Material: Watches YouTube playlists and downloads new videos as they come out.
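
The multitool image in particular is worth calling out; when something network-related misbehaves, I just start a throwaway shell inside the cluster (the pod name is arbitrary):

    # Throwaway troubleshooting shell inside the cluster (arbitrary pod name)
    kubectl run tmp-shell --rm -it --image=praqma/network-multitool -- sh
    # from there: ping, dig, curl, traceroute, etc.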

With the master being a VM, I’ve decided to move it to physical hardware, and to run three of them so I can have a more robust cluster. I currently have a Dell Wyse 5040 on order: a small quad-core Atom box with 2GB of RAM, normally meant to be a thin client or terminal rather than a full-fledged computer. I’m hoping I can get away with spending about $120 each for those plus some storage for each k8s master. The RAM won’t be an issue, since the current master only uses about 800MB (and my cluster won’t expand much more after this), but it remains to be seen whether a quad-core Atom can keep up with the demands of a Kubernetes master.

  • https://github.com/tyzbit/rancher-cd-apps
  • https://github.com/tyzbit/terraform
  • https://github.com/tyzbit/fleet-infra

My Servers:

  • Left: freeman (K8S Worker) Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz, 32GB RAM, 256GB SSD + 1TB SSD (NVMe), GTX 970, Ubuntu 20.04.1
  • Top rack, left: anchorman (K8S Worker) Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz, 16GB RAM, 512GB SSD + 1TB SSD (NVMe)
  • Top rack, right: architect (router) Intel(R) Core(TM) i5-3470T CPU @ 2.90GHz (4 cores), 12GB RAM, 512GB SSD
  • Middle rack, left: simulacrum (unRAID) Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz, 32GB RAM, 2TB SSD (NVMe)
  • Middle rack, right: braavos (NAS) WD MyCloud EX4100, 4x8TB WD80EFAX, RAID1
  • Laptop on wall: trainman (K8S Worker) Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (Turbo Boost to 3.4GHz), 16GB RAM, 128GB SSD + 1TB SSD (External)
