This post kicks off an ongoing series exploring the components and applications that make up my newly rebuilt homelab.
A homelab is a great resource for honing DevOps and infrastructure skills. It can be as simple as a temporary environment stood up with Docker Compose to see how some self-hosted applications work, or as intricate as a whole home datacenter rack or cloud environment running everything you can think of. Just as important as the skills you gain from running a homelab is the control it gives you: if you don't like an update to an application you can switch to a new one, and you aren't locked in by a vendor who holds your data.
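To give a sense of the simple end of that spectrum, a quick experiment really can be a single throwaway compose file. Gitea here is just a stand-in for whatever application you want to evaluate:

```yaml
# docker-compose.yml -- a hypothetical throwaway stack for trying a self-hosted app.
services:
  gitea:
    image: gitea/gitea:latest  # example application; swap in the one you want to test
    ports:
      - "3000:3000"            # web UI at http://localhost:3000
    volumes:
      - gitea-data:/data       # named volume so data survives container restarts
volumes:
  gitea-data:
```

`docker compose up -d` stands the environment up, and `docker compose down -v` throws the whole thing, data included, away.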
For the last few years I had a single desktop server running Nomad, Consul, and a handful of applications. I hadn't used Nomad in any practical way before, but I was curious about what it offered compared to Kubernetes. For the most part it worked, but almost every time the server rebooted I had to restart several of the applications before they would properly detect one another. There were also multiple leftover configurations from experiments that hadn't worked out, making it harder to update or change anything.
Finally I decided it was time to rebuild, to create a homelab that could be rebuilt programmatically when faced with a disaster. The rebuild wasn't a single, clean sweep but evolved as I realized I wanted, or was missing, something. Each change was incorporated in a way that ensured I could still redeploy, and I learned as I went.
The plan
I can split the plan into two parts: rules I wanted to follow when setting up my homelab, and the types of technology I wanted to use in it.
My rules
- Everything should be configured through IaC[1] or other programmatic methods. There are definitely gaps where I still need to implement this better, but they are much smaller than in the past.
- Systems should be backed up, particularly if they hold any important data. As mentioned in my disaster post, I do have this implemented.
- Self-hosted applications should be monitored to catch failures. I don't have logging or telemetry at the moment, but Gatus handles the monitoring and sends alerts (a config sketch follows this list).
- Everything should be self-hosted if possible. The only exceptions are Infisical for secrets management and public git hosting. I am running my own copy of Forgejo, but my repos are either hosted on or mirrored to a public service. Secrets and the repositories themselves are part of the bootstrap process, so fully self-hosting them becomes a chicken-and-egg situation[2].
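As a rough sketch of what the Gatus side of that monitoring rule looks like (the endpoint names, URLs, and SMTP details below are placeholders rather than my actual configuration):

```yaml
# gatus.yaml -- minimal monitoring sketch; all names and addresses are examples.
alerting:
  email:
    from: "gatus@example.com"
    to: "me@example.com"
    host: "smtp.example.com"
    port: 587

endpoints:
  - name: forgejo
    url: "https://git.example.com"
    interval: 5m
    conditions:
      - "[STATUS] == 200"          # alert if the endpoint stops returning 200
      - "[RESPONSE_TIME] < 1000"   # or gets slower than a second
    alerts:
      - type: email
        failure-threshold: 3       # avoid alerting on a single blip
        send-on-resolved: true
```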
Technology desires
- Nodes and applications should connect securely. Headscale manages my Tailscale network and ensures secure connections between systems.
- Use Kubernetes for orchestration. I liked the idea of Nomad but it never quite worked for me.
- Persistent Volumes should use some sort of distributed driver so that applications can migrate when nodes reboot (see the manifest sketch after this list).
- SSO should be leveraged in hosted applications that support it; in my case that means Authentik.
- Look for useful self-hosted applications that my family could use as well, to ensure I actually pay attention to keeping them functional and consistent. It is very easy to set things up for myself and only make changes when I break something; running a homelab for others means I have real external expectations of stability.
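To illustrate the Persistent Volume point, a claim against a distributed storage class might look like the sketch below. The longhorn class name is an assumption, standing in for whichever distributed driver is actually installed:

```yaml
# pvc.yaml -- a volume backed by a distributed driver, so the pod that mounts it
# can be rescheduled onto another node when its current node reboots.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # hypothetical; replace with your distributed storage class
  resources:
    requests:
      storage: 5Gi
```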
The Infrastructure
Originally I had the single server, my desktop with its graphics card, and a Raspberry Pi 4 running HomeAssistant. While this could technically have been enough to run everything I had in mind, I wanted to be able to control where and how things ran and were deployed. So I took advantage of various Black Friday specials to get some low-cost VPSes[3], and then a MiniPC for Christmas.
This gives me a decent spread of machines to work with, as well as some room for expansion if new ideas come up.
- dresden: Desktop running Arch Linux with a GPU. It is only part of the homelab because it runs Ollama for LLM experimentation; if GPU prices drop it will be relieved of homelab duty.
- blackstaff: Server running Debian. This was the original homelab machine, now rebuilt to serve as the Salt master node, Kubernetes master node, and backup replica target thanks to its large-capacity hard drives (a sample Salt state follows this list).
- arthur: Mini PC server running Arch Linux. Since my desktop and laptop both run Arch, I want a proxy for the mirrors as well as a common location to compile packages.
- jlpks8888: Black Friday special Debian VPS meant as the public node in the cluster.
- jlpgreencloud: Black Friday special Debian VPS serving as the Caddy reverse proxy between my homelab and the wider internet.
- headscale: Oracle Always Free instance running Headscale to serve as the Tailscale orchestrator for my network.
- homeassistant: Raspberry Pi 4 running HomeAssistant.
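To give a flavour of what the Salt master on blackstaff manages, here is a minimal, hypothetical state; the file name and the Docker example are placeholders rather than my actual states:

```yaml
# /srv/salt/docker.sls -- ensure Docker is installed, enabled, and running.
docker:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: docker   # only try to start the service once the package exists
```

Applying it to a minion is then `salt 'arthur' state.apply docker` from the master.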
I have one other small VPS that is not currently in use; it could be added to the cluster if there is ever a need, or turned into a separate reverse proxy if I want to keep domains separate.
1. Infrastructure as Code: using code, scripts, or configuration files to manage infrastructure as opposed to doing manual configuration each time you set it up.
2. If I host my secrets tool and repository in Kubernetes then I need Kubernetes to deploy the tools, but Kubernetes needs the tools to be deployed.
3. Virtual Private Servers: significantly cheaper than a dedicated machine hosted in a datacenter, a VPS still provides full root-level access and the ability to reinstall it with any supported OS.