I’ve been upgrading my homelab for the last few months. I will be writing a more in-depth series of posts about it in the near future and will update this post with a link once the series is started.
When I first deployed my new homelab (see my previous post on why I had to redeploy), I went with Uptime Kuma for status monitoring. I quite liked the UI for the statuses themselves, as well as the dashboards I could create. Unfortunately, all the persistent storage data for these status checks was lost during the rebuild, and when I looked for a more programmatic way to re-create the checks, I found out that there is no official API to do so.
Now, I could install the Unofficial UptimeKuma API Wrapper and use Python to create all the monitor endpoints, but then I would need to keep updating that script each time I added a new application. Instead, I settled on Gatus after browsing the awesome-status-pages list and using a combination of Claude Sonnet, Reddit, and Google to determine a path forward.
Where Uptime Kuma doesn’t meet my needs
I like automation. When given the choice between creating configuration files or scripts and manually entering information through a WebUI, I will choose the former every time. Manual entry with mouse clicks is error-prone and time-consuming, particularly when re-creating an existing configuration.
Uptime Kuma looks great. If it let me call an official API to make changes, or better still provided a configuration file¹, I would still be using it. Unfortunately, Uptime Kuma lacks a public API, and community feedback suggests that implementing one is a low priority.
How Gatus lets me keep being efficient
Gatus uses YAML configuration files to define which endpoints to monitor, how to group them, where to send outage notifications, and everything else. By default it reads a single file, but pointing it at a folder instead will merge all of the files there into the final configuration. It also automatically reloads the configuration whenever it changes.
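For illustration, here is a minimal sketch of what one of those per-endpoint files could look like; the file name, endpoint name, group, and URL are all hypothetical placeholders:

```yaml
# blog.yaml — hypothetical endpoint file; when Gatus is pointed at a
# folder, every file in it is merged into the final configuration.
endpoints:
  - name: blog                       # display name on the status page
    group: web                       # groups related endpoints in the UI
    url: "https://blog.example.com"  # placeholder URL
    interval: 60s                    # how often to run the check
    conditions:
      - "[STATUS] == 200"            # expect an HTTP 200
      - "[RESPONSE_TIME] < 500"      # and a response within 500 ms
```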
By combining this with k8s-sidecar (a tool that collects Kubernetes ConfigMaps and writes them out as files for a paired container), I am able to define endpoint configurations in my kustomize (a tool for customizing Kubernetes manifests) overlays so that each application automatically adds itself to the monitoring stack when it is deployed. It also simplifies reorganizing my status monitors as I work out what suits me best, since each change is just a few lines of configuration.
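As a sketch of how that fits together: k8s-sidecar watches for ConfigMaps carrying a configurable label (set via its LABEL environment variable) and writes their data keys as files into a folder shared with the Gatus container. The label key, names, and URL below are assumptions for illustration:

```yaml
# Hypothetical ConfigMap added by an application's kustomize overlay.
# The label must match whatever LABEL/LABEL_VALUE the k8s-sidecar
# container is configured to watch for.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-gatus-endpoint
  labels:
    gatus_config: "true"  # assumed label; must match the sidecar's LABEL
data:
  myapp.yaml: |           # written as a file into the shared config folder
    endpoints:
      - name: myapp
        group: apps
        url: "https://myapp.example.com"
        conditions:
          - "[STATUS] == 200"
```

Because Gatus reloads its configuration automatically, applying an overlay that includes a ConfigMap like this is enough for the new application to show up on the status page without touching Gatus itself.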
¹ A configuration file works better in this case since I can just re-apply it if anything breaks as part of the restoration. I don’t have to remember to call the API.