This was originally going to be a heading in my previous post; however, I realized that app deployment and configuration deserved its own post.

Deploying containers into Kubernetes is quite straightforward. Almost every self-hosted app has instructions for deploying either to Kubernetes (be it directly via yaml files or a tool like Helm) or via a Docker Compose file. If there aren't instructions for Kubernetes, the container definitions in docker-compose.yml can be rewritten as a Pod specification and then wrapped in a Deployment or StatefulSet, or left as a bare Pod. Deploying multiple containers in a consistent and coherent manner adds a layer of complexity, since you want to ensure all your containers use the right annotations, keep their persistent data in the right StorageClass, and define their Services and load balancing rules in the same manner.
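If it helps to picture that conversion, here is a minimal sketch; the app name, image and port are placeholders rather than any real app. A Compose service such as:

services:
  myapp:
    image: ghcr.io/example/myapp:1.0
    ports:
      - "8080:8080"

maps more or less mechanically onto a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: ghcr.io/example/myapp:1.0
          ports:
            - containerPort: 8080

with any volumes becoming PVCs and the published ports becoming a Service.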

I went through a few iterations of deployment methods to get to my current state. There are still a few rough edges and changes I want to revisit at a future date, but my current setup gets everything done and makes it easy to add additional applications, or to redeploy the entire setup and get back to my current state in case of failure.

Helm Charts Everywhere

For anyone who sees the section title and thinks going this route was a mistake, don't worry: beyond a momentary thought experiment I didn't go down it. For everyone else, Helm charts are a common way to define application deployments together with their dependencies, and they work quite well for deploying an app or a stack of apps. However, if I wanted to use Helm charts for all my deployments I would need to create a separate chart for each app, with very little re-use other than copy-pasting.

Several of the apps I’m using come with Helm Charts, and in most cases I ended up using the charts for my deployments. In cases where I didn’t need a full production scale deployment I based my deployment on what the Helm Chart provides but kept it limited to what I actually needed. For everything else I realized that Helm was not the right choice for my environment and moved on.

On the upside, K3s includes a Helm controller (a Custom Resource Definition plus a Deployment) that simplifies Helm chart usage. When a valid HelmChart resource is created, the entire chart lifecycle is handled by the cluster, removing the need to worry about version mismatches between my machine and the cluster when running helm.

For example, deploying the cert-manager chart with CRDs installed and Prometheus disabled just requires the yaml file below. The valuesContent string corresponds to the values.yaml file you would normally use, and the chart and repo values under spec tell helm where to find the chart.

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: cert-manager
  repo: https://charts.jetstack.io
  targetNamespace: cert-manager
  createNamespace: true
  valuesContent: |-
    installCRDs: true
    prometheus:
      enabled: false
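Once that file is applied with an ordinary kubectl apply -f, the controller takes over: as far as I can tell it runs the actual install (and subsequent upgrades) as jobs inside the cluster, so no local helm invocation is ever needed.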

Straight Kubernetes Manifests

My first iteration of deployments was fairly simple in nature: create a set of Kubernetes yaml definitions in a single file, appName.yaml, that included all the required components, then copy that to appName2.yaml and update the portions that needed modification to fit the next app. This worked extremely well for the first few apps. Then I wanted to make a change that affected all my apps, which meant updating every single file the change applied to and hoping I didn't miss any. Once I considered what would happen when I had over a dozen apps to keep in sync, I quickly went looking for another solution.

Standardizing with Kustomize

For the time being I've settled on using Kustomize for templating. Everything gets deployed from one base template or another, with values updated to correspond to the actual application being deployed. This keeps all my applications deployed more or less identically and makes it quite clear what to change when adding new functionality such as programmatic monitoring.

I also leverage the ability to list required resources in a kustomization.yaml file to track all applications within a given namespace. This way I can simply run kubectl apply -k namespaces/apps and all my apps will deploy, rather than having to iterate over them.
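That namespace-level kustomization.yaml is little more than a list of per-app folders. A sketch, where the app names besides atuin are placeholders:

# namespaces/apps/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - atuin
  - someapp
  - anotherapp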

Base templates

I have a handful of bases that cover my usual needs:

backups
CronJob templates to trigger specific backup tasks.
deployment
A Kubernetes Deployment with a PVC, an Infisical secret, a Service and matching HTTP and HTTPS gateway routes.
deployservice
A Deployment with only a Service added to it; no routes, persistence or secrets.
noservice
A Deployment without the Service or gateway routes.
public-oidc
More or less a duplicate of deployment, but with X-Forwarded-Proto: https added to the HTTP gateway route to ensure all inbound traffic gets handled properly by OIDC rules (see the route sketch after this list).
statefulset
Similar to deployment but with a StatefulSet rather than a Deployment; the persistent volume claim gets incorporated into the StatefulSet definition.
helmchart
Deploys a Helm chart similar to the example above, but includes an Infisical secret and the pair of gateway routes.
helmchart-noaddons
A pure Helm chart deployment, present simply to keep my use of Kustomize consistent.
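For public-oidc, the header injection can be expressed with the Gateway API's RequestHeaderModifier filter. A sketch of the relevant route rule, where the hostname and gateway name are placeholders and svc matches the generic Service name used by the replacements shown later:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-http
spec:
  parentRefs:
    - name: traefik-gateway   # placeholder gateway name
      namespace: kube-system
  hostnames:
    - app.example.com
  rules:
    - filters:
        - type: RequestHeaderModifier
          requestHeaderModifier:
            set:
              - name: X-Forwarded-Proto
                value: https
      backendRefs:
        - name: svc
          port: 80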

At a future time I will likely rework these options to be more basic and add proper inheritance between them. Right now every yaml is re-created in each base, except for public-oidc, since it only adds the header.
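For concreteness, here is a sketch of how the deployment base hangs together; the path and file names are my illustration of a representative layout, not an exact listing of my tree:

# bases/deployment/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - pvc.yaml
  - infisicalsecret.yaml
  - service.yaml
  - httproute-http.yaml
  - httproute-https.yaml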

Per-App Kustomization

Specific settings for each app are handled by patching the matching Deployment or StatefulSet spec to include the relevant pod information. In addition, any ConfigMap values are created using a configMapGenerator, so content changes produce a new hashed ConfigMap name and the pods get restarted to pick them up. In cases where I do not need all the defined resources, in particular the PVC, I use the delete pvc stanza below as a patch.
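Putting that together, a per-app kustomization ends up looking something like the sketch below; the paths, the ConfigMap literal and the patch file name are illustrative, not my exact configuration:

# namespaces/apps/atuin/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../../bases/deployment
configMapGenerator:
  - name: atuin-config
    literals:
      - ATUIN_OPEN_REGISTRATION=false
patches:
  # pod-level settings: image, ports, env, probes
  - path: patches/deployment.yaml
# ...plus the replacements block from "example replacements" below,
# copied into each app's kustomization.yaml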

The primary per-application addition I make to almost every app is a gatus endpoints ConfigMap similar to the one below, ensuring that the app gets monitored. This has to be added per-app rather than in a template: since the endpoints are defined as nested yaml inside the ConfigMap data, modifying the values via a Kustomize replacements stanza would become overly complex.

Speaking of replacements, I end up with a significant number of them, consistently added so the final kubectl apply can find all the correct settings. In the example below, the Service name is pushed into the HTTPRoute's backendRefs, and the Deployment's app.kubernetes.io/appName label feeds the first segment of the route hostname, the managed secret's name, the Infisical secrets path, and the Service's port name and targetPort, using delimiters and indexes where only part of a field should change.

  • delete pvc
    $patch: delete
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc
    
  • gatus endpoints
    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        homelab.kubernetes.io/k8s-sidecar: gatus
      name: gatus-endpoints
    data:
      atuin-internal-gatus-endpoints.yaml: |
        endpoints:
          - name: Atuin
            url: http://atuin-svc.apps.svc.cluster.local/
            group: Internal/Apps
            interval: 5m
            alerts:
              - type: ntfy
            conditions:
              - "[STATUS] == 200"
      atuin-external-gatus-endpoints.yaml: |
        endpoints:
          - name: Atuin
            url: https://atuin.leechpepin.com/
            group: External
            interval: 5m
            alerts:
              - type: ntfy
            conditions:
              - "[STATUS] == 200"
    
  • example replacements
    replacements:
      - source:
          kind: Service
          name: svc
        targets:
          - select:
              kind: HTTPRoute
            options:
              create: true
            fieldPaths:
              - spec.rules.0.backendRefs.0.name
      - source:
          kind: Deployment
          name: app
          fieldPath: metadata.labels.[app.kubernetes.io/appName]
        targets:
          - select:
              kind: HTTPRoute
            options:
              create: true
              delimiter: "."
              index: 0
            fieldPaths:
              - spec.hostnames.0
          - select:
              kind: InfisicalSecret
            options:
              delimiter: "-"
              index: 0
            fieldPaths:
              - spec.managedSecretReference.secretName
          - select:
              kind: InfisicalSecret
            options:
              delimiter: "/"
              index: 2
            fieldPaths:
              - spec.authentication.universalAuth.secretsScope.secretsPath
          - select:
              kind: Service
            fieldPaths:
              - spec.ports.0.name
              - spec.ports.0.targetPort
      - source:
          kind: Deployment
          name: app
          fieldPath: metadata.labels.[app.kubernetes.io/appNamespace]
        targets:
          - select:
              kind: InfisicalSecret
            fieldPaths:
              - spec.managedSecretReference.secretNamespace
    

How I add a new App to my stack

  1. Create a new folder in the appropriate location (namespaces/<namespace>/<newApp>).
  2. Copy kustomization.yaml and update it with the appropriate app-specific settings.
  3. Create patches/ and extra/ content as applicable.
  4. Add the app to the parent folder's kustomization.yaml after validating that the deployment worked.

Kustomize shortcoming

To be clear, this shortcoming applies to my specific use case; it will not affect everyone. I've found workarounds for my issue, but I would ideally like to change things at some future date to be 'cleaner'.

The problem in particular is the eager expansion of any replacements or templating within a base template. I have a generic Service and a generic HTTP gateway route within my base template that need to be properly bound to one another, and from there to the underlying pod. To do that, while also providing an appropriate URI for the gateway, I need to leverage replacements to find the app name and port and substitute them into the resources. I do the same thing for my Infisical secret so it can find the correct path.

These replacement rules are identical from one app to the next, yet I must copy-paste them into each kustomization.yaml rather than reference a shared set. I could use a pre-processor to do further string replacement, but then I would lose the ability to simply run kubectl [apply|diff] -k ... and get the output.

Streamlining the deployment process

Although I typically just use targeted kubectl apply [...] commands when updating individual files or adding new applications, I have a justfile (details below) set up to help bootstrap and rebuild the entire cluster.

The steps follow the actual dependency chain to limit pending deployments and errors caused by resources coming up out of order. The cluster should be able to salvage itself regardless of ordering, but this ensures everything a step needs exists before the next one runs.

  1. (deploy/00-infisical.sh) Ensure the Infisical bootstrap secret is created so that all subsequent secrets can pull from Infisical.
  2. (kubectl apply -k deploy/00-infisical) Deploy the actual Infisical resources.
  3. Deploy Infrastructure resources
    • (deploy/01-infra.sh) CRDs for gateway, longhorn, and cert-manager
    • (kubectl apply -k deploy/01-infra) Deploy Cert-Manager
    • (kubectl apply -k deploy/01-infra) Deploy Custom Traefik Gateway with appropriate ports
    • (kubectl apply -k deploy/01-infra) Deploy Longhorn for persistent storage
    • (kubectl apply -k deploy/01-infra) Deploy Traefik for routing
  4. (kubectl apply -k deploy/10-apps) Deploy all the apps

The redeploy task ensures I can rebuild the Infisical secret if it ever needs rotating; diff shows me all pending changes, and apply actually applies them. They all leverage the relevant namespace-level kustomization.yaml files to track the apps to be deployed.

# Export variables to recipes as environment variables
set export := true

# Override on the command line (verbose=true) to show kubectl output
verbose := "false"
diff := "kubectl diff -k"
apply := "kubectl apply -k"
redirect := if verbose == "true" { "" } else { "> /dev/null" }

default:
    @just --list

infisical_bootstrap_secret recreate="":
    @echo "-- Adding Infisical Bootstrap secret --"
    @./deploy/00-infisical.sh {{ recreate }}

# [ ... ]

_apply_infisical:
    @echo "-- Applying Infisical resources --"
    @{{ apply }} deploy/00-infisical {{ redirect }}
_apply_infra:
    @echo "-- Applying Infra resources --"
    @{{ apply }} deploy/01-infra {{ redirect }}
    @./deploy/01-infra.sh
_apply_apps:
    @echo "-- Applying Apps --"
    @{{ apply }} deploy/10-apps {{ redirect }}

_apply_post: _apply_infra _apply_apps

apply: _apply_infisical _apply_post
_deploy recreate="": _apply_infisical (infisical_bootstrap_secret recreate) _apply_post
deploy: _deploy

redeploy: (_deploy "--recreate")