```
containers/<name>/
├── Dockerfile     (built by Dagger on the k8s runner)
├── default.nix    (built by nix-build on the ringtail runner)
└── (optional scripts, configs)
```
A container can have one or both build files. The directory name becomes the image name: registry.ops.eblu.me/blumeops/<name>.
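As a minimal sketch of the Dockerfile side, a hypothetical container directory might contain something like the following (the app name, base image, and version are placeholders, not a real container in this repo):

```dockerfile
# containers/myapp/Dockerfile (hypothetical example)
FROM alpine:3.20

# The release tooling reads the version from this ARG (see "Release" below).
ARG CONTAINER_APP_VERSION=1.2.3

RUN apk add --no-cache myapp
ENTRYPOINT ["myapp"]
```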
Or with nix-build directly (requires nix, e.g. on ringtail):
```
nix-build containers/<name>/default.nix -o result
```
3. Release
Container builds trigger automatically when changes to containers/<name>/ are merged to main. Both workflows fire and each skips if the relevant build file is absent.
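The trigger in each workflow is presumably a paths-filtered push event, along these lines (a sketch of the assumed filter, not the actual contents of build-container.yaml):

```yaml
# build-container.yaml trigger (illustrative; exact filters are an assumption)
on:
  push:
    branches: [main]
    paths:
      - "containers/**"
```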
To trigger a manual build (e.g. from a branch or to rebuild at a specific commit):
```
mise run container-build-and-release <name>
mise run container-build-and-release <name> --ref <commit-sha>
```
The version (X.Y.Z) is extracted from ARG CONTAINER_APP_VERSION= in the Dockerfile or version = "..." in default.nix. The SHA is the short (7-char) commit hash.
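The extraction could be done with something like the following sketch; the actual workflow logic is not shown here and may differ:

```shell
# Hypothetical sketch of version + SHA extraction for the image tag.
set -eu
tmp=$(mktemp -d)
cat > "$tmp/Dockerfile" <<'EOF'
FROM alpine:3.20
ARG CONTAINER_APP_VERSION=3.9.1
EOF

# Version: the value of the ARG in the Dockerfile.
version=$(sed -n 's/^ARG CONTAINER_APP_VERSION=//p' "$tmp/Dockerfile")

# SHA: short 7-char commit hash (faked here; normally `git rev-parse --short=7 HEAD`).
sha=74029e1

echo "v${version}-${sha}"   # -> v3.9.1-74029e1
rm -rf "$tmp"
```

For a default.nix, the equivalent would grep the `version = "..."` attribute instead of the Dockerfile ARG.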
Check available images and tags with:
```
mise run container-list
```
4. Update k8s manifests
Change the image reference in argocd/manifests/<service>/deployment.yaml:
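A fragment of what that change might look like (the field path is the standard k8s Deployment shape; the service name and tag are illustrative):

```yaml
# argocd/manifests/<service>/deployment.yaml (fragment; names are placeholders)
spec:
  template:
    spec:
      containers:
        - name: myapp
          image: registry.ops.eblu.me/blumeops/myapp:v3.9.1-74029e1
```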
Container image tags include the git commit SHA they were built from (e.g. v3.9.1-74029e1). When a PR is squash-merged, the original branch commits are replaced by a single new commit on main — the SHA in the image tag no longer exists on main. After branch cleanup (30 days), the SHA becomes unreachable and the container loses source traceability.
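Given the vX.Y.Z-&lt;sha&gt; tag format described above, a quick way to check whether a tag's commit is still reachable is to strip the SHA out of the tag and ask git about ancestry (a sketch using standard git commands, not repo tooling):

```shell
# Extract the short SHA from an image tag.
tag="v3.9.1-74029e1"
sha="${tag##*-}"          # strip everything up to the last '-'
echo "$sha"               # -> 74029e1

# Inside a blumeops checkout you could then verify reachability with:
#   git merge-base --is-ancestor "$sha" origin/main && echo "on main"
```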
The rule: Production manifests must reference images built from a commit on main. After merging a PR that changed containers/<name>/:
The merge to main automatically triggers a rebuild (the build-container.yaml / build-container-nix.yaml workflows fire on pushes to main that touch containers/**)
Wait for the workflow to complete — check at https://forge.ops.eblu.me/eblume/blumeops/actions
Find the new main-SHA tag:
```
mise run container-list <name>
```
Tags marked [main] were built from a commit on main; tags marked [branch] are from PR branches
Commit a C0 follow-up updating the manifest to use the [main] tag:
This follow-up C0 is expected and routine — it’s the cost of squash-merge + SHA-tagged containers.
Common Patterns
Existing containers demonstrate several build approaches:
| Pattern | Example | Notes |
|---|---|---|
| Alpine package install | transmission | Simplest — install from apk |
| Go from source | miniflux | Clone upstream, go build |
| Multi-stage with Node + Go | navidrome | Separate UI and backend build stages |
| Multi-stage Elixir | teslamate | Elixir release with Node assets |
| Runtime tarball download | kiwix-serve | Download pre-built binary with arch detection |
| Nix dockerTools | nettest | buildLayeredImage with nixpkgs tools |
**transmission**
containers/transmission/Dockerfile — Installs transmission-daemon directly from Alpine packages. Good starting point for services available in apk.
**miniflux**
containers/miniflux/Dockerfile — Two-stage Go build. Clones upstream at a pinned version tag, runs make, copies the binary into a minimal Alpine runtime.
**navidrome**
containers/navidrome/Dockerfile — Three-stage build with separate Node.js UI compilation, Go backend build with CGO (taglib), and a minimal Alpine runtime with ffmpeg.
**teslamate**
containers/teslamate/Dockerfile — Two-stage Elixir build with Node.js asset compilation. Uses Debian-based images due to Elixir/OTP dependencies.
**kiwix-serve**
containers/kiwix-serve/Dockerfile — Downloads a pre-built binary from upstream, with architecture detection for cross-platform support.
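Arch detection in this kind of Dockerfile typically maps `uname -m` output onto the upstream's asset naming. A sketch of the idea (the mapping and asset names here are illustrative, not copied from the kiwix-serve Dockerfile):

```shell
# Map the machine architecture to a download-asset suffix (illustrative).
detect_arch() {
  case "$1" in
    x86_64)        echo "x86_64" ;;
    aarch64|arm64) echo "aarch64" ;;
    armv7l)        echo "armhf" ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

arch=$(detect_arch "$(uname -m)")
echo "$arch"
```

The resolved value is then interpolated into the upstream download URL before the tarball is fetched and unpacked.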
**nettest (nix)**
containers/nettest/default.nix — Uses dockerTools.buildLayeredImage with buildEnv to merge nixpkgs tools (curl, jq, dnsutils, bash). Runs alongside the existing Dockerfile; the nix variant is tagged :version-nix in the registry.
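A default.nix following this pattern might look roughly like the sketch below; the tool list, tag, and structure are assumptions for illustration, not the actual nettest expression:

```nix
# Hypothetical buildLayeredImage expression; not the real containers/nettest/default.nix.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildLayeredImage {
  name = "registry.ops.eblu.me/blumeops/nettest";
  tag = "0.1.0-nix";

  # buildEnv merges the tools into one /bin tree for the image contents.
  contents = pkgs.buildEnv {
    name = "nettest-env";
    paths = with pkgs; [ bash curl jq dnsutils ];
  };

  config.Cmd = [ "/bin/bash" ];
}
```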