I’ve been playing a lot with Nix over the last year, using it for my dev and build environments.

The major draw with Nix is the deep focus on build reproducibility:

  • No network access, so all dependencies must be downloaded ahead of time
  • No host system access, so all dependencies (build-time and runtime, library and binary) must be declared
  • Outputs are content-addressed, so identical builds and dependencies are de-duplicated

As a result, the same Nix definitions should result in the same output (barring things like timestamps and non-determinism).
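
As a quick illustration of the first point: the one sanctioned way for a build to reach the network is a fixed-output fetch, where the expected hash of the download is declared up front. A minimal sketch (the URL is purely illustrative, and lib.fakeHash is a placeholder you swap for the real hash once Nix reports it):

# fetch.nix (sketch) - a fixed-output derivation: network access is only
# allowed because the result is pinned to a hash declared ahead of time
{ pkgs ? import <nixpkgs> { } }:
pkgs.fetchurl {
    url = "https://example.org/some-dependency.tar.gz";
    hash = pkgs.lib.fakeHash;
}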

Nix definitions can be nested arbitrarily so outputs from one Nix expression can become inputs to another expression, and thanks to the content-addressing scheme, changes are automatically rebuilt only when needed.
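
For example (a minimal sketch, with made-up names), the store path of one derivation can be interpolated straight into another:

# nesting.nix (sketch) - one derivation's output feeding another
let
    pkgs = import <nixpkgs> { };
    # Editing this text changes its /nix/store path...
    config = pkgs.writeText "app-config" "greeting = hello";
in
# ...so this derivation, which interpolates that path, is rebuilt automatically.
pkgs.runCommand "bundle" { } ''
    mkdir -p $out
    cp ${config} $out/config.toml
''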

The nixpkgs project produces a massive set of build definitions that leverage Nix’s benefits - as well as various build helpers providing integration points with other languages and their dependency definitions, including Rust, Go, C, …

All this requires a bit of extra work, and the occasional debugging - it turns out a lot of build tools assume they can access the internet willy-nilly - but once you’ve managed that, what you end up with is a far more reliable build process.

This article assumes some level of familiarity with Nix and nixpkgs - if you’re not already familiar with both, I’d recommend starting from NixOS, or following the excellent guide from fasterthanlime.

Pretext

Note: all the code in this article is viewable at JaimeValdemoros/multi-arch-demo. Each change in this article is roughly one commit.

The starting point is a project building a single binary and wrapping it up in a docker image.

Let’s start off with a simple flake.nix:

# flake.nix
{
    inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
        flake-utils.url = "github:numtide/flake-utils";
    };
    outputs = { self, nixpkgs, flake-utils }:
        flake-utils.lib.eachDefaultSystem (system:
            let
                pkgs = nixpkgs.legacyPackages.${system};
                myproject = pkgs.rustPlatform.buildRustPackage {
                    pname = "multi-arch-demo";
                    version = "1.0.0";
                    src = ./.;
                    cargoLock = {
                        lockFile = ./Cargo.lock;
                    };
                };
            in {
                packages = { inherit myproject; };
                devShell = pkgs.mkShell {
                    buildInputs = [ pkgs.cargo ];
                };
            }
        );
}

We can run nix develop to enter the dev shell, then use cargo to initialise the Rust project:

nix develop
cargo init
#     Creating binary (application) package
# note: see more `Cargo.toml` keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
cargo c
#    Checking multi-arch-demo v0.1.0 (/home/nixos/code/multi-arch-demo)
#    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.19s
git add .
nix run .#myproject
# Hello, world!

Since we’re going to be working with multiple architectures, let’s tweak our Rust code so we can tell what we’re running:

// src/main.rs
use std::env::consts::{ARCH, OS};

fn main() {
    println!("Hello from {OS}/{ARCH}!");
}
cargo run
# Hello from linux/x86_64!

Now let’s add in a docker build for good measure:

# flake.nix
{
    inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
        flake-utils.url = "github:numtide/flake-utils";
    };
    outputs = { self, nixpkgs, flake-utils }:
        flake-utils.lib.eachDefaultSystem (system:
            let
                pkgs = nixpkgs.legacyPackages.${system};
                myproject = pkgs.rustPlatform.buildRustPackage {
                    pname = "multi-arch-demo";
                    version = "1.0.0";
                    src = ./.;
                    cargoLock = {
                        lockFile = ./Cargo.lock;
                    };
                };
                docker = pkgs.dockerTools.buildLayeredImage {
                    name = "localhost/myproject";
                    config = {
                        Cmd = ["${myproject}/bin/multi-arch-demo"];
                    };
                };
            in {
                packages = { inherit myproject docker; };
                devShell = pkgs.mkShell {
                    buildInputs = [ pkgs.cargo ];
                };
            }
        );
}

and test it out:

nix build .#docker
podman load <result
# Loaded image: localhost/myproject:fjzrf96i0smk5iyp9n8plz2yh77pa7wf
podman run localhost/myproject:fjzrf96i0smk5iyp9n8plz2yh77pa7wf
# Hello from linux/x86_64!

Excellent! We’ve got our starting point.

Let’s have a closer look at what’s happening when we do nix build .#docker, though.

$ nix flake show . --all-systems
# git+file:///home/nixos/code/multi-arch-demo
# ├───devShell
# │   ├───aarch64-darwin: development environment 'nix-shell'
# │   ├───aarch64-linux: development environment 'nix-shell'
# │   ├───x86_64-darwin: development environment 'nix-shell'
# │   └───x86_64-linux: development environment 'nix-shell'
# └───packages
#     ├───aarch64-darwin
#     │   ├───docker: package 'myproject.tar.gz'
#     │   └───myproject: package 'multi-arch-demo-1.0.0'
#     ├───aarch64-linux
#     │   ├───docker: package 'myproject.tar.gz'
#     │   └───myproject: package 'multi-arch-demo-1.0.0'
#     ├───x86_64-darwin
#     │   ├───docker: package 'myproject.tar.gz'
#     │   └───myproject: package 'multi-arch-demo-1.0.0'
#     └───x86_64-linux
#         ├───docker: package 'myproject.tar.gz'
#         └───myproject: package 'multi-arch-demo-1.0.0'

Our clever flake-utils.lib.eachDefaultSystem helper is actually defining several sets of packages, one per platform - and when we run nix build or nix run, nix is automagically picking the right platform for our system to build and/or run.
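
Roughly speaking, the flake outputs end up shaped like this (a sketch, with the actual derivations elided as placeholder strings):

# The shape eachDefaultSystem effectively produces: our per-system function,
# mapped over each default system and nested under that system's name.
{
    packages = {
        x86_64-linux = { myproject = "<drv>"; docker = "<drv>"; };
        aarch64-linux = { myproject = "<drv>"; docker = "<drv>"; };
        # ...plus the darwin systems
    };
    devShell = {
        x86_64-linux = "<shell drv>";
        aarch64-linux = "<shell drv>";
        # ...
    };
}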

So what happens if we call the full path for x86_64 (the platform we’re running on)?

nix build .#packages.x86_64-linux.myproject
# <no output>
nix run .#packages.x86_64-linux.myproject
# Hello from linux/x86_64!

Great - and what if we try to build for a different platform?

nix build .#packages.aarch64-linux.myproject
# error: a 'aarch64-linux' with features {} is required to build '/nix/store/kwx5j3yhx7ipwcnfk9mh9an8m1jg7bql-cargo-vendor-dir.drv', but I am a 'x86_64-linux' with features {benchmark, big-parallel, kvm, nixos-test}

Ah.

Cross-compilation

So what’s happening here?

In order to build an aarch64 package, we first need to download aarch64-based build tools. Unfortunately those tools only run on aarch64, and we’re on an x86_64 system.

There’s nothing that says a build tool has to produce outputs for the same platform it’s running on - it’s just that this is generally a safe default, since people usually develop and test on the platform they’re running on. Convincing tools to emit output for a different platform often requires some configuration.

Building also often involves linking to libraries installed on the system, and those libraries are generally compiled for that architecture. So if you’re compiling for another platform and you need to link against external libraries, you need to get hold of the right copies and make them available to the build tool.

Fortunately, nixpkgs has already won half the battle for us - we’ve got a completely reproducible description of the build inputs we need, so Nix is happy enough to go and fetch those for other architectures.

The remaining problem is getting a version of the compiler that will run on our computer (the build architecture), but that will build output for our host architecture. (nix.dev has a pretty good overview of the build and host terms.)

Configuring compilers to cross-compile is tedious and often messy, but fortunately nixpkgs has put a lot of work into wrapping it all up behind one simple attribute - pkgsCross.
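
Under the hood, each pkgsCross entry is (roughly speaking - this is a conceptual sketch, not how we’ll invoke it) just nixpkgs re-imported with an explicit crossSystem, so every package in that set is built on one platform to run on another:

# cross.nix (sketch) - what a pkgsCross entry amounts to
let
    cross = import <nixpkgs> {
        localSystem = "x86_64-linux";                            # where the compilers run (build)
        crossSystem = { config = "aarch64-unknown-linux-gnu"; }; # what the outputs target (host)
    };
in {
    build = cross.stdenv.buildPlatform.system; # "x86_64-linux"
    host = cross.stdenv.hostPlatform.system;   # "aarch64-linux"
}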

Let’s give it a try!

Instead of invoking pkgs.rustPlatform and pkgs.dockerTools, we can invoke pkgs.pkgsCross.aarch64-multiplatform.rustPlatform and pkgs.pkgsCross.aarch64-multiplatform.dockerTools:

# flake.nix
{
    inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
        flake-utils.url = "github:numtide/flake-utils";
    };
    outputs = { self, nixpkgs, flake-utils }:
        flake-utils.lib.eachDefaultSystem (system:
            let
                pkgs = nixpkgs.legacyPackages.${system};
                myproject-aarch64 = pkgs.pkgsCross.aarch64-multiplatform.rustPlatform.buildRustPackage {
                    pname = "multi-arch-demo";
                    version = "1.0.0";
                    src = ./.;
                    cargoLock = {
                        lockFile = ./Cargo.lock;
                    };
                };
                docker-aarch64 = pkgs.pkgsCross.aarch64-multiplatform.dockerTools.buildLayeredImage {
                    name = "localhost/myproject";
                    config = {
                        Cmd = ["${myproject}/bin/multi-arch-demo"];
                    };
                };
            in {
                packages = { inherit myproject-aarch64 docker-aarch64; };
                devShell = pkgs.mkShell {
                    buildInputs = [ pkgs.cargo pkgs.qemu ];
                };
            }
        );
}
nix build .#packages.x86_64-linux.myproject-aarch64
nix run .#packages.x86_64-linux.myproject-aarch64
# error: unable to execute '/nix/store/yq6amandg13avpvr7v90w31xhdjvg7ch-multi-arch-demo-aarch64-unknown-linux-gnu-1.0.0/bin/multi-arch-demo': Exec format error
file result/bin/multi-arch-demo
# result/bin/multi-arch-demo: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /nix/store/nkzsi9nnnk5ldqb5d45z02yzxsf1hcfx-glibc-aarch64-unknown-linux-gnu-2.40-66/lib/ld-linux-aarch64.so.1, for GNU/Linux 3.10.0, not stripped

Excellent, that seems to have built an aarch64 binary (although we can’t run it directly, being on x86_64).

However, we can run it under qemu (which we’ve added to our devShell):

nix build .#packages.x86_64-linux.myproject-aarch64
qemu-aarch64 result/bin/multi-arch-demo
# Hello from linux/aarch64!

Making it a bit tidier

So far, we’ve modified the definition - but we probably still want access to our original x86_64 package. We’d also like to avoid repeating the same blocks of code over and over.

Can we do better?

Let’s go back to the previous (non-cross-compiled) version, and factor the build of our program out into a separate file:

# default.nix
{
    rustPlatform,
    dockerTools
}:
let
    myproject = rustPlatform.buildRustPackage {
        pname = "multi-arch-demo";
        version = "1.0.0";
        src = ./.;
        cargoLock = {
            lockFile = ./Cargo.lock;
        };
    };
    docker = dockerTools.buildLayeredImage {
        name = "localhost/myproject";
        config = {
	        Cmd = ["${myproject}/bin/multi-arch-demo"];
        };
    };
in { inherit myproject docker; }

We can then call that from our flake.nix:

# flake.nix
{
    inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
        flake-utils.url = "github:numtide/flake-utils";
    };
    outputs = { self, nixpkgs, flake-utils }:
        flake-utils.lib.eachDefaultSystem (system: let
            pkgs = nixpkgs.legacyPackages.${system};
            build = pkgs.callPackage ./default.nix {};
        in {
            defaultPackage = self.packages.${system}.myproject;
            packages = { inherit (build) myproject docker; };
            devShell = with pkgs; mkShell {
                buildInputs = [ cargo qemu ];
            };
        }
    );
}

So what’s happening here?

nix-pills has a good overview, but effectively callPackage is a convenience function defined in nixpkgs.

Instead of pulling out individual packages from nixpkgs ourselves, we define a function (in our default.nix) that declares what packages it needs (in this case rustPlatform and dockerTools).

We then pass that function to callPackage, and callPackage handles passing the right inputs in.
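
In our case, the call boils down to something like this (a sketch of the equivalent by-hand version):

# What pkgs.callPackage ./default.nix {} works out to: callPackage inspects
# the function's argument names and supplies each one from pkgs; the second
# argument ({} here) lets us override any of them explicitly.
import ./default.nix {
    inherit (pkgs) rustPlatform dockerTools;
}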

Why is this useful?

Well, let’s try using pkgsCross again. We don’t need to touch our default.nix, just flake.nix:

# flake.nix
{
    inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
        flake-utils.url = "github:numtide/flake-utils";
    };
    outputs = { self, nixpkgs, flake-utils }:
        flake-utils.lib.eachDefaultSystem (system: let
            pkgs = nixpkgs.legacyPackages.${system};
            native = pkgs.callPackage ./default.nix {};
            aarch64 = pkgs.pkgsCross.aarch64-multiplatform.callPackage ./default.nix {};
        in {
            defaultPackage = self.packages.${system}.native.myproject;
            packages = { inherit native aarch64; };
            devShell = with pkgs; mkShell {
                buildInputs = [ cargo qemu ];
            };
        }
    );
}

In a one-line change, we’ve added cross-compilation across the entire build stack:

$(nix build .#native.myproject --print-out-paths)/bin/multi-arch-demo
# Hello from linux/x86_64!

qemu-aarch64 $(nix build .#aarch64.myproject --print-out-paths)/bin/multi-arch-demo
# Hello from linux/aarch64!

We can also load both of our docker images:

podman load <$(nix build .#native.docker --print-out-paths)
# Loaded image: localhost/myproject:v1dhc61jbgnmlv3hhbjfilqvyahlwllb

podman load <$(nix build .#aarch64.docker --print-out-paths)
# Loaded image: localhost/myproject:41kx0zwqzvan7k4ls1hiqis33bcjxpk4

Although of course we can’t run the aarch64 image:

podman run localhost/myproject:41kx0zwqzvan7k4ls1hiqis33bcjxpk4
# WARNING: image platform (linux/arm64) does not match the expected platform (linux/amd64)
# {"msg":"exec container process `/nix/store/lkab24sba9cck5c64v2x7c5piqx9ln0b-multi-arch-demo-aarch64-unknown-linux-gnu-1.0.0/bin/multi-arch-demo`: Exec format error","level":"error","time":"2025-08-27T18:46:04.766159Z"}

Unless we make qemu available to podman, for example via a docker image (warning: this makes permanent changes to your system):

sudo podman run --rm --privileged docker.io/multiarch/qemu-user-static --reset -p yes
# ...
# Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64
# ...
podman run localhost/myproject:41kx0zwqzvan7k4ls1hiqis33bcjxpk4
# WARNING: image platform (linux/arm64) does not match the expected platform (linux/amd64)
# Hello from linux/aarch64!

Building a multi-platform docker image

So now we could be done - all we need to do is give our docker images different tags, and the user on each platform can pull the right one for their architecture.

That’s a bit annoying though, isn’t it?

Helpfully, docker supports something called multi-platform builds.

This allows a single tag to reference multiple images, one per architecture. All your users reference the same tag, but the runtime selects the right version to pull based on the platform it’s running on.

Unfortunately, there isn’t already a builder in nixpkgs that will do this for us (at least, looking at pkgs.dockerTools), so we’ll have to get our hands dirty.

Getting started

Before we do anything else, let’s prototype this by hand to make sure it works.

Our goal is to produce a single multi-platform archive that we can then run images from.

Ideally we could use something like skopeo to manage this for us, since it’s great for copying images around without a daemon. Unfortunately it doesn’t handle creating multi-image manifests (see skopeo/issues/1136), so we’ll fall back to using podman.

Following the guidance in the podman blog, let’s create the manifest, adding in our two images:

podman manifest create localhost/multi-arch-demo:test \
	docker-archive:$(nix build .#packages.x86_64-linux.native.docker --print-out-paths) \
	docker-archive:$(nix build .#packages.x86_64-linux.aarch64.docker --print-out-paths)
podman manifest inspect localhost/multi-arch-demo:test
# {
#     "schemaVersion": 2,
#     "mediaType": "application/vnd.oci.image.index.v1+json",
#     "manifests": [
#         {
#             "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
#             "size": 1572,
#             "digest": "sha256:18282a02ce8fbc4d9b2b4652810449fcfe0af06a5a079a6c041f66e2c5422c73",
#             "platform": {
#                 "architecture": "amd64",
#                 "os": "linux"
#             }
#         },
#         {
#             "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
#             "size": 1082,
#             "digest": "sha256:2542f92b8adaa6a3ff277e5161378e21a67074d2f84781650438f112d2706a45",
#             "platform": {
#                 "architecture": "arm64",
#                 "os": "linux"
#             }
#         }
#     ]
# }

Did it work?

podman run --platform linux/amd64 localhost/multi-arch-demo:test
# Trying to pull localhost/multi-arch-demo:test...
# WARN[0000] Failed, retrying in 1s ... (1/3). Error: initializing source docker://localhost/multi-arch-demo:test: pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused
# WARN[0001] Failed, retrying in 1s ... (2/3). Error: initializing source docker://localhost/multi-arch-demo:test: pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused
# WARN[0002] Failed, retrying in 1s ... (3/3). Error: initializing source docker://localhost/multi-arch-demo:test: pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused
# Error: unable to copy from source docker://localhost/multi-arch-demo:test: initializing source docker://localhost/multi-arch-demo:test: pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused

Hmm.

podman/pull/14827 suggests this should work, so it’s unclear what’s going on here.

We can, in a slightly roundabout way, set up a local registry, push the images there, and have podman run from the registry:

# Launch a local registry, listening on port 5000
podman run --name registry -d -p 5000:5000 docker.io/library/registry:3
# Create multi-platform image and push to registry (note the port after localhost)
podman manifest create localhost:5000/multi-arch-demo:test \
	docker-archive:$(nix build .#packages.x86_64-linux.native.docker --print-out-paths) \
	docker-archive:$(nix build .#packages.x86_64-linux.aarch64.docker --print-out-paths)
podman push --tls-verify=false localhost:5000/multi-arch-demo:test
# Getting image list signatures
# Copying 2 images generated from 2 images in list
# <snip>
# Storing list signatures
podman run --tls-verify=false --platform linux/amd64 localhost:5000/multi-arch-demo:test
# Trying to pull localhost:5000/multi-arch-demo:test...
# <snip>
# Hello from linux/x86_64!
podman run --tls-verify=false --platform linux/aarch64 localhost:5000/multi-arch-demo:test
# Trying to pull localhost:5000/multi-arch-demo:test...
# <snip>
# Hello from linux/aarch64!

This works, but it’s a bit tedious. Every time we want to run a different platform, podman pulls a different image (from our local registry, but still...), copies it into the local image store, gives it that tag (as a single image, not a multi-image manifest), and runs it.

What about saving to an OCI archive?

Calling the recommended command from Working with container image manifest lists, but passing oci-archive and a folder path as the destination, we get something that seems to write both images out:

podman manifest push --all multi-arch-demo:test oci-archive:./manifest
# Getting image list signatures
# Copying 2 images generated from 2 images in list
# Copying image sha256:d67e6ad9eaf5f70ffe0482498940619cbb3657dcb4cb66d62e8da5087385fc07 (1/2)
# <snip>
# Copying image sha256:2ff30c2c41e621da64cb871d969e9434ca33b79e0b50c10dc58b6fbd005b0afb (2/2)
# <snip>
# Storing list signatures

However when we try to load it back, we seem to lose the manifest information:

podman manifest rm localhost/multi-arch-demo:test
podman load -i manifest
# Getting image source signatures
# <snip>
# Writing manifest to image destination
# Loaded image: sha256:3fe782d7d7b8bb481f73ee6650105844f39da8222b887c3295a271fb75e6d897
podman manifest inspect localhost/multi-arch-demo:test
# Error: reading image "docker://localhost/multi-arch-demo:test": pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused
podman image list
# REPOSITORY                  TAG         IMAGE ID      CREATED       SIZE
# <none>                      <none>      3fe782d7d7b8  55 years ago  44.3 MB

So we’ve got two issues:

  • Podman can’t run a container directly from a manifest
  • Podman can’t load a multi-image manifest back out of an OCI archive

At the very least we have an archive, even if podman can’t load it - for example, we can use skopeo to copy the image to the local registry we set up earlier. This seems to bypass both issues, and podman can then run it:

skopeo copy --all --dest-tls-verify=false oci-archive:./manifest docker://localhost:5000/multi-arch-demo:test
# Copying 2 images generated from 2 images in list
# <snip>
podman run --platform linux/amd64 --tls-verify=false localhost:5000/multi-arch-demo:test
# Trying to pull localhost:5000/multi-arch-demo:test...
# Getting image source signatures
# <snip>
# Hello from linux/x86_64!
podman run --platform linux/aarch64 --tls-verify=false localhost:5000/multi-arch-demo:test
# Trying to pull localhost:5000/multi-arch-demo:test...
# Getting image source signatures
# <snip>
# Hello from linux/aarch64!

Depending on your needs, you might be done!

  • Construct your images with nix
  • Create a multi-image manifest with podman
  • Save it to an OCI archive with podman
  • Pass around the OCI archive between systems
  • Sync the OCI archive to a registry with skopeo

Packaging up as a nix step

We can try to package this up as a nix derivation:

let build-script = pkgs.writeShellScript "build" ''
    set -euo pipefail
    MANIFEST="localhost/multi-arch-demo:latest"
    HOME=$TMP ${pkgs.podman}/bin/podman --log-level trace manifest create "$MANIFEST" \
        docker-archive:${self.packages.${system}.native.docker} \
        docker-archive:${self.packages.${system}.aarch64.docker}
    HOME=$TMP ${pkgs.podman}/bin/podman manifest push "$MANIFEST" oci-archive:$out
'';
in derivation {
    name = "multi-arch-demo";
    builder = "${pkgs.bash}/bin/bash";
    args = [ build-script ];
    inherit system;
};

Unfortunately this doesn’t quite work:

nix build
# error: builder for '/nix/store/w6svn8yd49cswgz9ygq3m88v13bp8nf4-multi-arch-demo.drv' failed with exit code 127;
#        last 2 log lines:
#        > Creating manifest
#        > /nix/store/pjwzix05xclvwc6lwc9jzdxsdzajzaf6-build: line 5: /nix/store/dmqhv59fkh8rrdbxzzbxpnzrrv4cvxbc-buildah-wrapper-1.41.3/bin/podman: No such file or directory
#        For full logs, run 'nix log /nix/store/w6svn8yd49cswgz9ygq3m88v13bp8nf4-multi-arch-demo.drv'.

If we run it under strace, we can see what’s going wrong:

> statfs("/sys/fs/cgroup", 0xc000051400)  = -1 ENOENT (No such file or directory)
> write(2, "Error: no such file or directory"..., 33Error: no such file or directory

It seems Nix’s hermetic builds are a bit too hermetic for podman - podman requires access to system features (like cgroups) that the Nix sandbox is unlikely to allow.

Various other tools (buildah, skopeo, oras, regctl…) seem to have support for manifests, but I haven’t managed to put together a process that:

  • Takes two images
  • Combines them into a single image output
  • Does so without requiring an intermediate registry

In the meantime, let’s try a different approach.

Changing tack

So, we need to upload the images to some kind of registry, but Nix builds are done without a network. Does that mean we’re scuppered?

While Nix can’t hermetically build something that requires network access, it can construct a reproducible script that performs that network access later, with all of its dependencies perfectly pinned - so we still get the benefits of Nix.

Let’s see what that would look like:

# flake.nix
{
    inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
        flake-utils.url = "github:numtide/flake-utils";
    };
    outputs = { self, nixpkgs, flake-utils }:
        flake-utils.lib.eachDefaultSystem (system: let
            pkgs = nixpkgs.legacyPackages.${system};
            native = pkgs.callPackage ./default.nix {};
            aarch64 = pkgs.pkgsCross.aarch64-multiplatform.callPackage ./default.nix {};
        in {
            defaultPackage = pkgs.writeShellScriptBin "manifest" ''
              set -euo pipefail
              MANIFEST="$1"
              PUSH_ARGS=( "''${@:2}" )
              # Store images in temp directory instead of polluting user's store
              TMPDIR="$(${pkgs.mktemp}/bin/mktemp -d)"
              HOME=$TMPDIR ${pkgs.podman}/bin/podman manifest create "$MANIFEST" \
                docker-archive:${self.packages.${system}.native.docker} \
                docker-archive:${self.packages.${system}.aarch64.docker}
              HOME=$TMPDIR ${pkgs.podman}/bin/podman manifest push "''${PUSH_ARGS[@]}" "$MANIFEST"
              rm -r "$TMPDIR"
            '';
            packages = { inherit native aarch64; };
            devShell = with pkgs; mkShell {
                buildInputs = [ cargo qemu ];
            };
        }
    );
}

What’s happening here?

We’re creating a script that:

  • Pins the exact versions of bash, mktemp and podman it needs
  • Sets up a temporary $HOME for podman to store images in
  • Asks podman to create a manifest, using the images we’ve defined
  • Pushes that manifest to the right registry (with any user-supplied arguments)

Let’s see what that looks like when we build it and run it:

nix build
cat ./result/bin/manifest
# #!/nix/store/4bacfs7zrg714ffffbjp57nsvcz6zfkq-bash-5.3p3/bin/bash
# set -euo pipefail
# MANIFEST="$1"
# PUSH_ARGS=( "${@:2}" )
# # Store images in temp directory instead of polluting user's store
# TMPDIR="$(/nix/store/cwb8mv0iw7arj5glfhq1044bqj3rd6ll-mktemp-1.7/bin/mktemp -d)"
# HOME=$TMPDIR /nix/store/5j033lyhirm09p8pk031y1py2f42afm7-podman-5.6.0/bin/podman manifest create "$MANIFEST" \
#   docker-archive:/nix/store/q9lkvzfcf9lsfwkvc6aingg97g6dhgyp-myproject.tar.gz \
#   docker-archive:/nix/store/nvzyyb8qiqp057zmvms03shf07w5kfnl-myproject.tar.gz
# HOME=$TMPDIR /nix/store/5j033lyhirm09p8pk031y1py2f42afm7-podman-5.6.0/bin/podman manifest push "${PUSH_ARGS[@]}" "$MANIFEST"
# rm -r "$TMPDIR"
nix run .# -- localhost:5000/multi-arch-demo:test --tls-verify=false
# ee392ded145eb2182c9c9507578f138b7adb4da52d080325f6aeb2b69cbc8bfb
# Getting image list signatures
# Copying 2 images generated from 2 images in list
# <snip>

Did it work?

podman run --tls-verify=false localhost:5000/multi-arch-demo:test
# Trying to pull localhost:5000/multi-arch-demo:test...
# Getting image source signatures
# <snip>
# Hello from linux/x86_64!

Nice!

Not the ideal solution - it would be nice if we could create a local archive and copy it around instead of relying on a registry - but does the job well enough to move on to the next section.

Running it in CI

Well, it seems multi-platform images aren’t super well supported across the OCI ecosystem. I’ve found various issues flagging up holes in manifest support and subsequent attempts to fix them, but it’s still not quite as simple as passing around normal images.

That’s great if you have access to a container registry, but trying to manipulate these images locally is a bit more of a pain.

Fortunately we’ve been able to work around it with a bit of a ‘trick’: generating a script rather than a binary artifact. This still gets us the benefits of Nix - any time you call nix build or nix run, it’ll re-evaluate all the transitive dependencies, including the Rust builds for both architectures and the corresponding docker images, rebuild them if necessary, and (if you’ve called nix run) push the resulting image.

If you have a CI pipeline with an appropriate cache, you can run a single nix run command that

  • Doesn’t require anything to be installed (beyond nix)
  • Uses the cache effectively
  • Reproducibly builds and pushes your images

Let’s see what that might look like:

name: Publish Docker image using Nix
on:
  pull_request:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-22.04
    environment: build
    permissions:
      contents: read
      actions: read
      packages: write
      checks: write
    steps:
      # Checkout repo
      - name: git checkout
        uses: actions/checkout@v3
      # Install Nix and set up a local Nix store under /nix/store
      - name: Install Nix
        uses: nixbuild/nix-quick-install-action@v30
      # Run our script
      - name: Build and push
        run: nix run .# -- "ghcr.io/${GITHUB_REPOSITORY@L}:${GITHUB_RUN_ID}" --creds "${{ github.repository_owner }}:${{ secrets.GITHUB_TOKEN }}"

(You’ll probably want to set up cache-nix-action to cache between runs, but this is enough for demo purposes.)

Did it work?

https://github.com/JaimeValdemoros/multi-arch-demo/actions/runs/17291897493/job/49080965522

> Pushing ghcr.io/jaimevaldemoros/multi-arch-demo:17291897493
> Copying 2 images generated from 2 images in list

Nice!

If you’ve ever spent hours playing whack-a-mole trying to come up with a CI pipeline that has all the right tools installed - and that occasionally changes under your feet - you’ll really appreciate a setup that isolates all of that away the way this one does.

For the full code listing, see JaimeValdemoros/multi-arch-demo.