Let’s build a container around a Rust webserver and some static files using Nix and Gitlab CI. The process is what you’d expect, but there are a few details that are annoying to puzzle out.

Final config

Let’s briefly look at the final config, then go through the interesting bits in later sections.

First up is the Nix flake which has the build definitions. The packages.site output is the Rust webserver built with naersk. Then there's the packages.container output created with buildLayeredImage from nixpkgs. The container includes both the site binary and the ./dist directory of static files. The latter doesn't have a dedicated package and is just included as-is.

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    flake-utils.url = "github:numtide/flake-utils";
    naersk = {
      url = "github:nix-community/naersk";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, flake-utils, naersk }:
    flake-utils.lib.eachDefaultSystem (
      system:
      let
        pkgs = nixpkgs.legacyPackages."${system}";
        naersk-lib = naersk.lib."${system}";
      in
      rec {
        packages.site = naersk-lib.buildPackage {
          src = ./site;
          pname = "site";
        };
        defaultPackage = packages.site;

        packages.container = pkgs.dockerTools.buildLayeredImage {
          name = "scvalex.net";
          tag = "flake";
          created = "now";
          config = {
            ExposedPorts = { "80/tcp" = { }; };
            Entrypoint = [ "${packages.site}/bin/site" ];
            Cmd = [ "serve" "--dist-dir" ./dist "--listen-on" "0.0.0.0:80" ];
          };
        };
      }
    );
}
flake.nix

Next up is the Gitlab CI config file which tells the build host how to actually build our project. Essentially, we want to create the container with nix build .#container and then upload it to our registry with skopeo. Practically, there’s lots of pomp and ceremony around this:

build-container:
  image:
    name: "nixos/nix:2.12.0"
  variables:
    CACHIX_CACHE_NAME: scvalex-scvalex-net
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  before_script:
    - nix-env --install --attr nixpkgs.cachix
    - nix-env --install --attr nixpkgs.skopeo
    - cachix use "$CACHIX_CACHE_NAME"
  script:
    - mkdir -p "$HOME/.config/nix"
    - echo 'experimental-features = nix-command flakes' > "$HOME/.config/nix/nix.conf"
    - mkdir -p "/etc/containers/"
    - echo '{"default":[{"type":"insecureAcceptAnything"}]}' > /etc/containers/policy.json
    - cachix watch-exec "$CACHIX_CACHE_NAME" nix build .#container
    - skopeo login --username "$CI_REGISTRY_USER" --password "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - ls -lh ./result
    - 'skopeo inspect docker-archive://$(readlink -f ./result)'
    - 'skopeo copy docker-archive://$(readlink -f ./result) docker://$IMAGE_TAG'
.gitlab-ci.yml

Nix flake

With the overview done, let’s zoom in on the Nix flake. It starts with the usual boilerplate for Rust flakes (see A NixOS flake for Rust, egui, and OpenGL if you need a refresher).

The first interesting bit is the definition of the packages.site output:

packages.site = naersk-lib.buildPackage {
  src = ./site;
  pname = "site";
};

These unassuming four lines not only tell Nix how to build our Rust project, but also implicitly specify the runtime dependencies of the binary:

$ ldd site/target/release/site
linux-vdso.so.1 (0x00007ffd7b787000)
libstdc++.so.6 => /nix/store/2vqp383jfrsjb3yq0szzkirya257h1dp-gcc-11.3.0-lib/lib/libstdc++.so.6 (0x00007fd89673b000)
libgcc_s.so.1 => /nix/store/hsk71z8admvgykn7vzjy11dfnar9f4r1-glibc-2.35-163/lib/libgcc_s.so.1 (0x00007fd896721000)
libm.so.6 => /nix/store/hsk71z8admvgykn7vzjy11dfnar9f4r1-glibc-2.35-163/lib/libm.so.6 (0x00007fd896641000)
libc.so.6 => /nix/store/hsk71z8admvgykn7vzjy11dfnar9f4r1-glibc-2.35-163/lib/libc.so.6 (0x00007fd896438000)
/nix/store/hsk71z8admvgykn7vzjy11dfnar9f4r1-glibc-2.35-163/lib/ld-linux-x86-64.so.2 => /nix/store/hsk71z8admvgykn7vzjy11dfnar9f4r1-glibc-2.35-163/lib64/ld-linux-x86-64.so.2 (0x00007fd897749000)

Just copying the binary to an empty container wouldn’t work because it would be missing all of its dynamic libraries. We could manually include these in the container definition, but then they might not be the right versions. Luckily, we don’t have to worry about any of this because naersk handles all the details for us as long as we use it to build the project.
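Before building the image, we can ask Nix for the runtime closure it computed; these are the store paths that buildLayeredImage will pack into the image. A sketch, assuming the package has already been built on this machine (hashes elided, output abridged):

```shell
$ nix path-info --recursive .#site
/nix/store/...-glibc-2.35-163
/nix/store/...-gcc-11.3.0-lib
/nix/store/...-libidn2-...
/nix/store/...-site
```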

Next is the definition of the packages.container output:

packages.container = pkgs.dockerTools.buildLayeredImage {
  name = "scvalex.net";
  tag = "flake";
  created = "now";
  config = {
    ExposedPorts = { "80/tcp" = { }; };
    Entrypoint = [ "${packages.site}/bin/site" ];
    Cmd = [ "serve" "--dist-dir" ./dist "--listen-on" "0.0.0.0:80" ];
  };
};

This uses buildLayeredImage. There are a few other ways to build containers with Nix, but this is the one that seems most mature.

The name and tag fields are self-explanatory. It doesn’t matter what we pick here because we’ll rename the image when pushing it to the container registry.

The ExposedPorts, Entrypoint, and Cmd fields are the same ones from regular Dockerfiles. The interesting bit is that we can just refer to packages.site and ./dist (note the lack of double-quotes) directly. This causes Nix to include them and all their dependencies in the image. We don’t need a separate step where we list the contents of the container (although we could do that if we wanted to; see the docs).

Once built with nix build .#container, we can inspect the result with dive:

$ nix build .#container
$ gunzip --stdout ./result > ~/tmp/container.tar
$ dive ~/tmp/container.tar --source docker-archive
Screenshot of dive showing a container with 6 layers

In the left pane, we see the six layers of the container. Each contains a Nix store path and is a few megabytes in size. In the right pane, we see the final filesystem of the container: it’s just one big /nix/store. We see the site binary, the dist directory of static files, and the gcc, glibc, libidn2, and libunistring dependencies.
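dive isn't strictly necessary here: since the output of buildLayeredImage is a gzipped docker-archive, the layer list can also be pulled out with plain tar. A rough sketch:

```shell
# manifest.json sits at the top level of a docker-archive and lists
# the layer tarballs and the image config
$ tar -xOzf ./result manifest.json
```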

The only thing we haven’t talked about is the created = "now" option to buildLayeredImage. If we don’t set it, the image gets a creation time of 1970-01-01 00:00:01Z. When we later push this to Gitlab, it will look like this in the UI:

Screenshot of Gitlab container registry showing an image with a creation time of 52 years ago

As far as I can tell, this doesn’t interfere with anything. In particular, the container registry garbage collection can still figure out which is the actual oldest image to delete. That said, it does make the UI harder to read, and since I don’t need reproducible container builds, I just set created to now to get real timestamps.

Gitlab CI config

So far, we’ve told Nix how to build our project and container. Next, we need to tell Gitlab CI how to run the Nix build and how to upload the container image to the registry.

First off, we use a recent nixos/nix image so that we don’t have to worry about updating channels.

build-container:
  image:
    name: "nixos/nix:2.12.0"

Next, we set a couple of variables, install cachix and skopeo in the build container, and configure cachix. We don't strictly need the latter, but it makes the Nix builds significantly faster by caching intermediate artifacts. The $IMAGE_TAG is going to be the name of our final image and it follows this format: registry.gitlab.com/NAMESPACE/PROJECT:BRANCH. The $CI_REGISTRY_IMAGE and $CI_COMMIT_REF_SLUG variables are set automatically by Gitlab.

  variables:
    CACHIX_CACHE_NAME: scvalex-scvalex-net
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  before_script:
    - nix-env --install --attr nixpkgs.cachix
    - nix-env --install --attr nixpkgs.skopeo
    - cachix use "$CACHIX_CACHE_NAME"
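To make the tag concrete, here's how $IMAGE_TAG expands, with hypothetical values standing in for the variables Gitlab sets (the real values depend on your project path and branch):

```shell
# Hypothetical stand-ins for the Gitlab-provided variables
CI_REGISTRY_IMAGE="registry.gitlab.com/scvalex/scvalex.net"
CI_COMMIT_REF_SLUG="main"

# This is the expansion Gitlab performs for the IMAGE_TAG variable above
IMAGE_TAG="$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
echo "$IMAGE_TAG"   # registry.gitlab.com/scvalex/scvalex.net:main
```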

Next, we enable Nix flakes:

  script:
    - mkdir -p "$HOME/.config/nix"
    - echo 'experimental-features = nix-command flakes' > "$HOME/.config/nix/nix.conf"

Next, we disable checking for container image signatures because we know exactly where our image came from and where it’s going:

    - mkdir -p "/etc/containers/"
    - echo '{"default":[{"type":"insecureAcceptAnything"}]}' > /etc/containers/policy.json

If we don’t set the above policy, we get errors like the following:

time="2022-12-13T14:57:35Z" level=fatal msg="Error loading trust policy: open /etc/containers/policy.json: no such file or directory"
skopeo error if the policy is not set
Error: payload does not match any of the supported image formats:
 * oci: open /etc/containers/policy.json: no such file or directory
 * oci-archive: open /etc/containers/policy.json: no such file or directory
 * docker-archive: open /etc/containers/policy.json: no such file or directory
 * dir: open /etc/containers/policy.json: no such file or directory
podman error if the policy is not set
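As an aside, insecureAcceptAnything isn't the only option: containers-policy.json also supports reject and signedBy requirements, scoped per transport. A hedged sketch of a stricter policy that rejects everything except local docker-archive files (format per containers-policy.json(5)):

```json
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker-archive": {
      "": [{ "type": "insecureAcceptAnything" }]
    }
  }
}
```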

Next, we build our container and wrap the invocation with cachix:

    - cachix watch-exec "$CACHIX_CACHE_NAME" nix build .#container

Finally, we log in to the container registry with skopeo, output some debugging information, and upload the image to the registry. The $CI_REGISTRY, $CI_REGISTRY_USER, and $CI_REGISTRY_PASSWORD variables are set automatically by Gitlab.

    - skopeo login --username "$CI_REGISTRY_USER" --password "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - ls -lh ./result
    - 'skopeo inspect docker-archive://$(readlink -f ./result)'
    - 'skopeo copy docker-archive://$(readlink -f ./result) docker://$IMAGE_TAG'

An alternative to skopeo is podman, which has the advantage of being more familiar to people. To use it, we just replace all the skopeo-related lines with:

    - nix-env --install --attr nixpkgs.podman
    - podman login --username "$CI_REGISTRY_USER" --password "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - podman load < ./result
    - podman image ls --all
    - podman push scvalex.net:flake "$IMAGE_TAG"

The disadvantage of podman is that it introduces an intermediate step: the image has to be loaded into the build container's local image storage before it can be pushed.

Conclusion

And there you have it: the recipe for bundling a Rust program and some files into a container using Nix and Gitlab CI. Like everything else involving multiple complex systems, it’s conceptually easy to do, but practically complicated to configure.