---
title: "Building ARM64 Container Images When Upstream Doesn't Ship Them"
description: "How to produce a working ARM64 container image when the upstream project publishes binaries but no ARM64 Docker tag, including diagnostic signatures, the binary-tarball Dockerfile shape, manifest verification, and the buildx + QEMU emulation trap."
url: https://agent-zone.ai/knowledge/kubernetes/building-arm64-container-images-when-upstream-doesnt-ship-them/
section: knowledge
date: 2026-05-07
categories: ["kubernetes"]
tags: ["arm64","docker","buildx","qemu","minikube","apple-silicon","container-images"]
skills: ["arm64-image-construction","container-platform-debugging","manifest-verification"]
tools: ["docker","docker-buildx","minikube","crane"]
levels: ["intermediate"]
word_count: 1642
formats:
  json: https://agent-zone.ai/knowledge/kubernetes/building-arm64-container-images-when-upstream-doesnt-ship-them/index.json
  html: https://agent-zone.ai/knowledge/kubernetes/building-arm64-container-images-when-upstream-doesnt-ship-them/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=Building+ARM64+Container+Images+When+Upstream+Doesn%27t+Ship+Them
---


A pod is in `CrashLoopBackOff` with no application stack trace. The pod manifest's `image` field references an upstream tag that "should" work. The image pull succeeded. The container starts, exits with no log output, and restarts. The cause is almost always architecture: the image was published `linux/amd64` only, the host is ARM64 (Apple Silicon, Graviton, Ampere), and the runtime is silently emulating — or failing to emulate — the binary. When upstream publishes ARM64 source artifacts but no ARM64 image, the fix is to build one.

This article picks up where [ARM64 Kubernetes: The QEMU Problem with Go Binaries](../arm64-k8s-images/) leaves off. That article diagnoses why amd64 Go binaries crash under QEMU on ARM64 hosts. This one covers what to do once the diagnosis is made and an ARM64 image has to be produced from scratch.

## Diagnostic signatures

Three signatures point at architecture mismatch. They look unrelated on the surface; the root cause is the same.

**Silent CrashLoopBackOff with empty logs.** The pod restarts every few seconds. `kubectl logs <pod> --previous` returns nothing or a single blank line. The container's ENTRYPOINT was invoked, the kernel rejected the binary format before any output flushed, and the runtime recorded only the non-zero exit.

```bash
kubectl describe pod <pod> | grep -A2 "Last State"
# Last State: Terminated
#   Reason: Error
#   Exit Code: 1
```

**`exec format error` in container runtime logs.** When the runtime is honest about the rejection, the kubelet or containerd log records it directly:

```
failed to start container: exec format error
```

This is the kernel's `ENOEXEC` surfacing through the container runtime. The image's binary was built for an architecture the host kernel will not execute, and no emulation layer is registered for that architecture.
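The errno is easy to reproduce without a container. A minimal sketch: a file with exec permission whose contents start with an ELF magic number but are not a loadable binary is rejected by `execve` with the same error the runtime logs.

```shell
# Fake "binary": ELF magic followed by NUL-padded garbage. The kernel's
# ELF loader rejects it with ENOEXEC; bash reports the strerror text.
tmpbin=$(mktemp)
printf '\177ELF\0\0\0\0not-a-real-binary' > "$tmpbin"
chmod +x "$tmpbin"
bash -c "$tmpbin" 2> /tmp/enoexec.txt || true
cat /tmp/enoexec.txt   # e.g. "cannot execute binary file: Exec format error"
rm -f "$tmpbin"
```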

**QEMU `lfstack.push invalid packing` panic.** When emulation *is* registered, Go binaries fail differently. The garbage collector's lock-free stack packs pointer high bits with counter values; QEMU user-mode address translation invalidates the packing. The signature is unmistakable:

```
runtime: lfstack.push invalid packing: node 0xffff8b410100 cnt 0x1 packed 0x...
fatal error: lfstack.push
```

Any of the three signatures means upstream did not ship an image that runs on ARM64. The next decision is how to build one.

## Two strategies for getting an ARM64 image

Two paths produce a native ARM64 image. They have different operational profiles.

| Strategy | Source of binary | Build host | Reproducibility | When to use |
|---|---|---|---|---|
| **Build from upstream binary tarball** | Pre-compiled ARM64 binary published by upstream | Any (no cross-compile) | Tied to upstream release artifact | Upstream ships ARM64 binaries but no ARM64 image (Mattermost, older Elasticsearch, many Java apps) |
| **Cross-build from source with `docker buildx`** | Source repository, compiled in builder stage | x86 or ARM (with QEMU for cross-arch) | Fully reproducible from source | Upstream ships source but no ARM64 binary, or the binary needs custom patches |

Build-from-tarball is faster (no compile time, no toolchain in the image) and produces fewer layers. Cross-build with buildx is more flexible but pays a steep emulation tax for any non-trivial compile step (see [the buildx + QEMU emulation trap](#the-buildx--qemu-emulation-trap)).

The decision rule: **if upstream ships an ARM64 binary tarball, build from it.** Falling back to cross-compile is correct only when the binary is unavailable.

## Building from upstream binary tarball

The pattern is a thin wrapper that downloads the ARM64 binary, lays out filesystem expectations, and sets the entrypoint. The Dockerfile shape:

```dockerfile
# syntax=docker/dockerfile:1
FROM --platform=linux/arm64 ubuntu:22.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*

ARG MM_VERSION=10.5.0
ADD --chown=root:root \
    https://releases.mattermost.com/${MM_VERSION}/mattermost-team-${MM_VERSION}-linux-arm64.tar.gz \
    /tmp/mattermost.tar.gz
RUN tar -xzf /tmp/mattermost.tar.gz -C /opt \
    && rm /tmp/mattermost.tar.gz \
    && mkdir -p /opt/mattermost/data /opt/mattermost/logs /opt/mattermost/config \
    && adduser --system --group mattermost \
    && chown -R mattermost:mattermost /opt/mattermost

USER mattermost
WORKDIR /opt/mattermost
EXPOSE 8065
ENTRYPOINT ["/opt/mattermost/bin/mattermost"]
```

Three details earn their keep.

`FROM --platform=linux/arm64` is load-bearing. Without it, `docker build` on a multi-platform builder may resolve `ubuntu:22.04` to whichever architecture matches the build host's default. The explicit platform forces resolution to the ARM64 variant of the base image, which matters when the build is running on an x86 CI runner.

`ADD <url>` is one of the few legitimate uses of `ADD` over `COPY` (see [Dockerfile best practices](../dockerfile-best-practices/) on `COPY` vs `ADD`). The URL form streams the tarball into a layer without a separate `curl` step. If reproducibility matters, replace it with `COPY` from a vendored tarball or pin the URL by checksum using BuildKit's `--checksum` flag.
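With Dockerfile syntax 1.6 or later, the checksum-pinned form of the same `ADD` looks like this (the digest below is a placeholder, not the real artifact's hash):

```dockerfile
# syntax=docker/dockerfile:1.6
# Placeholder digest: substitute the sha256 upstream publishes for the release.
ADD --checksum=sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 \
    https://releases.mattermost.com/${MM_VERSION}/mattermost-team-${MM_VERSION}-linux-arm64.tar.gz \
    /tmp/mattermost.tar.gz
```

If the downloaded bytes do not match the checksum, the build fails at this step instead of producing an image with a tampered or truncated tarball.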

The `ARG MM_VERSION` lets a single Dockerfile track upstream releases without source edits. Tag the resulting image with the same version: `myorg/mattermost-arm64:10.5.0`.

For software where upstream does not publish a tarball directly, but the needed artifact is available inside an amd64 image, `crane export` (from `go-containerregistry`) can extract the artifact from one image and feed it into a new ARM64 image — this helps when the artifact is architecture-independent (a JAR, scripts, static assets), not when it is a native amd64 binary, which would recreate the original problem. Most projects make this step unnecessary by publishing tarballs alongside Docker tags.

## imagePullPolicy with minikube docker-env builds

Building inside minikube's Docker daemon is the right pattern for development clusters: `eval $(minikube docker-env)` redirects the local Docker client to the daemon inside the minikube VM, and `docker build -t myorg/mattermost-arm64:10.5.0 .` produces an image already present on the node. No registry push is needed. See [Minikube docker-env in-cluster builds](../../infrastructure/minikube-docker-driver/) for the full pattern and pitfalls.

The implication for pod manifests: the image is on the node but is **not** in any registry. The default policies can both bite: `Always` (implied when the tag is `:latest`) attempts a pull that cannot succeed, and `IfNotPresent` is reliable only while no colliding tag confuses what "present" means. The safest setting for a locally built image is explicit:

```yaml
image: myorg/mattermost-arm64:10.5.0
imagePullPolicy: Never
```

`Never` tells the kubelet to use the image already present on the node and not attempt a pull. With `IfNotPresent`, behavior depends on whether the tag was previously pulled from a registry under a colliding name; with `Always`, the kubelet attempts a pull and fails because the image was never pushed.
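Assembled into a minimal dev-cluster pod spec (metadata names are illustrative), the pattern looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mattermost-dev
spec:
  containers:
    - name: mattermost
      image: myorg/mattermost-arm64:10.5.0
      imagePullPolicy: Never   # use the node-local image; never pull
      ports:
        - containerPort: 8065
```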

When the build is intended for a multi-node cluster, push to a registry and switch to `IfNotPresent` with a versioned tag. `Never` is a development-cluster pattern, not a production one.

## Verifying the build is actually ARM64

`docker build` succeeds and the image has a name. That does not prove the layers inside are ARM64. The build host's default platform, a stale base image cache, or a buildx misconfiguration can produce an "ARM64-named" image that is internally amd64.

Three commands produce three independent confirmations.

**Local image inspect:**

```bash
docker image inspect myorg/mattermost-arm64:10.5.0 \
  --format '{{.Architecture}}/{{.Os}}'
# Expected: arm64/linux
```

If this returns `amd64/linux`, the build resolved to the wrong platform. The most common cause is a missing `--platform=linux/arm64` on a `FROM` directive; the second is a buildx builder configured with a default platform of `linux/amd64`.
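A second guard, independent of the Dockerfile, is the Docker CLI's `DOCKER_DEFAULT_PLATFORM` environment variable, which fixes the platform for any `build` or `run` in the current shell that does not pass `--platform` explicitly:

```shell
# Applies to every docker build/run in this shell that omits --platform.
export DOCKER_DEFAULT_PLATFORM=linux/arm64
```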

**Manifest inspect (multi-arch tags):**

```bash
docker manifest inspect myorg/mattermost-arm64:10.5.0 \
  | jq '.manifests[]?.platform // .platform'
# Expected: {"architecture": "arm64", "os": "linux"}
```

For a multi-platform manifest list, `.manifests[].platform` enumerates each entry; if `arm64` is missing from the list, the tag does not include an ARM64 variant. Two caveats: `docker manifest inspect` queries the registry rather than the local image store, so this check only applies after a push, and a plain single-platform image manifest may carry no top-level `platform` field at all, in which case the local inspect and the binary check below are the authoritative ones.
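To see what the jq filter selects, it can be run against a hand-written, abbreviated OCI image index (digests truncated to placeholders):

```shell
cat > /tmp/sample-index.json <<'EOF'
{
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {"digest": "sha256:aaaa...", "platform": {"architecture": "arm64", "os": "linux"}},
    {"digest": "sha256:bbbb...", "platform": {"architecture": "amd64", "os": "linux"}}
  ]
}
EOF

# Same selection logic as the manifest-inspect check: list every
# architecture the index advertises, one per line.
jq -r '.manifests[].platform.architecture' /tmp/sample-index.json
```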

**Binary inspect (ground truth):**

```bash
docker run --rm --entrypoint=/bin/sh myorg/mattermost-arm64:10.5.0 \
  -c 'file /opt/mattermost/bin/mattermost'
# Expected: ELF 64-bit LSB executable, ARM aarch64
```

This is the only check that examines the actual binary inside the image. Manifest metadata can lie (or be misconfigured); the ELF header cannot. When the build pipeline runs in CI, the binary inspect should be a gate: if `file` does not report `aarch64`, the build fails.
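A sketch of that gate: the `file` report captured from the container is asserted to mention `aarch64` before the pipeline proceeds. The sample string below stands in for the output of the `docker run ... file ...` check above.

```shell
# In CI, populate $report from the docker run command instead.
report='/opt/mattermost/bin/mattermost: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV)'
case "$report" in
  *aarch64*) echo "arch gate passed" ;;
  *)         echo "arch gate FAILED: $report" >&2; exit 1 ;;
esac
```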

## The buildx + QEMU emulation trap

`docker buildx build --platform=linux/arm64,linux/amd64` looks like it should "just work" for cross-architecture builds. On an Apple Silicon host (or any ARM64 build host), the amd64 leg is emulated by QEMU. On an x86 host, the ARM64 leg is emulated.

The trap has two faces.

**Slowness.** Compilation steps under QEMU emulation run at a fraction of native speed — typical reports range from 5x to 20x slower for large Go or C++ builds. A `go build` that takes 30 seconds natively can take 5–10 minutes emulated. CI build times balloon, and timeouts that pass natively start failing.

**Outright failure for Go binaries.** The `lfstack.push invalid packing` panic that crashes amd64 Go binaries on ARM64 hosts also strikes inside emulated buildx legs: when a build stage runs Go programs (tools invoked by `go generate`, tests run by `go test` during the build), those programs execute under QEMU and hit the same panic. The buildx builder process is fine; the Go binaries it invokes during the build are not.

The mitigation is **native runners per architecture, not emulation**. A buildx setup with one ARM64 builder and one amd64 builder, joined into a buildx instance, avoids QEMU entirely:

```bash
# Placeholder endpoints shown: without an explicit endpoint, each node
# attaches to the current Docker context and both "runners" share one host.
docker buildx create --name native-multi --driver docker-container \
  --platform linux/arm64 --node arm-runner ssh://user@arm64-host

docker buildx create --append --name native-multi \
  --platform linux/amd64 --node amd-runner ssh://user@amd64-host

docker buildx use native-multi
docker buildx build --platform=linux/arm64,linux/amd64 \
  --tag myorg/mattermost-arm64:10.5.0 \
  --push .
```

Each platform's leg compiles on a native runner. The buildx instance assembles the final manifest list. No QEMU is involved in the compile path.

For one-off development builds where setting up two runners is overhead, the binary-tarball approach (build the ARM64 image natively from a pre-compiled binary, skip cross-compile entirely) avoids the trap by sidestepping the build of the binary itself.

## Quotable lessons

**An "ARM64 image" name is not a guarantee.** Verify with `docker image inspect`, manifest inspect, and a binary `file` check. The build pipeline lies more often than the kernel does.

**`FROM --platform=linux/arm64` is not redundant.** It is the only way to force base-image resolution on a multi-platform builder. Leave it out and the build host's default architecture wins silently.

**Build from upstream's ARM64 binary if one exists.** Cross-compile is for the case where it doesn't. Tarball builds are faster, smaller, and never trigger QEMU.

**`imagePullPolicy: Never` matches `eval $(minikube docker-env)` builds.** `IfNotPresent` is a registry pattern; `Never` is a node-local pattern. Use the one that matches how the image arrived on the node.

**QEMU emulation under buildx is a fallback, not a strategy.** Native runners per architecture are the default for any pipeline that builds more than once a week. Slowness is the visible failure; silent Go runtime corruption is the invisible one.

