Building from a multi-arch image unpacks wrong architecture image #182

Closed
mschrupp opened this issue May 15, 2018 · 9 comments

@mschrupp
Contributor

Hi again,

another issue I noticed while testing kaniko with GitLab:

When building a project whose Dockerfile builds FROM a multi-arch image, the image that is pulled and unpacked does not match the system architecture.

Tested with an arm device:

When I have FROM alpine:3.7 (which is multi-arch and supports ARM) in my Dockerfile, the build fails with an exec format error.

Using FROM arm32v6/alpine:3.7 instead works.

(btw: Kaniko itself runs perfectly on arm!)

Selecting the image that matches the host architecture would be a nice feature!
It would also make sense to annotate the manifest with the correct architecture when pushing.

super cool stuff, thank you!

@priyawadhwa
Collaborator

Would you be able to provide the Dockerfile you're trying to build and logs of the error? We have an integration test which pulls alpine:3.7 and is passing, so the error might be coming from some command in the Dockerfile which isn't being handled correctly.

@mschrupp
Contributor Author

Hey @priyawadhwa, sure!

I'm running kaniko in my own container, but I don't think that's important, because I can reproduce the error.

I can build this Dockerfile on an RPi Kubernetes node:

FROM arm32v6/alpine:3.7
RUN touch /testfile

But not this one:

FROM alpine:3.7
RUN touch /testfile

I get this error:

Running with gitlab-runner 10.7.1 (b9bba623)
  on Kubernetes Runner f1c50857
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image registry.***.***/***/kaniko-gitlab:arm-0.1.3 ...
Waiting for pod gitlab/runner-f1c50857-project-27-concurrent-0zv65g to be running, status is Pending
Waiting for pod gitlab/runner-f1c50857-project-27-concurrent-0zv65g to be running, status is Pending
Running on runner-f1c50857-project-27-concurrent-0zv65g via gitlab-runner-arm-7cb466c9c-s8vz5...
Cloning repository...
Cloning into '/***/test-kaniko'...
Checking out cdb481e2 as master...
Skipping Git submodules setup
$ mkdir -p /root/.docker
$ echo "{\"auths\":{\"registry.***.****\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /root/.docker/config.json
$ /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:arm-$VERSION
time="2018-05-15T18:10:17Z" level=info msg="Unpacking filesystem of alpine:3.7..."
2018/05/15 18:10:17 No matching credentials found for index.docker.io, falling back on anonymous
time="2018-05-15T18:10:18Z" level=info msg="Mounted directories: [/kaniko /var/run /proc /dev /dev/pts /sys/fs/cgroup /sys/fs/cgroup/systemd /sys/fs/cgroup/blkio /sys/fs/cgroup/devices /sys/fs/cgroup/cpuset /sys/fs/cgroup/freezer /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/net_cls /sys/fs/cgroup/memory /dev/mqueue /samsonpe /sys /sys/fs/cgroup /sys/fs/cgroup/systemd /sys/fs/cgroup/blkio /sys/fs/cgroup/devices /sys/fs/cgroup/cpuset /sys/fs/cgroup/freezer /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/net_cls /sys/fs/cgroup/memory /sys/kernel/debug /sys/kernel/config /dev/termination-log /etc/resolv.conf /etc/hostname /etc/hosts /dev/shm /var/run/docker.sock /var/run/secrets/kubernetes.io/serviceaccount /proc/asound /proc/bus /proc/fs /proc/irq /proc/sys /proc/sysrq-trigger /proc/keys /proc/latency_stats /proc/timer_list /proc/sched_debug /sys/firmware]"
time="2018-05-15T18:10:19Z" level=info msg="Unpacking layer: 0"
time="2018-05-15T18:10:19Z" level=info msg="Not adding /dev because it is whitelisted"
time="2018-05-15T18:10:19Z" level=info msg="Not adding /etc/hostname because it is whitelisted"
time="2018-05-15T18:10:19Z" level=info msg="Not adding /etc/hosts because it is whitelisted"
time="2018-05-15T18:10:20Z" level=info msg="Not adding /proc because it is whitelisted"
time="2018-05-15T18:10:20Z" level=info msg="Not adding /sys because it is whitelisted"
time="2018-05-15T18:10:20Z" level=info msg="Not adding /var/run because it is whitelisted"
time="2018-05-15T18:10:20Z" level=info msg="Taking snapshot of full filesystem..."
time="2018-05-15T18:10:28Z" level=info msg="cmd: /bin/sh"
time="2018-05-15T18:10:28Z" level=info msg="args: [-c touch /bla]"
time="2018-05-15T18:10:28Z" level=error msg="fork/exec /bin/sh: exec format error"
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1

This is the Dockerfile I used for building my test kaniko-gitlab:arm-0.1.3 image (built with docker build directly on an RPi). The reason for the git patch is explained here.

FROM golang:1.10
RUN git clone https://github.com/GoogleContainerTools/kaniko.git /go/src/github.com/GoogleContainerTools/kaniko
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
COPY gitlab.patch ./gitlab.patch
RUN git apply gitlab.patch
RUN GOOS=linux CGO_ENABLED=0 go build -ldflags '-extldflags "-static" -X .version=v0.1.0 -w -s  ' -o out/executor github.com/GoogleContainerTools/kaniko/cmd/executor

FROM scratch
COPY --from=0 /go/src/github.com/GoogleContainerTools/kaniko/out/executor /kaniko/executor
COPY --from=0 /usr/share/ca-certificates/mozilla/ /kaniko/ssl/certs/
COPY --from=busybox /bin /usr/local/bin
ENV HOME /root
ENV USER /root
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin
ENV SSL_CERT_DIR=/kaniko/ssl/certs
ENTRYPOINT ["/usr/local/bin/sh"]

@priyawadhwa
Collaborator

Hey @jesusofsuburbia -- I was able to build an image with both alpine:3.7 and arm32v6/alpine:3.7 on a standard Kubernetes cluster, but we don't plan on supporting any other architectures at this time.

@priyawadhwa
Collaborator

I'm going to go ahead and close this issue, please open another if you have any more questions!

@everflux

everflux commented Feb 2, 2019

I have the same problem with an ARM64 Kubernetes cluster using the CRI-O runtime and a multi-stage Dockerfile for building; the build is controlled by kaniko.

I use a custom image for the first stage, then switch to nginx and get an exec format error.

FROM registry.docker-registry:31000/angular-cli:latest AS ngcli
....
FROM nginx:alpine AS app
COPY --from=ngcli /app/dist/demo /usr/share/nginx/html/
COPY nginx/default.conf /etc/nginx/conf.d/default.conf.template
RUN chown -R nginx /etc/nginx

COPY nginx/start.sh /start.sh
CMD ["/start.sh"]

The nginx:alpine image on Docker Hub is a multi-arch manifest, with arm64 being available.

@everflux

everflux commented Feb 3, 2019

I suspect this might be caused by #535. When specifying the exact image by digest (without :latest or any other tag!), the build works:

FROM docker.io/library/nginx@sha256:3e495e5de0023ec106254479c4a683fce191592a33bb24e88649c362c86ecd02

But this is obviously not the desired behaviour for multi-arch images.
The nginx manifest list refers to images that are identified by both tags and digests.
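
For reference, here is a minimal sketch (using the go-containerregistry library that kaniko already depends on; the image name is simply the one from this thread) of listing the per-platform digests inside such a manifest list, so the arm64 entry can be pinned in FROM:

package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	// Parse the multi-arch tag we want to inspect.
	ref, err := name.ParseReference("nginx:alpine")
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the manifest list (image index) instead of a single image.
	idx, err := remote.Index(ref)
	if err != nil {
		log.Fatal(err)
	}

	manifest, err := idx.IndexManifest()
	if err != nil {
		log.Fatal(err)
	}

	// Print the digest for each platform; the linux/arm64 entry is the
	// digest to pin in FROM on an ARM64 cluster.
	for _, desc := range manifest.Manifests {
		if desc.Platform == nil {
			continue
		}
		fmt.Printf("%s/%s: %s\n", desc.Platform.OS, desc.Platform.Architecture, desc.Digest)
	}
}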

@jonjohnsonjr
Contributor

@everflux I added a WithPlatform option in the library that kaniko is using: google/go-containerregistry#408

You could use that here; passing in a Platform based on the current runtime would make it work.

It's unclear to me if that should always happen or be based on some flag.
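
For illustration, a minimal sketch of what that could look like on the caller's side (the image name is from this thread; resolving to the builder's own platform via runtime.GOOS/GOARCH is just one possible policy, per the open question about a flag):

package main

import (
	"fmt"
	"log"
	"runtime"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	ref, err := name.ParseReference("alpine:3.7")
	if err != nil {
		log.Fatal(err)
	}

	// Ask the registry for the variant matching the builder's own platform,
	// rather than the manifest list's default (typically linux/amd64).
	img, err := remote.Image(ref, remote.WithPlatform(v1.Platform{
		OS:           runtime.GOOS,
		Architecture: runtime.GOARCH,
	}))
	if err != nil {
		log.Fatal(err)
	}

	cfg, err := img.ConfigFile()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("resolved architecture:", cfg.Architecture)
}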

@lisa

lisa commented Apr 27, 2019

I ran into this as well, and have opened #646 in the kaniko project.

This, however, is more widespread than just kaniko. For example, https://github.com/google/ko also ends up calling image.Remote. Any project that doesn't use the WithPlatform option will hit this behaviour.

@jonjohnsonjr
Contributor

jonjohnsonjr commented Apr 29, 2019

At least for ko, using the current platform is almost always going to be the wrong thing for, e.g., anyone using macOS, since AFAIK there isn't a way to run Kubernetes on a Mac, though I agree this could definitely be handled better.
