Add Dockerfile to build manager binary and emit production image #46
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: cchengleo. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing …
Force-pushed from 63deee3 to 78e5a2b
Force-pushed from 78e5a2b to c1b5669
davidvossel left a comment
I have also been looking into this recently. One frustration I ran into is that I still haven't found a good way to preserve the Go cache between docker builds. Re-downloading the go mod dependencies on every build is fairly slow for me locally.
To get around this, I created #42, which uses docker to build the manager controller and dumps it into the normal bin/manager directory in the source tree, just as if it had been built with make manager outside a docker container.
The advantage of #42 is that it uses a docker volume to persist the Go cache between builds. This lets me use a docker container for builds without having to re-download the go mod dependencies every time I build and publish the manager container locally.
thoughts?
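The volume-based approach described above can be sketched roughly as follows. This is an illustration of the idea, not the actual contents of #42; the volume names and the bin/manager output path are assumptions:

```
# Persist the Go build and module caches in named docker volumes so that
# repeated builds reuse them instead of re-downloading dependencies.
docker volume create gocache
docker volume create gomodcache

docker run --rm \
    -v "$(pwd)":/workspace -w /workspace \
    -v gocache:/root/.cache/go-build \
    -v gomodcache:/go/pkg/mod \
    golang:1.16.2 \
    go build -o bin/manager .
```

The binary lands in bin/manager in the source tree, as if it had been built on the host.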
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg/mod \
While this preserves the cache across the multi-stage docker build process, it doesn't cache between docker builds.
For example, in our dev environments we'd have to re-download the go mod dependencies every time this multi-stage docker build runs.
Actually, as inspired by the CAPI Dockerfile, the caching is handled appropriately across docker builds.
Below is the 1st build without any caching, where the [builder 5/8] RUN --mount=type=cache,target=/go/pkg/mod go mod download step takes a significant amount (751.2s of 795.6s) of the total build time:
make docker-build
...
[+] Building 795.6s (19/19) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 2.19kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1.1-experimental 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1.1-experimental 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 2.19kB 0.0s
=> [internal] load metadata for gcr.io/distroless/static:nonroot 1.3s
=> [internal] load metadata for docker.io/library/golang:1.16.2 0.0s
=> [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:bca3c203cdb36f5914ab8568e4c25165643ea9b711b41a8a58b42c80a51ed609 0.0s
=> => resolve gcr.io/distroless/static:nonroot@sha256:bca3c203cdb36f5914ab8568e4c25165643ea9b711b41a8a58b42c80a51ed609 0.0s
=> => sha256:bca3c203cdb36f5914ab8568e4c25165643ea9b711b41a8a58b42c80a51ed609 1.67kB / 1.67kB 0.0s
=> => sha256:213a6d5205aa1421bd128b0396232a22fbb4eec4cbe510118f665398248f6d9a 426B / 426B 0.0s
=> => sha256:bff4de2cb7e1dd0ed9797c6e33688f32f2ff0293ecee6fa069051081710bb61b 478B / 478B 0.0s
=> [builder 1/8] FROM docker.io/library/golang:1.16.2 0.0s
=> [internal] load build context 0.7s
=> => transferring context: 21.07MB 0.7s
=> [builder 2/8] WORKDIR /workspace 0.0s
=> [builder 3/8] COPY go.mod go.mod 0.0s
=> [builder 4/8] COPY go.sum go.sum 0.0s
=> [builder 5/8] RUN --mount=type=cache,target=/go/pkg/mod go mod download 751.2s
=> [builder 6/8] COPY ./ ./ 0.1s
=> [builder 7/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg/mod go build . 18.5s
=> [builder 8/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg/mod CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags "${ldflags} -extldflags '-static'" 23.1s
=> [stage-1 2/3] COPY --from=builder /workspace/manager . 0.1s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:734f443b21a43060fdd5ceee4573172c2fcada70fe0662d0d83933441a73923f 0.0s
=> => naming to localhost:5000/capk-manager-amd64:dev 0.0s
...
And below is the 2nd build after I made a minor code change. Benefiting from the cache, the [builder 5/8] RUN --mount=type=cache,target=/go/pkg/mod go mod download step takes 0.0s this time:
$ make docker-build
...
[+] Building 30.5s (19/19) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1.1-experimental 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1.1-experimental 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load metadata for gcr.io/distroless/static:nonroot 1.5s
=> [internal] load metadata for docker.io/library/golang:1.16.2 0.0s
=> CACHED [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:bca3c203cdb36f5914ab8568e4c25165643ea9b711b41a8a58b42c80a51ed609 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 31.80kB 0.0s
=> [builder 1/8] FROM docker.io/library/golang:1.16.2 0.0s
=> CACHED [builder 2/8] WORKDIR /workspace 0.0s
=> CACHED [builder 3/8] COPY go.mod go.mod 0.0s
=> CACHED [builder 4/8] COPY go.sum go.sum 0.0s
=> CACHED [builder 5/8] RUN --mount=type=cache,target=/go/pkg/mod go mod download 0.0s
=> [builder 6/8] COPY ./ ./ 0.1s
=> [builder 7/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg/mod go build . 2.8s
=> [builder 8/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg/mod CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags "${ldflags} -extldflags '-static'" 25.4s
=> [stage-1 2/3] COPY --from=builder /workspace/manager . 0.1s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:2de0b2782d8bb40fc294027986e99e863ee0bcb3b856533f10ee97c38b69dab8 0.0s
=> => naming to localhost:5000/capk-manager-amd64:dev 0.0s
...
However, depending on the CI system, the cache may not be preserved across CI builds. For example, since we use Tekton for our CI, we would have to create a separate PVC to preserve the cache across CI builds. We could perhaps mount that PVC and wire it up to RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg/mod go build .. But that is of less concern for now.
@davidvossel I think you are partially right too. If you change the go.mod or go.sum file you will have to download all dependencies again, but otherwise the COPY command should see that the file did not change and just move on. I did not think of this possibility when we discussed it. It may be good enough.
Oh I see. This works because the lines that COPY then RUN the go mod download come before the lines that COPY/RUN the build, so container image layering helps us here as long as go.mod/go.sum do not change.
This makes sense from a container build perspective.
However, what do we want to do with the make manager make target now? It would be nice to be able to build using a containerized environment during the dev process in a way that doesn't generate a new container image every time.
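For reference, the layering behavior described above follows from the ordering of the Dockerfile stages. The fragment below is a sketch reconstructed from the build log in this thread, not necessarily the exact Dockerfile in this PR (the output path and flags are abbreviated):

```
# syntax=docker/dockerfile:1.1-experimental
FROM golang:1.16.2 as builder
WORKDIR /workspace

# Copy only the module files first: as long as go.mod/go.sum are unchanged,
# this layer and the go mod download step below are served from the layer cache.
COPY go.mod go.mod
COPY go.sum go.sum
RUN --mount=type=cache,target=/go/pkg/mod go mod download

# Copying the rest of the source afterwards means code changes only
# invalidate the layers from this point on.
COPY ./ ./
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg/mod \
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager .

FROM gcr.io/distroless/static:nonroot
COPY --from=builder /workspace/manager .
```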
@davidvossel Seems we have reached agreement on building the production image with this PR.
Regarding the make manager part, we actually could enforce using a containerized environment during the dev process (some of my past projects enforced this). To satisfy the caching requirement, I am thinking of two approaches:
1) Launch a build-util container to build the manager binary, and cache the dependencies with a volume, like what you propose in #42.
2) Perform the docker build with the Dockerfile proposed by this PR, then create a temporary container (without running it), copy the manager binary out of it, and remove the temporary container. Caching is handled implicitly by the image layers, and this approach has one less script to maintain.
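Approach 2 can be sketched with plain docker commands (the image tag and binary path here are illustrative, not taken from this PR):

```
# Build the production image, then extract the binary via a temporary container.
docker build -t capk-manager:dev .
cid=$(docker create capk-manager:dev)   # create a container without running it
docker cp "$cid":/manager bin/manager   # copy the manager binary out
docker rm "$cid"                        # remove the temporary container
```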
@rmohr and @davidvossel, if you are okay with this Dockerfile for production image, could you approve the PR? My internal CI is blocked by this.
However, what do we want to do with the make manager make target now? It would be nice to be able to build using a containerized environment during the dev process in a way that doesn't generate a new container image every time.
@davidvossel I agree that would be nice. Let's continue iterating on that in your PR.
@rmohr and @davidvossel, if you are okay with this Dockerfile for production image, could you approve the PR? My internal CI is blocked by this.
Works for me. I am not sure if prow will accept my lgtm, since I am not in the org. Let's see.
/lgtm
@rmohr: changing LGTM is restricted to collaborators.
Details: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What this PR does / why we need it:
Update the Dockerfile to build the manager binary and emit a production image. The Dockerfile is inspired by CAPI.
Which issue this PR fixes: fixes #45