pip wheel busts cache #1099
Comments
Here is the docker-compose second run:
Am I right to assume that these builds happen from different checkouts? A common thing that is not guaranteed to be deterministic is the
@tonistiigi the build setup checks out the same commit ID. This is just one example; I have several more services doing the same.
@FernandoMiguel Can you post an example that is reproducible?
@tonistiigi since I'm running this against our organisation code base, I can't share that code.
As this seems to be the last layer that loses the cache, it might also be that it gets cleaned up by GC. You can try to turn GC off or set a bigger limit. If you run with debug logs, you can see the GC invocations in the logs.
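As a sketch of the "turn it off or set a bigger limit" options, GC can also be tuned through a buildkitd config file passed via `docker buildx create --config buildkitd.toml` instead of flags. The values below are illustrative, not taken from this issue:

```toml
# buildkitd.toml -- example values only
debug = true

[worker.oci]
  # Either disable GC entirely...
  gc = false
  # ...or leave GC enabled and raise how much build cache is retained:
  # gckeepstorage = 20000
```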
@tonistiigi I've seen it happen with intermediate layers too. How can I increase the cache?
@FernandoMiguel
cheers @tonistiigi
It exports to the local docker registry on the host.
I spent a while on this, but can't find a way to enable debug or gc=false when using
@tonistiigi thanks.
I don't know what you mean by inspect; the flag is for
@tonistiigi I see. Thanks. I'll give it another go tomorrow.
/usr/libexec/docker/cli-plugins/docker-buildx build --platform=local -o . git://github.com/docker/buildx
/usr/libexec/docker/cli-plugins/docker-buildx create --use --buildkitd-flags '--debug --oci-worker-gc=false'
/usr/libexec/docker/cli-plugins/docker-buildx inspect quirky_merkle
Name: quirky_merkle
Driver: docker-container
Nodes:
Name: quirky_merkle0
Endpoint: unix:///var/run/docker.sock
Status: running
Flags: --debug --oci-worker-gc=false
Platforms: linux/amd64
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:21:22 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:19:53 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
Client:
Debug Mode: false
Plugins:
buildx: Build with BuildKit (Docker Inc.)
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 7
Server Version: 19.03.1
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.0.0-1010-aws
Operating System: Ubuntu 19.04
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 30.96GiB
Name: ip-10-50-0-9
ID: FWMR:6PLO:IWGI:LHTL:KQXD:2IO2:XCUP:PAYZ:USP7:ZFAN:P575:CIEH
Docker Root Dir: /mnt/nvme/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
/usr/libexec/docker/cli-plugins/docker-buildx bake --progress plain -f .jenkins/docker-compose.images-base.1.yml -f .jenkins/buildx.images-base.1.yml flask
# syntax=docker/dockerfile:experimental
FROM python:3.6-alpine AS ms-python-wheel
WORKDIR /src
RUN --mount=type=cache,id=apk,sharing=locked,target=/var/cache/apk ln -vs /var/cache/apk /etc/apk/cache && \
apk add --update --virtual build-dependencies \
gcc \
build-base \
libffi-dev \
libressl-dev \
postgresql-dev
COPY . /src/
RUN --mount=type=cache,id=wheel,sharing=locked,target=/root \
pip wheel -e .
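One pattern that often avoids this kind of miss is to copy only the dependency manifest before the expensive wheel step, so that `COPY . /src/` changes no longer invalidate it. A hedged sketch, not the project's actual layout (it assumes dependencies are declared in a `setup.py` at the context root):

```dockerfile
# syntax=docker/dockerfile:experimental
FROM python:3.6-alpine AS ms-python-wheel
WORKDIR /src
RUN --mount=type=cache,id=apk,sharing=locked,target=/var/cache/apk ln -vs /var/cache/apk /etc/apk/cache && \
    apk add --update --virtual build-dependencies gcc build-base libffi-dev libressl-dev postgresql-dev
# Copy only the dependency manifest first (assumed file name), so this
# layer's cache survives unrelated source changes.
COPY setup.py /src/
RUN --mount=type=cache,id=wheel,sharing=locked,target=/root \
    pip wheel .
# Bring in the rest of the source afterwards.
COPY . /src/
```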
@FernandoMiguel from the output, you can see that
@tiborvass yes, I'm aware of the risks of using dot, and avoid it when possible. These are consecutive runs of the same commit ID. When using docker-compose with a similar Dockerfile (minus the BuildKit cache stuff), this very same build is fully cached. I would love to dig deeper and understand what is causing a cache miss. If you have any ideas, I would love to see BuildKit beat docker-compose 😊
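One thing worth checking when consecutive runs of the same commit still miss the cache is whether files that differ between checkouts (VCS metadata, CI files, build artifacts) get swept into the context by `COPY . /src/`. A hypothetical `.dockerignore` along these lines would keep them out (entries are examples, not taken from this repository):

```
# .dockerignore -- hypothetical example
.git
.jenkins
**/__pycache__
*.pyc
*.egg-info
```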
@FernandoMiguel is this still an issue?
@thaJeztah I no longer have access to the underlying code base to try to reproduce.
.jenkins/buildx.images-base.1.yml
.jenkins/docker-compose.images-base.1.yml
This is the second run, after the cache was created by a previous successful build.