fix: enforce minimum version of docker/podman #1961
Conversation
Force-pushed from a60a00c to 0569010.
GitLab might or might not fail if this goes in. Travis CI is failing (despite the result being OK).
Force-pushed from b102b9c to 5ee397b.
So far, the only CI that has exhibited issues is Travis CI on the graviton2 VM (still missing GitLab tests & Travis CI ppc64le/s390x). We can either allow Docker 19.03 but print a warning, or require Travis CI graviton2 VM users to update Docker.
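For context, a minimal sketch of the kind of engine version check being discussed, assuming a hypothetical `check_engine_version` helper and a hypothetical `MINIMUM_DOCKER_VERSION` floor (this is not the PR's actual implementation, and the real minimum may differ):

```python
import subprocess

# Hypothetical minimum; the actual floor chosen by the PR may differ.
MINIMUM_DOCKER_VERSION = (20, 10, 0)


def check_engine_version(engine_name: str = "docker") -> None:
    """Fail if the container engine is older than the minimum version.

    Uses `docker`/`podman version --format` with a Go template; both engines
    accept this, though the exact template fields can vary between versions.
    """
    version_string = subprocess.run(
        [engine_name, "version", "--format", "{{.Client.Version}}"],
        check=True,
        stdout=subprocess.PIPE,
        text=True,
    ).stdout.strip()

    # Keep only the leading numeric components, e.g. "19.03.15" -> (19, 3, 15)
    parts: list[int] = []
    for part in version_string.split("."):
        digits = "".join(ch for ch in part if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    while len(parts) < len(MINIMUM_DOCKER_VERSION):
        parts.append(0)

    if tuple(parts) < MINIMUM_DOCKER_VERSION:
        minimum = ".".join(map(str, MINIMUM_DOCKER_VERSION))
        raise RuntimeError(
            f"{engine_name} {version_string} is too old; "
            f"please update to {minimum} or later."
        )
```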
Force-pushed from 5ee397b to d06bd6d.
The AppVeyor build succeeded on my fork: https://ci.appveyor.com/project/mayeut/cibuildwheel/builds/50422602
Force-pushed from d06bd6d to 1fac3b2.
That sounds fine to me. I assume this is the incantation to upgrade docker on that platform?

```yaml
addons:
  apt:
    sources:
      - sourceline: 'deb https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable'
    packages:
      - docker-ce docker-ce-cli containerd.io
```
Broadly, very happy with this!
Apologies, I didn't make it all the way through my last review so I've reviewed again and spotted a couple more things.
This allows always passing `--platform` to the OCI engine, thus fixing issues with multiarch images.
Force-pushed from 1fac3b2 to 6cf264c.
There's an issue with s390x & ppc64le on Travis CI, so this requires a bit more work...
It seems the installation of QEMU is failing on s390x & ppc64le, which leads to the oci_container_test timeouts.
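As a side note, a common way to register QEMU binfmt handlers so that foreign-architecture images can run is a privileged helper container; a minimal sketch follows (the helper image and flags are the standard upstream ones, not necessarily what this CI setup uses):

```python
import subprocess


def register_qemu_binfmt(engine_name: str = "docker") -> None:
    """Register QEMU user-mode emulation handlers for foreign architectures.

    Runs the well-known multiarch/qemu-user-static helper image; some setups
    use tonistiigi/binfmt instead. Requires a privileged container.
    """
    subprocess.run(
        [
            engine_name, "run", "--rm", "--privileged",
            "multiarch/qemu-user-static", "--reset", "-p", "yes",
        ],
        check=True,
    )
```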
Converting to draft until the remaining (weird) issue on s390x gets solved.
The weird issue is that the following change is needed:

```diff
diff --git a/unit_test/oci_container_test.py b/unit_test/oci_container_test.py
index e913c203..803b09bd 100644
--- a/unit_test/oci_container_test.py
+++ b/unit_test/oci_container_test.py
@@ -8,6 +8,7 @@
 import subprocess
 import sys
 import textwrap
+import time
 from pathlib import Path, PurePath, PurePosixPath
 
 import pytest
@@ -134,16 +135,23 @@ def test_container_removed(container_engine):
     with OCIContainer(
         engine=container_engine, image=DEFAULT_IMAGE, oci_platform=DEFAULT_OCI_PLATFORM
     ) as container:
-        docker_containers_listing = subprocess.run(
-            f"{container.engine.name} container ls",
-            shell=True,
-            check=True,
-            stdout=subprocess.PIPE,
-            text=True,
-        ).stdout
         assert container.name is not None
-        assert container.name in docker_containers_listing
         old_container_name = container.name
+        retry_count = 3
+        while retry_count > 0:
+            retry_count -= 1
+            docker_containers_listing = subprocess.run(
+                f"{container.engine.name} container ls",
+                shell=True,
+                check=True,
+                stdout=subprocess.PIPE,
+                text=True,
+            ).stdout
+            if container.name in docker_containers_listing:
+                break
+            if retry_count == 0:
+                pytest.xfail("s390x travis...")
+            time.sleep(0.5)
 
     docker_containers_listing = subprocess.run(
         f"{container.engine.name} container ls",
```

I will let the debug run finish before pushing again to this branch, skipping this test on Travis CI s390x...
Skipping the flaky test on s390x allowed all tests to run successfully.
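A minimal sketch of what such a platform-conditional skip could look like with pytest (the marker condition and reason are illustrative assumptions, not the exact change made in this PR):

```python
import platform

import pytest


# Hypothetical marker: skip the flaky container-listing test on s390x,
# where `container ls` may lag behind container creation on Travis CI.
@pytest.mark.skipif(
    platform.machine() == "s390x",
    reason="flaky on Travis CI s390x: container listing lags behind creation",
)
def test_container_removed(container_engine):
    ...
```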
looks great!
FYI, this change causes a build failure for mypy on … For the build, the mypy repo is first cloned in a previous step and cibuildwheel is subsequently run with … The error: …

The link to the PR: mypyc/mypy_mypyc-wheels#82
This allows always passing `--platform` to the OCI engine, thus fixing issues with multiarch images. It also allows using `docker cp` instead of the system tar workaround.

fix #1771
fix #1957
fix #1962
close #1968
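For illustration, a minimal sketch of the two behaviours described above, using plain CLI calls from Python (the engine, platform string, image, and paths are hypothetical; this is not the code added by the PR):

```python
import subprocess

engine = "docker"  # or "podman"
oci_platform = "linux/amd64"  # hypothetical platform string
image = "quay.io/pypa/manylinux2014_x86_64"  # hypothetical image

# Always pass --platform so a multi-arch image resolves to the intended
# architecture rather than whatever the local default happens to be.
container_id = subprocess.run(
    [engine, "create", "--platform", oci_platform, image, "sleep", "infinity"],
    check=True,
    stdout=subprocess.PIPE,
    text=True,
).stdout.strip()

# With a sufficiently recent engine, files can be copied in with `docker cp`
# instead of piping a tar archive into the container.
subprocess.run(
    [engine, "cp", "./wheelhouse", f"{container_id}:/tmp/wheelhouse"],
    check=True,
)
```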
Last time this was attempted, it failed on some CI (cf. #1773); let's check the status now.