Closed
Changes from 21 commits
28 changes: 28 additions & 0 deletions .circleci/config.yml

Some generated files are not rendered by default.

28 changes: 28 additions & 0 deletions .circleci/config.yml.in
@@ -359,6 +359,33 @@ jobs:
       - run_tests_selective:
           file_or_dir: test/test_prototype_*.py
 
+  unittest_jpeg_ref:
+    docker:
+      - image: condaforge/mambaforge
Member:
Any chance we could use conda instead of mamba, so as to stick to the tools that we use in the rest of the jobs?

Contributor Author:
We probably can (there is most likely a docker image that ships conda), but I'm not sure we should. mamba is a re-implementation of conda, just significantly faster, so we effectively waste CI resources by using conda.

Member:
Shouldn't we rely on the same image as the other unittest jobs here, to avoid any risk of deviating from our "baseline" testing infra? I believe they all rely on conda as well, so there must be a way to install conda even if we don't choose condaforge/mambaforge as the image.

Contributor Author:
Besides our CPU workflows on Linux, none of the other workflows even use docker:

vision/.circleci/config.yml, lines 719 to 722 in d675c0c:

  unittest_linux_cpu:
    <<: *binary_common
    docker:
      - image: "pytorch/manylinux-cuda102"

vision/.circleci/config.yml, lines 758 to 761 in d675c0c:

  unittest_linux_gpu:
    <<: *binary_common
    machine:
      image: ubuntu-1604-cuda-10.2:202012-01

vision/.circleci/config.yml, lines 808 to 811 in d675c0c:

  unittest_windows_cpu:
    <<: *binary_common
    executor:
      name: windows-cpu

vision/.circleci/config.yml, lines 846 to 849 in d675c0c:

  unittest_windows_gpu:
    <<: *binary_common
    executor:
      name: windows-gpu

vision/.circleci/config.yml, lines 893 to 896 in d675c0c:

  unittest_macos_cpu:
    <<: *binary_common
    macos:
      xcode: "12.0"
Apart from that, they all download the miniconda installer:

wget -O miniconda.sh "http://repo.continuum.io/miniconda/Miniconda3-latest-${os}-x86_64.sh"

curl --output miniconda.exe https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe -O

There is a container for that too, but again, I see no point: it makes no difference whether you install the environment with conda or mamba, other than the latter being significantly faster.

Member:
I'm happy to switch all of our jobs to mamba in another PR, but could we please stick to the same workflow that the other jobs are using? There is value in avoiding discrepancies with the rest of our CI jobs.

Contributor Author:
What other workflow? Manually installing conda and setting things up, or can I at least use a docker image with conda pre-installed?

Member:
Sorry, I'm a bit confused: what prevents us from staying as close as possible to unittest_linux_cpu?

+    resource_class: xlarge
+    steps:
+      - run:
+          name: Prepare mamba
+          command: |
+            mv ~/.bashrc ~/._bashrc
+            conda init bash
+            cat ~/.bashrc >> $BASH_ENV
+            mv ~/._bashrc ~/.bashrc
+      - checkout
+      - run:
+          name: Create environment
+          command: |
+            mamba env create -n jpeg-ref -f .circleci/unittest/jpeg-ref-env.yml
+            echo 'conda activate jpeg-ref' >> $BASH_ENV
+      - run:
+          name: Install torchvision
+          command: pip install -v --no-build-isolation --editable .
+      - run:
+          name: Enable JPEG ref tests
+          command: echo 'export PYTORCH_TEST_JPEG_REF=1' >> $BASH_ENV
+      - run_tests_selective:
+          file_or_dir: test/test_image.py::TestJPEGRef
 
   binary_linux_wheel:
     <<: *binary_common
     docker:
@@ -1093,6 +1120,7 @@ workflows:
       - unittest_torchhub
       - unittest_onnx
       - unittest_prototype
+      - unittest_jpeg_ref
   {{ unittest_workflows() }}

cmake:
19 changes: 19 additions & 0 deletions .circleci/unittest/jpeg-ref-env.yml
@@ -0,0 +1,19 @@
channels:
  - pytorch-nightly
  - conda-forge

dependencies:
  - python == 3.7.*
  - setuptools
  - compilers
  - ninja
  - cmake

  - cpuonly
  - pytorch

  - numpy
  - requests
  - libpng
  - jpeg
  - pillow >=5.3.0, !=8.3.*
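One note on the last pin: `pillow >=5.3.0, !=8.3.*` excludes the entire 8.3 series. Conda's match-spec syntax is close enough to PEP 440 here that the constraint can be sanity-checked with the `packaging` library (the version list below is purely illustrative, not from this PR):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# PEP 440 mirror of the conda pin above.
spec = SpecifierSet(">=5.3.0,!=8.3.*")

for candidate in ["5.2.0", "5.3.0", "8.3.2", "9.0.1"]:
    # `contains` checks a single version against the whole specifier set.
    print(candidate, spec.contains(Version(candidate)))
```

Only 8.3.x releases are rejected; 5.3.0 itself and anything from 8.4 onwards pass.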
63 changes: 32 additions & 31 deletions test/test_image.py
@@ -9,6 +9,7 @@
 import torch
 import torchvision.transforms.functional as F
 from common_utils import needs_cuda, assert_equal
+from common_utils import run_on_env_var
 from PIL import Image, __version__ as PILLOW_VERSION
 from torchvision.io.image import (
     decode_png,
@@ -478,50 +479,50 @@ def test_write_jpeg_reference(img_path, tmpdir):
     assert_equal(torch_bytes, pil_bytes)
 
 
-# TODO: Remove the skip. See https://github.com/pytorch/vision/issues/5162.
-@pytest.mark.skip("this test fails because PIL uses libjpeg-turbo")
+@run_on_env_var(
+    "PYTORCH_TEST_JPEG_REF",
+    skip_reason=(
+        "JPEG reference tests compare `torchvision` JPEG encoding against `Pillow`'s. "
+        "By default `torchvision` is built against `libjpeg` while `Pillow` builds against `libjpeg-turbo`. "
+        "Make sure to use the same underlying library and set PYTORCH_TEST_JPEG_REF=1 to run the tests."
+    ),
+)
 @pytest.mark.parametrize(
     "img_path",
     [pytest.param(jpeg_path, id=_get_safe_image_name(jpeg_path)) for jpeg_path in get_images(ENCODE_JPEG, ".jpg")],
 )
-def test_encode_jpeg(img_path):
-    img = read_image(img_path)
+class TestJPEGRef:
+    def test_encode_jpeg(self, img_path):
+        img = read_image(img_path)
 
-    pil_img = F.to_pil_image(img)
-    buf = io.BytesIO()
-    pil_img.save(buf, format="JPEG", quality=75)
+        pil_img = F.to_pil_image(img)
+        buf = io.BytesIO()
+        pil_img.save(buf, format="JPEG", quality=75)
 
-    encoded_jpeg_pil = torch.frombuffer(buf.getvalue(), dtype=torch.uint8)
+        encoded_jpeg_pil = torch.frombuffer(buf.getvalue(), dtype=torch.uint8)
 
-    for src_img in [img, img.contiguous()]:
-        encoded_jpeg_torch = encode_jpeg(src_img, quality=75)
-        assert_equal(encoded_jpeg_torch, encoded_jpeg_pil)
+        for src_img in [img, img.contiguous()]:
+            encoded_jpeg_torch = encode_jpeg(src_img, quality=75)
+            assert_equal(encoded_jpeg_torch, encoded_jpeg_pil)
 
+    def test_write_jpeg(self, img_path, tmpdir):
+        tmpdir = Path(tmpdir)
+        img = read_image(img_path)
+        pil_img = F.to_pil_image(img)
 
-# TODO: Remove the skip. See https://github.com/pytorch/vision/issues/5162.
-@pytest.mark.skip("this test fails because PIL uses libjpeg-turbo")
-@pytest.mark.parametrize(
-    "img_path",
-    [pytest.param(jpeg_path, id=_get_safe_image_name(jpeg_path)) for jpeg_path in get_images(ENCODE_JPEG, ".jpg")],
-)
-def test_write_jpeg(img_path, tmpdir):
-    tmpdir = Path(tmpdir)
-    img = read_image(img_path)
-    pil_img = F.to_pil_image(img)
+        torch_jpeg = str(tmpdir / "torch.jpg")
+        pil_jpeg = str(tmpdir / "pil.jpg")
 
-    torch_jpeg = str(tmpdir / "torch.jpg")
-    pil_jpeg = str(tmpdir / "pil.jpg")
+        write_jpeg(img, torch_jpeg, quality=75)
+        pil_img.save(pil_jpeg, quality=75)
 
-    write_jpeg(img, torch_jpeg, quality=75)
-    pil_img.save(pil_jpeg, quality=75)
+        with open(torch_jpeg, "rb") as f:
+            torch_bytes = f.read()
 
-    with open(torch_jpeg, "rb") as f:
-        torch_bytes = f.read()
-
-    with open(pil_jpeg, "rb") as f:
-        pil_bytes = f.read()
+        with open(pil_jpeg, "rb") as f:
+            pil_bytes = f.read()
 
-    assert_equal(torch_bytes, pil_bytes)
+        assert_equal(torch_bytes, pil_bytes)
 
 
 if __name__ == "__main__":
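For context, `run_on_env_var` is imported from `common_utils` and its definition is not part of this diff. A minimal sketch of such a gate, assuming it simply wraps `pytest.mark.skipif` around a check of the given environment variable, might look like this (the real helper in `common_utils` may differ):

```python
import os

import pytest


def run_on_env_var(name, *, skip_reason):
    # Skip the decorated test (or test class) unless the environment
    # variable `name` is set to a truthy value such as "1".
    return pytest.mark.skipif(
        os.environ.get(name, "0") == "0",
        reason=skip_reason,
    )
```

Applied to `TestJPEGRef`, the marker skips every test in the class unless `PYTORCH_TEST_JPEG_REF=1` is exported, which is exactly what the "Enable JPEG ref tests" step in the CircleCI job does.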