Conversation

@jeet4320 (Contributor):

Issue #, if available:

PR Checklist

  • I've prepended the PR title with tags for the frameworks/jobs this applies to: [mxnet, tensorflow, pytorch] | [ei/neuron] | [build] | [test] | [benchmark] | [ec2, ecs, eks, sagemaker]
  • (If applicable) I've documented below the DLC image/dockerfile this relates to
  • (If applicable) I've documented below the tests I've run on the DLC image
  • (If applicable) I've reviewed the licenses of updated and new binaries and their dependencies to make sure all licenses are on the Apache Software Foundation Third Party License Policy Category A or Category B license list. See https://www.apache.org/legal/resolved.html.
  • (If applicable) I've scanned the updated and new binaries to make sure they do not have vulnerabilities associated with them.

Pytest Marker Checklist

  • (If applicable) I've added the marker @pytest.mark.model("<model-type>") to the new tests I've added, to specify the Deep Learning model used in the test (use "N/A" if the test doesn't use a model)
  • (If applicable) I've added the marker @pytest.mark.integration("<feature-being-tested>") to the new tests I've added, to specify the feature being tested
  • (If applicable) I've added the marker @pytest.mark.multinode(<integer-num-nodes>) to the new tests I've added, to specify the number of nodes used in a multi-node test
  • (If applicable) I've added the marker @pytest.mark.processor(<"cpu"/"gpu"/"eia"/"neuron">) to new tests that apply to only one processor type (a combined example appears after this checklist)
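
As an illustration of how these markers combine, a new test might be decorated as in the sketch below. The test name and marker values are hypothetical placeholders, not an actual test from this repository, and the sketch assumes the markers are registered in the repo's pytest configuration (as the checklist implies).

    import pytest

    # Hypothetical example: the model, feature, node count, and processor values
    # below are placeholders chosen only to show the marker conventions.
    @pytest.mark.model("resnet50")
    @pytest.mark.integration("elastic_inference")
    @pytest.mark.multinode(2)
    @pytest.mark.processor("gpu")
    def test_example_inference_on_gpu_cluster():
        # The markers above are what the checklist covers; the test body itself
        # would exercise the DLC image as usual.
        ...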

EIA/NEURON Checklist

  • When creating a PR:
      • I've modified src/config/build_config.py in my PR branch by setting ENABLE_EI_MODE = True or ENABLE_NEURON_MODE = True (a sketch of this change appears after this checklist)
  • When the PR is reviewed and ready to be merged:
      • I've reverted the code change on the config file mentioned above
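
For context, the flags named above are module-level booleans in src/config/build_config.py. A minimal sketch of the temporary change is shown below; the default values and surrounding contents of the file are assumed, not copied from the repository.

    # src/config/build_config.py -- temporary state while an EI/Neuron PR is in review
    # (rest of the file omitted; defaults assumed to be False)
    ENABLE_EI_MODE = True         # set for an EI PR; revert to False before merging
    ENABLE_NEURON_MODE = False    # set this one to True instead for a Neuron PR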

Benchmark Checklist

  • When creating a PR:
      • I've modified src/config/test_config.py in my PR branch by setting ENABLE_BENCHMARK_DEV_MODE = True (a sketch of this change appears after this checklist)
  • When the PR is reviewed and ready to be merged:
      • I've reverted the code change on the config file mentioned above
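
Likewise, a minimal sketch of the benchmark toggle in src/config/test_config.py, with the default value assumed and other settings in the file omitted:

    # src/config/test_config.py -- temporary state while a benchmark PR is in review
    ENABLE_BENCHMARK_DEV_MODE = True   # revert to False once the PR is reviewed and ready to merge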

Reviewer Checklist

  • Reviewer: before merging, please cross-check:
      • I've verified that the code change on the config file mentioned above has already been reverted

Description:

Tests run:

DLC image/dockerfile:

Additional context:

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

os_version: "ubuntu18.04"
cuda_version: "cu110"
example: False
disable_sm_tag: True # [Default: False] This option is not used by Example images
A Contributor commented:

disable_sm_tag should be False

device_types: ["cpu", "gpu"]
python_versions: ["py36"]
os_version: "ubuntu18.04"
cuda_version: "cu110"
A Contributor commented:

cuda_version should be cu101

device_types: ["gpu"]
device_types: ["cpu"]
python_versions: ["py36"]
os_version: "ubuntu18.04"
A Contributor commented:

os_version should be ubuntu16.04

inference:
device_types: ["cpu", "gpu"]
python_versions: ["py36"]
os_version: "ubuntu18.04"
A Contributor commented:

os_version should be ubuntu16.04

jeet4320 merged commit 6d75e64 into aws:master on Apr 28, 2021
jeet4320 deleted the release-pt1.6-cpu branch on April 28, 2021 at 22:03
jeet4320 added a commit to jeet4320/deep-learning-containers that referenced this pull request on Apr 29, 2021:
* aws/master:
  [pytorch][release] Release pt1.6 Inference cpu, gpu and training cpu (aws#1074)
  [tensorflow, pytorch] [build] [test] [ec2, ecs, eks, sagemaker] Add EFA stack and tests (aws#1044)
  [pytorch][build][test] Update PT1.6.0 for pillow to 8.2.0 (aws#1071)
  Revert "[build,test] Disable dedicated telemetry tests and tags (aws#1045)" (aws#1055)