This repository has been archived by the owner on Mar 19, 2024. It is now read-only.
Instructions To Reproduce the 🐛 Bug:
What changes you made (git diff) or what code you wrote: I haven't changed anything after install.
What exact command you ran: the SimCLR-1GPU example: python tools/run_distributed_engines.py hydra.verbose=true config=configs/config/test/integration_test/quick_simclr config.DATA.TRAIN.DATA_SOURCES=[datasource_path]
What you observed (including full logs):
Traceback (most recent call last):
File "tools/run_distributed_engines.py", line 57, in <module>
hydra_main(overrides=overrides)
File "tools/run_distributed_engines.py", line 33, in hydra_main
cfg = compose_hydra_configuration(overrides)
File "/home/shashank/tung/vissl/vissl/utils/hydra_config.py", line 125, in compose_hydra_configuration
return compose("defaults", overrides=overrides)
File "/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/hydra/experimental/compose.py", line 31, in compose
cfg = gh.hydra.compose_config(
File "/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 505, in compose_config
self.config_loader.ensure_main_config_source_available()
File "/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/hydra/_internal/config_loader_impl.py", line 120, in ensure_main_config_source_available
if not source.available():
File "/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/hydra/_internal/core_plugins/importlib_resources_config_source.py", line 72, in available
ret = resources.is_resource(self.path, "__init__.py") # type:ignore
AttributeError: module 'importlib_resources' has no attribute 'is_resource'
Please simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.
Expected behavior:
If there are no obvious errors in "what you observed" provided above,
please tell us the expected behavior.
Environment:
Provide your environment information using the following command:
/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA A100 80GB PCIe with CUDA capability sm_80 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA A100 80GB PCIe GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
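This UserWarning points at a second, independent problem: the installed PyTorch 1.8.1 wheel was built for CUDA 10.2 and only ships kernels up to sm_75, while the A100 requires sm_80, so GPU runs would fail even after the Hydra error is fixed. In a live session this can be checked by comparing `torch.cuda.get_device_capability()` against `torch.cuda.get_arch_list()`; a minimal torch-free sketch of that check (the helper name `arch_supported` is made up for illustration):

```python
def arch_supported(capability, arch_list):
    """Return True if a device compute capability (major, minor) has a
    matching sm_* entry in a PyTorch-style architecture list, e.g. the
    output of torch.cuda.get_arch_list()."""
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list
```

For the environment above, `arch_supported((8, 0), ["sm_37", ..., "sm_75"])` is False; installing a CUDA 11 build of PyTorch (which includes sm_80 kernels) would be needed to use the A100.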
------------------- -------------------------------------------------------------------------------------------
sys.platform linux
Python 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0]
numpy 1.19.5
Pillow 9.3.0
vissl 0.1.6 @/home/shashank/tung/vissl/vissl
GPU available True
GPU 0,1 NVIDIA A100 80GB PCIe
CUDA_HOME /usr
torchvision 0.9.1 @/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/torchvision
hydra 1.0.7 @/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/hydra
classy_vision 0.7.0.dev @/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/classy_vision
tensorboard 2.14.0
apex 0.1 @/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/apex
cv2 4.9.0
PyTorch 1.8.1 @/home/shashank/anaconda3/envs/tung_ssl/lib/python3.8/site-packages/torch
PyTorch debug build False
------------------- -------------------------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
CPU info:
---------------------------------- --------------------------------------------------------------------------------------------------------
Architecture x86_64
CPU op-mode(s) 32-bit, 64-bit
Byte Order Little Endian
Address sizes 43 bits physical, 48 bits virtual
CPU(s) 128
On-line CPU(s) list 0-127
Thread(s) per core 1
Core(s) per socket 64
Socket(s) 2
NUMA node(s) 2
Vendor ID AuthenticAMD
CPU family 23
Model 49
Model name AMD EPYC 7742 64-Core Processor
Stepping 0
Frequency boost enabled
CPU MHz 1499.869
CPU max MHz 2250.0000
CPU min MHz 1500.0000
BogoMIPS 4500.00
Virtualization AMD-V
L1d cache 4 MiB
L1i cache 4 MiB
L2 cache 64 MiB
L3 cache 512 MiB
NUMA node0 CPU(s) 0-63
NUMA node1 CPU(s) 64-127
Vulnerability Gather data sampling Not affected
Vulnerability Itlb multihit Not affected
Vulnerability L1tf Not affected
Vulnerability Mds Not affected
Vulnerability Meltdown Not affected
Vulnerability Mmio stale data Not affected
Vulnerability Retbleed Vulnerable
Vulnerability Spec store bypass Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1 Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2 Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds Not affected
Vulnerability Tsx async abort Not affected
---------------------------------- --------------------------------------------------------------------------------------------------------
When to expect Triage
VISSL devs and contributors aim to triage issues as soon as possible; however, as a general guideline, we ask users to expect triaging within 1-2 weeks.