Merged

Commits (29)
e456cd3
Drop cuFile workaround for pre-CUDA 11.8
jakirkham Jul 16, 2025
5a6b778
Use dynamic library for nvJPEG
jakirkham Jul 16, 2025
0ca8ef7
Remove lingering CUDA 11 checks from Conda recipes
jakirkham Jul 16, 2025
b94d4e2
Drop CUDA 11 trove classifier
jakirkham Jul 16, 2025
9e46e6f
Drop CUDA 11 from docs
jakirkham Jul 16, 2025
19d6af2
Drop unused variable `CUDA_MAJOR_VERSION`
jakirkham Jul 16, 2025
6cc92a6
Use CMake's `CUDA::nvjpeg` target directly
jakirkham Jul 16, 2025
e899a6b
Drop workarounds to retrieve cuFile and nvJPEG
jakirkham Jul 16, 2025
47602ba
Drop more `libnvjpeg-static` occurrences
jakirkham Jul 16, 2025
0d94e84
Move cuFile include to stub source
jakirkham Jul 16, 2025
0b5fb05
Move cuFile include back to header
jakirkham Jul 16, 2025
6abd4b5
Always include cuFile header
jakirkham Jul 16, 2025
3a296bd
Use CMake's CUDA::cuFile target
jakirkham Jul 16, 2025
6ba441a
Find CMake's CUDAToolkit for CUDA dependencies
jakirkham Jul 16, 2025
64a69c7
Use `libcufile` on ARM
jakirkham Jul 16, 2025
53af8f2
Merge branch 'branch-25.08' into drop_cuda_11
jakirkham Jul 16, 2025
65f00dd
Merge branch 'branch-25.08' into drop_cuda_11
jakirkham Jul 22, 2025
2315f06
Use static NVJPEG if found otherwise use shared
jakirkham Jul 22, 2025
d53e58e
Drop unneeded link dependencies
jakirkham Jul 22, 2025
050ef58
Ensure CUDA Toolkit headers are picked up
jakirkham Jul 22, 2025
c44e7fd
Consolidate `target_*_definitions`
jakirkham Jul 22, 2025
3158ca9
Add CUDA Toolkit includes to cufile_stub consumers
jakirkham Jul 22, 2025
49934d0
Merge branch 'branch-25.08' into drop_cuda_11
jakirkham Jul 31, 2025
fdf955b
Merge branch 'branch-25.08' into drop_cuda_11
jakirkham Jul 31, 2025
a3d2f7a
Merge branch 'branch-25.10' into drop_cuda_11
jakirkham Jul 31, 2025
ad02b70
Update gds/CMakeLists.txt
grlee77 Aug 20, 2025
f06bdc9
Merge branch 'branch-25.10' into drop_cuda_11
bdice Aug 20, 2025
1b5a984
Update gds/CMakeLists.txt
grlee77 Aug 21, 2025
d42ec91
update cupy install strings in notebooks
grlee77 Aug 21, 2025
1 change: 0 additions & 1 deletion conda/environments/all_cuda-129_arch-aarch64.yaml
@@ -18,7 +18,6 @@ dependencies:
- ipython
- lazy-loader>=0.4
- libnvjpeg-dev
- libnvjpeg-static
- matplotlib-base>=3.7
- nbsphinx
- ninja
1 change: 0 additions & 1 deletion conda/environments/all_cuda-129_arch-x86_64.yaml
@@ -19,7 +19,6 @@ dependencies:
- lazy-loader>=0.4
- libcufile-dev
- libnvjpeg-dev
- libnvjpeg-static
- matplotlib-base>=3.7
- nbsphinx
- ninja
22 changes: 3 additions & 19 deletions conda/recipes/libcucim/conda_build_config.yaml
@@ -1,14 +1,11 @@
c_compiler_version:
- 13 # [not os.environ.get("RAPIDS_CUDA_VERSION", "").startswith("11")]
- 11 # [os.environ.get("RAPIDS_CUDA_VERSION", "").startswith("11")]
- 13

cxx_compiler_version:
- 13 # [not os.environ.get("RAPIDS_CUDA_VERSION", "").startswith("11")]
- 11 # [os.environ.get("RAPIDS_CUDA_VERSION", "").startswith("11")]
- 13

cuda_compiler:
- cuda-nvcc # [not os.environ.get("RAPIDS_CUDA_VERSION", "").startswith("11")]
- nvcc # [os.environ.get("RAPIDS_CUDA_VERSION", "").startswith("11")]
- cuda-nvcc
Comment on lines 1 to +8
No more GCC 11 now that we are on CUDA 12+

This sticks with GCC 13 on CUDA 12 for simplicity

Though we have been bumping to GCC 14 in RAPIDS projects. So that could be done as a follow-up


c_stdlib:
- sysroot
@@ -18,16 +15,3 @@ c_stdlib_version:

cmake_version:
- ">=3.30.4"

# The CTK libraries below are missing from the conda-forge::cudatoolkit package
# for CUDA 11. The "*_host_*" version specifiers correspond to `11.8` packages
# and the "*_run_*" version specifiers correspond to `11.x` packages.

cuda11_libcufile_host_version:
- "1.4.0.31"

cuda11_libcufile_run_version:
- ">=1.0.0.82,<=1.4.0.31"

cuda11_libnvjpeg_host_version:
- "11.6.0.55"
Comment on lines -22 to -33
These are CUDA 11 pinnings for dependencies outside the Conda cudatoolkit package

None of that is relevant for CUDA 12+, which has the full CTK as packages with associated pinning

Also the usage of these values was dropped already with PR: #889

Hence these can just be dropped

5 changes: 2 additions & 3 deletions conda/recipes/libcucim/meta.yaml
@@ -55,15 +55,14 @@ requirements:
host:
- cuda-version ={{ cuda_version }}
- cuda-cudart-dev
- libcufile-dev # [linux64]
- libcufile-dev
cuFile is available on all platforms. Hence it is added as a dependency here

- libnvjpeg-dev
- libnvjpeg-static
Comment on lines 59 to -60
nvJPEG is now dynamically linked in the Conda case. Hence there is no need for the static library

- nvtx-c >=3.1.0
- openslide
run:
- {{ pin_compatible('cuda-version', max_pin='x', min_pin='x') }}
- cuda-cudart
- libcufile # [linux64]
- libcufile
This adds the cuFile dependency on ARM

However the CTK only added cuFile for ARM in CUDA 12.2: conda-forge/libcufile-feedstock#9

So this won't work for ARM with CUDA 12.0 or 12.1, meaning we need a workaround similar to what KvikIO did: rapidsai/kvikio#754

Another option would be to just start depending on KvikIO

Addressing in PR: #930

- libnvjpeg
run_constrained:
- {{ pin_compatible('openslide') }}
23 changes: 16 additions & 7 deletions cpp/plugins/cucim.kit.cuslide/CMakeLists.txt
@@ -125,8 +125,6 @@ superbuild_depend(cli11)
superbuild_depend(pugixml)
superbuild_depend(json)
superbuild_depend(libdeflate)
superbuild_depend(nvjpeg)
superbuild_depend(libculibos)
Comment on lines -128 to -129
These handled finding the nvJPEG and cuLIBOS dependencies via custom CMake logic

That is no longer needed now that they come from the CTK

Hence this and the associated files are dropped


################################################################################
# Find cucim package
@@ -216,11 +214,6 @@ target_compile_options(${CUCIM_PLUGIN_NAME} PRIVATE $<$<COMPILE_LANGUAGE:CXX>:-W
# Link libraries
target_link_libraries(${CUCIM_PLUGIN_NAME}
PRIVATE
# Use custom nvjpeg_static that supports GPU input (>= CUDA 11.6)
deps::nvjpeg_static # add this before cudart so that nvjpeg.h in static library takes precedence.
# Add CUDA::culibos to link necessary methods for 'deps::nvjpeg_static'
CUDA::culibos # for nvjpeg
CUDA::cudart
deps::fmt
cucim::cucim
deps::libtiff
Expand All @@ -231,6 +224,22 @@ target_link_libraries(${CUCIM_PLUGIN_NAME}
deps::json
deps::libdeflate
)
if (TARGET CUDA::nvjpeg_static)
target_link_libraries(${CUCIM_PLUGIN_NAME}
PRIVATE
# Add nvjpeg before cudart so that nvjpeg.h in static library takes precedence.
CUDA::nvjpeg_static
# Add CUDA::culibos to link necessary methods for 'deps::nvjpeg_static'
CUDA::culibos
Comment on lines +230 to +233
Looks like I goofed on the indenting here. Will follow up on this

Addressing in PR: #931

CUDA::cudart
)
else()
target_link_libraries(${CUCIM_PLUGIN_NAME}
PRIVATE
CUDA::nvjpeg
CUDA::cudart
)
endif()
Comment on lines +227 to +242
Right now this checks for the static nvJPEG target and uses it if found. Otherwise it dynamically links to nvJPEG

This works for the needs of cuCIM. Namely wheels continue to statically link to nvJPEG. Meanwhile Conda packages can dynamically link to nvJPEG

However, what would be better is to add an option to link nvJPEG either dynamically or statically (like cuFile's option). The code here could then check that option (just like cuFile's linking logic), and wheel and Conda builds could pass that flag based on their needs (see the sketch after this thread)

Raised this in a new issue: #932
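
For illustration, here is a minimal sketch of what such an option might look like. The option name `CUCIM_STATIC_NVJPEG` and its default are assumptions for this sketch, not something this PR adds:

```cmake
# Hypothetical switch mirroring CUCIM_STATIC_GDS (name and default are assumptions).
option(CUCIM_STATIC_NVJPEG "Statically link nvJPEG" OFF)

if (CUCIM_STATIC_NVJPEG)
    target_link_libraries(${CUCIM_PLUGIN_NAME}
        PRIVATE
            # Add nvjpeg before cudart so that nvjpeg.h in the static library takes precedence.
            CUDA::nvjpeg_static
            # culibos supplies symbols required by the static nvJPEG library.
            CUDA::culibos
            CUDA::cudart
    )
else()
    target_link_libraries(${CUCIM_PLUGIN_NAME}
        PRIVATE
            CUDA::nvjpeg
            CUDA::cudart
    )
endif()
```

Wheel builds would then pass `-DCUCIM_STATIC_NVJPEG=ON` while Conda builds would leave it `OFF`, mirroring how `CUCIM_STATIC_GDS` already selects the cuFile linkage.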


target_include_directories(${CUCIM_PLUGIN_NAME}
PUBLIC
Expand Down
42 changes: 0 additions & 42 deletions cpp/plugins/cucim.kit.cuslide/cmake/deps/libculibos.cmake

This file was deleted.

41 changes: 0 additions & 41 deletions cpp/plugins/cucim.kit.cuslide/cmake/deps/nvjpeg.cmake

This file was deleted.

1 change: 0 additions & 1 deletion dependencies.yaml
@@ -173,7 +173,6 @@ dependencies:
cuda: "12.*"
packages:
- libnvjpeg-dev
- libnvjpeg-static
Now that the nvJPEG version we need is in the CTK and there are Conda packages for it, we have switched to dynamic linking

Already commented on the relevant build logic making this change above

- output_types: conda
matrices:
- matrix:
51 changes: 10 additions & 41 deletions gds/CMakeLists.txt
@@ -24,6 +24,9 @@ if (NOT APPLE)
set(CMAKE_INSTALL_RPATH $ORIGIN)
endif ()

# Find CUDA Toolkit for cudart and cufile.
find_package(CUDAToolkit REQUIRED)
Comment on lines +27 to +28
To ensure cuFile is picked up from the CTK, we add this find_package line

Note that in the case where nvJPEG is used, we already had an equivalent line. Hence no change there
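
For reference, `find_package(CUDAToolkit)` is what provides the imported `CUDA::*` targets and the `CUDAToolkit_INCLUDE_DIRS` variable used below. A minimal consumer looks roughly like this (the `example_consumer` target and source file are illustrative only):

```cmake
find_package(CUDAToolkit REQUIRED)

add_library(example_consumer SHARED example.cpp)  # illustrative target and source

target_link_libraries(example_consumer
    PRIVATE
        CUDA::cudart         # CUDA runtime
        CUDA::cuFile_static  # or CUDA::cuFile for the shared cuFile library
)

# Consumers that only need the CTK headers can add the include directories directly.
target_include_directories(example_consumer PRIVATE ${CUDAToolkit_INCLUDE_DIRS})
```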


################################################################################
# Add library: cufile_stub
################################################################################
@@ -52,52 +55,19 @@ target_compile_options(cufile_stub
)

## Link libraries
target_link_libraries(cufile_stub
if (CUCIM_STATIC_GDS)
# Enabling CUCIM_STATIC_GDS statically links cuFile
target_link_libraries(cufile_stub
PUBLIC
${CMAKE_DL_LIBS}
Comment on lines +58 to 62
jakirkham Aug 22, 2025
As another note, it appears we define a bunch of stuff unconditionally that needs dlopen, etc. and thus need to link to ${CMAKE_DL_LIBS}. However we don't actually need it in the static case, so this linkage could be dropped (after putting some of the C++ code behind guards); a sketch follows the endif() below

Raised in issue: #933

PRIVATE
Comment on lines +58 to 63
This diff (and the diff behind the fold) looks more exciting than it is

All this does is reproduce the previous logic

cucim/gds/CMakeLists.txt, lines 96 to 101 in 54cc342:

target_link_libraries(cufile_stub
PUBLIC
${CMAKE_DL_LIBS}
PRIVATE
deps::gds_static
)

It just replaces deps::gds_static with CUDA::cuFile_static. The former was a stand-in for what CMake now provides. So we just switch to the latter

CUDA::cudart
)

# Set GDS include path (cufile.h)
if (DEFINED ENV{CONDA_BUILD} AND EXISTS $ENV{PREFIX}/include/cufile.h)
set(GDS_INCLUDE_PATH "$ENV{PREFIX}/include")
elseif (DEFINED ENV{CONDA_PREFIX} AND EXISTS $ENV{CONDA_PREFIX}/include/cufile.h)
set(GDS_INCLUDE_PATH "$ENV{CONDA_PREFIX}/include")
elseif (EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/../temp/cuda/include/cufile.h)
set(GDS_INCLUDE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/../temp/cuda/include)
else ()
set(GDS_INCLUDE_PATH /usr/local/cuda/include)
endif ()

message("Set GDS_INCLUDE_PATH to '${GDS_INCLUDE_PATH}'.")
Comment on lines -62 to -73
This was just to find the path to the cuFile download we did on CI

Using CMake's CUDA::cuFile* targets to pick up cuFile from the CUDA Toolkit obviates this need


# Enabling CUCIM_STATIC_GDS assumes that lib/libcufile_static.a and include/cufile.h is available
# under ../temp/cuda folder.
if (CUCIM_STATIC_GDS)
add_library(deps::gds_static STATIC IMPORTED GLOBAL)

if (DEFINED ENV{CONDA_BUILD})
set(GDS_STATIC_LIB_PATH "$ENV{PREFIX}/lib/libcufile_static.a")
elseif (DEFINED ENV{CONDA_PREFIX})
set(GDS_STATIC_LIB_PATH "$ENV{CONDA_PREFIX}/lib/libcufile_static.a")
elseif (EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/../temp/cuda/lib64/libcufile_static.a)
set(GDS_STATIC_LIB_PATH ${CMAKE_CURRENT_SOURCE_DIR}/../temp/cuda/lib64/libcufile_static.a)
else ()
set(GDS_STATIC_LIB_PATH /usr/local/cuda/lib64/libcufile_static.a)
endif ()

message("Set GDS_STATIC_LIB_PATH to '${GDS_STATIC_LIB_PATH}'.")
Comment on lines -80 to -90
Again obviated by CMake's CUDA::cuFile* targets


set_target_properties(deps::gds_static PROPERTIES
IMPORTED_LOCATION "${GDS_STATIC_LIB_PATH}"
INTERFACE_INCLUDE_DIRECTORIES "${GDS_INCLUDE_PATH}"
CUDA::cuFile_static
)
Comment on lines +64 to 65
Here is where deps::gds_static was replaced with CUDA::cuFile_static. For some reason GitHub put a large number of deleted lines between the two; unclear why it did that. In any event, this is connected with the above content

else()
# Use `dlopen` to load cuFile at runtime
target_link_libraries(cufile_stub
PUBLIC
${CMAKE_DL_LIBS}
PRIVATE
deps::gds_static
)
Comment on lines +66 to 71
This is just the same dynamic cuFile library case as before

cucim/gds/CMakeLists.txt, lines 54 to 60 in 54cc342:

## Link libraries
target_link_libraries(cufile_stub
PUBLIC
${CMAKE_DL_LIBS}
PRIVATE
CUDA::cudart
)

For some reason GitHub included deps::gds_static as removed, but it is just confusing things. So ignore that

Also as we use dlopen to load cuFile, we don't need to link to it in this case. Hence why CUDA::cuFile is not here. We do add the CTK header files later (as before)

We did link CUDA::cudart before. However we did not actually see any usage of cudart, so it was dropped. If there were an issue, we would have seen it at link time, but we don't see any

That said, @gigony please let me know if I'm missing something here

endif()
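
As a rough sketch of the refinement suggested in the `${CMAKE_DL_LIBS}` comment above (issue #933), the DL linkage could be confined to the dlopen branch. This is an assumption about a possible follow-up, not what this PR does, and it presumes the dlopen-based C++ paths are compiled out in the static case:

```cmake
if (CUCIM_STATIC_GDS)
    # Static cuFile: no runtime dlopen, so ${CMAKE_DL_LIBS} is not needed here.
    target_link_libraries(cufile_stub
        PRIVATE
            CUDA::cuFile_static
    )
else()
    # Dynamic case: cuFile is loaded at runtime via dlopen, so only the DL library is linked.
    target_link_libraries(cufile_stub
        PUBLIC
            ${CMAKE_DL_LIBS}
    )
endif()
```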

@@ -110,9 +80,8 @@ PUBLIC
target_include_directories(cufile_stub
PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/include>
$<BUILD_INTERFACE:${GDS_INCLUDE_PATH}>
${CUDAToolkit_INCLUDE_DIRS}
Since cuFile is picked up from the CTK now, this can just be the CTK header directories

PRIVATE
# Add path to cufile.h explicitly. ${TOP}/temp/cuda would be available by `./run copy_gds_files_`
As Greg already noted, the comment above is irrelevant. So it is dropped

${CMAKE_CURRENT_SOURCE_DIR}/../cpp/include # for including helper.h in cucim/dynlib
)

7 changes: 6 additions & 1 deletion gds/include/cufile_stub.h
@@ -16,7 +16,12 @@
#ifndef CUCIM_CUFILE_STUB_H
#define CUCIM_CUFILE_STUB_H

#include "cufile.h"
// Try to include the real cufile.h, fall back to minimal types if not available
#if __has_include(<cufile.h>)
#include <cufile.h>
#else
#include "cufile_stub_types.h"
#endif
Comment on lines +19 to +24
@grlee77 it looks like you added this change. Could you please share a bit more context?

Given the CTK has cufile.h, I would think we could just use that #include directly

Is there some other issue that is coming up?


#include "cucim/dynlib/helper.h"

10 changes: 0 additions & 10 deletions gds/src/cufile_stub.cpp
@@ -61,19 +61,9 @@ void CuFileStub::load()
{
// Note: Load the dynamic library with RTLD_NODELETE flag because libcufile.so uses thread_local which can
// cause a segmentation fault if the library is dynamically loaded/unloaded. (See #158)
// CUDA versions before CUDA 11.7.1 did not ship libcufile.so.0, so this is
// a workaround that adds support for all prior versions of libcufile.
handle_ = cucim::dynlib::load_library(
{
"libcufile.so.0",
"libcufile.so.1.3.0" /* 11.7.0 */,
"libcufile.so.1.2.1" /* 11.6.2, 11.6.1 */,
"libcufile.so.1.2.0" /* 11.6.0 */,
"libcufile.so.1.1.1" /* 11.5.1 */,
"libcufile.so.1.1.0" /* 11.5.0 */,
"libcufile.so.1.0.2" /* 11.4.4, 11.4.3, 11.4.2 */,
"libcufile.so.1.0.1" /* 11.4.1 */,
"libcufile.so.1.0.0" /* 11.4.0 */
},
Comment on lines 62 to 67
In CUDA 11, cuFile's SOVERSION changed repeatedly. Hence we had this workaround. Since we used dlopen, the workarounds for all CUDA 11 versions were needed even when building with CUDA 11.8 only

Now that CUDA 11 is dropped, none of these workarounds are needed. Hence this now just loads libcufile.so.0

RTLD_LAZY | RTLD_LOCAL | RTLD_NODELETE);
if (handle_ == nullptr)
4 changes: 2 additions & 2 deletions notebooks/Accessing_File_with_GDS.ipynb
@@ -34,7 +34,7 @@
"\n",
"or\n",
"```\n",
"!pip install cupy-cuda110\n",
"!pip install cupy-cuda12x\n",
"!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html\n",
Comment on lines -37 to 38
@grlee77 thanks for updating the CuPy packages! 🙏

Should we do the same with PyTorch and friends?

There are a bunch more lines like these throughout the notebooks. Perhaps we should update them all and rerun to make sure they are working ok

"```"
]
@@ -49,7 +49,7 @@
"\n",
"# or\n",
"\n",
"#!pip install cupy-cuda110\n",
"#!pip install cupy-cuda12x\n",
"#!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html"
]
},
6 changes: 3 additions & 3 deletions notebooks/Basic_Usage.ipynb
@@ -22,7 +22,7 @@
"or\n",
"```\n",
"!pip install pillow\n",
"!pip install numpy scipy scikit-image cupy-cuda110 # for cucim dependency (assuming that CUDA 11.0 is used for CuPy)\n",
"!pip install numpy scipy scikit-image cupy-cuda12x # for cucim dependency (assuming that CUDA 12.x is used for CuPy)\n",
"```"
]
},
@@ -32,12 +32,12 @@
"metadata": {},
"outputs": [],
"source": [
"#!conda install -c conda-forge pillow\n",
"# !conda install -c conda-forge pillow\n",
Looks like an errant space snuck in here

"\n",
"# or\n",
"\n",
"# !pip install pillow\n",
"# !pip install numpy scipy scikit-image cupy-cuda110 # for cucim dependency (assuming that CUDA 11.0 is used for CuPy)"
"# !pip install numpy scipy scikit-image cupy-cuda12x # for cucim dependency (assuming that CUDA 12.x is used for CuPy)"
]
},
{
4 changes: 2 additions & 2 deletions notebooks/Working_with_DALI.ipynb
@@ -24,7 +24,7 @@
"```\n",
"or\n",
"```\n",
"!pip install cucim scipy scikit-image cupy-cuda110 matplotlib\n",
"!pip install cucim scipy scikit-image cupy-cuda12x matplotlib\n",
"!pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda110\n",
"```"
]
@@ -40,7 +40,7 @@
"\n",
"# or\n",
"\n",
"#!pip install cucim scipy scikit-image cupy-cuda110 matplotlib\n",
"#!pip install cucim scipy scikit-image cupy-cuda12x matplotlib\n",
"# Assume that CUDA Toolkit 11.0 is available on the systsem.\n",
"#!pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda110"
]