docker/install/install_python_packages.sh — 2 additions & 5 deletions
@@ -23,11 +23,8 @@ set -u
 CUDA_VERSION=${1:-cu128}

 pip3 install torch --index-url https://download.pytorch.org/whl/${CUDA_VERSION}
-pip3 install requests responses ninja pytest numpy scipy build nvidia-ml-py cuda-python einops nvidia-nvshmem-cu12
-pip3 install click
-pip3 install 'apache-tvm-ffi==0.1.0b15'
-pip3 install nvidia-cutlass-dsl
-pip3 install 'nvidia-cudnn-frontend>=1.13.0'
+pip3 install -r requirements.txt
+pip3 install responses pytest scipy build cuda-python nvidia-nvshmem-cu12
Review comment (Contributor) — medium:
To improve efficiency, you can combine these two pip install commands into a single call. This allows pip to perform dependency resolution once for all packages, which is faster and can help avoid potential dependency conflicts.

Suggested change:
-pip3 install -r requirements.txt
-pip3 install responses pytest scipy build cuda-python nvidia-nvshmem-cu12
+pip3 install -r requirements.txt responses pytest scipy build cuda-python nvidia-nvshmem-cu12


 # Install cudnn package based on CUDA version
 if [[ "$CUDA_VERSION" == *"cu13"* ]]; then
requirements.txt — 13 additions (new file)
@@ -0,0 +1,13 @@
+apache-tvm-ffi==0.1.0b15
+click
+einops
+ninja
+numpy
+nvidia-cudnn-frontend>=1.13.0
+nvidia-cutlass-dsl>=4.2.1
+nvidia-ml-py
+packaging>=24.2
+requests
+tabulate
+torch
Review comment (Contributor) — high:

torch is installed separately in docker/install/install_python_packages.sh using a specific index URL based on the CUDA version. Including it in requirements.txt is redundant and can lead to version conflicts or incorrect installations. For users installing via setup.py, it's standard practice to expect them to have torch pre-installed according to their specific hardware and CUDA setup. Please remove torch from this file.

+tqdm
setup.py — 15 additions & 15 deletions
@@ -41,21 +41,21 @@ def generate_build_meta() -> None:

 ext_modules: List[setuptools.Extension] = []
 cmdclass: Mapping[str, type[setuptools.Command]] = {}
-install_requires = [
-    "numpy",
-    "torch",
-    "ninja",
-    "requests",
-    "nvidia-ml-py",
-    "einops",
-    "click",
-    "tqdm",
-    "tabulate",
-    "apache-tvm-ffi==0.1.0b15",
-    "packaging>=24.2",
-    "nvidia-cudnn-frontend>=1.13.0",
-    "nvidia-cutlass-dsl>=4.2.1",
-]


+def get_install_requires() -> List[str]:
+    """Read install requirements from requirements.txt."""
+    requirements_file = root / "requirements.txt"
+    if not requirements_file.exists():
+        return []
Review comment on lines +49 to +50 (Contributor) — high:

Silently returning an empty list when requirements.txt is missing can lead to a successful installation with missing dependencies, which will cause hard-to-debug runtime errors. It's better to let the build fail fast if requirements.txt is not found. Removing this check will allow path.read_text() to raise a FileNotFoundError, which is the desired behavior.

+    return [
+        line.strip()
+        for line in requirements_file.read_text().splitlines()
+        if line.strip() and not line.strip().startswith("#")
+    ]


+install_requires = get_install_requires()
generate_build_meta()


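The fail-fast variant the last review comment asks for can be sketched as follows. This is a sketch, not the PR's actual code: the `root` path is taken as a parameter here for testability, whereas the PR's `setup.py` uses a module-level `root`. Dropping the `exists()` guard lets `Path.read_text()` raise `FileNotFoundError`, aborting the build instead of silently producing a package with no declared dependencies.

```python
from pathlib import Path
from typing import List


def get_install_requires(root: Path) -> List[str]:
    """Read install requirements, failing fast if requirements.txt is missing."""
    requirements_file = root / "requirements.txt"
    # No exists() check: read_text() raises FileNotFoundError for a missing
    # file, so a broken source tree fails at build time rather than at runtime.
    return [
        line.strip()
        for line in requirements_file.read_text().splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
```

Blank lines and `#` comment lines are filtered out, matching the parsing logic already in the diff; only the missing-file behavior changes.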