38 changes: 5 additions & 33 deletions README.md
@@ -83,46 +83,18 @@ TileLang achieves exceptional performance across a variety of computational patterns
</div>

## Installation
### Method 1: Install with Pip

The quickest way to get started is to install the latest release from PyPI:
### Quick Installation

```bash
pip install tilelang
```

Alternatively, you can install directly from the GitHub repository:

```bash
pip install git+https://github.com/tile-ai/tilelang
```

Or install locally:
The easiest way to install TileLang is via pip:

```bash
# install required system dependencies
sudo apt-get update
sudo apt-get install -y python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev

pip install -e . -v # remove -e option if you don't want to install in editable mode, -v for verbose output
pip install tilelang
```

### Method 2: Build from Source
We currently provide three ways to install **tile-lang** from source:
- [Install from Source (using your own TVM installation)](./docs/get_started/Installation.md#method-1-install-from-source-using-your-own-tvm-installation)
- [Install from Source (using the bundled TVM submodule)](./docs/get_started/Installation.md#method-2-install-from-source-using-the-bundled-tvm-submodule)
- [Install Using the Provided Script](./docs/get_started/Installation.md#method-3-install-using-the-provided-script)

### Method 3: Install with Nightly Version

For users who want access to the latest features and improvements before official releases, we provide nightly builds of **tile-lang**.

```bash
pip install tilelang -f https://tile-ai.github.io/whl/nightly/cu121/
# or pip install tilelang --find-links https://tile-ai.github.io/whl/nightly/cu121/
```
### Advanced Installation

> **Note:** Nightly builds contain the most recent code changes but may be less stable than official releases. They're ideal for testing new features or if you need a specific bugfix that hasn't been released yet.
For more installation options, including **Nightly Builds**, **Building from Source**, and **Docker**, please refer to the [Installation Guide](./docs/get_started/Installation.md).

## Quick Start

198 changes: 105 additions & 93 deletions docs/get_started/Installation.md
@@ -32,6 +32,74 @@ After installing tilelang, you can verify the installation by running:
python -c "import tilelang; print(tilelang.__version__)"
```

## Install with Nightly Version

For users who want access to the latest features and improvements before official releases, we provide nightly builds of tilelang.

```bash
pip install tilelang -f https://tile-ai.github.io/whl/nightly/cu121/
# or pip install tilelang --find-links https://tile-ai.github.io/whl/nightly/cu121/
```

> **Note:** Nightly builds contain the most recent code changes but may be less stable than official releases. They're ideal for testing new features or if you need a specific bugfix that hasn't been released yet.

## Install Using Docker

For users who prefer a containerized environment with all dependencies pre-configured, tilelang provides Docker images for different CUDA versions. This method is particularly useful for ensuring consistent environments across different systems.

**Prerequisites:**
- Docker installed on your system
- An NVIDIA Docker runtime or GPU is not required to build tilelang: you can build the image on a host without a GPU and run it on another machine that has one.

1. **Clone the Repository**:

```bash
git clone --recursive https://github.com/tile-ai/tilelang
cd tilelang
```

2. **Build Docker Image**:

Navigate to the docker directory and build the image for your desired CUDA version:

```bash
cd docker
docker build -f Dockerfile.cu120 -t tilelang-cu120 .
```

Available Dockerfiles:
- `Dockerfile.cu120` - For CUDA 12.0
- Other CUDA versions may be available in the docker directory

3. **Run Docker Container**:

Start the container with GPU access and volume mounting:

```bash
docker run -itd \
--shm-size 32g \
--gpus all \
-v /home/tilelang:/home/tilelang \
--name tilelang_b200 \
tilelang-cu120 \
/bin/zsh
```

**Command Parameters Explanation:**
- `--shm-size 32g`: Increases shared memory size for better performance
- `--gpus all`: Enables access to all available GPUs
- `-v /home/tilelang:/home/tilelang`: Mounts host directory to container (adjust path as needed)
- `--name tilelang_b200`: Assigns a name to the container for easy management
- `/bin/zsh`: Uses zsh as the default shell

4. **Access the Container and Verify Installation**:

```bash
docker exec -it tilelang_b200 /bin/zsh
# Inside the container:
python -c "import tilelang; print(tilelang.__version__)"
```

## Building from Source

**Prerequisites for building from source:**
@@ -111,79 +111,8 @@ TVM_ROOT=<your-tvm-repo> pip install . -v

> **Note**: This will still rebuild the TVM-related libraries (stored in `TL_LIBS`), and this method often leads to path issues. Check `env.py` for environment variables that may not be set properly.

(install-using-docker)=

## Install Using Docker

For users who prefer a containerized environment with all dependencies pre-configured, tilelang provides Docker images for different CUDA versions. This method is particularly useful for ensuring consistent environments across different systems.

**Prerequisites:**
- Docker installed on your system
- An NVIDIA Docker runtime or GPU is not required to build tilelang: you can build the image on a host without a GPU and run it on another machine that has one.

1. **Clone the Repository**:

```bash
git clone --recursive https://github.com/tile-ai/tilelang
cd tilelang
```

2. **Build Docker Image**:

Navigate to the docker directory and build the image for your desired CUDA version:

```bash
cd docker
docker build -f Dockerfile.cu120 -t tilelang-cu120 .
```

Available Dockerfiles:
- `Dockerfile.cu120` - For CUDA 12.0
- Other CUDA versions may be available in the docker directory

3. **Run Docker Container**:

Start the container with GPU access and volume mounting:

```bash
docker run -itd \
--shm-size 32g \
--gpus all \
-v /home/tilelang:/home/tilelang \
--name tilelang_b200 \
tilelang-cu120 \
/bin/zsh
```

**Command Parameters Explanation:**
- `--shm-size 32g`: Increases shared memory size for better performance
- `--gpus all`: Enables access to all available GPUs
- `-v /home/tilelang:/home/tilelang`: Mounts host directory to container (adjust path as needed)
- `--name tilelang_b200`: Assigns a name to the container for easy management
- `/bin/zsh`: Uses zsh as the default shell

4. **Access the Container and Verify Installation**:

```bash
docker exec -it tilelang_b200 /bin/zsh
# Inside the container:
python -c "import tilelang; print(tilelang.__version__)"
```

## Install with Nightly Version

For users who want access to the latest features and improvements before official releases, we provide nightly builds of tilelang.

```bash
pip install tilelang -f https://tile-ai.github.io/whl/nightly/cu121/
# or pip install tilelang --find-links https://tile-ai.github.io/whl/nightly/cu121/
```

> **Note:** Nightly builds contain the most recent code changes but may be less stable than official releases. They're ideal for testing new features or if you need a specific bugfix that hasn't been released yet.

## Install Configs

### Build-time environment variables

`USE_CUDA`: Whether to enable CUDA support. Default: `ON` on Linux; set to `OFF` to build a CPU-only version. By default, `/usr/local/cuda` is used when building tilelang; set `CUDAToolkit_ROOT` to use a different CUDA toolkit.

`USE_ROCM`: Whether to enable ROCm support. Default: `OFF`. If your ROCm SDK is not located in `/opt/rocm`, set `USE_ROCM=<rocm_sdk>` to build ROCm support against a custom SDK path.
@@ -205,27 +202,6 @@
where `<sdk>={cuda,rocm,metal}`. Specifically, when `<sdk>=cuda`, the suffix is derived from `CUDA_VERSION` as
`<sdk>=cu<cuda_major><cuda_minor>`, similar to the scheme PyTorch uses for its wheels.
Set `NO_TOOLCHAIN_VERSION=ON` to disable this.
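As a sketch of how the build-time variables above combine with a source build (the toolkit and SDK paths below are placeholders; substitute your own), typical invocations might look like:

```shell
# CPU-only build (no CUDA):
USE_CUDA=OFF pip install . -v

# Build against a CUDA toolkit outside /usr/local/cuda (path is a placeholder):
CUDAToolkit_ROOT=/opt/cuda-12.4 pip install . -v

# Build ROCm support against a custom SDK path (path is a placeholder):
USE_ROCM=/opt/rocm-6.0 pip install . -v
```

Each variable only affects the build step, so changing one requires rebuilding the wheel.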

### Run-time environment variables

Please refer to the `env.py` file for a full list of supported run-time environment variables.

## Other Tips

### IDE Configs

Building tilelang locally will automatically generate a `compile_commands.json` file in `build` dir.
VSCode with clangd and [clangd extension](https://marketplace.visualstudio.com/items?itemName=llvm-vs-code-extensions.vscode-clangd) should be able to index that without extra configuration.

### Compile Cache

The default path of the compile cache is `~/.tilelang/cache`. `ccache` will be automatically used if found.

### Repairing Wheels

If you plan to use your wheel in another environment,
it's recommended to repair it with auditwheel (on Linux) or delocate (on macOS).

(faster-rebuild-for-developers)=

### Faster Rebuild for Developers
@@ -258,3 +234,39 @@ you'll see logs like below:
$ python -c 'import tilelang'
2025-10-14 11:11:29 [TileLang:tilelang.env:WARNING]: Loading tilelang libs from dev root: /Users/yyc/repo/tilelang/build
```

## Other Tips

### IDE Configs

Building tilelang locally will automatically generate a `compile_commands.json` file in `build` dir.
VSCode with clangd and [clangd extension](https://marketplace.visualstudio.com/items?itemName=llvm-vs-code-extensions.vscode-clangd) should be able to index that without extra configuration.
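If a tool in your setup expects the compilation database at the repository root rather than in `build`, a common workaround (a sketch; the path assumes the default build directory) is to symlink it:

```shell
# Link the generated compilation database to the repo root so tools that
# only look in the working directory can find it. Assumes the default
# build directory; adjust the path if you build elsewhere.
ln -sf build/compile_commands.json compile_commands.json
```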

### Compile Cache

The default path of the compile cache is `~/.tilelang/cache`. `ccache`/`sccache` will be automatically used if found.
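A minimal sketch for inspecting the cache setup on your machine (the cache path is the documented default; this is plain stdlib code, not a tilelang API):

```python
import os
import shutil

# Default tilelang compile-cache location, per the docs above.
cache_dir = os.path.expanduser("~/.tilelang/cache")
print("cache dir:", cache_dir)
print("exists:", os.path.isdir(cache_dir))

# ccache/sccache are picked up automatically when found on PATH.
for tool in ("ccache", "sccache"):
    found = shutil.which(tool)
    print(f"{tool}: {found or 'not found'}")
```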

### Repairing Wheels

If you plan to use your wheel in another environment,
it's recommended to repair it with auditwheel (on Linux) or delocate (on macOS).

### Run-time environment variables

Please refer to the `env.py` file for a full list of supported run-time environment variables.
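As a hedged sketch of how such knobs are typically consumed (env.py in the tilelang repo is the authoritative list; the `TILELANG_` prefix and the variable name below are assumptions for illustration, not confirmed names):

```python
import os

# Hypothetical variable, set here only for demonstration.
os.environ["TILELANG_EXAMPLE"] = "1"

# Collect everything with the assumed TILELANG_ prefix from the environment.
tilelang_vars = {k: v for k, v in os.environ.items() if k.startswith("TILELANG_")}
for name, value in sorted(tilelang_vars.items()):
    print(f"{name}={value}")
```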

## Troubleshooting

### Z3 Dependency Issue during Installation

If you encounter Z3-related errors when running `pip install .` (often happening in `uv` environments), it might be due to `pip`'s build isolation. `libtvm` might depend on a `libz3` present in the temporary build environment but missing or incompatible in your actual runtime environment.

To resolve this, use the `--no-build-isolation` flag:

```bash
pip install -r requirements-dev.txt
pip install . -v --no-build-isolation
```

This ensures `libtvm` builds against the dependencies in your current environment.