Feature/amd specific coverage #6

Merged
diptorupd merged 4 commits into amd-integration from feature/amd-specific-coverage
Jan 26, 2026

Conversation

@diptorupd (Owner)

Test merge

@diptorupd diptorupd merged commit 9ffbe93 into amd-integration Jan 26, 2026
diptorupd pushed a commit that referenced this pull request Jan 28, 2026
This PR fixes some of the unit test failures that occur in SingleDecode. It also disables clang formatting of headers.

Running clang-format on the headers causes compilation issues: the compiler is unable to find `HIP WARP SYNC INTRINSICS`, which leads to failures. Disabling clang format for the headers fixes these issues.

```
    Start 1: MathTest
1/6 Test #1: MathTest .........................   Passed    3.31 sec
    Start 2: PosEncTest
2/6 Test #2: PosEncTest .......................   Passed    3.36 sec
    Start 3: CascadeTest
3/6 Test #3: CascadeTest ......................   Passed    3.35 sec
    Start 4: PageTest
4/6 Test #4: PageTest .........................   Passed  114.08 sec
    Start 5: SingleDecodeTest
5/6 Test #5: SingleDecodeTest .................   Passed   35.22 sec
    Start 6: BatchDecodeTest
6/6 Test #6: BatchDecodeTest ..................   Passed  559.75 sec

100% tests passed, 0 tests failed out of 6

Total Test time (real) = 719.07 sec
```
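One way to keep clang-format off the headers is a per-directory config file; this is a hedged sketch and not necessarily the mechanism this PR uses (in-source `// clang-format off` guards are another common option):

```yaml
# Hypothetical .clang-format placed in the HIP headers directory.
# clang-format leaves every file under this directory untouched,
# so fragile intrinsic declarations are never reflowed.
DisableFormat: true
```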
diptorupd pushed a commit that referenced this pull request Jan 28, 2026
In this PR, we add infrastructure for enabling decode via the flashinfer gpu_iface. This PR does not change existing infrastructure; we can still build decode using AOT and JIT.

Tested locally:
```
    Start 5: SingleDecodeTest
5/6 Test #5: SingleDecodeTest .................   Passed   35.12 sec
    Start 6: BatchDecodeTest
6/6 Test #6: BatchDecodeTest ..................   Passed  541.87 sec
```

We will have a follow-up PR for enabling AOT decode using the flashinfer gpu_iface.
diptorupd pushed a commit that referenced this pull request Jan 28, 2026
The C++ test suite was using `hipified` headers. In this PR, we port the unit tests over to use `gpu_iface`. This is necessary because the next step is to move the build infrastructure to use `gpu_iface`.

This PR has been tested locally:
```
Test project /root/flashinfer/libflashinfer/tests/hip/build
    Start 1: MathTest
1/6 Test #1: MathTest .........................   Passed    3.40 sec
    Start 2: PosEncTest
2/6 Test #2: PosEncTest .......................   Passed    3.40 sec
    Start 3: CascadeTest
3/6 Test #3: CascadeTest ......................   Passed  985.27 sec
    Start 4: PageTest
4/6 Test #4: PageTest .........................   Passed  112.40 sec
    Start 5: SingleDecodeTest
5/6 Test #5: SingleDecodeTest .................   Passed   35.46 sec
    Start 6: BatchDecodeTest
6/6 Test #6: BatchDecodeTest ..................   Passed  556.81 sec

100% tests passed, 0 tests failed out of 6
```

To replicate the tests:
```
cd flashinfer/libflashinfer/tests/hip
mkdir build && cd build/
cmake -DCMAKE_PREFIX_PATH=/root/libtorch \
      -DCMAKE_CXX_COMPILER:PATH=/opt/rocm/bin/amdclang++ \
      -DFLASHINFER_INCLUDE_DIRS=/root/flashinfer/libflashinfer/include/ ..
make
ctest
```
diptorupd pushed a commit that referenced this pull request Jan 28, 2026
In this PR, I remove the `libtorch` dependency and `test_page.cpp`. `test_page.cpp` was the only unit test that used libtorch. However, we also have a pytest for testing page, which we will use for validation.

Removing the libtorch dependency will help speed up Docker builds and remove additional dependencies.


```
Test project /root/flashinfer/libflashinfer/tests/hip/build
    Start 1: MathTest
1/8 Test #1: MathTest ............................   Passed    0.31 sec
    Start 2: PosEncTest
2/8 Test #2: PosEncTest ..........................   Passed    0.31 sec
    Start 3: CascadeTest
3/8 Test #3: CascadeTest .........................   Passed  1369.12 sec
    Start 4: SingleDecodeTest
4/8 Test #4: SingleDecodeTest ....................   Passed  7726.35 sec
    Start 5: BatchDecodeTest
5/8 Test #5: BatchDecodeTest .....................   Passed  811.61 sec
    Start 6: test_mfma_fp32_16x16x16fp16
6/8 Test #6: test_mfma_fp32_16x16x16fp16 .........   Passed    0.30 sec
    Start 7: test_transpose_4x4_half_registers
7/8 Test #7: test_transpose_4x4_half_registers ...   Passed    0.28 sec
    Start 8: test_rowsum
8/8 Test #8: test_rowsum .........................   Passed    0.27 sec

100% tests passed, 0 tests failed out of 8
```
diptorupd pushed a commit that referenced this pull request Jan 28, 2026
### Summary

This PR updates the Dockerfile to ensure the Micromamba environment is automatically activated when the Docker container is started, without requiring the user to manually run activation commands.

### Changes

- Appended Micromamba shell hook and environment activation to `~/.bashrc`.
  - This ensures the environment is active in all interactive shell sessions (e.g., `docker run -it`).
- Keeps existing build process and environment creation logic unchanged.
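The `~/.bashrc` append described above might look like the following Dockerfile fragment; this is a sketch, and the environment name `flashinfer` and the exact hook invocation are assumptions, not taken from this PR:

```dockerfile
# Sketch (hypothetical environment name "flashinfer"):
# install the micromamba shell hook and auto-activate the
# environment in every interactive bash session.
RUN echo 'eval "$(micromamba shell hook --shell bash)"' >> ~/.bashrc && \
    echo 'micromamba activate flashinfer' >> ~/.bashrc
```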

### Why?

Previously, users needed to manually activate the Micromamba environment after starting the container.

This update streamlines the user experience by making the environment ready-to-use immediately after container startup.

### Test
You can test this by building and running the container interactively:
```bash
docker build -f docker/Dockerfile.rocm_ci --target flashinfer_base -t flashinfer-rocm . 2>&1 | tee docker_build.log
```

and running

```bash
docker run -it --network=host --group-add=video \
           --privileged --ipc=host --cap-add=SYS_PTRACE \
           --security-opt seccomp=unconfined --device /dev/kfd \
           --device /dev/dri flashinfer-rocm
```

Then inside the container, run:

```bash
pip show flashinfer
```

<img width="598" height="120" alt="{AF28D7FF-D427-499B-9FAA-1EE2C4F71C9B}" src="https://github.com/user-attachments/assets/27be1c83-7ccf-41bf-9006-28066a222321" />