
[BE] use more minimal torch headers for hopper/flash_api.cpp #1674

Merged: tridao merged 2 commits into Dao-AILab:main from janeyx99:minimal-torch-headers on May 22, 2025

Conversation

@janeyx99
Contributor

Tiny change to narrow the libtorch APIs this file uses, found during exploration work on designing a minimal set of stable torch ABIs. Specifically, this PR:

- narrowed <torch/nn/functional.h> to <torch/nn/functional/padding.h>
- narrowed <ATen/cuda/CUDAContext.h> to <ATen/cuda/CUDAContextLight.h>

After #1662 we should also be able to delete <torch/version.h>, but that header is also pretty tiny.
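
For context, here is a minimal sketch (not the PR's actual diff; the helper function names are hypothetical) of why the narrowed headers suffice: the symbols flash_api.cpp-style code typically needs are declared directly in the smaller headers, so the umbrella includes only add compile time.

```cpp
// Illustrative sketch only -- not the code from hopper/flash_api.cpp.
#include <torch/nn/functional/padding.h>  // declares torch::nn::functional::pad
#include <ATen/cuda/CUDAContextLight.h>   // declares at::cuda::getCurrentDeviceProperties

// pad() and PadFuncOptions both live in padding.h (which pulls in
// <torch/nn/options/padding.h>), so the full <torch/nn/functional.h>
// umbrella header is unnecessary for padding alone.
at::Tensor pad_last_dim_to_multiple_of_8(const at::Tensor& x) {
  namespace F = torch::nn::functional;
  const int64_t rem = x.size(-1) % 8;
  return rem == 0 ? x : F::pad(x, F::PadFuncOptions({0, 8 - rem}));
}

// getCurrentDeviceProperties() is declared in CUDAContextLight.h; the
// heavier <ATen/cuda/CUDAContext.h> additionally drags in stream,
// exception, and cuBLAS-handle headers that this query does not need.
int num_sms() {
  return at::cuda::getCurrentDeviceProperties()->multiProcessorCount;
}
```

Call sites are unchanged by the swap; both narrow headers are subsets of the umbrella headers they replace.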

janeyx99 force-pushed the minimal-torch-headers branch from 9a2034f to 5ab9189 on May 21, 2025 at 21:08
tridao merged commit 0e79d71 into Dao-AILab:main on May 22, 2025
playerzer0x pushed a commit to Liqhtworks/flash-attention that referenced this pull request on Jul 24, 2025:
…ab#1674)
Co-authored-by: Tri Dao <tridao@users.noreply.github.com>
elewarr pushed a commit to elewarr/flash-attention that referenced this pull request on Feb 4, 2026:
…ab#1674)
Co-authored-by: Tri Dao <tridao@users.noreply.github.com>