
Conversation

@masahi (Member) commented Jul 28, 2023

Flash Attention v2 was released last week, and it is claimed to give a good speedup over existing implementations. I integrated Flash Attention v2 into our Relax CUTLASS BYOC and got a 2-3 iterations/sec speedup on the SD v1.5 UNet compared to our existing flow, which uses the xFormers kernel.

The original code depends on libtorch, which makes it hard to integrate. A version without the torch dependency was created in https://github.com/tlc-pack/libflash_attn, and I'm adding it as a submodule.
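
To make the integration concrete, here is a minimal sketch of how a fused attention op can be routed through the Relax CUTLASS BYOC path from the user side. It uses the public Relax APIs (`partition_for_cutlass`, `R.nn.attention`, `RunCodegen`, `relax.build`); the shapes, dtype, and the `sm` option are illustrative assumptions, and none of this code is taken from this PR:

```python
# Illustrative sketch only: offloading a fused attention op through the
# Relax CUTLASS BYOC path. Shapes, dtype, and options are assumptions.
import tvm
from tvm import relax
from tvm.script import ir_module
from tvm.script import relax as R
from tvm.relax.backend.contrib.cutlass import partition_for_cutlass


@ir_module
class AttentionModule:
    @R.function
    def main(
        q: R.Tensor((4, 16, 32, 8), "float16"),
        k: R.Tensor((4, 16, 32, 8), "float16"),
        v: R.Tensor((4, 16, 32, 8), "float16"),
    ) -> R.Tensor((4, 16, 32, 8), "float16"):
        with R.dataflow():
            # Fused attention op: a candidate pattern for CUTLASS offload.
            out = R.nn.attention(q, k, v)
            R.output(out)
        return out


mod = partition_for_cutlass(AttentionModule)                    # group offloadable subgraphs
mod = relax.transform.RunCodegen({"cutlass": {"sm": 80}})(mod)  # generate the external kernels
ex = relax.build(mod, target="cuda")
```

Whether a given attention pattern actually lowers to the flash v2 kernel depends on the dtype and shape support in the BYOC backend.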

@tqchen @vinx13 @yzh119 @sunggg

@tvm-bot (Collaborator) commented Jul 28, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

  • No users to tag found in teams: submodule. See #10317 for details.

Generated by tvm-bot

@tqchen (Member) commented Jul 28, 2023

@junrushao

@junrushao (Member) commented Jul 28, 2023

Very cool work! This will substantially help with TVM performance on Stable Diffusion and LLM training workloads.

@tqchen merged commit 5b431f5 into apache:main on Jul 28, 2023.