[TUTORIAL] Remove grouped gemm simulation from 09-persistent-matmul #5461
Merged
peterbell10 merged 1 commit into main on Dec 19, 2024
Conversation
As discussed in the [multi-buffering PR], the persistent matmul should be kept as an apples-to-apples performance comparison. In particular, the existing perf results make tensor-descriptor look bad. With this updated tutorial I get results like (`K=4096, prec=fp8`):

```
├─ 1278.215 4731.062 cublas [M=8192, N=8192, K=4096]
│  └─ nan 4731.062 sm90_xmma_gemm_e4m3e4m3_e4m3f32_f32_tn_n_tilesize128x128x128_warpgroupsize1x1x1_bias_f16_execute_segment_k_off_kernel__5x_cublas
├─ 1208.855 454.774 matmul_kernel [M=8192, N=8192, K=4096]
├─ 1285.360 427.706 matmul_kernel_persistent [M=8192, N=8192, K=4096]
├─ 1330.667 413.143 matmul_kernel_descriptor_persistent [M=8192, N=8192, K=4096]
└─ 1347.254 408.057 matmul_kernel_tma_persistent [M=8192, N=8192, K=4096]
```

So on H100, tensor descriptor gives a 3.5% flops uplift over the plain persistent matmul vs. 4.8% for host-side TMA. For the same shapes with fp16 I see a 13% uplift from tensor descriptor vs. 13.4% from host-side TMA.

[multi-buffering PR]: #5290 (comment)
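For reference, the uplift percentages quoted above can be reproduced from the profiler output with a small sketch. This assumes the first numeric column is achieved TFLOPS per kernel (the variant names are taken from the output; `uplift_pct` is a hypothetical helper, not part of the tutorial):

```python
# Achieved TFLOPS per kernel variant, copied from the profiler output above
# (assumption: the first column is TFLOPS).
tflops = {
    "matmul_kernel_persistent": 1285.360,
    "matmul_kernel_descriptor_persistent": 1330.667,
    "matmul_kernel_tma_persistent": 1347.254,
}

def uplift_pct(variant: str, baseline: str = "matmul_kernel_persistent") -> float:
    """Percentage flops uplift of `variant` over `baseline`."""
    return 100.0 * (tflops[variant] / tflops[baseline] - 1.0)

print(f"tensor descriptor: {uplift_pct('matmul_kernel_descriptor_persistent'):.1f}%")  # ~3.5%
print(f"host-side TMA:     {uplift_pct('matmul_kernel_tma_persistent'):.1f}%")         # ~4.8%
```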
pawelszczerbuk (Contributor) approved these changes on Dec 19, 2024:
Looks good, thanks!