torch.compile cast to mxfp8 with blocked scales should be performant #1773

Open · vkuzo opened this issue Feb 24, 2025 · 0 comments

vkuzo (Contributor) commented Feb 24, 2025

What this cast is doing (see the sketch after this list):

  • reshape the tensor to shape (-1, block_size), where block_size is usually 32 or 16
  • for each block, calculate a single scale, then cast that block to torch.float8_e4m3fn
  • rearrange the scale into the swizzled format expected by the gemm kernel
  • return the cast elements and the swizzled scale
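
A minimal PyTorch sketch of the steps above (illustrative only, not torchao's implementation: it uses a plain float32 per-block scale rather than the E8M0 shared exponent the MX spec prescribes, and it omits the swizzle step):

```
import torch

def to_mx_sketch(x: torch.Tensor, block_size: int = 32):
    # reshape into blocks of `block_size` contiguous elements
    x_blocks = x.reshape(-1, block_size)
    # one scale per block, chosen so the block's max magnitude maps to the
    # largest representable float8_e4m3fn value (448.0)
    amax = x_blocks.abs().amax(dim=1, keepdim=True).float()
    scale = (amax / torch.finfo(torch.float8_e4m3fn).max).clamp(min=1e-12)
    # cast each block to float8_e4m3fn
    x_fp8 = (x_blocks / scale).to(torch.float8_e4m3fn)
    # the swizzle of `scale` into the gemm kernel's blocked layout (the
    # `to_blocked` step in the benchmark command below) is omitted here
    return x_fp8.reshape(x.shape), scale
```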

What we currently see:

```
TORCH_LOGS_FORMAT=short TORCH_LOGS=aot_graphs,output_code python benchmarks/float8/profile_lowp_training.py ~/local/tmp/20250223_test --mx_recipe_name mxfp8_emulated --experiment_filter lowp --mode_filter cast_with_to_blocked
```

Output: https://gist.github.com/vkuzo/9bb4194b289003b6d8bf32d066e3f8e1

In the generated code we see (i) one kernel to calculate the unswizzled scale and cast the elements, and (ii) one kernel to convert the scale layout. Ideally the whole cast would fuse into a single kernel.
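
A hypothetical driver for reproducing this observation with the sketch above (the actual benchmark uses torchao's real cast): run it with TORCH_LOGS=output_code and count the Triton kernels inductor prints.

```
import torch

# hypothetical repro, reusing to_mx_sketch from the listing above
compiled_cast = torch.compile(to_mx_sketch)
x = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
data_fp8, scale = compiled_cast(x)  # TORCH_LOGS=output_code shows the kernels
```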

vkuzo added a commit that referenced this issue Feb 26, 2025
Summary:

Thanks to investigation from @eellison, moving the reshape
to the end of the cast helps inductor fuse the cast into a single
kernel.  This doesn't yet work with fp4, but let's unblock fp8 and deal
with fp4 later.

Fixes #1690

Note: in the repro with swizzling from
#1773, we go from 3 kernels to 2.
Further investigation is needed into whether the swizzling can be fused as well.

Test Plan:

```
pytest test/prototype/mx_formats/test_mx_tensor.py -x -s -k test_to_mx_inductor_single_kernel
```

Reviewers:

Subscribers:

Tasks:

Tags:
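
A hedged paraphrase of the reordering the commit summary describes (the real change lives in torchao's mx_formats code; these functions are illustrative and reuse the names from the sketch above):

```
import torch

F8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0

def cast_reshape_early(x: torch.Tensor, block_size: int = 32) -> torch.Tensor:
    # hypothetical "before": the reshape back to the original shape sits
    # between the pointwise division and the dtype cast, which can keep
    # inductor from fusing the whole cast into one kernel
    x_blocks = x.reshape(-1, block_size)
    scale = (x_blocks.abs().amax(dim=1, keepdim=True).float() / F8_MAX).clamp(min=1e-12)
    return (x_blocks / scale).reshape(x.shape).to(torch.float8_e4m3fn)

def cast_reshape_last(x: torch.Tensor, block_size: int = 32) -> torch.Tensor:
    # hypothetical "after": identical math, but the reshape is the final op,
    # leaving an unbroken chain of pointwise ops for inductor to fuse
    x_blocks = x.reshape(-1, block_size)
    scale = (x_blocks.abs().amax(dim=1, keepdim=True).float() / F8_MAX).clamp(min=1e-12)
    return (x_blocks / scale).to(torch.float8_e4m3fn).reshape(x.shape)
```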