Support MXFP6 packing and fused unpack-dequantise kernel (conflicts resolved) #1810

Open
wants to merge 3 commits into base: main
Conversation

alex-titterton

Updated version of #1687 to resolve some merge conflicts.

Good afternoon! Following recent developments and increased support for MXFP formats, it would be useful to support efficient packing for MXFP6 to benefit from the decrease in memory consumption and bandwidth requirements vs (MX)FP8.

MXFP6 has been shown to perform comparably to MXFP8 in LLM inference tasks, and with sufficient QAT can even match float32, e.g. in the MXFP paper.

This PR packs the bits representing the FP6 values in a 4+2 fashion, as is done in the FP6-LLM paper, and supports both E2M3 and E3M2 variants. Packing is done via a standalone Triton kernel, with unpacking and dequantisation performed by a fused kernel for better performance.
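To make the scheme concrete, here is a minimal plain-PyTorch sketch of the shape/bit arithmetic behind 6-bit packing: every group of four 6-bit codes becomes three bytes, so the packed dimension is 3/4 of the original. The function name and exact bit order here are illustrative only; the PR itself does the packing in a Triton kernel using the FP6-LLM 4+2 layout.

```python
import torch

def pack_fp6_codes(codes: torch.Tensor) -> torch.Tensor:
    # Illustrative only: packs 6-bit codes (uint8 values 0..63) along the
    # last dim, turning every group of 4 codes into 3 bytes. The PR's Triton
    # kernel uses the FP6-LLM 4+2 bit layout; this sketch just shows the
    # 4-codes-to-3-bytes idea and the resulting 3/4 shape reduction.
    assert codes.shape[-1] % 4 == 0, "packing dim must be a multiple of 4"
    g = codes.reshape(*codes.shape[:-1], -1, 4).to(torch.int32)
    a, b, c, d = g[..., 0], g[..., 1], g[..., 2], g[..., 3]
    byte0 = (a << 2) | (b >> 4)
    byte1 = ((b & 0xF) << 4) | (c >> 2)
    byte2 = ((c & 0x3) << 6) | d
    packed = torch.stack([byte0, byte1, byte2], dim=-1).to(torch.uint8)
    return packed.reshape(*codes.shape[:-1], -1)  # last dim is now 3/4 as large
```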

Tests have been added in test_custom_cast.py and test_mx_tensor.py to cover accuracy of the quantise-pack-unpack-dequantise round trip for various FP6 values (min/max norm, min/max subnorm, -0.0, etc., for both E2M3 and E3M2 variants), as well as checking packed tensor dimensions.

Note: due to the 4+2 packing scheme, the packing dimension must be a multiple of 4, since the packed dimension will be 3/4 of it. However, the typical MX block size is 32 (-> 24 when packed), and HW implementations tend to require dims to be multiples of 16 or 32, so this should not be a problem. The relevant test case dimensions have been changed from 6 to 8, and the MX block sizes from 2 to 4 where applicable, to accommodate this requirement.
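Spelling out the shape arithmetic (the dimensions below are just examples):

```python
for dim in (2, 6, 8, 32):
    if dim % 4 == 0:
        print(f"packing dim {dim} -> {dim * 3 // 4} packed bytes")
    else:
        print(f"packing dim {dim} -> unsupported (not a multiple of 4)")
# packing dim 2 -> unsupported (not a multiple of 4)
# packing dim 6 -> unsupported (not a multiple of 4)
# packing dim 8 -> 6 packed bytes
# packing dim 32 -> 24 packed bytes
```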

Note: I've added a bool flag to config.py to enable/disable FP6 packing, but it doesn't belong to any class as such. I wasn't sure where best to put it following the restructuring of the config file, so for now it's just accessed from other functions/classes as config.pack_fp6.
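For reference, the flag is roughly of this shape; only the pack_fp6 name comes from the PR, the surrounding import path and call-site names are illustrative:

```python
# config.py: a bare module-level flag, not attached to any config class
pack_fp6: bool = True

# call sites simply read it off the module, e.g.:
#   from torchao.prototype.mx_formats import config
#   if config.pack_fp6:
#       data_lp = pack_fp6_kernel(data_lp)  # illustrative kernel name
```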


pytorch-bot bot commented Mar 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1810


@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Mar 3, 2025
@@ -37,23 +37,25 @@ def forward(
grad_elem_dtype: Any,
block_size: int,
gemm_kernel_choice: MXGemmKernelChoice,
pack_fp6: bool,
Contributor

Do we need pack_fp6 for training, or is it an inference-only optimization? I would have guessed inference, but I'm interested to learn more.

@vkuzo
Contributor

vkuzo commented Mar 4, 2025

Thank you! I took a closer look at the PR after the rebase, and just have a couple of follow-up questions:

  1. Is this inference only, or do you expect usage during training? If inference only, it may be simpler to just omit this from training code.
  2. Would it be possible to add a reference plain PyTorch kernel for the packing, similar to how our packed fp4x2 currently works? That can be in a separate PR if needed, but at least we should have a TODO comment tracking it.

@alex-titterton
Author

> Thank you! I took a closer look at the PR after the rebase, and just have a couple of follow-up questions:
>
>   1. Is this inference only, or do you expect usage during training? If inference only, it may be simpler to just omit this from training code.
>   2. Would it be possible to add a reference plain PyTorch kernel for the packing, similar to how our packed fp4x2 currently works? That can be in a separate PR if needed, but at least we should have a TODO comment tracking it.
  1. I'd say the use case for inference is of course clearer, but for training I could imagine it being a benefit, perhaps more for fine-tuning on hardware with limited memory capacity/bandwidth (i.e. when the model dimensions have been nailed down such that they shouldn't pose a problem for packing). On that basis I think it's worth having as an option that defaults to not packing but can be used for either training or inference?

  2. Sure -- I'll add a TODO for now if that's ok, and will put together a neater reference implementation in a parallel thread.

Thanks,
Alex

@alex-titterton
Author

>   2. Would it be possible to add a reference plain PyTorch kernel for the packing, similar to how our packed fp4x2 currently works? That can be in a separate PR if needed, but at least we should have a TODO comment tracking it.

I've added a reference implementation just now 👍
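For context, a plain-PyTorch unpack to pair with the earlier packing sketch could look something like the below; again this is illustrative, the actual reference implementation in the PR may use a different bit layout, and dequantisation would then interpret each 6-bit code as sign/exponent/mantissa per E2M3 or E3M2.

```python
import torch

def unpack_fp6_codes(packed: torch.Tensor) -> torch.Tensor:
    # Inverse of the illustrative pack_fp6_codes sketch above:
    # every 3 packed bytes are expanded back into 4 six-bit codes.
    assert packed.shape[-1] % 3 == 0, "packed dim must be a multiple of 3"
    g = packed.reshape(*packed.shape[:-1], -1, 3).to(torch.int32)
    b0, b1, b2 = g[..., 0], g[..., 1], g[..., 2]
    a = b0 >> 2
    b = ((b0 & 0x3) << 4) | (b1 >> 4)
    c = ((b1 & 0xF) << 2) | (b2 >> 6)
    d = b2 & 0x3F
    codes = torch.stack([a, b, c, d], dim=-1).to(torch.uint8)
    return codes.reshape(*packed.shape[:-1], -1)  # last dim is now 4/3 as large
```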

@vkuzo
Contributor

vkuzo commented Mar 5, 2025

> I'd say the use case for inference is of course clearer, but for training I could imagine it being a benefit, perhaps more for fine-tuning on hardware with limited memory capacity/bandwidth (i.e. when the model dimensions have been nailed down such that they shouldn't pose a problem for packing). On that basis I think it's worth having as an option that defaults to not packing but can be used for either training or inference?

Makes sense. How about adding it to inference, on by default for fp6, and we revisit training when the use case is clearer?

@alex-titterton
Author

> I'd say the use case for inference is of course clearer, but for training I could imagine it being a benefit, perhaps more for fine-tuning on hardware with limited memory capacity/bandwidth (i.e. when the model dimensions have been nailed down such that they shouldn't pose a problem for packing). On that basis I think it's worth having as an option that defaults to not packing but can be used for either training or inference?

> Makes sense. How about adding it to inference, on by default for fp6, and we revisit training when the use case is clearer?

Sorry, just to clarify: do you mean to leave the training support parts in but off by default, or remove them entirely?

@vkuzo
Contributor

vkuzo commented Mar 5, 2025

hmm, thinking out loud here:

  • packing weights to fp6 seems clearly useful for inference: we save model size on disk, and potentially gain throughput if the workload is memory bound
  • the value of packing weights/activations/grads to fp6 for training is not yet clear

that context would point me to say "on by default in inference" and "not supported at all in training to minimize complexity, unless someone articulates the actual benefit" - thoughts?
