
Fix wrong scale eps applied #1770

Open
alexsamardzic wants to merge 1 commit into main from fix-wrong-scale-eps
Conversation

alexsamardzic
Collaborator

Fixes #1766.


pytorch-bot bot commented Feb 24, 2025

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1770

❌ 2 New Failures

As of commit beab4c1 with merge base 7963f9c:


@facebook-github-bot added the CLA Signed label Feb 24, 2025
@alexsamardzic added the float8 and topic: bug fix labels Feb 24, 2025
@alexsamardzic marked this pull request as draft February 24, 2025 18:50
@alexsamardzic
Collaborator Author

Please don't merge yet, this isn't good enough...

@alexsamardzic alexsamardzic marked this pull request as ready for review February 24, 2025 21:48
@alexsamardzic
Collaborator Author

Ok, I think it could be reviewed now. Basically, calculate_scale_eps_for_dtype() roughly emulates the scale calculation and returns the minimum value that won't produce an Inf when reciprocated. This should give choose_qparams_affine() an eps value such that the computed scale maximizes the range of quantized values, while the scale's reciprocal, used when the given tensor is actually quantized, doesn't become Inf.
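A minimal sketch of that idea (the helper below is a guess at the approach, not the PR's actual code): start from the smallest normal value of the dtype and grow it until its reciprocal is finite in that dtype.

```python
import torch

# Rough sketch only - the PR's calculate_scale_eps_for_dtype() may differ.
# Return the smallest value of `dtype` whose reciprocal is still finite in
# that dtype, so a scale clamped to it never reciprocates to Inf.
def calculate_scale_eps_for_dtype(dtype: torch.dtype) -> float:
    eps = torch.tensor(torch.finfo(dtype).tiny, dtype=dtype)  # smallest normal value
    # For IEEE formats the smallest normal already reciprocates to a finite
    # value; double it defensively until that actually holds.
    while not torch.isfinite(eps.reciprocal()):
        eps = eps * 2
    return eps.item()
```

For torch.float16, for example, this sketch would return about 6.1e-05 (the smallest normal float16 value), whose reciprocal, 16384, is still representable in float16.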

Now, this is all probably overkill: it's pretty much relevant only for float16 inputs, it may fix only one of several quantization code paths, etc. So maybe I should just put this in my #1671 for now, specifically for the quantization type where I encountered the issue while testing it?

@alexsamardzic
Collaborator Author

Closing, as the issue is rather unlikely to be encountered in practice.

@@ -944,10 +944,16 @@ def _choose_qparams_affine(
else:
zero_point = torch.full_like(scale, int((quant_max + quant_min + 1) / 2))
scale = torch.clamp(scale, min=eps)
Contributor

should we modify eps to the right value instead of trying to clamp twice? Right now eps is set to torch.finfo(input.dtype).eps; it seems like that just isn't the right way to set it here?
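A rough numeric illustration of the underlying mismatch (illustrative only, not taken from the PR): a machine epsilon from one dtype can sit well below the point where a reciprocal overflows in the dtype the scale actually lives in, so a single finfo(...).eps is not automatically a safe lower bound.

```python
import torch

# Illustrative only: float32's machine epsilon is representable in float16,
# but its float16 reciprocal overflows (1 / 1.19e-07 > 65504), so clamping a
# float16 scale at a too-small eps can still lead to an Inf reciprocal.
eps = torch.finfo(torch.float32).eps             # ~1.1921e-07
scale = torch.tensor(eps, dtype=torch.float16)   # representable (subnormal) in float16
print(scale.reciprocal())                        # tensor(inf, dtype=torch.float16)
```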

Contributor

@vkuzo left a comment

IMO we should fix eps instead of clamping twice

@vkuzo
Contributor

vkuzo commented Feb 28, 2025

By the way, thanks for fixing this!

I think this PR should include a test case which fails before and passes after these changes.

@alexsamardzic
Collaborator Author

I think this PR should include a test case which fails before and passes after these changes.

Added a test case - without changes in torchao/quantization/quant_primitives.py, it will produce Inf scale for all "high-precision" floating point data types tested. (I've put some comments in the test code that I hope explain the issue.)
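A hedged sketch of what such a test could look like (the argument names and values below are assumptions, not the PR's actual test): quantize a float16 tensor with very small magnitudes and check that neither the returned scale nor its reciprocal is non-finite.

```python
import torch
from torchao.quantization.quant_primitives import MappingType, choose_qparams_affine

# Sketch only - the PR's actual test and the exact choose_qparams_affine()
# arguments may differ. The invariant checked is the one discussed above:
# neither the scale nor its reciprocal may become Inf.
def test_tiny_fp16_input_gives_finite_scale():
    x = torch.full((1, 32), torch.finfo(torch.float16).tiny, dtype=torch.float16)
    scale, _zero_point = choose_qparams_affine(
        x,
        mapping_type=MappingType.SYMMETRIC,
        block_size=x.shape,
        target_dtype=torch.float8_e4m3fn,
        scale_dtype=torch.float16,
    )
    assert torch.isfinite(scale).all()
    assert torch.isfinite(scale.reciprocal()).all()
```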

I see your point about clamping twice. I need to see if some further changes in torchao/quantization/quant_primitives.py are needed anyway. The problem is that, it seems to me, the scale could end up calculated in a data type different from the scale_dtype it is eventually cast to, and also that for asymmetric mapping its reciprocal gets used immediately - and it has to be properly clamped from below in all these cases. Thus, I don't think it is possible to just fix eps before branching on mapping type.

(The eps argument probably should not have been there to start with - if we're not completely sure how to choose it, users will be even less so.)

@alexsamardzic force-pushed the fix-wrong-scale-eps branch 4 times, most recently from 9a43a80 to 42a2347 on February 28, 2025 20:58
@alexsamardzic
Collaborator Author

Pushed an update; I think this is it. Namely, in _choose_qparams_affine() (a condensed sketch follows the list):

  1. For floating point inputs: the scale is calculated in the min_val/max_val dtype, so eps is clamped against the smallest normalized value of this data type; the scale is then clamped against this eps value, which prevents the scale reciprocal, used here, from becoming Inf.
  2. For integer inputs: the scale ends up calculated as a torch.float32 tensor (because min_val/max_val, which is an integer tensor, takes part in arithmetic with a Python float value, and the result in that case seems to be promoted to torch.float32), so eps is clamped to the smallest normalized value of torch.float32 - the clamping is there for the same reason as in the previous case.
  3. At the end of the function, the scale is converted to scale_dtype, so if this dtype is floating point, the value is clamped against the smallest normalized value of this data type before being returned, again to prevent the scale reciprocal (now if used from the call site) from becoming Inf.
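A condensed sketch of those three steps (illustrative only; the real _choose_qparams_affine() has more branches, such as mapping types and preserve_zero handling, and uses its own variable names):

```python
import torch

# Illustrative sketch of the clamping described in the three points above.
def _sketch_choose_scale(min_val, max_val, quant_min, quant_max, eps, scale_dtype):
    # Steps 1 and 2: the scale is computed in min_val/max_val's dtype for
    # floating point inputs and gets promoted to float32 for integer inputs,
    # so raise eps to the smallest normal value of that compute dtype.
    compute_dtype = min_val.dtype if min_val.is_floating_point() else torch.float32
    eps = max(eps, torch.finfo(compute_dtype).tiny)

    # Asymmetric mapping shown for brevity.
    scale = (max_val.to(compute_dtype) - min_val.to(compute_dtype)) / float(quant_max - quant_min)
    scale = torch.clamp(scale, min=eps)  # scale.reciprocal() stays finite here

    # Step 3: after the cast to scale_dtype, clamp again against that dtype's
    # smallest normal value, so a reciprocal taken at the call site is finite too.
    scale = scale.to(scale_dtype)
    if scale.is_floating_point():
        scale = torch.clamp(scale, min=torch.finfo(scale_dtype).tiny)
    return scale
```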


Successfully merging this pull request may close these issues.

[QST] About NaNs generated during FP16->FP8 quantization