Add general fake_quantize_affine op #492
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/492
Note: Links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit 9c60424 with merge base 12ac498. This comment was automatically generated by Dr. CI and updates every 15 minutes.
```diff
@@ -335,6 +378,62 @@ def _dequantize_affine(
     return dequant.view(original_shape).to(output_dtype)


+def _fake_quantize_affine(
```
this can be a top level quant primitive I think
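A minimal sketch of what promoting this to a public primitive could look like, assuming it simply re-exports the private implementation added in this diff alongside `quantize_affine` and `dequantize_affine`; the wrapper's parameter list below is an assumption, not taken from the PR:

```python
import torch
from typing import Optional

# Hypothetical public wrapper around the private _fake_quantize_affine added
# in this diff; the signature here mirrors quantize_affine and is an assumption.
def fake_quantize_affine(
    input: torch.Tensor,
    block_size: tuple,
    scale: torch.Tensor,
    zero_point: torch.Tensor,
    quant_dtype: torch.dtype,
    quant_min: Optional[int] = None,
    quant_max: Optional[int] = None,
) -> torch.Tensor:
    # Delegate to the underscore-prefixed implementation from the diff above.
    return _fake_quantize_affine(
        input, block_size, scale, zero_point, quant_dtype, quant_min, quant_max
    )
```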
Summary: Add a general `fake_quantize_affine` op that simulates `quantize_affine` + `dequantize_affine` but without casting the intermediate quantized values to lower bit-widths, intended for quantization-aware training (QAT).

Test Plan: python test/quantization/test_quant_primitives.py -k test_fake_quantize_affine
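For intuition, here is a self-contained sketch of the computation such an op performs in the simplified per-tensor case (the real op also takes block sizes and a target quantized dtype; all names below are illustrative, not the PR's API):

```python
import torch

def fake_quantize_affine_sketch(
    x: torch.Tensor,
    scale: torch.Tensor,
    zero_point: torch.Tensor,
    quant_min: int,
    quant_max: int,
) -> torch.Tensor:
    # "Quantize": scale, shift by zero_point, round, and clamp to the
    # quantized range, but keep the values in the original float dtype
    # instead of casting to a lower bit-width integer type.
    q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    # "Dequantize": undo the affine mapping. The output carries the rounding
    # and clamping error of real quantization, which is what QAT trains
    # against (gradients are typically handled with a straight-through estimator).
    return (q - zero_point) * scale

# Usage: simulate int8 affine quantization of a float32 tensor.
x = torch.randn(4, 8)
x_fq = fake_quantize_affine_sketch(
    x, scale=torch.tensor(0.05), zero_point=torch.tensor(0.0),
    quant_min=-128, quant_max=127,
)
```

Keeping the round-trip in floating point is what makes the op suitable for QAT: the forward pass sees quantization error, while no actual low-bit cast has to happen in the training graph.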
Force-pushed from 3ae3a6a to 9c60424.
Summary: Add a general
fake_quantize_affine
op that simulatesquantize_affine
+dequantize_affine
but without casting the intermediate quantized values to lower bit-widths, intended for quantization-aware training (QAT).Test Plan:
python test/quantization/test_quant_primitives.py -k test_fake_quantize_affine