[Accuracy diff No.89] Fix accuracy diff for paddle.incubate.nn.functional.fused_bias_act API #389
Conversation
Thanks for your contribution!

Can you pinpoint whether torch or paddle computed the wrong result? A difference of 127 is quite large.
paddle.incubate.nn.functional.fused_bias_act(Tensor([2, 22016],"int32"), None, act_method="swiglu", compute_dtype="fp16", dequant_scales=Tensor([22016],"float32"), shift=None, smooth=None, quant_scale=0.0009313154732808471, quant_round_type=0, quant_max_bound=127.0, quant_min_bound=-127.0, )
paddle.incubate.nn.functional.fused_bias_act(Tensor([2, 22016],"int32"), None, act_method="swiglu", compute_dtype="fp16", dequant_scales=Tensor([22016],"float32"), shift=None, smooth=None, quant_scale=0.0009654839523136616, quant_round_type=0, quant_max_bound=127.0, quant_min_bound=-127.0, )
These cases don't need to be deleted; keeping them in this file is fine. Keeping them here:
- They can be used for small-scale regression testing in the future.
- Once all the issues are fixed, the file can be renamed, but these cases are valuable test cases and should be preserved.
OK, the second commit has been updated to restore these files.

The second commit has been updated; atol now only needs to be set to 1.
cangtianhuang left a comment
LGTM
    if quant_scale > 0:
        x = x / quant_scale
        x = quant_max_bound * quant_scale * x
The reason for multiplying by quant_max_bound here remains to be confirmed. This matches how the Paddle source code is written, but it is still unclear why it was designed this way; it needs to be confirmed with the API developers.

After multiplying by quant_max_bound, all accuracy tests pass.
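A minimal NumPy sketch of the quantize step being discussed (this is an illustrative re-implementation, not Paddle's actual kernel; the function name `fake_quant` and the sample inputs are hypothetical). It shows the role of the `quant_max_bound` factor: the post-activation value is scaled by `quant_max_bound * quant_scale`, rounded, and clipped to the int8-like range:

```python
import numpy as np

# Hypothetical sketch of the quantize step under discussion (not Paddle's
# actual kernel). After the activation, the output is scaled by
# quant_max_bound * quant_scale, rounded, and clipped.
def fake_quant(x, quant_scale, quant_max_bound=127.0, quant_min_bound=-127.0):
    scaled = quant_max_bound * quant_scale * x  # the factor in question
    return np.clip(np.round(scaled), quant_min_bound, quant_max_bound)

x = np.array([0.5, -1.25, 3.0], dtype=np.float32)
out = fake_quant(x, quant_scale=0.5)
# out == [32., -79., 127.]  (the last value saturates at quant_max_bound)
```

Without the `quant_max_bound` factor, the scaled values would stay far below the clipping bounds, which matches the observation that adding the factor is what makes the accuracy tests pass.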
The existing paddle_to_torch implementation of the paddle.incubate.nn.functional.fused_bias_act API had a bug, which is now fixed. Under fp16 accuracy testing, atol must be set to 1, because quantization involves a round() operation that rounds to the nearest integer. The 20 failing cases in tester/api_config/5_accuracy/accuracy_gpu_error.txt now pass.
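A short sketch of why atol = 1 is the tightest usable tolerance here (the numeric values are hypothetical, chosen only to straddle a rounding boundary): a tiny fp16 difference before round() can flip the result to the adjacent integer, so integer outputs from two otherwise-equivalent implementations may differ by exactly 1.

```python
import numpy as np

# Illustrative only: two nearly-equal pre-quantization values that straddle
# a .5 rounding boundary round to adjacent integers.
ref = np.round(63.5006)   # rounds up to 64.0
alt = np.round(63.4994)   # rounds down to 63.0
diff = abs(ref - alt)     # off by exactly 1 -> atol = 1 is required
```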