
Conversation

@hushenwei2000
Contributor

@hushenwei2000 hushenwei2000 commented Jul 17, 2025

The existing paddle_to_torch implementation of the paddle.incubate.nn.functional.fused_bias_act API had a bug, which is now fixed. Under fp16 precision tests, atol needs to be set to 1, because quantization involves a round() operation that rounds to the nearest integer.

The 20 failing cases in tester/api_config/5_accuracy/accuracy_gpu_error.txt now pass.
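Why atol = 1 is needed: a minimal sketch of the quantization step (the `quantize` helper is hypothetical and uses plain Python `round()`; the actual kernel's rounding mode, selected by quant_round_type, may differ):

```python
def quantize(x, quant_scale, quant_max_bound=127.0, quant_min_bound=-127.0):
    # Scale the float activation into the integer range, then round.
    q = round(x * quant_scale * quant_max_bound)
    # Clip to the representable bounds.
    return max(quant_min_bound, min(quant_max_bound, q))

# Two activations that differ only by fp16-level noise can fall on
# opposite sides of a rounding boundary and quantize to adjacent
# integers, so the comparison needs atol = 1:
lhs = quantize(4.33, 0.00093)  # 4.33 * 0.00093 * 127 ≈ 0.511
rhs = quantize(4.20, 0.00093)  # 4.20 * 0.00093 * 127 ≈ 0.496
```

Here lhs and rhs land on opposite sides of 0.5 and round to adjacent integers even though the inputs are close.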

@paddle-bot

paddle-bot bot commented Jul 17, 2025

Thanks for your contribution!

@cangtianhuang
Collaborator

Can you pin down whether the error is on the torch side or the paddle side? A difference of 127 is very large.

Comment on lines 2283 to 2284
paddle.incubate.nn.functional.fused_bias_act(Tensor([2, 22016],"int32"), None, act_method="swiglu", compute_dtype="fp16", dequant_scales=Tensor([22016],"float32"), shift=None, smooth=None, quant_scale=0.0009313154732808471, quant_round_type=0, quant_max_bound=127.0, quant_min_bound=-127.0, )
paddle.incubate.nn.functional.fused_bias_act(Tensor([2, 22016],"int32"), None, act_method="swiglu", compute_dtype="fp16", dequant_scales=Tensor([22016],"float32"), shift=None, smooth=None, quant_scale=0.0009654839523136616, quant_round_type=0, quant_max_bound=127.0, quant_min_bound=-127.0, )
Collaborator

These cases do not need to be deleted; they can simply stay in this file. Keeping them here means:

  1. They can be used for small-scale regression testing in the future.
  2. Once all the issues are fixed, the file can be renamed, but these cases are valuable test cases and should be kept.

Contributor Author

OK, the second commit has been updated to restore these files.

@hushenwei2000
Contributor Author

Can you pin down whether the error is on the torch side or the paddle side? A difference of 127 is very large.

The second commit has been updated; atol now only needs to be set to 1.

Collaborator

@cangtianhuang cangtianhuang left a comment

LGTM

if quant_scale > 0:
x = x / quant_scale
x = quant_max_bound * quant_scale * x
Collaborator

The reason for multiplying by quant_max_bound here still needs to be confirmed. It matches what the Paddle source code does, but it is not yet clear why it was designed this way; we should check with the API developers.

After multiplying by quant_max_bound, all precision tests pass.
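For reference, the quantized path under discussion can be sketched roughly as follows. This is a NumPy sketch mirroring the formula quoted above, not the actual fused CUDA kernel; the `swiglu` and `fused_bias_act_ref` helpers are assumptions (swiglu taken as silu(x1) * x2 over a split last dimension, np.rint as the rounding mode). It illustrates that multiplying by quant_max_bound is what maps the float activations into the [-127, 127] integer range before rounding:

```python
import numpy as np

def swiglu(x):
    # Split the last dim in half and apply silu(x1) * x2.
    x1, x2 = np.split(x, 2, axis=-1)
    return x1 / (1.0 + np.exp(-x1)) * x2

def fused_bias_act_ref(x_int32, dequant_scales, quant_scale,
                       quant_max_bound=127.0, quant_min_bound=-127.0):
    # Dequantize the int32 input back to float.
    x = x_int32.astype(np.float32) * dequant_scales
    x = swiglu(x)
    # Requantize: quant_max_bound * quant_scale maps activations into
    # [-127, 127]; the rounding here is what forces atol = 1 in tests.
    q = np.rint(quant_max_bound * quant_scale * x)
    return np.clip(q, quant_min_bound, quant_max_bound).astype(np.int8)

x = np.array([[1000, -500, 2000, 800]], dtype=np.int32)
out = fused_bias_act_ref(x, np.full(4, 1e-3, dtype=np.float32), 0.01)
```

Under this sketch, dropping the quant_max_bound factor would leave the outputs in a roughly [-1, 1] float range, where rounding collapses almost everything to 0.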

