Contributor

@ooooo-create ooooo-create commented Jun 26, 2025

Removed an illegal configuration:
paddle.incubate.nn.functional.fused_layer_norm(Tensor([2, 64],"float16"), norm_weight=Tensor([64],"float32"), norm_bias=Tensor([64],"float32"), epsilon=1e-05, begin_norm_axis=1, bias=Tensor([64],"float16"), residual=Tensor([2, 1, 64],"float16"), )
[screenshot of the resulting error]
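Why this configuration is illegal can be sketched with NumPy (a hypothetical illustration, not Paddle's actual validation code): the fused kernel adds the residual to the input elementwise before normalizing, so the residual must match the input's shape exactly, and [2, 1, 64] does not match [2, 64].

```python
import numpy as np

# Hypothetical sketch (not Paddle's actual check): the residual is added
# elementwise to x, so its shape must equal x's shape. A [2, 1, 64]
# residual against a [2, 64] input would instead broadcast to a
# different shape entirely.
x = np.zeros((2, 64), dtype=np.float16)
residual = np.zeros((2, 1, 64), dtype=np.float16)

assert residual.shape != x.shape            # shapes differ, so the config is illegal
assert (x + residual).shape == (2, 2, 64)   # broadcasting silently changes the shape
```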

paddle.incubate.nn.functional.fused_layer_norm(Tensor([16, 256],"float16"), Tensor([256],"float32"), Tensor([256],"float32"), 1e-05, begin_norm_axis=1, bias=Tensor([256],"float16"), residual=Tensor([16, 256],"float16"), residual_alpha=0.69204696, quant_scale=0.15, quant_round_type=1, quant_max_bound=127, quant_min_bound=-127, ) shows a precision error under AMP. Following #389, added atol = 1, because the quantization step involves a round() operation that rounds to the nearest integer. Manually switching to float32 shows no precision issue.
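The atol = 1 choice can be motivated with a small NumPy sketch. Assumptions (illustrative only, not Paddle's kernel): quantization is round-then-clip scaled by quant_max_bound * quant_scale, with the bounds ±127 from the test, and the AMP path stores the pre-quantization value in float16.

```python
import numpy as np

# Assumed round-then-clip quantization (illustrative, not Paddle's kernel):
# scale by quant_max_bound * quant_scale, round, then clip to the bounds.
def quantize(x, scale=0.15, max_bound=127, min_bound=-127):
    return np.clip(np.round(x * max_bound * scale), min_bound, max_bound)

# A value just above a rounding boundary in float32 can fall just below it
# after float16 truncation, so the two quantized results differ by exactly 1.
x32 = np.float32(0.18375)          # 0.18375 * 19.05 ≈ 3.5004, rounds to 4
x16 = np.float32(np.float16(x32))  # ≈ 0.18372 * 19.05 ≈ 3.4998, rounds to 3
diff = abs(quantize(x16) - quantize(x32))
assert diff <= 1                   # hence atol = 1 in the test
```

Such off-by-one differences are inherent to rounding near a boundary, which is why a tolerance of one quantization step is accepted rather than tightening the test.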

All other tests pass.

@paddle-bot

paddle-bot bot commented Jun 26, 2025

Thanks for your contribution!

Collaborator

@cangtianhuang cangtianhuang left a comment


LGTM

@cangtianhuang
Collaborator

@ooooo-create Could you please resolve the conflicts?

@ooooo-create ooooo-create force-pushed the accuracy_fused_layer_norm branch from 66af659 to 55a9adb on July 23, 2025 13:23
Collaborator

@cangtianhuang cangtianhuang left a comment


LGTM

@cangtianhuang cangtianhuang merged commit 040bac4 into PFCCLab:main Jul 27, 2025
@ooooo-create ooooo-create deleted the accuracy_fused_layer_norm branch September 29, 2025 09:46