[cherry-pick-2.2]Fix fused_attention_op and fused_feedforward_op bug when pre_layer_norm is false. #36816
PR types: Bug fixes
PR changes: OPs
Describe:
Problem description:
In fused_attention_op, when pre_layer_norm is false, the GradOpMaker in fused_attention_op.cc still registers the tensors associated with pre LayerNorm as inputs and outputs of the backward op (for example, it emits d_scale, d_bias, and d_ln_out as outputs). When a model uses this op, the gradient-update phase then writes to d_scale, d_bias, and d_ln_out. Because these tensors are never initialized, this sporadically produces nan/inf errors.
fused_feedforward_op has the same problem.
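The following is a minimal reproduction sketch, not code from this PR. It assumes the public paddle.incubate.nn.FusedMultiHeadAttention layer, whose normalize_before argument maps to the op's pre_layer_norm attribute; before this fix, a backward pass with normalize_before=False could sporadically yield nan/inf gradients:

```python
import paddle
from paddle.incubate.nn import FusedMultiHeadAttention

# fused_attention_op only has a CUDA kernel.
assert paddle.is_compiled_with_cuda()
paddle.set_device('gpu')
paddle.seed(2021)

# normalize_before=False selects the post-LayerNorm path,
# i.e. pre_layer_norm=false inside fused_attention_op.
layer = FusedMultiHeadAttention(
    embed_dim=128, num_heads=8,
    dropout_rate=0.0, attn_dropout_rate=0.0,
    normalize_before=False)

x = paddle.rand([2, 16, 128], dtype='float32')  # [batch, seq_len, embed_dim]
paddle.sum(layer(x)).backward()

# Before the fix, gradients tied to the unused pre-LayerNorm tensors were
# uninitialized and could intermittently come back as nan/inf here.
for p in layer.parameters():
    if p.grad is not None:
        print(p.name, bool(paddle.isfinite(p.grad).all()))
```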
Fix:
1. When pre_layer_norm is false, remove the tensors associated with pre LayerNorm from the backward op's inputs and outputs.
2. Add unit tests for fused_attention_op covering both pre_layer_norm=true and pre_layer_norm=false (see the test sketch below).
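A hedged sketch of the shape such a test can take (the class name and tensor sizes are illustrative, not the exact test added by this PR): run forward and backward under both settings and assert that every produced gradient is finite:

```python
import unittest
import paddle
from paddle.incubate.nn import FusedMultiHeadAttention


@unittest.skipIf(not paddle.is_compiled_with_cuda(),
                 "fused_attention_op requires a CUDA build")
class TestFusedAttentionPreLayerNorm(unittest.TestCase):
    def _run(self, pre_layer_norm):
        paddle.set_device('gpu')
        paddle.seed(2021)
        layer = FusedMultiHeadAttention(
            embed_dim=128, num_heads=8,
            dropout_rate=0.0, attn_dropout_rate=0.0,
            normalize_before=pre_layer_norm)
        x = paddle.rand([2, 16, 128], dtype='float32')
        paddle.sum(layer(x)).backward()
        # With the buggy GradOpMaker, the pre_layer_norm=False case could
        # intermittently fail this finiteness check.
        for p in layer.parameters():
            if p.grad is not None:
                self.assertTrue(bool(paddle.isfinite(p.grad).all()))

    def test_pre_layer_norm_true(self):
        self._run(True)

    def test_pre_layer_norm_false(self):
        self._run(False)


if __name__ == '__main__':
    unittest.main()
```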