【PIR】fused_elemwise_add_activation #57827
Conversation
Your PR was submitted successfully. Thank you for contributing to this open-source project!
For the unit tests, you can first check whether test_fuse_elewise_add_act_pass passes.
Sorry to inform you that cf20183's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
Force-pushed from cf20183 to 4be6756
You can follow PR #57655 and also add test_fuse_elewise_add_act_pass to CI monitoring.
Force-pushed from 45fc433 to e494a13
Force-pushed from 8063553 to 682550a
if (op->isa<paddle::dialect::FusedElemwiseAddActivationOp>() ||
    op->isa<paddle::dialect::FusedElemwiseAddActivationGradOp>()) {
  if (kernel_key.backend() == phi::Backend::GPUDNN) {
    return true;
  }
}
Why does this need to fall back to GPU?
paddle/phi/infermeta/backward.h
Outdated
void FusedElemwiseAddActivationGradInferMeta(
    const MetaTensor& x,
    const MetaTensor& y,
    const MetaTensor& out,
    const MetaTensor& intermediate_out,
    const MetaTensor& out_grad,
    const std::vector<std::string>& functor_list,
    float scale,
    int axis,
    bool save_intermediate_out,
    MetaTensor* x_grad,
    MetaTensor* y_grad);
It would be more appropriate to put this function in paddle/phi/infermeta/multiary.h
or paddle/phi/infermeta/fusion.h.
@zyfncg, which one do you think is better?
Put it in fusion.h.
done
func: FusedElemwiseAddActivationGradInferMeta
kernel:
  func: fused_elemwise_add_activation_grad
optional: x, intermediate_out
Why are x and intermediate_out optional?
Both x and intermediate_out can be empty.
They are not empty in the forward pass but can be empty in the backward pass; this logic seems problematic. Could you sort out the logic here again?
This already existed in the previous backward OpMaker, which has an InputXCanBeAbsent check; likewise, when save_intermediate_out is false, the intermediate_out output simply does not exist.
This should still be governed by FusedElemwiseActivationGradMaker: when it creates fused_elemwise_add_activation_grad, it should always create both x and intermediate_out. As follows:
// X should be set here
for (auto &input_param : this->InputNames()) {
  grad_op->SetInput(input_param, this->Input(input_param));
  grad_op->SetOutput(framework::GradVarName(input_param),
                     this->InputGrad(input_param, true));
}
// IntermediateOut should be set here
if (PADDLE_GET_CONST(bool, grad_op->GetAttr("save_intermediate_out"))) {
  // PADDLE_ENFORCE_NE(Output("IntermediateOut").size(), 0);
  grad_op->SetInput("IntermediateOut", this->Output("IntermediateOut"));
  grad_op->SetOutput(framework::GradVarName("IntermediateOut"),
                     this->OutputGrad("IntermediateOut"));
} else {
  grad_op->SetInput("IntermediateOut", this->EmptyOutput());
  grad_op->SetOutput(framework::GradVarName("IntermediateOut"),
                     this->EmptyOutputGrad());
}
However, in some cases the resulting OpDesc may indeed not match this GradOpMaker, because fused_elemwise_add_activation_grad may be created by a pass without going through the GradOpMaker. @yangguohao, could you post the specific unit test, or check the code of the relevant passes to see whether they could create fused_elemwise_add_activation_grad without creating these two inputs?
paddle/phi/infermeta/backward.cc
Outdated
// if (intermediate_out_grad) {
//   // For Unary(Binary(X, Y)), IntermediateOut should not be empty.
//   if (IsUnaryCompound(functor_list)) {
//     intermediate_out_grad->set_dims(out_grad.dims());
//     intermediate_out_grad->share_lod(out_grad);
//   } else {
//     intermediate_out_grad->set_dims(y.dims());
//     intermediate_out_grad->share_lod(y);
//   }
// }
Are these commented-out lines still needed?
intermediate_out_grad still needs to be added later.
paddle/phi/infermeta/backward.cc
Outdated
y_grad->set_dims(y.dims());
y_grad->share_lod(y);
The dtype needs to be set as well.
done
paddle/phi/infermeta/backward.h
Outdated
void FusedElemwiseAddActivationGradInferMeta(
    const MetaTensor& x,
    const MetaTensor& y,
    const MetaTensor& out,
    const MetaTensor& intermediate_out,
    const MetaTensor& out_grad,
    const std::vector<std::string>& functor_list,
    float scale,
    int axis,
    bool save_intermediate_out,
    MetaTensor* x_grad,
    MetaTensor* y_grad);
Put it in fusion.h.
LGTM
* add fused_elemwise_add_activation
* fix
* fix scale attribute, add intermediate_out_grad warning
* fix
* fix 2023-11-06
* modified test file
* fix 2023-11-07
* revert
* fix windows inference
* fix
* fix 2023-11-14
PR types
Others
PR changes
OPs
Description
add fused_elemwise_add_activation in PIR