
【PIR】fused_elemwise_add_activation #57827

Merged 20 commits on Nov 15, 2023

Conversation

yangguohao
Contributor

PR types

Others

PR changes

OPs

Description

Add fused_elemwise_add_activation in PIR.
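For context (not part of the original PR description): the fused op computes Unary(Binary(X, Y)). A minimal Python sketch of hypothetical reference semantics, assuming the functor pair is elementwise_add + relu and ignoring the scale/axis attributes, could look like:

```python
def fused_elemwise_add_activation(x, y):
    # Binary functor: elementwise add (intermediate result).
    intermediate_out = [a + b for a, b in zip(x, y)]
    # Unary functor: relu applied to the intermediate result.
    out = [max(v, 0.0) for v in intermediate_out]
    return out, intermediate_out
```

Whether `intermediate_out` is actually emitted by the real op depends on the `save_intermediate_out` attribute discussed below in the review.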

@paddle-bot

paddle-bot bot commented Sep 27, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Sep 27, 2023
paddle/phi/api/yaml/fused_ops.yaml (outdated review thread, resolved)
paddle/phi/api/yaml/fused_ops.yaml (outdated review thread, resolved)
@kangguangli
Contributor

For the unit tests, you can start by checking whether test_fuse_elewise_add_act_pass passes.

@paddle-ci-bot

paddle-ci-bot bot commented Oct 5, 2023

Sorry to inform you that the CIs for cf20183 passed more than 7 days ago. To prevent PR conflicts, you need to re-run all CIs manually.

@kangguangli
Contributor

Following the example of PR #57655, you can also add test_fuse_elewise_add_act_pass to CI monitoring.

@yangguohao yangguohao force-pushed the PIR_build_strategy branch 3 times, most recently from 45fc433 to e494a13 Compare October 31, 2023 11:59
Comment on lines 150 to 155
if (op->isa<paddle::dialect::FusedElemwiseAddActivationOp>() ||
op->isa<paddle::dialect::FusedElemwiseAddActivationGradOp>()) {
if (kernel_key.backend() == phi::Backend::GPUDNN) {
return true;
}
}
Contributor

Why does this need to fall back to GPU?

Comment on lines 198 to 210
void FusedElemwiseAddActivationGradInferMeta(
const MetaTensor& x,
const MetaTensor& y,
const MetaTensor& out,
const MetaTensor& intermediate_out,
const MetaTensor& out_grad,
const std::vector<std::string>& functor_list,
float scale,
int axis,
bool save_intermediate_out,
MetaTensor* x_grad,
MetaTensor* y_grad);

Contributor

This function would be better placed in paddle/phi/infermeta/multiary.h or paddle/phi/infermeta/fusion.h. @zyfncg, which one do you think is better?

Contributor

Put it in fusion.h.

Contributor Author

done

  func: FusedElemwiseAddActivationGradInferMeta
kernel:
  func: fused_elemwise_add_activation_grad
optional : x, intermediate_out
Contributor

Why are x and intermediate_out optional?

Contributor Author

Both x and intermediate_out can be empty.

Contributor

They are non-empty in the forward op but can be empty in the backward op; that logic seems questionable. Could you sort out the logic here again?

Contributor Author

This already existed in the previous backward OpMaker, which has an InputXCanBeAbsent check; likewise, when save_intermediate_out is false, the intermediate_out output does not exist.
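For intuition on why X can be absent in the backward op (a hedged sketch, not Paddle code): when the functor pair is add + relu, the local gradient is d(relu(x + y))/dx = 1[out > 0], so x_grad can be computed from out and out_grad alone, without X:

```python
def fused_elemwise_add_activation_grad(out, out_grad):
    # relu gradient: pass the upstream gradient where the output is positive.
    x_grad = [g if o > 0 else 0.0 for o, g in zip(out, out_grad)]
    # The add contributes the same local gradient to both inputs
    # (ignoring broadcasting along axis for simplicity).
    y_grad = list(x_grad)
    return x_grad, y_grad
```

This only illustrates the add + relu case; whether the general fused op can always drop X depends on the functor_list, which is presumably what the InputXCanBeAbsent check guards.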

Contributor

This should probably still follow FusedElemwiseActivationGradMaker, which always creates x and intermediate_out when it builds fused_elemwise_add_activation_grad.
As follows:

    // X should be set here
    for (auto &input_param : this->InputNames()) {
      grad_op->SetInput(input_param, this->Input(input_param));
      grad_op->SetOutput(framework::GradVarName(input_param),
                         this->InputGrad(input_param, true));
    }
    // IntermediateOut should be set here
    if (PADDLE_GET_CONST(bool, grad_op->GetAttr("save_intermediate_out"))) {
      // PADDLE_ENFORCE_NE(Output("IntermediateOut").size(), 0);
      grad_op->SetInput("IntermediateOut", this->Output("IntermediateOut"));
      grad_op->SetOutput(framework::GradVarName("IntermediateOut"),
                         this->OutputGrad("IntermediateOut"));
    } else {
      grad_op->SetInput("IntermediateOut", this->EmptyOutput());
      grad_op->SetOutput(framework::GradVarName("IntermediateOut"),
                         this->EmptyOutputGrad());
    }

However, in some cases the resulting OpDesc may indeed not match this GradOpMaker, because fused_elemwise_add_activation_grad may be created by a pass rather than going through this GradOpMaker. @yangguohao, could you share the specific unit test, or check whether the relevant pass code might create fused_elemwise_add_activation_grad without these two inputs?

Contributor Author

(screenshot omitted)
IntermediateOut and X are empty.

Comment on lines 319 to 328
// if (intermediate_out_grad) {
// // For Unary(Binary(X, Y)), IntermediateOut should not be empty.
// if (IsUnaryCompound(functor_list)) {
// intermediate_out_grad->set_dims(out_grad.dims());
// intermediate_out_grad->share_lod(out_grad);
// } else {
// intermediate_out_grad->set_dims(y.dims());
// intermediate_out_grad->share_lod(y);
// }
// }
Contributor

Is this commented-out code still needed?

Contributor Author

intermediate_out_grad still needs to be added later.

Comment on lines 315 to 316
y_grad->set_dims(y.dims());
y_grad->share_lod(y);
Contributor

The dtype needs to be set here.
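A hedged sketch of what the fix presumably looks like, assuming the standard phi MetaTensor setters (set_dims, set_dtype, share_lod) — this is an illustrative fragment, not the merged code:

```cpp
// Set dtype alongside dims/lod when filling the grad output meta.
y_grad->set_dims(y.dims());
y_grad->set_dtype(y.dtype());
y_grad->share_lod(y);
```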

Contributor Author

done


paddle/phi/infermeta/multiary.cc (outdated review thread, resolved)
@kangguangli left a comment (Contributor)

LGTM

@kangguangli kangguangli merged commit 3ac5e69 into PaddlePaddle:develop Nov 15, 2023
28 checks passed
SecretXV pushed a commit to SecretXV/Paddle that referenced this pull request Nov 28, 2023
* add fused_elemwise_add_activation

* fix

* fix scale attribute add intermediate_out_grad warning

* fix

* fix 2023-11-06

* modified test file

* fix 2023-11-07

* revert

* fix windows inference

* fix

* fix 2023-11-14
@yangguohao yangguohao deleted the PIR_build_strategy branch December 1, 2023 08:53
Labels
contributor External developers
4 participants