
interpolate (grad) op support fp16 on gpu #45061

Merged: 9 commits into PaddlePaddle:develop from interpolate_kernel on Sep 2, 2022

Conversation

@yuanlehome (Contributor) commented Aug 11, 2022

PR types

Others

PR changes

Others

Describe

The interpolate (grad) kernels now support fp16 on GPU. This covers bilinear_interp_v2, nearest_interp_v2, trilinear_interp_v2, linear_interp_v2, and bicubic_interp_v2, together with their grad kernels. A usage sketch follows below.
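For context, here is a minimal usage sketch of the path this change enables, assuming a CUDA build of Paddle and the public `paddle.nn.functional.interpolate` API (illustrative only, not code from this PR):

```python
import paddle
import paddle.nn.functional as F

# Minimal sketch (assumes a CUDA build of Paddle): run interpolation
# directly on float16 GPU tensors, which these kernels now support.
paddle.set_device('gpu')

x = paddle.rand([1, 3, 64, 64]).astype('float16')  # NCHW fp16 input

y_bilinear = F.interpolate(x, scale_factor=2, mode='bilinear')
y_nearest = F.interpolate(x, scale_factor=2, mode='nearest')

print(y_bilinear.dtype, y_bilinear.shape)  # paddle.float16, [1, 3, 128, 128]
```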

Performance

interp_bicubic

| Config id in JSON | FP32 Perf (ms) | FP32 grad Perf (ms) | FP16 Perf (ms) | FP16 grad Perf (ms) |
|---|---|---|---|---|
| 0 | 4.2525 | 4.8649 | 4.3282 | 15.479 |
| 1 | 4.2448 | 6.2143 | 4.1762 | 56.991 |
| 2 | 6.3533 | 7.2487 | 6.4865 | 32.324 |

interp_bilinear

| Config id in JSON | FP32 Perf (ms) | FP32 grad Perf (ms) | FP16 Perf (ms) | FP16 grad Perf (ms) |
|---|---|---|---|---|
| 0 | 16.681 | 38.686 | 14.645 | 98.237 |
| 1 | 26.222 | 30.251 | 25.933 | 74.770 |
| 2 | 0.22262 | 7.8993 | 0.2272 | 82.860 |
| 3 | 0.30256 | 2.4894 | 0.26096 | 147.38 |
| 4 | 3.1332 | 13.898 | 1.6822 | 209.75 |
| 5 | 5.4776 | 8.1423 | 4.7927 | 105.74 |
| 6 | 0.18054 | 20.567 | 0.10122 | 139.88 |

interp_nearest

| Config id in JSON | FP32 Perf (ms) | FP32 grad Perf (ms) | FP16 Perf (ms) | FP16 grad Perf (ms) |
|---|---|---|---|---|
| 0 | 0.16115 | 0.16214 | 0.15923 | 0.54301 |
| 1 | 0.17987 | 0.26403 | 0.25824 | 0.75984 |

interp_trilinear

| Config id in JSON | FP32 Perf (ms) | FP32 grad Perf (ms) | FP16 Perf (ms) | FP16 grad Perf (ms) |
|---|---|---|---|---|
| 0 | 5.7762 | 5.7804 | 5.7707 | 9.5857 |
| 1 | 0.73469 | 0.73664 | 0.73149 | 8.9316 |
| 2 | 1.9602 | 2.0339 | 1.9574 | 86.897 |

interp_linear

| Config id in JSON | FP32 Perf (us) | FP32 grad Perf (us) | FP16 Perf (us) | FP16 grad Perf (us) |
|---|---|---|---|---|
| 0 | - | - | - | - |
| 1 | 30.144 | 30.336 | 28.064 | 52.511 |

Intended for the forward pass in inference. Since the backward (grad) performance degrades noticeably in fp16, it is not suitable for training, so these ops have been added to the AMP training black list (PR link). A sketch of that black-list usage follows below.
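As a hedged illustration (not part of this PR's diff), keeping these ops in FP32 during AMP training can be done by naming their v2 kernels in the `custom_black_list` of `paddle.amp.auto_cast`:

```python
import paddle
import paddle.nn.functional as F

# Illustrative sketch: during AMP training, keep the interpolate ops in FP32
# by black-listing their v2 kernel names; other ops may still run in fp16.
paddle.set_device('gpu')
conv = paddle.nn.Conv2D(3, 3, kernel_size=3, padding=1)
x = paddle.rand([4, 3, 32, 32])

black_list = {'bilinear_interp_v2', 'nearest_interp_v2', 'trilinear_interp_v2',
              'linear_interp_v2', 'bicubic_interp_v2'}

with paddle.amp.auto_cast(custom_black_list=black_list):
    y = conv(x)                                            # may run in fp16
    y = F.interpolate(y, scale_factor=2, mode='bilinear')  # kept in fp32
```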

@paddle-bot commented Aug 11, 2022

Your PR has been submitted. Thank you for your contribution to the open-source project!
Please wait for the CI results first. See the Paddle CI Manual for details.

@yuanlehome changed the title from "nearest_interp_v2 and bilinear_interp_v2 op support fp16" to "interpolate (grad) op support fp16" on Aug 15, 2022
@yuanlehome changed the title from "interpolate (grad) op support fp16" to "interpolate (grad) op support fp16 on gpu" on Aug 19, 2022
@CLAassistant commented Aug 22, 2022

CLA assistant check
All committers have signed the CLA.

@zhangting2020 (Contributor) left a comment

LGTM for fp16 tests

@Aurelius84 (Contributor) left a comment

LGTM for dtype registration

@jiweibo jiweibo merged commit b12c27e into PaddlePaddle:develop Sep 2, 2022
@yuanlehome yuanlehome deleted the interpolate_kernel branch September 9, 2022 04:55