[Hackathon 7th No.39] Add API conversion rules for the Paddle code conversion tool (Group 6) #477

Open: wants to merge 3 commits into master

Conversation

@Asthestarsfalll commented Sep 18, 2024

PR Docs

PaddlePaddle/docs#6878

PR APIs

torch.nn.functional.lp_pool1d
torch.nn.functional.lp_pool2d
torch.nn.functional.threshold_
torch.nn.functional.feature_alpha_dropout
torch.nn.functional.scaled_dot_product_attention
torch.nn.LPPool1d
torch.nn.LPPool2d
torch.nn.Softmin
torch.nn.AdaptiveLogSoftmaxWithLoss
torch.nn.parameter.UninitializedParameter
torch.nn.parameter.UninitializedBuffer
torch.nn.CircularPad3d
torch.nn.utils.parametrizations.weight_norm
torch.optim.RAdam
torch.optim.NAdam

Current issues:

  1. lp_pool behavior differs between torch and Paddle: when norm_type is inf the result should match max_pool, but torch returns 1 (see the sketch below).
  2. No mapping was found for UninitializedParameter and UninitializedBuffer.
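
A minimal sketch of the divergence in item 1, assuming a recent torch build; the values shown in the comments reflect the behavior described above and may vary by version:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[1.0, 2.0, 3.0, 4.0]]])

# Mathematically, lp_pool with norm_type -> inf should converge to max_pool.
lp_out = F.lp_pool1d(x, norm_type=float('inf'), kernel_size=2)
max_out = F.max_pool1d(x, kernel_size=2)

print(lp_out)   # reported: tensor([[[1., 1.]]])  (inf arithmetic collapses to 1)
print(max_out)  # tensor([[[2., 4.]]])            (the mathematically expected limit)
```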

paddle-bot bot commented Sep 18, 2024

Thanks for your contribution!

@zhwesky2010 (Collaborator) commented Sep 18, 2024

> Current issues:
>
> 1. lp_pool behavior differs between torch and Paddle: when norm_type is inf the result should match max_pool, but torch returns 1.
> 2. No mapping was found for UninitializedParameter and UninitializedBuffer.

  1. Describe the problem from PyTorch's perspective: for the specific PyTorch feature, examine how Paddle behaves differently, and analyze whether the difference is a functional bug or a missing feature in Paddle. Give a clear conclusion rather than simply listing issues.
  2. Not every API has a ready-made mapping; you need to work out an equivalent replacement implementation so that the network computation is unaffected.

paddle-bot bot added the contributor (External developers) label on Sep 18, 2024

@Asthestarsfalll (Author)

> 1. Describe the problem from PyTorch's perspective: for the specific PyTorch feature, examine how Paddle behaves differently, and analyze whether the difference is a functional bug or a missing feature in Paddle. Give a clear conclusion rather than simply listing issues.
> 2. Not every API has a ready-made mapping; you need to work out an equivalent replacement implementation so that the network computation is unaffected.

[Screenshot: 240919_09h21m56s]

  1. According to the PyTorch documentation, the result should match max_pool when norm_type is inf. Paddle's implementation calls max_pool directly in that case, whereas PyTorch evaluates the formula directly and, because of floating-point range limits, outputs 1.
  2. Understood.

@zhwesky2010 (Collaborator) commented Sep 19, 2024

@Asthestarsfalll The unit tests failed. Please make sure CI passes; the PR will not be merged unless CI passes:

2024-09-19 10:46:30 FAILED tests/test_nn_LPPool1d.py::test_case_3 - AssertionError: API (torch.nn...
2024-09-19 10:46:30 FAILED tests/test_nn_LPPool2d.py::test_case_6 - AssertionError: API (torch.nn...
2024-09-19 10:46:30 FAILED tests/test_nn_Softmin.py::test_case_1 - AttributeError: module 'paddle...
2024-09-19 10:46:30 FAILED tests/test_nn_Softmin.py::test_case_2 - AttributeError: module 'paddle...
2024-09-19 10:46:30 FAILED tests/test_nn_functional_lp_pool1d.py::test_case_5 - AssertionError: A...
2024-09-19 10:46:30 FAILED tests/test_nn_functional_lp_pool2d.py::test_case_6 - AssertionError: A...
2024-09-19 10:46:30 ============ 6 failed, 8152 passed, 90 skipped in 188.53s (0:03:08) ============

@zhwesky2010 (Collaborator)

@Asthestarsfalll For issue 1, write a disabled version of the unit test and note the reason in a comment (see the sketch after this comment).
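
A hedged sketch of what such a disabled test might look like, written as plain pytest; PaConvert's actual test harness and case naming may differ:

```python
import pytest

# Hypothetical placeholder: the norm_type=inf case is skipped because torch and
# Paddle diverge here (torch collapses to 1 via inf arithmetic, while Paddle
# falls back to max_pool).
@pytest.mark.skip(
    reason="torch returns 1 when norm_type=inf due to float overflow, "
           "while Paddle falls back to max_pool; behaviors differ."
)
def test_case_norm_type_inf():
    ...
```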

```python
class Softmin(paddle.nn.Softmax):
    def forward(self, x):
        # Softmin(x) is equivalent to Softmax(-x)
        return super().forward(-x)

setattr(paddle.nn, 'Softmin', Softmin)
```
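
A quick sanity check of the patch above, assuming a working Paddle install; the class is repeated here only so the snippet runs on its own:

```python
import paddle

class Softmin(paddle.nn.Softmax):
    def forward(self, x):
        # Softmin(x) is defined as Softmax(-x)
        return super().forward(-x)

setattr(paddle.nn, 'Softmin', Softmin)

x = paddle.to_tensor([[1.0, 2.0, 3.0]])
out = paddle.nn.Softmin(axis=-1)(x)
expected = paddle.nn.functional.softmax(-x, axis=-1)
print(bool(paddle.allclose(out, expected)))  # True
```
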
@Asthestarsfalll (Author)

There doesn't seem to be a way to get the input to forward, so this is the only workaround.

@zhwesky2010 (Collaborator)

@Asthestarsfalll CI is currently failing and needs to be fixed.

@zhwesky2010 (Collaborator)

@Asthestarsfalll CI has not passed; please investigate the issue.

@zhwesky2010 (Collaborator)

@Asthestarsfalll CI still has not passed; please investigate the issue.
