
Conversation

@PolaKuma
Contributor

PR Category

User Experience

PR Types

Others

Description

For the array_api_tests/test_linalg.py::test_linalg_tensordot unit test, this PR adds 0-size tensor support to tensordot (matmul), following the api-array-compat standard. With this change the array-api-tests cases below pass (the result_type error is ignored; also, running _test_tensordot_stacks directly segfaults during the computation, but the shapes check out fine, so commenting out the computation and keeping only the shape verification lets it pass).
[Screenshot 2024-12-16 at 11:28:49 — test results]
[Screenshot 2024-12-16 at 11:55:04 — test results]
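As a sanity check on the expected semantics (NumPy is used here purely as a reference implementation of the array API behavior, not Paddle code), a matmul over 0-size inputs should return an empty output with the usual result shape:

```python
import numpy as np

# NumPy as a reference for the array API semantics: matmul on
# 0-size inputs returns an empty output with the usual result shape.
x = np.empty((0, 5, 5))  # batch of zero 5x5 matrices
y = np.empty((0, 5, 5))
out = np.matmul(x, y)
print(out.shape)  # (0, 5, 5)
```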

@paddle-bot

paddle-bot bot commented Dec 16, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first; see the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Dec 16, 2024
@PolaKuma
Contributor Author

It looks like sum is also involved here; waiting for this PR to be merged: #70146

@luotao1 luotao1 added the HappyOpenSource Pro (advanced open-source program with more challenging tasks) label Dec 16, 2024
Comment on lines +2010 to +2039
if (x.numel() == 0 || y.numel() == 0) {
auto x_dims = x.dims();
auto y_dims = y.dims();
if (transpose_x) {
std::swap(x_dims[x_dims.size() - 1], x_dims[x_dims.size() - 2]);
}
if (transpose_y) {
std::swap(y_dims[y_dims.size() - 1], y_dims[y_dims.size() - 2]);
}
std::vector<std::int64_t> out_dims(x_dims.size() - 1 + y_dims.size() - 1);
for (int64_t i = 0; i < x_dims.size() - 1; ++i) {
out_dims[i] = x_dims[i];
}
for (int64_t i = 1; i < y_dims.size(); ++i) {
out_dims[x_dims.size() - 1 + i - 1] = y_dims[i];
}
out->Resize(phi::make_ddim(out_dims));
ctx.template Alloc<T>(out);
return;
}
PADDLE_ENFORCE_GE(
common::product(x.dims()),
0,
common::errors::InvalidArgument(
"The dims of Input(X) should be greater than or equal to 0."));
PADDLE_ENFORCE_GE(
common::product(y.dims()),
0,
common::errors::InvalidArgument(
"The dims of Input(Y) should be greater than or equal to 0."));
Contributor

If matmul_kernel only changes the PADDLE_ENFORCE_NE check, would that be enough to support 0-size Tensors?

Contributor Author

That doesn't seem to work; it reports: ** On entry to DGEMM parameter number 8 had an illegal value
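The takeaway is that the 0-size case has to be short-circuited before the kernel ever reaches BLAS, since GEMM validates its integer arguments (here parameter 8, a leading dimension) and rejects zero sizes. A minimal Python sketch of that guard, with a hypothetical helper name and the same output-shape rule as the kernel diff above:

```python
import math

def matmul_0size_shape(x_shape, y_shape):
    """Hypothetical sketch of the guard: if either operand has zero
    elements, return the output shape directly instead of dispatching
    to GEMM (which would reject a zero-size leading dimension)."""
    if math.prod(x_shape) == 0 or math.prod(y_shape) == 0:
        # Leading dims of x followed by trailing dims of y,
        # mirroring the out_dims loops in the kernel diff above.
        return tuple(x_shape[:-1]) + tuple(y_shape[1:])
    return None  # non-empty: fall through to the normal GEMM path

print(matmul_0size_shape((0, 5), (5, 3)))  # (0, 3)
```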

Comment on lines +376 to +377
self.x_shape = [0, 5, 5, 5]
self.y_shape = [0, 5, 5, 5]
Contributor

Are there other shape combinations to cover?

Contributor Author

Added.
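For context, some additional 0-size combinations worth covering, checked here against NumPy (used only as a reference for the expected shapes, not as Paddle code):

```python
import numpy as np

# A few more 0-size shape combinations and their expected outputs:
cases = [
    ((0, 5, 5, 5), (0, 5, 5, 5)),  # zero-size batch on both sides
    ((2, 0, 5), (2, 5, 3)),        # zero rows in x
    ((2, 5, 0), (2, 0, 3)),        # zero-size contraction dim
]
for xs, ys in cases:
    out = np.matmul(np.empty(xs), np.empty(ys))
    print(xs, ys, "->", out.shape)
```

Note that the zero-contraction case yields a non-empty (2, 5, 3) output filled with zeros (a sum over an empty axis), so it exercises a different path than the all-empty cases.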

Comment on lines +2010 to +2029
if (x.numel() == 0 || y.numel() == 0) {
auto x_dims = x.dims();
auto y_dims = y.dims();
if (transpose_x) {
std::swap(x_dims[x_dims.size() - 1], x_dims[x_dims.size() - 2]);
}
if (transpose_y) {
std::swap(y_dims[y_dims.size() - 1], y_dims[y_dims.size() - 2]);
}
std::vector<std::int64_t> out_dims(x_dims.size() - 1 + y_dims.size() - 1);
for (int64_t i = 0; i < x_dims.size() - 1; ++i) {
out_dims[i] = x_dims[i];
}
for (int64_t i = 1; i < y_dims.size(); ++i) {
out_dims[x_dims.size() - 1 + i - 1] = y_dims[i];
}
out->Resize(phi::make_ddim(out_dims));
ctx.template Alloc<T>(out);
return;
}
Contributor

Could you verify that this output-shape computation handles batch broadcasting correctly?
(image attached)

Contributor Author

Added; the unit tests pass as well.
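For reference, the batch-broadcast behavior being asked about, as implemented by NumPy's matmul (a stand-in for the expected array API semantics, not the Paddle kernel itself):

```python
import numpy as np

# A 2-D operand broadcasts across a 0-size batch dimension:
x = np.empty((0, 5, 5))
y = np.empty((5, 5))
print(np.matmul(x, y).shape)  # (0, 5, 5)

# Size-1 batch dims also broadcast against 0-size ones:
a = np.empty((1, 4, 5))
b = np.empty((0, 5, 3))
print(np.matmul(a, b).shape)  # (0, 4, 3)
```

These are exactly the cases a plain concatenation of x's leading dims with y's trailing dims would miss, which is why the broadcast combinations deserve dedicated unit tests.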

@PolaKuma PolaKuma marked this pull request as draft December 18, 2024 14:18
@PolaKuma PolaKuma marked this pull request as ready for review December 18, 2024 14:45
Contributor

@HydrogenSulfate HydrogenSulfate left a comment

LGTM

@HydrogenSulfate HydrogenSulfate merged commit 2c597ea into PaddlePaddle:develop Dec 19, 2024
28 checks passed
@PolaKuma PolaKuma deleted the matmul_0size branch March 20, 2025 04:41

Labels

contributor External developers · HappyOpenSource Pro (advanced open-source program with more challenging tasks)

3 participants