
[Paddle Tensor No.23] Add TensorA[mask] = TensorB #69480

Merged

Conversation

LittleHeroZZZX
Contributor

PR Category

User Experience

PR Types

New features

Description

Paddle Tensor standardization: modify the C++ implementation of Tensor.__setitem__ to implement the computation logic for the case where the index is a boolean type and the right-hand side is a Tensor.
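As a rough illustration of the target semantics, here is a minimal numpy sketch (not Paddle's API; numpy follows the same boolean-mask assignment convention):

```python
import numpy as np

x = np.zeros(5, dtype=np.float32)                    # TensorA
mask = np.array([True, False, True, False, True])    # boolean index
b = np.array([1.0, 2.0, 3.0])                        # TensorB, one value per True entry

# Assign b's values to the positions of x where mask is True.
x[mask] = b
print(x)  # [1. 0. 2. 0. 3.]
```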


paddle-bot bot commented Nov 18, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Nov 18, 2024
@luotao1 luotao1 added the HappyOpenSource 快乐开源活动issue与PR label Nov 19, 2024
Comment on lines 1867 to 1868
transed_sub_tensor =
index_put__ad_func(transed_sub_tensor, transed_index, value_tensor);
Contributor

@HydrogenSulfate HydrogenSulfate Nov 19, 2024


Discussed this with the owner of the slicing API; the conclusions are:

  1. The original where optimization has a bug: when v is not a single-element tensor, it most likely cannot be combined with x[mask] in a where computation.
  2. The original code was written to optimize x[mask] = 0-D/1-D Tensor, i.e. "assign a single value to every position in x where mask is True". The original approach broadcasts v to the shape of x and takes out = where(mask, x, broadcasted_v) as the result, which is somewhat faster than index_put.

So could we keep this code, but change the logic (the branch condition) so that only a v containing a single element takes this path?
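A minimal numpy sketch (not Paddle's actual C++ code) of the fast path described above: when v holds a single element, broadcast it to x's shape and select with where instead of calling index_put. The argument order here follows np.where(cond, if_true, if_false), so v is selected at the True positions:

```python
import numpy as np

x = np.arange(6, dtype=np.float32).reshape(2, 3)
mask = np.array([[True, False, True], [False, True, False]])
v = np.array([9.0])  # single-element v

# Broadcast v to x's shape, then pick v where mask is True and x elsewhere.
broadcasted_v = np.broadcast_to(v, x.shape)
out = np.where(mask, broadcasted_v, x)

print(out)
# [[9. 1. 9.]
#  [3. 9. 5.]]
```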

Contributor Author


Sure thing.

@HydrogenSulfate
Contributor

HydrogenSulfate commented Nov 20, 2024

@LittleHeroZZZX I've been thinking: could the unit test also cover the case where v itself can be broadcast? As shown below:

import torch

x = torch.zeros(2, 2, 3)
mask = torch.tensor(
    [
        [True, False],
        [False, True],
    ],
)

y = torch.arange(1, 4, dtype=torch.float32)

print(x[0, 0])
print(x[0, 1])
print(x[1, 0])
print(x[1, 1])
print(mask)
x[mask] = y
print(x[0, 0])
print(x[0, 1])
print(x[1, 0])
print(x[1, 1])

Could you test whether your code handles the case where v (shape [3]) is broadcast to the shape of x[mask] ([2, 3])? If it does, please also add a broadcast test to the unit tests (it could be called test_boolean_mask_tensor_broadcast_v). After that there are basically no remaining issues. If the local test passes, you can submit a new PR that just adds the unit test; I can merge this PR first, since re-running CI is a hassle.
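For reference, numpy follows the same broadcasting rule as the torch snippet above, so the expected result can be sketched like this (a numpy illustration, not the Paddle test itself):

```python
import numpy as np

x = np.zeros((2, 2, 3), dtype=np.float32)
mask = np.array([[True, False], [False, True]])
y = np.arange(1, 4, dtype=np.float32)  # shape (3,)

# x[mask] has shape (2, 3); y (shape (3,)) broadcasts across its rows.
x[mask] = y

print(x[0, 0])  # [1. 2. 3.]
print(x[0, 1])  # [0. 0. 0.]
print(x[1, 1])  # [1. 2. 3.]
```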

@LittleHeroZZZX
Contributor Author

> @LittleHeroZZZX I've been thinking: could the unit test also cover the case where v itself can be broadcast? (full comment and code snippet quoted above)

I tested this case and it works, with the same behavior as PyTorch.

@HydrogenSulfate
Contributor

> @LittleHeroZZZX I've been thinking: could the unit test also cover the case where v itself can be broadcast? (full comment and code snippet quoted above)

> I tested this case and it works, with the same behavior as PyTorch.

OK, then I'll merge this first. Please submit another PR later to add the broadcast unit test.

Contributor

@HydrogenSulfate HydrogenSulfate left a comment


LGTM

@HydrogenSulfate HydrogenSulfate merged commit 6c3c4fa into PaddlePaddle:develop Nov 20, 2024
27 of 28 checks passed
@luotao1
Contributor

luotao1 commented Nov 25, 2024

hi, @LittleHeroZZZX

  • Thank you very much for your contribution to PaddlePaddle. We run the PFCC organization, which continuously contributes to PaddlePaddle through regular technical-knowledge sharing and developer-led tasks; see the https://github.com/luotao1 homepage for details.
  • If you are interested in PFCC, please send an email to [email protected] and we will invite you to join.
