
[xdoctest] reformat example code with google style in No. 270 275-280 #56476

Merged
merged 14 commits on Aug 22, 2023

Conversation

Liyulingyue
Contributor

PR types

Others

PR changes

Others

Description

#55629

@paddle-bot

paddle-bot bot commented Aug 20, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added contributor External developers status: proposed labels Aug 20, 2023
@Liyulingyue
Contributor Author

@SigureMo requesting a review 🐕

... data = paddle.to_tensor([1, 2, 3])
... dist.recv(data, src=0)
>>> print(data)
[7, 8, 9] (2 GPUs)
Contributor

It would be best to keep the original comments ~
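For context on the format under review, here is a minimal sketch of the `>>>`/`...` doctest style this PR converts examples to, exercised with the stdlib `doctest` module (an illustration only; Paddle's actual tooling is xdoctest, which accepts the same prompt syntax):

```python
import doctest

# Google-style example blocks use '>>>' for a statement, '...' for its
# continuation lines, and put the expected output directly below.
sample = """
>>> data = [x * 2
...         for x in (1, 2, 3)]
>>> print(data)
[2, 4, 6]
"""

# Parse and run the snippet the same way a doctest runner would.
parser = doctest.DocTestParser()
case = parser.get_doctest(sample, {}, "sample", None, 0)
results = doctest.DocTestRunner(verbose=False).run(case)
print(results.failed)  # 0 -> the printed output matched the expected line
```

Output lines are matched verbatim, which is exactly why device-dependent text such as `[7, 8, 9] (2 GPUs)` is problematic as a literal expected value.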

... task = dist.irecv(data, src=0)
>>> task.wait()
>>> print(data)
[7, 8, 9] (2 GPUs)
Contributor

Same as above

... data = paddle.to_tensor([[1, 2, 3], [1, 2, 3]])
>>> dist.all_reduce(data, op=dist.ReduceOp.SUM)
>>> print(data)
[[5, 7, 9], [5, 7, 9]] (2 GPUs)
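As a plain-Python sketch of what the `all_reduce` example above computes (the second rank's data, `[[4, 5, 6], [4, 5, 6]]`, is inferred from the documented result; no GPUs or paddle involved here):

```python
# Hypothetical per-rank inputs; all_reduce with ReduceOp.SUM leaves every
# rank holding the elementwise sum of all ranks' tensors.
rank0 = [[1, 2, 3], [1, 2, 3]]
rank1 = [[4, 5, 6], [4, 5, 6]]  # inferred, not shown in the snippet

reduced = [[a + b for a, b in zip(row0, row1)]
           for row0, row1 in zip(rank0, rank1)]
print(reduced)  # [[5, 7, 9], [5, 7, 9]]
```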
Contributor

Same as above

Comment on lines 120 to 121
[[5, 7, 9], [5, 7, 9]] (2 GPUs, out for rank 0)
[[1, 2, 3], [1, 2, 3]] (2 GPUs, out for rank 1)
Contributor

Same as above

Comment on lines 60 to 61
[4, 6] (2 GPUs, out for rank 0)
[8, 10] (2 GPUs, out for rank 1)
Contributor

Same as above

Comment on lines 108 to 109
[1, 3] (2 GPUs, out for rank 0)
[5, 7] (2 GPUs, out for rank 1)
Contributor

Same as above

Comment on lines 68 to 69
[1, 2, 3] [10, 11, 12] (2 GPUs, out for rank 0)
[4, 5, 6] [4, 5, 6] (2 GPUs, out for rank 1)
Contributor

Same as above

Comment on lines 107 to 108
[{'bar': [1, 2, 3]}] (2 GPUs, out for rank 0)
[{'bar': [4, 5, 6]}] (2 GPUs, out for rank 1)
Contributor

Same as above

... data = paddle.to_tensor([1, 2, 3])
... dist.recv(data, src=0)
>>> print(data)
[7, 8, 9] (2 GPUs)
Contributor

Same as above

... task = dist.irecv(data, src=0)
>>> task.wait()
>>> print(data)
[7, 8, 9] (2 GPUs)
Contributor

Same as above

@luotao1 luotao1 added the HappyOpenSource Pro 进阶版快乐开源活动,更具挑战性的任务 label Aug 21, 2023
Contributor

@megemini megemini left a comment

LGTM ~

There are a lot of distributed `print`s here; I feel that writing the outputs as comments would make them easier to follow.
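The suggestion above can be sketched as follows (my own illustration, not the PR's actual docstring): device-dependent output is kept in a comment, so a doctest runner has nothing to match against while the example still documents what each rank would print.

```python
import doctest

sample = """
>>> data = [1, 2, 3]  # stand-in for a distributed tensor on one rank
>>> # After dist.all_reduce(data) on 2 GPUs, print(data) would show:
>>> # [[5, 7, 9], [5, 7, 9]]
"""

# The comment lines execute as no-ops, so nothing is verified against
# hardware-dependent output, yet the expected result stays visible.
parser = doctest.DocTestParser()
case = parser.get_doctest(sample, {}, "sample", None, 0)
results = doctest.DocTestRunner(verbose=False).run(case)
print(results.failed)  # 0
```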

Also, the warning in the Chinese docs is not rendered; I wonder if this problem has always existed? @SigureMo @sunzhongkai588

(screenshots)

@SigureMo
Member

the warning in the Chinese docs is not rendered

(screenshot)

It's not that it isn't rendered; it simply isn't there at all...

@megemini
Contributor

It's not that it isn't rendered; it simply isn't there at all...

Er ... I forgot it was over here ...

@luotao1 luotao1 merged commit f02261b into PaddlePaddle:develop Aug 22, 2023
@Liyulingyue Liyulingyue deleted the xdoc_261 branch November 6, 2023 19:33
Labels
contributor External developers HappyOpenSource Pro 进阶版快乐开源活动,更具挑战性的任务

5 participants