
[xdoctest] reformat example code with google style in 211,281,308,323 #57301

Merged: 4 commits into PaddlePaddle:develop on Sep 22, 2023

Conversation

longranger2 (Contributor)

PR types: Others

PR changes: Others

Description:

Update the example code in the following files to the new format and make it pass the xdoctest check:

  • python/paddle/distributed/communication/stream/send.py
  • python/paddle/tensor/random.py
  • python/paddle/distributed/io.py
  • paddle/fluid/pybind/imperative.cc

@sunzhongkai588 @SigureMo @megemini


@paddle-bot commented Sep 13, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Sep 13, 2023
@SigureMo left a comment (Member)

Don't delete the random outputs. You can use

>>> # doctest: +SKIP("Random output")
random output
>>> # doctest: -SKIP

to wrap the output. Even though it may then go unchecked, these outputs are very helpful for users.
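To make the suggested pattern concrete, here is a hypothetical docstring sketch (the function name and the printed values are illustrative, not a real Paddle API change): the random output stays visible to readers, while the paired SKIP directives keep xdoctest from trying to match it.

```python
# Hypothetical docstring showing the reviewer's pattern: keep the random
# output for readers, wrapped in +SKIP / -SKIP so the checker skips it.
def example_api():
    """
    Examples:
        .. code-block:: python

            >>> import paddle
            >>> x = paddle.rand([2, 3])
            >>> # doctest: +SKIP("Random output")
            >>> print(x)
            Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
                   [[0.55355281, 0.20714243, 0.01162981],
                    [0.51577556, 0.36369765, 0.26091650]])
            >>> # doctest: -SKIP
    """
```

The directives are plain text inside the docstring, so the pattern itself can be inspected without importing paddle.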

>>> x = paddle.zeros((1,2)).astype("bool")
>>> out10 = paddle.randint_like(x, low=-5, high=5, dtype="int64")
>>> print(out10.dtype)
>>> # paddle.int64
Member:

[screenshot]

Looks like float16 is not supported.

Member:

Same as above.


>>> paddle.enable_static()
>>> # Build the model
>>> main_prog = base.Program()
Member:

paddle.static.Program

>>> main_prog = base.Program()
>>> startup_prog = base.Program()
>>> with base.program_guard(main_prog, startup_prog):
...     data = base.layers.data(name="img", shape=[64, 784], append_batch_size=False)
Member:

paddle.static.data is the migrated API, right?


>>> out = paddle.bernoulli(x)
Member:

Why was this example's output deleted?


>>> x = paddle.uniform([2,3], min=1.0, max=5.0)
>>> out = paddle.poisson(x)
Member:

Same as above; why was it deleted?

out3 = paddle.normal(mean=mean_tensor, std=std_tensor)
# [1.00780561 3.78457445 5.81058198] # random
>>> std_tensor = paddle.to_tensor([1.0, 2.0, 3.0])
>>> out3 = paddle.normal(mean=mean_tensor, std=std_tensor)
Member:

Same as above.

>>> # example 3:
>>> # attr shape is a Tensor, the data type must be int64 or int32.
>>> shape_tensor = paddle.to_tensor([2, 3])
>>> out3 = paddle.uniform(shape_tensor)
Member:

Same as above.

>>> # example 5:
>>> # Input only one parameter
>>> # low=0, high=10, shape=[1], dtype='int64'
>>> out5 = paddle.randint(10)
Member:

Same as above.


>>> out2 = paddle.randperm(7, 'int32')
Member:

Same as above.


>>> # example 3: attr shape is a Tensor, the data type must be int64 or int32.
>>> shape_tensor = paddle.to_tensor([2, 3])
>>> out3 = paddle.rand(shape_tensor)
Member:

Same as above.

>>> import paddle
>>> from paddle.base import core
>>> from paddle.device import cuda
...
Contributor:

No need to use `...` here.

...
>>> if core.is_compiled_with_cuda():
...     src = paddle.rand(shape=[100, 50, 50])
...     dst = paddle.emtpy(shape=[200, 50, 50]).pin_memory()
Contributor:

emtpy -> empty

...
...     stream = cuda.Stream()
...     with cuda.stream_guard(stream):
...         core.async_write(src, dst, offset, count)
Contributor:

@SigureMo Calling core.eager.async_write here works fine, and the test cases also use core.eager. However, this example is actually the one in paddle.base.core.async_write.__doc__. Isn't that a problem?!

Member:

That must be a mistake; async_write is clearly bound on eager:

void BindEager(pybind11::module* module) {
  auto m = module->def_submodule("eager");
  static std::vector<PyMethodDef> methods;
  AddPyMethodDefs(&methods, variable_methods);
  AddPyMethodDefs(&methods, math_op_patch_methods);
  auto heap_type = reinterpret_cast<PyHeapTypeObject*>(
      PyType_Type.tp_alloc(&PyType_Type, 0));
  heap_type->ht_name = ToPyObject("Tensor");
  heap_type->ht_qualname = ToPyObject("Tensor");
  auto type = &heap_type->ht_type;
  type->tp_name = "Tensor";
  type->tp_basicsize = sizeof(TensorObject);
  type->tp_dealloc = (destructor)TensorDealloc;
  type->tp_as_number = &number_methods;
  type->tp_as_sequence = &sequence_methods;
  type->tp_as_mapping = &mapping_methods;
  type->tp_methods = methods.data();
  type->tp_getset = variable_properties;
  type->tp_init = TensorInit;
  type->tp_new = TensorNew;
  type->tp_doc = TensorDoc;
  type->tp_weaklistoffset = offsetof(TensorObject, weakrefs);
  Py_INCREF(&PyBaseObject_Type);
  type->tp_base = reinterpret_cast<PyTypeObject*>(&PyBaseObject_Type);
  type->tp_flags |=
      Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HEAPTYPE;
#if PY_VERSION_HEX >= 0x03050000
  type->tp_as_async = &heap_type->as_async;
#endif
  p_tensor_type = type;
  if (PyType_Ready(type) < 0) {
    PADDLE_THROW(platform::errors::Fatal(
        "Init Paddle error in BindEager(PyType_Ready)."));
    return;
  }
  Py_INCREF(type);
  if (PyModule_AddObject(m.ptr(), "Tensor", reinterpret_cast<PyObject*>(type)) <
      0) {
    Py_DECREF(type);
    Py_DECREF(m.ptr());
    PADDLE_THROW(platform::errors::Fatal(
        "Init Paddle error in BindEager(PyModule_AddObject)."));
    return;
  }
  BindFunctions(m.ptr());
  BindEagerPyLayer(m.ptr());
  BindEagerOpFunctions(&m);
}

{"async_write",
 (PyCFunction)(void (*)())eager_api_async_write,
 METH_VARARGS | METH_KEYWORDS,
 nullptr},

Contributor:

Hmm, but this __doc__ is the one on paddle.base.core.async_write.__doc__; paddle.base.core.eager.async_write.__doc__ has nothing in it.
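A stdlib-only sketch of this kind of mismatch (the module and function names below are stand-ins, not Paddle's actual binding code): the same operation can be exported on two module paths while the docstring was only ever attached on the legacy one.

```python
import types

# Stand-in modules mimicking the two paths under discussion
# (core and core.eager); names here are illustrative only.
core = types.ModuleType("core")
core.eager = types.ModuleType("core.eager")

def _legacy_async_write(src, dst, offset, count):
    """async_write(src, dst, offset, count) -- docs live on the legacy path."""

def _eager_async_write(src, dst, offset, count):
    pass  # no docstring was attached on the eager path

core.async_write = _legacy_async_write       # has a __doc__
core.eager.async_write = _eager_async_write  # __doc__ is None
```

So probing both `__doc__` attributes, as megemini did, is exactly how the mismatch shows up.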

Member:

It's probably leftover code from the old dynamic graph mode.

Contributor (author):

How should I fix this, then? @SigureMo @megemini

Member:

Just use the new eager one.

Comment on lines 1399 to 1417
>>> import numpy as np
>>> import paddle
>>> from paddle.base import core
>>> from paddle.device import cuda
...
>>> if core.is_compiled_with_cuda():
...     src = paddle.rand(shape=[100, 50, 50], dtype="float32").pin_memory()
...     dst = paddle.empty(shape=[100, 50, 50], dtype="float32")
...     offset = paddle.to_tensor(
...         np.array([0, 60], dtype="int64"), place=paddle.CPUPlace())
...     count = paddle.to_tensor(
...         np.array([40, 60], dtype="int64"), place=paddle.CPUPlace())
...     buffer = paddle.empty(shape=[50, 50, 50], dtype="float32").pin_memory()
...     index = paddle.to_tensor(
...         np.array([1, 3, 5, 7, 9], dtype="int64")).cpu()
...
...     stream = cuda.Stream()
...     with cuda.stream_guard(stream):
...         core.async_read(src, dst, index, buffer, offset, count)
Contributor:

The call here is broken as well.

Comment on lines +40 to +49
>>> import paddle
>>> import paddle.base as base

>>> paddle.enable_static()
>>> exe = base.Executor(base.CPUPlace())
>>> param_path = "./my_paddle_model"
>>> t = paddle.distributed.transpiler.DistributeTranspiler()
>>> t.transpile(...)
>>> pserver_prog = t.get_pserver_program(...)
>>> _load_distributed_persistables(executor=exe, dirname=param_path, main_program=pserver_prog)
Contributor:

Add a `>>> # doctest: +REQUIRES(env: DISTRIBUTED)`.
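For context, the REQUIRES directive gates the whole example on an environment variable. Roughly, it behaves like the simplified stand-in below (the real check lives inside xdoctest; this is only a sketch of the idea):

```python
import os

# Simplified stand-in for `# doctest: +REQUIRES(env: DISTRIBUTED)`:
# the guarded example only runs when the named environment variable
# is set to a non-empty value.
def requires_env(name: str) -> bool:
    return bool(os.environ.get(name))

os.environ.pop("DISTRIBUTED", None)
skip_before = not requires_env("DISTRIBUTED")  # example would be skipped

os.environ["DISTRIBUTED"] = "1"
run_after = requires_env("DISTRIBUTED")        # example would run
```

This way the distributed example is skipped on ordinary CI machines but still exercised where the DISTRIBUTED environment is provided.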

Comment on lines 210 to 219
>>> import paddle

>>> paddle.enable_static()
>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> param_path = "./my_paddle_model"
>>> t = paddle.distributed.transpiler.DistributeTranspiler()
>>> t.transpile(...)
>>> train_program = t.get_trainer_program()
>>> _save_distributed_persistables(executor=exe, dirname=param_path, main_program=train_program)
Contributor:

Add a `>>> # doctest: +REQUIRES(env: DISTRIBUTED)`.

Comment on lines 369 to 374
>>> import paddle
>>> import paddle.base as base

>>> paddle.enable_static()
>>> param = base.default_main_program().global_block().var('fc.b')
>>> res = base.io.is_persistable(param)
Contributor:

            >>> import paddle
            >>> import paddle.base as base
            >>> paddle.enable_static()
            >>> image = paddle.static.data(
            ...     name='image', shape=[None, 28], dtype='float32')
            >>> bias_attr = paddle.base.ParamAttr('fc.b')
            >>> fc = paddle.static.nn.fc(image, size=10, bias_attr=bias_attr)
            >>> param = base.default_main_program().global_block().var('fc.b')
            >>> res = paddle.distributed.io.is_persistable(param)

@SigureMo could you take a look and see whether this works?

Member:

  • base.default_main_program -> paddle.static.default_main_program
  • paddle.base.ParamAttr -> paddle.ParamAttr

The rest looks fine to me.

>>> paddle.enable_static()
>>> dir_path = "./my_paddle_model"
>>> file_name = "persistables"
>>> image = paddle.static..data(name='img', shape=[None, 28, 28], dtype='float32')
Contributor:

paddle.static..data -> paddle.static.data

>>> file_name = "persistables"
>>> image = paddle.static..data(name='img', shape=[None, 28, 28], dtype='float32')
>>> label = paddle.static.data(name='label', shape=[None, 1], dtype='int64')
>>> feeder = paddle.static.DataFeeder(feed_list=[image, label], place=paddle.CPUPlace())
Contributor:

paddle.static.DataFeeder -> paddle.base.DataFeeder

@luotao1 added the HappyOpenSource Pro (advanced open-source program with more challenging tasks) label Sep 14, 2023
@SigureMo left a comment (Member)

…If everything is skipped, wouldn't every example end up running nothing but `import paddle`…

print(x)
# [[0.5535528 0.20714243 0.01162981 0.51577556]
# [0.36369765 0.2609165 0.18905126 0.5621971 ]]
>>> # doctest: +SKIP("Random output")
Member:

Don't skip everything; you can wrap just the output with SKIP:

    >>> # doctest: +SKIP("Random output")
    output
    >>> # doctest: -SKIP
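The effect can be demonstrated with the standard library's doctest module, which has an analogous SKIP flag (xdoctest's paired `+SKIP(...)`/`-SKIP` directives just scope it to the output span only):

```python
import doctest

# An example whose random output is kept for readers but skipped by the
# checker; the deterministic example below it is still verified.
src = '''
>>> import random
>>> random.random()  # doctest: +SKIP
0.123456789
>>> 1 + 1
2
'''

parser = doctest.DocTestParser()
test = parser.get_doctest(src, {}, "skip_demo", "<demo>", 0)
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test)
# results.failed == 0: the unmatched random output did not fail the run
```

So the documentation keeps its illustrative output while the check only enforces the lines that are actually deterministic.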

Member:

Also, the output format looks wrong; a Tensor clearly doesn't print like that.

out_1 = random.uniform_random_batch_size_like(input, [2, 4]) # out_1.shape=[1, 4]
# example 2:
out_2 = random.uniform_random_batch_size_like(input, [2, 4], input_dim_idx=1, output_dim_idx=1) # out_2.shape=[2, 3]
>>> import paddle
Member:

Add a blank line above.

# [-0.3761474, -1.044801 , 1.1870178 ]] # random
>>> import paddle

>>> # doctest: +SKIP("Random output")
Member:

Same as above; don't skip everything.

# [-0.3761474, -1.044801 , 1.1870178 ]] # random
>>> import paddle

>>> # doctest: +SKIP("Random output")
Member:

Same as above.

out1 = paddle.normal(shape=[2, 3])
# [[ 0.17501129 0.32364586 1.561118 ] # random
# [-1.7232178 1.1545963 -0.76156676]] # random
>>> # doctest: +SKIP("Random output")
Member:

Same as above.


>>> import paddle

>>> # doctest: +SKIP("Random output")
Member:

Same as above.

# [7] # random
>>> import paddle

>>> # doctest: +SKIP("Random output")
Member:

Same as above.


out1 = paddle.randperm(5)
# [4, 1, 2, 3, 0] # random
>>> # doctest: +SKIP("Random output")
Member:

Same as above.

# [0.4836288 , 0.24573246, 0.7516129 ]] # random
>>> import paddle

>>> # doctest: +SKIP("Random output")
Member:

Same as above.

x.exponential_()
# [[0.80643415, 0.23211166, 0.01169797],
# [0.72520673, 0.45208144, 0.30234432]]
>>> # doctest: +SKIP("Random output")
Member:

Same as above.

@longranger2 (Contributor, author):

done

@SigureMo left a comment (Member)

LGTMeow 🐾

@luotao1 luotao1 merged commit e5ee1da into PaddlePaddle:develop Sep 22, 2023
Frida-a pushed a commit to Frida-a/Paddle that referenced this pull request Oct 14, 2023
jiahy0825 pushed a commit to jiahy0825/Paddle that referenced this pull request Oct 16, 2023
@longranger2 longranger2 deleted the xdoctest7 branch October 29, 2023 03:00
danleifeng pushed a commit to danleifeng/Paddle that referenced this pull request Nov 14, 2023
5 participants