Supplement several interfaces of static Variable to be consistent with dygraph Tensor #33330

Merged: 36 commits, Jun 24, 2021. Changes shown are from 24 commits.

Commits (36)
e684e65  supplet several interface of static Variable to consistent with dygra… (CtfGo, Jun 3, 2021)
28e4836  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 4, 2021)
fcf1296  implement Variable detach with adding share_data op (CtfGo, Jun 8, 2021)
7456c18  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 8, 2021)
32b24d9  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 8, 2021)
6711e56  Merge branch 'variable-keeping-with-ensor' of https://github.com/CtfG… (CtfGo, Jun 8, 2021)
9b16add  add framework.py (CtfGo, Jun 8, 2021)
e73f754  fix ut (CtfGo, Jun 9, 2021)
e4adeed  fix ut (CtfGo, Jun 9, 2021)
2764be8  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 9, 2021)
b2a7a1b  fix ut gramma and add annotation (CtfGo, Jun 10, 2021)
16fdb9a  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 10, 2021)
7227cff  move the unittests of added interface from test_variable to test_math… (CtfGo, Jun 11, 2021)
f2314ee  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 11, 2021)
ec409db  skip pre-commit of framework.py for clean diff (CtfGo, Jun 15, 2021)
48af9a7  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 15, 2021)
6b0f8b3  skip pre-commit of framework.py for clean diff (CtfGo, Jun 15, 2021)
cc37d2d  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 15, 2021)
67837ee  fix ci ut converage (CtfGo, Jun 16, 2021)
1850dfc  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 16, 2021)
13aacdb  Merge branch 'develop' into variable-keeping-with-ensor (CtfGo, Jun 16, 2021)
9f3475d  remove ShareInplaceVersionCounterWith (CtfGo, Jun 18, 2021)
dd05daa  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 18, 2021)
064cb42  Merge branch 'variable-keeping-with-ensor' of https://github.com/CtfG… (CtfGo, Jun 18, 2021)
3b1d04c  update some comments and api usage to 2.0, also add a Variable.detach… (CtfGo, Jun 21, 2021)
58a9374  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 21, 2021)
45c0ac4  fix ut (CtfGo, Jun 22, 2021)
217459f  revert format of vim auto-change (CtfGo, Jun 22, 2021)
9e75f25  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 22, 2021)
58e7fcd  fix code format (CtfGo, Jun 22, 2021)
60c63bf  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 22, 2021)
d88cec9  fix api docstring (CtfGo, Jun 23, 2021)
85d4e88  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 23, 2021)
bdbeb60  fix docstring format (CtfGo, Jun 23, 2021)
2493b05  Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (CtfGo, Jun 23, 2021)
7c1d012  fix test_share_data_op (CtfGo, Jun 23, 2021)
75 changes: 75 additions & 0 deletions paddle/fluid/operators/share_data_op.cc
@@ -0,0 +1,75 @@
/* Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/fluid/operators/share_data_op.h"
#include "paddle/fluid/framework/op_registry.h"

namespace paddle {
namespace operators {

class ShareDataOp : public framework::OperatorWithKernel {
public:
using framework::OperatorWithKernel::OperatorWithKernel;

void InferShape(framework::InferShapeContext *ctx) const override {
OP_INOUT_CHECK(ctx->HasInput("Input"), "Input", "Input", "ShareData");
OP_INOUT_CHECK(ctx->HasOutput("Out"), "Output", "Out", "ShareData");
auto in_type = ctx->GetInputsVarType("Input")[0];
auto out_type = ctx->GetOutputsVarType("Out")[0];

PADDLE_ENFORCE_EQ(
in_type == framework::proto::VarType::LOD_TENSOR ||
in_type == framework::proto::VarType::SELECTED_ROWS,
true,
platform::errors::InvalidArgument(
"Type of Variable[Input] must be LoDTensor or SelectedRows!"));
PADDLE_ENFORCE_EQ(
in_type, out_type,
platform::errors::InvalidArgument(
"The type of input (Input) and output (Out) are inconsistent."));

ctx->ShareDim("Input", "Out");
}
};

class ShareDataOpMaker : public framework::OpProtoAndCheckerMaker {
public:
void Make() override {
AddInput("Input", "The input tensor.");
Review comment (Contributor): The input tensor of ShareData operator?
Reply (Contributor Author): Yes, it is used by static.Variable.detach.
AddOutput("Out",
"The returned tensor, which will share data with the input Tensor.");
AddComment(R"DOC(
ShareData Operator.

Return a tensor that shares data with the input tensor,
without making a Tensor copy.
)DOC");
}
};

} // namespace operators
} // namespace paddle

namespace ops = paddle::operators;
REGISTER_OPERATOR(
share_data, ops::ShareDataOp, ops::ShareDataOpMaker,
paddle::framework::EmptyGradOpMaker<paddle::framework::OpDesc>,
paddle::framework::EmptyGradOpMaker<paddle::imperative::OpBase>);
REGISTER_OP_CPU_KERNEL(share_data, ops::ShareDataKernel<bool>,
ops::ShareDataKernel<int>, ops::ShareDataKernel<int8_t>,
ops::ShareDataKernel<uint8_t>,
ops::ShareDataKernel<paddle::platform::float16>,
ops::ShareDataKernel<int64_t>,
ops::ShareDataKernel<float>,
ops::ShareDataKernel<double>);
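For orientation, a minimal sketch of how this operator surfaces from Python, assuming the Paddle 2.x static API (paddle.static.Program, program_guard, static.data); variable names are illustrative. Calling detach() on a static Variable appends a share_data op to the current block, which can be confirmed by listing the program's op types:

import paddle

paddle.enable_static()

main_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog):
    x = paddle.static.data(name='x', shape=[3, 2, 1])
    y = x.detach()  # should append a share_data op to the block

# expect 'share_data' to appear among the op types
print([op.type for op in main_prog.global_block().ops])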
25 changes: 25 additions & 0 deletions paddle/fluid/operators/share_data_op.cu
@@ -0,0 +1,25 @@
/* Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
Review comment (Member): You can change in the next PR: 2019 -> 2021.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/fluid/operators/share_data_op.h"

REGISTER_OP_CUDA_KERNEL(
share_data, paddle::operators::ShareDataKernel<bool>,
paddle::operators::ShareDataKernel<int>,
paddle::operators::ShareDataKernel<int8_t>,
paddle::operators::ShareDataKernel<uint8_t>,
paddle::operators::ShareDataKernel<paddle::platform::float16>,
paddle::operators::ShareDataKernel<int64_t>,
paddle::operators::ShareDataKernel<float>,
paddle::operators::ShareDataKernel<double>);
41 changes: 41 additions & 0 deletions paddle/fluid/operators/share_data_op.h
@@ -0,0 +1,41 @@
/* Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once
#include "paddle/fluid/framework/op_registry.h"

namespace paddle {
namespace operators {

template <typename T>
class ShareDataKernel : public framework::OpKernel<T> {
public:
void Compute(const framework::ExecutionContext &ctx) const override {
auto *in_var = ctx.InputVar("Input");
auto *out_var = ctx.OutputVar("Out");
if (in_var->IsType<framework::LoDTensor>()) {
const auto &origin_tensor = in_var->Get<framework::LoDTensor>();
auto *detach_tensor = out_var->GetMutable<framework::LoDTensor>();
detach_tensor->ShareDataWith(origin_tensor);
} else {
const auto &origin_selected_rows = in_var->Get<framework::SelectedRows>();
auto *detach_selected_rows =
out_var->GetMutable<framework::SelectedRows>();
detach_selected_rows->mutable_value()->ShareDataWith(
origin_selected_rows.value());
}
}
};
} // namespace operators
} // namespace paddle
71 changes: 55 additions & 16 deletions python/paddle/fluid/framework.py
@@ -947,35 +947,45 @@ def __init__(self,
self._stop_gradient = stop_gradient
self.is_data = is_data

def detach(self):
"""
Returns a new Variable, detached from the current graph.
It will share data with origin Variable and always doesn't have a Tensor copy.
Review comment (Member): "and always doesn't have a Tensor copy" -> "without tensor copy"

In addition, the detached Variable doesn't provide gradient propagation.

Returns:
( :ref:`api_guide_Variable_en` | dtype is same as current Variable): The detached Variable.


Examples:
.. code-block:: python
:name: code-example1

import paddle
paddle.enable_static()

# create a static Variable
x = paddle.static.data(name='x', shape=[3, 2, 1])

# create a detached Variable
y = x.detach()

"""
assert self.type == core.VarDesc.VarType.SELECTED_ROWS or \
self.type == core.VarDesc.VarType.LOD_TENSOR, \
"only support a variable with SELECTED_ROWS or LOD_TENSOR to be detached"

output = self.block.create_var(
name=unique_name.generate_with_ignorable_key("detach_" + self.name),
dtype=self.dtype,
type=self.type,
persistable=self.persistable,
stop_gradient=True)

self.block.append_op(
type='share_data',
inputs={'Input': [self]},
outputs={'Out': [output]})
return output
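A quick end-to-end check of the new behavior, as a sketch assuming the Paddle 2.x static Executor API (paddle.static.Executor, default_main_program); shapes and names are illustrative. The detached Variable should fetch the same values as its source and is created with stop_gradient=True:

import numpy as np
import paddle

paddle.enable_static()

x = paddle.static.data(name='x', shape=[3, 2, 1])
y = x.detach()
assert y.stop_gradient  # detach() always marks its output stop_gradient

exe = paddle.static.Executor(paddle.CPUPlace())
x_np = np.random.rand(3, 2, 1).astype('float32')
y_np, = exe.run(paddle.static.default_main_program(),
                feed={'x': x_np},
                fetch_list=[y])
assert np.array_equal(x_np, y_np)  # same values; the share_data kernel avoids a copy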

@fake_interface_only
def numpy(self):
@@ -1810,6 +1820,35 @@ def set_value(self, value, scope=None):

t.set(value, place)

def size(self):
"""
Returns the number of elements for the current Variable, as an int64 Variable with shape [1]

Returns:
Variable: the number of elements for current Variable

Examples:
.. code-block:: python
:name: code-example1
Review comment (Contributor): Is there a documentation preview of the result? Please attach a screenshot of the rendered result to the PR.
Reply (Contributor Author): done


import paddle
paddle.enable_static()

# create a static Variable
x = paddle.static.data(name='x', shape=[3, 2, 1])

# get the number of elements of the Variable
y = x.size()

"""
output = self.block.create_var(
name=unique_name.generate_with_ignorable_key(self.name + "_size"),
dtype=core.VarDesc.VarType.INT64)

self.block.append_op(
type='size', inputs={'Input': [self]}, outputs={'Out': [output]})
return output
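Similarly, a hedged sketch of executing the new size() interface, under the same 2.x static Executor assumptions as above: for a [3, 2, 1] input it should fetch 6 as an int64 value.

import numpy as np
import paddle

paddle.enable_static()

x = paddle.static.data(name='x', shape=[3, 2, 1])
n = x.size()  # int64 Variable holding the element count

exe = paddle.static.Executor(paddle.CPUPlace())
n_np, = exe.run(paddle.static.default_main_program(),
                feed={'x': np.zeros([3, 2, 1], dtype='float32')},
                fetch_list=[n])
print(int(n_np))  # expected: 6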


def get_all_op_protos():
"""
36 changes: 32 additions & 4 deletions python/paddle/fluid/layers/math_op_patch.py
@@ -45,6 +45,7 @@
"__rpow__": "A **= B",
"__floordiv__": "A //B",
"__mod__": "A % B",
"__matmul__": "A @ B",
"__eq__": "A == B",
"__ne__": "A != B",
"__lt__": "A < B",
@@ -195,6 +196,28 @@ def _scalar_op_(var, scale, bias):
def _neg_(var):
return _scalar_op_(var, -1.0, 0.0)

@property
def _ndim_(self):
"""
Returns the number of dimensions of the current Variable

Returns:
the number of dimensions

Examples:
.. code-block:: python

import paddle

paddle.enable_static()

# create a static Variable
x = paddle.static.data(name='x', shape=[3, 2, 1])
# print the dimension of the Variable
print(x.ndim)
"""
return len(self.shape)

def _scalar_add_(var, value):
return _scalar_op_(var, 1.0, value)

@@ -228,17 +251,17 @@ def __impl__(self, other_var):
other_var = float(other_var)
# division is a special case
# NOTE(chenweihang): because we cast tensor to float32 instead of float64,
# the division result can only guarantee the numerical accuracy of 6 digits
# after the decimal point. The result of numpy calculation is of float64 type,
# so the calculation result here and the calculation result of numpy are
# different after 6 decimal point. If necessary, we can also use float64 here.
# torch's behavior here is consistent with ours
if op_type == 'elementwise_div' and self.dtype in _supported_int_dtype_:
self = astype(self, 'float32')
# here use `scale` replace `elementwise` to get better performance
# but only +, -, * can use this method
# NOTE(chentianyu03): / can not use `scale` method, because the result of
# `scale` method (self*(1/other_var)) does not exactly equal the result
# of `elementwise_div` method.
if scalar_method is not None:
return scalar_method(self, other_var)
@@ -321,6 +344,9 @@ def __impl__(self, other_var):
# b=-a
('__neg__', _neg_),
('astype', astype),
('dim', lambda x: len(x.shape)),
('ndimension', lambda x: len(x.shape)),
('ndim', _ndim_),
('__add__', _binary_creator_('__add__', 'elementwise_add', False,
_scalar_add_)),
# a+b == b+a. Do not need to reverse explicitly
@@ -347,6 +373,8 @@ def __impl__(self, other_var):
'elementwise_floordiv', False, None)),
('__mod__', _binary_creator_('__mod__', 'elementwise_mod', False,
None)),
('__matmul__', _binary_creator_('__matmul__', "matmul_v2", False,
None)),
# for logical compare
('__eq__', _binary_creator_('__eq__', 'equal', False, None)),
('__ne__', _binary_creator_('__ne__', 'not_equal', False, None)),
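Putting the new math_op_patch entries together, a sketch using the 2.0 API (as a reviewer suggests below for new code); shapes and names are illustrative, and `@` lowers to matmul_v2, so the inner dimensions must match:

import numpy as np
import paddle

paddle.enable_static()

a = paddle.static.data(name='a', shape=[2, 3], dtype='float32')
b = paddle.static.data(name='b', shape=[3, 5], dtype='float32')
c = a @ b  # __matmul__ -> matmul_v2

print(a.ndim, a.dim(), a.ndimension())  # each reports 2

exe = paddle.static.Executor(paddle.CPUPlace())
a_np = np.random.rand(2, 3).astype('float32')
b_np = np.random.rand(3, 5).astype('float32')
c_np, = exe.run(paddle.static.default_main_program(),
                feed={'a': a_np, 'b': b_np},
                fetch_list=[c])
assert np.allclose(a_np @ b_np, c_np, atol=1e-6)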
6 changes: 0 additions & 6 deletions python/paddle/fluid/tests/unittests/test_detach.py
@@ -149,12 +149,6 @@ def test_NoDetachSingle_DetachMulti(self):
array_detach_multi = self.detach_multi()
assert np.array_equal(array_no_detach_single, array_detach_multi)

def test_detach_exception(self):
x = fluid.layers.data(name="a", shape=[3, 4], dtype='float32')
y = fluid.layers.fc(input=x, size=10, bias_attr=True)
with self.assertRaises(AssertionError):
y_detach = y.detach()


class TestInplace(unittest.TestCase):
def test_forward_version(self):
23 changes: 22 additions & 1 deletion python/paddle/fluid/tests/unittests/test_math_op_patch.py
@@ -271,7 +271,6 @@ def test_astype(self):
fetch_list=[b])
self.assertTrue(numpy.allclose(a_np.astype('float32'), b_np))

@prog_scope()
def test_bitwise_and(self):
x_np = np.random.randint(-100, 100, [2, 3, 5]).astype("int32")
y_np = np.random.randint(-100, 100, [2, 3, 5]).astype("int32")
@@ -336,6 +335,28 @@ def test_bitwise_not(self):
fetch_list=[z])
self.assertTrue(np.array_equal(out[0], out_np))

@prog_scope()
def test_ndim(self):
a = paddle.static.data(name="a", shape=[10, 1])
self.assertEqual(a.dim(), 2)
self.assertEqual(a.ndimension(), 2)
self.assertEqual(a.ndim, 2)

@prog_scope()
def test_matmul(self):
a = fluid.layers.data(name='a', shape=[2, 3], dtype='float32')
Review comment (Member): You can change in the next PR: use the 2.0 API instead of the fluid API for new code.
b = fluid.layers.data(name='b', shape=[3, 5], dtype='float32')
c = a @ b  # __matmul__
a_np = numpy.random.uniform(-1, 1, size=[2, 3]).astype('float32')
b_np = numpy.random.uniform(-1, 1, size=[3, 5]).astype('float32')
place = fluid.CPUPlace()
exe = fluid.Executor(place)
c_np = exe.run(fluid.default_main_program(),
feed={"a": a_np,
"b": b_np},
fetch_list=[c])
self.assertTrue(numpy.allclose(a_np @ b_np, c_np))


if __name__ == '__main__':
unittest.main()