[Frontend][PaddlePaddle] Add autopad for conv/pool #9295

Merged (16 commits) on Oct 22, 2021

Conversation

jiangjiajun (Contributor) commented on Oct 15, 2021:

This will solve the problem with dynamic input shapes when padding == SAME in conv2d/pool2d.

It also includes pad3d and squeeze; these two operators are used when padding the tensor.
The _autopad function is adapted from the ONNX frontend. @mbrookhart
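
For reference, the SAME ("upper") padding arithmetic that the autopad path implements can be checked with a small standalone sketch; the concrete sizes below are illustrative only and are not taken from the PR:

# Illustrative sketch: SAME_UPPER padding for one spatial dimension.
# Mirrors the autopad logic quoted later in this review thread.
in_size, kernel, stride, dilation = 7, 3, 2, 1                       # example values (assumed)
dilated_kernel = (kernel - 1) * dilation + 1                         # 3
mod = in_size % stride                                               # 1
total_pad = max(dilated_kernel - (stride if mod == 0 else mod), 0)   # 2
pad_before, pad_after = total_pad // 2, total_pad - total_pad // 2   # (1, 1)
# Output size becomes ceil(in_size / stride) = 4, as expected for SAME padding.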


jiangjiajun (Contributor, Author) commented:

@areusch @comaniac @AndrewZhaoLuo @mbrookhart Hi, could you help review this pull request? All the tests have passed.

AndrewZhaoLuo (Contributor) commented:

I'll take a look today.

AndrewZhaoLuo (Contributor) left a review comment:

Some comments; overall LGTM, though I may need to read the spec a bit more closely.

@@ -43,18 +44,56 @@
__all__ = ["from_paddle"]


def _get_pad_size(in_size, dilated_kernel_size, stride_size):
"""Calculate the paddings size for Conv/Pool in SAME padding mode."""
def _autopad(
AndrewZhaoLuo (Contributor) commented on this hunk:

Can you just use the following?

def autopad(
    data,
    strides,
    kernel_shape,
    dilations,
    ndim,
    pad_type="constant",
    deconv=False,
    mode="SAME_UPPER",
    pad_value=0.0,
):
    """
    Perform autopadding with dynamic input shapes
    """
    # get attributes as constants
    strides = _op.const(np.array(strides), dtype="int64")
    dilated_kernel_shape = _op.const(
        np.array(
            [(kernel - 1) * dilation + 1 for kernel, dilation in zip(kernel_shape, dilations)]
        ),
        dtype="int64",
    )
    # get input shape
    shape = _op.strided_slice(shape_of(data, dtype="int64"), [2], [ndim])
    # set up integer constants
    zero = _op.const(0, dtype="int64")
    one = _op.const(1, dtype="int64")
    two = _op.const(2, dtype="int64")
    # Calculate total padding
    mod = _op.mod(shape, strides)
    left = _op.maximum(dilated_kernel_shape - strides, zero)
    right = _op.maximum(dilated_kernel_shape - mod, zero)
    total_pad = _op.where(_op.equal(mod, zero), left, right)
    if deconv:
        total_pad = _op.const(np.array(kernel_shape), dtype="int64") - one - total_pad
    # split total padding into before and after
    pad_before = _op.floor_divide(total_pad, two)
    pad_after = total_pad - pad_before
    # combine
    if "LOWER" in mode:
        pad = _op.concatenate(
            [_op.reshape(pad_after, [-1, 1]), _op.reshape(pad_before, [-1, 1])], axis=1
        )
    else:
        pad = _op.concatenate(
            [_op.reshape(pad_before, [-1, 1]), _op.reshape(pad_after, [-1, 1])], axis=1
        )
    # pad N and C with zeros
    pad = _op.concatenate(
        [_op.const(np.zeros([2, 2], dtype="int64"), dtype="int64"), pad], axis=0
    )
    if isinstance(pad_value, (float, int)):
        pad_value = _op.const(pad_value)
    return _op.nn.pad(data, fold_constant(pad), pad_value, pad_type)

  1. Refactor _autopad from onnx.py into tvm/python/tvm/relay/frontend/common.py

jiangjiajun (Contributor, Author) replied:

Good advice. I think this function will also work for the TensorFlow and TFLite frontends to solve the dynamic shape problem.

shape_of and autopad have both been moved to common.py.
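
A minimal usage sketch of the shared helper after the move; the import path, argument names, and surrounding converter fragment are assumed from the snippet quoted above, not verified against the final common.py API:

# Hypothetical converter fragment (names assumed for illustration).
from tvm.relay.frontend.common import autopad  # assumed import path after the refactor

# _op refers to tvm.relay.op, as in the frontend modules.
# Pad the 4-D NCHW input explicitly, then run the convolution with zero padding.
data = autopad(data, strides, kernel_shape, dilations, ndim=4, mode="SAME_UPPER")
out = _op.nn.conv2d(data, weight, strides=strides, padding=[0, 0], dilation=dilations)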

pad_h = _get_pad_size(in_h, (k_h - 1) * dilations[0] + 1, strides[0])
pad_w = _get_pad_size(in_w, (k_w - 1) * dilations[1] + 1, strides[1])
paddings = [pad_h[0], pad_w[0], pad_h[1], pad_w[1]]
dilations = [1, 1]
AndrewZhaoLuo (Contributor) commented on this hunk:

do we mean to override the dilations?

jiangjiajun (Contributor, Author) replied on Oct 22, 2021:

This is a historical issue in the Paddle framework: when padding == SAME, it forces dilations = 1.
Here is the implementation code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/conv_op.h#L113
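
A hedged sketch of what the override and the explanatory comment might look like in the PaddlePaddle converter; the attribute names and surrounding structure are assumptions for illustration, not the exact code in this PR:

# Hypothetical fragment of the conv2d converter (names assumed).
if padding_algorithm == "SAME":
    # PaddlePaddle forces dilations to 1 when padding is SAME
    # (see the conv_op.h link above), so mirror that before autopadding.
    dilations = [1, 1]
    data = autopad(data, strides, kernel_shape, dilations, ndim=4)
    paddings = [0, 0]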

jiangjiajun (Contributor, Author) added:

To avoid confusion, I added a comment on this line of code to explain the problem.

heliqi and others added 3 commits October 22, 2021 17:24
AndrewZhaoLuo (Contributor) left a review comment:

LGTM

masahi merged commit 4fb6fa5 into apache:main on Oct 22, 2021
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 7, 2022
* Add autopad for conv/pool

* add autopad for conv/pool

* fix pylint warning

* add some annotations

* add som annotations

* add som annotations

* Refactor autopad in the onnx.py and paddlepaddle.py to relay/frontend/common.py

* add comment for conv2d

Co-authored-by: heliqi <[email protected]>
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 13, 2022