NXP backend: added aten.slice support #15889
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15889
Note: Links to docs will display an error until the docs builds have been completed.
❌ 5 New Failures, 2 Unrelated Failures
As of commit 2f4c3af with merge base ca4e363:
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "release notes: nxp"
import numpy as np
from backends.nxp.backend.edge_helper import input_tensor
from backends.nxp.backend.ir.converter.conversion.common import OpsList
Please don't use relative imports.
The tests pass because Python finds the backends directory in our development environment, but it won't work in deployment as there is no backends top-level module in the installed Python package.
So, replace all from backends... with from executorch.backends....
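A minimal before/after illustration of the fix, using the import from this diff:

```python
# Resolves only in the development environment, where the repository
# root happens to be on sys.path:
from backends.nxp.backend.edge_helper import input_tensor

# Resolves both in development and in the installed package:
from executorch.backends.nxp.backend.edge_helper import input_tensor
```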
    parameters_mapping: dict[str, Parameter],
    custom_delegation_options: CustomDelegationOptions,
) -> bool:
    if node_uses_shape_broadcasting(node):
Shape broadcasting is what allows us to perform element-wise operations on two tensors that don't necessarily have the same shape. For example, if you have two tensors that both have shape (2, 3), you can trivially add (or multiply, subtract, divide, ...) their elements together. E.g.:
[[1, 2, 3],
[4, 5, 6]]
+
[[0, -1, 1],
[2, -2, 3]]
=
[[1, 1, 4],
[6, 3, 9]]
If your tensors have different shapes, for example (2, 3) and (5, 4), the element-wise operations are not defined (how would you add them together?). But for some combinations of shapes, they are defined. Going from the right, the dimensions must either match, or one of them must be 1, and the ranks don't even need to be the same. For example with shapes (2, 3) and (1):
[[1, 2, 3],
[4, 5, 6]]
+
[-1]
=
[[0, 1, 2],
[3, 4, 5]]
or for shapes (2, 3) and (3):
[[1, 2, 3],
[4, 5, 6]]
+
[-1, 0, 1]
=
[[0, 2, 4],
[3, 5, 7]]
Other combinations that work are for example:
(2, 4, 6, 8) and (4, 1, 8)
(42, 1) and (2, 4, 1, 8)
...
The fact that the tensors can have different ranks (numbers of dimensions) causes some issues when we are handling the conversion from ExecuTorch's channels_first format to NeutronIR's channels_last format. Solving these issues may require the insertion of Transpose operators, hence the check on line 33.
But this is only the case for operators that support shape broadcasting (Add, Sub, Mul, Div, Pow, ...). Slice is not one of these element-wise operations, therefore this check does not make sense here, and it should be removed.
Sorry for the long comment, but I believe it is important to understand these issues going forward.
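For reference, here is a small NumPy sketch of the examples above (NumPy applies the same broadcasting rules described here, so it can be used to sanity-check shape combinations):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])           # shape (2, 3)

# Same shape: plain element-wise addition.
b = np.array([[0, -1, 1],
              [2, -2, 3]])          # shape (2, 3)
print(a + b)                        # [[1 1 4] [6 3 9]]

# Shape (1): the single element is broadcast to every position.
print(a + np.array([-1]))           # [[0 1 2] [3 4 5]]

# Shape (3): the row is broadcast across each row of `a`.
print(a + np.array([-1, 0, 1]))     # [[0 2 4] [3 5 7]]

# Incompatible shapes, e.g. (2, 3) and (5, 4), are rejected.
try:
    a + np.zeros((5, 4))
except ValueError as e:
    print(e)  # operands could not be broadcast together ...
```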
    input_shape = input_tensor(node, 0).shape
    dim = node.args[1]

    # The rank of the dimension that we want to slice must be divisible by num_macs
What does "the rank of the dimension" mean?
    self.builder.append_operators(ops.flatten())
Dim = Start = End = int
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Nice 👍🏻
Dim = Start = End = int

    @staticmethod
    def _get_clipped_slice_args(node: Node) -> (Dim, Start, End):
This type hint is not valid in Python. Use the following instead:
def _get_clipped_slice_args(node: Node) -> tuple[Dim, Start, End]:

        pytest.param((24, 32), (0, -1), (0, 16), (24, 32), id="2D, negative dim arg"),
    ],
)
def test_slice_tensor_quant_conversion(mocker, x_input_shape, dims, starts, ends):
I don't necessarily mind the

    if neutron_converter_flavor != "SDK_25_12":
        pytest.skip("Neutron Software must be version 2.2.1 or higher.")

but the test needs to be much more strict to ensure we either detect a failure after the Neutron update, or that the passing test correctly indicates expected (and correct) behavior.
Right now, you are not checking the whole exported program at all:

    _ = to_quantized_edge_program(model, x_input_shape).exported_program()

Replace it with something like

    full_exported_program = to_quantized_edge_program(model, x_input_shape).exported_program()

and make sure the slice operators are indeed delegated. Because right now, you are literally only checking that the last delegated partition produces the same output in ExecuTorch and in NeutronIR. So it is possible that the slice(s) will not be delegated, but some other operator will. converter_spy.call_args.args[1] might then only return this unrelated operator, and you will not notice that you are not testing the slice at all.
This applies to all tests in general. I see that the SliceTensorModule only contains slice operators, but you never know what auxiliary ops are added during the export to aten dialect.
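To make the point concrete, here is a sketch of the kind of assertion meant here. It assumes the existing `model` and `x_input_shape` from the test, and the node-name checks ("slice_copy", "lowered_module") are illustrative and may need adjusting to the actual graph:

```python
full_exported_program = to_quantized_edge_program(
    model, x_input_shape
).exported_program()

nodes = list(full_exported_program.graph.nodes)

# If the slices were delegated, no aten.slice_copy call should remain
# in the top-level graph.
assert not any(
    node.op == "call_function" and "slice_copy" in str(node.target)
    for node in nodes
), "Found a non-delegated slice operator."

# And at least one lowered (delegated) submodule should be present.
assert any(
    "lowered_module" in node.name for node in nodes
), "No delegated partition found."
```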
        )
    ],
)
def test_slice_tensor_w_conv_quant_conversion(
Same as for the test above.
Since this one uses convolution, it is even more necessary to properly check that the slice was delegated. If it wasn't, your test would only capture the Conv, successfully compare its ExecuTorch and NeutronIR output, and from the passing test we would happily assume the slice delegation works.
Edit:
I ran the test, and this is indeed what's already happening. Only the convolution is being delegated. Please have a look at the test.
import pytest
import torch

from backends.nxp.tests.models import SliceTensorModule
Please fix the import (see other comment).
        self.starts = starts
        self.ends = ends

    def do_slice(self, x):
Minor:
Why define the do_slice method again instead of using the whole SliceTensorModule?
    exir_ops.edge.aten.mm.default: MMConverter,  # noqa F405
    exir_ops.edge.aten.permute_copy.default: PermuteCopyConverter,  # noqa F405
    exir_ops.edge.aten.relu.default: ReLUConverter,  # noqa F405
    exir_ops.edge.aten.slice_copy.Tensor: SliceTensorConverter,  # noqa F405
Minor:
Please keep alphabetical order. (I know it was already broken by Sigmoid, but you can fix both ;) )
Summary
Adds support for the aten.slice operator.
Test plan
Tests can be run manually using:
pytest -c /dev/null backends/nxp/tests/