
Commit ad1ca3f

mergennachin, jackzhxng, swolchok, GregoryComer, and abhinaykukkadapu authored

[RELEASE-ONLY] Cherry-pick doc only updates (#15083)

```
python scripts/pick_doc_commits.py --main=origin/main --release=origin/release/1.0
```

https://gist.github.com/mergennachin/155d28c84069d7808469ebaeca07aa5f

Cherry-picking:
- 42e8746 Update Voxral README.md (#14324)
- 4d0961e Update Voxtral README.md (#14414)
- 30568d2 Fix outdated lintrunner directions (#14449)
- c98079a Update Voxtral README.md (#14544)
- e608a21 [Backend Tester] Update README (#14739)
- fc512fa Fix typos in docs ahead of GA (#14964)
- 1a8acf6 Update top-level README.md file (#15049)
- 9560800 Fix documentation link for Core ATen operators (#15050)
- b9451c9 Use new logo in ExecuTorch (#14782)
- 481c9cf Fix various minor links in top-level README.md (#15052)

Skipping:
- d382f6b mypy fix (#15080) (not a doc)
- 41b061e NXP backend: Update user guide and docs Readme (#14852) (merge conflict)

---

Co-authored-by: Jack <[email protected]>
Co-authored-by: Scott Wolchok <[email protected]>
Co-authored-by: Gregory Comer <[email protected]>
Co-authored-by: Abhinayk <[email protected]>
1 parent 2897bde commit ad1ca3f
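The `pick_doc_commits.py` invocation in the commit message selects main-branch commits that are doc-only and missing from the release branch. The real script's logic is not reproduced here; as a rough illustration only, a path-based classifier along these lines could decide whether a commit touches nothing but documentation (the helper names and the suffix/directory rules below are assumptions, not the script's actual behavior):

```python
# Hypothetical sketch of a doc-only commit filter. The real
# scripts/pick_doc_commits.py logic is not reproduced here.

DOC_SUFFIXES = (".md", ".rst")  # assumed rule: markup files count as docs
DOC_DIRS = ("docs/",)           # assumed rule: anything under docs/ counts

def is_doc_path(path: str) -> bool:
    """Return True if a changed path looks documentation-only."""
    return path.endswith(DOC_SUFFIXES) or path.startswith(DOC_DIRS)

def is_doc_only(changed_paths: list[str]) -> bool:
    """A commit qualifies for cherry-pick only if every changed file is a doc."""
    return bool(changed_paths) and all(is_doc_path(p) for p in changed_paths)

# A doc-only commit (eligible) vs. a mixed commit (skipped).
print(is_doc_only(["README.md", "docs/source/index.rst"]))      # True
print(is_doc_only(["README.md", "runtime/core/tensor.cpp"]))    # False
```

Under this assumed rule, a commit like the skipped `mypy fix` above would fail the check, consistent with its "(not a doc)" annotation.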

File tree

9 files changed: +495 −139 lines changed


CONTRIBUTING.md

Lines changed: 1 addition & 2 deletions
````diff
@@ -199,8 +199,7 @@ We use [`lintrunner`](https://pypi.org/project/lintrunner/) to help make sure the
 code follows our standards. Set it up with:
 
 ```
-pip install lintrunner==0.12.7
-pip install lintrunner-adapters==0.12.4
+./install_requirements.sh # (automatically run by install_executorch.sh)
 lintrunner init
 ```
````

README.md

Lines changed: 228 additions & 50 deletions
Large diffs are not rendered by default.

backends/test/suite/README.md

Lines changed: 57 additions & 23 deletions
````diff
@@ -5,49 +5,83 @@ This directory contains tests that validate correctness and coverage of backends
 These tests are intended to ensure that backends are robust and provide a smooth, "out-of-box" experience for users across the full span of input patterns. They are not intended to be a replacement for backend-specific tests, as they do not attempt to validate performance or that backends delegate operators that they expect to.
 
 ## Running Tests and Interpreting Output
-Tests can be run from the command line, either using the runner.py entry point or the standard Python unittest runner. When running through runner.py, the test runner will report test statistics, including the number of tests with each result type.
+Tests can be run from the command line using pytest. When generating a JSON test report, the runner will report detailed test statistics, including output accuracy, delegated nodes, lowering timing, and more.
 
-Backends can be specified with the `ET_TEST_ENABLED_BACKENDS` environment variable. By default, all available backends are enabled. Note that backends such as Core ML or Vulkan may require specific hardware or software to be available. See the documentation for each backend for information on requirements.
+Each backend and test flow (recipe) registers a pytest [marker](https://docs.pytest.org/en/stable/example/markers.html) that can be passed to pytest with the `-m marker` argument to filter execution.
 
-Example:
+To run all XNNPACK backend operator tests:
 ```
-ET_TEST_ENABLED_BACKENDS=xnnpack python -m executorch.backends.test.suite.runner
+pytest -c /dev/nul backends/test/suite/operators/ -m backend_xnnpack -n auto
 ```
 
+To run all model tests for the CoreML static int8 lowering flow:
+```
+pytest -c /dev/nul backends/test/suite/models/ -m flow_coreml_static_int8 -n auto
 ```
-2465 Passed / 2494
-16 Failed
-13 Skipped
 
-[Success]
-736 Delegated
-1729 Undelegated
+To run a specific test:
+```
+pytest -c /dev/nul backends/test/suite/ -k "test_prelu_f32_custom_init[xnnpack]"
+```
 
-[Failure]
-5 Lowering Fail
-3 PTE Run Fail
-8 Output Mismatch Fail
+To generate a JSON report:
+```
+pytest -c /dev/nul backends/test/suite/operators/ -n auto --json-report --json-report-file="test_report.json"
 ```
 
-Outcomes can be interpreted as follows:
-* Success (delegated): The test passed and at least one op was delegated by the backend.
-* Success (undelegated): The test passed with no ops delegated by the backend. This is a pass, as the partitioner works as intended.
-* Skipped: test fails in eager or export (indicative of a test or dynamo issue).
-* Lowering fail: The test fails in to_edge_transform_and_lower.
-* PTE run failure: The test errors out when loading or running the method.
-* Output mismatch failure: Output delta (vs eager) exceeds the configured tolerance.
+See [pytest-json-report](https://pypi.org/project/pytest-json-report/) for information on the report format. The test logic in this repository attaches additional metadata to each test entry under the `metadata`/`subtests` keys. One entry is created for each call to `test_runner.lower_and_run_model`.
+
+Here is an excerpt from a test run, showing a successful run of the `test_add_f32_bcast_first[xnnpack]` test.
+```json
+"tests": [
+  {
+    "nodeid": "operators/test_add.py::test_add_f32_bcast_first[xnnpack]",
+    "lineno": 38,
+    "outcome": "passed",
+    "keywords": [
+      "test_add_f32_bcast_first[xnnpack]",
+      "flow_xnnpack",
+      "backend_xnnpack",
+      ...
+    ],
+    "metadata": {
+      "subtests": [
+        {
+          "Test ID": "test_add_f32_bcast_first[xnnpack]",
+          "Test Case": "test_add_f32_bcast_first",
+          "Subtest": 0,
+          "Flow": "xnnpack",
+          "Result": "Pass",
+          "Result Detail": "",
+          "Error": "",
+          "Delegated": "True",
+          "Quantize Time (s)": null,
+          "Lower Time (s)": "2.881",
+          "Output 0 Error Max": "0.000",
+          "Output 0 Error MAE": "0.000",
+          "Output 0 SNR": "inf",
+          "Delegated Nodes": 1,
+          "Undelegated Nodes": 0,
+          "Delegated Ops": {
+            "aten::add.Tensor": 1
+          },
+          "PTE Size (Kb)": "1.600"
+        }
+      ]
+    }
+```
 
 ## Backend Registration
 
 To plug into the test framework, each backend should provide an implementation of the Tester class, defined in backends/test/harness/tester.py. Backends can provide implementations of each stage, or use the default implementation, as appropriate.
 
 At a minimum, the backend will likely need to provide a custom implementation of the Partition and ToEdgeTransformAndLower stages using the appropriate backend partitioner. See backends/xnnpack/test/tester/tester.py for an example implementation.
 
-Once a tester is available, the backend flow(s) can be added in __init__.py in this directory by adding an entry to `ALL_TESTER_FLOWS`. Each flow entry consists of a name (used in the test case naming) and a function to instantiate a tester for a given model and input tuple.
+Once a tester is available, the backend flow(s) can be added under flows/ and registered in flow.py. It is intended that this will be unified with the lowering recipes under executorch/export in the near future.
 
 ## Test Cases
 
-Operator test cases are defined under the operators/ directory. Tests are written in a backend-independent manner, and each test is programmatically expanded to generate a variant for each registered backend flow. The `@operator_test` decorator is applied to each test class to trigger this behavior. Tests can also be tagged with an appropriate type specifier, such as `@dtype_test`, to generate variants for each dtype. The decorators and "magic" live in __init__.py in this directory.
+Operator test cases are defined under the operators/ directory. Model tests are under models/. Tests are written in a backend-independent manner, and each test is programmatically expanded to generate a variant for each registered backend flow by use of the `test_runner` fixture parameter. Tests can additionally be parameterized using standard pytest decorators. Parameterizing over dtype is a common use case.
 
 ## Evolution of this Test Suite
````
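The per-test entries in a generated JSON report can be post-processed with a short script. The following is a hedged sketch, not part of the test suite: it assumes only the `tests`, `outcome`, and `metadata`/`subtests` fields visible in the README excerpt above.

```python
# Hypothetical helper for summarizing a pytest-json-report file; not part
# of the ExecuTorch test suite. Only fields visible in the excerpt above
# ("tests", "outcome", "metadata"/"subtests") are assumed.

def summarize(report: dict) -> dict:
    """Tally test outcomes and total delegated nodes across all subtests."""
    outcomes: dict[str, int] = {}
    delegated_nodes = 0
    for test in report.get("tests", []):
        outcomes[test["outcome"]] = outcomes.get(test["outcome"], 0) + 1
        for sub in test.get("metadata", {}).get("subtests", []):
            delegated_nodes += sub.get("Delegated Nodes", 0)
    return {"outcomes": outcomes, "delegated_nodes": delegated_nodes}

# Minimal inline example mirroring the excerpt; a real run would load the
# file instead, e.g. json.load(open("test_report.json")).
example = {
    "tests": [
        {
            "nodeid": "operators/test_add.py::test_add_f32_bcast_first[xnnpack]",
            "outcome": "passed",
            "metadata": {"subtests": [{"Result": "Pass", "Delegated Nodes": 1}]},
        }
    ]
}
print(summarize(example))  # {'outcomes': {'passed': 1}, 'delegated_nodes': 1}
```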

docs/source/_static/img/ExecuTorch-Logo-cropped.svg

Lines changed: 0 additions & 57 deletions
This file was deleted.

0 commit comments
