Commit 42a6b71

Add op support table

1 parent 0c47d79 commit 42a6b71

24 files changed: +399 −146 lines changed

.gitignore

Lines changed: 0 additions & 1 deletion

@@ -62,7 +62,6 @@ xcuserdata/
 /include/
 /share/
 /version.py
-*.csv
 *_etdump

 # Android

backends/xnnpack/README.md

Lines changed: 1 addition & 1 deletion

@@ -134,4 +134,4 @@ create an issue on [github](https://github.com/pytorch/executorch/issues).
 ## See Also
 For more information about the XNNPACK Backend, please check out the following resources:
 - [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
-- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference)
+- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backends/xnnpack/backend-delegates-xnnpack-reference)
Lines changed: 46 additions & 0 deletions

# Backend Documentation Template

This template provides a standardized structure and starting point for backend documentation. It is intended to give users a uniform experience while allowing backends to customize their documentation as needed.

## Template Structure

The template includes the following files:

### Required Pages

- `backend-overview.md` - Main backend overview and introduction

### Recommended Pages

- `backend-quantization.md` - Quantization support and API documentation
- `backend-partitioner.md` - Partitioner API reference
- `backend-op-support.rst` - Operator support documentation (RST format)
- `op-support.csv` - Operator support data in CSV format

### Optional Pages (and Subsections)

- `backend-troubleshooting.md` - Common issues and troubleshooting guide
- `backend-arch-internals.md` - Architecture and internals documentation
- `tutorials/backend-tutorials.md` - Tutorial sub-section
  - Use this sub-section to provide tutorials for your backend. Tutorials should present a use case in an end-to-end manner.
- `tutorials/backend-guides.md` - Guides sub-section
  - Use this sub-section to provide guides or how-tos for backend-specific use cases or functionality, such as static attention or device-specific memory management. These are intended to be used as a reference.

## Using the Template

To use this template for a new backend:

1. Copy the entire `template` directory contents to your backend's documentation directory.
2. Rename the files to match your backend name (e.g., `backend-overview.md` → `mybackend-overview.md`).
3. Populate the content for your backend.
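The copy-and-rename steps above can be sketched in shell. This is illustrative only: the backend name `mybackend` is hypothetical, and a mock directory stands in for the real template location.

```shell
set -e

# Mock stand-in for the template directory shipped with the docs
src=$(mktemp -d)
mkdir -p "$src/tutorials"
touch "$src/backend-overview.md" "$src/backend-quantization.md" "$src/op-support.csv"

# Step 1: copy the template contents into the new backend's doc directory
dst=$(mktemp -d)/mybackend
mkdir -p "$dst"
cp -r "$src"/. "$dst"

# Step 2: rename backend-*.md files to match the backend name
for f in "$dst"/backend-*.md; do
  mv "$f" "$dst/my$(basename "$f")"
done

ls "$dst"
```

Step 3 (populating the content) is then done by editing the renamed files in place.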

### Additional Customization

You may need to:
- Add backend-specific sections to any file
- Remove sections that don't apply to your backend
- Update the operator support CSV with your backend's supported operators
- Add backend-specific images or diagrams
- Update cross-references and links

Try to keep the landing page (`backend-overview.md`) simple and straightforward. Use the child pages and sections to provide more detailed information.
Lines changed: 8 additions & 0 deletions

# {BACKEND_NAME} Architecture and Internals

This page covers internal implementation details of the backend and is mainly aimed at contributors and power users. This is an optional page for each backend and has no set structure.

Some topics to consider:
* High-level design of the backend
* Details on the lowering flow
* Internal debugging tools and techniques
Lines changed: 13 additions & 0 deletions

================
Operator Support
================

This page lists the operators supported by the {BACKEND_NAME} backend. Operators are the building blocks of the ML model. See `IRs <https://docs.pytorch.org/docs/stable/torch.compiler_ir.html>`_ for more information on the PyTorch operator set.

{OPERATOR_SUPPORT_NOTES}

.. csv-table:: Operator Support
   :file: op-support.csv
   :header-rows: 1
   :widths: 20 15 30 30
   :align: center
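As an illustration of the format the `csv-table` directive above expects, a hypothetical `op-support.csv` might look like the following. The column names and operator entries here are invented for the example; only the shape (one header row, four columns matching the `:widths:` option) follows the directive.

```csv
Operator,Status,Constraints,Notes
aten.add.Tensor,Supported,,
aten.convolution.default,Partial,Depthwise not supported,
aten.mm.default,Supported,fp32 only,
```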

docs/source/backend-template.md renamed to docs/source/backends/template/backend-overview.md

Lines changed: 1 addition & 19 deletions

@@ -4,7 +4,7 @@ Provide a brief overview/description of the backend. At a high-level, what does

 ## Features

-List high-level features of backend, such as general operator and hardware support.
+List high-level features of backend, such as operator and hardware support.

 ## Target Requirements

@@ -18,24 +18,6 @@ What software and hardware is needed to create a .PTE file targeting this backen

 This section describes the steps users need to take in order to generate a .PTE targeting this backend. Include a full code sample for exporting and lowering a model to this backend. Make sure relevant imports for the backend partitioner are included.

-### Partitioner API
-
-What options, if any, does the partitioner take? Are there any other export-time configurations that can be applied? Document each option.
-
-### Quantization
-
-What quantization schemes does this backend support? Consider including the following, as appropriate.
-- What operators are supported?
-- Number of bits?
-- Static vs dynamic activations?
-- Weight only vs activations + weights?
-- Symmetric vs asymmetric weights?
-- Per-tensor, per-chanel, group/blockwise?
-
-If using a PT2E quantizer, document how to initialize the quantizer and all relevant configs and options.
-
-Include a code snippet demonstrating how to perform quantization for this backend. Document, or link to, a description of the parameters that the user can specify.
-
 ## Runtime Integration

 This section is intended to tell the user all of the steps they'll need to take to be able to run a .PTE file on-device that is targeting the given backend.
Lines changed: 3 additions & 0 deletions

# {BACKEND_NAME} Partitioner API

Document the partitioner API for the backend, including configuration options and compile specs.
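A sketch of what such documentation might include, using the same `{placeholder}` convention as the quantization template (all names below are placeholders, not a real API), is a lowering snippet showing where partitioner options are passed:

```python
import torch
from executorch.backends.{backend_name}.partition.{backend_name}_partitioner import {BackendName}Partitioner
from executorch.exir import to_edge_transform_and_lower

# Construct the partitioner with backend-specific options, then document
# each option and its effect on partitioning.
partitioner = {BackendName}Partitioner({PARTITIONER_OPTIONS})

et_program = to_edge_transform_and_lower(
    torch.export.export(model, sample_inputs),
    partitioner=[partitioner],
).to_executorch()
```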
Lines changed: 87 additions & 0 deletions

# {BACKEND_NAME} Quantization

Document quantization schemes and flows for the backend. This should include a description of each scheme and a code example to perform quantization. Example sections for PT2E and quantize_ are included below, to be replaced with details for the target backend.

### Supported Quantization Schemes

The {BACKEND_NAME} delegate supports the following quantization schemes:

- {QUANTIZATION_SCHEME_1}
- {QUANTIZATION_SCHEME_2}

### {QUANTIZATION_METHOD_1} using the PT2E Flow

To perform {QUANTIZATION_METHOD_1} with the PT2E flow, perform the following steps prior to exporting the model:

1) Create an instance of the `{BackendName}Quantizer` class. Set quantization parameters.
2) Use `torch.export.export` to prepare for quantization.
3) Call `prepare_pt2e` to prepare the model for quantization.
4) For static quantization, run the prepared model with representative samples to calibrate the quantized tensor activation ranges.
5) Call `convert_pt2e` to quantize the model.
6) Export and lower the model using the standard flow.

The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using the normal flow. As it is a regular PyTorch model, it can also be used to evaluate the accuracy of the quantized model using standard PyTorch techniques.

```python
import torch
import {MODEL_IMPORT_PATH} as models
from {MODEL_WEIGHTS_IMPORT}
from executorch.backends.{backend_name}.quantizer.{backend_name}_quantizer import {BackendName}Quantizer, {get_quantization_config_function}
from executorch.backends.{backend_name}.partition.{backend_name}_partitioner import {BackendName}Partitioner
from executorch.exir import to_edge_transform_and_lower
from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e

model = models.{model_name}.{model_function}(weights={ModelWeights}.DEFAULT).eval()
sample_inputs = ({SAMPLE_INPUT_SHAPE}, )

qparams = {get_quantization_config_function}({QUANTIZATION_PARAMS})  # (1)
quantizer = {BackendName}Quantizer()
quantizer.set_global(qparams)

training_ep = torch.export.export(model, sample_inputs).module()  # (2)
prepared_model = prepare_pt2e(training_ep, quantizer)  # (3)

for cal_sample in [{CALIBRATION_SAMPLE}]:  # Replace with representative model inputs
    prepared_model(cal_sample)  # (4) Calibrate

quantized_model = convert_pt2e(prepared_model)  # (5)

et_program = to_edge_transform_and_lower(  # (6)
    torch.export.export(quantized_model, sample_inputs),
    partitioner=[{BackendName}Partitioner()],
).to_executorch()
```

See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) for more information.

### LLM Quantization with quantize_

The {BACKEND_NAME} backend also supports quantizing models with the [torchao](https://github.com/pytorch/ao) quantize_ API. {ADVANCED_QUANTIZATION_DESCRIPTION}

Below is a simple example, but a more detailed tutorial including accuracy evaluation on popular benchmarks can be found in the [torchao documentation]({TORCHAO_DOCS_URL}).

```python
import torch
from torchao.quantization.granularity import PerGroup, PerAxis
from torchao.quantization.quant_api import (
    IntxWeightOnlyConfig,
    Int8DynamicActivationIntxWeightConfig,
    quantize_,
)

# eager_model is assumed to be an eager-mode torch.nn.Module LLM instance

# Quantize embeddings with 8-bits, per channel
embedding_config = IntxWeightOnlyConfig(
    weight_dtype=torch.int8,
    granularity=PerAxis(0),
)
quantize_(
    eager_model,
    embedding_config,
    lambda m, fqn: isinstance(m, torch.nn.Embedding),
)

# Quantize linear layers with 8-bit dynamic activations and 4-bit weights
linear_config = Int8DynamicActivationIntxWeightConfig(
    weight_dtype=torch.int4,
    weight_granularity=PerGroup(32),
)
quantize_(eager_model, linear_config)
```
Lines changed: 15 additions & 0 deletions

# {BACKEND_NAME} Troubleshooting

This page describes common issues that you may encounter when using the {BACKEND_NAME} backend and how to debug and resolve them.

## {COMMON_ISSUE_1}

{ISSUE_DESCRIPTION_1}

{SOLUTION_STEPS_1}

## {COMMON_ISSUE_2}

{ISSUE_DESCRIPTION_2}

{SOLUTION_STEPS_2}
Lines changed: 3 additions & 0 deletions

# Using {FEATURE} on {BACKEND_NAME}

This is a placeholder guide.
