Commit bd30b09

Add op support table
1 parent 0c47d79 commit bd30b09

File tree: 5 files changed, 65 additions and 3 deletions

.gitignore

Lines changed: 0 additions & 1 deletion

@@ -62,7 +62,6 @@ xcuserdata/
 /include/
 /share/
 /version.py
-*.csv
 *_etdump
 
 # Android

backends/xnnpack/README.md

Lines changed: 1 addition & 1 deletion

@@ -134,4 +134,4 @@ create an issue on [github](https://github.com/pytorch/executorch/issues).
 ## See Also
 For more information about the XNNPACK Backend, please check out the following resources:
 - [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
-- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference)
+- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backends/xnnpack/backend-delegates-xnnpack-reference)
Lines changed: 47 additions & 0 deletions

@@ -0,0 +1,47 @@
+Operator,Compute DType,Quantization,Constraints
+_to_dim_order_copy,"fp16, fp32",,no dtype conversion
+abs,"fp16, fp32",,
+add,"fp16, fp32",PT2E: static int8,alpha=1
+avg_pool2d,"fp16, fp32",PT2E: static int8,"ceil_mode=False, count_include_pad=False, divisor_override=pooling_region"
+bmm,"fp16, fp32",,
+cat,"fp16, fp32",PT2E: static int8,
+ceil,"fp16, fp32",,
+clamp,"fp16, fp32",,
+constant_pad_nd,"fp16, fp32",,no negative padding values
+conv1d,"fp16, fp32","PT2E: static or dynamic int8 activations
+8-bit weights, symmetric per-tensor or per-channel",constant weights
+conv2d,"fp16, fp32","PT2E: static or dynamic int8 activations
+8-bit weights, symmetric per-tensor or per-channel",constant weights
+dequantize_per_tensor,"fp16, fp32",,
+div,"fp16, fp32",,
+elu,"fp16, fp32",,
+exp,"fp16, fp32",,
+floor,"fp16, fp32",,
+gelu,"fp16, fp32",,
+hardswish,"fp16, fp32",,
+hardtanh,"fp16, fp32",,
+leaky_relu,"fp16, fp32",,
+linear,"fp16, fp32","PT2E: static or dynamic int8 activations
+8-bit weights, symmetric per-tensor or per-channel
+
+quantize_: 8-bit dynamic activations
+4-bit groupwise weights",constant weights
+log,"fp16, fp32",,
+max_pool2d,"fp16, fp32",,"stride ≤ kernel_size, ceil_mode only for static shapes"
+maximum,"fp16, fp32",,
+mean,"fp16, fp32",,"4D tensors only; dims=[-2,-1] or [-1,-2]"
+minimum,"fp16, fp32",,
+mul,"fp16, fp32",PT2E: static int8,
+neg,"fp16, fp32",,
+permute_copy,"fp16, fp32",,
+pow,"fp16, fp32",,power=2 only
+quantize_per_tensor,"fp16, fp32",,
+relu,"fp16, fp32",,
+rsqrt,"fp16, fp32",,
+sigmoid,"fp16, fp32",,
+slice_copy,"fp16, fp32",,"no zero-dim tensors, no dynamic shapes"
+softmax,"fp16, fp32",,dim must be last dimension
+sqrt,"fp16, fp32",,
+sub,"fp16, fp32",,alpha=1
+tanh,"fp16, fp32",,
+upsample_bilinear2d,"fp16, fp32",,no dynamic output sizes
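Note that some cells in the added CSV span multiple physical lines (the quantization details for conv1d, conv2d, and linear are quoted, multi-line fields). A standard CSV parser handles this; a minimal sketch using Python's `csv` module on an excerpt of the table above:

```python
import csv
import io

# Excerpt of the op-support table added in this commit. The conv1d row
# shows how a quoted cell can span two physical lines.
table = '''Operator,Compute DType,Quantization,Constraints
add,"fp16, fp32",PT2E: static int8,alpha=1
conv1d,"fp16, fp32","PT2E: static or dynamic int8 activations
8-bit weights, symmetric per-tensor or per-channel",constant weights
softmax,"fp16, fp32",,dim must be last dimension
'''

rows = list(csv.DictReader(io.StringIO(table)))
support = {row["Operator"]: row for row in rows}

# The embedded newline inside the quoted Quantization cell is preserved.
print(support["conv1d"]["Quantization"])
print(support["add"]["Constraints"])       # alpha=1
```

The same parsing behavior is what lets Sphinx's `csv-table` directive render these multi-line cells as single table cells.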
Lines changed: 13 additions & 0 deletions

@@ -0,0 +1,13 @@
+================
+Operator Support
+================
+
+This page lists the operators supported by the XNNPACK backend. Operators are the building blocks of an ML model. See `IRs <https://docs.pytorch.org/docs/stable/torch.compiler_ir.html>`_ for more information on the PyTorch operator set.
+
+All operators support dynamic input shapes unless otherwise noted.
+
+.. csv-table:: Operator Support
+   :file: op-support.csv
+   :header-rows: 1
+   :widths: 20 20 30 30
+   :align: center
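The Constraints column in the table encodes per-op restrictions (e.g. `alpha=1` for add/sub, softmax only on the last dimension, `power=2` for pow). A minimal sketch of how a pre-lowering check might enforce a few of these; the `op_supported` helper and its dict-style attributes are hypothetical illustrations, not the actual XNNPACK partitioner API:

```python
# Hypothetical check mirroring a few constraints from the op-support
# table; not ExecuTorch's real partitioner logic.
def op_supported(op: str, **attrs) -> bool:
    if op in ("add", "sub"):
        # table: alpha=1
        return attrs.get("alpha", 1) == 1
    if op == "softmax":
        # table: dim must be the last dimension
        return attrs["dim"] in (-1, attrs["rank"] - 1)
    if op == "pow":
        # table: power=2 only
        return attrs.get("exponent") == 2
    # remaining per-op checks elided
    return True

print(op_supported("add", alpha=1))            # True
print(op_supported("softmax", dim=1, rank=4))  # False: dim is not last
```

In the real flow, ops failing such checks are simply not partitioned to the delegate and fall back to portable kernels.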

docs/source/backends/xnnpack/reference/xnnpack-reference.md

Lines changed: 4 additions & 1 deletion

@@ -6,6 +6,8 @@
 
 **→{doc}`xnnpack-reference-quantization` — Supported quantization schemes.**
 
+**→{doc}`xnnpack-reference-op-support` — Supported operators.**
+
 ## Internals
 
 **→{doc}`xnnpack-reference-arch-internals` — XNNPACK backend internals.**
@@ -14,6 +16,7 @@
 :hidden:
 :maxdepth: 1
 
-xnnpack-reference-arch-internals
 xnnpack-reference-partitioner
 xnnpack-reference-quantization
+xnnpack-reference-op-support
+xnnpack-reference-arch-internals
