2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -33,7 +33,7 @@ executorch
│ ├── <a href="backends/openvino">openvino</a> - OpenVINO backend for Intel hardware.
│ ├── <a href="backends/qualcomm">qualcomm</a> - Qualcomm-specific backends. See <a href="docs/source/backends-qualcomm.md">doc</a>.
│ ├── <a href="backends/transforms">transforms</a> - Transformations for backend optimization.
-│ ├── <a href="backends/vulkan">vulkan</a> - Vulkan backend for cross-platform GPU support. See <a href="docs/source/backends-vulkan.md">doc</a>.
+│ ├── <a href="backends/vulkan">vulkan</a> - Vulkan backend for cross-platform GPU support. See <a href="docs/source/backends/vulkan/vulkan-overview.md">doc</a>.
│ └── <a href="backends/xnnpack">xnnpack</a> - XNNPACK backend for optimized neural network operations. See <a href="docs/source/backends/xnnpack/xnnpack-overview.md">doc</a>.
├── <a href="codegen">codegen</a> - Tooling to autogenerate bindings between kernels and the runtime.
├── <a href="configurations">configurations</a> - Configuration files.
113 changes: 113 additions & 0 deletions docs/source/backends/vulkan/vulkan-op-support-table.csv
@@ -0,0 +1,113 @@
Namespace,Operator,Notes
Contributor Author commented:

Due to .gitignore settings, this file was accidentally left out of the original commit updating the documentation, so I decided to just bundle it with this fix.

aten,_log_softmax,
aten,_native_batch_norm_legit_no_training,
aten,_softmax,
aten,_to_copy,dtype conversion between float types only
aten,_weight_int8pack_mm,
aten,abs,
aten,add,
aten,addmm,
aten,amax,keepdim=True required; max 2D reductions
aten,amin,keepdim=True required; max 2D reductions
aten,arange,
aten,avg_pool2d,
aten,bmm,
aten,cat,
aten,clamp,
aten,clone,
aten,constant_pad_nd,
aten,convolution,batch=1 for 2D conv; no transposed 1D conv; no 3D conv
aten,cos,
aten,div,
aten,div.Tensor_mode,
aten,embedding,
aten,eq,
aten,exp,
aten,expand_copy,no resize support
aten,flip,
aten,full,
aten,full_like,
aten,ge,
aten,gelu,
aten,gt,
aten,hardshrink,
aten,hardtanh,
aten,index_select,
aten,le,
aten,leaky_relu,
aten,linear,
aten,lt,
aten,max_pool2d,
aten,max_pool2d_with_indices,
aten,mean,keepdim=True required; max 2D reductions
aten,minimum,
aten,mm,
aten,native_group_norm,
aten,native_layer_norm,resize supported
aten,neg,
aten,ones,
aten,ones_like,
aten,permute,
aten,permute_copy,
aten,pow,
aten,relu,
aten,repeat,
aten,round,
aten,rsqrt,
aten,scalar_tensor,
aten,select_copy,
aten,sigmoid,
aten,sin,
aten,slice_copy,
aten,split,
aten,split_with_sizes_copy,
aten,sqrt,
aten,squeeze_copy,
aten,sub,
aten,sum,keepdim=True required; max 2D reductions
aten,t_copy,
aten,tanh,
aten,unsqueeze_copy,
aten,upsample_bilinear2d,
aten,upsample_nearest2d,
aten,view_copy,
aten,zeros,
aten,zeros_like,
aten,_assert_scalar,removed via graph pass
aten,sym_constrain_range_for_size,removed via graph pass
aten,sym_size,
dim_order_ops,_clone_dim_order,no dtype conversion; removable if no dtype change
dim_order_ops,_to_dim_order_copy,no dtype conversion; removable if no dtype change
llama,custom_sdpa,
llama,sdpa_with_kv_cache,
llama,update_cache,
operator,add,
operator,eq,
operator,ge,
operator,getitem,
operator,gt,
operator,le,
operator,lt,
quantized_decomposed,choose_qparams,
quantized_decomposed,choose_qparams_per_token_asymmetric,
quantized_decomposed,dequantize_per_channel,
quantized_decomposed,dequantize_per_tensor,
quantized_decomposed,dequantize_per_token,
quantized_decomposed,quantize_per_channel,
quantized_decomposed,quantize_per_tensor,
quantized_decomposed,quantize_per_token,
torchao,choose_qparams_affine,
torchao,dequantize_affine,
torchao,quantize_affine,
et_vk,add_q8ta_q8ta_q8to,no resize support
et_vk,apply_rotary_emb,
et_vk,conv2d_q8ta_q8csw_q8to,no resize support
et_vk,conv2d_q8ta_q8csw_q8to_dw,no resize support
et_vk,conv_with_clamp,batch=1 for 2D conv; no transposed 1D conv
et_vk,dequantize_q8to_from_conv2d,no resize support
et_vk,grid_priors,
et_vk,linear_dq8ca_q4gsw,
et_vk,linear_q4gsw,
et_vk,linear_q8ta_q8csw,
et_vk,linear_qcs4w,
et_vk,quantize_q8ta_for_conv2d,no resize support
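
For reference, the notes in the table above constrain how a graph can be lowered to the Vulkan delegate. Below is a minimal export sketch, assuming the `VulkanPartitioner` import path and the `to_edge_transform_and_lower` flow described in the existing Vulkan backend docs; the toy module is illustrative only.

```python
# Hedged sketch: lowering a small model to the Vulkan delegate so the op
# constraints above apply (e.g. keepdim=True for aten.mean). Import paths
# are assumptions based on the existing Vulkan backend docs.
import torch
from torch.export import export

from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner
from executorch.exir import to_edge_transform_and_lower


class SmallModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # aten.mean is listed above as requiring keepdim=True.
        return torch.mean(torch.relu(x), dim=-1, keepdim=True)


model = SmallModel().eval()
sample_inputs = (torch.randn(1, 16, 32),)

exported = export(model, sample_inputs)
edge = to_edge_transform_and_lower(exported, partitioner=[VulkanPartitioner()])

with open("small_model_vulkan.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```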
2 changes: 1 addition & 1 deletion docs/source/backends/vulkan/vulkan-op-support.rst
@@ -39,7 +39,7 @@ All operators support dynamic input shapes unless otherwise noted (i.e. "no
resize support"). The expectation is that over time, all operators will be able
to support dynamic shapes.

-.. csv-table:: Operator Support
+.. csv-table:: Vulkan Backend Operator Support
:file: vulkan-op-support-table.csv
:header-rows: 1
:widths: 25 25 75
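
The dynamic-shape expectation described in this section corresponds to the standard `torch.export` mechanism; a minimal sketch follows, with an assumed toy module and illustrative dimension bounds.

```python
# Hedged sketch of exporting with a dynamic input dimension, which ops
# without a "no resize support" note are expected to handle.
import torch
from torch.export import Dim, export


class Pointwise(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(x) + 1.0


model = Pointwise().eval()
sample = (torch.randn(1, 8, 128),)

# Mark the last dimension as dynamic; the bounds here are illustrative.
seq_len = Dim("seq_len", min=2, max=512)
exported = export(model, sample, dynamic_shapes={"x": {2: seq_len}})
print(exported.graph_signature)
```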
3 changes: 2 additions & 1 deletion docs/source/conf.py
@@ -264,7 +264,8 @@
"export-overview": "using-executorch-export.html",
"runtime-build-and-cross-compilation": "using-executorch-building-from-source.html",
"tutorials/export-to-executorch-tutorial": "../using-executorch-export.html",
"build-run-vulkan": "backends-vulkan.html",
"build-run-vulkan": "backends/vulkan/vulkan-overview.html",
"backends-vulkan": "backends/vulkan/vulkan-overview.html",
"executorch-arm-delegate-tutorial": "backends-arm-ethos-u.html",
"build-run-coreml": "backends-coreml.html",
"build-run-mediatek-backend": "backends-mediatek.html",