CONTRIBUTING.md: 4 additions, 4 deletions
@@ -24,17 +24,17 @@ For Apple, please refer to the [iOS documentation](docs/source/using-executorch-
 executorch
 ├── <a href="backends">backends</a> - Backend delegate implementations for various hardware targets. Each backend uses a partitioner to split the graph into subgraphs that can be executed on specific hardware, a quantizer to optimize model precision, and runtime components to execute the graph on the target hardware. For details, refer to the <a href="docs/source/backend-delegates-integration.md">backend documentation</a> and the <a href="docs/source/using-executorch-export.md">Export and Lowering tutorial</a>.
-│ │ ├── <a href="backends/apple/coreml">coreml</a> - CoreML backend for Apple devices. See <a href="docs/source/backends-coreml.md">doc</a>.
-│ │ └── <a href="backends/apple/mps">mps</a> - Metal Performance Shaders backend for Apple devices. See <a href="docs/source/backends-mps.md">doc</a>.
+│ │ ├── <a href="backends/apple/coreml">coreml</a> - CoreML backend for Apple devices. See <a href="docs/source/backends/coreml/coreml-overview.md">doc</a>.
+│ │ └── <a href="backends/apple/mps">mps</a> - Metal Performance Shaders backend for Apple devices. See <a href="docs/source/backends/mps/mps-overview.md">doc</a>.
 │ ├── <a href="backends/arm">arm</a> - ARM architecture backends. See <a href="docs/source/backends-arm-ethos-u.md">doc</a>.
 │ ├── <a href="backends/cadence">cadence</a> - Cadence-specific backends. See <a href="docs/source/backends-cadence.md">doc</a>.
 │ ├── <a href="backends/example">example</a> - Example backend implementations.
 │ ├── <a href="backends/mediatek">mediatek</a> - MediaTek-specific backends. See <a href="docs/source/backends-mediatek.md">doc</a>.
 │ ├── <a href="backends/openvino">openvino</a> - OpenVINO backend for Intel hardware.
 │ ├── <a href="backends/qualcomm">qualcomm</a> - Qualcomm-specific backends. See <a href="docs/source/backends-qualcomm.md">doc</a>.
 │ ├── <a href="backends/transforms">transforms</a> - Transformations for backend optimization.
-│ ├── <a href="backends/vulkan">vulkan</a> - Vulkan backend for cross-platform GPU support. See <a href="docs/source/backends-vulkan.md">doc</a>.
-│ └── <a href="backends/xnnpack">xnnpack</a> - XNNPACK backend for optimized neural network operations. See <a href="docs/source/backends-xnnpack.md">doc</a>.
+│ ├── <a href="backends/vulkan">vulkan</a> - Vulkan backend for cross-platform GPU support. See <a href="docs/source/backends/vulkan/vulkan-overview.md">doc</a>.
+│ └── <a href="backends/xnnpack">xnnpack</a> - XNNPACK backend for optimized neural network operations. See <a href="docs/source/backends/xnnpack/xnnpack-overview.md">doc</a>.
 ├── <a href="codegen">codegen</a> - Tooling to autogenerate bindings between kernels and the runtime.
 ├── <a href="devtools">devtools</a> - Model profiling, debugging, and inspection. Please refer to the <a href="docs/source/devtools-overview.md">tools documentation</a> for more information.
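The partitioner concept described in the `backends` entry above can be illustrated with a small sketch (all names here are hypothetical, not the real ExecuTorch API): a partitioner walks the operator list and groups consecutive operators the backend supports into delegated subgraphs, leaving the rest to the portable runtime.

```python
# Toy illustration of backend partitioning (hypothetical names, not the
# real ExecuTorch partitioner API): consecutive ops supported by a backend
# are grouped into runs that a delegate would execute; unsupported ops
# fall back to the portable runtime.

SUPPORTED = {"conv2d", "relu", "add"}  # ops this toy "backend" claims

def partition(ops):
    """Split a flat op list into (delegated?, ops) runs."""
    runs = []
    for op in ops:
        delegated = op in SUPPORTED
        if runs and runs[-1][0] == delegated:
            runs[-1][1].append(op)      # extend the current run
        else:
            runs.append((delegated, [op]))  # start a new run
    return runs

graph = ["conv2d", "relu", "softmax", "add", "relu"]
print(partition(graph))
# → [(True, ['conv2d', 'relu']), (False, ['softmax']), (True, ['add', 'relu'])]
```

A real partitioner operates on an exported graph rather than a flat list, but the grouping decision it makes per node is the same shape as this sketch.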
backends/apple/coreml/README.md: 1 addition, 1 deletion
@@ -1,7 +1,7 @@
 # ExecuTorch Core ML Delegate

 This subtree contains the Core ML Delegate implementation for ExecuTorch.
-Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate operators when running on Apple devices. To learn how to use the CoreML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends-coreml.md).
+Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate supported operators. To learn how to use the Core ML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends/coreml/coreml-overview.md).

 ## Layout
 - `compiler/` : Lowers a module to the Core ML backend.
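What "delegation" means at runtime can be sketched in miniature (hypothetical names; the real Core ML delegate lives in this subtree's C++/Objective-C runtime code): the runtime dispatches the subgraphs a delegate compiled to that delegate, and executes everything else with its portable kernels.

```python
# Minimal sketch of the delegate idea (hypothetical names, not the real
# Core ML delegate interface): a delegate executes the work it claimed,
# and the runtime falls back to a portable implementation otherwise.
# Both paths must produce the same result; only the executor differs.

class ToyDelegate:
    """Pretend accelerator that only knows multiply-accumulate."""
    def execute(self, a, b, c):
        return a * b + c  # imagine this running on the ANE/GPU

class ToyRuntime:
    def __init__(self, delegate=None):
        self.delegate = delegate

    def run_mac(self, a, b, c):
        if self.delegate is not None:
            return self.delegate.execute(a, b, c)  # delegated path
        return a * b + c                           # portable fallback

print(ToyRuntime(ToyDelegate()).run_mac(2, 3, 4))  # → 10
print(ToyRuntime().run_mac(2, 3, 4))               # → 10
```

The key invariant the sketch shows is numerical parity: delegating an operator changes where it runs, not what it computes.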
backends/nxp/README.md: 9 additions, 9 deletions
@@ -5,14 +5,14 @@ This subtree contains the ExecuTorch Backend implementation for the
 The eIQ® Neutron NPU is a highly scalable accelerator core architecture providing machine learning (ML) acceleration,
 able to support common and critical tasks for edge AI such as anomaly detection, speech recognition,
-image classification, object detection, facial recognition, image segmentation, and generative AI use cases like
+image classification, object detection, facial recognition, image segmentation, and generative AI use cases like
 large and small language models (LLMs & SLMs) and text-to-speech (TTS).
-The architecture provides power and performance optimized NPUs integrated with NXP's broad portfolio of
+The architecture provides power and performance optimized NPUs integrated with NXP's broad portfolio of
 microcontrollers and applications processors.

-The eIQ Neutron NPUs offer support for a wide variety of neural network types such as CNN, RNN, TCN and Transformer
+The eIQ Neutron NPUs offer support for a wide variety of neural network types such as CNN, RNN, TCN and Transformer
 networks, as well as the ability to adapt and scale to new model architectures, topologies and layer types introduced
-to AI workloads. ML application development with the eIQ Neutron NPU is fully supported by the
+to AI workloads. ML application development with the eIQ Neutron NPU is fully supported by the
 [eIQ machine learning software development environment](https://www.nxp.com/design/design-center/software/eiq-ml-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT).
 The eIQ AI SW Stack provides a streamlined development experience for developers and end-users of NXP products.
@@ -22,7 +22,7 @@ At this moment following eIQ® Neutron NPU variants and NXP platforms are suppor
 * **eIQ Neutron N3-64**, available on [i.MX RT700](https://www.nxp.com/products/i.MX-RT700)

-In the future the NXP eIQ Neutron Backend will be extended to support [i.MX 9 Application Processors](https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-9-processors:IMX9-PROCESSORS)
+In the future, the NXP eIQ Neutron Backend will be extended to support [i.MX 9 Application Processors](https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-9-processors:IMX9-PROCESSORS)
 with eIQ Neutron NPU, like the [i.MX 95](https://www.nxp.com/products/iMX95).
@@ -33,7 +33,7 @@ The eIQ Neutron NPU Backend should be considered as prototype quality at this mo
 improvements. NXP and the ExecuTorch community are actively developing this codebase.

 ## Neutron Backend implementation and SW architecture
-Neutron Backend uses the eIQ Neutron Converter as ML compiler to compile the delegated subgraph to Neutron microcode.
+The Neutron Backend uses the eIQ Neutron Converter as its ML compiler to compile the delegated subgraph to Neutron microcode.
 The Neutron Converter accepts the ML model in LiteRT format; for the **eIQ Neutron N3** class, the Neutron Backend therefore
 uses the LiteRT flatbuffers format as the IR between ExecuTorch and the Neutron Converter ML compiler.
@@ -44,10 +44,10 @@ uses the LiteRT flatbuffers format as IR between the ExecuTorch and Neutron Conv
 `node_converters` is structured as a single module for each Edge operator.
 * `backend/ir/lib` - automatically generated handlers from the LiteRT flatbuffers schema.
 * `backend/ir/tflite_generator` and `backend/ir/tflite_optimizer` handle the serialization
-of the in-memory built subgraph for delegation into LiteRT/TFLite flatbuffers
+of the in-memory built subgraph for delegation into the LiteRT/TFLite flatbuffers
 representation. Code taken from the onnx2tflite tool.
-* `edge_passes` - Various passes operating on Edge dialect level.
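The compile flow the NXP README describes (delegated Edge subgraph → LiteRT flatbuffers IR → Neutron microcode) can be sketched as a chain of two stages. Everything below is hypothetical stand-in code, not the eIQ Neutron Converter: JSON stands in for flatbuffers, and the "microcode" is an opaque byte blob.

```python
# Toy sketch of the compile pipeline (hypothetical names and formats):
# stage 1 serializes the delegated Edge subgraph to a LiteRT-style IR,
# stage 2 stands in for the Neutron Converter, turning that IR into an
# opaque microcode blob the runtime would ship to the NPU.

import json

def edge_to_litert_ir(edge_ops):
    """Serialize an Edge subgraph to a LiteRT-like IR.
    JSON stands in for the flatbuffers format here."""
    return json.dumps({"subgraph": [{"op": op} for op in edge_ops]})

def neutron_convert(litert_ir):
    """Stand-in for the Neutron Converter: IR in, microcode bytes out."""
    ir = json.loads(litert_ir)
    return b"".join(op["op"].encode() + b"\x00" for op in ir["subgraph"])

ir = edge_to_litert_ir(["conv2d", "relu"])
microcode = neutron_convert(ir)
print(microcode)  # → b'conv2d\x00relu\x00'
```

The point of the two-stage shape is the hand-off the README names: the backend and the converter agree only on the intermediate representation, so either side can evolve independently as long as the IR contract holds.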