Commit c19909d

Fix more typos and broken links
1 parent 455639b · commit c19909d

7 files changed: +8, -17 lines changed

docs/source/backends-overview.md

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ Backends are the bridge between your exported model and the hardware it runs on.
 | [MediaTek](backends-mediatek) | Android | NPU | MediaTek SoCs |
 | [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs |
 | [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms |
-| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs |
+| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs |
 | [NXP](backends-nxp) | Embedded | NPU | NXP SoCs |
 | [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads |
 | [Samsung Exynos](/backends/samsung/samsung-overview.md) | Android | NPU | Samsung SoCs |

docs/source/examples-end-to-end-to-lower-model-to-delegate.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ There are three flows for delegating a program to a backend:
    is good for reusing lowered modules exported from other flows.
 1. Lower parts of a module according to a partitioner. This is good for
    lowering models that include both lowerable and non-lowerable nodes, and is
-   the most streamlined procecss.
+   the most streamlined process.
 
 ### Flow 1: Lowering the whole module
 
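As context for the flow this hunk describes (lowering via a partitioner, "the most streamlined process"), a minimal sketch of that flow is shown below. It is not part of the commit and assumes the XNNPACK backend and the `to_edge_transform_and_lower` API; exact import paths can differ between ExecuTorch releases.

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# Export the eager module, then let the partitioner claim the nodes it can lower;
# anything it cannot handle stays in the portable (non-delegated) part of the graph.
exported = torch.export.export(Add(), (torch.randn(4), torch.randn(4)))
program = to_edge_transform_and_lower(
    exported, partitioner=[XnnpackPartitioner()]
).to_executorch()

with open("add_xnnpack.pte", "wb") as f:
    f.write(program.buffer)
```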

docs/source/getting-started.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 # Getting Started with ExecuTorch
-This section is intended to describe the necessary steps to take PyTorch model and run it using ExecuTorch. To use the framework, you will typically need to take the following steps:
+This section is intended to describe the necessary steps to take a PyTorch model and run it using ExecuTorch. To use the framework, you will typically need to take the following steps:
 - Install the ExecuTorch python package and runtime libraries.
 - Export the PyTorch model for the target hardware configuration.
 - Run the model using the ExecuTorch runtime APIs on your development platform.
@@ -76,7 +76,7 @@ Quantization can also be done at this stage to reduce model size and runtime. Qu
 
 After successfully generating a .pte file, it is common to use the Python runtime APIs to validate the model on the development platform. This can be used to evaluate model accuracy before running on-device.
 
-For the MobileNet V2 model from torchvision used in this example, image inputs are expected as a normalized, float32 tensor with a dimensions of (batch, channels, height, width). The output See [torchvision.models.mobilenet_v2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) for more information on the input and output tensor format for this model.
+For the MobileNet V2 model from torchvision used in this example, image inputs are expected as a normalized, float32 tensor with a dimensions of (batch, channels, height, width). The output is a tensor containing class logits. See [torchvision.models.mobilenet_v2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) for more information on the input and output tensor format for this model.
 
 ```python
 import torch
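The corrected paragraph above describes validating the generated .pte with the Python runtime APIs before deploying on-device. A rough, non-authoritative sketch of that step is below; the file name `mv2_xnnpack.pte` is hypothetical, and it assumes the `executorch.runtime` Python bindings are installed.

```python
import torch
from executorch.runtime import Runtime

runtime = Runtime.get()
program = runtime.load_program("mv2_xnnpack.pte")  # hypothetical path to the exported model
method = program.load_method("forward")

# MobileNet V2 expects a normalized float32 image tensor of shape (batch, channels, height, width).
example_input = torch.randn(1, 3, 224, 224)
outputs = method.execute([example_input])
print(outputs[0].argmax(dim=-1))  # index of the highest-scoring class logit
```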

docs/source/kernel-library-selective-build.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ gen_selected_ops(
     ROOT_OPS # comma separated operator names to be selected
     INCLUDE_ALL_OPS # boolean flag to include all operators
     OPS_FROM_MODEL # path to a pte file of model to select operators from
-    DTYPE_SELECTIVE_BUILD # boolean flag to enable dtye selection
+    DTYPE_SELECTIVE_BUILD # boolean flag to enable dtype selection
 )
 ```
 
docs/source/running-a-model-cpp-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ each API please see the [Runtime API Reference](executorch-runtime-api-reference
 ## Prerequisites
 
 You will need an ExecuTorch model to follow along. We will be using
-the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->.
+the model `SimpleConv` generated from the [Exporting to ExecuTorch tutorial](tutorials/export-to-executorch-tutorial) <!-- @lint-ignore -->..
 
 ## Model Loading
 
docs/source/using-executorch-android.md

Lines changed: 1 addition & 10 deletions
@@ -28,19 +28,10 @@ The AAR artifact contains the Java library for users to integrate with their Jav
 - Optimized kernels
 - Quantized kernels
 - LLaMa-specific Custom ops library.
-- Comes with two ABI variants, arm64-v8a and x86\_64.
+- Comes with two ABI variants, arm64-v8a and x86_64.
 
 The AAR library can be used for generic Android device with arm64-v8a or x86_64 architecture. It can be used across form factors, including phones, tablets, tv boxes, etc, as it does not contain any UI components.
 
-XNNPACK backend
-
-Portable kernels
-Optimized kernels
-Quantized kernels
-LLaMa-specific Custom ops library.
-Comes with two ABI variants, arm64-v8a and x86_64.
-The AAR library can be used for generic Android device with arm64-v8a or x86_64 architecture. It can be used across form factors, including phones, tablets, tv boxes, etc, as it does not contain any UI components.
-
 ## Using AAR from Maven Central
 
 ✅ Recommended for most developers

docs/source/using-executorch-export.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ Commonly used hardware backends are listed below. For mobile, consider using XNN
 - [XNNPACK (CPU)](backends/xnnpack/xnnpack-overview.md)
 - [Core ML (iOS)](backends/coreml/coreml-overview.md)
 - [Metal Performance Shaders (iOS GPU)](backends/mps/mps-overview.md)
-- [Vulkan (Android GPU)](backends-vulkan.md)
+- [Vulkan (Android GPU)](backends/vulkan/vulkan-overview.md)
 - [Qualcomm NPU](backends-qualcomm.md)
 - [MediaTek NPU](backends-mediatek.md)
 - [Arm Ethos-U NPU](backends-arm-ethos-u.md)
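Beyond the corrected Vulkan link, the backend chosen from this list is ultimately what determines which partitioner is passed at lowering time. The sketch below is an illustration rather than part of the commit; it assumes the XNNPACK backend and shows one way to check how much of the graph the backend actually claimed.

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

model = torch.nn.Linear(8, 2).eval()
exported = torch.export.export(model, (torch.randn(1, 8),))

# Choosing a different backend from the list generally means passing that backend's partitioner here.
edge = to_edge_transform_and_lower(exported, partitioner=[XnnpackPartitioner()])

# Delegated subgraphs show up as executorch_call_delegate nodes in the lowered graph,
# which is a quick way to confirm the chosen backend claimed (parts of) the model.
graph_module = edge.exported_program().graph_module
delegated = [
    n for n in graph_module.graph.nodes
    if n.op == "call_function" and "executorch_call_delegate" in str(n.target)
]
print(f"{len(delegated)} delegated subgraph(s)")
```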
