
Commit fc512fa

Fix typos in docs ahead of GA (#14964)
1 parent f443ebb commit fc512fa

12 files changed: +15 −15 lines changed

docs/README.md

Lines changed: 3 additions & 3 deletions
@@ -43,7 +43,7 @@ To build the documentation locally:
   git clone -b viable/strict https://github.com/pytorch/executorch.git && cd executorch
   ```

-1. If you don't have it already, start either a Python virtual envitonment:
+1. If you don't have it already, start either a Python virtual environment:

   ```bash
   python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip
@@ -111,7 +111,7 @@ You can use the variables in both regular text and code blocks.
## Including READMEs to the Documentation Build

You might want to include some of the `README.md` files from various directories
-in this repositories in your documentation build. To do that, create an `.md`
+in this repository in your documentation build. To do that, create an `.md`
file and use the `{include}` directive to insert your `.md` files. Example:

````
@@ -177,7 +177,7 @@ file:
````

In the `index.md` file, I would add `tutorials/selective-build-tutorial` in
-both the `toctree` and the `cusotmcarditem` sections.
+both the `toctree` and the `customcarditem` sections.

# Auto-generated API documentation

docs/source/backends-coreml.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ The Core ML partitioner API allows for configuration of the model delegation to
- `skip_ops_for_coreml_delegation`: Allows you to skip ops for delegation by Core ML. By default, all ops that Core ML supports will be delegated. See [here](https://github.com/pytorch/executorch/blob/14ff52ff89a89c074fc6c14d3f01683677783dcd/backends/apple/coreml/test/test_coreml_partitioner.py#L42) for an example of skipping an op for delegation.
- `compile_specs`: A list of `CompileSpec`s for the Core ML backend. These control low-level details of Core ML delegation, such as the compute unit (CPU, GPU, ANE), the iOS deployment target, and the compute precision (FP16, FP32). These are discussed more below.
- `take_over_mutable_buffer`: A boolean that indicates whether PyTorch mutable buffers in stateful models should be converted to [Core ML `MLState`](https://developer.apple.com/documentation/coreml/mlstate). If set to `False`, mutable buffers in the PyTorch graph are converted to graph inputs and outputs to the Core ML lowered module under the hood. Generally, setting `take_over_mutable_buffer` to true will result in better performance, but using `MLState` requires iOS >= 18.0, macOS >= 15.0, and Xcode >= 16.0.
-- `take_over_constant_data`: A boolean that indicates whether PyTorch constant data like model weights should be consumed by the Core ML delegate. If set to False, constant data is passed to the Core ML delegate as inputs. By deafault, take_over_constant_data=True.
+- `take_over_constant_data`: A boolean that indicates whether PyTorch constant data like model weights should be consumed by the Core ML delegate. If set to False, constant data is passed to the Core ML delegate as inputs. By default, take_over_constant_data=True.
- `lower_full_graph`: A boolean that indicates whether the entire graph must be lowered to Core ML. If set to True and Core ML does not support an op, an error is raised during lowering. If set to False and Core ML does not support an op, the op is executed on the CPU by ExecuTorch. Although setting `lower_full_graph`=False can allow a model to lower where it would otherwise fail, it can introduce performance overhead in the model when there are unsupported ops. You will see warnings about unsupported ops during lowering if there are any. By default, `lower_full_graph`=False.

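To make these options concrete, here is a minimal sketch of constructing the partitioner with the flags documented in this hunk. Only the parameter names come from the doc text above; the import paths and the surrounding export/lowering calls are assumptions about typical ExecuTorch usage and may differ between releases.

```python
# Hypothetical sketch, not part of this commit: configuring the Core ML partitioner.
import torch
from executorch.backends.apple.coreml.partition import CoreMLPartitioner  # assumed import path
from executorch.exir import to_edge_transform_and_lower

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

exported = torch.export.export(TinyModel().eval(), (torch.randn(1, 8),))

partitioner = CoreMLPartitioner(
    skip_ops_for_coreml_delegation=None,  # delegate every op Core ML supports
    take_over_mutable_buffer=True,        # use MLState; needs iOS >= 18.0 / macOS >= 15.0
    take_over_constant_data=True,         # let the Core ML delegate consume the weights
    lower_full_graph=False,               # unsupported ops fall back to ExecuTorch CPU ops
)

# Lower the partitioned graph and serialize it to a .pte program.
et_program = to_edge_transform_and_lower(exported, partitioner=[partitioner]).to_executorch()
```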
docs/source/backends-overview.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ Backends are the bridge between your exported model and the hardware it runs on.
| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs |
| [NXP](backends-nxp) | Embedded | NPU | NXP SoCs |
| [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads |
-| [Samsung Exynos](backends-samsung-exynos)| Android | NPU | Samsung Socs |
+| [Samsung Exynos](backends-samsung-exynos)| Android | NPU | Samsung SoCs |

**Tip:** For best performance, export a `.pte` file for each backend you plan to support.

docs/source/backends-xnnpack.md

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ To perform 8-bit quantization with the PT2E flow, perform the following steps pr
1) Create an instance of the `XnnpackQuantizer` class. Set quantization parameters.
2) Use `torch.export.export` to prepare for quantization.
3) Call `prepare_pt2e` to prepare the model for quantization.
-4) For static quantization, run the prepared model with representative samples to calibrate the quantizated tensor activation ranges.
+4) For static quantization, run the prepared model with representative samples to calibrate the quantized tensor activation ranges.
5) Call `convert_pt2e` to quantize the model.
6) Export and lower the model using the standard flow.

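As a rough illustration of steps 1–6 in the hunk above, here is a minimal PT2E sketch for XNNPACK. The import paths, the quantizer class capitalization, and the `get_symmetric_quantization_config` helper are assumptions (they have moved between releases); only the step sequence comes from the doc text.

```python
# Hypothetical PT2E sketch for XNNPACK 8-bit static quantization; paths may vary by version.
import torch
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (  # assumed import path
    XNNPACKQuantizer,                      # doc prose spells it `XnnpackQuantizer`
    get_symmetric_quantization_config,
)
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
sample_inputs = (torch.randn(1, 16),)

# 1) Quantizer instance with symmetric 8-bit parameters.
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())

# 2) Export for quantization, 3) prepare the model.
exported = torch.export.export(model, sample_inputs)
prepared = prepare_pt2e(exported.module(), quantizer)

# 4) Calibrate activation ranges with representative samples (static quantization).
prepared(*sample_inputs)

# 5) Convert to a quantized model.
quantized = convert_pt2e(prepared)

# 6) Export and lower via the standard flow.
program = to_edge_transform_and_lower(
    torch.export.export(quantized, sample_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()
```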
docs/source/devtools-overview.md

Lines changed: 1 addition & 1 deletion
@@ -41,6 +41,6 @@ More details are available in the [ETDump documentation](etdump.md) on how to ge


### Inspector APIs
-The Inspector Python APIs are the main user enrty point into the Developer Tools. They join the data sourced from ETDump and ETRecord to give users access to all the performance and debug data sourced from the runtime along with linkage back to eager model source code and module hierarchy in an easy to use API.
+The Inspector Python APIs are the main user entry point into the Developer Tools. They join the data sourced from ETDump and ETRecord to give users access to all the performance and debug data sourced from the runtime along with linkage back to eager model source code and module hierarchy in an easy to use API.

More details are available in the [Inspector API documentation](model-inspector.rst) on how to use the Inspector APIs.

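For orientation, a minimal sketch of the Inspector flow described in this hunk. The import path, constructor arguments, and file names are assumptions based on common ExecuTorch Developer Tools usage, not something this commit touches.

```python
# Hypothetical sketch: joining an ETDump with an ETRecord via the Inspector APIs.
from executorch.devtools import Inspector  # assumed import path

# Artifacts produced earlier: ETDump by the runtime, ETRecord at export time.
inspector = Inspector(etdump_path="etdump.etdp", etrecord="etrecord.bin")

# Print per-event performance data joined with the eager-source linkage.
inspector.print_data_tabular()
```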
docs/source/getting-started-architecture.md

Lines changed: 1 addition & 1 deletion
@@ -89,6 +89,6 @@ _Executor_ is the entry point to load the program and execute it. The execution

## Developer Tools

-It should be efficient for users to go from research to production using the flow above. Productivity is essentially important, for users to author, optimize and deploy their models. We provide [ExecuTorch Developer Tools](devtools-overview.md) to improve productivity. The Developer Tools are not in the diagram. Instead it's a tool set that covers the developer workflow in all three phases.
+It should be efficient for users to go from research to production using the flow above. Productivity is especially important, for users to author, optimize and deploy their models. We provide [ExecuTorch Developer Tools](devtools-overview.md) to improve productivity. The Developer Tools are not in the diagram. Instead it's a tool set that covers the developer workflow in all three phases.

During the program preparation and execution, users can use the ExecuTorch Developer Tools to profile, debug, or visualize the program. Since the end-to-end flow is within the PyTorch ecosystem, users can correlate and display performance data along with graph visualization as well as direct references to the program source code and model hierarchy. We consider this to be a critical component for quickly iterating and lowering PyTorch programs to edge devices and environments.

docs/source/getting-started.md

Lines changed: 1 addition & 1 deletion
@@ -89,7 +89,7 @@ input_tensor: torch.Tensor = torch.randn(1, 3, 224, 224)
program = runtime.load_program("model.pte")
method = program.load_method("forward")
output: List[torch.Tensor] = method.execute([input_tensor])
-print("Run succesfully via executorch")
+print("Run successfully via executorch")

from torchvision.models.mobilenetv2 import MobileNet_V2_Weights
import torchvision.models as models

docs/source/intro-how-it-works.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ At a high-level, there are three steps for running a PyTorch model with ExecuTor

1. **Export the model.** The first step is to capture the PyTorch program as a graph, which is a new representation of the model that can be expressed in terms of a series of operators such as addition, multiplication, or convolution. This process safely preserves the semantics of the original PyTorch program. This representation is the first step to enable running the model on edge use cases that have low memory and/or low compute.
1. **Compile the exported model to an ExecuTorch program.** Given an exported model from step 1, convert it to an executable format called an ExecuTorch program that the runtime can use for inference. This step provides entry points for various optimizations such as compressing the model (e.g., quantization) to reduce size and further compiling subgraphs down to on-device specialized hardware accelerators to improve latency. It also provides an entry point for memory planning, i.e. to efficiently plan the location of intermediate tensors to reduce the runtime memory footprint.
-1. **Run the ExecuTorch program on a target device.** Given an input--such as an image represented as an input activation tensor--the ExecuTorch runtime loads the ExecuTorch program, executes the instructions represented by the program, and computes an output. This step is efficient because (1) the runtime is lightweight and (2) an efficient execution plan has already been calculated in steps 1 and 2, making it possible to do performant inference. Furthermore, portability of the core runtime enabled performant execution even on highly-constrained devices.
+1. **Run the ExecuTorch program on a target device.** Given an input--such as an image represented as an input activation tensor--the ExecuTorch runtime loads the ExecuTorch program, executes the instructions represented by the program, and computes an output. This step is efficient because (1) the runtime is lightweight and (2) an efficient execution plan has already been calculated in steps 1 and 2, making it possible to do performant inference. Furthermore, portability of the core runtime enables performant execution even on highly-constrained devices.

This figure illustrates the three-step process of exporting a PyTorch program, compiling it into an ExecuTorch program that targets a specific hardware device, and finally executing the program on the device using the ExecuTorch runtime.
![name](_static/img/how-executorch-works-high-level.png)

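As an end-to-end illustration of the three steps in the hunk above, here is a minimal Python sketch: export, compile to a `.pte` program, and run it through the Python runtime bindings (on device, the C++ runtime plays this role). The API calls reflect common ExecuTorch usage and are assumptions, not content of this commit.

```python
# Hypothetical end-to-end sketch: export -> compile -> run.
import torch
from executorch.exir import to_edge      # assumed import path
from executorch.runtime import Runtime   # assumed import path

class AddModule(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# 1. Export the model: capture the PyTorch program as a graph.
exported = torch.export.export(AddModule(), (torch.randn(2), torch.randn(2)))

# 2. Compile the exported model to an ExecuTorch program and serialize it.
et_program = to_edge(exported).to_executorch()
with open("add.pte", "wb") as f:
    f.write(et_program.buffer)

# 3. Run the ExecuTorch program (here via the Python bindings).
runtime = Runtime.get()
method = runtime.load_program("add.pte").load_method("forward")
print(method.execute([torch.randn(2), torch.randn(2)]))
```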
docs/source/quantization-overview.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ Quantization in ExecuTorch is backend-specific. Each backend defines how models
The PT2E quantization workflow has three main steps:

1. Configure a backend-specific quantizer.
-2. Prepare, calibrate, convert, and evalute the quantized model in PyTorch
+2. Prepare, calibrate, convert, and evaluate the quantized model in PyTorch
3. Lower the model to the target backend

## 1. Configure a Backend-Specific Quantizer

docs/source/running-a-model-cpp-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -96,7 +96,7 @@ MemoryManager memory_manager(&method_allocator, &planned_memory);

## Loading a Method

-In ExecuTorch we load and initialize from the `Program` at a method granularity. Many programs will only have one method 'forward'. `load_method` is where initialization is done, from setting up tensor metadata, to intializing delegates, etc.
+In ExecuTorch we load and initialize from the `Program` at a method granularity. Many programs will only have one method 'forward'. `load_method` is where initialization is done, from setting up tensor metadata, to initializing delegates, etc.

``` cpp
Result<Method> method = program->load_method(method_name);
