
Commit 76132dc

Author: Youngsik Yang (committed)

[Doc] Fix typos in docs

This commit fixes typos in the Design and Architecture.
1 parent c6969d7 commit 76132dc

File tree

1 file changed: +7 -7 lines changed


docs/arch/index.rst

Lines changed: 7 additions & 7 deletions
@@ -37,7 +37,7 @@ Overall Flow
 In this guide, we will study an example compilation flow in the compiler. The figure below shows the flow. At a high-level, it contains several steps:

 - **Model Creation**: Create the IRModule to be optimized and compiled, which contains a collection of functions that internally represent the model.
-  Users can manually construct IRModule via NNModule, TVMScript, or import a pre-trained model from from Relax frontend.
+  Users can manually construct IRModule via NNModule, TVMScript, or import a pre-trained model from Relax frontend.
 - **Transformation**: The compiler transforms an IRModule to another functionally equivalent or approximately
   equivalent(e.g. in the case of quantization) IRModule. Many of the transformations are target (backend) independent.
   We also allow target to affect the configuration of the transformation pipeline.
@@ -103,8 +103,8 @@ cross-level transformations
 Apache TVM brings a unity strategy to optimize the end-to-end models. As the IRModule includes both relax and tir functions, the cross-level transformations are designed to mutate
 the IRModule by applying different transformations to these two types of functions.

-For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, add corresponding TIR PrimFunc into the IRModule, and replace the relax operators
-with calls to the lowered TIR PrimFunc. Another example is operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuse multiple consecutive tensor operations
+For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, adding corresponding TIR PrimFunc into the IRModule, and replacing the relax operators
+with calls to the lowered TIR PrimFunc. Another example is operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuses multiple consecutive tensor operations
 into one. Different from the previous implementations, relax fusion pipeline analyzes the pattern of TIR functions and detects the best fusion rules automatically rather
 than human-defined operator fusion patterns.

@@ -175,7 +175,7 @@ In summary, the key data structures in the compilation flows are:

 Most parts of the compilation are transformations among the key data structures.

-- relax/transform and tir/transform are determinstic rule-based transformations
+- relax/transform and tir/transform are deterministic rule-based transformations
 - meta-schedule contains the search-based transformations

 Finally, the compilation flow example is only a typical use-case of the TVM stack.
@@ -246,7 +246,7 @@ The ability to save/store, and inspect an IR node provides a foundation for maki

 tvm/ir
 ------
-The `tvm/ir` folder contains the unified data structure and interfaces across for all IR function variants.
+The `tvm/ir` folder contains the unified data structure and interfaces across all IR function variants.
 The components in `tvm/ir` are shared by `tvm/relax` and `tvm/tir`, notable ones include

 - IRModule
@@ -299,7 +299,7 @@ tvm/relax
 ---------

 Relax is the high-level IR used to represent the computational graph of a model. Various optimizations are defined in ``relax.transform``.
-Note that Relax usually works closely the the TensorIR IRModule, most of the transformations are applied on the both Relax and TensorIR functions
+Note that Relax usually works closely with the TensorIR IRModule, most of the transformations are applied on both Relax and TensorIR functions
 in the IRModule. Please refer to the :ref:`Relax Deep Dive <relax-deep-dive>` for more details.

 tvm/tir
@@ -329,7 +329,7 @@ TE stands for Tensor Expression. TE is a domain-specific language (DSL) for desc
 itself is not a self-contained function that can be stored into IRModule. We can use ``te.create_prim_func`` to convert a tensor expression to a ``tir::PrimFunc``
 and then integrate it into the IRModule.

-While possible to construct operators directly via TIR or tensor expressions (TE) for each use case it is tedious to do so.
+While possible to construct operators directly via TIR or tensor expressions (TE) for each use case, it is tedious to do so.
 `topi` (Tensor operator inventory) provides a set of pre-defined operators defined by numpy and found in common deep learning workloads.

 tvm/meta_schedule
