docs/arch/index.rst (7 additions & 7 deletions)
@@ -37,7 +37,7 @@ Overall Flow
 In this guide, we will study an example compilation flow in the compiler. The figure below shows the flow. At a high level, it contains several steps:

 - **Model Creation**: Create the IRModule to be optimized and compiled, which contains a collection of functions that internally represent the model.
-  Users can manually construct IRModule via NNModule, TVMScript, or import a pre-trained model from from Relax frontend.
+  Users can manually construct an IRModule via NNModule, TVMScript, or import a pre-trained model from the Relax frontend.
 - **Transformation**: The compiler transforms an IRModule to another functionally equivalent or approximately
   equivalent (e.g. in the case of quantization) IRModule. Many of the transformations are target (backend) independent.
   We also allow the target to affect the configuration of the transformation pipeline.
@@ -103,8 +103,8 @@ cross-level transformations
 Apache TVM brings a unity strategy to optimize end-to-end models. As the IRModule includes both relax and tir functions, the cross-level transformations are designed to mutate
 the IRModule by applying different transformations to these two types of functions.

-For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, add corresponding TIR PrimFunc into the IRModule, and replace the relax operators
-with calls to the lowered TIR PrimFunc. Another example is operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuse multiple consecutive tensor operations
+For example, the ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, adding the corresponding TIR PrimFuncs into the IRModule, and replacing the relax operators
+with calls to the lowered TIR PrimFuncs. Another example is the operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuses multiple consecutive tensor operations
 into one. Different from the previous implementations, the relax fusion pipeline analyzes the pattern of TIR functions and detects the best fusion rules automatically rather
 than relying on human-defined operator fusion patterns.
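The essence of a legalization pass like the one described above can be illustrated with a toy sketch in plain Python. This is not the TVM API; the graph encoding and lowering table are invented for illustration. Each high-level operator is deterministically rewritten into a call to a lowered function that is added alongside the graph, just as ``relax.LegalizeOps`` adds TIR PrimFuncs into the IRModule and rewrites the call sites:

```python
# Toy sketch (plain Python, not the TVM API) of a LegalizeOps-style pass.
def legalize(graph, lowering_table):
    """Replace each known high-level op node with a call to a lowered function."""
    lowered_funcs = {}   # plays the role of TIR PrimFuncs added to the IRModule
    new_graph = []
    for op, args in graph:
        if op in lowering_table:
            fname = f"lowered_{op}"
            lowered_funcs[fname] = lowering_table[op]
            new_graph.append(("call", fname, args))  # call into the lowered func
        else:
            new_graph.append((op, args))             # leave unknown ops untouched
    return new_graph, lowered_funcs

graph = [("add", ("x", "y")), ("relu", ("t0",))]
table = {"add": lambda a, b: a + b, "relu": lambda a: max(a, 0.0)}
new_graph, funcs = legalize(graph, table)
```

After the rewrite, the "module" holds both the rewritten graph and the lowered functions, mirroring how an IRModule holds both relax and tir functions.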
@@ -175,7 +175,7 @@ In summary, the key data structures in the compilation flows are:

 Most parts of the compilation are transformations among the key data structures.

-- relax/transform and tir/transform are determinstic rule-based transformations
+- relax/transform and tir/transform are deterministic rule-based transformations
 - meta-schedule contains the search-based transformations

 Finally, the compilation flow example is only a typical use-case of the TVM stack.
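The contrast between the two styles of transformation can be sketched in plain Python (a toy illustration, not TVM code; the expression encoding and cost function are invented):

```python
import random

# Rule-based style (relax/transform, tir/transform): a fixed rewrite rule,
# so the same input always produces the same output.
def fold_constants(expr):
    """Fold ("add", a, b) nodes whose operands are both constants."""
    if isinstance(expr, tuple):
        op, a, b = expr
        a, b = fold_constants(a), fold_constants(b)
        if op == "add" and isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return a + b
        return (op, a, b)
    return expr

# Search-based style (meta-schedule): sample candidate "schedules" and keep
# the one a cost function prefers; the result depends on the search budget.
def search_best(cost, candidates, trials=20, seed=0):
    rng = random.Random(seed)
    return min((rng.choice(candidates) for _ in range(trials)), key=cost)

folded = fold_constants(("add", 1, ("add", 2, 3)))        # -> 6
best_tile = search_best(lambda t: abs(t - 16), [4, 8, 16, 32, 64])
```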
@@ -246,7 +246,7 @@ The ability to save/store, and inspect an IR node provides a foundation for maki

 tvm/ir
 ------
-The `tvm/ir` folder contains the unified data structure and interfaces across for all IR function variants.
+The `tvm/ir` folder contains the unified data structures and interfaces across all IR function variants.
 The components in `tvm/ir` are shared by `tvm/relax` and `tvm/tir`; notable ones include

 - IRModule
@@ -299,7 +299,7 @@ tvm/relax
 ---------

 Relax is the high-level IR used to represent the computational graph of a model. Various optimizations are defined in ``relax.transform``.
-Note that Relax usually works closely the the TensorIR IRModule, most of the transformations are applied on the both Relax and TensorIR functions
+Note that Relax usually works closely with the TensorIR IRModule; most of the transformations are applied on both the Relax and TensorIR functions
 in the IRModule. Please refer to the :ref:`Relax Deep Dive <relax-deep-dive>` for more details.

 tvm/tir
@@ -329,7 +329,7 @@ TE stands for Tensor Expression. TE is a domain-specific language (DSL) for desc
 itself is not a self-contained function that can be stored into an IRModule. We can use ``te.create_prim_func`` to convert a tensor expression to a ``tir::PrimFunc``
 and then integrate it into the IRModule.

-While possible to construct operators directly via TIR or tensor expressions (TE) for each use case it is tedious to do so.
+While it is possible to construct operators directly via TIR or tensor expressions (TE) for each use case, it is tedious to do so.
 `topi` (Tensor operator inventory) provides a set of pre-defined operators defined by numpy and found in common deep learning workloads.