From 2e22f9692f807ce9ffae36a58b680d6115e6d4fd Mon Sep 17 00:00:00 2001
From: Lion Yang
Date: Thu, 13 Feb 2025 22:22:45 +0800
Subject: [PATCH 1/2] [DOCS] Correct the "Getting Started" link

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9291ce71ca24..1e18ed591c23 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,7 @@ TVM is licensed under the [Apache-2.0](LICENSE) license.
 Getting Started
 ---------------
 Check out the [TVM Documentation](https://tvm.apache.org/docs/) site for installation instructions, tutorials, examples, and more.
-The [Getting Started with TVM](https://tvm.apache.org/docs/tutorial/introduction.html) tutorial is a great
+The [Getting Started with TVM](https://tvm.apache.org/docs/get_started/overview.html) tutorial is a great
 place to start.
 
 Contribute to TVM

From b80a649b1722a56604bf49a1242fd6663eade41e Mon Sep 17 00:00:00 2001
From: Lion Yang
Date: Thu, 13 Feb 2025 22:30:55 +0800
Subject: [PATCH 2/2] [DOCS] Fix typos in README.md

---
 README.md | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 1e18ed591c23..85e924e4ac80 100644
--- a/README.md
+++ b/README.md
@@ -23,8 +23,8 @@
 [Release Notes](NEWS.md)
 
 Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between the
-productivity-focused deep learning frameworks, and the performance- and efficiency-focused hardware backends.
-TVM works with deep learning frameworks to provide end to end compilation to different backends.
+productivity-focused deep learning frameworks and the performance- and efficiency-focused hardware backends.
+TVM works with deep learning frameworks to provide end-to-end compilation for different backends.
 
 License
 -------
@@ -38,26 +38,26 @@ place to start.
 
 Contribute to TVM
 -----------------
-TVM adopts apache committer model, we aim to create an open source project that is maintained and owned by the community. 
+TVM adopts the Apache committer model. We aim to create an open-source project maintained and owned by the community.
 Check out the [Contributor Guide](https://tvm.apache.org/docs/contribute/).
 
 History and Acknowledgement
 ---------------------------
-TVM started as a research project for deep learning compiler.
-The first version of the project benefited a lot from following projects:
+TVM started as a research project for deep learning compilation.
+The first version of the project benefited a lot from the following projects:
 - [Halide](https://github.com/halide/Halide): Part of TVM's TIR and arithmetic simplification module
-  originates from Halide. We also learned and adapted some part of lowering pipeline from Halide.
+  originates from Halide. We also learned and adapted some parts of the lowering pipeline from Halide.
 - [Loopy](https://github.com/inducer/loopy): use of integer set analysis and its loop transformation primitives.
 - [Theano](https://github.com/Theano/Theano): the design inspiration of symbolic scan operator for recurrence.
 
 Since then, the project has gone through several rounds of redesigns.
 The current design is also drastically different from the initial design, following the
-development trend of ML compiler community.
+development trend of the ML compiler community.
 
-The most recent version focuses on a cross-level design with TensorIR as tensor-level representation
-and Relax as graph level representation, and python-first transformations.
-The current design goal of the project is to make the ML compiler accessible by enabling most
+The most recent version focuses on a cross-level design with TensorIR as the tensor-level representation
+and Relax as the graph-level representation, along with Python-first transformations.
+The project's current design goal is to make the ML compiler accessible by enabling most
 transformations to be customizable in Python and bringing a cross-level representation that can jointly
-optimize computational graphs, tensor programs, and libraries. The project also serves as a foundation
-infra to build python-first vertical compilers for various domains, such as LLMs.
+optimize computational graphs, tensor programs, and libraries. The project is also a foundational
+infrastructure for building Python-first vertical compilers for various domains, such as LLMs.
 