gallery/how_to/work_with_microtvm/micro_mlperftiny.py (1 addition, 1 deletion)

@@ -166,7 +166,7 @@
 # Select a Zephyr board
 BOARD = os.getenv("TVM_MICRO_BOARD", default="nucleo_l4r5zi")

-# Get the the full target description using the BOARD
+# Get the full target description using the BOARD
 TARGET = tvm.micro.testing.get_target("zephyr", BOARD)

 ######################################################################
include/tvm/relay/transform.h (1 addition, 1 deletion)

@@ -492,7 +492,7 @@ TVM_DLL Pass SimplifyExprPostAlterOp();
  * A typical custom pass will:
  * - Find calls to "Compiler" attributes functions with matching compiler name.
  * - Lower those function to TIR PrimFuncs.
- * - Bind those functions into the IRModule under the the functions' "global_symbol" attribute.
+ * - Bind those functions into the IRModule under the functions' "global_symbol" attribute.
  * - Replace all calls to those functions with 'call_lowered' to the matching global.
  * Care should be taken to handle multiple calls to the same function.
  * See src/relay/backend/contrib/example_target_hooks/relay_to_tir.cc for an example custom pass.
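To sketch the shape such a custom pass takes on the Python side (an illustrative skeleton only: the name and the no-op body are assumptions for exposition, and the real, complete example is the C++ file referenced above):

import tvm

@tvm.transform.module_pass(opt_level=0, name="ExampleRelayToTIR")
class ExampleRelayToTIR:
    def transform_module(self, mod, ctx):
        # A real pass would, following the steps listed above:
        #   1. find relay Functions whose "Compiler" attribute matches its name,
        #   2. lower each of them to a tir.PrimFunc,
        #   3. add the PrimFunc to mod under the function's "global_symbol",
        #   4. rewrite calls to those functions into call_lowered calls.
        # This skeleton performs none of those steps and returns mod unchanged.
        return mod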
include/tvm/tir/analysis.h (2 additions, 2 deletions)

@@ -281,15 +281,15 @@ TVM_DLL size_t CalculateWorkspaceBytes(const PrimFunc& func,

 /*!
  * \brief Calculate the allocated memory per scope in bytes needed inside the TIR PrimFunc
- * \param func The TIR PrimFunc for which the the allocated memory size to be calculated
+ * \param func The TIR PrimFunc for which the allocated memory size to be calculated
  * \return Allocated memory size per scope in bytes inside the PrimFunc returned as a Map with
  * key "main" and a Map of allocated sizes as values.
  */
 TVM_DLL tvm::Map<String, tvm::Map<String, Integer>> CalculateAllocatedBytes(const PrimFunc& func);

 /*!
  * \brief Calculate the allocated memory per scope in bytes for each function inside the module
- * \param mod The IRModule for which the the allocated memory size has to be calculated
+ * \param mod The IRModule for which the allocated memory size has to be calculated
  * \return Allocated memory size per scope in bytes for each function in the IRModule returned as a
  * Map with function names as keys and a Map of allocated sizes as values.
  */
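A minimal sketch of exercising this analysis from Python, assuming the binding tvm.tir.analysis.calculate_allocated_bytes mirrors the C++ API above and that the analysis runs on lowered TIR; the printed result is illustrative:

import tvm
from tvm.script import tir as T

@T.prim_func
def scale(a: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (128,), "float32")
    C = T.match_buffer(c, (128,), "float32")
    # Temporary buffer: the allocation the analysis should report.
    B = T.alloc_buffer((128,), "float32")
    for i in range(128):
        with T.block("compute"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] * 2.0
    for i in range(128):
        with T.block("copy"):
            vi = T.axis.spatial(128, i)
            C[vi] = B[vi]

mod = tvm.lower(scale)
sizes = tvm.tir.analysis.calculate_allocated_bytes(mod["main"])
print(sizes)  # e.g. {"main": {"global": 512}} -- 128 float32 elements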
include/tvm/tir/schedule/schedule.h (1 addition, 1 deletion)

@@ -480,7 +480,7 @@ class ScheduleNode : public runtime::Object {
                                   const String& storage_scope, const IndexMap& index_map) = 0;
   /*!
    * \brief Create 2 blocks that read&write a buffer region into a read/write cache.
-   * It requires the the target block both read & write the target buffer.
+   * It requires the target block both read & write the target buffer.
    * \param block_rv The target block operates on the target buffer.
    * \param read_buffer_index The index of the buffer in block's read region.
    * \param storage_scope The target storage scope
python/tvm/relay/op/contrib/dnnl.py (1 addition, 1 deletion)

@@ -1165,7 +1165,7 @@ def callback(self, pre, post, node_map):


 def rewrite_resnetv1(mod):
-    """Rewrite the the ResNetV1 downsize block to reduce the computation complexity."""
+    """Rewrite the ResNetV1 downsize block to reduce the computation complexity."""
     mod["main"] = rewrite(ResNetV1Rewrite(), mod["main"])
     return mod
python/tvm/tir/schedule/schedule.py (1 addition, 1 deletion)

@@ -1617,7 +1617,7 @@ def cache_inplace(
         storage_scope: str,
     ) -> List[BlockRV]:
         """Create blocks that reads & write a buffer region into a cache block.
-        It requires the the target block both read & write the target buffer.
+        It requires the target block both read & write the target buffer.
         Mainly for inplace operation.

         Parameters
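A minimal usage sketch of this primitive; the PrimFunc and storage scope below are illustrative assumptions, not taken from the PR:

import tvm
from tvm.script import tir as T

# A block that both reads and writes buffer A, satisfying the
# precondition stated in the docstring above.
@T.prim_func
def inplace_add(a: T.handle) -> None:
    A = T.match_buffer(a, (128,), "float32")
    for i in range(128):
        with T.block("update"):
            vi = T.axis.spatial(128, i)
            A[vi] = A[vi] + 1.0

sch = tvm.tir.Schedule(inplace_add)
block = sch.get_block("update")
# Creates a read cache and a write cache around "update";
# both new blocks are returned.
cache_blocks = sch.cache_inplace(block, read_buffer_index=0, storage_scope="local")
print(sch.mod.script())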
python/tvm/topi/hexagon/compute_poolarea.py (1 addition, 1 deletion)

@@ -114,7 +114,7 @@ def compute_PoolArea(i, j, ih, iw, kh, kw, sh, sw, dh, dw, pad_top, pad_left):
     # data boundary, we should move the edge to the right untill we get to the first dilated kernel
     # point inside the input data boundary.
     # The third row of figures shows how this row adjustment can solve the problem.
-    # So the problem is reduced to finding the the first dilated kernel point inside the data
+    # So the problem is reduced to finding the first dilated kernel point inside the data
     # boundary.# For that, we can find the number of dialted points which are mapped to the padded
     # area and find the location of the next one which should be inside the input data:
     # num_of_prev_points = (pad_top - i * sh - 1) // dh
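To make the comment's formula concrete, a small worked example; the numbers are chosen for illustration and do not come from the PR:

# pad_top = 3, output row i = 0, stride sh = 1, dilation dh = 2:
# dilated kernel rows relative to the data start fall at -3, -1, 1, ...
pad_top, i, sh, dh = 3, 0, 1, 2
num_of_prev_points = (pad_top - i * sh - 1) // dh  # -> 1
# The dilated point after those is the first one inside the data:
first_row_inside = i * sh - pad_top + (num_of_prev_points + 1) * dh  # -> 1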
python/tvm/topi/hexagon/slice_ops/max_pool2d.py (1 addition, 1 deletion)

@@ -157,7 +157,7 @@ def STIR_schedule_nhwc_8h2w32c2w_nhwc_8h8w32c(
     #
     # 3) Ideally, the innermost loop variable will iterate only over the output
     # tensor's fastest-changing indices and nothing else. But in our case,
-    # our two innermost loops correspond to the the max operator's reduction axes.
+    # our two innermost loops correspond to the max operator's reduction axes.
     #
     # Finding a good way to satisfy all of these requirements at the same time is
     # left for future work.
src/runtime/hexagon/ops/conv2d_fp16_hvx.cc (1 addition, 1 deletion)

@@ -255,7 +255,7 @@ void conv_layer_fp16_hvx(DLTensor& cr_out, const DLTensor& cr_act,  // NOLINT(*)
  * height to finally get 32 elements representing 32 output channels.
  *
  * Since the output block also has the 8h2w32c2w format, the 32 elements of the next element
- * along the width is also added into the the same vector such that the first 32 channel elements
+ * along the width is also added into the same vector such that the first 32 channel elements
  * occupy the even lanes and the next 32 occupy the odd lanes to form a single 64-element vector
  * which is then stored
  */
src/tir/schedule/primitive.h (1 addition, 1 deletion)

@@ -374,7 +374,7 @@ TVM_DLL StmtSRef ReindexCacheWrite(ScheduleState self, const StmtSRef& block_sre
 /*!
  *!
  * \brief Create 2 blocks that read&write a buffer region into a read/write cache.
- * It requires the the target block both read & write the target buffer.
+ * It requires the target block both read & write the target buffer.
  * \param self The state of the schedule
  * \param block_sref The target block operates on the target buffer.
  * \param read_buffer_index The index of the buffer in block's read region.
tests/python/contrib/test_clml/infrastructure.py (1 addition, 1 deletion)

@@ -45,7 +45,7 @@ class Device:

     Notes
     -----
-    The test configuration will be loaded once when the the class is created. If the configuration
+    The test configuration will be loaded once when the class is created. If the configuration
     changes between tests, any changes will not be picked up.

     Parameters
tests/python/contrib/test_ethosn/test_codegen.py (2 additions, 2 deletions)

@@ -71,7 +71,7 @@ def test_experimental_compiler(capfd):
     tei.build(mod, {}, True, additional_config_args=additional_config_args)

     # Check for hints that the experimental compiler was activated.
-    # The support library logs a warning to say the the experimental
+    # The support library logs a warning to say the experimental
     # compiler is in use. Check that this warning was logged.
     captured = capfd.readouterr()
     assert (
@@ -98,7 +98,7 @@ def test_without_experimental_compiler(capfd):
     tei.build(mod, {}, True, additional_config_args=additional_config_args)

     # Check for hints that the experimental compiler was activated.
-    # The support library logs a warning to say the the experimental
+    # The support library logs a warning to say the experimental
     # compiler is in use. Check that this warning was logged.
     captured = capfd.readouterr()
     assert (