Rename runtime-config to executor-config and add documentation for Model Library Format #8270

Merged Jul 20, 2021 · 7 commits
Changes from 2 commits
1 change: 1 addition & 0 deletions docs/dev/index.rst
@@ -410,3 +410,4 @@ microTVM
:maxdepth: 1

microtvm_design
model_library_format
167 changes: 167 additions & 0 deletions docs/dev/model_library_format.rst
@@ -0,0 +1,167 @@
.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

.. http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.

Model Library Format
====================
Comment on lines +18 to +19

Contributor: I think it would be beneficial to introduce MLF as an acronym for Model Library Format early in the document, so that we can get people used to it?

Suggested change: "Model Library Format" → "Model Library Format (MLF)" (with the heading underline widened to match).

Contributor Author: I haven't used the acronym yet in this doc, though. I agree it's an easy shorthand for the format; maybe it would make more sense in the tvmc docs, where it's a command-line param? WDYT?

About Model Library Format
--------------------------

TVM traditionally exports generated libraries as Dynamic Shared Objects
(e.g. DLLs (Windows) or .so (linux)). Inference can be performed on those libraries by loading them
Contributor: Suggested change: "Inference can be performed on those libraries by loading them" → "Inferences can be performed on those libraries by loading them"

Contributor Author: done

into an executable using ``libtvm_runtime.so``. This process is very dependent on services provided
by a traditional OS.

For deployment to unconventional platforms (e.g. those lacking traditional OS), the microTVM project
Contributor: Maybe use "embedded" instead? Suggested change: "For deployment to unconventional platforms" → "For deployment to embedded platforms".

Contributor Author: I feel like it's not strictly limited to embedded, though.

Contributor: I agree. To that end, I think we should word this as a general output format containing strictly what TVM (code)generates. The creation of .so/.dll requires external toolchains invoked after TVM compilation, such as LLVM and C compilers. WDYT?

can be used to export a generated library in pieces. In this case, microTVM provides another output
format, Model Library Format. Model Library Format is a tarball containing a file for each part of
Contributor: I think we should not use "microTVM provides another output format"; we could instead use microTVM as an example of where this output format becomes useful.

Contributor Author: done

the TVM compiler output.

What can be Exported
Contributor: Suggested change: "What can be Exported" → "What can be exported?"

Contributor: just nit-picking :)

Contributor Author: done

--------------------

At the time of writing, export is limited to full models built with ``tvm.relay.build``.

Directory Layout
----------------

Model Library Format is traditionally contained within a tarball. All paths are relative to the root
Contributor: Suggested change: "Model Library Format is traditionally contained within a tarball." → "Model Library Format is contained within a tarball."

Contributor: nit: we don't have any other way, do we?

Contributor Author: good point :)

of the tarball:

- ``/`` - Root of the tarball

- ``codegen`` - Root directory for all generated device code

- (see `codegen`_ section)

- ``executor-config/`` - Configuration for the executor which drives model inference

- ``graph/`` - Root directory containing configuration for the GraphExecutor

- ``graph.json`` - GraphExecutor JSON configuration

- ``metadata.json`` - Machine-parseable metadata for this model

- ``parameters/`` - Root directory where simplified parameters are placed

- ``<model_name>.params`` - Parameters for the model, in ``tvm.relay._save_params`` format

- ``src/`` - Root directory for all source code consumed by TVM

- ``relay.txt`` - Relay source code for the generated model
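The layout above can be exercised with a small stdlib-only sketch that builds a tarball with the same member paths and lists it back. The file contents are placeholders, not real TVM output, and ``my_model`` is a hypothetical model name:

```python
import io
import json
import tarfile

def add_member(tf, name, data):
    # Add one in-memory file to the tar archive.
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

# Build a minimal tarball following the Model Library Format layout.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    add_member(tf, "metadata.json", json.dumps({"version": 3}).encode())
    add_member(tf, "executor-config/graph/graph.json", b"{}")
    add_member(tf, "parameters/my_model.params", b"")
    add_member(tf, "src/relay.txt", b"")
    add_member(tf, "codegen/host/src/lib0.c", b"")

# Read it back and list the member paths.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tf:
    names = sorted(tf.getnames())
print(names)
```

A real export (``tvm.micro.export_model_library_format``) produces the same shape of tree, with the generated code and parameters filled in.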

Description of Sub-directories
------------------------------

.. _subdir_codegen:

``codegen``
^^^^^^^^^^^

All TVM-generated code is placed in this directory. At the time of writing, there is 1 file per
Module in the generated Module tree, though this restriction may change in the future. Files in
this directory should have filenames of the form ``<target>/(lib|src)/<unique_name>.<format>``.

These components are described below:

* ``<target>`` - Identifies the TVM target on which the code should run. Currently, only ``host``
is supported.
* ``<unique_name>`` - A unique slug identifying this file. Currently ``lib<n>``, with ``<n>`` an
autoincrementing integer.
Member: maybe change to "auto-incrementing"?

Contributor Author: done

* ``<format>`` - Suffix identifying the filename format. Currently ``c`` or ``o``.
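The filename convention can be checked with a small illustrative parser. The regex below is an assumption based on the description here, not an official TVM utility:

```python
import re

# <target>/(lib|src)/<unique_name>.<format>, per the convention above.
CODEGEN_RE = re.compile(
    r"^(?P<target>[^/]+)/(?P<kind>lib|src)/(?P<name>[^/.]+)\.(?P<fmt>c|o)$"
)

def parse_codegen_path(path):
    # Split a codegen member path into its named components.
    m = CODEGEN_RE.match(path)
    if m is None:
        raise ValueError(f"not a codegen path: {path}")
    return m.groupdict()

print(parse_codegen_path("host/src/lib0.c"))
```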

An example directory tree for a CPU-only model is shown below:

- ``codegen/`` - Codegen directory

- ``host/`` - Generated code for ``target_host``

- ``lib/`` - Generated binary object files

- ``lib0.o`` - LLVM module (if ``llvm`` target is used)
- ``lib1.o`` - LLVM CRT Metadata Module (if ``llvm`` target is used)
- ``src/`` - Generated C source

- ``lib0.c`` - C module (if ``c`` target is used)
- ``lib1.c`` - C CRT Metadata module (if ``c`` target is used)

``executor-config``
^^^^^^^^^^^^^^^^^^^

Contains machine-parseable configuration for executors which can drive model inference. Currently,
only the GraphExecutor produces configuration for this directory, in ``graph/graph.json``. This
file should be read in and the resulting string supplied to the ``GraphExecutor()`` constructor for
parsing.
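A minimal sketch of reading that file from an extracted tree (stdlib only; the synthetic ``graph.json`` content is a placeholder, and the executor API that consumes the resulting string varies by TVM version):

```python
import json
import os
import tempfile

def read_graph_config(extract_dir):
    # Load executor-config/graph/graph.json as a string; this string is what
    # would be handed to the graph executor constructor.
    path = os.path.join(extract_dir, "executor-config", "graph", "graph.json")
    with open(path) as f:
        return f.read()

# Demonstrate against a synthetic extracted tree.
with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "executor-config", "graph"))
    with open(os.path.join(d, "executor-config", "graph", "graph.json"), "w") as f:
        json.dump({"nodes": []}, f)
    graph_json = read_graph_config(d)

print(graph_json)
```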

``parameters``
^^^^^^^^^^^^^^

Contains machine-parseable parameters. A variety of formats may be provided, but at present, only
Member: parsable?

Contributor Author: done

the format produced by ``tvm.relay._save_params`` is supplied. When building with
``tvm.relay.build``, the ``name`` parameter is considered to be the model name. A single file is
created in this directory, ``<model_name>.params``.

``src``
^^^^^^^

Contains source code parsed by TVM. Currently, just the Relay source code is created in
``src/relay.txt``.

Metadata
--------

Machine-parseable metadata is placed in a file ``metadata.json`` at the root of the tarball.
Metadata is a dictionary with these keys:

- ``export_datetime``: Timestamp when this Model Library Format was generated, in
`strftime <https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior>`_
format ``"%Y-%m-%d %H:%M:%SZ"``.
- ``memory``: A summary of the memory usage of each generated function. Documented in
`Memory Usage Summary`_.
- ``model_name``: The name of this model (e.g. the ``name`` parameter supplied to
``tvm.relay.build``).
- ``runtimes``: A list of runtimes supported by this model. Currently, this list is always
Contributor: maybe ``executors``?

Contributor Author: good catch :)

``["graph"]``.
- ``target``: A dictionary mapping ``device_type`` (the underlying integer, as a string) to the
sub-target which describes the Relay backend used for that ``device_type``.
- ``version``: A numeric version number that identifies the format used in this Model Library
Format. This number is incremented when the metadata structure or on-disk structure changes.
This document reflects version ``3``.
Contributor: Suggested change: "This document reflects version ``3``." → "This document reflects version ``5``."

Contributor Author: done
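Putting the keys together, a hand-written equivalent of ``metadata.json`` might look like the sketch below. The values are illustrative only (real files are produced by TVM's export), and ``check_metadata`` is a hypothetical helper:

```python
# Illustrative metadata dictionary mirroring the documented keys.
metadata = {
    "export_datetime": "2021-07-20 00:00:00Z",
    "memory": {"main": [], "operator_functions": {}},
    "model_name": "my_model",
    "runtimes": ["graph"],
    "target": {"1": "llvm -keys=cpu"},
    "version": 3,
}

def check_metadata(md):
    # Verify the documented keys are present and return the format version.
    required = {"export_datetime", "memory", "model_name",
                "runtimes", "target", "version"}
    missing = required - md.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return md["version"]

print(check_metadata(metadata))
```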


Memory Usage Summary
^^^^^^^^^^^^^^^^^^^^

A dictionary with these sub-keys:

- ``"main"``: ``list[MainFunctionWorkspaceUsage]``. A list summarizing memory usage for each
workspace used by the main function and all sub-functions invoked.
- ``"operator_functions"``: ``map[string, list[FunctionWorkspaceUsage]]``. Maps operator function
name to a list summarizing memory usage for each workspace used by the function.

A ``MainFunctionWorkspaceUsage`` is a dict with these keys:

- ``"device"``: ``int``. The ``device_type`` associated with this workspace.
- ``"workspace_size_bytes"``: ``int``. Number of bytes needed in this workspace by this function
and all sub-functions invoked.
- ``"constants_size_bytes"``: ``int``. Size of the constants used by the main function.
- ``"io_size_bytes"``: ``int``. Sum of the sizes of the buffers used from this workspace by this
function and sub-functions.

A ``FunctionWorkspaceUsage`` is a dict with these keys:

- ``"device"``: ``int``. The ``device_type`` associated with this workspace.
- ``"workspace_size_bytes"``: ``int``. Number of bytes needed in this workspace by this function.
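As a sketch of consuming this structure, the per-device workspace requirement of the main function can be totalled from the ``memory`` dictionary. ``workspace_bytes_by_device`` is a hypothetical helper and the sample data is made up:

```python
def workspace_bytes_by_device(memory):
    # Sum main-function workspace bytes per device_type. Each "main" entry
    # already includes sub-function requirements, per the description above.
    totals = {}
    for usage in memory["main"]:
        dev = usage["device"]
        totals[dev] = totals.get(dev, 0) + usage["workspace_size_bytes"]
    return totals

memory = {
    "main": [
        {"device": 1, "workspace_size_bytes": 1024,
         "constants_size_bytes": 256, "io_size_bytes": 64},
    ],
    "operator_functions": {
        "fused_add": [{"device": 1, "workspace_size_bytes": 512}],
    },
}

print(workspace_bytes_by_device(memory))
```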
4 changes: 2 additions & 2 deletions python/tvm/micro/model_library_format.py
@@ -227,7 +227,7 @@ def export_model_library_format(mod: executor_factory.ExecutorFactoryModule, fil
runtime = ["aot"] if is_aot else ["graph"]

metadata = {
- "version": 2,
+ "version": 3,
"model_name": mod.libmod_name,
"export_datetime": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%SZ"),
"memory": _build_memory_map(mod),
@@ -252,7 +252,7 @@ def export_model_library_format(mod: executor_factory.ExecutorFactoryModule, fil
f.write(str(mod.ir_mod))

if not is_aot:
- graph_config_dir_path = tempdir.relpath(os.path.join("runtime-config", "graph"))
+ graph_config_dir_path = tempdir.relpath(os.path.join("executor-config", "graph"))
os.makedirs(graph_config_dir_path)
with open(os.path.join(graph_config_dir_path, "graph.json"), "w") as f:
f.write(mod.get_executor_config())
2 changes: 1 addition & 1 deletion tests/python/unittest/test_micro_model_library_format.py
@@ -33,7 +33,7 @@


def validate_graph_json(extract_dir, factory):
- with open(os.path.join(extract_dir, "runtime-config", "graph", "graph.json")) as graph_f:
+ with open(os.path.join(extract_dir, "executor-config", "graph", "graph.json")) as graph_f:
graph_json = graph_f.read()
assert graph_json == factory.graph_json
