
Conversation

@Lunderberg
Contributor

This is a reapplication of #15954, after resolving the breakages that required it to be reverted in #16442. The regex matching is now implemented without `#include <regex>` from the C++ stdlib, to avoid ABI incompatibility with pytorch.

Prior to this commit, the DecomposeOpsForTraining transform directly replaced relax.nn.batch_norm with more primitive relax operations. This required the decomposed form of relax.nn.batch_norm to be duplicated in DecomposeOpsForInference. This commit refactors the pass into two steps: first apply the training-specific mutations, then decompose.

Having a dedicated DecomposeOps pass also provides a single, clear location for operator decomposition, which may be migrated into the operator definitions in the future, similar to FLegalize.
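
For reference, the existing user-facing entry points are unchanged by this refactor: a module containing these ops is still decomposed before building. A minimal sketch of that flow, assuming `mod` is an existing `relax.IRModule` that contains `R.nn.batch_norm` or `R.nn.layer_norm` calls:

```python
import tvm
from tvm import relax

# `mod` is assumed to be an existing relax.IRModule that uses
# R.nn.batch_norm or R.nn.layer_norm.

# Inference flow: decompose the composite ops, then build.
mod = relax.transform.DecomposeOpsForInference()(mod)
ex = relax.build(mod, target="llvm")

# Training flow: apply the training-specific decomposition instead.
# mod = relax.transform.DecomposeOpsForTraining()(mod)
```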

The new regex utility function should be used instead of `std::regex` at C++ call
sites, to avoid ABI incompatibilities with pytorch.

Currently, the pytorch wheels available through pip install use the
pre-C++11 ABI by setting `-DUSE_CXX11_ABI=0` [0]. If TVM were to use
the pre-C++11 ABI, this would cause breakages with dynamically-linked
LLVM environments.

Use of the `<regex>` header in TVM should be avoided, as its
implementation is not supported by gcc's dual ABI. This ABI
incompatibility results in runtime errors either when `std::regex` is
called from TVM, or when `std::regex` is called from pytorch,
depending on which library was loaded first.  This restriction can be
removed when a version of pytorch compiled using `-DUSE_CXX11_ABI=1`
is available from PyPI.

[0] pytorch/pytorch#51039
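
For illustration, one way to provide regex matching to C++ call sites without touching `<regex>` is to delegate to Python's `re` module through the TVM FFI. This is only a sketch of that approach; the registered function name `tvm.support.regex_match` and the exact delegation strategy are assumptions, not confirmed details of the PR's implementation:

```python
import re

import tvm

# Sketch only: register a global packed function backed by Python's `re`.
# A C++ call site can then look this function up through the FFI instead of
# instantiating std::regex.  The function name below is illustrative.
@tvm.register_func("tvm.support.regex_match")
def _regex_match(pattern: str, match_against: str) -> bool:
    return re.fullmatch(pattern, match_against) is not None
```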
@Lunderberg force-pushed the transform_decompose_ops_for_training branch from 8a8f8d9 to 317e8da on January 24, 2024 18:09
@slyubomirsky
Contributor

Is having the two separate passes necessary for reducing code duplication for batch norm? It does come at the cost of an extra traversal.

@Lunderberg
Contributor Author

It isn't strictly necessary, but I'd like to move in that direction as a first step toward removing the DecomposeOpsForInference step altogether. This came up in a conversation here regarding potential use cases of allowing FLegalize to be implemented in terms of other relax operations.

Currently, there are two distinct transforms, DecomposeOpsForTraining and DecomposeOpsForInference. If a model uses either R.nn.batch_norm or R.nn.layer_norm, a user must apply one of these two transforms prior to calling tvm.relax.build. The long-term goal of this change is to remove that requirement. By splitting the transform into two steps, the training flow has an optional pass MutateOpsForTraining followed by a mandatory pass DecomposeOps, while the inference flow has the single mandatory pass DecomposeOps.
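
Sketched in terms of pass invocations (where `MutateOpsForTraining` and `DecomposeOps` are the names proposed here, not passes that exist at the time of writing), the two flows would look roughly like:

```python
from tvm import relax

# Proposed training flow (pass names are hypothetical, per the proposal above):
# mod = relax.transform.MutateOpsForTraining()(mod)  # optional, training only
# mod = relax.transform.DecomposeOps()(mod)          # mandatory

# Proposed inference flow:
# mod = relax.transform.DecomposeOps()(mod)          # mandatory
# ex = relax.build(mod, target="llvm")
```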

Since the DecomposeOps pass is required for both use cases, and the change it makes is a special case of FLegalize, a follow-up PR can then define FLegalize for R.nn.batch_norm and R.nn.layer_norm, and remove DecomposeOps altogether.
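
As a rough illustration of what an FLegalize written in terms of other relax operations could look like, the sketch below builds a layer-norm-style decomposition with the relax block builder. The helper name, the attribute handling, and the float32 epsilon constant are all assumptions for illustration; this is not the registration that a follow-up PR would add:

```python
from tvm import relax

def decomposed_layer_norm(bb: relax.BlockBuilder, data, gamma, beta, axes, epsilon):
    """Illustrative sketch: express a layer_norm-style op via primitive relax ops."""
    # Normalize over the reduction axes, then rescale by gamma and shift by beta.
    mean = bb.emit(relax.op.mean(data, axes, keepdims=True))
    var = bb.emit(relax.op.variance(data, axes, keepdims=True))
    inv_std = bb.emit(relax.op.rsqrt(relax.op.add(var, relax.const(float(epsilon), "float32"))))
    normalized = bb.emit(relax.op.multiply(relax.op.subtract(data, mean), inv_std))
    return bb.emit(relax.op.add(relax.op.multiply(normalized, gamma), beta))
```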

@slyubomirsky
Contributor

Ah, that seems like a good reason then. 👍 Some bigger simplifications in the works.

Contributor

@slyubomirsky left a comment


These changes seem reasonable and, per your comment, set us up for further simplifications down the line.

@Lunderberg
Contributor Author

Sounds good. Re-running CI as I let the results get more stale than I'd like, then (assuming no new failures arise) merging in.

@Lunderberg merged commit 8da3de1 into apache:main on Feb 6, 2024
@Lunderberg deleted the transform_decompose_ops_for_training branch on February 6, 2024 14:02