[Frontend, Tensorflow2] Adding TF2 frontend code with support for control flow ops #8142
Conversation
Co-authored-by: David Huang <[email protected]>
Co-authored-by: Rohan Mukherjee <[email protected]>
Co-authored-by: Srinidhi Goud <[email protected]>
Co-authored-by: Xingyu Zhou <[email protected]>
Co-authored-by: Xiao <[email protected]>
```diff
@@ -23,8 +23,7 @@
 from tvm.runtime.vm import VirtualMachine
 import tvm.contrib.graph_executor as runtime
-from tvm.relay.frontend.tensorflow import from_tensorflow
+from tvm.relay.frontend.tensorflow2 import from_tensorflow
```
Previously the existing parser from `relay/frontend/tensorflow.py` was used to test the basic TF2 ops, functions, and models. From this PR on, all of those tests, as well as the new ones, go through the new TF2 frontend parser, `relay/frontend/tensorflow2.py`.
```python
else:
    sym = [_expr.var(node.name, shape=input_shape, dtype=attr["dtype"].name)]
return input_shape, sym
```
The utility functions at the top are imported from `tensorflow.py`. Since they were methods inside a class, they had to be extracted. A future refactoring PR can address this to allow better code reuse; however, all TF1 unit tests and models would need to be tested alongside such a refactor.
```python
from tensorflow.python.framework import function_def_to_graph
from tensorflow.python.framework import tensor_util
from tensorflow.python.framework import dtypes
```
These imports could be merged into a single statement.
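For illustration, one merged import binds the same names as three separate lines. The sketch below uses a stdlib module so it runs anywhere; the TF equivalent (shown in a comment) is the actual target of the review comment.

```python
# Merging several imports from one module into a single statement.
# The TF equivalent of the three lines above would be:
#   from tensorflow.python.framework import dtypes, function_def_to_graph, tensor_util
from collections import OrderedDict, defaultdict, namedtuple

# each name is bound exactly as it would be with three separate import lines
counts = defaultdict(int)
counts["ops"] += 1
assert counts["ops"] == 1
```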
```python
def triple(x):
    return 3 * x

cond = True
```
Please add a test case for `cond = False` as well.
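For illustration, a plain-Python sketch of what covering both branches of such a conditional looks like (`double` and `branch` are hypothetical names here; the real tests would drive a `tf.cond`-style `If` node through the frontend):

```python
def double(x):
    return 2 * x

def triple(x):
    return 3 * x

def branch(cond, x):
    # mirrors an If node: take the triple branch when cond is True,
    # the double branch otherwise
    return triple(x) if cond else double(x)

# exercise both the True and the False paths
assert branch(True, 2) == 6
assert branch(False, 2) == 4
```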
```proto
// tensorflow/core/framework/attr_value.proto
message AttrValue {
  oneof value {
    bytes s = 2;                 // "string"
    int64 i = 3;                 // "int"
    float f = 4;                 // "float"
    bool b = 5;                  // "bool"
    DataType type = 6;           // "type"
    TensorShapeProto shape = 7;  // "shape"
    TensorProto tensor = 8;      // "tensor"
    ListValue list = 1;          // any "list(...)"
  }
}
```
Posting a link to the proto definition instead of pasting it here would be better.
""" | ||
fields = ["s", "i", "f", "b", "type", "shape", "tensor", "func"] | ||
|
||
x = buf |
Could `buf` be used directly instead of aliasing it to `x`?
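For context, a hedged sketch of the kind of oneof-field scan this code performs over `AttrValue`. The helper name `first_set_field` is hypothetical, and a plain dict stands in for the protobuf message so the snippet is self-contained:

```python
# Field names mirroring AttrValue's oneof members
FIELDS = ["s", "i", "f", "b", "type", "shape", "tensor", "func"]

def first_set_field(buf):
    """Return (field_name, value) for the first populated field in buf,
    or (None, None) if no field is set. buf is a dict standing in for
    the AttrValue protobuf message."""
    for field in FIELDS:
        if buf.get(field) is not None:
            return field, buf[field]
    return None, None

assert first_set_field({"i": 3}) == ("i", 3)
assert first_set_field({"s": b"relu"}) == ("s", b"relu")
```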
```python
return attrs


def convert_place_holder(shape, node, in_type=None):
```
```diff
-def convert_place_holder(shape, node, in_type=None):
+def convert_placeholder(shape, node, in_type=None):
```
```python
def convert_place_holder(shape, node, in_type=None):
    """convert tf place holder into relay var.
```
"""convert tf place holder into relay var. | |
"""convert tf placeholder into relay var. |
```python
self.mod = IRModule({})  # relay function and type definitions, defined in tvm/ir/module.py
self.params = {}  # constants (weights) in the entire relay module
self.prelude = Prelude(self.mod)  # relay.prelude, needed for tensorlist ops
```
Could these inline comments be removed?
```python
Parameters
----------
op_name : str
```
Please add definitions for `graph` and `node_name` as well.
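For illustration, a numpydoc-style parameter section covering `graph` and `node_name` alongside `op_name` might look like the sketch below. The function name and descriptions are hypothetical, not the frontend's actual code:

```python
def convert_op(graph, op_name, node_name):
    """Hypothetical docstring sketch covering all three parameters.

    Parameters
    ----------
    graph : tf.Graph
        The TensorFlow graph containing the node being converted.
    op_name : str
        Name of the TF operator to convert (e.g. "Relu").
    node_name : str
        Name of the specific node within the graph.
    """
    return (op_name, node_name)
```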
```python
from ..loops import while_loop as _while_loop
from .common import infer_type as _infer_type

from .tensorflow import _convert_map as _convert_map_tf1
```
```diff
-from .tensorflow import _convert_map as _convert_map_tf1
+from .tensorflow import _convert_map as _convert_map_common
```
```python
    None,
)
loop_inputs = convert_vars(inputs, while_func.signature.input_arg)
# in_shapes = nodes[node_name].attr["output_shapes"].list.shape
```
```diff
-# in_shapes = nodes[node_name].attr["output_shapes"].list.shape
```
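For context, the loop-variable style that relay's `while_loop` helper mirrors can be sketched in plain Python. All names below are illustrative, not the frontend's actual API:

```python
def while_loop(cond, body, loop_vars):
    """Functional while-loop: apply body while cond holds, threading loop_vars."""
    while cond(*loop_vars):
        loop_vars = body(*loop_vars)
    return loop_vars

# count i up to 5 while accumulating the running sum in s
result = while_loop(
    cond=lambda i, s: i < 5,
    body=lambda i, s: (i + 1, s + i),
    loop_vars=(0, 0),
)
assert result == (5, 10)
```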
Thanks for the comments @yongwww. I addressed the changes in the last commit.
LGTM
lgtm
…trol flow ops (apache#8142)

* adding tf control flow ops with a different frontend code
* Some minor fixes
* Fixing output order in TF2 outputs
* Using black
* Refactoring
* resolving a bug with passing output tensors for Functional Graphs
* fixing multi output for graph runtime
* adding docstring edits
* linting + black
* removing unnecessary output propagation across function
* addressed comments in PR

Co-authored-by: David Huang <[email protected]>
Co-authored-by: Rohan Mukherjee <[email protected]>
Co-authored-by: Srinidhi Goud <[email protected]>
Co-authored-by: Xingyu Zhou <[email protected]>
Co-authored-by: Xiao <[email protected]>
This PR introduces a new parser, `tensorflow2.py`. To keep the code separate from the original parser and to introduce TF2-related changes, much of the existing code is ported into this frontend. This commit follows the original plan of commits proposed to support a TF2 frontend in relay here: #4102. Exact commit plan here.

Some important points on introducing the TF2 frontend:

* Much of the code from `tensorflow.py` has been reused.
* Support for `PartitionedCall` and `StatelessPartitionedCall`, invoked as functional nodes.
* Support for `If`, `While`, and `StatelessWhile` has been introduced; these are also invoked as functional nodes.

Some points on testing:

* The unit tests in `tests/python/frontend/tensorflow2/` were used in evaluating the new parser. Previous unit tests cover the support for ops like `Add2D`, `Relu`, etc. in `test_functional.py` and some small end-to-end models in `test_sequential.py`.
* New tests are added in `test_functional.py` for ops like `If`, `While`, and `StatelessWhile`.
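For illustration, the ops the description calls "functional nodes" can be recognized by a simple membership check. This is only a sketch of the dispatch idea, not the frontend's actual conversion logic:

```python
# Ops whose attributes name another graph function to invoke
# (PartitionedCall / If / While style), per the PR description.
FUNCTIONAL_OPS = {
    "PartitionedCall",
    "StatelessPartitionedCall",
    "If",
    "While",
    "StatelessWhile",
}

def is_functional_node(op_type):
    """Return True when op_type is converted by invoking a subgraph function."""
    return op_type in FUNCTIONAL_OPS

assert is_functional_node("StatelessWhile")
assert not is_functional_node("Relu")
```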
.Co-authored-by: David Huang [email protected]
Co-authored-by: Rohan Mukherjee [email protected]
Co-authored-by: Srinidhi Goud [email protected]
Co-authored-by: Xingyu Zhou [email protected]
Co-authored-by: Xiao [email protected]