
Conversation

@ninesheep (Contributor) commented on Dec 19, 2022

Construct a module like the one below:

import tvm
from tvm import relay

p0 = relay.var("p0", shape=[32], dtype="float16")
p1 = relay.var("p1", shape=[32], dtype="float16")

# Multiply two fp16 tensors, round, then cast the result down to 8 bits.
x1 = relay.multiply(p0, p1)
x2 = relay.round(x1)
x3 = relay.cast(x2, "uint8")  # or "int8"
# Build the function from x3 so the cast is actually part of the graph.
func = relay.Function(relay.analysis.free_vars(x3), x3)
mod = tvm.IRModule.from_expr(func)

with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build_module.build(
        mod, target=tvm.target.Target("cuda", host="llvm"), params=None
    )

Building it reports an error like the following:

[screenshot of the CUDA build error attached in the original report]
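For context, here is a minimal CUDA sketch of the likely failure mode; the kernel name and signature are illustrative assumptions, not TVM's actual generated code. The cast to uint8 has to be lowered from fp16, and a direct half-to-8-bit conversion is not reliably available across CUDA toolkits, so the safe lowering routes the value through float first:

#include <cuda_fp16.h>

// Hypothetical kernel, assuming the fix amounts to converting through
// float instead of casting __half directly to an 8-bit integer type.
__global__ void cast_half_to_u8(const __half* in, unsigned char* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // half -> float (__half2float) -> uint8, avoiding a direct
        // __half -> unsigned char conversion.
        out[i] = static_cast<unsigned char>(__half2float(in[i]));
    }
}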


@tvm-bot (Collaborator) commented on Dec 19, 2022

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

  • No users to tag found in teams: bug, codegen, cuda. See #10317 for details.

Generated by tvm-bot

@ninesheep (Contributor, Author) commented:

cc @wrongtest-intellif

@wrongtest-intellif (Contributor) left a comment:


LGTM

@masahi merged commit cca84d3 into apache:main on Jan 2, 2023.
fzi-peccia pushed a commit to fzi-peccia/tvm that referenced this pull request on Mar 27, 2023:
* [Fix Bug] fix the bug of tensorflow frontend when parsing Range layer

* [Fix Bug] fix the bug of schedule batch_matmul_int8 on cuda

* fix cast fp16 to int8/uint8 on cuda

Co-authored-by: wangjiuyang <[email protected]>
