Description
import tvm
from tvm import relay
import numpy as np
x = relay.var("x", shape=(1, 1, 24, 48))
w1 = relay.const(np.random.uniform(size=(1, 1, 1, 1)))
w2 = relay.const(np.random.uniform(size=(1, 1, 1, 1)))
y = relay.nn.conv2d(x, w1, kernel_size=(1, 1), padding=(0, 0), channels=1)
y = relay.transpose(y, (0, 1, 3, 2))
z = relay.nn.conv2d(y, w2, kernel_size=(1, 1), padding=(0, 0), channels=1)
func = relay.Function([x], z)
mod = tvm.IRModule.from_expr(func)
print(mod)
# with tvm.transform.PassContext(opt_level=3, disabled_pass=["AlterOpLayout"]):
with tvm.transform.PassContext(opt_level=3):
res = relay.build_module.create_executor('graph', mod, target='llvm', device=tvm.cpu()).evaluate()(
np.random.uniform(size=(1, 1, 24, 48)).astype(np.float32))
print(res)

The above model first does a conv, then a transpose that swaps the H and W dimensions, and finally another conv. It fails with the error:
One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
The Relay type checker is unable to show the following types match:
Tensor[(1, 1, 24, 48), float32]
Tensor[(1, 1, 48, 24), float32]
The root cause is similar to https://discuss.tvm.apache.org/t/pytorch-layout-cannot-convert-f-linear-x-f-linear-y-z/10866. In short, during the AlterOpLayout pass, each dimension is assumed to carry a fixed semantic (e.g., H, W, O, I, ...); when an operator such as transpose breaks this assumption, the pass becomes fragile.
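To illustrate the shape bookkeeping behind the type error, here is a plain NumPy sketch (not TVM code, just an assumption-labeled illustration of the tensor shapes involved):

```python
import numpy as np

# After the first 1x1 conv with a single channel, the tensor shape is
# unchanged: still (1, 1, 24, 48) in NCHW layout.
x = np.random.uniform(size=(1, 1, 24, 48)).astype(np.float32)

# relay.transpose(y, (0, 1, 3, 2)) swaps axes 2 and 3, i.e. H and W.
y = np.transpose(x, (0, 1, 3, 2))
print(y.shape)  # (1, 1, 48, 24)

# If a layout-rewriting pass still labels axis 2 as H and axis 3 as W
# for the second conv, its inferred type (1, 1, 24, 48) clashes with the
# actual (1, 1, 48, 24) tensor -- the mismatch the type checker reports.
```

Disabling the pass (as in the commented-out `disabled_pass=["AlterOpLayout"]` line above) sidesteps the mismatch at the cost of losing that optimization.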
Environment
OS: Ubuntu 18.04
TVM: 6a274af
