
[Bug][MXNet] MXNet dot not working with non 2D tensors  #11691

@petuca

Description


There is a problem converting the dot operation from an MXNet model when the input tensors are not 2-D.

For example, here we want to take the dot product of two tensors:

  • data tensor: [3]
  • weight tensor: [3,1]
import mxnet as mx
from mxnet import gluon
import tvm
from tvm import relay

shape_myx = (3,)       # shape of the data tensor (1-D)
shape_params = (3, 1)  # shape of the weight tensor (2-D)
transpose_b = False

class MyNetHybrid(gluon.HybridBlock):
    def __init__(self, **kwargs):
        super(MyNetHybrid, self).__init__(**kwargs)
        with self.name_scope():
            self.mat_weights = self.params.get('mat_weights', shape=shape_params)

    def hybrid_forward(self, F, x, mat_weights):
        # dot of a 1-D data tensor with a 2-D weight tensor
        x = F.dot(x, mat_weights, transpose_b=transpose_b)
        return x

mynet = MyNetHybrid()
mynet.initialize()

myx = mx.nd.uniform(shape=shape_myx)

shape_dict = {'data': myx.shape}
mod, params = relay.frontend.from_mxnet(mynet, shape_dict)
dev = tvm.cpu()

# Compilation fails here
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target='llvm', params=params)

This bug looks very similar to the ones reported in #10651 and in PR #11174 for ONNX and PyTorch models.

A similar error occurs with any non-2-D shape for either the data or the weight tensor.

Expected behavior

The model should compile with TVM, since it follows the MXNet dot specification and executes correctly in MXNet.
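For reference, a minimal check (assuming a standard MXNet install) that MXNet itself handles these shapes; dot contracts the last axis of the first tensor with the first axis of the second:

import mxnet as mx

a = mx.nd.uniform(shape=(3,))
w = mx.nd.uniform(shape=(3, 1))
print(mx.nd.dot(a, w).shape)  # (1,)

# The same contraction rule generalizes to higher ranks
x = mx.nd.uniform(shape=(2, 3, 4))
y = mx.nd.uniform(shape=(4, 5))
print(mx.nd.dot(x, y).shape)  # (2, 3, 5)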

Actual behavior

Traceback (most recent call last):

  File "/home/syrmia/anaconda3/envs/tvmenv/lib/python3.7/site-packages/spyder_kernels/py3compat.py", line 356, in compat_exec
    exec(code, globals, locals)

  File "/home/syrmia/Desktop/tvm_tutorial/my_scripts/untitled1.py", line 40, in <module>
    mod, params = relay.frontend.from_mxnet(mynet, shape_dict)

  File "/home/syrmia/tvm/python/tvm/relay/frontend/mxnet.py", line 2975, in from_mxnet
    func = _from_mxnet_impl(sym, shape, dtype, params, mod)

  File "/home/syrmia/tvm/python/tvm/relay/frontend/mxnet.py", line 2884, in _from_mxnet_impl
    res = _convert_map[op_name](*op_params)

  File "/home/syrmia/tvm/python/tvm/relay/frontend/mxnet.py", line 802, in _mx_dot
    raise tvm.error.OpAttributeUnimplemented("Only 2-D arrays are supported.")

AttributeError: module 'tvm.error' has no attribute 'OpAttributeUnimplemented'

When I comment out the lines that check the ranks in the mxnet.py frontend file, I get this error instead:

...
  File "/home/syrmia/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/syrmia/tvm/python/tvm/relay/op/nn/_nn.py", line 112, in alter_op_layout_dense
    return topi.nn.dense_alter_layout(attrs, inputs, tinfos, out_type)
  File "/home/syrmia/anaconda3/envs/tvmenv/lib/python3.7/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/syrmia/tvm/python/tvm/target/generic_func.py", line 286, in dispatch_func
    return dispatch_dict[k](*args, **kwargs)
  File "/home/syrmia/tvm/python/tvm/topi/x86/dense_alter_op.py", line 48, in _alter_dense_layout
    M, K = get_const_tuple(data_tensor.shape)
ValueError: not enough values to unpack (expected 2, got 1)

Steps to reproduce

The code above reproduces this problem.

Potential solution

Changing the _mx_dot function in mxnet.py to the following:

def _mx_dot(inputs, attrs):
    assert len(inputs) == 2

    a = inputs[0]
    b = inputs[1]

    rank_a = len(_infer_type(a).checked_type.shape)
    rank_b = len(_infer_type(b).checked_type.shape)

    if rank_a < 1 or rank_b < 1:
        raise tvm.error.OpAttributeInvalid("Unsupported shape of input tensors.")

    transpose_a = attrs.get_bool("transpose_a", False)
    transpose_b = attrs.get_bool("transpose_b", False)

    if transpose_a:
        msg = 'Value {} in attribute "transpose_a" of operator dot is not valid.'
        raise tvm.error.OpAttributeInvalid(msg.format(transpose_a))

    # Handle the shape of the result (out_shape): dot contracts the last
    # axis of a with the first axis of b.
    if rank_a == 1:
        out_shape = []
        a = _op.expand_dims(a, axis=0)
    else:
        shape_a = list(_infer_type(a).checked_type.shape)
        out_shape = shape_a[:-1]
        a = _op.reshape(a, newshape=(-1, shape_a[-1]))

    if rank_b == 1:
        if not out_shape:
            out_shape = [1]
        b = _op.expand_dims(b, axis=0)
    else:
        # Transpose matrix b if needed
        if transpose_b:
            trans_axes = list(range(rank_b))
            trans_axes = trans_axes[-1:] + trans_axes[:-1]
            b = _op.transpose(b, axes=trans_axes)

        shape_b = list(_infer_type(b).checked_type.shape)
        out_shape += shape_b[1:]

        # An additional transpose is mandatory since _op.nn.dense
        # transposes its second argument internally.
        b = _op.transpose(_op.reshape(b, newshape=(shape_b[0], -1)), axes=[1, 0])

    out = _op.reshape(_op.nn.dense(a, b), newshape=out_shape)
    return out
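To sanity-check the shape handling above, here is a small NumPy sketch that mirrors the same expand_dims/reshape/transpose steps (dot_via_dense is a hypothetical helper for illustration, not part of the patch), assuming nn.dense(x, w) computes x @ w.T:

import numpy as np

def dot_via_dense(a, b, transpose_b=False):
    # Mirror of the proposed _mx_dot shape logic, with
    # x @ w.T standing in for _op.nn.dense(x, w).
    out_shape = []
    if a.ndim == 1:
        a = a[np.newaxis, :]                    # expand_dims(a, axis=0)
    else:
        out_shape = list(a.shape[:-1])
        a = a.reshape(-1, a.shape[-1])          # flatten to (M, K)

    if b.ndim == 1:
        if not out_shape:
            out_shape = [1]
        b = b[np.newaxis, :]                    # expand_dims(b, axis=0)
    else:
        if transpose_b:
            axes = list(range(b.ndim))
            b = b.transpose(axes[-1:] + axes[:-1])
        out_shape += list(b.shape[1:])
        # Extra transpose, since dense transposes its weight argument
        b = b.reshape(b.shape[0], -1).T         # (N, K)

    dense = a @ b.T                             # _op.nn.dense(a, b)
    return dense.reshape(out_shape)

a = np.random.rand(3)
w = np.random.rand(3, 1)
print(dot_via_dense(a, w).shape)                       # (1,)
print(np.allclose(dot_via_dense(a, w), np.dot(a, w)))  # True

For the shapes from the example above, (3,) and (3, 1), this yields the expected result of shape (1,), matching mx.nd.dot.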

cc: @masahi @junrushao1994 @kevinthesun @ganler
