
MXNet NDArray bridge. #930

Merged
merged 3 commits into from
Feb 25, 2018
Conversation

@tqchen (Member) commented Feb 25, 2018

This PR supports wrapping a TVM-compiled function as an MXNet NDArray function. This enables using TVM as an RTC module for MXNet's async engine.

Example

def test():
    import mxnet as mx
    import topi
    import tvm
    import numpy as np
    from tvm.contrib.mxnet import to_mxnet_func

    # build a TVM function through topi
    n = 20
    shape = (20,)
    scale = tvm.var("scale", dtype="float32")
    x = tvm.placeholder(shape)
    y = tvm.placeholder(shape)
    z = topi.broadcast_add(x, y)
    zz = tvm.compute(shape, lambda *i: z(*i) * scale)

    # build the function
    target = tvm.target.cuda()
    with target:
        s = topi.generic.schedule_injective(zz)
        f = tvm.build(s, [x, y, zz, scale])

    # get an MXNet version that runs on the async engine
    mxf = to_mxnet_func(f, const_loc=[0, 1])

    ctx = mx.gpu(0)
    xx = mx.nd.uniform(shape=shape, ctx=ctx)
    yy = mx.nd.uniform(shape=shape, ctx=ctx)
    zz = mx.nd.empty(shape=shape, ctx=ctx)

    # invoke mxf: this runs in the MXNet engine
    mxf(xx, yy, zz, 10.0)

    np.testing.assert_allclose(
        zz.asnumpy(), (xx.asnumpy() + yy.asnumpy()) * 10)

Technical Details

The bridge is quite natural because MXNet already uses the DLTensor representation, which TVM consumes directly. The hard part is that we need to run the compiled function on MXNet's engine, instead of invoking it directly.
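To make the shared representation concrete, here is a small sketch that mirrors the DLTensor struct from the dlpack header with ctypes. This is an illustration of the common layout both frameworks agree on, not the bridge code itself (field names follow current dlpack; the 2018-era header used `DLContext`/`ctx` in place of `DLDevice`/`device`):

```python
import ctypes

class DLDataType(ctypes.Structure):
    # type code: 0 = int, 1 = uint, 2 = float
    _fields_ = [("code", ctypes.c_uint8),
                ("bits", ctypes.c_uint8),
                ("lanes", ctypes.c_uint16)]

class DLDevice(ctypes.Structure):
    # device_type: 1 = CPU, 2 = CUDA, ...
    _fields_ = [("device_type", ctypes.c_int),
                ("device_id", ctypes.c_int)]

class DLTensor(ctypes.Structure):
    _fields_ = [("data", ctypes.c_void_p),
                ("device", DLDevice),
                ("ndim", ctypes.c_int),
                ("dtype", DLDataType),
                ("shape", ctypes.POINTER(ctypes.c_int64)),
                ("strides", ctypes.POINTER(ctypes.c_int64)),
                ("byte_offset", ctypes.c_uint64)]

# describe a float32 vector of length 20 on CPU, without owning any data
t = DLTensor()
t.device = DLDevice(1, 0)          # CPU, device 0
t.ndim = 1
t.dtype = DLDataType(2, 32, 1)     # float32
shape_arr = (ctypes.c_int64 * 1)(20)
t.shape = ctypes.cast(shape_arr, ctypes.POINTER(ctypes.c_int64))
print(t.ndim, t.shape[0], t.dtype.bits)  # -> 1 20 32
```

Because both sides interpret this one struct the same way, tensors can cross the boundary as raw pointers with no copying or conversion.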

Since TVM relies on LLVM, it is a bit too early to introduce a direct dependency on it. This PR takes a different approach: the bridge depends only on a header-only component of TVM and does not have to link against the TVM runtime.

When a user has TVM installed in their environment, TVM queries the MXTVMBridge function to get the wrapper logic and uses it to run compiled functions on MXNet's engine asynchronously. When a user does not have TVM installed, the additional logic adds no link dependencies.
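The optional-dependency pattern described above can be sketched in Python with ctypes, which performs the same dlsym-style probing a runtime linker does: look a symbol up by name, and degrade gracefully when it is absent. This is a hedged illustration of the pattern, not the actual bridge code; here we probe libc, where `MXTVMBridge` naturally does not exist:

```python
import ctypes

# load the current process's symbols (on Linux this includes libc)
lib = ctypes.CDLL(None)

def probe(lib, name):
    """Return the named symbol if the library exports it, else None."""
    try:
        return getattr(lib, name)   # dlsym under the hood
    except AttributeError:
        return None                 # symbol absent: feature disabled, no hard dep

print(probe(lib, "strlen") is not None)      # a symbol libc always exports
print(probe(lib, "MXTVMBridge") is not None) # absent here: bridge would be skipped
```

The key property is that the presence check happens at runtime, so the library that exports the symbol never becomes a link-time requirement.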

Because of this optional linking logic, I did not include a test case in MXNet's CI, but I have verified locally that the code works in both the GPU and CPU cases.

Restriction

MXNet and TVM need to be built with the same C++ ABI, because we pass PackedFunc objects across the boundary. This is somewhat of a restriction, but using the PackedFunc system makes code sharing easier. It can usually be satisfied by using the same C++ compiler: for example, g++ 4.8 and g++ 5.0 are not compatible, while the latest version of clang is usually compatible with the latest version of g++. Running with incompatible ABIs causes undefined behavior and possible segfaults. This restriction could be removed by forcing a pure C ABI, but that requires additional work and may also hurt the conciseness of the code.
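To see why a pure C ABI would sidestep the problem, here is a minimal sketch: when only C types (plain integers, pointers, C function pointers) cross the boundary, no C++ name mangling or std:: object layout is involved, so mixing compilers is safe. The callback signature below is illustrative only and is not TVM's actual interface:

```python
import ctypes

# a C-compatible callback type: int64_t (*)(int64_t)
# only plain C types cross the boundary, so the C++ ABI never comes into play
CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int64, ctypes.c_int64)

def scale_by_ten(x):
    return x * 10

# wrap the Python function as a C function pointer and call it through
# the C calling convention
c_fn = CALLBACK(scale_by_ten)
print(c_fn(4))  # -> 40
```

The cost of this approach is boilerplate: every PackedFunc-style rich argument (strings, tensors, nested functions) would have to be flattened into C types by hand, which is the conciseness trade-off mentioned above.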

Support convert a tvm Function as MXNet's async NDArray function.
@tqchen
Copy link
Member Author

tqchen commented Feb 25, 2018

MXNet side of the PR: apache/mxnet#9880

@tqchen tqchen merged commit 30eaf46 into apache:master Feb 25, 2018
tqchen added a commit to tqchen/tvm that referenced this pull request Jul 6, 2018
* MXNet NDArray bridge.
Support convert a tvm Function as MXNet's async NDArray function.

* fix lint

* update comment