Cannot match type float64 vs float32 #3855

Closed
gyshi opened this issue Aug 29, 2019 · 16 comments
Comments


gyshi commented Aug 29, 2019

My code:

import numpy as np
import tvm

ndim = 3
dtype = 'float64'

out_grad = tvm.placeholder([tvm.var() for _ in range(ndim)], name='out_grad', dtype=dtype)
out_data = tvm.placeholder([tvm.var() for _ in range(ndim)], name='out_data', dtype=dtype)
in_grad = tvm.compute([tvm.var() for _ in range(ndim)],
                      lambda *index: (out_grad[index] * out_data[index] * np.log(3, dtype=dtype)), name='in_grad')

s = tvm.create_schedule(in_grad.op)
print(tvm.lower(s, [in_grad, out_data, out_grad], simple_mode=True))

print(in_grad.dtype)

The error is:
Traceback (most recent call last):

  File "/Users/sguangyo/PycharmProjects/sgy/test.py", line 72, in <module>
    lambda *index: (out_grad[index] * out_data[index] * np.log(3, dtype=dtype)), name='in_grad')
  File "/Users/sguangyo/.local/lib/python3.7/site-packages/tvm-0.6.dev0-py3.7-macosx-10.7-x86_64.egg/tvm/api.py", line 309, in compute
    body = fcompute(*[v.var for v in dim_var])
  File "/Users/sguangyo/PycharmProjects/sgy/test.py", line 72, in <lambda>
    lambda *index: (out_grad[index] * out_data[index] * np.log(3, dtype=dtype)), name='in_grad')
  File "/Users/sguangyo/.local/lib/python3.7/site-packages/tvm-0.6.dev0-py3.7-macosx-10.7-x86_64.egg/tvm/expr.py", line 55, in __mul__
    return _generic.multiply(self, other)
  File "/Users/sguangyo/.local/lib/python3.7/site-packages/topi-0.6.dev0-py3.7.egg/topi/generic_op_impl.py", line 83, in _tensor_bop_impl
    return orig_bop(lhs, rhs)
  File "/Users/sguangyo/.local/lib/python3.7/site-packages/tvm-0.6.dev0-py3.7-macosx-10.7-x86_64.egg/tvm/generic.py", line 79, in multiply
    return _make._OpMul(lhs, rhs)
  File "tvm/_ffi/_cython/./function.pxi", line 310, in tvm._ffi._cy3.core.FunctionBase.__call__
  File "tvm/_ffi/_cython/./function.pxi", line 245, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./function.pxi", line 234, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 170, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (4) 5   libtvm.dylib                        0x0000000112e6e6b8 TVMFuncCall + 72
  [bt] (3) 4   libtvm.dylib                        0x0000000112649c91 std::__1::__function::__func<void tvm::runtime::TypedPackedFunc<HalideIR::Expr (HalideIR::Expr, HalideIR::Expr)>::AssignTypedLambda<tvm::ir::$_8>(tvm::ir::$_8)::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*), std::__1::allocator<void tvm::runtime::TypedPackedFunc<HalideIR::Expr (HalideIR::Expr, HalideIR::Expr)>::AssignTypedLambda<tvm::ir::$_8>(tvm::ir::$_8)::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 129
  [bt] (2) 3   libtvm.dylib                        0x00000001127cee81 tvm::operator*(HalideIR::Expr, HalideIR::Expr) + 33
  [bt] (1) 2   libtvm.dylib                        0x00000001127cd411 tvm::BinaryOpMatchTypes(HalideIR::Expr&, HalideIR::Expr&) + 1777
  [bt] (0) 1   libtvm.dylib                        0x0000000112629329 dmlc::LogMessageFatal::~LogMessageFatal() + 57
  File "/Users/sguangyo/tvm/src/lang/expr_operator.cc", line 78
TVMError: Cannot match type float64 vs float32


gyshi commented Aug 29, 2019

I don't know how to deal with it. float32 works, but float64 gives this error.

sxjscience (Member) commented:

I think there's a bug within tvm.convert:

import numpy as np
import tvm

a = np.log(3).astype('float64')
print(a.dtype)     # float64
a_ir = tvm.convert(a)
print(a_ir.dtype)  # float32 -- the float64 scalar is silently narrowed

sxjscience (Member) commented:

After some tracing, I find that the type inference here is not correct:

https://github.com/dmlc/tvm/blob/187600daef6cf89f9ab1d8a8a44316b8536475c1/python/tvm/_ffi/node_generic.py#L81-L102

It always yields either float32 or int32, regardless of the input's actual dtype.
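
For illustration, the pre-fix guessing behavior boils down to something like the following paraphrase (guess_const_dtype is a hypothetical name, not the actual function in node_generic.py):

from numbers import Integral

import numpy as np

def guess_const_dtype(value, dtype=None):
    # When no dtype is given, any integer becomes int32 and everything
    # else -- including a numpy float64 scalar -- becomes float32.
    if dtype is None:
        dtype = 'int32' if isinstance(value, Integral) else 'float32'
    return dtype

print(guess_const_dtype(3))                # int32
print(guess_const_dtype(np.float64(1.0)))  # float32, hence the mismatch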


tqchen commented Aug 30, 2019

@sxjscience The general principle is that we should stay consistent with numpy. Can you try to send a PR to correct this?
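
For reference, the numpy behavior tvm would be matching here: numpy keeps the wider type when precisions are mixed (a quick check, assuming numpy is installed):

import numpy as np

a = np.float64(np.log(3))
b = np.float32(1.0)
print(a.dtype)        # float64
print((a * b).dtype)  # float64 -- promoted to the wider float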


gyshi commented Aug 30, 2019

I think you have to give the dtype explicitly; if you only pass a plain number, tvm treats the dtype as unknown.
tvm.const(np.log(3, dtype=dtype), dtype)
works. If the input is a plain number, I think we should detect its dtype and convert accordingly; a full version of the reproduction with this workaround is sketched below.
@tqchen @sxjscience
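
For completeness, here is the original reproduction with that workaround applied; a sketch against the same tvm 0.6-dev API used above, not code from the thread:

import numpy as np
import tvm

ndim = 3
dtype = 'float64'

out_grad = tvm.placeholder([tvm.var() for _ in range(ndim)], name='out_grad', dtype=dtype)
out_data = tvm.placeholder([tvm.var() for _ in range(ndim)], name='out_data', dtype=dtype)
# Wrapping the scalar in tvm.const with an explicit dtype keeps it from
# being silently narrowed to float32.
log3 = tvm.const(np.log(3), dtype)
in_grad = tvm.compute(out_grad.shape,
                      lambda *index: out_grad[index] * out_data[index] * log3,
                      name='in_grad')

s = tvm.create_schedule(in_grad.op)
print(tvm.lower(s, [in_grad, out_data, out_grad], simple_mode=True))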


gyshi commented Aug 30, 2019

After some tracing, I find that the type inference here is not correct:

https://github.com/dmlc/tvm/blob/187600daef6cf89f9ab1d8a8a44316b8536475c1/python/tvm/_ffi/node_generic.py#L81-L102

It's either float32 or int32

Thanks. tvm.convert() has a bug; I think I should use tvm.const() rather than a bare number.


gyshi commented Aug 30, 2019

If tvm wants to be better, I think it should make this easier for the user, or at least give a hint, lol.

sxjscience (Member) commented:

@tqchen @gyshi I'll fix that.


gyshi commented Aug 30, 2019

Thanks @sxjscience

sxjscience (Member) commented:

@gyshi #3861 fixes the problem in tvm.convert and tvm.const. However, there are still some problems with the type parsing of tvm expressions in general. For example:

import tvm
a = tvm.var('a', dtype='float64')
c = a * 1.0  # raises an error
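
Until that is fixed, the same explicit-const workaround applies at the expression level too (a sketch against the same API):

import tvm

a = tvm.var('a', dtype='float64')
# Works: the literal now carries a matching dtype instead of defaulting
# to float32.
c = a * tvm.const(1.0, 'float64')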


junrushao commented Sep 1, 2019

Why do you mix np.log into tvm IR building, given that we have math.log?

junrushao (Member) commented:

Xingjian's fix helps with numpy ndarray types, but think about it: the solution is duck typing; we simply assume a dtype attribute exists. It doesn't help at all with type guessing when a plain Python scalar is given and we want to distinguish i32 from i64 (see Xingjian's example in #3855 (comment)), because then we don't know the type immediately.

A general approach when implementing an ordinary compiler is to leave the type blank and infer it in later passes, for example with a unification-based type solver at this level of IR, as sketched in the toy example below. @sxjscience, you're welcome to contribute one if you have time.
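
A toy illustration of that unification idea (hypothetical code, not TVM's): unknown dtypes start out as None and adopt whatever concrete dtype they are unified with, while two different concrete dtypes are a genuine type error.

class DTypeVar:
    def __init__(self, dtype=None):
        self.dtype = dtype  # None means "not yet known"

def unify(a, b):
    """Unify two dtype variables in place; return the resolved dtype."""
    if a.dtype is None:
        a.dtype = b.dtype
    elif b.dtype is None:
        b.dtype = a.dtype
    elif a.dtype != b.dtype:
        raise TypeError('Cannot match type %s vs %s' % (a.dtype, b.dtype))
    return a.dtype

x = DTypeVar('float64')  # a float64 placeholder
lit = DTypeVar()         # a bare literal like 1.0: dtype not yet known
print(unify(x, lit))     # float64 -- the literal adopts the context's type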

junrushao (Member) commented:

Another solution is to simply document the behavior... I don't think implementing a general type solver would be worth the effort...


gyshi commented Sep 2, 2019

@junrushao1994
Because I'm using tvm to write the mxnet op exp2, which uses log(2) in the backward pass.
I used tvm.const() to work around this, but when a bare constant is used inside tvm.compute(), it still becomes 'int32' or 'float32', so if my data's dtype is 'float64' it raises an error.

junrushao (Member) commented:

@junrushao1994
Because I'm using tvm to write the mxnet op exp2, which uses log(2) in the backward pass.
I used tvm.const() to work around this, but when a bare constant is used inside tvm.compute(), it still becomes 'int32' or 'float32', so if my data's dtype is 'float64' it raises an error.

That doesn't explain the use of np.log, given that math.log exists...

@gyshi
Copy link
Author

gyshi commented Sep 2, 2019

Because if the input is int the output must be int, I need log(2) cast to int or to float accordingly. I have now resolved this problem; thanks for your help.
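
For anyone hitting the same issue, a sketch of the pattern that ends up working, combining the tvm.const workaround with math.log as suggested above (same tvm 0.6-dev API; exp2's backward is out_grad * out_data * ln 2):

import math
import tvm

ndim = 3
dtype = 'float64'

out_grad = tvm.placeholder([tvm.var() for _ in range(ndim)], name='out_grad', dtype=dtype)
out_data = tvm.placeholder([tvm.var() for _ in range(ndim)], name='out_data', dtype=dtype)
# math.log returns a plain Python float; giving tvm.const the data's
# dtype keeps the whole expression in float64.
ln2 = tvm.const(math.log(2), dtype)
in_grad = tvm.compute(out_grad.shape,
                      lambda *index: out_grad[index] * out_data[index] * ln2,
                      name='in_grad')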

@gyshi gyshi closed this as completed Sep 9, 2019