float64 data backward error using gluon #9156
Comments
@zhaoningning You need to cast the type to float32 explicitly.
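A minimal sketch of that cast (the array here is illustrative, not from the original comment):

```python
import mxnet as mx

# Hypothetical data: a float64 ndarray that will be fed to a network
# whose parameters are the Gluon default, float32.
data64 = mx.nd.array([[1.0, 2.0], [3.0, 4.0]], dtype='float64')
data32 = data64.astype('float32')  # the explicit cast suggested above
```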
@sxjscience But I use float64 data and float64 parameters; do I still need to cast the loss to float32? I have to use the float64 data type because the forward pass may generate very small values.
@zhaoningning You can try to explicitly set the dtype of all the ndarray weights/biases to float64. Also, would float64 be a must? Most deep learning algorithms can run in float32.
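A sketch of that suggestion, assuming a simple Dense block; Gluon's Block.cast casts every parameter of a block in one call:

```python
import mxnet as mx
from mxnet import gluon

net = gluon.nn.Dense(1)
net.initialize()
net.cast('float64')  # cast all weights/biases of the Block to float64

x = mx.nd.random.uniform(shape=(4, 8), dtype='float64')
y = net(x)  # the forward pass now runs entirely in float64
```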
@sxjscience I have already cast all the data to float64, so the forward pass is OK, but backward gives the error...
The same error occurs when I use float16, and I'm not using Gluon.
@Soonhwan-Kwon Could you please add a small example that reproduces the problem?
@Soonhwan-Kwon / @zhaoningning - Can you please provide a small code sample for reproducing the issue?
@sandeep-krishnamurthy Sorry, I have moved to other solutions for float64 training, and I cannot reproduce this issue because the code was lost after such a long time...
This PR - #12412 - should fix using params other than FP32 in Gluon. Resolving. Please reopen if closed in error.
I wrote a custom loss in Gluon. When using the float32 data type everything is OK, but when I changed to the float64 data type there is an error:
"include/mxnet/././tensor_blob.h:217: Check failed: mshadow::DataType<DType>::kFlag == type_flag_ TBlob.get_with_shape: data type do not match specified type. Expected: 0 v.s. given 1"
This happens after the loss is calculated, when loss.backward() is executed.
MXNet version is 1.0.0, Ubuntu 14.04, Python 2.7.
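A guess at a minimal reproduction under the conditions described above (the custom loss is illustrative, not the reporter's actual code):

```python
import mxnet as mx
from mxnet import gluon, autograd

# Illustrative custom loss; any Loss subclass should exercise the same path.
class MySquaredLoss(gluon.loss.Loss):
    def __init__(self, **kwargs):
        super(MySquaredLoss, self).__init__(weight=None, batch_axis=0, **kwargs)

    def hybrid_forward(self, F, pred, label):
        return F.mean(F.square(pred - label), axis=self._batch_axis, exclude=True)

net = gluon.nn.Dense(1)
net.initialize()
net.cast('float64')  # float64 parameters to match the float64 data

x = mx.nd.random.uniform(shape=(4, 8), dtype='float64')
y = mx.nd.zeros((4, 1), dtype='float64')

loss_fn = MySquaredLoss()
with autograd.record():
    loss = loss_fn(net(x), y)
loss.backward()  # on MXNet 1.0.0 this is where the TBlob dtype check reportedly failed
```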