This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
When I try to reproduce the Handwritten Digit Recognition example with Python and the prebuilt MXNet binary build 20160321, I encounter a problem in prediction, i.e. when executing the following code:
prob = model.predict(val_img[0:1].astype(np.float32)/255)[0]
I copied the code from http://mxnet.io/tutorials/python/mnist.html. Here is my environment info and the error message.
Environment info
Operating System: Windows 10
Package used (Python/R/Scala/Julia): Python
MXNet version: Windows binary build 20160321 (20160321_win10_x64_gpu)
Python version and distribution: Python 2.7
CUDA version: 8.0.44
cuDNN version: cuDNN v3
Error Message:
AssertionError Traceback (most recent call last)
<ipython-input-6-eec631db75f6> in <module>()
2 plt.axis('off')
3 plt.show()
----> 4 prob = model.predict(val_img[0:1].astype(np.float32)/255)[0]
5 print 'Classified as %d with probability %f' % (prob.argmax(), max(prob))
E:\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\site-packages\mxnet-0.5.0-py2.7.egg\mxnet\model.pyc in predict(self, X, num_batch, return_data, reset)
586 The predicted value of the output.
587 """
--> 588 X = self._init_iter(X, None, is_train=False)
589
590 if reset:
E:\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\site-packages\mxnet-0.5.0-py2.7.egg\mxnet\model.pyc in _init_iter(self, X, y, is_train)
549 shuffle=is_train, last_batch_handle='roll_over')
550 else:
--> 551 return io.NDArrayIter(X, y, self.numpy_batch_size, shuffle=False)
552 if not isinstance(X, io.DataIter):
553 raise TypeError('X must be DataIter, NDArray or numpy.ndarray')
E:\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\site-packages\mxnet-0.5.0-py2.7.egg\mxnet\io.pyc in __init__(self, data, label, batch_size, shuffle, last_batch_handle)
360 self.num_data = self.data_list[0].shape[0]
361 assert self.num_data >= batch_size, \
--> 362 "batch_size need to be smaller than data size when not padding."
363 self.cursor = -batch_size
364 self.batch_size = batch_size
AssertionError: batch_size need to be smaller than data size when not padding.
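The failing check can be reproduced with plain NumPy. As the traceback shows, predict() wraps the raw array in an NDArrayIter that reuses self.numpy_batch_size from training, so a single image cannot satisfy the num_data >= batch_size assertion (a batch size of 100 is an assumption here, matching the tutorial; the actual value is not shown in the traceback):

```python
import numpy as np

# Stand-in for val_img[0:1].astype(np.float32)/255: one flattened MNIST image.
X = np.zeros((1, 784), dtype=np.float32)
batch_size = 100  # assumed training batch size reused by predict()

num_data = X.shape[0]  # 1
# This mirrors the assert in io.py line 362 that raises the error:
print(num_data >= batch_size)  # False, hence the AssertionError
```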
What have you tried to solve it?
I tried other versions of the prebuilt MXNet. With version 20160223, the problem is the same.
With the later versions 20160531 or 20160419 (with cuDNN v5.1 or v3), prediction works after CPU training of the multilayer perceptron, but GPU training of the CNN yields poor accuracy (same as https://github.com/dmlc/mxnet/issues/1228).
How can I solve this problem?
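One possible workaround, sketched below in plain NumPy (this padding trick is an assumption, untested against this MXNet build): repeat the single image until it fills one training batch so the NDArrayIter size check passes, then keep only the first prediction.

```python
import numpy as np

# Stand-in for val_img[0:1].astype(np.float32)/255 from the tutorial.
img = np.zeros((1, 28, 28), dtype=np.float32)
batch_size = 100  # assumed training batch size

# Repeat the sample so num_data >= batch_size holds inside NDArrayIter.
padded = np.repeat(img, batch_size, axis=0)
print(padded.shape)  # (100, 28, 28)
# prob = model.predict(padded)[0]  # row 0 is the prediction for the real image
```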
Yes. I have carefully checked my code and it is exactly the same. Another tricky point I have found: when visualizing the network with mx.viz.plot_network(symbol=mlp, shape=shape), I do not get the "data" block (the other blocks are fine).
Prediction fails on the CPU as well. If I update my prebuilt MXNet to a later version, prediction works on the CPU, but the training accuracy on the GPU is really poor.