
Error when running the char_lstm example #30

Open
ningyuwhut opened this issue Feb 1, 2017 · 1 comment

@ningyuwhut

I ran the char_lstm example on my MacBook Pro. Since the machine has no NVIDIA card, I changed the notebook to use cpu(0); the change is shown below:

model = mx.model.FeedForward(
    ctx=mx.cpu(0),  # changed ctx from mx.gpu(0) to mx.cpu(0)
    symbol=symbol,
    num_epoch=num_epoch,
    learning_rate=learning_rate,
    momentum=0,
    wd=0.0001,
    initializer=mx.init.Xavier(factor_type="in", magnitude=2.34))

When I run the cell, the interpreter reports this error:


TypeError Traceback (most recent call last)
in ()
38 eval_metric=mx.metric.np(Perplexity),
39 batch_end_callback=mx.callback.Speedometer(batch_size, 20),
---> 40 epoch_end_callback=mx.callback.do_checkpoint("obama"))

/Library/Python/2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/model.pyc in fit(self, X, y, eval_data, eval_metric, epoch_end_callback, batch_end_callback, kvstore, logger, work_load_list, monitor, eval_batch_end_callback)
786 logger=logger, work_load_list=work_load_list, monitor=monitor,
787 eval_batch_end_callback=eval_batch_end_callback,
--> 788 sym_gen=self.sym_gen)
789
790

/Library/Python/2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/model.pyc in _train_multi_device(symbol, ctx, arg_names, param_names, aux_names, arg_params, aux_params, begin_epoch, end_epoch, epoch_size, optimizer, kvstore, update_on_kvstore, train_data, eval_data, eval_metric, epoch_end_callback, batch_end_callback, logger, work_load_list, monitor, eval_batch_end_callback, sym_gen)
243
244 # evaluate at end, so we can lazy copy
--> 245 executor_manager.update_metric(eval_metric, data_batch.label)
246
247 nbatch += 1

/Library/Python/2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/executor_manager.pyc in update_metric(self, metric, labels)
404 def update_metric(self, metric, labels):
405 """update metric with the current executor"""
--> 406 self.curr_execgrp.update_metric(metric, labels)

/Library/Python/2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/executor_manager.pyc in update_metric(self, metric, labels)
260 for texec, islice in zip(self.train_execs, self.slices):
261 labels_slice = [label[islice] for label in labels]
--> 262 metric.update(labels_slice, texec.outputs)
263
264 class DataParallelExecutorManager(object):

/Library/Python/2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/metric.pyc in update(self, labels, preds)
342 pred = pred[:, 1]
343
--> 344 reval = self._feval(label, pred)
345 if isinstance(reval, tuple):
346 (sum_metric, num_inst) = reval

/Library/Python/2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/metric.pyc in feval(label, pred)
368 def feval(label, pred):
369 """Internal eval function."""
--> 370 return numpy_feval(label, pred)
371 feval.name = numpy_feval.name
372 return CustomMetric(feval, name, allow_extra_outputs)

in Perplexity(label, pred)
23
24
---> 25 loss += -np.log(max(1e-10, pred[i][int(label[i])]))
26 return np.exp(loss / label.size)
27

TypeError: only length-1 arrays can be converted to Python scalars

The error arises from this line:

loss += -np.log(max(1e-10, pred[i][int(label[i])]))

Is it caused by the ctx change? Please help.

@AntonEryomin

No, this error is not caused by the ctx change. It is about the shape of the array: `int(label[i])` only works when `label[i]` is a scalar, so look carefully at the sizes of the arrays passed into the metric.
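To illustrate the point above with a minimal sketch (not the exact fix from the repository): the TypeError fires when `label[i]` is itself an array rather than a scalar, which happens when `label` arrives 2-D, e.g. shaped (batch, seq_len). Flattening the label before indexing, as assumed in the `perplexity` sketch below, avoids it.

```python
import numpy as np

# Reproduce the failure: int() on a multi-element array raises TypeError,
# just like int(label[i]) does in the traceback when label is 2-D.
label_2d = np.array([[0., 1.], [1., 0.]])
try:
    int(label_2d[0])  # label_2d[0] has two elements, not one
except TypeError:
    pass  # "only length-1 arrays can be converted to Python scalars"

def perplexity(label, pred):
    """Sketch of the metric with the label flattened first.

    Assumption: labels arrive as a 2-D array and must be transposed and
    flattened so that label[i] lines up with the i-th row of pred.
    """
    label = label.T.reshape((-1,))
    loss = 0.0
    for i in range(pred.shape[0]):
        loss += -np.log(max(1e-10, pred[i][int(label[i])]))
    return np.exp(loss / label.size)

# With uniform predictions over 2 classes, perplexity is exactly 2.
pred = np.full((4, 2), 0.5)
print(perplexity(label_2d, pred))
```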
