Loss computed in numpy causes performance degradation #455

Open
ThomasDelteil opened this issue Mar 19, 2018 · 0 comments

ThomasDelteil (Contributor) commented Mar 19, 2018

In a lot of example notebooks, you use:

`curr_loss = nd.mean(loss).asscalar()`

which is equivalent to `curr_loss = nd.mean(loss).asnumpy()[0]`

https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.asscalar
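For context, this is roughly the per-batch pattern I'm referring to (a minimal sketch with placeholder names like `net`, `train_data`, `softmax_cross_entropy`, `trainer`, and `epochs`; not copied from any particular notebook):

```python
from mxnet import nd, autograd

# Sketch of the common pattern: the moving loss is kept as a Python float,
# so .asscalar() forces a device->host copy (and a wait) on every single batch.
for e in range(epochs):
    moving_loss = 0.0
    for data, label in train_data:
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])
        curr_loss = nd.mean(loss).asscalar()              # blocks here, every batch
        moving_loss = 0.99 * moving_loss + 0.01 * curr_loss
```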

My tests show a 2-10% performance improvement when calling `.asscalar()` on the moving loss at the end of each epoch rather than on every batch (with the computation of the test and training accuracy commented out, since we know that already uses numpy...)
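One way to defer that synchronization (a sketch, assuming the moving loss is only reported at the end of each epoch; `ctx` is a placeholder for the training context) is to keep the moving loss as an NDArray:

```python
from mxnet import nd, autograd

# Sketch: keep the moving loss as an NDArray on the device, so the update stays
# inside MXNet's asynchronous engine; copy it back to Python only once per epoch.
for e in range(epochs):
    moving_loss = nd.zeros(1, ctx=ctx)
    for data, label in train_data:
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])
        moving_loss = 0.99 * moving_loss + 0.01 * nd.mean(loss)   # no blocking here
    print("Epoch %d, moving loss %.4f" % (e, moving_loss.asscalar()))  # single sync point
```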

Related to this issue: apache/mxnet#9571, which you opened, @zackchase 😄

To be precise about what I mean: calling `.asnumpy()` triggers a wait-to-read on the prediction, which blocks the next batch from being loaded onto the GPU.
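A small illustration of that blocking behaviour (an illustrative sketch on CPU; the timings and array sizes are mine, not from the issue):

```python
import time
from mxnet import nd

x = nd.random.uniform(shape=(2048, 2048))

start = time.time()
y = nd.dot(x, x)            # returns almost immediately: the work is only queued
enqueue = time.time() - start

start = time.time()
y.wait_to_read()            # blocks until the product is actually computed,
wait = time.time() - start  # which is exactly what .asnumpy()/.asscalar() do implicitly

print("enqueue: %.6fs, wait_to_read: %.6fs" % (enqueue, wait))
```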
