
Conv1D is slow #11161

Closed

eric-haibin-lin opened this issue Jun 5, 2018 · 7 comments

Comments

@eric-haibin-lin
Member

The Conv1D block falls back to MXNet's own Convolution implementation, because cuDNN only implements 2-D convolution. In the operator, we could optimize performance by reshaping the inputs to 4-D and the kernels to 2-D, so that the cuDNN kernel can be used.
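To illustrate the reshaping trick described above, here is a minimal Gluon sketch (not the proposed operator change itself) that emulates Conv1D with Conv2D by inserting a dummy spatial dimension; the shapes, channel counts, and kernel sizes are illustrative assumptions.

```python
# Minimal sketch: expressing a 1-D convolution as a 2-D convolution over a
# (1, width) "image", so the cuDNN / MKL-DNN 2-D kernels can be dispatched.
import mxnet as mx
from mxnet.gluon import nn

x = mx.nd.random.uniform(shape=(32, 128, 100))   # (batch, channels, width)

# Native 1-D convolution (uses MXNet's own implementation).
conv1d = nn.Conv1D(channels=256, kernel_size=3, padding=1)
conv1d.initialize()
y1 = conv1d(x)

# Equivalent 2-D convolution: reshape the input to 4-D and use a (1, k) kernel.
conv2d = nn.Conv2D(channels=256, kernel_size=(1, 3), padding=(0, 1))
conv2d.initialize()
y2 = conv2d(x.expand_dims(axis=2)).squeeze(axis=2)  # back to (batch, channels, width)

print(y1.shape, y2.shape)  # both (32, 256, 100)
```

The two layers are initialized with different random weights, so the outputs differ numerically, but the output shapes match; doing the same reshape inside the operator would let the existing 2-D backends handle the 1-D case.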

@TaoLv
Member

TaoLv commented Jun 6, 2018

Yes. By doing this, conv1d can also benefit from MKL-DNN's conv2d on the CPU side. @pengzhao-intel

@pengzhao-intel
Contributor

In our local tests, it can be 10x faster :) We can provide a PR for the CPU side. @jinhuang415

@eric-haibin-lin
Member Author

That will be great!

@vandanavk
Contributor

@pengzhao-intel @jinhuang415 Is there a PR for this issue?

@pengzhao-intel
Contributor

@vandanavk Not yet, the 1-D conv is still under development.
We will file a PR and merge it into MXNet with the next MKL-DNN release (0.17).
oneapi-src/oneDNN@a1204e4

@pengzhao-intel
Contributor

@xinyu-intel please check the latest MKL-DNN and conv1d.
We can start evaluating and integrating conv1d now.

@pengzhao-intel
Contributor

#13530 (comment)

The performance numbers are listed in this PR. @eric-haibin-lin

4 participants