
Use tensorflow leaky_relu op for efficiency #9044

Merged
merged 3 commits into from
Jan 11, 2018

Conversation

dmaniry
Contributor

@dmaniry dmaniry commented Jan 10, 2018

The current implementation of leaky_relu is extremely inefficient. In my specific use case it took as much time as the convolution itself and made TensorFlow much slower than Theano. The old, inefficient implementation dates from before TensorFlow had a leaky_relu op, but that op was recently added.

fixes #3150
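A minimal NumPy sketch of the inefficiency being described (this illustrates the math, not the exact Keras backend code): the composite formulation launches three element-wise passes over the tensor, while a dedicated leaky_relu op computes the same result in a single fused pass.

```python
import numpy as np

def leaky_relu_composite(x, alpha=0.2):
    # Old-style formulation: two relu passes plus a scaled subtraction,
    # i.e. three separate element-wise kernels over the data.
    positive_part = np.maximum(x, 0.0)
    negative_part = np.maximum(-x, 0.0)
    return positive_part - alpha * negative_part

def leaky_relu_fused(x, alpha=0.2):
    # Single-pass formulation, as a dedicated op would compute it.
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
# Both formulations are mathematically equivalent; the difference is
# purely in how many kernel launches / memory passes are needed.
assert np.allclose(leaky_relu_composite(x), leaky_relu_fused(x))
```

On a GPU, each extra element-wise kernel adds launch overhead and a full read/write of the activation tensor, which is why the composite version can rival a convolution in wall-clock time for large feature maps.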

dmaniry and others added 3 commits January 10, 2018 23:31
Member

@fchollet fchollet left a comment


LGTM

@fchollet fchollet merged commit f699346 into keras-team:master Jan 11, 2018
@dmaniry dmaniry deleted the patch-1 branch January 11, 2018 20:22

Successfully merging this pull request may close these issues.

Possible inefficiencies in Tensorflow backend on gpu