Use tensorflow leaky_relu op for efficiency (#9044)
* Use tensorflow leaky_relu op for efficiency

The current implementation of leaky_relu is extremely inefficient. In my specific use case it took as much time as the convolution itself and made TensorFlow much slower than Theano. The old, inefficient implementation dates from before TensorFlow had a leaky_relu op, which was only recently added (see the op-count sketch below).

* fix max_value clipping

* fix pep8
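
For context on the efficiency claim, here is a small sketch (illustrative, not part of the commit; TensorFlow 1.x graph mode assumed) that counts the graph operations each formulation creates:

import tensorflow as tf

def count_ops(build):
    # Build the expression in a fresh graph and count its operations.
    graph = tf.Graph()
    with graph.as_default():
        x = tf.placeholder(tf.float32, [None])
        build(x)
    return len(graph.get_operations())

# Old-style formulation: a negation, two relus, a multiply and a
# subtraction, each a separate graph node (and kernel launch).
n_old = count_ops(lambda x: tf.nn.relu(x) - 0.2 * tf.nn.relu(-x))

# New formulation: tf.nn.leaky_relu (added in TF 1.4) builds far fewer
# nodes, and is fused into a single kernel in later TF releases.
n_new = count_ops(lambda x: tf.nn.leaky_relu(x, 0.2))

print(n_old, n_new)  # n_old is noticeably larger than n_new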
dmaniry authored and fchollet committed Jan 11, 2018
1 parent 32aa192 commit f699346
Showing 1 changed file with 5 additions and 7 deletions.
keras/backend/tensorflow_backend.py: 5 additions, 7 deletions
@@ -2915,15 +2915,13 @@ def relu(x, alpha=0., max_value=None):
         A tensor.
     """
     if alpha != 0.:
-        negative_part = tf.nn.relu(-x)
-    x = tf.nn.relu(x)
+        x = tf.nn.leaky_relu(x, alpha)
+    else:
+        x = tf.nn.relu(x)
+
     if max_value is not None:
         max_value = _to_tensor(max_value, x.dtype.base_dtype)
-        zero = _to_tensor(0., x.dtype.base_dtype)
-        x = tf.clip_by_value(x, zero, max_value)
-    if alpha != 0.:
-        alpha = _to_tensor(alpha, x.dtype.base_dtype)
-        x -= alpha * negative_part
+        x = tf.minimum(x, max_value)
     return x


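The `fix max_value clipping` bullet covers a subtlety of the switch: after `tf.nn.leaky_relu`, the tensor can already hold negative values, so the old lower-bounded `tf.clip_by_value(x, zero, max_value)` would zero out the leaky part; only the upper saturation via `tf.minimum` is wanted. A minimal sketch checking that the two formulations agree (illustrative function names, TF 1.x session API assumed):

import tensorflow as tf

def relu_old(x, alpha=0., max_value=None):
    # Pre-commit formulation: emulate leaky ReLU with extra ops.
    negative_part = tf.nn.relu(-x)
    x = tf.nn.relu(x)
    if max_value is not None:
        # Safe only because x is still non-negative at this point.
        x = tf.clip_by_value(x, 0., max_value)
    if alpha != 0.:
        x -= alpha * negative_part
    return x

def relu_new(x, alpha=0., max_value=None):
    # Post-commit formulation: native op, then an upper bound only.
    x = tf.nn.leaky_relu(x, alpha) if alpha != 0. else tf.nn.relu(x)
    if max_value is not None:
        x = tf.minimum(x, max_value)
    return x

inp = tf.constant([-3., -0.5, 0.5, 3.])
with tf.Session() as sess:
    print(sess.run(relu_old(inp, alpha=0.2, max_value=1.)))  # [-0.6 -0.1  0.5  1. ]
    print(sess.run(relu_new(inp, alpha=0.2, max_value=1.)))  # same values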

1 comment on commit f699346

@fi000 commented on f699346 on Feb 14, 2018


I have the same version with almost the same changes, but it does not work!
Would you please help me?

def relu(x, alpha=0., max_value=None):
    """Rectified linear unit.

    With default values, it returns element-wise `max(x, 0)`.

    # Arguments
        x: A tensor or variable.
        alpha: A scalar, slope of negative section (default=`0.`).
        max_value: Saturation threshold.

    # Returns
        A tensor.
    """
    if alpha != 0.:
        x = tf.nn.leaky_relu(x, alpha)
    else:
        x = tf.nn.relu(x)

    if max_value is not None:
        max_value = _to_tensor(max_value, x.dtype.base_dtype)
        x = tf.minimum(x, max_value)
    return x
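
The snippet above matches the committed code, so a likely cause of the reported failure (an assumption, since no error message was posted) is an older TensorFlow: `tf.nn.leaky_relu` only exists from TensorFlow 1.4 onward, and earlier versions raise `AttributeError`. A guarded fallback sketch (`leaky_relu_compat` is a hypothetical helper name):

import tensorflow as tf

def leaky_relu_compat(x, alpha):
    # Use the native op when available (TF >= 1.4); otherwise fall back
    # to the mathematically equivalent composed form.
    if hasattr(tf.nn, 'leaky_relu'):
        return tf.nn.leaky_relu(x, alpha)
    return tf.nn.relu(x) - alpha * tf.nn.relu(-x)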
