
Commit e736fb4 (2 parents: dca5a7a + 40e64a8)

Merge branch 'master' of github.com:fchollet/keras into pr_7508
* 'master' of github.com:fchollet/keras: (57 commits)
  Minor README edit
  Speed up Travis tests (keras-team#9386)
  fix typo (keras-team#9391)
  Fix style issue in docstring
  Prepare 2.1.4 release.
  Fix activity regularizer + model composition test
  Corrected copyright years (keras-team#9375)
  Change default interpolation from nearest to bilinear. (keras-team#8849)
  a capsule cnn on cifar-10 (keras-team#9193)
  Enable us to use sklearn to do cv for functional api (keras-team#9320)
  Add support for stateful metrics. (keras-team#9253)
  The type of list keys was float (keras-team#9324)
  Fix mnist sklearn wrapper example (keras-team#9317)
  keras-team#9287 Fix most of the file-handle resource leaks. (keras-team#9309)
  Pass current learning rate to schedule() in LearningRateScheduler (keras-team#8865)
  Simplify with from six.moves import input (keras-team#9216)
  fixed RemoteMonitor: Json to handle np.float32 and np.int32 types (keras-team#9261)
  Update tweet length from 140 to 280 in docs
  Add `depthconv_conv2d` tests (keras-team#9225)
  Remove `force` option in progbar
  ...


68 files changed: +1240 −552 lines

.coveragerc (+11 −1)

@@ -1,6 +1,16 @@
 [report]
-fail_under = 84
+# Regexes for lines to exclude from consideration
+exclude_lines =
+    # Don't complain if tests don't hit defensive assertion code:
+    raise ImportError
+    raise NotImplementedError
+
+    # Don't complain if legacy support codes are not performed:
+    if original_keras_version == '1':
+
+fail_under = 85
 show_missing = True
 omit =
     keras/applications/*
     keras/datasets/*
+    keras/legacy/*
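
For readers unfamiliar with coverage.py's `exclude_lines`: any source line matching one of these regexes is treated as intentionally uncovered rather than as a miss, so defensive raises and legacy-only branches no longer drag `fail_under` down. A minimal sketch with a hypothetical function (not from the repo):

```python
def load_weights(weights, original_keras_version='2'):
    """Hypothetical loader, only to illustrate the config above."""
    # This line matches the `if original_keras_version == '1':` regex, so
    # coverage.py excludes it (and the block it opens) from the report.
    if original_keras_version == '1':
        raise NotImplementedError  # matches the defensive-assertion regexes
    return weights
```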

.travis.yml (+3 −3)

@@ -38,10 +38,10 @@ install:
   # Useful for debugging any issues with conda
   - conda info -a

-  - conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION numpy nose scipy matplotlib pandas pytest h5py
+  - conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION pytest pandas
   - source activate test-environment
+  - pip install --only-binary=numpy,scipy numpy nose scipy matplotlib h5py theano
   - conda install mkl mkl-service
-  - pip install theano

   # set library path
   - export LD_LIBRARY_PATH=$HOME/miniconda/envs/test-environment/lib/:$LD_LIBRARY_PATH
@@ -111,5 +111,5 @@ script:
     elif [[ "$TEST_MODE" == "DOC" ]]; then
       PYTHONPATH=$PWD:$PYTHONPATH py.test tests/test_documentation.py;
     else
-      PYTHONPATH=$PWD:$PYTHONPATH py.test tests/ --ignore=tests/integration_tests --ignore=tests/test_documentation.py --cov-config .coveragerc --cov=keras tests/;
+      PYTHONPATH=$PWD:$PYTHONPATH py.test tests/ --ignore=tests/integration_tests --ignore=tests/test_documentation.py --ignore=tests/keras/legacy/layers_test.py --cov-config .coveragerc --cov=keras tests/;
     fi

LICENSE (+4 −4)

@@ -1,19 +1,19 @@
 COPYRIGHT

 All contributions by François Chollet:
-Copyright (c) 2015, François Chollet.
+Copyright (c) 2015 - 2018, François Chollet.
 All rights reserved.

 All contributions by Google:
-Copyright (c) 2015, Google, Inc.
+Copyright (c) 2015 - 2018, Google, Inc.
 All rights reserved.

 All contributions by Microsoft:
-Copyright (c) 2017, Microsoft, Inc.
+Copyright (c) 2017 - 2018, Microsoft, Inc.
 All rights reserved.

 All other contributions:
-Copyright (c) 2015 - 2017, the respective contributors.
+Copyright (c) 2015 - 2018, the respective contributors.
 All rights reserved.

 Each contributor holds copyright over their respective contributions.

README.md (+1 −1)

@@ -155,7 +155,7 @@ sudo python setup.py install
 ------------------


-## Switching from TensorFlow to CNTK or Theano
+## Using a different backend than TensorFlow

 By default, Keras will use TensorFlow as its tensor manipulation library. [Follow these instructions](https://keras.io/backend/) to configure the Keras backend.

docker/Dockerfile (+2 −2)

@@ -1,5 +1,5 @@
-ARG cuda_version=8.0
-ARG cudnn_version=6
+ARG cuda_version=9.0
+ARG cudnn_version=7
 FROM nvidia/cuda:${cuda_version}-cudnn${cudnn_version}-devel

 ENV CONDA_DIR /opt/conda

docker/Makefile (+2 −2)

@@ -7,8 +7,8 @@ DOCKER_FILE=Dockerfile
 DOCKER=GPU=$(GPU) nvidia-docker
 BACKEND=tensorflow
 PYTHON_VERSION?=3.6
-CUDA_VERSION?=8.0
-CUDNN_VERSION?=6
+CUDA_VERSION?=9.0
+CUDNN_VERSION?=7
 TEST=tests/
 SRC?=$(shell dirname `pwd`)

docs/autogen.py (+13 −7)

@@ -489,13 +489,18 @@ def process_docstring(docstring):
     new_fpath = fpath.replace('templates', 'sources')
     shutil.copy(fpath, new_fpath)

+
 # Take care of index page.
-readme = open('../README.md').read()
-index = open('templates/index.md').read()
+def read_file(path):
+    with open(path) as f:
+        return f.read()
+
+
+readme = read_file('../README.md')
+index = read_file('templates/index.md')
 index = index.replace('{{autogenerated}}', readme[readme.find('##'):])
-f = open('sources/index.md', 'w')
-f.write(index)
-f.close()
+with open('sources/index.md', 'w') as f:
+    f.write(index)

 print('Starting autogeneration.')
 for page_data in PAGES:
@@ -564,7 +569,7 @@ def process_docstring(docstring):
     page_name = page_data['page']
     path = os.path.join('sources', page_name)
     if os.path.exists(path):
-        template = open(path).read()
+        template = read_file(path)
         assert '{{autogenerated}}' in template, ('Template found for ' + path +
                                                  ' but missing {{autogenerated}} tag.')
         mkdown = template.replace('{{autogenerated}}', mkdown)
@@ -574,6 +579,7 @@ def process_docstring(docstring):
     subdir = os.path.dirname(path)
     if not os.path.exists(subdir):
         os.makedirs(subdir)
-    open(path, 'w').write(mkdown)
+    with open(path, 'w') as f:
+        f.write(mkdown)

 shutil.copyfile('../CONTRIBUTING.md', 'sources/contributing.md')
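
The pattern behind these `docs/autogen.py` edits, isolated as a sketch (the file path is borrowed from the diff): a bare `open(...).read()` leaves the handle open until the garbage collector runs, while a `with` block closes it deterministically, which is what the new `read_file` helper guarantees.

```python
# Leaky: the handle is only closed when the file object is garbage-collected,
# which is not guaranteed to happen promptly on every Python implementation.
text = open('templates/index.md').read()

# Fixed: the context manager closes the handle as soon as the block exits,
# even if read() raises.
with open('templates/index.md') as f:
    text = f.read()
```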

docs/templates/getting-started/functional-api-guide.md (+6 −6)

@@ -168,15 +168,15 @@ One way to achieve this is to build a model that encodes two tweets into two vec

 Because the problem is symmetric, the mechanism that encodes the first tweet should be reused (weights and all) to encode the second tweet. Here we use a shared LSTM layer to encode the tweets.

-Let's build this with the functional API. We will take as input for a tweet a binary matrix of shape `(140, 256)`, i.e. a sequence of 140 vectors of size 256, where each dimension in the 256-dimensional vector encodes the presence/absence of a character (out of an alphabet of 256 frequent characters).
+Let's build this with the functional API. We will take as input for a tweet a binary matrix of shape `(280, 256)`, i.e. a sequence of 280 vectors of size 256, where each dimension in the 256-dimensional vector encodes the presence/absence of a character (out of an alphabet of 256 frequent characters).

 ```python
 import keras
 from keras.layers import Input, LSTM, Dense
 from keras.models import Model

-tweet_a = Input(shape=(140, 256))
-tweet_b = Input(shape=(140, 256))
+tweet_a = Input(shape=(280, 256))
+tweet_b = Input(shape=(280, 256))
 ```

 To share a layer across different inputs, simply instantiate the layer once, then call it on as many inputs as you want:
@@ -222,7 +222,7 @@ In previous versions of Keras, you could obtain the output tensor of a layer ins
 As long as a layer is only connected to one input, there is no confusion, and `.output` will return the one output of the layer:

 ```python
-a = Input(shape=(140, 256))
+a = Input(shape=(280, 256))

 lstm = LSTM(32)
 encoded_a = lstm(a)
@@ -232,8 +232,8 @@ assert lstm.output == encoded_a

 Not so if the layer has multiple inputs:
 ```python
-a = Input(shape=(140, 256))
-b = Input(shape=(140, 256))
+a = Input(shape=(280, 256))
+b = Input(shape=(280, 256))

 lstm = LSTM(32)
 encoded_a = lstm(a)
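
Pulling the updated fragments together, here is a minimal end-to-end sketch of the shared-LSTM model with the new 280-step inputs; the merge-and-classify tail follows the rest of the guide and is our illustration, not part of this diff:

```python
from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

tweet_a = Input(shape=(280, 256))
tweet_b = Input(shape=(280, 256))

# A single LSTM instance: calling it on both inputs shares its weights.
shared_lstm = LSTM(64)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)

# Merge the two encodings and predict a binary label for the pair.
merged = concatenate([encoded_a, encoded_b], axis=-1)
predictions = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
```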

docs/templates/preprocessing/image.md (+3 −3)

@@ -82,15 +82,15 @@ Generate batches of tensor image data with real-time data augmentation. The data
     - __batch_size__: int (default: 32).
     - __shuffle__: boolean (default: True).
     - __seed__: int (default: None).
-    - __save_to_dir__: None or str (default: None). This allows you to optimally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
+    - __save_to_dir__: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
     - __save_prefix__: str (default: `''`). Prefix to use for filenames of saved pictures (only relevant if `save_to_dir` is set).
     - __save_format__: one of "png", "jpeg" (only relevant if `save_to_dir` is set). Default: "png".
     - __yields__: Tuples of `(x, y)` where `x` is a numpy array of image data and `y` is a numpy array of corresponding labels.
       The generator loops indefinitely.
 - __flow_from_directory(directory)__: Takes the path to a directory, and generates batches of augmented/normalized data. Yields batches indefinitely, in an infinite loop.
     - __Arguments__:
         - __directory__: path to the target directory. It should contain one subdirectory per class.
-          Any PNG, JPG, BMP or PPM images inside each of the subdirectories directory tree will be included in the generator.
+          Any PNG, JPG, BMP, PPM or TIF images inside each of the subdirectories directory tree will be included in the generator.
           See [this script](https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d) for more details.
         - __target_size__: tuple of integers `(height, width)`, default: `(256, 256)`.
           The dimensions to which all images found will be resized.
@@ -100,7 +100,7 @@ Generate batches of tensor image data with real-time data augmentation. The data
         - __batch_size__: size of the batches of data (default: 32).
         - __shuffle__: whether to shuffle the data (default: True)
         - __seed__: optional random seed for shuffling and transformations.
-        - __save_to_dir__: None or str (default: None). This allows you to optimally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
+        - __save_to_dir__: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
         - __save_prefix__: str. Prefix to use for filenames of saved pictures (only relevant if `save_to_dir` is set).
         - __save_format__: one of "png", "jpeg" (only relevant if `save_to_dir` is set). Default: "png".
         - __follow_links__: whether to follow symlinks inside class subdirectories (default: False).
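
A usage sketch of the corrected `save_to_dir` behaviour with `flow_from_directory`; the directory names are hypothetical, and the save directory must already exist:

```python
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255, rotation_range=20)

# 'data/train' must contain one subdirectory per class; every augmented
# batch the iterator yields is also written to 'augmented_previews'.
generator = datagen.flow_from_directory(
    'data/train',
    target_size=(256, 256),
    batch_size=32,
    save_to_dir='augmented_previews',
    save_prefix='aug',
    save_format='png')

x_batch, y_batch = next(generator)  # then inspect 'augmented_previews'
```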

docs/templates/preprocessing/text.md (+3 −1)

@@ -81,14 +81,16 @@ keras.preprocessing.text.Tokenizer(num_words=None,
                                    filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
                                    lower=True,
                                    split=" ",
-                                   char_level=False)
+                                   char_level=False,
+                                   oov_token=None)
 ```

 Class for vectorizing texts, or/and turning texts into sequences (=list of word indexes, where the word of rank i in the dataset (starting at 1) has index i).

 - __Arguments__: Same as `text_to_word_sequence` above.
     - __num_words__: None or int. Maximum number of words to work with (if set, tokenization will be restricted to the top num_words most common words in the dataset).
     - __char_level__: if True, every character will be treated as a token.
+    - __oov_token__: None or str. If given, it will be added to word_index and used to replace out-of-vocabulary words during text_to_sequence calls.

 - __Methods__:
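
A brief sketch of the new `oov_token` argument in use (the corpus and the `'<unk>'` label are our choices, not from the diff):

```python
from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(oov_token='<unk>')
tokenizer.fit_on_texts(['the cat sat on the mat'])

# 'dog' was never seen during fitting, so it is mapped to the index of
# '<unk>' instead of being silently dropped (the previous behaviour).
print(tokenizer.texts_to_sequences(['the dog sat']))
```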

docs/templates/why-use-keras.md (+2 −2)

@@ -34,7 +34,7 @@ Your Keras models can be easily deployed across a greater range of platforms tha
 - On Android, via the TensorFlow Android runtime. Example: [Not Hotdog app](https://medium.com/@timanglade/how-hbos-silicon-valley-built-not-hotdog-with-mobile-tensorflow-keras-react-native-ef03260747f3)
 - In the browser, via GPU-accelerated JavaScript runtimes such as [Keras.js](https://transcranial.github.io/keras-js/#/) and [WebDNN](https://mil-tokyo.github.io/webdnn/)
 - On Google Cloud, via [TensorFlow-Serving](https://www.tensorflow.org/serving/)
-- In a Python webapp backend (such as a Flask app)
+- [In a Python webapp backend (such as a Flask app)](https://blog.keras.io/building-a-simple-keras-deep-learning-rest-api.html)
 - On the JVM, via [DL4J model import provided by SkyMind](https://deeplearning4j.org/model-import-keras)
 - On Raspberry Pi

@@ -54,7 +54,7 @@ As such, your Keras model can be trained on a number of different hardware platf

 - [NVIDIA GPUs](https://developer.nvidia.com/deep-learning)
 - [Google TPUs](https://cloud.google.com/tpu/), via the TensorFlow backend and Google Cloud
-- OpenGL-enabled GPUs, such as those from AMD, via [the PlaidML Keras backend](https://github.com/plaidml/plaidml)
+- OpenCL-enabled GPUs, such as those from AMD, via [the PlaidML Keras backend](https://github.com/plaidml/plaidml)

 ---

examples/babi_memnn.py (+4 −3)

@@ -100,7 +100,7 @@ def vectorize_stories(data):
           '$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\n'
           '$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')
     raise
-tar = tarfile.open(path)
+

 challenges = {
     # QA1 with 10,000 samples
@@ -112,8 +112,9 @@ def vectorize_stories(data):
 challenge = challenges[challenge_type]

 print('Extracting stories for the challenge:', challenge_type)
-train_stories = get_stories(tar.extractfile(challenge.format('train')))
-test_stories = get_stories(tar.extractfile(challenge.format('test')))
+with tarfile.open(path) as tar:
+    train_stories = get_stories(tar.extractfile(challenge.format('train')))
+    test_stories = get_stories(tar.extractfile(challenge.format('test')))

 vocab = set()
 for story, q, answer in train_stories + test_stories:

examples/babi_rnn.py (+4 −3)

@@ -160,7 +160,7 @@ def vectorize_stories(data, word_idx, story_maxlen, query_maxlen):
           '$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\n'
           '$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')
     raise
-tar = tarfile.open(path)
+
 # Default QA1 with 1000 samples
 # challenge = 'tasks_1-20_v1-2/en/qa1_single-supporting-fact_{}.txt'
 # QA1 with 10,000 samples
@@ -169,8 +169,9 @@ def vectorize_stories(data, word_idx, story_maxlen, query_maxlen):
 challenge = 'tasks_1-20_v1-2/en/qa2_two-supporting-facts_{}.txt'
 # QA2 with 10,000 samples
 # challenge = 'tasks_1-20_v1-2/en-10k/qa2_two-supporting-facts_{}.txt'
-train = get_stories(tar.extractfile(challenge.format('train')))
-test = get_stories(tar.extractfile(challenge.format('test')))
+with tarfile.open(path) as tar:
+    train = get_stories(tar.extractfile(challenge.format('train')))
+    test = get_stories(tar.extractfile(challenge.format('test')))

 vocab = set()
 for story, q, answer in train + test:
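
Both examples apply the same fix, sketched standalone below: `tarfile.open` returns a `TarFile` that works as a context manager, so the archive handle is released as soon as the stories are read. The archive is assumed to be a local copy; the member path follows the example above.

```python
import tarfile

challenge = 'tasks_1-20_v1-2/en/qa2_two-supporting-facts_{}.txt'
with tarfile.open('babi-tasks-v1-2.tar.gz') as tar:
    # extractfile returns a readable file object for an archive member;
    # when the with-block exits, the archive and its members are closed.
    train_file = tar.extractfile(challenge.format('train'))
    train_lines = train_file.read().decode('utf-8').splitlines()
```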
