Training examples with reproducible performance.
The word "reproduce" should always mean reproduce performance. With the magic of SGD, wrong deep learning code often appears to work, especially if you try it on toy datasets. Github is full of deep learning code that "implements" but does not "reproduce" methods, and you'll not know whether the implementation is actually correct. See Unawareness of Deep Learning Mistakes.
We refuse toy examples. Instead of showing tiny CNNs trained on MNIST/Cifar10, we provide training scripts that reproduce well-known papers.
We refuse low-quality implementations. Unlike most open-source repos, which only implement methods, Tensorpack examples faithfully reproduce the experiments and performance in the paper, so you can be confident they are correct.
These are the only toy examples in Tensorpack. They are meant to be just demos:
- An illustrative MNIST example with an explanation of the framework
- Tensorpack supports any symbolic library: see the same MNIST example written with tf.layers, and with weight visualization
- A tiny Cifar ConvNet and SVHN ConvNet
- A boilerplate file to start with, for your own tasks (see the sketch after this list)
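To give a rough idea of what such a boilerplate looks like, here is a minimal sketch of a tensorpack training script. It assumes the `ModelDesc` / `launch_train_with_config` API; the one-layer model and MNIST dataflow are illustrative stand-ins, not the actual contents of the boilerplate file:

```python
import tensorflow as tf
from tensorpack import (ModelDesc, TrainConfig, SimpleTrainer,
                        BatchData, launch_train_with_config)
from tensorpack.dataflow import dataset


class Model(ModelDesc):
    def inputs(self):
        # Declare the input tensors the graph expects.
        return [tf.TensorSpec((None, 28, 28), tf.float32, 'image'),
                tf.TensorSpec((None,), tf.int32, 'label')]

    def build_graph(self, image, label):
        # Stand-in model: a single linear layer on flattened pixels.
        logits = tf.layers.dense(tf.layers.flatten(image), 10)
        return tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                logits=logits, labels=label), name='cross_entropy_loss')

    def optimizer(self):
        return tf.train.AdamOptimizer(1e-3)


if __name__ == '__main__':
    dataflow = BatchData(dataset.Mnist('train'), 128)
    config = TrainConfig(model=Model(), dataflow=dataflow, max_epoch=10)
    launch_train_with_config(config, SimpleTrainer())
```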
Reinforcement Learning:

Name | Performance
---|---
Deep Q-Network (DQN) variants on Atari games, including DQN, DoubleDQN, DuelingDQN | reproduce the paper
Asynchronous Advantage Actor-Critic (A3C) on Atari games | reproduce the paper
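For reference, these variants differ mainly in the bootstrap target and the Q-head; the formulas below are the standard ones from the papers (with online-network weights and target-network weights), not tensorpack-specific notation:

```latex
% theta: online-network weights, theta^-: target-network weights.
y_{\text{DQN}}    = r + \gamma \max_{a'} Q_{\theta^-}(s', a')
y_{\text{Double}} = r + \gamma \, Q_{\theta^-}\big(s', \arg\max_{a'} Q_{\theta}(s', a')\big)
% Dueling splits the Q-head into value and advantage streams:
Q(s, a) = V(s) + A(s, a) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a')
```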
Speech / NLP:

Name | Performance
---|---
LSTM-CTC for speech recognition | reproduce the paper
char-rnn for fun | fun
LSTM language model on PennTreebank | reproduce reference code
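To give a sense of the core op behind the LSTM-CTC example, here is a minimal sketch of TensorFlow's built-in CTC loss and greedy decoder. The shapes, random features, and label indices are made-up placeholders, not the example's real speech pipeline:

```python
import tensorflow as tf

time_steps, batch, num_classes = 50, 2, 11  # class 10 acts as the CTC blank
# Stand-in for RNN outputs: time-major logits [time, batch, classes].
logits = tf.random_normal([time_steps, batch, num_classes])
seq_len = tf.constant([50, 42], dtype=tf.int32)  # true length per utterance

# CTC labels are a SparseTensor of class indices per utterance.
labels = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
                         values=tf.constant([3, 7, 5], tf.int32),
                         dense_shape=[2, 2])

loss = tf.reduce_mean(tf.nn.ctc_loss(labels, logits, seq_len))
decoded, _ = tf.nn.ctc_greedy_decoder(logits, seq_len)  # for evaluation

with tf.Session() as sess:
    print(sess.run(loss))
```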