Merge branch 'master' into fix-exception-chaining
akihironitta committed Oct 13, 2020
2 parents ad3b83d + ecb852d commit bdd9284
Showing 74 changed files with 5,153 additions and 223 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/ci_test-base.yml
@@ -41,14 +41,14 @@ jobs:
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}-pip-${{ hashFiles('requirements/base.txt') }}
key: ${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}-pip-${{ hashFiles('requirements.txt') }}
restore-keys: |
${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}-pip-
- name: Install dependencies
run: |
python -m pip install --upgrade --user pip
pip install --requirement ./requirements/base.txt --quiet --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --upgrade
pip install --requirement ./requirements.txt --quiet --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --upgrade
pip install --requirement ./requirements/test.txt --quiet --upgrade-strategy only-if-needed
# pip install tox coverage
python --version
@@ -66,7 +66,7 @@ jobs:
- name: Test Package [only]
run: |
# NOTE: run coverage on tests does not propagate failure status for Win, https://github.com/nedbat/coveragepy/issues/1003
coverage run --source pl_bolts -m pytest pl_bolts -v --junitxml=junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml --ignore=pl_bolts/datamodules --ignore=pl_bolts/models/self_supervised/amdim/transforms.py
coverage run --source pl_bolts -m pytest pl_bolts -v --junitxml=junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml --ignore=pl_bolts/datamodules --ignore=pl_bolts/models/self_supervised/amdim/transforms.py --ignore=pl_bolts/models/rl
- name: Upload pytest test results
uses: actions/upload-artifact@master
4 changes: 2 additions & 2 deletions .github/workflows/ci_test-full.yml
@@ -45,7 +45,7 @@ jobs:
- name: Set min. dependencies
if: matrix.requires == 'minimal'
run: |
python -c "fpath = 'requirements/base.txt' ; req = open(fpath).read().replace('>=', '==') ; open(fpath, 'w').write(req)"
python -c "fpath = 'requirements.txt' ; req = open(fpath).read().replace('>=', '==') ; open(fpath, 'w').write(req)"
python -c "fpath = 'requirements/models.txt' ; req = open(fpath).read().replace('>=', '==') ; open(fpath, 'w').write(req)"
python -c "fpath = 'requirements/loggers.txt' ; req = open(fpath).read().replace('>=', '==') ; open(fpath, 'w').write(req)"
python -c "fpath = 'requirements/test.txt' ; req = open(fpath).read().replace('>=', '==') ; open(fpath, 'w').write(req)"
@@ -61,7 +61,7 @@ jobs:
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-pip-py${{ matrix.python-version }}-${{ matrix.requires }}-${{ hashFiles('requirements/base.txt') }}-${{ hashFiles('requirements/modules.txt') }}
key: ${{ runner.os }}-pip-py${{ matrix.python-version }}-${{ matrix.requires }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements/modules.txt') }}
restore-keys: |
${{ runner.os }}-pip-py${{ matrix.python-version }}-${{ matrix.requires }}-
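The `python -c` one-liners in the "Set min. dependencies" step above all do the same thing: rewrite a requirements file so every `>=` lower bound becomes an exact `==` pin. Here is the same logic expanded into a small, readable sketch (the file list simply mirrors the workflow step):

```python
# Equivalent, expanded form of the workflow's `python -c` one-liners:
# pin each ">=x.y" requirement to the exact minimum version "==x.y".

def pin_to_minimal(fpath: str) -> None:
    """Rewrite a requirements file in place, turning '>=' constraints into '=='."""
    with open(fpath) as fp:
        requirements = fp.read()
    with open(fpath, "w") as fp:
        fp.write(requirements.replace(">=", "=="))


for path in ("requirements.txt", "requirements/models.txt",
             "requirements/loggers.txt", "requirements/test.txt"):
    pin_to_minimal(path)
```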
4 changes: 2 additions & 2 deletions .github/workflows/code-format.yml
@@ -23,14 +23,14 @@ jobs:
uses: actions/cache@v2
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements/base.txt') }}
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install dependencies
run: |
# python -m pip install --upgrade --user pip
pip install -r requirements/base.txt -U -f https://download.pytorch.org/whl/torch_stable.html -q
pip install -r requirements.txt -U -f https://download.pytorch.org/whl/torch_stable.html -q
pip install flake8
python --version
pip --version
6 changes: 3 additions & 3 deletions .github/workflows/docs-check.yml
@@ -36,7 +36,7 @@ jobs:
# uses: actions/cache@v2
# with:
# path: ~/.cache/pip
# key: ${{ runner.os }}-pip-${{ hashFiles('requirements/base.txt') }}
# key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
# restore-keys: |
# ${{ runner.os }}-pip-
#
@@ -75,13 +75,13 @@ jobs:
uses: actions/cache@v2
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements/base.txt') }}
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install dependencies
run: |
pip install --requirement requirements/base.txt --upgrade-strategy only-if-needed --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --quiet
pip install --requirement requirements.txt --upgrade-strategy only-if-needed --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --quiet
pip install --requirement docs/requirements.txt
# install Texlive, see https://linuxconfig.org/how-to-install-latex-on-ubuntu-20-04-focal-fossa-linux
sudo apt-get update && sudo apt-get install -y texlive-latex-extra dvipng texlive-pictures
1 change: 0 additions & 1 deletion .gitignore
@@ -138,7 +138,6 @@ MNIST

# Lightning logs
lightning_logs
datasets
*.gz
*-batches-py
simclr.py
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -31,6 +31,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added Linear Regression
- Added Moco2g
- Added simclr
- Added RL module
- Added Loggers
- Added Transforms
- Added Tiny Datasets
@@ -42,12 +43,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Changed

- Device is no longer set in the DQN model init
- Moved RL loss function to the losses module
- Moved rl.common.experience to datamodules
- train_batch function to VPG model to generate batch of data at each step (POC)
- Experience source no longer gets initialized with a device, instead the device is passed at each step()
- Refactored ExperienceSource classes to handle multiple environments.

### Removed

- Removed N-Step DQN as the latest version of the DQN supports N-Step by setting the `n_step` arg to n
- Deprecated common.experience

### Fixed
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -25,6 +25,7 @@ recursive-exclude docs *
exclude docs

# Include the Requirements
include requirements.txt
recursive-include requirements *.txt

# Exclude build configs
2 changes: 1 addition & 1 deletion README.md
@@ -57,7 +57,7 @@ Install bleeding-edge (no guarantees)
pip install git+https://github.com/PytorchLightning/pytorch-lightning-bolts.git@master --upgrade
```

In case you wan to have full experience you can install all optional packages at once
In case you want to have full experience you can install all optional packages at once
```bash
pip install pytorch-lightning-bolts["extra"]
```
4 changes: 2 additions & 2 deletions docs/source/classic_ml.rst
@@ -9,7 +9,7 @@ half-precision training.
Linear Regression
-----------------
Linear regression fits a linear model between a real-valued target variable :math:`y` and one or more features :math:`X`. We
estimate the regression coefficients that minimizes the mean squared error between the predicted and true target
estimate the regression coefficients that minimize the mean squared error between the predicted and true target
values.

We formulate the linear regression model as a single-layer neural network. By default we include only one neuron in
@@ -69,7 +69,7 @@ Add either L1 or L2 regularization, or both, by specifying the regularization st
trainer.test(test_dataloaders=dm.test_dataloader(batch_size=12))
Any input will be flattened across all dimensions except the firs one (batch).
Any input will be flattened across all dimensions except the first one (batch).
This means images, sound, etc... work out of the box.

.. code-block:: python
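To put the snippets above in context, here is a minimal end-to-end sketch of fitting the bolts `LinearRegression` model on an sklearn regression dataset. It assumes the `LinearRegression(input_dim=...)` constructor and the `SklearnDataModule(X, y)` helper referenced elsewhere in the bolts docs, so treat the argument names as illustrative rather than definitive:

```python
import pytorch_lightning as pl
from sklearn.datasets import load_diabetes

from pl_bolts.datamodules import SklearnDataModule
from pl_bolts.models.regression import LinearRegression

# 10 real-valued features, real-valued target.
X, y = load_diabetes(return_X_y=True)
dm = SklearnDataModule(X, y)  # wraps train/val/test splits and dataloaders

model = LinearRegression(input_dim=10)  # a single linear layer by default
trainer = pl.Trainer(max_epochs=10)
trainer.fit(model, dm.train_dataloader(), dm.val_dataloader())
trainer.test(test_dataloaders=dm.test_dataloader(batch_size=12))
```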
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -328,7 +328,7 @@ def package_list_from_file(file):
MOCK_PACKAGES = []
if SPHINX_MOCK_REQUIREMENTS:
# mock also base packages when we are on RTD since we don't install them there
MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements', 'base.txt'))
MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements.txt'))
MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements', 'models.txt'))
MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements', 'loggers.txt'))

13 changes: 4 additions & 9 deletions docs/source/dataloaders.rst
@@ -3,19 +3,14 @@ AsynchronousLoader
This dataloader behaves identically to the standard pytorch dataloader, but will transfer
data asynchronously to the GPU with training. You can also use it to wrap an existing dataloader.

Example::
Example:

.. code-block:: python
dataloader = AsynchronousLoader(DataLoader(ds, batch_size=16), device=device)
for b in dataloader:
...
.. autoclass:: pl_bolts.datamodules.async_dataloader.AsynchronousLoader
:noindex:

------------------

DummyDataset
------------

.. autoclass:: pl_bolts.datamodules.dummy_dataset.DummyDataset
:noindex:
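The two-line example above leaves `ds` and `device` undefined. A slightly fuller, hedged sketch is shown below, assuming a CUDA device is available; the wrapped dataset is arbitrary and any map-style dataset works:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from pl_bolts.datamodules.async_dataloader import AsynchronousLoader

device = torch.device("cuda:0")  # asynchronous transfer targets a GPU
ds = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))

# Wrap an existing dataloader; batches are copied to `device` in the background.
dataloader = AsynchronousLoader(DataLoader(ds, batch_size=16), device=device)

for x, y in dataloader:
    ...  # x and y already live on the GPU here
```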
4 changes: 2 additions & 2 deletions docs/source/datamodules.rst
@@ -7,9 +7,9 @@ DataModules (introduced in PyTorch Lightning 0.9.0) decouple the data from a mod
is simply a collection of a training dataloader, val dataloader and test dataloader. In addition,
it specifies how to:

- Downloading/preparing data.
- Download/prepare data.
- Train/val/test splits.
- Transforms
- Transform

Then you can use it like this:

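The usage snippet that follows "Then you can use it like this:" is collapsed in the diff. As a hedged sketch, pairing a bolts datamodule with a bolts model typically looks like the following; `MNISTDataModule` and `LitMNIST` are assumed to be importable from `pl_bolts`, so check the API reference if the names differ:

```python
import pytorch_lightning as pl

from pl_bolts.datamodules import MNISTDataModule
from pl_bolts.models import LitMNIST

dm = MNISTDataModule(data_dir=".")  # handles download, splits and transforms
model = LitMNIST()                  # any LightningModule works here

trainer = pl.Trainer(max_epochs=2)
trainer.fit(model, datamodule=dm)
trainer.test(datamodule=dm)
```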
41 changes: 41 additions & 0 deletions docs/source/datasets.rst
@@ -0,0 +1,41 @@
########
Datasets
########
Collection of useful datasets

--------

*********
Debugging
*********
Use these datasets to debug

DummyDataset
============

.. autoclass:: pl_bolts.datasets.dummy_dataset.DummyDataset
:noindex:

DummyDetectionDataset
=====================

.. autoclass:: pl_bolts.datasets.dummy_dataset.DummyDetectionDataset
:noindex:

RandomDataset
=============

.. autoclass:: pl_bolts.datasets.dummy_dataset.RandomDataset
:noindex:

RandomDictDataset
=================

.. autoclass:: pl_bolts.datasets.dummy_dataset.RandomDictDataset
:noindex:

RandomDictStringDataset
=======================

.. autoclass:: pl_bolts.datasets.dummy_dataset.RandomDictStringDataset
:noindex:
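As a quick illustration of how these debugging datasets are typically used: the constructor arguments below (per-sample tensor shapes plus `num_samples`) are an assumption based on the class name, so consult the autogenerated reference above for the exact signature.

```python
from torch.utils.data import DataLoader

# Import path taken from the autoclass directive above.
from pl_bolts.datasets.dummy_dataset import DummyDataset

# Fake "image, label" pairs for smoke-testing a training loop;
# each sample is a tuple of tensors with the requested shapes (assumed API).
ds = DummyDataset((1, 28, 28), (1,), num_samples=128)
loader = DataLoader(ds, batch_size=16)

for x, y in loader:
    print(x.shape, y.shape)  # e.g. torch.Size([16, 1, 28, 28]) torch.Size([16, 1])
    break
```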
17 changes: 16 additions & 1 deletion docs/source/index.rst
@@ -33,6 +33,13 @@ PyTorch-Lightning-Bolts documentation
sklearn_datamodule
vision_datamodules

.. toctree::
:maxdepth: 2
:name: datasets
:caption: Datasets

datasets

.. toctree::
:maxdepth: 2
:name: dataloaders
@@ -53,10 +60,17 @@ PyTorch-Lightning-Bolts documentation
:caption: Models

models_howto
autoencoders
classic_ml

.. toctree::
:maxdepth: 2
:name: vision
:caption: Vision models

autoencoders
convolutional
gans
reinforce_learn
self_supervised_models

.. toctree::
@@ -90,6 +104,7 @@ Indices and tables
readme
api/pl_bolts.callbacks
api/pl_bolts.datamodules
api/pl_bolts.datasets
api/pl_bolts.metrics
api/pl_bolts.models
api/pl_bolts.callbacks
14 changes: 7 additions & 7 deletions docs/source/introduction_guide.rst
@@ -10,7 +10,7 @@ Bolts is a Deep learning research and production toolbox of:
- Losses.
- Datasets.

**The Main goal of bolts is to enable trying new ideas as fast as possible!**
**The Main goal of Bolts is to enable trying new ideas as fast as possible!**

All models are tested (daily), benchmarked, documented and work on CPUs, TPUs, GPUs and 16-bit precision.

@@ -90,11 +90,11 @@ All models are tested (daily), benchmarked, documented and work on CPUs, TPUs, G

Community Built
---------------
Bolts are built-by the Lightning community and contributed to bolts.
The Lightning community builds bolts and contributes them to Bolts.
The lightning team guarantees that contributions are:

1. Rigorously Tested (CPUs, GPUs, TPUs).
2. Rigorously Documented.
1. Rigorously tested (CPUs, GPUs, TPUs).
2. Rigorously documented.
3. Standardized via PyTorch Lightning.
4. Optimized for speed.
5. Checked for correctness.
@@ -351,7 +351,7 @@ In case your job or research doesn't need a "hammer", we offer implementations o
which benefit from lightning's multi-GPU and TPU support.

So, now you can run huge workloads scalably, without needing to do any engineering.
For instance, here we can run Logistic Regression on Imagenet (each epoch takes about 3 minutes)!
For instance, here we can run logistic Regression on Imagenet (each epoch takes about 3 minutes)!

.. code-block:: python
@@ -414,7 +414,7 @@ But more importantly, you can scale up to many GPUs, TPUs or even CPUs
Logistic Regression
^^^^^^^^^^^^^^^^^^^
Here's an example for Logistic regression
Here's an example for logistic regression

.. code-block:: python
@@ -436,7 +436,7 @@ Here's an example for Logistic regression
trainer.test(test_dataloaders=dm.test_dataloader(batch_size=12))
Any input will be flattened across all dimensions except the firs one (batch).
Any input will be flattened across all dimensions except the first one (batch).
This means images, sound, etc... work out of the box.

.. code-block:: python
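The code block that follows "Here's an example for logistic regression" is collapsed in the diff above. Below is a minimal sketch in the same spirit; the `LogisticRegression(input_dim=..., num_classes=...)` arguments are assumptions, so check the model's reference page:

```python
import pytorch_lightning as pl
from sklearn.datasets import load_iris

from pl_bolts.datamodules import SklearnDataModule
from pl_bolts.models.regression import LogisticRegression

# 4 features, 3 classes.
X, y = load_iris(return_X_y=True)
dm = SklearnDataModule(X, y)

model = LogisticRegression(input_dim=4, num_classes=3)
trainer = pl.Trainer(max_epochs=5)
trainer.fit(model, dm.train_dataloader(), dm.val_dataloader())
trainer.test(test_dataloaders=dm.test_dataloader(batch_size=12))
```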
30 changes: 30 additions & 0 deletions docs/source/losses.rst
@@ -10,3 +10,33 @@ This package lists common losses across research domains
Your Loss
---------
We're cleaning up many of our losses, but in the meantime, submit a PR to add your loss here!

-------------

Reinforcement Learning
======================
These are common losses used in RL.

---------------

DQN Loss
--------

.. autofunction:: pl_bolts.losses.rl.dqn_loss
:noindex:

---------------

Double DQN Loss
---------------

.. autofunction:: pl_bolts.losses.rl.double_dqn_loss
:noindex:

---------------

Per DQN Loss
------------

.. autofunction:: pl_bolts.losses.rl.per_dqn_loss
:noindex:
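For orientation, the losses listed above are temporal-difference objectives. The sketch below is a generic PyTorch version of the basic DQN computation, not the exact signature of `pl_bolts.losses.rl.dqn_loss`:

```python
import torch
import torch.nn as nn


def generic_dqn_loss(batch, net, target_net, gamma: float = 0.99) -> torch.Tensor:
    """TD loss: MSE between Q(s, a) and r + gamma * max_a' Q_target(s', a')."""
    # `dones` is expected to be a boolean tensor marking terminal transitions.
    states, actions, rewards, dones, next_states = batch

    # Q-values of the actions actually taken.
    q_values = net(states).gather(1, actions.long().unsqueeze(-1)).squeeze(-1)

    with torch.no_grad():
        # Bootstrap from the frozen target network; terminal states get no future value.
        next_q = target_net(next_states).max(dim=1)[0]
        next_q[dones] = 0.0
        targets = rewards + gamma * next_q

    return nn.MSELoss()(q_values, targets)
```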
4 changes: 2 additions & 2 deletions docs/source/models.rst
@@ -15,7 +15,7 @@ by adding your contribution to bolts you get these **additional** benefits!
6. We'll pretrain expensive models for you and host weights.
7. We will improve the speed of your models!
8. Eligible for invited talks to discuss your implementation.
9. Lightning Swag + involvement in the broader contributor community :)
9. Lightning swag + involvement in the broader contributor community :)

.. note:: You still get to keep your attribution and be recognized for your work!

@@ -98,7 +98,7 @@ We request that each contribution have:
- Your name and your team's name as the implementation authors.
- Your team's affiliation
- Any generated examples, or result plots.
- Hyperparameters configurations for the results.
- Hyperparameter configurations for the results.

Thank you for all your amazing contributions!
