Switch formatting from black+isort to µfmt (black+µsort) (#2460)
* Switch to usort

* Apply usort to covered files

* Update contributing docs for usort

* use usort diff instead of check
amyreese authored Feb 14, 2022
1 parent 4c2b62b commit 4afaf9e
Showing 77 changed files with 147 additions and 130 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -39,3 +39,7 @@ coverage.xml
 /docs/src/
 /docs/build/
 /docs/source/generated/
+
+# Virtualenv
+.venv/
+.python-version
17 changes: 6 additions & 11 deletions .pre-commit-config.yaml
@@ -14,18 +14,13 @@ repos:
       - id: prettier
         exclude_types: ["python", "jupyter", "shell", "gitignore"]

-  - repo: https://github.com/python/black
-    rev: 21.12b0
+  - repo: https://github.com/omnilib/ufmt
+    rev: v1.3.1
     hooks:
-      - id: black
-        language_version: python3.8
-        args: ["--config", "pyproject.toml"]
-
-  - repo: https://github.com/timothycrosley/isort
-    rev: 5.7.0
-    hooks:
-      - id: isort
-        args: ["--settings", "setup.cfg"]
+      - id: ufmt
+        additional_dependencies:
+          - black == 21.12b0
+          - usort == 1.0.1

   - repo: https://gitlab.com/pycqa/flake8
     rev: 3.8.4
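With this hook in place, pre-commit invokes µfmt on staged Python files at commit time. A minimal sketch of exercising the new hook locally (assuming pre-commit is installed; the hook id `ufmt` matches the config above):

```bash
# Install pre-commit and register the git hook defined in .pre-commit-config.yaml
pip install pre-commit
pre-commit install

# Run only the ufmt hook, against the whole tree rather than just staged files
pre-commit run ufmt --all-files
```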
29 changes: 15 additions & 14 deletions CONTRIBUTING.md
@@ -110,33 +110,34 @@ If you modify the code, you will most probably also need to code some tests to e
 - naming convention for files `test_*.py`, e.g. `test_precision.py`
 - naming of testing functions `def test_*`, e.g. `def test_precision_on_random_data()`
 - if test function should run on GPU, please **make sure to add `cuda`** in the test name, e.g. `def test_something_on_cuda()`.
-Additionally, we may want to decorate it with `@pytest.mark.skipif(not torch.cuda.is_available(), reason="Skip if no GPU")`.
-For more examples, please see https://github.com/pytorch/ignite/blob/master/tests/ignite/engine/test_create_supervised.py
+  Additionally, we may want to decorate it with `@pytest.mark.skipif(not torch.cuda.is_available(), reason="Skip if no GPU")`.
+  For more examples, please see https://github.com/pytorch/ignite/blob/master/tests/ignite/engine/test_create_supervised.py
 - if test function checks distributed configuration, we have to mark the test as `@pytest.mark.distributed` and additional
-conditions depending on the intended checks. For example, please see
-https://github.com/pytorch/ignite/blob/master/tests/ignite/metrics/test_accuracy.py
-
+  conditions depending on the intended checks. For example, please see
+  https://github.com/pytorch/ignite/blob/master/tests/ignite/metrics/test_accuracy.py
+
 New code should be compatible with Python 3.X versions. Once you finish implementing a feature or bugfix and tests,
 please run lint checking and tests:

 #### Formatting Code

-To ensure the codebase complies with a style guide, we use [flake8](https://flake8.pycqa.org/en/latest/),
-[black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/) tools to
-format and check codebase for compliance with PEP8.
+To ensure the codebase complies with a style guide, we use [flake8](https://flake8.pycqa.org/en/latest/)
+and [ufmt](https://ufmt.omnilib.dev/) ([black](https://black.readthedocs.io/en/stable/) and
+[usort](https://usort.readthedocs.io/en/stable/)) to format and check codebase for compliance with PEP8.

 ##### Formatting without pre-commit

 If you choose not to use pre-commit, you can take advantage of IDE extensions configured to black format or invoke
 black manually to format files and commit them.

-To install `flake8`, `black==21.12b0`, `isort==5.7.0` and `mypy`, please run
+To install `flake8`, `ufmt` and `mypy`, please run

 ```bash
 bash ./tests/run_code_style.sh install
 ```

 To format files and commit changes:

 ```bash
 # This should autoformat the files
 bash ./tests/run_code_style.sh fmt
@@ -147,27 +148,27 @@ git commit -m "Added awesome feature"

 ##### Formatting with pre-commit

-To automate the process, we have configured the repo with [pre-commit hooks](https://pre-commit.com/) to use black to autoformat the staged files to ensure every commit complies with a style guide. This requires some setup, which is described below:
+To automate the process, we have configured the repo with [pre-commit hooks](https://pre-commit.com/) to use µfmt to autoformat the staged files to ensure every commit complies with a style guide. This requires some setup, which is described below:

 1. Install pre-commit in your python environment.
-2. Run pre-commit install that configures a virtual environment to invoke black, isort and flake8 on commits.
+2. Run pre-commit install that configures a virtual environment to invoke ufmt and flake8 on commits.

 ```bash
 pip install pre-commit
 pre-commit install
 ```

 3. When files are committed:
-   - If the stages files are not compliant with black, black will autoformat the staged files. If this were to happen, files should be staged and committed again. See example code below.
+   - If the stages files are not compliant with black or µsort, µfmt will autoformat the staged files. If this were to happen, files should be staged and committed again. See example code below.
    - If the staged files are not compliant with flake8, errors will be raised. These errors should be fixed and the files should be committed again. See example code below.

 ```bash
 git add .
 git commit -m "Added awesome feature"
 # DONT'T WORRY IF ERRORS ARE RAISED.
-# YOUR CODE IS NOT COMPLIANT WITH flake8, isort or black
+# YOUR CODE IS NOT COMPLIANT WITH flake8, µsort or black
 # Fix any flake8 errors by following their suggestions
-# isort and black will automatically format the files so they might look different, but you'll need to stage the files
+# µfmt will automatically format the files so they might look different, but you'll need to stage the files
 # again for committing
 # After fixing any flake8 errors
 git add .
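The commit message notes switching to `usort diff` instead of `usort check`. As a rough sketch of the difference when the tools are invoked directly (assuming `usort` and `ufmt` are installed; the paths are illustrative):

```bash
# `check` only reports which files would be re-sorted
usort check ignite/ tests/

# `diff` additionally prints the would-be changes as a patch,
# which is friendlier in CI logs
usort diff ignite/ tests/

# The combined tool applies usort's import sorting plus black's formatting
ufmt diff ignite/ tests/    # show what would change
ufmt format ignite/ tests/  # rewrite files in place
```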
2 changes: 1 addition & 1 deletion examples/contrib/cifar10/main.py
@@ -6,7 +6,7 @@
 import torch.nn as nn
 import torch.optim as optim
 import utils
-from torch.cuda.amp import GradScaler, autocast
+from torch.cuda.amp import autocast, GradScaler

 import ignite
 import ignite.distributed as idist
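The one-line churn repeated across the example files below all stems from the same rule change: isort's default mode compares imported names case-sensitively, so CamelCase names like `GradScaler` sort ahead of lowercase ones, while µsort compares case-insensitively. A plain-Python analogy of the two orderings (illustrative only, not µsort's actual implementation):

```python
names = ["GradScaler", "autocast"]

# isort default: case-sensitive comparison, uppercase sorts first (old order)
print(sorted(names))                 # ['GradScaler', 'autocast']

# usort: case-insensitive comparison (new order)
print(sorted(names, key=str.lower))  # ['autocast', 'GradScaler']
```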
2 changes: 1 addition & 1 deletion examples/contrib/cifar100_amp_benchmark/benchmark_fp32.py
@@ -6,7 +6,7 @@
 from utils import get_train_eval_loaders

 from ignite.contrib.handlers import ProgressBar
-from ignite.engine import Engine, Events, convert_tensor, create_supervised_evaluator
+from ignite.engine import convert_tensor, create_supervised_evaluator, Engine, Events
 from ignite.handlers import Timer
 from ignite.metrics import Accuracy, Loss

@@ -7,7 +7,7 @@
 from utils import get_train_eval_loaders

 from ignite.contrib.handlers import ProgressBar
-from ignite.engine import Engine, Events, convert_tensor, create_supervised_evaluator
+from ignite.engine import convert_tensor, create_supervised_evaluator, Engine, Events
 from ignite.handlers import Timer
 from ignite.metrics import Accuracy, Loss

@@ -1,13 +1,13 @@
 import fire
 import torch
-from torch.cuda.amp import GradScaler, autocast
+from torch.cuda.amp import autocast, GradScaler
 from torch.nn import CrossEntropyLoss
 from torch.optim import SGD
 from torchvision.models import wide_resnet50_2
 from utils import get_train_eval_loaders

 from ignite.contrib.handlers import ProgressBar
-from ignite.engine import Engine, Events, convert_tensor, create_supervised_evaluator
+from ignite.engine import convert_tensor, create_supervised_evaluator, Engine, Events
 from ignite.handlers import Timer
 from ignite.metrics import Accuracy, Loss

4 changes: 2 additions & 2 deletions examples/contrib/cifar10_qat/main.py
@@ -6,13 +6,13 @@
 import torch.nn as nn
 import torch.optim as optim
 import utils
-from torch.cuda.amp import GradScaler, autocast
+from torch.cuda.amp import autocast, GradScaler

 import ignite
 import ignite.distributed as idist
 from ignite.contrib.engines import common
 from ignite.contrib.handlers import PiecewiseLinear
-from ignite.engine import Engine, Events, create_supervised_evaluator
+from ignite.engine import create_supervised_evaluator, Engine, Events
 from ignite.handlers import Checkpoint, DiskSaver, global_step_from_engine
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import manual_seed, setup_logger
4 changes: 2 additions & 2 deletions examples/contrib/mnist/mnist_with_clearml_logger.py
@@ -24,13 +24,13 @@
 from ignite.contrib.handlers.clearml_logger import (
     ClearMLLogger,
     ClearMLSaver,
+    global_step_from_engine,
     GradsHistHandler,
     GradsScalarHandler,
     WeightsHistHandler,
     WeightsScalarHandler,
-    global_step_from_engine,
 )
-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.handlers import Checkpoint
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import setup_logger
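The same case-insensitive rule explains why `global_step_from_engine` moves from the tail of these grouped imports up between the CamelCase handlers: lowercased, `global...` sorts before `grads...`. Again a plain-`sorted` analogy, standing in for µsort's comparator:

```python
names = ["GradsHistHandler", "WeightsHistHandler", "global_step_from_engine"]

# Case-sensitive (isort default): every CamelCase name precedes snake_case
print(sorted(names))
# -> ['GradsHistHandler', 'WeightsHistHandler', 'global_step_from_engine']

# Case-insensitive (usort): 'global...' < 'grads...', so it moves forward
print(sorted(names, key=str.lower))
# -> ['global_step_from_engine', 'GradsHistHandler', 'WeightsHistHandler']
```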
4 changes: 2 additions & 2 deletions examples/contrib/mnist/mnist_with_neptune_logger.py
@@ -28,13 +28,13 @@
 from torchvision.transforms import Compose, Normalize, ToTensor

 from ignite.contrib.handlers.neptune_logger import (
+    global_step_from_engine,
     GradsScalarHandler,
     NeptuneLogger,
     NeptuneSaver,
     WeightsScalarHandler,
-    global_step_from_engine,
 )
-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.handlers import Checkpoint
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import setup_logger
4 changes: 2 additions & 2 deletions examples/contrib/mnist/mnist_with_tensorboard_logger.py
@@ -29,14 +29,14 @@
 from torchvision.transforms import Compose, Normalize, ToTensor

 from ignite.contrib.handlers.tensorboard_logger import (
+    global_step_from_engine,
     GradsHistHandler,
     GradsScalarHandler,
     TensorboardLogger,
     WeightsHistHandler,
     WeightsScalarHandler,
-    global_step_from_engine,
 )
-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.handlers import ModelCheckpoint
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import setup_logger
2 changes: 1 addition & 1 deletion examples/contrib/mnist/mnist_with_tqdm_logger.py
@@ -9,7 +9,7 @@
 from torchvision.transforms import Compose, Normalize, ToTensor

 from ignite.contrib.handlers import ProgressBar
-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.metrics import Accuracy, Loss, RunningAverage


4 changes: 2 additions & 2 deletions examples/contrib/mnist/mnist_with_visdom_logger.py
@@ -28,12 +28,12 @@
 from torchvision.transforms import Compose, Normalize, ToTensor

 from ignite.contrib.handlers.visdom_logger import (
+    global_step_from_engine,
     GradsScalarHandler,
     VisdomLogger,
     WeightsScalarHandler,
-    global_step_from_engine,
 )
-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.handlers import ModelCheckpoint
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import setup_logger
4 changes: 2 additions & 2 deletions examples/contrib/mnist/mnist_with_wandb_logger.py
@@ -25,8 +25,8 @@
 from torchvision.datasets import MNIST
 from torchvision.transforms import Compose, Normalize, ToTensor

-from ignite.contrib.handlers.wandb_logger import WandBLogger, global_step_from_engine
-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.contrib.handlers.wandb_logger import global_step_from_engine, WandBLogger
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.handlers import ModelCheckpoint
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import setup_logger
2 changes: 1 addition & 1 deletion examples/contrib/transformers/main.py
@@ -7,7 +7,7 @@
 import torch.nn as nn
 import torch.optim as optim
 import utils
-from torch.cuda.amp import GradScaler, autocast
+from torch.cuda.amp import autocast, GradScaler

 import ignite
 import ignite.distributed as idist
2 changes: 1 addition & 1 deletion examples/mnist/mnist.py
@@ -9,7 +9,7 @@
 from torchvision.transforms import Compose, Normalize, ToTensor
 from tqdm import tqdm

-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import setup_logger

2 changes: 1 addition & 1 deletion examples/mnist/mnist_save_resume_engine.py
@@ -11,7 +11,7 @@
 from torchvision.transforms import Compose, Normalize, ToTensor
 from tqdm import tqdm

-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.handlers import Checkpoint, DiskSaver
 from ignite.metrics import Accuracy, Loss
 from ignite.utils import manual_seed
2 changes: 1 addition & 1 deletion examples/mnist/mnist_with_tensorboard.py
@@ -25,7 +25,7 @@
 from torchvision.datasets import MNIST
 from torchvision.transforms import Compose, Normalize, ToTensor

-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.metrics import Accuracy, Loss

 try:
2 changes: 1 addition & 1 deletion examples/mnist/mnist_with_tensorboard_on_tpu.py
@@ -25,7 +25,7 @@
 from torchvision.datasets import MNIST
 from torchvision.transforms import Compose, Normalize, ToTensor

-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.metrics import Accuracy, Loss, RunningAverage

 try:
2 changes: 1 addition & 1 deletion examples/mnist/mnist_with_visdom.py
@@ -9,7 +9,7 @@
 from torchvision.datasets import MNIST
 from torchvision.transforms import Compose, Normalize, ToTensor

-from ignite.engine import Events, create_supervised_evaluator, create_supervised_trainer
+from ignite.engine import create_supervised_evaluator, create_supervised_trainer, Events
 from ignite.metrics import Accuracy, Loss

 try:
@@ -5,15 +5,15 @@

 import torch
 from apex import amp
-from py_config_runner.config_utils import TRAINVAL_CONFIG, assert_config, get_params
+from py_config_runner.config_utils import assert_config, get_params, TRAINVAL_CONFIG
 from py_config_runner.utils import set_seed
 from utils import exp_tracking
 from utils.handlers import predictions_gt_images_handler

 import ignite
 import ignite.distributed as idist
 from ignite.contrib.engines import common
-from ignite.engine import Engine, Events, _prepare_batch, create_supervised_evaluator
+from ignite.engine import _prepare_batch, create_supervised_evaluator, Engine, Events
 from ignite.metrics import Accuracy, TopKCategoricalAccuracy
 from ignite.utils import setup_logger

4 changes: 2 additions & 2 deletions examples/references/segmentation/pascal_voc2012/main.py
@@ -6,14 +6,14 @@
 import torch

 try:
-    from torch.cuda.amp import GradScaler, autocast
+    from torch.cuda.amp import autocast, GradScaler
 except ImportError:
     raise RuntimeError("Please, use recent PyTorch version, e.g. >=1.6.0")

 import dataflow as data
 import utils
 import vis
-from py_config_runner import ConfigObject, InferenceConfigSchema, TrainvalConfigSchema, get_params
+from py_config_runner import ConfigObject, get_params, InferenceConfigSchema, TrainvalConfigSchema

 import ignite.distributed as idist
 from ignite.contrib.engines import common
2 changes: 1 addition & 1 deletion ignite/contrib/engines/__init__.py
@@ -1 +1 @@
-from ignite.contrib.engines.tbptt import Tbptt_Events, create_supervised_tbptt_trainer
+from ignite.contrib.engines.tbptt import create_supervised_tbptt_trainer, Tbptt_Events
4 changes: 2 additions & 2 deletions ignite/contrib/engines/common.py
@@ -1,7 +1,7 @@
 import numbers
 import warnings
 from functools import partial
-from typing import Any, Callable, Dict, Iterable, Mapping, Optional, Sequence, Union, cast
+from typing import Any, Callable, cast, Dict, Iterable, Mapping, Optional, Sequence, Union

 import torch
 import torch.nn as nn
@@ -12,6 +12,7 @@
 import ignite.distributed as idist
 from ignite.contrib.handlers import (
     ClearMLLogger,
+    global_step_from_engine,
     LRScheduler,
     MLflowLogger,
     NeptuneLogger,
@@ -20,7 +21,6 @@
     TensorboardLogger,
     VisdomLogger,
     WandBLogger,
-    global_step_from_engine,
 )
 from ignite.contrib.handlers.base_logger import BaseLogger
 from ignite.contrib.metrics import GpuInfo
2 changes: 1 addition & 1 deletion ignite/contrib/engines/tbptt.py
@@ -6,7 +6,7 @@
 import torch.nn as nn
 from torch.optim.optimizer import Optimizer

-from ignite.engine import Engine, EventEnum, _prepare_batch
+from ignite.engine import _prepare_batch, Engine, EventEnum
 from ignite.utils import apply_to_tensor


5 changes: 2 additions & 3 deletions ignite/contrib/handlers/__init__.py
@@ -7,16 +7,15 @@
 from ignite.contrib.handlers.trains_logger import TrainsLogger
 from ignite.contrib.handlers.visdom_logger import VisdomLogger
 from ignite.contrib.handlers.wandb_logger import WandBLogger
-from ignite.handlers import EpochOutputStore  # ref
-from ignite.handlers import global_step_from_engine  # ref
+from ignite.handlers import EpochOutputStore, global_step_from_engine  # ref # ref
 from ignite.handlers.lr_finder import FastaiLRFinder
 from ignite.handlers.param_scheduler import (
     ConcatScheduler,
     CosineAnnealingScheduler,
+    create_lr_scheduler_with_warmup,
     LinearCyclicalScheduler,
     LRScheduler,
     ParamGroupScheduler,
     PiecewiseLinear,
-    create_lr_scheduler_with_warmup,
 )
 from ignite.handlers.time_profilers import BasicTimeProfiler, HandlersTimeProfiler
