get help from docstring #4344

Merged: 6 commits merged into master on Oct 26, 2020
Conversation

louis-she
Contributor

What does this PR do?

Get the argparse help message for each flag from the docstring.
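For context, the example output below comes from a script that builds its command-line parser from the Trainer signature via ``Trainer.add_argparse_args``. The following is only a minimal sketch of such a script, not the actual pl_examples/basic_examples/mnist.py; the model-specific flags and the training wiring are assumptions for illustration.

# sketch_mnist_cli.py -- illustrative only, not the actual pl_examples script
from argparse import ArgumentParser

import pytorch_lightning as pl


def main():
    parser = ArgumentParser(description="MNIST example")
    # Model-specific flags (names and defaults assumed for illustration).
    parser.add_argument("--batch_size", type=int, default=32)
    parser.add_argument("--hidden_dim", type=int, default=128)
    parser.add_argument("--learning_rate", type=float, default=1e-3)
    # Adds every Trainer constructor argument as a CLI flag; with this PR,
    # each flag's help text is taken from the Trainer docstring.
    parser = pl.Trainer.add_argparse_args(parser)
    args = parser.parse_args()

    trainer = pl.Trainer.from_argparse_args(args)
    # ... build the LightningModule / DataModule and call trainer.fit(...) here


if __name__ == "__main__":
    main()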

➜ python3 pl_examples/basic_examples/mnist.py --help
usage: mnist.py [-h] [--batch_size BATCH_SIZE] [--logger [LOGGER]]
                [--checkpoint_callback [CHECKPOINT_CALLBACK]]
                [--default_root_dir DEFAULT_ROOT_DIR]
                [--gradient_clip_val GRADIENT_CLIP_VAL]
                [--process_position PROCESS_POSITION] [--num_nodes NUM_NODES]
                [--num_processes NUM_PROCESSES] [--gpus GPUS]
                [--auto_select_gpus [AUTO_SELECT_GPUS]]
                [--tpu_cores TPU_CORES] [--log_gpu_memory LOG_GPU_MEMORY]
                [--progress_bar_refresh_rate PROGRESS_BAR_REFRESH_RATE]
                [--overfit_batches OVERFIT_BATCHES]
                [--track_grad_norm TRACK_GRAD_NORM]
                [--check_val_every_n_epoch CHECK_VAL_EVERY_N_EPOCH]
                [--fast_dev_run [FAST_DEV_RUN]]
                [--accumulate_grad_batches ACCUMULATE_GRAD_BATCHES]
                [--max_epochs MAX_EPOCHS] [--min_epochs MIN_EPOCHS]
                [--max_steps MAX_STEPS] [--min_steps MIN_STEPS]
                [--limit_train_batches LIMIT_TRAIN_BATCHES]
                [--limit_val_batches LIMIT_VAL_BATCHES]
                [--limit_test_batches LIMIT_TEST_BATCHES]
                [--val_check_interval VAL_CHECK_INTERVAL]
                [--flush_logs_every_n_steps FLUSH_LOGS_EVERY_N_STEPS]
                [--log_every_n_steps LOG_EVERY_N_STEPS]
                [--accelerator ACCELERATOR]
                [--sync_batchnorm [SYNC_BATCHNORM]] [--precision PRECISION]
                [--weights_summary WEIGHTS_SUMMARY]
                [--weights_save_path WEIGHTS_SAVE_PATH]
                [--num_sanity_val_steps NUM_SANITY_VAL_STEPS]
                [--truncated_bptt_steps TRUNCATED_BPTT_STEPS]
                [--resume_from_checkpoint RESUME_FROM_CHECKPOINT]
                [--profiler [PROFILER]] [--benchmark [BENCHMARK]]
                [--deterministic [DETERMINISTIC]]
                [--reload_dataloaders_every_epoch [RELOAD_DATALOADERS_EVERY_EPOCH]]
                [--auto_lr_find [AUTO_LR_FIND]]
                [--replace_sampler_ddp [REPLACE_SAMPLER_DDP]]
                [--terminate_on_nan [TERMINATE_ON_NAN]]
                [--auto_scale_batch_size [AUTO_SCALE_BATCH_SIZE]]
                [--prepare_data_per_node [PREPARE_DATA_PER_NODE]]
                [--amp_backend AMP_BACKEND] [--amp_level AMP_LEVEL]
                [--distributed_backend DISTRIBUTED_BACKEND]
                [--automatic_optimization [AUTOMATIC_OPTIMIZATION]]
                [--hidden_dim HIDDEN_DIM] [--learning_rate LEARNING_RATE]

optional arguments:
  -h, --help            show this help message and exit
  --batch_size BATCH_SIZE
  --logger [LOGGER]     Logger (or iterable collection of loggers) for
                        experiment tracking.
  --checkpoint_callback [CHECKPOINT_CALLBACK]
                        Callback for checkpointing.
  --default_root_dir DEFAULT_ROOT_DIR
                        Default path for logs and weights when no
                        logger/ckpt_callback passed. Default: ``os.getcwd()``.
                        Can be remote file paths such as `s3://mybucket/path`
                        or 'hdfs://path/'
  --gradient_clip_val GRADIENT_CLIP_VAL
                        0 means don't clip.
  --process_position PROCESS_POSITION
                        orders the progress bar when running multiple models
                        on same machine.
  --num_nodes NUM_NODES
                        number of GPU nodes for distributed training.
  --num_processes NUM_PROCESSES
  --gpus GPUS           number of gpus to train on (int) or which GPUs to
                        train on (list or str) applied per node
  --auto_select_gpus [AUTO_SELECT_GPUS]
                        If enabled and `gpus` is an integer, pick available
                        gpus automatically. This is especially useful when
                        GPUs are configured to be in "exclusive mode", such
                        that only one process at a time can access them.
  --tpu_cores TPU_CORES
                        How many TPU cores to train on (1 or 8) / Single TPU
                        to train on [1]
  --log_gpu_memory LOG_GPU_MEMORY
                        None, 'min_max', 'all'. Might slow performance
  --progress_bar_refresh_rate PROGRESS_BAR_REFRESH_RATE
                        How often to refresh progress bar (in steps). Value
                        ``0`` disables progress bar. Ignored when a custom
                        callback is passed to :paramref:`~Trainer.callbacks`.
  --overfit_batches OVERFIT_BATCHES
                        Overfit a percent of training data (float) or a set
                        number of batches (int). Default: 0.0
  --track_grad_norm TRACK_GRAD_NORM
                        -1 no tracking. Otherwise tracks that p-norm. May be
                        set to 'inf' infinity-norm.
  --check_val_every_n_epoch CHECK_VAL_EVERY_N_EPOCH
                        Check val every n train epochs.
  --fast_dev_run [FAST_DEV_RUN]
                        runs 1 batch of train, test and val to find any bugs
                        (ie: a sort of unit test).
  --accumulate_grad_batches ACCUMULATE_GRAD_BATCHES
                        Accumulates grads every k batches or as set up in the
                        dict.
  --max_epochs MAX_EPOCHS
                        Stop training once this number of epochs is reached.
  --min_epochs MIN_EPOCHS
                        Force training for at least these many epochs
  --max_steps MAX_STEPS
                        Stop training after this number of steps. Disabled by
                        default (None).
  --min_steps MIN_STEPS
                        Force training for at least these number of steps.
                        Disabled by default (None).
  --limit_train_batches LIMIT_TRAIN_BATCHES
                        How much of training dataset to check (floats =
                        percent, int = num_batches)
  --limit_val_batches LIMIT_VAL_BATCHES
                        How much of validation dataset to check (floats =
                        percent, int = num_batches)
  --limit_test_batches LIMIT_TEST_BATCHES
                        How much of test dataset to check (floats = percent,
                        int = num_batches)
  --val_check_interval VAL_CHECK_INTERVAL
                        How often to check the validation set. Use float to
                        check within a training epoch, use int to check every
                        n steps (batches).
  --flush_logs_every_n_steps FLUSH_LOGS_EVERY_N_STEPS
                        How often to flush logs to disk (defaults to every 100
                        steps).
  --log_every_n_steps LOG_EVERY_N_STEPS
                        How often to log within steps (defaults to every 50
                        steps).
  --accelerator ACCELERATOR
                        Previously known as distributed_backend (dp, ddp,
                        ddp2, etc...). Can also take in an accelerator object
                        for custom hardware.
  --sync_batchnorm [SYNC_BATCHNORM]
                        Synchronize batch norm layers between process
                        groups/whole world.
  --precision PRECISION
                        Full precision (32), half precision (16). Can be used
                        on CPU, GPU or TPUs.
  --weights_summary WEIGHTS_SUMMARY
                        Prints a summary of the weights when training begins.
  --weights_save_path WEIGHTS_SAVE_PATH
                        Where to save weights if specified. Will override
                        default_root_dir for checkpoints only. Use this if for
                        whatever reason you need the checkpoints stored in a
                        different place than the logs written in
                        `default_root_dir`. Can be remote file paths such as
                        `s3://mybucket/path` or 'hdfs://path/' Defaults to
                        `default_root_dir`.
  --num_sanity_val_steps NUM_SANITY_VAL_STEPS
                        Sanity check runs n validation batches before starting
                        the training routine. Set it to `-1` to run all
                        batches in all validation dataloaders. Default: 2
  --truncated_bptt_steps TRUNCATED_BPTT_STEPS
                        Truncated back prop breaks performs backprop every k
                        steps of much longer sequence.
  --resume_from_checkpoint RESUME_FROM_CHECKPOINT
                        To resume training from a specific checkpoint pass in
                        the path here. This can be a URL.
  --profiler [PROFILER]
                        To profile individual steps during training and assist
                        in identifying bottlenecks.
  --benchmark [BENCHMARK]
                        If true enables cudnn.benchmark.
  --deterministic [DETERMINISTIC]
                        If true enables cudnn.deterministic.
  --reload_dataloaders_every_epoch [RELOAD_DATALOADERS_EVERY_EPOCH]
                        Set to True to reload dataloaders every epoch.
  --auto_lr_find [AUTO_LR_FIND]
                        If set to True, will make trainer.tune() run a
                        learning rate finder, trying to optimize initial
                        learning for faster convergence. trainer.tune() method
                        will set the suggested learning rate in self.lr or
                        self.learning_rate in the LightningModule. To use a
                        different key set a string instead of True with the
                        key name.
  --replace_sampler_ddp [REPLACE_SAMPLER_DDP]
                        Explicitly enables or disables sampler replacement. If
                        not specified this will toggled automatically when DDP
                        is used. By default it will add ``shuffle=True`` for
                        train sampler and ``shuffle=False`` for val/test
                        sampler. If you want to customize it, you can set
                        ``replace_sampler_ddp=False`` and add your own
                        distributed sampler.
  --terminate_on_nan [TERMINATE_ON_NAN]
                        If set to True, will terminate training (by raising a
                        `ValueError`) at the end of each training batch, if
                        any of the parameters or the loss are NaN or +/-inf.
  --auto_scale_batch_size [AUTO_SCALE_BATCH_SIZE]
                        If set to True, will `initially` run a batch size
                        finder trying to find the largest batch size that fits
                        into memory. The result will be stored in
                        self.batch_size in the LightningModule. Additionally,
                        can be set to either `power` that estimates the batch
                        size through a power search or `binsearch` that
                        estimates the batch size through a binary search.
  --prepare_data_per_node [PREPARE_DATA_PER_NODE]
                        If True, each LOCAL_RANK=0 will call prepare data.
                        Otherwise only NODE_RANK=0, LOCAL_RANK=0 will prepare
                        data
  --amp_backend AMP_BACKEND
                        The mixed precision backend to use ("native" or
                        "apex")
  --amp_level AMP_LEVEL
                        The optimization level to use (O1, O2, etc...).
  --distributed_backend DISTRIBUTED_BACKEND
                        deprecated. Please use 'accelerator'
  --automatic_optimization [AUTOMATIC_OPTIMIZATION]
                        If False you are responsible for calling .backward,
                        .step, zero_grad. Meant to be used with multiple
                        optimizers by advanced users.
  --hidden_dim HIDDEN_DIM
  --learning_rate LEARNING_RATE
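The feature works by parsing the per-argument descriptions out of the Trainer docstring and passing them as ``help=`` strings when the argparse flags are added. The merged implementation lives in pytorch_lightning/utilities/argparse_utils.py; the sketch below only illustrates the general idea, assuming a Google-style ``Args:`` section. The helper name ``parse_docstring_arg_help``, the simplified parsing rules, and the toy Trainer class are made up for this example and are not the merged code.

import inspect
import re
from argparse import ArgumentParser


def parse_docstring_arg_help(obj):
    """Map argument names to help text from a Google-style ``Args:`` section.

    Hypothetical helper for illustration only; not the code merged in this PR.
    """
    docstring = inspect.getdoc(obj) or ""
    help_texts = {}
    current_arg = None
    in_args = False
    for line in docstring.splitlines():
        stripped = line.strip()
        if stripped == "Args:":
            in_args = True
            continue
        if not in_args:
            continue
        if not stripped:
            # A blank line ends the Args section in this simplified sketch.
            break
        match = re.match(r"^(\w+):\s*(.*)$", stripped)
        if match:
            # New "name: description" entry.
            current_arg, text = match.groups()
            help_texts[current_arg] = text
        elif current_arg:
            # Continuation line of the previous argument's description.
            help_texts[current_arg] += " " + stripped
    return help_texts


if __name__ == "__main__":
    class Trainer:
        def __init__(self, gradient_clip_val=0.0, max_epochs=1000):
            """Toy stand-in for pl.Trainer, used only to demo the parsing.

            Args:
                gradient_clip_val: 0 means don't clip.
                max_epochs: Stop training once this number of epochs is reached.
            """

    help_texts = parse_docstring_arg_help(Trainer.__init__)
    parser = ArgumentParser()
    for name, text in help_texts.items():
        parser.add_argument(f"--{name}", help=text)
    parser.print_help()  # each flag now shows its docstring description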

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

  • Is this pull request ready for review? (if not, please submit in draft mode)

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.


pep8speaks commented Oct 25, 2020

Hello @louis-she! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-10-26 17:48:48 UTC

@mergify mergify bot requested a review from a team October 25, 2020 06:23

codecov bot commented Oct 25, 2020

Codecov Report

Merging #4344 into master will increase coverage by 0%.
The diff coverage is 100%.

@@          Coverage Diff           @@
##           master   #4344   +/-   ##
======================================
  Coverage      93%     93%           
======================================
  Files         111     111           
  Lines        8085    8085           
======================================
+ Hits         7486    7494    +8     
+ Misses        599     591    -8     

@awaelchli awaelchli added the feature Is an improvement or enhancement label Oct 25, 2020
@awaelchli awaelchli added this to the 1.1 milestone Oct 25, 2020
@awaelchli (Contributor) left a comment:

very cool addition :)

Review threads (outdated, resolved) on pytorch_lightning/utilities/argparse_utils.py and CHANGELOG.md.
@mergify mergify bot requested a review from a team October 25, 2020 07:48
Further review threads (outdated, resolved) on pytorch_lightning/utilities/argparse_utils.py.
@mergify mergify bot requested a review from a team October 25, 2020 09:39

mergify bot commented Oct 25, 2020

This pull request is now in conflict... :(

@mergify mergify bot requested a review from a team October 26, 2020 07:33

mergify bot commented Oct 26, 2020

This pull request is now in conflict... :(

@SeanNaren (Contributor) left a comment:

This is awesome!

@rohitgr7 rohitgr7 merged commit 8e3faa2 into Lightning-AI:master Oct 26, 2020
Labels: feature (Is an improvement or enhancement)
8 participants