
PrecisionPlugin should be the source of truth for the precision, amp_backend, and amp_level #11814

Closed
ananthsub opened this issue Feb 8, 2022 · 5 comments
Labels: precision: amp (Automatic Mixed Precision), precision: apex (removed) (NVIDIA/apex precision), precision: double (Double precision), refactor

Comments

ananthsub (Contributor) commented Feb 8, 2022

Proposed refactor

Motivation

For precision, amp_backend, and amp_level to truly be pluggable, the precision plugin instance should be the source of truth for these properties within the Trainer. This has several benefits:

  • Custom precision support is no longer gated on the framework's enums. Today the supported values are hardcoded here:

    https://github.com/PyTorchLightning/pytorch-lightning/blob/32e7d32956e1685d36f2ab0ca3770baa2f76ce10/pytorch_lightning/utilities/enums.py#L71-L104

    If I want to try out 4-bit precision, I should be able to without needing to get it added to this enum (see the sketch after this list).

  • This also simplifies the monolithic accelerator_connector file, which does all of this validation today: each individual plugin can validate its own configuration, which removes the need for the framework to specify enums.
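Purely for illustration, a user-defined plugin of the kind this would enable could look like the snippet below. The class name and the "int4" value are made up, and it assumes the proposed precision attribute exists on the base class:

```python
# Hypothetical example, not real Lightning code: a user-defined 4-bit
# precision plugin. It assumes the proposed `precision` attribute lives on
# the PrecisionPlugin base class. The "int4" value is not a member of the
# PrecisionType enum, and with this refactor it would not need to be.
from pytorch_lightning.plugins.precision import PrecisionPlugin


class FourBitPrecisionPlugin(PrecisionPlugin):
    precision = "int4"
```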

Pitch

Add precision, amp_backend, and amp_level as attributes to the PrecisionPlugin base (a sketch follows the list below), then:

  • Change all Trainer properties to refer to them.
  • Remove the PrecisionType and AMPType enums. For the plugins that the framework natively supports, we can hardcode the logic for instantiating them.
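A minimal sketch of what this could look like, assuming the proposal is adopted. The attribute defaults and the Trainer wiring shown here are illustrative only, not the actual Lightning classes:

```python
# Illustrative sketch of the proposal; names and defaults are assumptions,
# not the real pytorch_lightning implementation.
from typing import Optional


class PrecisionPlugin:
    """Single source of truth for precision settings."""

    precision: str = "32"              # e.g. "16", "bf16", "32", or a custom value
    amp_backend: Optional[str] = None  # e.g. "native" or "apex"; None when AMP is unused
    amp_level: Optional[str] = None    # e.g. "O2" for apex; None otherwise


class NativeMixedPrecisionPlugin(PrecisionPlugin):
    precision = "16"
    amp_backend = "native"


class Trainer:
    def __init__(self, precision_plugin: PrecisionPlugin) -> None:
        self._precision_plugin = precision_plugin

    # The Trainer properties simply defer to the plugin instead of re-deriving
    # the values from PrecisionType/AMPType enums in the accelerator connector.
    @property
    def precision(self) -> str:
        return self._precision_plugin.precision

    @property
    def amp_backend(self) -> Optional[str]:
        return self._precision_plugin.amp_backend

    @property
    def amp_level(self) -> Optional[str]:
        return self._precision_plugin.amp_level
```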

Additional context



cc @justusschock @awaelchli @akihironitta @rohitgr7 @carmocca

ananthsub added the refactor, precision: apex (removed), precision: double, and precision: amp labels Feb 8, 2022
edward-io assigned and then unassigned themselves Feb 9, 2022
daniellepintz self-assigned this Mar 2, 2022
daniellepintz (Contributor) commented:

@ananthsub I see the precision classes already have the class attribute backend. Do you envision renaming this to amp_backend? If so, I imagine this would involve a deprecation cycle:

https://github.com/PyTorchLightning/pytorch-lightning/blob/a14783ea8c4475c59b5bbaff81f592b3e8d90481/pytorch_lightning/plugins/precision/native_amp.py#L42
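For reference, a deprecation cycle for such a rename could look roughly like the following. This is only a sketch of the general pattern, not the code from the eventual PR:

```python
# Sketch of a deprecation shim for renaming `backend` to `amp_backend`;
# class name and values are illustrative only.
import warnings


class NativeMixedPrecisionPlugin:
    # New canonical attribute.
    amp_backend = "native"

    @property
    def backend(self) -> str:
        # Keep the old name working for one release cycle while warning users.
        warnings.warn(
            "`backend` has been renamed to `amp_backend` and will be removed "
            "in a future release.",
            DeprecationWarning,
        )
        return self.amp_backend
```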

justusschock (Member) commented:

@daniellepintz With #12069 the backend would be deprecated :)

daniellepintz (Contributor) commented:

I submitted a PR for the first part of this and would appreciate feedback. I was planning to remove the PrecisionType and AMPType enums in a separate PR, but let me know if it would be better to do it all in one.

cc @ananthsub @four4fish

daniellepintz (Contributor) commented Mar 14, 2022

@ananthsub I see in #12069 we are deprecating the PrecisionPlugin.backend and Trainer.amp_backend properties and recommending people use isinstance checks against the PrecisionPlugin classes instead.

Given this, does it still make sense to add an amp_backend property to the PrecisionPlugin?
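For context, the isinstance-based pattern recommended in #12069 looks roughly like this; the exact import paths and the trainer.precision_plugin attribute may differ across Lightning versions:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.plugins.precision import NativeMixedPrecisionPlugin


def uses_native_amp(trainer: Trainer) -> bool:
    # Check the plugin instance's type directly instead of comparing
    # trainer.amp_backend against the AMPType enum.
    return isinstance(trainer.precision_plugin, NativeMixedPrecisionPlugin)
```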

carmocca (Contributor) commented Aug 5, 2022

To recap here:

I don't think there's anything else to decide here, feel free to re-open if I missed something. Otherwise let's continue in the linked open issues.

carmocca closed this as not planned Aug 5, 2022