
[Fix] Lazily instantiate backends to avoid unnecessary GPU memory pre-allocations #520

Merged: 18 commits into PythonOT:master on Sep 21, 2023

Conversation

@kachayev (Collaborator) commented Sep 5, 2023

Types of changes

  • Instead of pre-creating backend objects, we now only register backend "implementations" (classes); see the sketch after this list.
  • Instances are created lazily, when a backend is first requested.
  • Tests are updated accordingly.
  • OS environment flags can disable individual backends (the corresponding import is skipped entirely).
  • The quickstart guide now explains how to switch backend support off.
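
A minimal sketch of the lazy-registration pattern described in the list above. This is an illustration of the idea, not POT's actual implementation, and the environment-variable name used here is hypothetical.

import os

# Registered backend classes; no instances are created at import time, so no
# GPU memory is touched just by importing the library.
_BACKEND_IMPLEMENTATIONS = []
# Lazily created singletons, keyed by class.
_BACKEND_INSTANCES = {}


def register_backend_implementation(cls):
    """Record a backend class so it can be instantiated later, on demand."""
    _BACKEND_IMPLEMENTATIONS.append(cls)
    return cls


def get_backend_instance(cls):
    """Create the backend instance on first request and cache it afterwards."""
    if cls not in _BACKEND_INSTANCES:
        _BACKEND_INSTANCES[cls] = cls()
    return _BACKEND_INSTANCES[cls]


class TorchBackend:
    """Placeholder for a backend whose constructor may allocate GPU memory."""


# Hypothetical environment-variable name, for illustration only: when it is
# set before the library is imported, the backend is never even registered.
if not os.environ.get("POT_BACKEND_DISABLE_TORCH"):
    register_backend_implementation(TorchBackend)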

Motivation and context / Related issue

Closes #516.

How has this been tested (if it applies)

Tests are updated to reflect the changes in how backends are loaded. Configuration is applied for TF and JAX so that they do not allocate GPU memory up front (which would obviously hurt test performance).
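
For reference, a common way to keep TensorFlow and JAX from pre-allocating GPU memory in a test session looks roughly like the following; this is a sketch of the kind of configuration mentioned above, not necessarily the exact settings used in this PR's test suite.

import os

# JAX: disable XLA's default preallocation of most of the GPU memory.
# (Must be set before jax is imported anywhere in the process.)
os.environ.setdefault("XLA_PYTHON_CLIENT_PREALLOCATE", "false")

import tensorflow as tf

# TensorFlow: grow GPU memory usage on demand instead of reserving it all.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)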

PR checklist

  • I have read the CONTRIBUTING document.
  • The documentation is up-to-date with the changes I made (check build artifacts).
  • All tests passed, and additional code has been covered with new tests.
  • I have added the PR and Issue fix to the RELEASES.md file.

@kachayev changed the title from "Lazily instanciate backends to avoid unnecessary GPU memory pre-allocations" to "Lazily instantiate backends to avoid unnecessary GPU memory pre-allocations" on Sep 5, 2023
@kachayev (Collaborator, Author) commented Sep 5, 2023

Also, note that there's a somewhat related older PR which should be merged as well: #517

codecov bot commented Sep 5, 2023

Codecov Report

Merging #520 (a016406) into master (4cf4492) will decrease coverage by 0.10%.
The diff coverage is 90.54%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #520      +/-   ##
==========================================
- Coverage   96.08%   95.99%   -0.10%     
==========================================
  Files          65       65              
  Lines       13784    13816      +32     
==========================================
+ Hits        13245    13263      +18     
- Misses        539      553      +14     

test/conftest.py Outdated
# before taking list of backends, we need to make sure all
# available implementations are instantiated. looks somewhat hacky,
# but hopefully it won't be needed for a common library use
for backend_impl in get_available_backend_implementations():
Collaborator:

Maybe we should do that in get_backend_list? I mean, this function should return all backends even if a backend was not used in the past... Of course we need to put more detail in the doc of get_backend_list to be clear about the potential memory problems.

Collaborator (Author):

This depends mainly on the use case get_backend_list was originally dedicated for. get_available_backend_implementations clearly conveys the intention: those are backends "available" to be used. But I'm not sure what the use case for get_backend_list would be, other than running tests on all backends at once. If we want get_backend_list to always force instantiation of all of them - sure, we can do that. We just need to be very careful with the documentation.
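
To illustrate the distinction discussed in this thread, a short usage sketch (function names taken from the PR; whether get_backend_list forces instantiation of every backend depends on the outcome of this discussion):

from ot.backend import get_available_backend_implementations, get_backend_list

# Returns the registered backend classes only; nothing is instantiated,
# so no GPU memory is allocated by this call.
implementations = get_available_backend_implementations()

# Returns backend instances; if it forces instantiation of every available
# backend, calling it may allocate GPU memory for torch/tf/jax/cupy.
backends = get_backend_list()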

RELEASES.md (review thread outdated, resolved)
@rflamary changed the title from "Lazily instantiate backends to avoid unnecessary GPU memory pre-allocations" to "[Fix] Lazily instantiate backends to avoid unnecessary GPU memory pre-allocations" on Sep 21, 2023
@rflamary merged commit 5ab00dd into PythonOT:master on Sep 21, 2023

Successfully merging this pull request may close these issues.

Importing POT consumes 22G GPU memory. [ONLY FOR 0.9.1]