[bugfix] Resolve memory not logged when missing metrics #8174
Conversation
Codecov Report

```diff
@@           Coverage Diff           @@
##           master   #8174    +/-  ##
========================================
- Coverage      93%     88%      -5%
========================================
  Files         211     211
  Lines       13440   13450      +10
========================================
- Hits        12474   11837     -637
- Misses        966    1613     +647
```
pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py
LGTM 😃
```python
for key, mem in self.gpus_metrics.items():
    gpu_id = int(key.split('/')[0].split(':')[1])
    if gpu_id in self.trainer.accelerator_connector.parallel_device_ids:
        self.trainer.lightning_module.log(key, mem, prog_bar=False, logger=True, on_step=True, on_epoch=False)
```
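For context, here is a minimal sketch of what the `gpu_id` extraction above does. The key format `gpu_id: 0/memory.used (MB)` is an assumption about what the GPU stats helper produces, not something shown in this diff:

```python
# Sketch of the gpu_id parsing above; the key format is assumed.
key = "gpu_id: 0/memory.used (MB)"
gpu_id = int(key.split('/')[0].split(':')[1])  # "gpu_id: 0" -> " 0" -> 0
print(gpu_id)  # prints: 0
```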
Since we're already in the trainer, why do we have to log through the lightning module's `log`?
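A hedged sketch of the alternative the question may be pointing at, meant to replace the `.log(...)` call in the loop above; it assumes `trainer.logger` is set and implements the standard `log_metrics(metrics, step)` interface:

```python
# Hypothetical alternative: log directly through the trainer's logger
# instead of LightningModule.log. Assumes self.trainer.logger exists
# and exposes log_metrics(metrics, step).
if self.trainer.logger is not None:
    self.trainer.logger.log_metrics({key: mem}, step=self.trainer.global_step)
```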
```python
@property
def gpus_metrics(self) -> Dict[str, str]:
    if self.trainer._device_type == DeviceType.GPU and self.log_gpu_memory:
        mem_map = memory.get_memory_profile(self.log_gpu_memory)
        self._gpus_metrics.update(mem_map)
    return self._gpus_metrics
```
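As a standalone illustration of the caching pattern this property uses, here is a runnable sketch with the trainer and memory helper stubbed out; the class name, fake reader, and its return value are hypothetical (the real code calls `memory.get_memory_profile(self.log_gpu_memory)`):

```python
from typing import Dict


class GpuMetricsCacheSketch:
    """Standalone sketch of the caching property above; names are hypothetical."""

    def __init__(self) -> None:
        self._gpus_metrics: Dict[str, str] = {}

    def _read_gpu_memory(self) -> Dict[str, str]:
        # Hypothetical stand-in for memory.get_memory_profile(...).
        return {"gpu_id: 0/memory.used (MB)": "1024"}

    @property
    def gpus_metrics(self) -> Dict[str, str]:
        # Refresh the cached dict on every access, mirroring the diff.
        self._gpus_metrics.update(self._read_gpu_memory())
        return self._gpus_metrics


cache = GpuMetricsCacheSketch()
print(cache.gpus_metrics)  # {'gpu_id: 0/memory.used (MB)': '1024'}
```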
Why did this PR need to add `_gpus_metrics`? It doesn't seem related to the linked issue.
This means the GPU metrics are now duplicated in this dictionary and in the logged metrics.
Also, it only gets filled when `self.log_gpu_memory` is set, so it can't be used without the flag anyway.
What does this PR do?
This PR adds `gpus_metrics` to `ResultCollection` and filters out the non-requested GPUs when logging.
Fixes #8159
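For reference, the code path being fixed is driven by the Trainer's `log_gpu_memory` flag; a minimal usage sketch follows (the accepted values such as `"all"` and `"min_max"` are from the 1.x-era API and stated here as an assumption — check your installed version):

```python
from pytorch_lightning import Trainer

# Sketch: enable per-step GPU memory logging on a subset of devices.
trainer = Trainer(gpus=[0, 1], log_gpu_memory="all")
```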
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines.
Did you have fun?
Make sure you had fun coding 🙃