Tensorboard logger fails to save model OmegaConf hparams
🐛 Bug

The Tensorboard logger fails to log module hyperparameters configured with OmegaConf.
The trainer calls the logger's `log_hyperparams`, and inside `log_hyperparams` the logger's stored hparams are updated in place. This update changes the hparams type from `DictConfig` to a plain `dict`.
As a result, this branch in [save_hparams_to_yaml](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/core/saving.py#L330-L333) is never triggered. This is the stacktrace when logging hyperparams: https://gist.github.com/ananthsub/7acfdb0e0f551ed030f05f7674c37b46

To Reproduce

Code sample
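A minimal sketch of the type loss at the heart of the bug (the hyperparameter values are made up for illustration; assumes omegaconf is installed):

```python
from omegaconf import DictConfig, OmegaConf

hparams = OmegaConf.create({"lr": 1e-3, "optimizer": "adam"})
print(isinstance(hparams, DictConfig))  # True

# The logger effectively folds the incoming params into a plain dict.
# A dict update silently drops the DictConfig type:
logged = {}
logged.update(hparams)
print(isinstance(logged, DictConfig))  # False: it is a plain dict now, so
# the OmegaConf branch in save_hparams_to_yaml can never trigger.
```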
A hacky fix would be something like changing the hparams update inside the Tensorboard logger to preserve the OmegaConf type, as sketched below.
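A sketch of that idea, with an assumed helper name and a simplified signature (the logger's real update logic does more than this):

```python
from omegaconf import Container, OmegaConf

def _merge_hparams(current, params):
    # If the incoming params are an OmegaConf container, merge instead of
    # calling dict.update: OmegaConf.merge returns a DictConfig, so the
    # type survives and save_hparams_to_yaml can serialize via OmegaConf.
    if isinstance(params, Container):
        return OmegaConf.merge(params, current)
    current.update(params)
    return current
```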
Expected behavior

Environment

Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with:

wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py

- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
Additional context
The fix, "Add support to Tensorboard logger for OmegaConf hparams" (addressing #2844), checks whether omegaconf can be imported and whether the hparams are OmegaConf instances. If so, it uses OmegaConf.merge to preserve the typing, so that saving the hparams to YAML actually triggers the OmegaConf branch.

(Co-authored-by: Jirka Borovec <[email protected]>)
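For reference, the OmegaConf branch in save_hparams_to_yaml amounts to something like the following sketch (an assumed function body, not the exact library code):

```python
from omegaconf import Container, OmegaConf

def save_hparams_to_yaml_sketch(path: str, hparams) -> None:
    # Only reachable if hparams kept their OmegaConf type all the way here,
    # which is exactly what the merge-based update above guarantees.
    if isinstance(hparams, Container):
        OmegaConf.save(hparams, path, resolve=True)
```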