
Returning None in validation_end method raises error #84

Closed
preddy5 opened this issue Aug 9, 2019 · 3 comments
Labels
bug Something isn't working

Comments


preddy5 commented Aug 9, 2019

Hey,
If we define a validation_end method like

    def validation_end(self, outputs):
        return

it raises an error:

AttributeError: 'NoneType' object has no attribute 'items'

Is this intended? If not, shouldn't this part of the code initialize the metrics dict
https://github.com/williamFalcon/pytorch-lightning/blob/018b8da50e90638e8aa8d3eda1f8637656c25f2d/pytorch_lightning/models/trainer.py#L987

the way it is done here:

https://github.com/williamFalcon/pytorch-lightning/blob/018b8da50e90638e8aa8d3eda1f8637656c25f2d/pytorch_lightning/models/trainer.py#L886
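For context, a rough, self-contained sketch of the failure mode (the function name process_validation_output is made up for illustration; the actual trainer code is at the links above): the trainer treats whatever validation_end returns as a metrics dict and calls .items() on it, so a None return fails unless the dict is initialized first, as suggested.

    def process_validation_output(output):
        # Hypothetical stand-in for the trainer-side logic linked above:
        # the return value of validation_end is assumed to be a dict of metrics.
        tqdm_metrics = {}
        if output is None:
            # Guard proposed in this issue: fall back to an empty metrics dict
            # instead of failing on output.items().
            return tqdm_metrics
        for k, v in output.items():
            tqdm_metrics[k] = v
        return tqdm_metrics

    process_validation_output(None)  # returns {} instead of raising AttributeError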

preddy5 added the bug label Aug 9, 2019
williamFalcon (Contributor) commented Aug 9, 2019

return {} for now
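A minimal sketch of that workaround, assuming the 0.4-era hook names used in this issue (the import path, hook signatures, model, and the val_loss key are placeholders and may differ between versions):

    import torch
    import torch.nn.functional as F
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(28 * 28, 10)  # placeholder model

        def forward(self, x):
            return self.layer(x.view(x.size(0), -1))

        def validation_step(self, batch, batch_idx):
            x, y = batch
            return {'val_loss': F.cross_entropy(self(x), y)}

        def validation_end(self, outputs):
            # Workaround from this thread: return an empty dict instead of
            # None so the trainer has a dict whose .items() it can iterate.
            return {}

        # training_step, configure_optimizers, and dataloaders omitted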

williamFalcon (Contributor) commented
But we're adding support for not needing to implement a validation function if it's not needed (#82).


finkga commented Sep 25, 2020

The recommended solution:

    def validation_epoch_end(self, validation_outputs):
        return {}

also produces an error, AttributeError: 'dict' object has no attribute 'callback_metrics':

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-27-f49f97955aea> in <module>
      4     check_val_every_n_epoch=3,
      5 )
----> 6 trainer.fit(model)

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/states.py in wrapped_fn(self, *args, **kwargs)
     46             if entering is not None:
     47                 self.state = entering
---> 48             result = fn(self, *args, **kwargs)
     49 
     50             # The INTERRUPTED state can be set inside the run function. To indicate that run was interrupted

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
   1082             self.accelerator_backend = CPUBackend(self)
   1083             self.accelerator_backend.setup(model)
-> 1084             results = self.accelerator_backend.train(model)
   1085 
   1086         # on fit end callback

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/cpu_backend.py in train(self, model)
     37 
     38     def train(self, model):
---> 39         results = self.trainer.run_pretrain_routine(model)
     40         return results

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
   1222 
   1223         # run a few val batches before training starts
-> 1224         self._run_sanity_check(ref_model, model)
   1225 
   1226         # clear cache before training

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run_sanity_check(self, ref_model, model)
   1255             num_loaders = len(self.val_dataloaders)
   1256             max_batches = [self.num_sanity_val_steps] * num_loaders
-> 1257             eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
   1258 
   1259             # allow no returns from eval

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py in _evaluate(self, model, dataloaders, max_batches, test_mode)
    397 
    398         # log callback metrics
--> 399         self.__update_callback_metrics(eval_results, using_eval_result)
    400 
    401         # Write predictions to disk if they're available.

~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py in __update_callback_metrics(self, eval_results, using_eval_result)
    419             if isinstance(eval_results, list):
    420                 for eval_result in eval_results:
--> 421                     self.callback_metrics = eval_result.callback_metrics
    422             else:
    423                 self.callback_metrics = eval_results.callback_metrics

AttributeError: 'dict' object has no attribute 'callback_metrics'

Commenting out those two callback_metrics assignments in __update_callback_metrics makes it work without any issue. This issue should not be closed until either the problem is fixed or the documentation is updated to show that this does not work.
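For what it's worth, on later releases (1.0 and up) the usual way to sidestep both failure modes is to log from validation_step with self.log and not return a metrics dict from any epoch-end hook. A minimal sketch (metric name and model details are placeholders):

    import torch.nn.functional as F
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        # __init__/forward, training_step, and configure_optimizers as usual (omitted here)

        def validation_step(self, batch, batch_idx):
            x, y = batch
            loss = F.cross_entropy(self(x), y)
            # self.log stores the metric for loggers, the progress bar, and
            # trainer.callback_metrics, so no epoch-end dict is required.
            self.log('val_loss', loss)

        # validation_epoch_end can be omitted entirely on these versions,
        # or implemented and return None without raising an error.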
