Train with a DataLoader where a batch is of type tuple, with one GPU
Describe the bug
After #532, training with a DataLoader where the batch does not have a `.copy()` method (such as a tuple) raises an exception. The assumption can't be that the batch is always a tensor, because we also pass it to `transfer_batch_to_gpu`, which already does a lot of checking to handle different types differently.
Exception happens at https://github.com/williamFalcon/pytorch-lightning/blob/f2191b0cdf4305ae3a5ad2b1e404f99764a1a7c6/pytorch_lightning/trainer/train_loop_mixin.py#L293
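For context, the failing pattern can be sketched as follows. This is a minimal, dependency-free illustration (the helper name `copy_batch` is hypothetical, not the actual Lightning code): calling `.copy()` unconditionally works for types that implement it, but raises `AttributeError` for a tuple batch.

```python
def copy_batch(batch):
    # Mimics the problematic line: assumes batch has a .copy() method.
    return batch.copy()

dict_batch = {"x": [1, 2, 3]}
copy_batch(dict_batch)  # works: dict implements .copy()

tuple_batch = (1, 2, 3)
try:
    copy_batch(tuple_batch)
except AttributeError as e:
    print(e)  # tuples have no .copy() method
```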
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I would expect the copy to happen only when the batch is a tensor, or possibly to be applied to the elements inside the tuple when the batch is a tuple.
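One possible guard, as a sketch only (the function name `safe_copy_batch` is an assumption, not Lightning's actual fix): copy objects that actually support `.copy()`, recurse into tuples element-wise, and leave everything else untouched.

```python
def safe_copy_batch(batch):
    # For tuples, apply the copy element-wise instead of calling
    # .copy() on the tuple itself (tuples have no such method).
    if isinstance(batch, tuple):
        return tuple(safe_copy_batch(item) for item in batch)
    # Copy only objects that expose a .copy() method (e.g. dicts).
    if hasattr(batch, "copy"):
        return batch.copy()
    # Other types (ints, strings, ...) are returned as-is.
    return batch
```

With this guard, a tuple batch no longer raises, and mutable elements inside it are still copied.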