🐛 Bug
When I run train_net.py, it raises a ValueError: Target size (torch.Size([2, 28, 28])) must be the same as input size (torch.Size([4, 28, 28])). The traceback is below:
Traceback (most recent call last):
  File "tools/train_net.py", line 186, in <module>
    main()
  File "tools/train_net.py", line 179, in main
    model = train(cfg, args.local_rank, args.distributed)
  File "tools/train_net.py", line 85, in train
    arguments,
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\engine\trainer.py", line 67, in do_train
    loss_dict = model(images, targets)
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\opt\apex\apex\amp\_initialize.py", line 194, in new_fwd
    **applier(kwargs, input_caster))
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\modeling\detector\generalized_rcnn.py", line 52, in forward
    x, result, detector_losses = self.roi_heads(features, proposals, targets)
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\modeling\roi_heads\roi_heads.py", line 39, in forward
    x, detections, loss_mask = self.mask(mask_features, detections, targets)
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\modeling\roi_heads\mask_head\mask_head.py", line 77, in forward
    loss_mask = self.loss_evaluator(proposals, mask_logits, targets)
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\modeling\roi_heads\mask_head\loss.py", line 126, in __call__
    mask_logits[positive_inds, labels_pos], mask_targets
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\nn\functional.py", line 2161, in binary_cross_entropy_with_logits
    raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
ValueError: Target size (torch.Size([2, 28, 28])) must be the same as input size (torch.Size([4, 28, 28]))
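To make the shapes in that message concrete: torch.nn.functional.binary_cross_entropy_with_logits does not broadcast and raises exactly this ValueError whenever the target's shape differs from the input's. A minimal sketch that reproduces the same message (not the benchmark code itself; the tensor names are only illustrative):

import torch
import torch.nn.functional as F

mask_logits = torch.randn(4, 28, 28)   # mask logits for 4 positive proposals
mask_targets = torch.rand(2, 28, 28)   # but only 2 matched ground-truth masks

# ValueError: Target size (torch.Size([2, 28, 28])) must be the same as
# input size (torch.Size([4, 28, 28]))
loss = F.binary_cross_entropy_with_logits(mask_logits, mask_targets)

So in my run the mask head produced logits for 4 positive proposals, but only 2 mask targets were available.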
By the way, before I got this error I actually hit another error, IndexError: list index out of range, in segmentation_mask.py; its traceback is below. I noticed that #725 mentioned a similar bug, but they were using pytorch=1.2 while I am using pytorch=1.1. I still tried their fix of replacing every torch.uint8 with torch.bool in segmentation_mask.py, but it didn't work. Then I found issue #704, which says it might be a mismatch in size between item and self.polygons. In the end I still didn't know how to solve it, so I just bypassed the problem with a try-except. I mention this here because I don't know whether that workaround has anything to do with the ValueError above.
Traceback (most recent call last):
  File "tools/train_net.py", line 186, in <module>
    main()
  File "tools/train_net.py", line 179, in main
    model = train(cfg, args.local_rank, args.distributed)
  File "tools/train_net.py", line 85, in train
    arguments,
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\engine\trainer.py", line 57, in do_train
    for iteration, (images, targets, _) in enumerate(data_loader, start_iter):
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\utils\data\dataloader.py", line 568, in __next__
    return self._process_next_batch(batch)
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\utils\data\dataloader.py", line 608, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
IndexError: Traceback (most recent call last):
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\utils\data\_utils\worker.py", line 99, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "D:\zhaoyang\anaconda\anaconda\envs\maskrcnn_benchmark\lib\site-packages\torch\utils\data\_utils\worker.py", line 99, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\data\datasets\coco.py", line 91, in __getitem__
    target = target.clip_to_image(remove_empty=True)
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\structures\bounding_box.py", line 223, in clip_to_image
    return self[keep]
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\structures\bounding_box.py", line 208, in __getitem__
    bbox.add_field(k, v[item])
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\structures\segmentation_mask.py", line 528, in __getitem__
    selected_instances = self.instances.__getitem__(item)
  File "C:\Users\opt\maskrcnn-benchmark\maskrcnn_benchmark\structures\segmentation_mask.py", line 433, in __getitem__
    selected_polygons.append(self.polygons[i])
IndexError: list index out of range
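If it helps to see the mechanism, my reading of #704 is that the keep mask built from the boxes can end up longer than the list of polygon instances, so indexing the polygons with it runs off the end. A tiny illustration of that mismatch (just my guess at what happens, not the actual segmentation_mask.py code):

import torch

polygons = ["poly_0", "poly_1"]                       # 2 segmentation instances
keep = torch.tensor([1, 1, 0, 1], dtype=torch.uint8)  # keep mask built from 4 boxes

selected = []
for i in keep.nonzero().squeeze(1).tolist():          # indices [0, 1, 3]
    selected.append(polygons[i])                      # polygons[3] -> IndexError: list index out of range

If that IndexError is then swallowed with a try-except, as in my workaround, the skipped polygons are silently dropped while their boxes survive, which could be why mask_targets ends up with fewer instances (2) than mask_logits (4) in the ValueError above.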
Here is my environment:
PyTorch version: 1.1
OS: Windows 10
Python version: 3.6
CUDA/cuDNN version: CUDA 10.0, cuDNN 10.1
One last thing: I previously managed to train on another set of images that were grayscale, but this time the images I want to train on are all RGB. I don't know whether this makes any difference.
Any help is appreciated!
@nancymao
Hello, sorry for the late reply. In my case I actually ran into this problem after making some unsuitable changes to the code to work around another error, IndexError: list index out of range. You can see #704 as a reference; I'm not sure you are in the same situation as I was. Good luck!