
maybe bug in ./segmentor/trainer_contrastive.py #55

Open
eraserNut opened this issue Aug 31, 2022 · 5 comments

Comments

@eraserNut

In ./segmentor/trainer_contrastive.py, I think line 138:

pixel_queue_ptr[lb] = (pixel_queue_ptr[lb] + 1) % self.memory_size

should be:

pixel_queue_ptr[lb] = (pixel_queue_ptr[lb] + K) % self.memory_size
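The fix can be illustrated with a minimal ring-buffer sketch. This is not the repository's code; the names (`memory_size`, `K`, `enqueue`) are mine, and it only demonstrates why the pointer must advance by the number of samples written, not by 1:

```python
memory_size = 10  # stands in for self.memory_size in the issue

def enqueue(queue, ptr, samples):
    """Write `samples` into the ring buffer starting at `ptr`,
    then advance the pointer by K = len(samples) (the proposed fix)."""
    K = len(samples)
    for i, s in enumerate(samples):
        queue[(ptr + i) % memory_size] = s
    # The buggy version advanced by 1, so the next enqueue would
    # overwrite K - 1 of the samples just written.
    return (ptr + K) % memory_size

queue = [None] * memory_size
ptr = 0
ptr = enqueue(queue, ptr, ["a", "b", "c"])
```

After this call `ptr` is 3, so the next enqueue starts right after the last written slot.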

@tfzhou
Owner

tfzhou commented Sep 1, 2022

Hi @eraserNut, thanks for pointing this out. Yes, this is indeed a bug, as mentioned in another issue. I will fix it.

@eraserNut
Author

In ContrastiveSeg/lib/loss/loss_contrast_mem.py, line 134:

logits_mask = torch.ones_like(mask).scatter_(1, torch.arange(anchor_num * anchor_count).view(-1, 1).cuda(), 0)
mask = mask * logits_mask

Can you tell me what logits_mask does? Can I delete it? In my experiments its effect looks strange.

@tfzhou
Owner

tfzhou commented Sep 8, 2022

Hi @eraserNut, please refer to #16. The logits_mask masks out each feature's similarity with itself.
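The masking step can be shown in isolation. This is a NumPy sketch of what the quoted scatter_ line computes, not the repository's code: entry (i, i) of the similarity matrix compares a feature with itself, and logits_mask zeroes exactly that diagonal.

```python
import numpy as np

n = 4  # stands in for anchor_num * anchor_count in the original code
mask = np.ones((n, n))  # stands in for the positive-pair mask

# Equivalent of torch.ones_like(mask).scatter_(1, arange(n).view(-1, 1), 0):
# for each row i, write 0 into column i, i.e. zero the diagonal.
logits_mask = np.ones_like(mask)
logits_mask[np.arange(n), np.arange(n)] = 0

mask = mask * logits_mask  # a feature can no longer be its own positive
```

After the multiplication, the diagonal of mask is all zeros while the off-diagonal entries are unchanged.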

@weiaicunzai

weiaicunzai commented Feb 27, 2023

> Hi @eraserNut, please refer to #16. The logits_mask helps to mask out self-similarity of each feature itself.

Thanks for your time. I don't quite understand why there would be self-similar pixels. All the anchor pixels are sampled from the training batch, and the contrastive pixels are sampled from the queue. Why would the product of these two sets of pixels contain self-similarity?

@crisz94

crisz94 commented May 8, 2024

> Hi @eraserNut, please refer to #16. The logits_mask helps to mask out self-similarity of each feature itself.

> Thanks for your time. I didn't quite understand why there will be self-similarity pixels. All the anchor pixels are sampled from the training batch, and the contrasive pixels are sampled from the queue. Why multiply of these two kinds of pixels will have self-similarity?

I think self-contrast only exists when the queue is None. When the queue is not None, logits_mask should not be multiplied into mask; maybe it's an unfixed bug.
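This observation can be made concrete with a small sketch (my own names, not the repository's code): if the contrast set is the anchor set itself, the diagonal of the similarity matrix is genuine self-similarity; if the contrast features come from an independent queue, anchor i and contrast i are unrelated samples, so zeroing the diagonal would discard an arbitrary pair.

```python
import numpy as np

rng = np.random.default_rng(0)
anchors = rng.normal(size=(3, 8))  # features sampled from the batch

# Case 1: contrast set IS the anchor set (queue is None) -> the diagonal
# of the similarity matrix is each feature dotted with itself.
sim_self = anchors @ anchors.T  # diag(sim_self)[i] == ||anchors[i]||^2

# Case 2: contrast features come from an independent queue -> the diagonal
# compares unrelated samples, so masking it out removes a valid pair.
queue_feats = rng.normal(size=(3, 8))
sim_queue = anchors @ queue_feats.T
```

In case 1 the diagonal equals the squared norms of the anchors, which is exactly what the mask is meant to remove; in case 2 it does not.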
