The problem with visualization during inference #6
I suspect the size of your input image is wrong. The input size should be 256x256. The inference method I designed in the paper is as follows:
Thank you for your reply. In fact, I divided the large images in the test set into 256x256 tiles and ran inference on those tiles. If the per-tile inference results are poor, stitching them back into a large image will certainly be poor as well. However, I did not use overlapping tiles for inference. Could this be the problem? I will try inference with overlapping tiles. @dyzy41
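For reference, overlapping-tile inference with averaged overlaps can be sketched as below. This is a minimal sketch in plain NumPy; the `predict` callable, the 0.5 threshold, and all other names are hypothetical placeholders, not code from the ChangeCLIP repo.

```python
import numpy as np

def tile_offsets(size, tile, stride):
    """Start offsets covering the full axis; the last tile sits flush to the edge."""
    if size <= tile:
        return [0]
    offs = list(range(0, size - tile + 1, stride))
    if offs[-1] + tile < size:
        offs.append(size - tile)
    return offs

def sliding_change_inference(img_a, img_b, predict, tile=256, stride=128):
    """Average overlapping tile predictions into one full-size change map.

    Assumes `predict(a, b)` returns a (tile, tile) array of change
    probabilities in [0, 1], and that both images are at least tile x tile.
    """
    h, w = img_a.shape[:2]
    score = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    for top in tile_offsets(h, tile, stride):
        for left in tile_offsets(w, tile, stride):
            a = img_a[top:top + tile, left:left + tile]
            b = img_b[top:top + tile, left:left + tile]
            score[top:top + tile, left:left + tile] += predict(a, b)
            count[top:top + tile, left:left + tile] += 1.0
    return (score / count) > 0.5  # binarize the averaged probabilities
```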
You can try this config for testing: configs/0cd_ce/changeclip_levir_test.py
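If the repo follows the mmengine/mmsegmentation conventions its config paths suggest, the test-time pipeline in that config can be inspected before running inference. A minimal sketch; the key name `test_pipeline` is the usual mmsegmentation convention and is assumed here:

```python
from mmengine.config import Config

# Load the test-time config suggested above and check how test
# images are resized/cropped before they reach the model.
cfg = Config.fromfile('configs/0cd_ce/changeclip_levir_test.py')
print(cfg.get('test_pipeline'))
```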
@dyzy41 Additionally, I have another question: did you use the same training schedule for your comparative experiments? For example, did you train your method for 40k iterations and also train Changerformer for 40k iterations, or did you follow the schedule set by each paper? The length of training affects accuracy, so comparing methods trained for different numbers of iterations may skew the results. How did you set the training schedule for the other methods?
"When training a model, if you find that the loss has converged, you can stop the training. I trained the model in this way." |
"Hello, I'd like to know how you managed to get the prediction code to work, as I've set the parameters the same as yours, but I encountered an error during prediction." |
@COUJIALUO |
The algorithm proposed in your paper is a fantastic idea. However, I have a few questions. When I use the ChangeCLIP_best_weights you provided to visualize inference results on images from the test directory of datasets such as LEVIR, I find the results unsatisfactory. Why are my visualization results so poor? Looking forward to your reply.

GT: [ground-truth image attached]
This is my code:
```python
import argparse

parser = argparse.ArgumentParser(description='ChangeCLIP inference')
parser.add_argument('--file_list', default='D:/pythonwork/Data/LEVIR-CD/LEVIR-CD256256CLIP/test.txt', help='Image file list')
parser.add_argument('--config', default='configs/0cd_ce/changeclip_levir.py')
parser.add_argument('--checkpoint', default='checkpoint/best_mIoU_iter_17000.pth')
```
Could you please share the visualization process and code you used? Thank you.
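For what it's worth, here is a hedged sketch of inference plus visualization, assuming the repo exposes mmsegmentation 1.x's standard API (`init_model`, `inference_model`, `show_result_pyplot`). ChangeCLIP's actual entry point may differ, for instance it may expect a bitemporal input pair rather than a single image, so treat this as a starting point rather than the authors' script:

```python
from mmseg.apis import init_model, inference_model, show_result_pyplot

config = 'configs/0cd_ce/changeclip_levir_test.py'    # test-time config from above
checkpoint = 'checkpoint/best_mIoU_iter_17000.pth'

model = init_model(config, checkpoint, device='cuda:0')

# 'demo/test_tile.png' is a hypothetical 256x256 test tile.
result = inference_model(model, 'demo/test_tile.png')
show_result_pyplot(model, 'demo/test_tile.png', result,
                   show=False, out_file='vis/test_tile_pred.png')
```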