
The Training result is higher than the test? #66

Open
liuzhidemaomao opened this issue Oct 24, 2020 · 6 comments

Comments

@liuzhidemaomao

Hello, when I trained PSPNet50 on the Cityscapes dataset, I got 0.8266 mIoU during training after 200 epochs, but 0.7764 mIoU in the test and 0.6971 mIoU in validation. My environment is PyTorch 1.6 with 8 Tesla V100 GPUs. Thanks!

@XRodriguez10

Have you solved it? I'm facing the same situation here.

@XRodriguez10

The evaluation in train.py gives different results compared to test.py. I don't understand why.

@hanzhy-code

The evaluation in train.py gives different results compared to test.py. I don't understand why.

I have the same question.

@AndreyStille

AndreyStille commented Jan 7, 2022

The evaluation in train.py gives different results compared to test.py. I don't understand why.

I have the same question.

  1. Test. For some reason the author chose an evaluation scheme that makes the results look better than they really are: in test.py, inference is run on overlapping patches and the predictions are then stitched back together, so the scores on standard datasets come out higher than in other implementations, with no regard for runtime cost (see the sketch below).
  2. Train. For some reason the author added a random crop (instead of a resize) to the dataset transformations, even though this can dramatically change results for many applications.

Such is the vanity of GitHub contributors, and we end up depending on their code because our jobs hand us tasks without enough time to write it from scratch.
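A minimal sketch of what that patch-based evaluation looks like, with assumed crop/stride/class values (713/476/19 are typical for PSPNet on Cityscapes, but treat them as placeholders) and a placeholder model call, not the repository's actual code:

    import math
    import torch

    @torch.no_grad()
    def sliding_window_logits(model, image, crop=713, stride=476, classes=19):
        # image: (1, 3, H, W) with H, W >= crop (true for full-resolution Cityscapes).
        # crop/stride/classes are assumed example values, not necessarily the repo's.
        _, _, h, w = image.shape
        logits = torch.zeros(1, classes, h, w)
        counts = torch.zeros(1, 1, h, w)
        rows = math.ceil((h - crop) / stride) + 1
        cols = math.ceil((w - crop) / stride) + 1
        for r in range(rows):
            for c in range(cols):
                top = min(r * stride, h - crop)    # clamp the last window to the image border
                left = min(c * stride, w - crop)
                patch = image[:, :, top:top + crop, left:left + crop]
                out = model(patch)                 # expected shape (1, classes, crop, crop)
                logits[:, :, top:top + crop, left:left + crop] += out
                counts[:, :, top:top + crop, left:left + crop] += 1
        return logits / counts                     # averaged prediction for the whole image

If the evaluation inside train.py runs on (randomly) cropped inputs, as the previous point suggests, then the two mIoU numbers are simply not computed on the same inputs and cannot be expected to match.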

@hanzhy-code


I have solved this problem. There is an error in the image saving path in test.py; you should modify it in your own code.
I changed 'image_name = image_path.split('/')[-1].split('.')[0]' to 'image_name = image_path.split('/')[-1][:-4]' in both the test function and the cal_acc function. After this modification, the calculated result is correct.
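The two expressions only differ when the image file name contains more than one dot; here is a tiny, purely hypothetical example (not a real Cityscapes path) showing what changes:

    # Hypothetical path, used only to compare the two expressions.
    image_path = 'leftImg8bit/val/frankfurt/frankfurt_000000_000294.leftImg8bit.png'

    # Original: splitting on the first dot drops everything after it.
    print(image_path.split('/')[-1].split('.')[0])   # frankfurt_000000_000294

    # Modified: only the 4-character '.png' extension is stripped.
    print(image_path.split('/')[-1][:-4])            # frankfurt_000000_000294.leftImg8bit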

@LM0223

LM0223 commented Mar 29, 2022

Hello, I would like to ask whether the batch size in test.py can be increased rather than only being set to 1. Thank you.
