Pytorch with Resnet50 #37

Open · rayLemond opened this issue Jan 2, 2020 · 6 comments

rayLemond commented Jan 2, 2020

I changed the network to ResNet50 in the PyTorch version of HashNet, and I cannot produce an acceptable mAP on the CUB dataset. I have tried some fine-tuning, and the best mAP I can get is around 40 with 64 bits. SGD and Adam don't work; that result is with RMSprop.

Any idea what the problem is and how I can fix it?
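For concreteness, here is a minimal sketch of the kind of modification being described: torchvision's resnet50 with its classifier replaced by a hash layer followed by tanh. The class and layer names are illustrative, not the repo's actual network code.

import torch.nn as nn
from torchvision import models

class ResNet50Hash(nn.Module):
    """Illustrative ResNet50 hashing backbone (not HashNet's exact network.py)."""
    def __init__(self, hash_bit=64):
        super().__init__()
        backbone = models.resnet50(pretrained=True)
        # Keep everything up to global average pooling; drop the 1000-way fc.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.hash_layer = nn.Linear(backbone.fc.in_features, hash_bit)
        self.activation = nn.Tanh()  # pushes outputs toward {-1, +1} binary codes

    def forward(self, x):
        x = self.features(x).flatten(1)              # (B, 2048)
        return self.activation(self.hash_layer(x))   # (B, hash_bit) in (-1, 1)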


zhouxiaohang commented Apr 17, 2020


Hi rayLemond, I've also tried to adapt the PyTorch code to the CUB200 dataset with a fine-tuned ResNet50, but I can't make the loss converge. I've tried different optimizers (SGD, Adam, RMSprop), different class_num values (1.0 and 200.0), and different lr values from 1e-5 to 1e-3.

Here is one set of parameters I tried:

python train.py \
    --dataset cub200 \
    --prefix resnet50_hashnet \
    --hash_bit 64 \
    --net ResNet50 \
    --lr 1e-5 \
    --class_num 1.0

{'l_weight': 1.0, 'q_weight': 0, 'l_threshold': 15.0, 'sigmoid_param': 0.15625, 'class_num': 1.0}
{'type': 'RMSprop', 'optim_params': {'lr': 1.0, 'weight_decay': 1e-05}, 'lr_type': 'step', 'lr_param': {'init_lr': 1e-05, 'gamma': 0.5, 'step': 2000}}
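For reference, the printed config above seems to describe something like the following in plain PyTorch (a sketch; the 'lr': 1.0 entry is presumably a per-parameter-group multiplier, with the effective rate coming from init_lr):

import torch

model = torch.nn.Linear(2048, 64)  # stand-in for the actual hash network
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-05, weight_decay=1e-05)
# 'lr_type': 'step' -- halve the learning rate every 2000 iterations
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.5)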

But the training loss stays around 0.69 (≈ ln 2, i.e., chance level for a pairwise sigmoid loss), and the mAP is extremely low, around 0.04.

Could you please give me some hints or share your training script with me? Thanks.

rayLemond (Author) commented:


I used RMSprop with lr=1e-5, and ResNet's last fc layer is followed by a tanh that outputs the network features. Everything else is identical to the source code. class_num is 200; why change it to 1? I don't quite understand.


zhouxiaohang commented Apr 17, 2020


> I used RMSprop with lr=1e-5, and ResNet's last fc layer is followed by a tanh that outputs the network features. Everything else is identical to the source code. class_num is 200; why change it to 1?

1. About tanh after fc

I think the original code already applies tanh after the fc layer. Did you use the original code?
https://github.com/thuml/HashNet/blob/master/pytorch/src/network.py#L84

2. About class_num

I'm quite confused about class_num, because for the coco dataset it is set to 1.0 even though COCO has 80 classes. (One guess at what class_num does is sketched at the end of this comment.) I've also run an experiment with the parameters you mentioned above:

python train.py \
    --prefix resnet50_hashnet \
    --dataset cub200 \
    --hash_bit 64 \
    --net ResNet50 \
    --lr 1e-5 \
    --class_num 200

{'l_weight': 1.0, 'q_weight': 0, 'l_threshold': 15.0, 'sigmoid_param': 0.15625, 'class_num': 200.0}
{'type': 'RMSprop', 'optim_params': {'lr': 1.0, 'weight_decay': 1e-05}, 'lr_type': 'step', 'lr_param': {'init_lr': 1e-05, 'gamma': 0.5, 'step': 2000}}

The loss oscillates between 0.3 and 0.5; here is a glance at it:

Iter: 02680, loss: 0.57731044
Iter: 02690, loss: 0.37092209
Iter: 02700, loss: 0.46045765
Iter: 02710, loss: 0.41415858
...
Iter: 09996, loss: 0.42658427
Iter: 09997, loss: 0.40386644
Iter: 09998, loss: 0.39876112
Iter: 09999, loss: 0.43418783

MAP: 0.043476254169203095

But the final mAP is still very low.

Would you mind sharing your code on GitHub?
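As an aside on point 2: in the HashNet paper the pairwise loss is a weighted maximum-likelihood objective, and one plausible reading is that class_num re-weights similar pairs to counter the similar/dissimilar imbalance. A sketch of that form (an interpretation, not the repo's loss.py):

import torch
import torch.nn.functional as F

def weighted_pairwise_loss(codes, labels, sigmoid_param=0.15625, class_num=1.0):
    """Weighted pairwise log-likelihood in the spirit of the HashNet paper.
    codes: (B, bits) tanh outputs; labels: (B, C) one-/multi-hot.
    Treating class_num as an extra weight on similar pairs is an assumption."""
    labels = labels.float()
    sim = (labels @ labels.t() > 0).float()    # s_ij = 1 if the pair shares a label
    dot = sigmoid_param * (codes @ codes.t())  # alpha * <h_i, h_j>
    per_pair = F.softplus(dot) - sim * dot     # log(1 + e^dot) - s_ij * dot, stable
    n_pos, n_neg = sim.sum(), (1 - sim).sum()
    n_all = n_pos + n_neg
    weight = torch.where(sim > 0,
                         class_num * n_all / n_pos.clamp(min=1),  # up-weight similar pairs
                         n_all / n_neg.clamp(min=1))
    return (weight * per_pair).sum() / n_all

On CUB with 200 roughly balanced classes, only about 1/200 of random pairs are similar, so without re-weighting the loss is dominated by dissimilar pairs; that would be consistent with class_num mattering here.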

rayLemond (Author) commented:

> Would you mind sharing your code on GitHub?

OK, can I have your email?


zhouxiaohang commented Apr 17, 2020

> OK, can I have your email?

Thanks

sagar-dutta commented:

Can anybody tell me the reason for the low mAP? I am getting around 0.03. Any suggestions would be great. Thanks.
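When debugging numbers this low, it may also be worth sanity-checking the evaluation itself. A minimal sketch of the standard Hamming-ranking mAP protocol (assumed, not this repo's exact evaluation script):

import numpy as np

def hamming_map(query_codes, db_codes, query_labels, db_labels, topk=None):
    """mAP over Hamming ranking. Codes are {-1, +1} arrays of shape (N, bits);
    labels are one-/multi-hot arrays of shape (N, C)."""
    aps = []
    for q, ql in zip(query_codes, query_labels):
        # For +/-1 codes, Hamming distance = (bits - dot product) / 2
        dist = 0.5 * (db_codes.shape[1] - db_codes @ q)
        order = np.argsort(dist)
        if topk is not None:
            order = order[:topk]
        relevant = (db_labels[order] @ ql) > 0    # shares at least one label
        if relevant.sum() == 0:
            continue
        precision = np.cumsum(relevant) / (np.arange(relevant.size) + 1)
        aps.append((precision * relevant).sum() / relevant.sum())
    return float(np.mean(aps))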
