
Fine-tuning with isnet-general-use.pth #105

Open
kabbas570 opened this issue Jan 4, 2024 · 12 comments

@kabbas570

Hello, thanks for providing the code publicly.
I have a question about fine-tuning the network: can I start training from the isnet-general-use.pth weight file and fine-tune those weights further, i.e. transfer learning?

Cheers
Abbas

@kabbas570 (Author)

I found this code in train_valid_inference_main.py:

if hypar["gt_encoder_model"] != "":
    # restore the ground-truth encoder from a previously saved checkpoint
    model_path = hypar["model_path"] + "/" + hypar["gt_encoder_model"]
    if torch.cuda.is_available():
        net.load_state_dict(torch.load(model_path))
        net.cuda()
    else:
        # no GPU available: map the stored tensors onto the CPU
        net.load_state_dict(torch.load(model_path, map_location="cpu"))
    print("gt encoder restored from the saved weights ...")
    return net

Does this mean it will load the weights for both the encoder and decoder of the model from the .pth file under hypar["model_path"] if we set hypar["gt_encoder_model"] = "isnet-general-use.pth"?
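
For what it's worth, in the DIS repository this branch sits inside the helper that builds the ground-truth (GT) encoder, a separate small network used for intermediate feature supervision, so it would not restore the main ISNet's encoder and decoder. A minimal sketch of that reading, assuming the repo's ISNetGTEncoder class (names may differ between revisions):

import torch
from models.isnet import ISNetGTEncoder  # GT encoder class shipped with the DIS repo

def get_gt_encoder_sketch(hypar):
    # hedged paraphrase of the quoted branch, not the repo's exact code
    net = ISNetGTEncoder()  # a separate, smaller network, not the main ISNetDIS
    if hypar["gt_encoder_model"] != "":
        # this restores a previously trained GT encoder; isnet-general-use.pth is a
        # main-model checkpoint, so its state_dict would not match this network
        path = hypar["model_path"] + "/" + hypar["gt_encoder_model"]
        net.load_state_dict(torch.load(path, map_location="cpu"))
    return net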

@youzipi commented Mar 22, 2024

Maybe the restore_model parameter?

hypar["restore_model"] = "RMBG-1.4.pth" ## name of the segmentation model weights .pth for resume training process from last stop or for the inferencing

@kabbas570
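
To make the restore_model pointer concrete, here is a minimal, hedged sketch of a hypar setup for fine-tuning from the released checkpoint. The keys follow train_valid_inference_main.py, but treat the exact values as illustrative rather than canonical:

hypar = {}
hypar["mode"] = "train"                            # "train" or "valid" in the repo's script
hypar["model_path"] = "./saved_models/IS-Net"      # folder that holds the .pth files
hypar["restore_model"] = "isnet-general-use.pth"   # main-model weights to fine-tune from
hypar["gt_encoder_model"] = ""                     # leave empty unless you trained a GT encoder
hypar["start_ite"] = 0                             # starting iteration for the new run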

@hjj-lmx commented Apr 16, 2024

> name of the segmentation model weights .pth for resume training process from last stop or for the inferencing

Excuse me, has the problem been resolved?

@hjj-lmx commented May 15, 2024

> gt_encoder_model

May I ask how to continue training from isnet-general-use.pth? Have you resolved this?

@youzipi commented May 15, 2024

> Excuse me, has the problem been resolved?

This parameter works.

@hjj-lmx commented May 16, 2024

> This parameter works.

[screenshot: training parameters]
These are my input parameters. My training set has 200 samples, and only one model file is generated during training. The result is an improvement over isnet-general-use.pth, but it is still not very good. What could cause this?

@youzipi commented May 16, 2024

> The result is an improvement over isnet-general-use.pth, but it is still not very good.

Is the loss converging?
Are you using the best F-score snapshot?

If so, you need to expand your training set.
Or, if your case is similar to the original model's, you can check the interm_sup parameter; it freezes the original parameters.
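
For reference, the idea behind interm_sup is to add an extra loss term that pulls the main model's intermediate decoder features toward features produced by the GT encoder. A self-contained illustration of that loss on dummy tensors; this shows the concept, not the repo's exact code:

import torch
import torch.nn.functional as F

def feature_supervision_loss(pred_feats, gt_feats):
    # MSE between each intermediate decoder feature map and the matching
    # feature map produced by the (frozen) GT encoder
    return sum(F.mse_loss(p, g) for p, g in zip(pred_feats, gt_feats))

# usage with dummy tensors standing in for real feature maps
pred = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
gt   = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
print(feature_supervision_loss(pred, gt))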

@hjj-lmx commented May 16, 2024

> Is the loss converging? Are you using the best F-score snapshot?
> If so, you need to expand your training set. Or, if your case is similar to the original model's, you can check the interm_sup parameter; it freezes the original parameters.

interm_sup = False. We are using the default parameters provided in the repo, only changing the dataset passed in.
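
On the "only one model file" point: as far as I can tell, the DIS trainer writes a checkpoint only when the validation F-score improves, so a single .pth file usually means just one best snapshot so far, not a failed run. A hypothetical helper (maybe_save_snapshot, with illustrative names and filename format) mirroring that save-on-improvement pattern:

import torch

def maybe_save_snapshot(net, ite, val_f1, best_f1, model_dir):
    # save a checkpoint only when the validation F-score beats the previous best
    if val_f1 > best_f1:
        torch.save(net.state_dict(),
                   f"{model_dir}/gpu_itr_{ite}_valF1_{val_f1:.4f}.pth")
        return val_f1  # new best score
    return best_f1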

@youzipi commented May 16, 2024

> interm_sup = False. We are using the default parameters provided in the repo, only changing the dataset passed in.

Add me on WeChat: eW91emlwaXBwaQ==

@hjj-lmx commented May 17, 2024

> eW91emlwaXBwaQ==

What kind of account is this?

@hjj-lmx commented May 20, 2024

> Add me on WeChat: eW91emlwaXBwaQ==

How do I add this?

@xiefeihua

Hello, I have the same issue. Is there any way to solve it? I am looking forward to your response.
