Training with custom dataset #10
Hi, thanks for your interest.
Good luck to you and your training. Tell me if you have any more questions.
Thank you for your reply.
Yeah, all the weights with 'DIS' as the task name in the file name, or with no task name specified, are trained only on the DIS5K dataset.
Hello, I want to speed up the model's inference, so I'm going to apply quantization. Thank you.
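Not from this repo, just a minimal sketch of what post-training dynamic quantization can look like in PyTorch; the tiny nn.Sequential stands in for the real loaded BiRefNet, and dynamic quantization only rewrites supported layers such as nn.Linear (it runs on CPU):

```python
import torch
import torch.nn as nn

# Placeholder model standing in for a loaded BiRefNet (hypothetical here);
# dynamic quantization targets nn.Linear layers, which are common in a
# Swin-style transformer backbone.
birefnet = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 16)).eval()

# Weights of the listed layer types are stored as int8 and dequantized
# on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    birefnet, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 3, 32, 32))  # dummy input batch
```

Note that dynamic quantization mainly speeds up Linear-heavy parts of a network; convolutional layers need static quantization instead.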
I know techniques like half-precision and TensorRT may increase the inference speed while keeping almost the same performance.
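For reference, a minimal half-precision inference sketch (assumptions: a CUDA device is available, and the Conv2d below stands in for the real loaded BiRefNet):

```python
import torch
import torch.nn as nn

# Placeholders: a toy module in place of the loaded model, random data
# in place of a preprocessed input batch.
birefnet = nn.Conv2d(3, 1, 3, padding=1).cuda().eval()
image_tensor = torch.randn(1, 3, 1024, 1024, device='cuda')

# Autocast runs the forward pass in float16 where it is safe to do so;
# the stored weights stay in float32.
with torch.no_grad(), torch.autocast(device_type='cuda', dtype=torch.float16):
    pred = birefnet(image_tensor)

# Alternative: convert the weights outright for a larger speed-up,
# at slightly higher numerical risk.
# birefnet = birefnet.half(); pred = birefnet(image_tensor.half())
```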
Okay, I'll share it when I succeed in that task. Thank you.
Yeah, of course, that's a trade-off between accuracy and inference speed. You can take a look at this issue, where these kinds of things have been discussed and I provided a lightweight version with Swin-Tiny as the backbone.
Hi, @ZeVicTech, I've updated a BiRefNet for general segmentation with swin_v1_tiny as the backbone for edge devices. The well-trained model has been uploaded to my Google Drive; check it there. Meanwhile, check the update in inference.py: setting torch.set_float32_matmul_precision to 'high' can increase the FPS of the large version on an A100 from 5 to 12 with ~0 performance downgrade (because I set it to 'high' during training). Good luck with the smaller and faster BiRefNet with ~0 degradation.
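For anyone trying this, the setting mentioned above is a one-liner applied before running the model; a sketch, not the exact inference.py code:

```python
import torch

# Opt in to TF32 matmuls on Ampere+ GPUs (e.g. A100): matrix multiplies
# trade a little mantissa precision for a large throughput gain. The author
# reports ~0 degradation because training used the same setting.
torch.set_float32_matmul_precision('high')

# ...then load the model and run inference as usual, e.g.:
# pred = birefnet(image_tensor)
```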
Hello
I am amazed at the performance of the model you created.
So I want to train it on custom data, but I'm having some issues.
When I resumed training after an interruption, the training loss increased significantly. (Is this because the model weights are saved but the optimizer state is not? See the checkpointing sketch after this post.)
In the init_models_optimizers function in train.py, there is a variable epoch_st. I think epoch_st should be a global variable, but is there a reason why you set it up like this?

I currently have a custom dataset of about 9,000 images. Because that is a small amount of data, I am adding the DIS dataset and HRSOD to the training. Is it okay to train them together like this, or should I train with the custom data only? (I use BiRefNet_ep580.pth.)
I look forward to your response. Thank you.
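On the resume question above: if a checkpoint stores only the model weights, the optimizer's momentum and learning-rate state are lost on restart, which can explain a loss jump. A minimal sketch of saving and restoring both (the tiny model, AdamW, and the 'ckpt.pth' filename are placeholders, not the repo's actual code):

```python
import torch
import torch.nn as nn

# Placeholders standing in for the real model/optimizer built in train.py.
model = nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
epoch = 10

# Save model, optimizer, and epoch together so a resume restores the
# optimizer's momentum/LR state along with the weights.
torch.save({
    'epoch': epoch,
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
}, 'ckpt.pth')  # illustrative filename

# Resume: restore all three; epoch_st then comes from the checkpoint
# instead of starting over at 0.
ckpt = torch.load('ckpt.pth', map_location='cpu')
model.load_state_dict(ckpt['model'])
optimizer.load_state_dict(ckpt['optimizer'])
epoch_st = ckpt['epoch'] + 1
```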