How to convert custom .pth Model to .onnx? #193
Wow! Thanks for your fast and also helpful response! I converted my custom model via:

```python
import io
import numpy as np
import torch.onnx
from model import U2NET

torch_model = U2NET(3, 1)
model_path = "<pathToStateDict>.pth"
batch_size = 1

torch_model.load_state_dict(torch.load(model_path))
torch_model.eval()

x = torch.randn(batch_size, 3, 320, 320, requires_grad=True)
torch_out = torch_model(x)

torch.onnx.export(torch_model, x, "model.onnx",
                  export_params=True,
                  opset_version=11,
                  do_constant_folding=True,
                  input_names=['input'],
                  output_names=['output'],
                  dynamic_axes={'input': {0: 'batch_size'},
                                'output': {0: 'batch_size'}})
```

And now it's working again with rembg!
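As a side note (not from the original thread): a common follow-up error when running the exported model (`Got invalid dimensions for input`, which appears further down in this thread) comes from feeding the ONNX model a tensor with the wrong layout. A minimal sketch of the expected preprocessing, assuming the fixed 320×320 input size used in the export call above (the random array is a stand-in for a real image, and the resize step is omitted):

```python
import numpy as np

def preprocess(image_hwc):
    # Convert an HxWx3 uint8 image into the 1x3x320x320 float32 layout
    # the exported model expects (assumes the image is already 320x320).
    x = image_hwc.astype(np.float32) / 255.0   # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))             # HWC -> CHW
    return x[np.newaxis, ...]                  # add the batch dimension

# stand-in for a real 320x320 RGB image
img = np.random.randint(0, 256, (320, 320, 3), dtype=np.uint8)
batch = preprocess(img)
print(batch.shape)  # (1, 3, 320, 320)
```

If the shape fed to the ONNX session is anything other than `(N, 3, 320, 320)`, onnxruntime will reject it with the invalid-dimensions error.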
Hi @endh1337,
Hey, sorry @danielgatis and @suri199507, latest events got me quite busy. First of all, I'm a newbie to Python and machine learning and stuff; I'm just crawling the web for information I can use to achieve some specific background/object removal tasks. So please don't judge this non-professional response 😅
Long answer (no in-depth guide!)

The U²Net repository was originally trained on the DUTS-TR dataset, which is a set of images and their counterpart masks. Each image has a counterpart mask, which is, according to the resources I found, the ground-truth binary mask (I guess this means only black and white) of which element of the image should be segmented.

So at first you need to create your own dataset like DUTS-TR: mask the objects you want segmented in white, and leave the background / parts you want RemBG to remove in black. By the way, RemBG does not only work for background removal; you can train a U²Net model to also segment a specific part of the image you want removed (leave it black in the mask and all the surroundings white). You can change this behavior, but by default you have one directory with the images and one with the masks.

I cloned the U²Net repository and made a few changes in its training script:

```python
model_name = 'u2net'  # 'u2netp'
data_dir = os.path.join(os.getcwd(), 'train_data' + os.sep)
tra_image_dir = os.path.join('DUTS', 'DUTS-TR', 'DUTS-TR', 'im_aug' + os.sep)
tra_label_dir = os.path.join('DUTS', 'DUTS-TR', 'DUTS-TR', 'gt_aug' + os.sep)
```

If you change the model definition from

```python
# ------- 3. define model --------
# define the net
if(model_name=='u2net'):
    net = U2NET(3, 1)
elif(model_name=='u2netp'):
    net = U2NETP(3, 1)
```

to

```python
# ------- 3. define model --------
net = U2NET(3, 1)
```

you will also need to change some other parts I won't describe here, e.g. auto-saving models, CUDA support, etc. Crawl the U²Net issues for more information. If you (unlike me) know what you are doing, you can adjust the model parameters, like the loss function. After fixing all the errors occurring while executing the training, you end up with a trained `.pth` model. After that you convert it to `.onnx`.

To @suri199507:
The script to convert the `.pth` model to `.onnx` is the one I posted earlier in this thread.

My response is messier than I thought it would be. Hope it's helpful anyway!
This helped me a lot!! Thanks!
`onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: input for the following indices`
How can I use rembg to load a custom-trained model for prediction?
Is there a way to extract the weights (`.pth`) from the ONNX model?
I have tried this script to convert the model; the model is converted but not working.
What should I do with my own dataset? And I'm wondering: is there an easy, quick way to turn a color picture into a black-and-white binary image that I can customize for the body?
Hello, I have the same problem. I don't know how to use my custom-trained model with rembg :(
Hi guys, how can I use my custom-trained model with rembg?
try this:
Dear friends, I've heavily improved this code and created a dedicated rembg-trainer repo! It's much, much faster now (uses hardware acceleration if possible, multi-threading where possible), more reliable, and easier to start working with, and it saves the model into ONNX format every x iterations, so you can easily compare model behaviour after each x iterations. Should be very intuitive and understandable. Please kindly take a look. Thanks ever so much!
Can I use the `remove` function with a custom `model_path`?
I want to make a cutout model, but I'm not sure how to make it (because I don't understand these things at all). However, after seeing your message, my understanding based on your code is that you only need to place the image and mask image in the corresponding directories and then execute the Python file you have prepared.

I placed an xx.jpg image in the images directory and the corresponding mask file xx.png in the masks directory. Then I executed the script, which raised `num_samples should be a positive integer value, but got num_samples=0`. So I changed xx.jpg to .png, but it then raised `Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 1024, 1024] to have 3 channels, but got 4 channels instead`.

Can you teach me how to set these parameters and provide more detailed steps?
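As an aside (not part of the original thread): the 4-channel error in the comment above usually means the PNG carries an alpha channel (RGBA), while the network expects plain RGB. A minimal sketch of stripping the alpha channel with Pillow; the in-memory image below is a stand-in for opening your own file, e.g. `Image.open("images/xx.png")`:

```python
from PIL import Image

# stand-in for a training image with an alpha channel;
# in practice you would load your own file with Image.open(...)
rgba = Image.new("RGBA", (64, 64), (255, 0, 0, 128))

rgb = rgba.convert("RGB")  # drops the alpha channel, leaving 3 channels
print(rgb.mode, len(rgb.getbands()))  # RGB 3
```

Converting every training image (and saving it back out) before running the training script should make the input shape `[1, 3, H, W]` as the first convolution expects.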
Hey, first of all thanks for sharing your great work!
As described in the docs, I trained a custom model based on the U²Net architecture to remove specific backgrounds, and the results were fine, but it seems like you cut off custom model support in 3b18bad. Are you planning to add this again in the future? Could you please give an insight into how you converted the u2net `.pth` models to the `.onnx` ones?

Thanks!