
How to use with your own sketches? #4

Open
b-adkins opened this issue Feb 17, 2018 · 9 comments

Comments

@b-adkins

Hi! I was interested in this library as a user, not a developer. I hate inking my comics and wanted an AI inker. I could run the included example data with no issue, but the application failed on my own drawings.

What does it take to run my own scanned pencil drawing through your neural net? Is there preprocessing required? Are there specific details that need to be correct in an image file? What range of resolutions does it accept? E.g., a human head of height 30 px to 700 px.

@bobbens
Owner

bobbens commented Feb 20, 2018

It should run fine on your input, assuming it is an image type supported by Pillow. Preprocessing is done by the script. Image size is a bit tricky, as it influences the output. I usually run between 500 and 1500 pixels on the long side, but it really depends on how detailed the image is.
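For anyone unsure what that means in practice, here is a minimal preprocessing sketch along these lines. The function name and the 1000 px target are my own choices for illustration, not part of the repo; the script's built-in preprocessing may differ.

```python
from PIL import Image

TARGET_LONG_SIDE = 1000  # within the 500-1500 px range suggested above


def prepare_sketch(path, out_path):
    """Load a scan in any Pillow-supported format, convert to grayscale,
    and scale it so the longer side is TARGET_LONG_SIDE pixels."""
    img = Image.open(path).convert("L")  # single-channel grayscale
    scale = TARGET_LONG_SIDE / max(img.size)
    new_size = (round(img.width * scale), round(img.height * scale))
    img = img.resize(new_size, Image.LANCZOS)
    img.save(out_path)
    return img
```

You would then feed `out_path` to `simplify.py` instead of the raw scan.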

@jakubLangr

jakubLangr commented Apr 3, 2018

Thanks for the model, it looks really awesome!

But I have a bit to add to this:

  • I am using PyTorch version 0.3.1.
  • Running on Ubuntu with no GPUs.
  • Converted the images to PNG, ~35.7 kB (163×290).
  • Yet the output I get is something like the image below.
  • Tried both the GAN and MSE models, but the MSE model does not output anything.

Yet this is the output I get (it looks nothing like the original image):
[image: diag]

Any recommendations?

P.S. If you need any extra information to help diagnose it, just ask. Happy to chat :)

@jakubLangr

Alright, I've managed to get output with the mse model.

A couple of extra thoughts.

  • The test runs fine.
  • I tried a different encoding; no change, it seems.
  • Here's the MSE model's output:
    [image: MSE model result]

This file was originally a .jpg; I wonder if that has anything to do with it?

@bobbens
Owner

bobbens commented Apr 4, 2018

@jakubLangr Could I see the input image? I'm assuming the network is firing on the paper texture and the contrast is very low which could explain those results.

@jakubLangr

Yes, probably the case. So how did you obtain the training dataset? It does not look like it's scanned, but I could never get the lighting so perfect.

For example this image produced similar results:
[image: diagram2]

@jakubLangr

Or this one:
[image: diagram]

@bobbens
Owner

bobbens commented Jun 29, 2018

The models were not trained with data taken from photographs, which explains the low performance on the images you supplied. Retraining with data more similar to the images you want to use it with would work better (training code is available now). Our new approach should be able to handle that much better; however, I still have to prepare the code and models to make them public.

@jakubLangr

jakubLangr commented Jul 7, 2018 via email

@raul1968

Could you post the code that you use? I get two errors when trying to run it:

$ python simplify.py
Traceback (most recent call last):
  File "simplify.py", line 4, in <module>
    from torch.utils.serialization import load_lua
ImportError: No module named serialization

and

unable to load_lua..

I even installed Ubuntu 16.04 to try to get it to work. I have several hundred images of my own I want to use to train it, but I can't wrap my mind around how to get it to work. I'm trying to use it for my animation, to clean up my pencils, since I don't use very much color.
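For what it's worth, that ImportError usually means a PyTorch version of 1.0 or newer, where torch.utils.serialization (and with it load_lua) was removed. One workaround is pinning PyTorch to 0.4.1 or older; another is reading the Torch7 .t7 model with the standalone torchfile package. A fallback sketch along those lines (resolve_load_lua is a name I made up, and torchfile is a third-party dependency you would have to install):

```python
import importlib.util


def resolve_load_lua():
    """Return a loader for Torch7 .t7 files, depending on what is installed."""
    if importlib.util.find_spec("torch") is not None:
        try:
            # Available on PyTorch <= 0.4.x; removed in PyTorch 1.0.
            from torch.utils.serialization import load_lua
            return load_lua
        except ImportError:
            pass
    if importlib.util.find_spec("torchfile") is not None:
        import torchfile  # standalone reader for Torch7 serialized files
        return torchfile.load
    raise ImportError(
        "Install PyTorch <= 0.4.1 or the 'torchfile' package to read .t7 models"
    )
```

I can't say whether the second error ("unable to load_lua") has the same cause, so this is only a guess at the first one.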
