[Feature Request]: Img2Img inpainting/sketching - Non-binary/alpha weighted denoising mask #14024
Closed
Labels: enhancement (New feature or request)
Comments
Code cleaned up and forked over to here: https://github.com/CodeHatchling/stable-diffusion-webui-soft-inpainting
Great work on this!
Feel free to make a PR whenever you think this is ready. You may need to resolve merge conflicts, though, since your fork was based on the master branch rather than the dev branch.
Is there an existing issue for this?
What would your feature do?
Problem to solve
It appears that the denoiser only considers a binary mask (with a hard boundary) when deciding which pixels to denoise, even with extreme mask-blur values. Specifically, a pixel's region is only denoised if the mask/sketch opacity at that pixel is greater than 50%. The resulting image and the original image are then simply alpha-blended together using the mask opacity values.
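For illustration, here is a minimal NumPy sketch of the behaviour described above; the `denoise` callable is a hypothetical stand-in for the img2img pipeline, not the actual webui code:

```python
import numpy as np

def current_behaviour(original, mask, denoise):
    """original: HxWx3 float image in [0, 1]; mask: HxW opacity in [0, 1];
    denoise: hypothetical callable that denoises only where the binary mask is True."""
    binary_mask = mask > 0.5                   # hard boundary: only >50% opacity is denoised
    denoised = denoise(original, binary_mask)
    # the soft mask is then only used to alpha-blend the result over the original
    alpha = mask[..., None]
    return alpha * denoised + (1.0 - alpha) * original
```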
Why this is a problem
What possibilities solving it brings
Proposed solution
Interpret the mask opacity as a per-pixel multiplier for the denoising strength.
AFAIK there are a few ways one could achieve this effect:
I believe either of these would allow inpainting objects with partial opacity or very gradual transitions, where content in the transition region is preserved. A rough sketch of one possible interpretation is shown below.
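Since the options above aren't spelled out here, the following is only a rough sketch of one way the per-pixel multiplier could be interpreted: blending a re-noised copy of the original back in at every sampling step, weighted by the soft mask. `add_noise` and `denoise_step` are hypothetical stand-ins for the sampler's scheduling and stepping functions, not the actual webui API.

```python
import torch

def soft_masked_img2img(latent_orig: torch.Tensor, soft_mask: torch.Tensor,
                        timesteps, add_noise, denoise_step) -> torch.Tensor:
    """latent_orig: original image latent; soft_mask: per-pixel weights in [0, 1],
    broadcastable to the latent shape; timesteps: sampler noise levels, high to low."""
    x = add_noise(latent_orig, timesteps[0])      # img2img starts from a noised copy
    for t in timesteps:
        x = denoise_step(x, t)                    # one sampler step
        noised_orig = add_noise(latent_orig, t)   # the original at the same noise level
        # per-pixel blend: opacity 1 keeps the denoiser's output,
        # opacity 0 keeps (a noised copy of) the original content
        x = soft_mask * x + (1.0 - soft_mask) * noised_orig
    return x
```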
Alternate solution: dithering
A simpler option could be to use dithering to decide whether a given pixel/block is masked. In other words, using some kind of dithering pattern (Bayer, blue noise, Floyd–Steinberg), the mask opacity represents the probability that a given element of the image is affected by the denoiser.
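As a concrete (illustrative) example using ordered dithering with a 4×4 Bayer matrix, the soft mask is compared against a tiled, spatially varying threshold, so the fraction of masked pixels in any neighbourhood roughly matches the local opacity:

```python
import numpy as np

# normalized 4x4 Bayer (ordered dithering) threshold matrix
BAYER_4 = (1.0 / 16.0) * np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
], dtype=np.float32)

def dither_mask(soft_mask):
    """soft_mask: HxW float opacities in [0, 1] -> boolean mask for the denoiser."""
    h, w = soft_mask.shape
    # tile the threshold pattern over the whole mask and crop to size
    threshold = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return soft_mask > threshold
```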
Alternate solution: adjust mask threshold
An even simpler solution could be to change the mask opacity threshold at which denoising occurs from >=50% to >0%. In other words, if the mask has opacity greater than 0, it is included in the denoising.
Then, the original content could be blended over the top to completely hide the seam at the point where the mask has 0 opacity (sketched below).
However, the main drawback is that ghosting artifacts will appear where both the original and modified image are visible. (Though this is an issue with the current implementation anyway.)
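Compared with the current-behaviour sketch above, the only change is the threshold; blending the original back over the top with weight (1 − opacity) means the output equals the original exactly where the mask opacity is 0, so the boundary of the denoised region is hidden (again with a hypothetical `denoise` callable):

```python
import numpy as np

def threshold_and_blend(original, mask, denoise):
    """original: HxWx3 float image in [0, 1]; mask: HxW opacity in [0, 1]."""
    binary_mask = mask > 0.0                   # any non-zero opacity is denoised
    denoised = denoise(original, binary_mask)
    # blend the original content back over the top; where mask == 0 the result
    # is exactly the original, so the edge of the denoised region is invisible
    alpha = mask[..., None]
    return alpha * denoised + (1.0 - alpha) * original
```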
Proposed workflow