
Hires fix change checkpoint #12181

Merged: 12 commits merged into dev from hires_checkpoint, Aug 5, 2023

Conversation

AUTOMATIC1111
Owner

Description

  • Adds a checkpoint selection for hires fix.
  • It sits in the same row as sampler selection; you must enable the Hires fix: show hires checkpoint and sampler selection setting to see it.
  • It just switches the checkpoint.
  • Works with SD1, SD2 and SDXL.
  • Also makes it possible to do hires fix at the same resolution as the original, for the SDXL refiner.
  • You may want to use the Checkpoints to cache in RAM setting to speed it up (see the sketch after this list).
  • In the future, I plan to make it possible to keep multiple models (as opposed to state dicts) in memory to make the switch instant.
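
As mentioned in the list above, here is a minimal toy sketch of why the RAM cache helps (hypothetical names and timings, not webui code or the PR diff): with the cache enabled, switching back to an already-loaded checkpoint is a dictionary lookup instead of a multi-gigabyte disk read. Applying the state dict to the live model still costs time, which is what the planned keep-multiple-models-in-memory change would remove.

```python
import time

# Toy model of a checkpoint cache (hypothetical names, not webui internals).
checkpoint_cache: dict[str, dict] = {}  # filename -> state dict kept in RAM

def read_state_dict_from_disk(filename: str) -> dict:
    time.sleep(2.0)  # stand-in for reading and parsing a multi-GB .safetensors file
    return {"source": filename}

def load_checkpoint(filename: str) -> dict:
    if filename in checkpoint_cache:
        return checkpoint_cache[filename]  # near-instant on repeat switches
    state_dict = read_state_dict_from_disk(filename)
    checkpoint_cache[filename] = state_dict
    return state_dict

load_checkpoint("sd_xl_base_1.0.safetensors")     # slow: first load from disk
load_checkpoint("sd_xl_refiner_1.0.safetensors")  # slow: first load from disk
load_checkpoint("sd_xl_base_1.0.safetensors")     # fast: served from the RAM cache
```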

Screenshots/videos:

[Screenshot: firefox_HvBxrY0VsP]


@AUTOMATIC1111 changed the base branch from master to dev on July 30, 2023 11:11
@AUTOMATIC1111 changed the title from "Hires checkpoint" to "Hires fix change checkpoint" on Jul 30, 2023
@anae-git

This breaks in an ugly way if I use a lora. Could the loras perhaps be disabled, or maybe allow for a separate hires prompt?

@hubba1987

This breaks in an ugly way if I use a lora. Could the loras perhaps be disabled, or maybe allow for a separate hires prompt?

There is an option in Settings -> User interface: "Hires fix: show hires prompt and negative prompt".
It probably fits your needs for managing prompts separately.

[screenshot]

@anae-git

This breaks in an ugly way if I use a lora. Could the loras perhaps be disabled, or maybe allow for a separate hires prompt?

There is an option in Settings -> User interface: "Hires fix: show hires prompt and negative prompt". It probably fits your needs for managing prompts separately.

Ah yes, I had not noticed that. That will work.

@missionfloyd linked an issue Jul 31, 2023 that may be closed by this pull request
@daxijiu
Contributor

daxijiu commented Jul 31, 2023

I get an error with just the settings below:
[screenshot of settings]
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Version: v1.5.1-92-g0af4127f
Commit hash: 0af4127
Launching Web UI with arguments: --theme light --port 7861 --xformers --no-half-vae --disable-nan-check --api --autolaunch --listen --enable-insecure-extension-access
Loading weights [31e35c80fc] from D:\kkkkk\release\SD_webui_with_aki_launcher_dev\models\Stable-diffusion\sd_xl_base_1.0.safetensors
Creating model from config: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://0.0.0.0:7861

To create a public link, set share=True in launch().
Startup time: 11.7s (prepare environment: 1.5s, launcher: 0.1s, import torch: 3.2s, import gradio: 0.7s, setup paths: 0.4s, other imports: 0.7s, load scripts: 0.6s, create ui: 0.2s, gradio launch: 4.2s).
Applying attention optimization: xformers... done.
Textual inversion embeddings loaded(0):
Textual inversion embeddings skipped(11): 0xisometric, 21charturnerv2, badhandv4, charturnerv2, corneo_marin_kitagawa, EasyNegative, EasyNegativeV2, ganyu_c, ng_deepnegative_v1_75t, realisticvision-negative-embedding, ulzzang-6500
Model loaded in 6.7s (load weights from disk: 0.8s, create model: 1.1s, apply weights to model: 1.4s, apply half(): 0.7s, move model to device: 2.0s, calculate empty prompt: 0.6s).
Loading weights [7440042bbd] from D:\kkkkk\release\SD_webui_with_aki_launcher_dev\models\Stable-diffusion\sd_xl_refiner_1.0.safetensors
Creating model from config: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
Applying attention optimization: xformers... done.
Model loaded in 16.1s (create model: 1.9s, apply weights to model: 2.1s, apply half(): 9.4s, move model to device: 2.3s, calculate empty prompt: 0.3s).
*** Error completing request
*** Arguments: ('task(pfwkzgo5ca8rd96)', '1girl', '', [], 15, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 1024, 1024, True, 0.3, 1, 'None', 5, 0, 0, 'sd_xl_refiner_1.0 [7440042bbd]', 0, '', '', [], <gradio.routes.Request object at 0x000001A32D70EE30>, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 1083, in sample
return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 1142, in sample_hr_pass
image_conditioning = self.img2img_image_conditioning(decoded_samples, samples)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 278, in img2img_image_conditioning
if self.sampler.conditioning_key in {'hybrid', 'concat'}:
AttributeError: 'NoneType' object has no attribute 'conditioning_key'
Note: The Python runtime threw an exception. Please check the troubleshooting page.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\call_queue.py", line 58, in f
    res = list(func(*args, **kwargs))
  File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\txt2img.py", line 63, in txt2img
    processed = processing.process_images(p)
  File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 681, in process_images
    res = process_images_inner(p)
  File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 798, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 1085, in sample
    del self.sampler
AttributeError: sampler

Note: The Python runtime threw an exception. Please check the troubleshooting page.


@AUTOMATIC1111
Owner Author

I'm not sure why this is happening, @daxijiu (maybe an extension), but I pushed an edit that should prevent it.
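
For context on the traceback above: `img2img_image_conditioning` dereferences `self.sampler`, which is `None` during the hires pass after the checkpoint swap tears the sampler down. A guard of roughly this shape avoids the crash (a sketch of the assumed fix, not the actual commit; the signature, `inpainting_image_conditioning`, and the dummy-tensor shape follow the surrounding webui code, but the real diff may differ):

```python
# Sketch only: inside StableDiffusionProcessing in modules/processing.py.
def img2img_image_conditioning(self, source_image, latent_image):
    # Read the key defensively: self.sampler is None after the hires checkpoint swap.
    conditioning_key = getattr(self.sampler, 'conditioning_key', None)
    if conditioning_key in {'hybrid', 'concat'}:
        # inpainting-style models need real image conditioning
        return self.inpainting_image_conditioning(source_image, latent_image)
    # everything else gets the usual dummy conditioning tensor
    return latent_image.new_zeros(latent_image.shape[0], 5, 1, 1)
```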

@ZeroCool22

How do I pull this one?

@sashasubbbb

Any idea why this error pops up?

Traceback (most recent call last):
      File "B:\AIimages\stable-diffusion-webui\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "B:\AIimages\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "B:\AIimages\stable-diffusion-webui\modules\txt2img.py", line 63, in txt2img
        processed = processing.process_images(p)
      File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 681, in process_images
        res = process_images_inner(p)
      File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 798, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 1083, in sample
        return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
      File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 1142, in sample_hr_pass
        image_conditioning = self.img2img_image_conditioning(decoded_samples, samples)
      File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 278, in img2img_image_conditioning
        if self.sampler.conditioning_key in {'hybrid', 'concat'}:
    AttributeError: 'NoneType' object has no attribute 'conditioning_key'

@daxijiu
Contributor

daxijiu commented Jul 31, 2023

I'm not sure why this is happening, @daxijiu (maybe an extension), but I pushed an edit that should prevent it.

After updating, I got another error.
PS: To eliminate interference, I disabled all extensions except the built-in ones.

Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Version: v1.5.1-93-gdca121e9
Commit hash: dca121e
Launching Web UI with arguments: --theme light --port 7861 --xformers --no-half-vae --disable-nan-check --api --autolaunch --listen --enable-insecure-extension-access
Loading weights [31e35c80fc] from D:\kkkkk\release\SD_webui_with_aki_launcher_dev\models\Stable-diffusion\sd_xl_base_1.0.safetensors
Creating model from config: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://0.0.0.0:7861

To create a public link, set share=True in launch().
Startup time: 10.1s (prepare environment: 1.1s, import torch: 2.5s, import gradio: 0.5s, setup paths: 0.3s, other imports: 0.6s, load scripts: 0.6s, create ui: 0.2s, gradio launch: 4.2s).
Applying attention optimization: xformers... done.
Textual inversion embeddings loaded(0):
Textual inversion embeddings skipped(11): 0xisometric, 21charturnerv2, badhandv4, charturnerv2, corneo_marin_kitagawa, EasyNegative, EasyNegativeV2, ganyu_c, ng_deepnegative_v1_75t, realisticvision-negative-embedding, ulzzang-6500
Model loaded in 6.2s (load weights from disk: 0.8s, create model: 1.1s, apply weights to model: 1.4s, apply half(): 0.7s, move model to device: 1.7s, calculate empty prompt: 0.4s).
Loading weights [7440042bbd] from D:\kkkkk\release\SD_webui_with_aki_launcher_dev\models\Stable-diffusion\sd_xl_refiner_1.0.safetensors
Creating model from config: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
Applying attention optimization: xformers... done.
Model loaded in 11.8s (create model: 1.9s, apply weights to model: 1.4s, apply half(): 5.8s, move model to device: 2.3s, calculate empty prompt: 0.4s).
Loading weights [31e35c80fc] from cache
Creating model from config: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
Model loaded in 8.1s (create model: 2.3s, apply weights to model: 2.5s, apply half(): 0.8s, move model to device: 2.1s, calculate empty prompt: 0.4s).
*** Error completing request
*** Arguments: ('task(erbji1yp32vkoff)', '1girl', '', [], 15, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 1024, 1024, True, 0.3, 1, 'None', 0, 0, 0, 'sd_xl_refiner_1.0 [7440042bbd]', 0, '', '', [], <gradio.routes.Request object at 0x000001CB2D292B90>, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\txt2img.py", line 63, in txt2img
processed = processing.process_images(p)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 681, in process_images
res = process_images_inner(p)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 798, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 1083, in sample
return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 1142, in sample_hr_pass
image_conditioning = self.img2img_image_conditioning(decoded_samples, samples)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\modules\processing.py", line 278, in img2img_image_conditioning
if self.sampler.conditioning_key in {'hybrid', 'concat'}:
AttributeError: 'NoneType' object has no attribute 'conditioning_key'
Note: The Python runtime threw an exception. Please check the troubleshooting page.


@ZeroCool22

ZeroCool22 commented Jul 31, 2023

All in the same boat...

[Screenshot_5]

venv "C:\Users\ZeroCool22\Desktop\Auto2\webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.5.1-93-gdca121e9
Commit hash: dca121e9035ba36b3f7484c8a31a7776d85c0960
Launching Web UI with arguments: --xformers --api
Loading weights [0f1b80cfe8] from C:\Users\ZeroCool22\Desktop\Auto2\webui\models\Stable-diffusion\dreamshaperXL10_alpha2Xl10_2.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 9.3s (prepare environment: 1.9s, launcher: 0.2s, import torch: 2.8s, import gradio: 0.9s, setup paths: 0.8s, other imports: 1.0s, load scripts: 1.0s, create ui: 0.4s, gradio launch: 0.2s).
Creating model from config: C:\Users\ZeroCool22\Desktop\Auto2\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
Model loaded in 10.4s (load weights from disk: 1.2s, create model: 0.3s, apply weights to model: 1.9s, apply half(): 2.2s, move model to device: 1.8s, load textual inversion embeddings: 1.9s, calculate empty prompt: 1.1s).
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:49<00:00,  1.99s/it]
=========================================================================================
A tensor with all NaNs was produced in VAE.
Web UI will now convert VAE into 32-bit float and retry.
To disable this behavior, disable the 'Automaticlly revert VAE to 32-bit floats' setting.
To always start with 32-bit VAE, use --no-half-vae commandline flag.
=========================================================================================
Loading weights [7440042bbd] from C:\Users\ZeroCool22\Desktop\Auto2\webui\models\Stable-diffusion\sd_xl_refiner_1.0.safetensors
Creating model from config: C:\Users\ZeroCool22\Desktop\Auto2\webui\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
Applying attention optimization: xformers... done.
Model loaded in 62.5s (apply weights to model: 36.9s, apply half(): 22.9s, move model to device: 1.8s, calculate empty prompt: 0.5s).
Loading weights [0f1b80cfe8] from C:\Users\ZeroCool22\Desktop\Auto2\webui\models\Stable-diffusion\dreamshaperXL10_alpha2Xl10_2.safetensors
Creating model from config: C:\Users\ZeroCool22\Desktop\Auto2\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
Model loaded in 36.4s (create model: 0.4s, apply weights to model: 29.4s, apply half(): 3.7s, move model to device: 2.1s, hijack: 0.1s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.5s).
*** Error completing request
*** Arguments: ('task(34dq4ta9rzw0mvw)', '(Epic scene:1.2), (close portrait:1.2), Evil ripped warrior consumed by the evil of darkness with a furious face holding a sword on fire, (full body shot), (photorealistic:1.2), (detailed face:1.2), (realistic skin texture:1.1), dramatic lighting, detailed armor plate, art by artgerm and greg rutkowski, unreal engine 5, (Detail Extractor:1.1)', '(anime:1.1), (manga:1.4), distorted face, pigtail, paint, cartoon, render, 3d, asian, deformities, double head, duplicated head, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad eyes, crossed eyes, poorly drawn face, mutation, extra limb, poorly drawn hands, missing limb, floating limbs, malformed hands, out of focus, long neck, long body, mutated hands and fingers, out of frame, long fingers, blender, doll, cropped, low-res, black and white', [], 25, 0, True, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 1024, 1024, True, 0.7, 1, 'None', 10, 0, 0, 'sd_xl_refiner_1.0 [7440042bbd]', 0, '', '', [], <gradio.routes.Request object at 0x000001FB12F41540>, 0, False, 'None', 10, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
    Traceback (most recent call last):
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\txt2img.py", line 63, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\processing.py", line 681, in process_images
        res = process_images_inner(p)
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\processing.py", line 798, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\processing.py", line 1083, in sample
        return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\processing.py", line 1142, in sample_hr_pass
        image_conditioning = self.img2img_image_conditioning(decoded_samples, samples)
      File "C:\Users\ZeroCool22\Desktop\Auto2\webui\modules\processing.py", line 278, in img2img_image_conditioning
        if self.sampler.conditioning_key in {'hybrid', 'concat'}:
    AttributeError: 'NoneType' object has no attribute 'conditioning_key'

[Screenshot_6]

@AUTOMATIC1111
Owner Author

Pushed an update for `AttributeError: 'NoneType' object has no attribute 'conditioning_key'`.

@spinferno

Hey, when I check out the hires_checkpoint branch and launch with:
COMMANDLINE_ARGS= --xformers --no-half-vae --disable-nan-check --api --autolaunch --listen --enable-insecure-extension-access
I don't get the new fields for the 'hires checkpoint' dropdown or the hires sampling method.
To get the new fields to show up on the front end, do I need to blow away any scripts/plugins, or blow away my venv?
Thanks so much for your time! I'm very excited about SDXL refiner support!

@AUTOMATIC1111
Owner Author

Here is the part of the original post relevant to your question @spinferno:

It sits in the same row as sampler selection; you must enable the Hires fix: show hires checkpoint and sampler selection setting to see it.

@hubba1987

Hey, when I check out the hires_checkpoint branch and launch with: COMMANDLINE_ARGS= --xformers --no-half-vae --disable-nan-check --api --autolaunch --listen --enable-insecure-extension-access I don't get the new fields for the 'hires checkpoint' dropdown or the hires sampling method. To get the new fields to show up on the front end, do I need to blow away any scripts/plugins, or blow away my venv? Thanks so much for your time! I'm very excited about SDXL refiner support!

There is an option in Settings -> User interface

@daxijiu
Contributor

daxijiu commented Jul 31, 2023

pushed an update for AttributeError: 'NoneType' object has no attribute 'conditioning_key'

Great, it's working fine now!
The remaining issue is that switching models seems to take too much time. Even with the models cached in memory, switching models twice still took 16 seconds.

@rltgjqmcpgjadyd

An error occurred after 4d9b096:

[Screenshot 2023-07-31 183026]

Total progress: 100%|██████████████████████████████████████████████████████████████████| 30/30 [00:06<00:00, 4.84it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:05<00:00, 5.37it/s]
Loading weights [8d0ce6c016] from J:\SDWebUI\models\Stable-diffusion\sd_xl_refiner_1.0_0.9vae.safetensors
Creating model from config: J:\SDWebUI\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
Loading VAE weights specified in settings: J:\SDWebUI\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 16.4s (create model: 0.1s, apply weights to model: 12.7s, apply half(): 1.6s, load VAE: 0.7s, move model to device: 0.9s, calculate empty prompt: 0.3s).
Loading weights [e6bb9ea85b] from J:\SDWebUI\models\Stable-diffusion\sd_xl_base_1.0_0.9vae.safetensors
Creating model from config: J:\SDWebUI\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: J:\SDWebUI\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 20.8s (create model: 0.4s, apply weights to model: 15.5s, apply half(): 2.1s, load VAE: 0.8s, move model to device: 1.7s, calculate empty prompt: 0.3s).
*** Error completing request
*** Arguments: ('task(liep2ahu5cecesl)', 'anime artwork, anime style, key visual, vibrant, studio anime, highly detailed, 1girl,', 'photo, deformed, black and white, realism, disfigured, low contrast,', [], 30, 18, False, False, 1, 1, 8, 2295887319.0, -1.0, 0, 0, 0, False, 1024, 1024, True, 0.2, 1, 'None', 10, 0, 0, 'sd_xl_refiner_1.0_0.9vae [8d0ce6c016]', 0, '', '', [], <gradio.routes.Request object at 0x000001B69174E6B0>, 0, False, False, False, 'positive', 'comma', 0, False, False, '', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "J:\SDWebUI\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "J:\SDWebUI\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "J:\SDWebUI\modules\txt2img.py", line 63, in txt2img
processed = processing.process_images(p)
File "J:\SDWebUI\modules\processing.py", line 681, in process_images
res = process_images_inner(p)
File "J:\SDWebUI\modules\processing.py", line 798, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "J:\SDWebUI\modules\processing.py", line 1083, in sample
return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
File "J:\SDWebUI\modules\processing.py", line 1127, in sample_hr_pass
lowres_samples = torch.clamp((decoded_samples + 1.0) / 2.0, min=0.0, max=1.0)
RuntimeError: "clamp_scalar_cpu" not implemented for 'Half'


Do I need `--no-half`?

@AUTOMATIC1111
Owner Author

@rltgjqmcpgjadyd: I can't run the webui right now, but I pushed what I think should fix the problem.
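
For anyone curious, the failing call reproduces in isolation: on the affected torch builds, `clamp` is not implemented for float16 tensors on the CPU, so a fix has to cast the decoded samples to float32 first. A sketch of the idea (not the actual commit):

```python
import torch

decoded_samples = torch.randn(1, 3, 8, 8, dtype=torch.float16)  # fp16 tensor on CPU

# torch.clamp((decoded_samples + 1.0) / 2.0, min=0.0, max=1.0)
# -> RuntimeError: "clamp_scalar_cpu" not implemented for 'Half' (on affected builds)

# Casting to float32 before the clamp sidesteps the missing fp16 CPU kernel.
lowres_samples = torch.clamp(
    (decoded_samples.to(torch.float32) + 1.0) / 2.0, min=0.0, max=1.0
)
```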

@rltgjqmcpgjadyd

After c09bc2c:

It works!

Great job! Thx!

@mariaWitch

This is really good stuff, but something I think should be added is a two-pass option, where you can choose the model and size for a hires fix, and then a refiner pass. You would always want to run the refiner at 1x no matter what, but there could be a situation where you want normal hires fix (to upscale and add detail) and then the refiner on a final pass after the initial hires fix. Honestly, I think "refining" should get its own settings area, since its purpose is slightly different from hires fix, and there are situations where you want to do hires fix and then refine.

@djdookie

Can you please consider implementing not only the alternative two-stage SDXL img2img pipeline, but also the intended ensemble-of-experts pipeline, for maximum quality? There you don't run the base model to completion and renoise the result before refining: the image is only denoised down to, say, 20% remaining noise with the base model, and the latent result is then handed over to the refiner, which removes the remaining noise.
In vladmandic's UI (a fork of this webui) I can generate outstanding quality that I can hardly reach with the img2img workflow. I wish I could do that in the AUTOMATIC1111 webui as well.
Explanation of how to use both pipelines there: https://github.com/vladmandic/automatic/wiki/SD-XL#using-sd-xl-refiner

Quote Stability AI:

SDXL consists of an ensemble of experts pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.

https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
[image]
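
For reference, the ensemble-of-experts handoff described above looks roughly like this in Hugging Face diffusers (shown for illustration; it is not this webui's code, and this PR implements the img2img-style flow instead):

```python
import torch
from diffusers import DiffusionPipeline

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a portrait of a warrior, dramatic lighting"

# The base model denoises only the first 80% of the schedule and returns noisy latents...
latents = base(prompt=prompt, num_inference_steps=40,
               denoising_end=0.8, output_type="latent").images
# ...which the refiner picks up at the same point and denoises for the remaining 20%.
image = refiner(prompt=prompt, num_inference_steps=40,
                denoising_start=0.8, image=latents).images[0]
```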

@FurkanGozukara

Can you please consider implementing not only the alternative two-stage SDXL img2img pipeline, but also the intended ensemble-of-experts pipeline, for maximum quality? There you don't run the base model to completion and renoise the result before refining: the image is only denoised down to, say, 20% remaining noise with the base model, and the latent result is then handed over to the refiner, which removes the remaining noise. In vladmandic's UI (a fork of this webui) I can generate outstanding quality that I can hardly reach with the img2img workflow. I wish I could do that in the AUTOMATIC1111 webui as well. Explanation of how to use both pipelines there: https://github.com/vladmandic/automatic/wiki/SD-XL#using-sd-xl-refiner

Quote Stability AI:

SDXL consists of an ensemble of experts pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.

https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0

I read it and it looks very promising.

I agree, and I hope we get this.

@catboxanon
Collaborator

FWIW, this extension already implements the aforementioned functionality: https://github.com/wcde/sd-webui-refiner

@AUTOMATIC1111
Owner Author

While I do acknowledge the desire to switch models mid-sampling, and that's probably coming sometime soon, the official way to use the refiner in the official repo seems to be to just do txt2img and then run img2img on the latents:

https://github.com/Stability-AI/generative-models/blob/45c443b316737a4ab6e40413d7794a7f5657c19f/scripts/demo/sampling.py#L317

@catboxanon
Collaborator

catboxanon commented Aug 1, 2023

The key word is that it runs the refiner on the *noisy* latents.

In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model

It does not run a full txt2img pass through all steps and then do an img2img pass like you describe. It purposefully stops sampling early on the txt2img pass if the refiner is enabled.
https://github.com/Stability-AI/generative-models/blob/45c443b316737a4ab6e40413d7794a7f5657c19f/scripts/demo/streamlit_helpers.py#L373-L376
https://github.com/Stability-AI/generative-models/blob/45c443b316737a4ab6e40413d7794a7f5657c19f/scripts/demo/streamlit_helpers.py#L259
(the default strength value when the refiner is enabled is 0.15)
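
To make that concrete, a bit of illustrative arithmetic (example step counts only, not values taken from the linked code): with 40 total steps and the 0.15 default strength, the early-handoff scheme stops the base after about 85% of the schedule and lets the refiner finish the rest, instead of the base completing all 40 steps first.

```python
# Illustrative arithmetic only: how a refiner strength of 0.15 splits the schedule
# in the early-handoff ("ensemble of experts") scheme.
steps, refiner_strength = 40, 0.15

base_steps = round(steps * (1 - refiner_strength))  # 34 steps on the base model
refiner_steps = steps - base_steps                  # 6 steps on the refiner

print(base_steps, refiner_steps)  # -> 34 6
```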

I think it needs to be pretty explicitly stated that this PR is not the official way to use SDXL with the refiner.

@camoody1

camoody1 commented Aug 1, 2023

I do not see this checkbox option in the User Interface section of my Settings. I see other hires related options, but not this one. Has this option been removed?
[screenshot]

@chrisincite

I do not see this checkbox option in the User Interface section of my Settings. I see other hires related options, but not this one. Has this option been removed?

You must use the hires_checkpoint branch.
Run `git fetch origin` and then `git checkout hires_checkpoint` to switch from master.

@ZeroCool22

I do not see this checkbox option in the User Interface section of my Settings. I see other hires related options, but not this one. Has this option been removed?

You must use the hires_checkpoint branch. Run `git fetch origin` and then `git checkout hires_checkpoint` to switch from master.

When is it planned to land in the stable branch?

@catboxanon added the sdxl (Related to SDXL) label Aug 3, 2023
@brkirch linked an issue Aug 5, 2023 that may be closed by this pull request
@AUTOMATIC1111 merged commit 0ae2767 into dev Aug 5, 2023
6 checks passed
@AUTOMATIC1111 deleted the hires_checkpoint branch August 5, 2023 04:47
@w-e-w mentioned this pull request Aug 24, 2023
Labels: sdxl (Related to SDXL)

Successfully merging this pull request may close these issues:

[Feature Request]: Double model load for multi process(hires fix)