venv "I:\stable-diffusion-webui-updated\venv\Scripts\Python.exe"
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3-20-g0094aaf0
Commit hash: 0094aaf0f08cf2a8b04b4021e5011d2658fa9aae
loading WD14-tagger reqs from I:\stable-diffusion-webui-updated\extensions\stable-diffusion-webui-wd14-tagger\requirements.txt
Checking WD14-tagger requirements.
Launching Web UI with arguments: --autolaunch --precision half --no-download-sd-model --opt-sdp-no-mem-attention --opt-split-attention --xformers --opt-channelslast --skip-torch-cuda-test --skip-python-version-check --skip-version-check --no-hashing
2024-05-23 21:32:26.764809: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-23 21:32:29.436987: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
ldm/sgm GroupNorm32 replaced with normal torch.nn.GroupNorm due to `--precision half`.
ControlNet preprocessor location: I:\stable-diffusion-webui-updated\models\ControlNet\annotator
2024-05-23 21:32:55,206 - ControlNet - INFO - ControlNet v1.1.446
2024-05-23 21:32:57,359 - ControlNet - INFO - ControlNet v1.1.446
[sd-webui-freeu] Controlnet support: *enabled*
[-] ADetailer initialized. version: 24.4.2, num models: 18
Total VRAM 24576 MB, total RAM 32509 MB
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : native
VAE dtype: torch.bfloat16
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
CivitAI Browser+: Aria2 RPC started
05/23 21:33:03 [NOTICE] IPv4 RPC: listening on TCP port 24000
05/23 21:33:03 [NOTICE] IPv6 RPC: listening on TCP port 24000
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████| 1804/1804 [00:00<00:00, 12371.04it/s]
[AddNet] Updating model hashes...
100%|██████████████████████████████████████████████████████████████████████████| 1804/1804 [00:00<00:00, 162889.10it/s]
[Vec. CC] Style Sheet Loaded...
== WD14 tagger /gpu:0, uname_result(system='Windows', node='NBA-PC', release='10', version='10.0.22621', machine='AMD64') ==
Loading weights [None] from I:\stable-diffusion-webui-updated\models\Stable-diffusion\link\2_SDXL\waiREALMIX_v60.safetensors
Creating model from config: I:\stable-diffusion-webui-updated\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: I:\stable-diffusion-webui-updated\models\VAE\sdxl_fp16.vae.safetensors
Applying attention optimization: sdp-no-mem... done.
Model loaded in 13.9s (load weights from disk: 0.4s, create model: 0.3s, apply weights to model: 10.0s, load VAE: 0.4s, move model to device: 0.1s, load textual inversion embeddings: 2.1s, calculate empty prompt: 0.4s).
2024-05-23 21:33:23,666 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 90.1s (prepare environment: 10.7s, import torch: 14.3s, import gradio: 4.6s, setup paths: 22.9s, initialize shared: 0.7s, other imports: 3.3s, list SD models: 0.9s, load scripts: 16.3s, create ui: 15.5s, gradio launch: 0.6s, app_started_callback: 0.3s).
0%| | 0/20 [00:00, 'an architectural photo of __architecture/arch_building__ __architecture/arch_location__, __architecture/arch_material__, __architecture/weather_condition_clear__ __architecture/daytime__, __architecture/camera_angle__, __architecture/camera_effect__, architecture focus, designed by __architecture/arch_architect__', 'neg_BadDream,', [], 1, 1, 22, 768, 512, True, 0.49, 1.5, '4x-UltraSharp', 6, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M v2', 'Karras', -1, False, -1, 0, 0, 0, False, '', 0.8, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, effective_region_mask=None, ipadapter_input=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, effective_region_mask=None, ipadapter_input=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, effective_region_mask=None, ipadapter_input=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, 
guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, effective_region_mask=None, ipadapter_input=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None, effective_region_mask=None, ipadapter_input=None), True, 9, 99.95, 'Half Cosine Up', 6, 'Half Cosine Up', 7, 4, True, 'MEAN', 'AD', 1, True, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, True, 1, 1, False, False, 9, 1, False, 'MultiDiffusion', False, 10, 1, 1, 64, False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, True, True, True, False, 1536, 128, False, '1.5', 0.5, True, 0.01, 0.5, -0.13, 0, 0, 0, 0, True, 0, 0.5, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': 'portrait of a woman ', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 7, 'ad_denoising_strength': 0.35, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 
'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'control_v11p_sd15_inpaint [ebff9138]', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 
'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nBody\nBODY:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nBODY0.5:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1\nFace (face shape, hairstyle, eye shape, pupil color, etc.)\nFACE:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nFACE0.5:1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0\nFACE0.2:1,0,0,0,0,0,0,0,0.2,0.6,0.8,0.2,0,0,0,0,0\nFor repairing\nHAND:1,0,1,1,0.2,0,0,0,0,0,0,0,0,0,0,0,0\nClothing (use with tag)\nCLOTHING:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\nAction (use with tag)\nPOSE:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\nColoring style (use with tag)\nPALETTE:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\ncharacter (de-stylized)\nKEEPCHAR:1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0\nbackground (de-stylized)\nKEEPBG:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\nReduce overfitting (equivalent to OUTALL))\nREDUCEFIT:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\n', True, 0, 'Original Weights', 'NONE,ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL', 'none', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, 1024, 0, 15, 'R-ESRGAN 4x+', 'R-ESRGAN 4x+', 0.3, 0.1, '', '', 2, 'Noise sync (sharp)', 0, 0.05, 0, 'DPM++ 2M SDE', False, 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 1, 1, None, 'Refresh models', False, False, 20, 4, 4, 0.4, 0.95, 2, 2, 0.4, 0.5, False, 1, 
False, False, 0.6, 0.9, 0.25, 1, True, False, False, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, False, -1, -1, 0, 0, '1,1', 'Horizontal', '', 2, 1, [], False, 0, 0, 150, False, 'Constant', 0, 100, False, False, 2, 0.1, 0.5, 0, '', 0, 25, 1, False, False, False, 'BREAK', '-', 0.2, 10, False, True, 4, 4, 0.15, 0.3, 'nearest-exact', 0.5, 2, True, True, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nINOUTD:1,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0\nSIDD:1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1\nBody\nBODY:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nBODY0.5:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1\nFace (face shape, hairstyle, eye shape, pupil color, etc.)\nFACE:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nFACE0.5:1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0\nFACE0.2:1,0,0,0,0,0,0,0,0.2,0.6,0.8,0.2,0,0,0,0,0\nFor repairing\nHAND:1,0,1,1,0.2,0,0,0,0,0,0,0,0,0,0,0,0\nClothing (use with tag)\nCLOTHING:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\nAction (use with tag)\nPOSE:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\nColoring style (use with tag)\nPALETTE:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\ncharacter (de-stylized)\nKEEPCHAR:1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0\nbackground (de-stylized)\nKEEPBG:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\nReduce overfitting (equivalent to OUTALL))\nREDUCEFIT:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\n', True, 0, 'Original Weights', 'NONE,ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL', 'none', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 
'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, False, 'link\\2_SDXL\\waiREALMIX_v60.safetensors', 'None', 3, '', {'calcmodes': ('Normal', 'Normal', 'Normal'), 'save_settings': ['fp16', 'prune', 'safetensors'], 'calc_settings': ['GPU', 'fastrebasin']}, True, False, False, 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, True, True, True, [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, '', '', '', False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0, '', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, '', '', 10, 1, 0.05, 0.3, 0.6, False, False, 0, 0, 1, 0, 0, 0, False, False, 'Straight Abs.', 'Flat', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'positive', 'comma', 0, False, False, 'start', '', 7, '1,5,10,15,20', [], 0, '5,6,7,8,9,10', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, [], 30, '', 4, [], 1, '', '', '', '', '', '', '', '0.3', '', '', '', '', '', False, 7, 1.5, True, '16bpc', '.tiff', 1.2, 'Positive', 0, ', ', 
'Generate and always save', 32) {}
Traceback (most recent call last):
  File "I:\stable-diffusion-webui-updated\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "I:\stable-diffusion-webui-updated\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\modules\txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "I:\stable-diffusion-webui-updated\modules\processing.py", line 839, in process_images
    res = process_images_inner(p)
  File "I:\stable-diffusion-webui-updated\extensions\01.sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "I:\stable-diffusion-webui-updated\modules\processing.py", line 975, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "I:\stable-diffusion-webui-updated\modules\processing.py", line 1322, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "I:\stable-diffusion-webui-updated\modules\sd_samplers_kdiffusion.py", line 219, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "I:\stable-diffusion-webui-updated\modules\sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "I:\stable-diffusion-webui-updated\modules\sd_samplers_kdiffusion.py", line 219, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\k-diffusion\k_diffusion\sampling.py", line 766, in sample_dpmpp_2m_test
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\modules\sd_samplers_cfg_denoiser.py", line 242, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\modules\sd_models_xl.py", line 43, in apply_model
    return self.model(x, t, cond)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\modules\sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "I:\stable-diffusion-webui-updated\modules\sd_hijack_utils.py", line 34, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "I:\stable-diffusion-webui-updated\modules\sd_hijack_unet.py", line 48, in apply_model
    result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
    return self.diffusion_model(
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 995, in forward
    h = self.middle_block(h, emb, context)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward
    x = layer(x, context)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
    x = block(x, context=context[i])
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
    return checkpoint(
  File "I:\stable-diffusion-webui-updated\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
    return func(*inputs)
  File "I:\stable-diffusion-webui-updated\repositories\generative-models\sgm\modules\attention.py", line 467, in _forward
    self.attn1(
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui-updated\extensions\02.sd_webui_SAG\scripts\SAG.py", line 158, in xattn_forward_log
    out = einsum("b i j, b j d -> b i d", sim, v)
  File "I:\stable-diffusion-webui-updated\venv\lib\site-packages\torch\functional.py", line 377, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: expected scalar type Half but found Float
---
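For context on the error itself: `torch.einsum` (like the batched matmul it dispatches to) requires all operands to share one dtype. With `--precision half` the UNet attention tensors are fp16 (note the GroupNorm32 replacement message at startup), and the traceback ends inside the SAG extension's hooked attention, so one operand there has evidently stayed fp32. A minimal sketch of the mismatch and a cast-based workaround; the shapes and tensor names are illustrative, not the extension's actual code:

```python
import torch

# Reproduce "expected scalar type Half but found Float": einsum refuses
# mixed-dtype operands. Shapes here are illustrative only.
sim = torch.randn(2, 4, 4, dtype=torch.float16)  # attention map (half)
v = torch.randn(2, 4, 8, dtype=torch.float32)    # values left in float

try:
    torch.einsum("b i j, b j d -> b i d", sim, v)
except RuntimeError as e:
    print("mixed-dtype einsum raised:", e)

# Workaround sketch: cast operands to a common dtype before the einsum.
# Inside the extension that would likely be v.to(sim.dtype); fp32 is used
# here so the example also runs on CPUs without half-precision matmul.
out = torch.einsum("b i j, b j d -> b i d", sim.float(), v)
print(out.shape, out.dtype)  # torch.Size([2, 4, 8]) torch.float32
```

Equivalently, since the half dtype originates from launching with `--precision half`, running without that flag (or patching the extension to align dtypes as above) should avoid the crash.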