Running Error : run the library in offline mode?? #20

Open

dyspnea opened this issue Jan 14, 2023 · 1 comment

dyspnea commented Jan 14, 2023

Arguments: ('a tree', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, 0.9, 5, '0.0001', False, 'fantasy', '', 0.1, False, False, False, False, False, '', 1, '', 0, '', True, False, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None') {}
Traceback (most recent call last):
File "D:\Tools\SDUI\venv\lib\site-packages\transformers\configuration_utils.py", line 601, in _get_config_dict
resolved_config_file = cached_path(
File "D:\Tools\SDUI\venv\lib\site-packages\transformers\utils\hub.py", line 282, in cached_path
output_path = get_from_cache(
File "D:\Tools\SDUI\venv\lib\site-packages\transformers\utils\hub.py", line 545, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Tools\SDUI\modules\call_queue.py", line 45, in f
res = list(func(*args, **kwargs))
File "D:\Tools\SDUI\modules\call_queue.py", line 28, in f
res = func(*args, **kwargs)
File "D:\Tools\SDUI\modules\txt2img.py", line 52, in txt2img
processed = process_images(p)
File "D:\Tools\SDUI\modules\processing.py", line 479, in process_images
res = process_images_inner(p)
File "D:\Tools\SDUI\modules\processing.py", line 597, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "D:\Tools\SDUI\modules\processing.py", line 565, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "D:\Tools\SDUI\modules\prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "D:\Tools\SDUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "D:\Tools\SDUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Tools\SDUI\modules\sd_hijack_clip.py", line 220, in forward
z = self.process_tokens(tokens, multipliers)
File "D:\Tools\SDUI\extensions\aesthetic-gradients\aesthetic_clip.py", line 211, in call
model = copy.deepcopy(aesthetic_clip()).to(device)
File "D:\Tools\SDUI\extensions\aesthetic-gradients\aesthetic_clip.py", line 97, in aesthetic_clip
aesthetic_clip_model = CLIPModel.from_pretrained(shared.sd_model.cond_stage_model.wrapped.transformer.name_or_path)
File "D:\Tools\SDUI\venv\lib\site-packages\transformers\modeling_utils.py", line 1764, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
File "D:\Tools\SDUI\venv\lib\site-packages\transformers\configuration_utils.py", line 526, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "D:\Tools\SDUI\venv\lib\site-packages\transformers\configuration_utils.py", line 553, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "D:\Tools\SDUI\venv\lib\site-packages\transformers\configuration_utils.py", line 634, in _get_config_dict
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like None is not the path to a directory containing a config.json file.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
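
For context: the traceback shows the aesthetic-gradients extension calling CLIPModel.from_pretrained() with a name_or_path that resolves to None, so transformers falls back to a network lookup of huggingface.co and fails. Below is a minimal sketch (not the extension's code) of how transformers' documented offline mode can be used, assuming the checkpoint uses the stock SD 1.x text encoder openai/clip-vit-large-patch14 and that its files were already downloaded once while connected:

```python
# Minimal sketch of a workaround, not the extension's actual fix: load the CLIP
# text encoder strictly from the local Hugging Face cache, with no network access.
# Assumptions: the standard SD 1.x text encoder (openai/clip-vit-large-patch14)
# is the right model and was downloaded once while online.
import os

# Must be set before transformers is imported so no network calls are attempted.
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import CLIPModel

# local_files_only=True fails fast with a clear error if the files are not cached,
# instead of raising the connection error shown in the traceback above.
clip = CLIPModel.from_pretrained(
    "openai/clip-vit-large-patch14",
    local_files_only=True,
)
```

Passing an explicit model id also sidesteps the None path seen in the stack trace; the offline-mode documentation linked in the error message describes the same TRANSFORMERS_OFFLINE environment variable.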

@todstelzer

Same error. Not working.
