Failed to load any .safetensors model, returns OSError: No such device (os error 19) #752

@johnchen40904

Description

Has this issue been opened before?

  • [V] It is not in the FAQ, I checked.
  • [V] It is not in the issues, I searched.

Describe the bug

The WebUI fails to load any model in .safetensors format, including checkpoints and LoRAs.

Only the .ckpt checkpoint the setup ships with is able to generate anything.

The problem persists even with a fresh safetensors model downloaded from CivitAI and placed in the same directory as the working .ckpt model;
this rules out the possibility that my model files are corrupted or that some filesystem quirk affected only my existing copies.

The following traceback appears when this occurs:

    File "/opt/conda/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file
      with safe_open(filename, framework="pt", device=device) as f:
    OSError: No such device (os error 19)
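For context on the error itself: os error 19 is `ENODEV`, and `safe_open` memory-maps the model file, so the failure points at the filesystem the models live on rather than at the files themselves. A minimal stdlib probe (a diagnostic sketch, not part of this report; `mmap_supported` is a hypothetical helper name) to check whether a given directory supports mmap:

```python
import errno
import mmap
import os

def mmap_supported(directory: str) -> bool:
    """Probe whether files under `directory` can be memory-mapped.

    safetensors' safe_open mmaps model files; on filesystems that reject
    mmap(2), the call fails with ENODEV ("No such device", os error 19).
    """
    probe = os.path.join(directory, ".mmap_probe")
    with open(probe, "wb") as f:
        f.write(b"\0" * mmap.PAGESIZE)   # mmap needs a non-empty file
    try:
        with open(probe, "rb") as f:
            mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ).close()
        return True
    except OSError as e:
        if e.errno == errno.ENODEV:      # os error 19: fs rejects mmap
            return False
        raise
    finally:
        os.remove(probe)

print(mmap_supported("."))
```

On a normal local filesystem this prints True; run against the mounted model directory, a False result would confirm that mmap support is what is missing.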

Which UI

auto.

Hardware / Software

  • OS: Debian 12 bookworm
  • Docker Version: 27.4.0, build bde2b89
  • Docker compose version: v2.31.0
  • Repo version: from master
  • RAM: 16GB
  • GPU/VRAM: GTX 1060 / 6GB

Steps to Reproduce

  1. Follow the guide to set up the container
  2. Place any safetensors model in its respective directory
  3. Load a safetensors checkpoint, or select a safetensors LoRA and hit "Generate"
  4. See error in docker container logs

Additional context

My docker compose file:

x-base_service: &base_service
    ports:
      - "${WEBUI_PORT:-7860}:7860"
    volumes:
      - &v1 ./data:/data
      - &v2 ./output:/output
      - "/srv/mergerfs/Event_Horizon/AI/StDi_Models/VAE:/data/models/VAE"
      - "/srv/mergerfs/Event_Horizon/AI/StDi_Models/embeddings:/data/models/embeddings"
#      - "/srv/mergerfs/Event_Horizon/AI/StDi_Models/Stable-diffusion:/data/models/Stable-diffusion"
      - "/srv/mergerfs/Event_Horizon/AI/StDi_Models/Lora:/data/models/Lora"
    stop_signal: SIGKILL
    tty: true
    deploy:
      resources:
        limits:
          memory: 6G
        reservations:
          devices:
            - driver: nvidia
              capabilities: [compute, utility]
              count: all
    restart: unless-stopped

name: webui-docker

services:
  download:
    build: ./services/download/
    profiles: ["download"]
    volumes:
      - *v1

  auto: &automatic
    <<: *base_service
    profiles: ["auto"]
    build: ./services/AUTOMATIC1111
    image: sd-auto:78
    environment:
      - CLI_ARGS=--allow-code --xformers --enable-insecure-extension-access --api

  auto-cpu:
    <<: *automatic
    profiles: ["auto-cpu"]
    deploy: {}
    environment:
      - CLI_ARGS=--no-half --precision full --allow-code --enable-insecure-extension-access --api

  comfy: &comfy
    <<: *base_service
    profiles: ["comfy"]
    build: ./services/comfy/
    image: sd-comfy:7
    environment:
      - CLI_ARGS=


  comfy-cpu:
    <<: *comfy
    profiles: ["comfy-cpu"]
    deploy: {}
    environment:
      - CLI_ARGS=--cpu
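One detail in the compose file stands out: the model directories are bind-mounted from a mergerfs pool under /srv/mergerfs. safetensors memory-maps model files when loading, and mergerfs only services mmap(2) when its page-cache mode (`cache.files`) is enabled, so a pool mounted without file caching can surface exactly this `OSError: No such device`. A hedged sketch of a mount line with caching enabled (the branch paths and the extra options are illustrative placeholders, not taken from this report):

```
# /etc/fstab (illustrative): cache.files=partial (or auto-full) lets
# mergerfs service mmap(2); without it, mmap fails with ENODEV.
/mnt/disk*  /srv/mergerfs/Event_Horizon  fuse.mergerfs  cache.files=partial,dropcacheonclose=true  0  0
```

Remounting the pool with caching enabled, or temporarily moving the model directories off the FUSE mount, would confirm whether mmap support is the trigger.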

Log file of the error while attempting to load a safetensors checkpoint:
webui-docker-auto-1-2024-12-13T07-50-01.log

Log file of the error when a safetensors LoRA is included in the prompt:
webui-docker-auto-1-2024-12-13T08-21-30.log

Labels: bug (Something isn't working)