This extension enables faster image generation without the need for expensive GPUs and integrates seamlessly with the AUTOMATIC1111 UI.
- No expensive GPU required; it even works on a CPU-only machine.
- No need to change your workflow: compatible with existing sd-webui usage and scripts such as X/Y/Z Plot, Prompt from file, etc. (see the sketch after this list).
- Support for 10,000+ checkpoint models, with no downloads required.
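For example, batch files written for the stock "Prompts from file or textbox" script can be reused unchanged with cloud inference. The snippet below is a minimal sketch of such a file; the prompt values are purely illustrative, and the exact per-line options accepted depend on your sd-webui version.

```bash
# Illustrative sketch: create a batch file for the "Prompts from file or textbox" script.
# Each line holds one job's options; prompts and values here are made up for demonstration.
cat > prompts.txt <<'EOF'
--prompt "a lighthouse at sunset, highly detailed" --steps 20 --cfg_scale 7
--prompt "a watercolor fox in a forest" --negative_prompt "blurry" --seed 42
EOF
```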
| Feature | Support | Limitations |
| --- | --- | --- |
| txt2img | ✅ | |
| txt2img_hires.fix | ✅ | |
| txt2img_sdxl_refiner | ✅ | |
| txt2img_controlnet | ✅ | |
| img2img | ✅ | |
| img2img_inpaint | ✅ | |
| img2img_sdxl_refiner | ✅ | |
| img2img_controlnet | ✅ | |
| extras upscale | ✅ | |
| vae model | ✅ | |
| scripts - X/Y/Z plot | ✅ | |
| scripts - Prompt matrix | ✅ | |
| scripts - Prompt from file | ✅ | |
2. Get your omniinfer.io key
   - Open omniinfer.io in a browser and sign in with either "Google Login" or "GitHub Login".
3. Go back to the "Cloud Inference" tab of stable-diffusion-webui and enter your key.
4. Go back to the "Txt2Img" tab of stable-diffusion-webui.
You can now give it a try and enjoy your creative journey. You are also welcome to discuss your experience, share suggestions, and provide feedback on our Discord channel.
You can also use the VAE feature together with the X/Y/Z Plot script.
The AUTOMATIC1111 webui loads a model on startup. On low-memory computers such as the MacBook Air, this makes performance suboptimal. To address this, we have developed a stripped-down, minimal-size model. You can use the following commands to enable it.

This reduces memory usage from 4.8 GB to 739 MB.
- Download the tiny model and its config into the `models/Stable-diffusion` directory:

```bash
wget -O ./models/Stable-diffusion/tiny.yaml https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.yaml
wget -O ./models/Stable-diffusion/tiny.safetensors https://github.com/omniinfer/sd-webui-cloud-inference/releases/download/tiny-model/tiny.safetensors
```
- Start the webui with the tiny model by adding the `--ckpt` flag to your launch command:

```bash
--ckpt=/stable-diffusion-webui/models/Stable-diffusion/tiny.safetensors
```
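For reference, a full launch command might look like the sketch below. It assumes a Linux/macOS install launched via `webui.sh` from the stable-diffusion-webui root and the tiny model downloaded as shown above; adjust paths to match your setup, or add the flag to `COMMANDLINE_ARGS` in `webui-user.sh` instead.

```bash
# Sketch of a launch command, assuming webui.sh is run from the
# stable-diffusion-webui root; adjust the checkpoint path as needed.
./webui.sh --ckpt ./models/Stable-diffusion/tiny.safetensors
```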