Releases: Mystfit/Unreal-StableDiffusionTools
v0.12.0 - SDXL and Image Pipelines
SDXL
SDXL 1.0 support is now available. Pick either the `SDXL_1-0` or `SDXL_1-0_PromptOnly` pipeline preset to get started. The default image size is 768x768px, but SDXL will go up to 1024x1024px without requiring upscaling. No ControlNet support yet until the official weights are out.
Image Pipelines
The plugin now supports multi-stage image pipelines. This lets us generate images using one model and then chain the result into another model by passing either images or latents along for further processing. This change was to facilitate the addition of SDXL as a new base model type that has an associated refiner model that adds more detail to the final image and makes for more appealing outputs. To generate an image, you need to pick an image pipeline preset that will load some sensible defaults for different types of tasks.
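For reference, here's a minimal sketch of what a two-stage base-plus-refiner chain looks like when driven through the diffusers library directly; the plugin's presets wrap this kind of chain, so the exact stages and settings it runs may differ.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Stage 1: the SDXL base model.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Stage 2: the refiner, sharing the base model's VAE and second text encoder to save VRAM.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a castle on a cliff at golden hour"

# The base stage outputs latents instead of a decoded image...
latents = base(prompt=prompt, output_type="latent").images
# ...which are passed along to the refiner stage for extra detail.
image = refiner(prompt=prompt, image=latents).images[0]
image.save("castle.png")
```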
UI changes
This version introduces a number of sweeping UI changes that might be a bit confusing at first. Here are some of the noticeable differences:
- The `Generate` and `Upscale` buttons have moved from the sidebar to the top-right of the toolbar.
- Pipeline settings have moved to individual image pipeline stages.
- Layers have moved to individual image pipeline stages.
- Since all generations are now saved as textures by default, the image outputs section in the sidebar has been removed. The `texture save location` is now located in the `Generation settings` category along with some other settings that didn't have a home.
- The external image section has been removed. Instead, you can right-click an image in the history reel at the bottom of the UI to open a menu that will let you export the texture to an image. This uses the same code path as the asset browser's export feature, so it will open a destination file dialog automatically.
- Deprecated auto-upscale on generation. This might be added back later as an additional image pipeline stage.
- Generation options are now inline overridable, so you need to tick the checkbox next to a property before you can modify it.
UI Fixes
- Changing the image width/height generation properties will update the size of the canvas in the viewport.
- Viewport framing now respects both width and height.
- Alpha checkerboard finally working (in preparation for inpainting upgrade).
- Additional progress bar colours and messages.
- The first base model in an image pipeline will autofill the image width and height generation options.
Full Changelog: v0.11.1...v0.12.0
v0.11.1 - ControlNet strength and cache updates
- Huggingface models will now be cached in the model download path, which can be set from `Project Settings->Stable Diffusion Tools`.
- Added a strength property to all layer processor options. It can be used to set the `controlnet_conditioning_scale` argument for ControlNet passes, letting you tweak how strongly a ControlNet layer influences the generated image (see the sketch after this list).
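For those curious what the strength property controls under the hood, here's a minimal diffusers sketch; the checkpoint names and cache path are illustrative, not plugin defaults.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

cache_dir = "D:/SDModels"  # hypothetical model download path set in Project Settings

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, cache_dir=cache_dir
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    cache_dir=cache_dir,  # Huggingface downloads land here instead of the default cache
).to("cuda")

image = pipe(
    "a futuristic city street",
    image=Image.open("canny_layer.png"),  # the processed layer image
    controlnet_conditioning_scale=0.6,    # the layer strength: 0.0 ignores the layer, 1.0 is full influence
).images[0]
```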
Full Changelog: v0.11.0...v0.11.1
v0.11.0 - Textual Inversion, Scheduler overrides, and bundled python dependencies
- Added support for Textual Inversion models. Download them from the in-editor Civit.ai browser or import them with the `Convert models` page in the Model browser window (a loading sketch follows this list).
- Scheduler overrides can now be set in the plugin UI. The plugin UI will trigger a model load and then show all supported schedulers available for that model.
- Frozen python dependencies are now distributed alongside plugin releases. Due to Github release file size limits, the zip has been split into two parts using 7zip. Download both StableDiffusionTools-full-0.11.0.zip.001 and StableDiffusionTools-full-0.11.0.zip.002, right-click on the 001 file, choose `7zip->Extract files`, and pick your project (or engine) plugins directory. You can still use the regular plugin in case you want to run the normal dependency installer to download the python dependencies.
- OpenPose layer processor assets (and any BP layer processors in the future) are now loaded on editor startup, which fixes them not being selectable in the plugin UI.
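As a rough guide to what these two features map to in diffusers, here's a minimal sketch; the embedding path and token are hypothetical examples.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a Textual Inversion embedding and bind it to a prompt token.
pipe.load_textual_inversion("embeddings/my_style.pt", token="<my-style>")

# Override the scheduler while reusing the model's scheduler config.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a portrait in <my-style> style").images[0]
```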
Full Changelog: v0.10.0...v0.11.0
v0.10.0 - LoRA and in-editor model downloads
This release adds support for LoRA (Low-Rank Adaptation) models as well as in-editor browsers to help download LoRA models and Stable Diffusion checkpoint models from Civit.ai and Huggingface.co.
- Added a new Model Tools window containing model browsers and convertors.
- Added civit.ai web widget (look for the 'Download to Unreal' button on a model page).
- Added huggingface web widget (look for the 'Use in Unreal' button on a model page).
- Added a model conversion tool to convert checkpoints from .safetensors format into diffusers format (a rough conversion sketch follows this list).
- Added LoRA model support. Find LoRA models in the Civit.ai browser.
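Here's a rough sketch of the conversion and LoRA loading ideas using diffusers directly (this assumes a recent diffusers release with `from_single_file` support); the plugin's converter may take a different code path, and the file names are hypothetical.

```python
import torch
from diffusers import StableDiffusionPipeline

# Convert a single-file .safetensors checkpoint into the diffusers folder layout.
pipe = StableDiffusionPipeline.from_single_file(
    "checkpoints/my_model.safetensors", torch_dtype=torch.float16
)
pipe.save_pretrained("converted/my_model")

# Apply LoRA weights on top of the base model.
pipe.load_lora_weights("loras/my_character_lora.safetensors")

pipe = pipe.to("cuda")
image = pipe("a character portrait").images[0]
```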
v0.10.0 Beta
This is a big release so I'm going to keep it as a beta for the time being to try and shake out any bugs.
Here's a list of updates off the top of my head.
Dependencies
- Updated Diffusers to 0.17.1
- Updated other dependencies. Don't forget to update your dependencies with clean-install checked!
UI Updates
- Generation history panel that will fill up with previously generated images. You can click on them to reload them into the plugin window along with all of the generation options that were used to create the image.
- Hideable sidebar.
- Hideable and resizeable history panel.
- Viewport images can now be automatically positioned in the middle of the plugin viewport with the "frame all" button.
- Added a toolbar to hold new/future operation buttons. Some of the big plugin UI buttons might eventually move up there depending on feedback.
- You can now drop textures or generation result data assets into the plugin UI to load the image into the viewport along with the asset's generation properties.
- Added a texture source type which will use whatever is loaded into the viewport as an init image.
- Layers have been moved out of model presets and now live in the plugin UI. You can now mix and match controlnet layer processors without needing to create new model assets.
- The plugin UI will let you know if you don't have the right layers setup for the currently selected pipeline.
- Generated images will auto-save data assets by default. Images will live in "/Game/SDOutputs" unless you change the path in the Texture asset options.
DataAsset Updates
- Split the model data asset into separate model and pipeline data assets.
- Layer processors now contain all of the python init scripts required to initialize the pipeline they're loaded into.
- Added `StableDiffusionControlNetImg2ImgPipeline` (see the usage sketch after this list).
- LayerProcessor properties have been moved into a LayerProcessorOptions object. The plugin UI will automatically create this object in the UI if the Layer has exposed properties that can be edited.
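As a usage sketch, this is roughly what `StableDiffusionControlNetImg2ImgPipeline` does in diffusers: an init image guides the overall look while a ControlNet layer constrains structure. The file names here are illustrative.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a ruined temple overgrown with vines",
    image=Image.open("viewport_capture.png"),     # img2img init image
    control_image=Image.open("depth_layer.png"),  # ControlNet conditioning image
    strength=0.75,  # how far the result may drift from the init image
).images[0]
```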
Sequencer Updates
- Added new "Layer Processor" track type. This is the sequencer equivalent to adding multiple layer processors in the plugin UI. LayerProcessorOption object properties are exposed as sequencer parameters and are keyable.
Stability Updates
- Fixed some crashes relating to textures not being ready.
Full Changelog: v0.9.1...v0.10.0-Beta
v0.9.1 - Viewport capture fixes
A small bugfix for those who may be having issues with the viewport capture method only capturing black frames.
Full Changelog: v0.9.0...v0.9.1
v0.9.0 - Projection
- Added image plate projection method. You can now spawn a camera actor with an attached image plate containing a generated image at the location in the world that the image was generated from. Useful for creating animatics using the sequencer where you can add different camera cuts for each generated image.
- The dependencies button in the plugin UI will now open the dependencies window as a popup menu. The old floating dependencies window is still available in the Window menu.
- Removed the model init button. Models will now load when the generation button is clicked if the model preset has changed.
- Updated the progress bar at the bottom of the viewport to update more frequently and added visual states to represent when a model is downloading or loading.
- Camera texture projection has been marked as experimental but is included in this build. Generate an image, then project the generated image onto the world to bake the texture to per-instance textures. You can then move the camera and generate a new image, which will get baked into the parts of the first texture that weren't visible from the camera in the first pass. Using this method, you can build up a texture from multiple perspectives. This feature is still a work-in-progress.
Full Changelog: v0.9.0-alpha...v0.9.0-beta
v0.9.0-alpha
WARNING: The features in this release are not finished.
- Support for OpenPose characters for finer pose control. Drop a BP_MannequinOpenPose actor into your scene and use a control rig to pose it, then add it to an actor layer, pick the layer in the OpenPose layer processor and it will output a coloured OpenPose skeleton.
- Very WIP, but this alpha has a preview of a new texture projection and baking feature. The projection section in the plugin UI will let you start a bake session, which will take the last generated image and project it onto your level geometry. The resulting texture will be baked back to the model's UV space, so you won't need custom UV coordinates. A bake session will also let you generate new images and bake the same meshes from multiple angles, allowing you to fully cover an entire mesh. Generated textures will have fairly obvious seams at the moment, but this will be resolved by the time this feature goes live in 0.9.0.
v0.8.2 - Dependency updates
- IMPORTANT: If you are updating the plugin from a version prior to 0.8.2, I recommend checking both the `Clean install` and `Clear system packages` checkboxes in order to clean up older packages that will still be hanging around in your Unreal install's python site-packages folder.
- The dependency installer is now more robust and will allow you to clean-install packages to fix dependency issues.
- Python packages are now installed into `%LOCALAPPDATA%\UnrealEngine\5.1\Saved\StableDiffusionToolsPyEnv`. If you have installed other plugins which have installed python packages into `YOURUNREALFOLDER\Engine\Binaries\ThirdParty\Python3\Win64\lib\site-packages`, then you might need to re-install or update them.
- Torch has been upgraded to version 2.0, which removes the need for the xformers package as a dependency.
- Added the compel package to fix prompt weighting. Each prompt you specify has a weight applied to its individual prompt array entry, and you can add subweights within the text if you need more precise control (a brief compel sketch follows this list).
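For context, here's a minimal sketch of compel-based prompt weighting; the prompt and weights are illustrative, and the plugin may wire this up differently internally.

```python
import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# "++" upweights a term, "--" downweights it, and (term)1.3-style syntax
# sets an explicit subweight within the text.
prompt_embeds = compel("a stormy++ coastline, (fishing boats)0.8")
image = pipe(prompt_embeds=prompt_embeds).images[0]
```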
Full Changelog: v0.8.1...v0.8.2
v0.8.1 - Layers and ControlNet
- Added support for ControlNet Diffusers models. Two examples have been provided: a model using both a normal and a depth map, and a model using canny edge detection. Check out their respective model DataAssets as an example of how to implement other ControlNet models (a multi-ControlNet sketch follows this list).
- Added a new layer processing system for models to streamline the configuration of models that require different types of input images, and converted all existing model DataAssets to use the new system. You can now also provide additional arguments for a Diffusers pipeline in the form of additional Python code that will be run at model initialisation time.
- Revamped the generation section of the plugin UI to more closely match the rest of Unreal Editor's UI.
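Here's a minimal multi-ControlNet sketch mirroring the normal-plus-depth example model, using the public lllyasviel ControlNet checkpoints; the conditioning image files are illustrative.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# One ControlNet per conditioning layer, passed to the pipeline as a list.
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-normal", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

# Conditioning images are supplied in the same order as the ControlNets.
image = pipe(
    "a stone golem in a forest",
    image=[Image.open("normal_layer.png"), Image.open("depth_layer.png")],
).images[0]
```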
Known Issues:
- Prompt weighting is currently not working. The community pipeline that was used to implement prompt weights has not been updated to match the rest of the diffusers library in a while, but there is a package that provides similar functionality which will be implemented in the next update of the plugin.
- Layer previewing will not update in realtime if a layer is selected whilst another layer is currently being previewed. Hide the current layer first before switching.
- Layer previews will sometimes show a black image after generating an image. Try switching models or use the "Debug python images" option as a workaround.
Full Changelog: v0.7.2...v0.8.1