In many situations, we want to apply different types of controls to different parts of the canvas. A good example is how the 3D openpose editor handles hands and feet: the main body skeleton is sent to ControlNet as an openpose image, while the hands and feet are sent to ControlNet as depth maps. The editor makes sure that the locations of the depth maps and the skeleton stay in sync with each other.
There are other potential use cases where multiple ControlNet units are enabled and the relative positions of the controlled areas need to stay in sync with each other. Currently, the only way to achieve this is to use a third-party tool like Photoshop: put all the processed maps ControlNet generated into it, then apply masks or adjust relative positions there.
Due to the complexity of the current workflow, this issue proposes a ControlNet feature to simplify the process.
The following is the proposed workflow:
The user can click a `Layer View` button to open a modal. The modal should embed an image editor, e.g. Photopea. See https://github.com/yankooliveira/sd-webui-photopea-embed for how Photopea can be embedded into A1111. (Maybe not a modal that takes up the whole screen, but a foldable panel placed side by side with the ControlNet UI.)
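For context, Photopea exposes a postMessage-based scripting API (documented at photopea.com/api), which is what sd-webui-photopea-embed builds on. Below is a minimal TypeScript sketch of that embedding, assuming a plain iframe and Photopea's documented convention of replying with `"done"` after a script finishes; the helper names are illustrative:

```ts
// Embed Photopea in an iframe inside a given container element.
function embedPhotopea(container: HTMLElement): HTMLIFrameElement {
  const frame = document.createElement("iframe");
  frame.src = "https://www.photopea.com"; // Photopea runs fully client-side
  frame.style.width = "100%";
  frame.style.height = "600px";
  container.appendChild(frame);
  return frame;
}

// Photopea executes any string posted to it as a script and replies with
// intermediate results, ending with the message "done".
// (In real code, wait for the iframe's "load" event before posting.)
function runPhotopeaScript(frame: HTMLIFrameElement, script: string): Promise<unknown[]> {
  return new Promise((resolve) => {
    const results: unknown[] = [];
    const onMessage = (e: MessageEvent) => {
      if (e.source !== frame.contentWindow) return; // ignore unrelated messages
      if (e.data === "done") {
        window.removeEventListener("message", onMessage);
        resolve(results);
      } else {
        results.push(e.data);
      }
    };
    window.addEventListener("message", onMessage);
    frame.contentWindow!.postMessage(script, "*");
  });
}

// Example: load an image as a new layer in the current Photopea document.
// runPhotopeaScript(frame, 'app.open("data:image/png;base64,...", null, true);');
```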
The user can upload an image to the editor, select the image layer, select a preprocessor type, and click the `preprocess` button in the modal. The preprocess button sends an API call to the `/detect` endpoint to get the processed map, then puts that processed map back into the image editor. (If the editor is placed side by side with the ControlNet UI, we might instead use ControlNet's own UI to generate the preprocessor maps and send them to the editor.)
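As a sketch of what the `preprocess` button would send: in sd-webui-controlnet the preprocessor route is exposed as POST `/controlnet/detect`, and the field names below follow that API, but both should be verified against the installed extension version:

```ts
// Assumed request/response shapes for the /controlnet/detect endpoint;
// verify against the extension version actually installed.
interface DetectRequest {
  controlnet_module: string;         // preprocessor name, e.g. "openpose" or "depth"
  controlnet_input_images: string[]; // base64-encoded input images
  controlnet_processor_res?: number; // preprocessor resolution
}

interface DetectResponse {
  images: string[]; // base64-encoded processed maps
  info: string;
}

// Run one preprocessor over one editor layer and return the processed map.
async function preprocessLayer(imageB64: string, module: string): Promise<string> {
  const body: DetectRequest = {
    controlnet_module: module,
    controlnet_input_images: [imageB64],
    controlnet_processor_res: 512,
  };
  const res = await fetch("/controlnet/detect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data: DetectResponse = await res.json();
  return data.images[0]; // put this back into the editor as a layer
}
```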
When the user is satisfied with the result, they can click `send to ControlNet` to send the maps back to the ControlNet units. Ideally, one layer will correspond to one ControlNet unit.
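A rough sketch of that layer-to-unit mapping; `setControlNetImage` is a hypothetical stand-in for however the webui would actually write an image into a unit's input component:

```ts
// One exported editor layer, flattened on the shared canvas so relative
// positions are preserved across units.
interface LayerExport {
  name: string;     // layer name in the editor
  imageB64: string; // base64-encoded PNG of the layer
}

// Hypothetical hook into the webui; the real implementation would set the
// image on the corresponding ControlNet unit's input component.
function setControlNetImage(unit: number, imageB64: string): void {
  console.log(`would set ControlNet unit ${unit} image (${imageB64.length} chars)`);
}

// Send each layer to one ControlNet unit, in order.
function sendToControlNet(layers: LayerExport[]): void {
  layers.forEach((layer, unitIndex) => setControlNetImage(unitIndex, layer.imageB64));
}
```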