Basically the title. I’m looking for a way to start generating an image, then use ControlNet halfway through the generation to detect the pose and stabilize it for the remaining steps. Is there any way to do this? Any help would be appreciated!
There’s a way to do this in Auto1111 (sort of; there’s also a scripted sketch of the same steps after this list):
- Generate an image with only part of your steps
- Enable the OpenPose ControlNet
- Add the partially generated pixel image as its input
- Set the ControlNet starting control step to the halfway point (0.5)
- Re-generate the image with the same settings (and the same seed)
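If a scripted version is easier to experiment with, here’s a rough sketch of those same steps using diffusers and controlnet_aux rather than Auto1111 itself. The model IDs, prompt, and step counts are just illustrative assumptions, and running the first pass at half the step count is only a stand-in for stopping a longer run halfway (it’s usually good enough for pose detection):

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionControlNetPipeline,
    ControlNetModel,
)
from controlnet_aux import OpenposeDetector

device = "cuda"
prompt = "a dancer mid-spin, studio lighting"  # hypothetical prompt
seed = 42

# 1) Rough pass: a quick low-step render stands in for "part of your steps".
base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
rough = base(
    prompt,
    num_inference_steps=10,
    generator=torch.Generator(device).manual_seed(seed),
).images[0]

# 2) Detect the pose in the rough image (the "enable OpenPose" step).
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = openpose(rough)

# 3) Re-generate with the same seed; the pose ControlNet only kicks in for
#    the second half of the schedule (control_guidance_start=0.5).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to(device)
final = pipe(
    prompt,
    image=pose_map,
    num_inference_steps=20,
    control_guidance_start=0.5,
    control_guidance_end=1.0,
    generator=torch.Generator(device).manual_seed(seed),
).images[0]
final.save("pose_stabilized.png")
```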
That manual workflow feels pretty janky, though. I think you could do it better (and in one shot) in ComfyUI: process the partially generated latent, feed that result to a ControlNet preprocessor node, then wire the ControlNet conditioning and the latent into a new KSampler node and finish generation from the original latent at whatever step you split off.
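For that "finish from the split point" idea, the closest diffusers approximation I know of is the ControlNet img2img pipeline: re-noise the rough image back to roughly the midpoint with strength=0.5 and denoise the rest under the detected pose. This continues from the sketch above (it reuses rough, pose_map, controlnet, prompt, seed, and device), and re-noising from pixels isn’t identical to resuming from the actual midpoint latent the way a ComfyUI graph would, so treat it as an approximation:

```python
from diffusers import StableDiffusionControlNetImg2ImgPipeline

img2img = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to(device)

# strength=0.5 re-noises the rough image to about the halfway point, then the
# remaining steps are denoised with the detected pose as ControlNet conditioning.
final = img2img(
    prompt,
    image=rough,
    control_image=pose_map,
    strength=0.5,
    num_inference_steps=20,
    generator=torch.Generator(device).manual_seed(seed),
).images[0]
final.save("pose_continued.png")
```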
Thanks for the suggestion! I will give ComfyUI a try.
Isn’t there a starting step/ending step parameter you can just use on ControlNet with Automatic1111? It uses a percentage of the generation, so just set the start to 0.5.
Yeah, that’s what I tried first, but it won’t work without an image or pose already loaded into the ControlNet tab.
Oooh, now I see the conundrum. You are basically trying to find a way to sort of bake OpenPose into the image generation process itself… yeah, I’m not sure how to do that other than what was suggested below or coding an extension for it.
Yeah, that’s exactly it.