Basically the title. I’m looking for a way to start generating an image, use ControlNet halfway through the generation to detect the pose, and stabilize it for the remaining steps. Is there any way to do this? Any help would be appreciated!
There’s a way to do this in Auto1111 (sort of):
This feels pretty janky, though. I think you could do it better (and in one shot) in ComfyUI: decode the partially generated latent, feed the decoded image to a ControlNet preprocessor node, then pass the resulting ControlNet conditioning together with the original latent into a second KSampler node and finish generation from whatever step you split off at.
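If you'd rather script it than build a node graph, here's a rough approximation of the same idea using the diffusers and controlnet_aux libraries: do a quick first pass without ControlNet, run OpenPose on the intermediate result, then redo roughly the second half of the denoising with ControlNet guidance via an img2img pass. The model names, step counts, and the 0.5 strength are just assumptions for illustration, not the exact split-at-a-step behaviour of the KSampler approach:

```python
# Rough sketch: approximate a "ControlNet from halfway" workflow with diffusers.
# Assumptions (not from the original post): SD 1.5, the OpenPose ControlNet,
# and img2img strength=0.5 standing in for "split off halfway through sampling".
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionControlNetImg2ImgPipeline,
    ControlNetModel,
)
from controlnet_aux import OpenposeDetector

device = "cuda"
prompt = "a dancer mid-leap, dramatic lighting"
seed = 42

# Pass 1: plain generation, no ControlNet yet (stand-in for the first half of sampling).
base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
first_pass = base(
    prompt,
    num_inference_steps=20,
    generator=torch.Generator(device).manual_seed(seed),
).images[0]

# Detect the pose that emerged in the first pass.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = openpose(first_pass)

# Pass 2: img2img with ControlNet, strength=0.5 so roughly the last half of the
# denoising is redone while the detected pose is held stable.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to(device)
result = pipe(
    prompt,
    image=first_pass,          # start from the partially finished image
    control_image=pose_map,    # stabilize the detected pose
    strength=0.5,              # roughly how much of the schedule to re-run
    num_inference_steps=20,
    generator=torch.Generator(device).manual_seed(seed),
).images[0]
result.save("pose_stabilized.png")
```

This is a two-pass approximation rather than a true mid-sampling split, so the ComfyUI route with the latent handed to a second KSampler is closer to what you described.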
Thanks for the suggestion! I will give ComfyUI a try.