I was using stable diffusion a lot previously, but haven’t really touched it in the past several months. I was wondering what interfaces people are using these days?

Automatic1111 still seems to be popular, and that’s the one I’m most familiar with. I know there are some others now, like ComfyUI, and I guess InvokeAI is still going?

  • Flicsmo@rammy.site · 1 year ago

    Automatic1111’s Stable Diffusion WebUI is hard to give up, with how many features it has that are missing in other frontends. I use anapnoe’s fork for a slightly better UI. I would use vladmandic’s fork, but some of the changes have caused issues for my particular setup.

  • Swexti@lemmy.world · 1 year ago (edited)

    Is no one here running ComfyUI? It’s one of my favorite UIs since it’s completely node-based and extensible! It has everything auto1111 has and even more! EDIT: It’s not quite everything, but almost!

    • KiranWells@pawb.social · 1 year ago

      I’m also using ComfyUI. It can just do so much more than something like Automatic1111, even if it’s missing a couple of features. For example, I have several workflows that make incremental changes to an image, changing the prompt halfway through generation, or even upscaling partway through.

      I can’t really imagine going back, unless there is some killer feature that Comfy is missing.
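      The prompt-switch-mid-generation workflow described above can be sketched in plain Python. This is a toy illustration of the idea, not ComfyUI’s actual API; `denoise` is a hypothetical stand-in for a real diffusion sampler.

```python
# Toy sketch of "change the prompt halfway through generation".
# `denoise` is a hypothetical stand-in for a real sampler; node UIs
# like ComfyUI let you wire two sampler stages together like this.

def denoise(latent, prompt, steps):
    # Placeholder update: a real sampler would run `steps` denoising
    # iterations conditioned on `prompt`.
    for _ in range(steps):
        latent += len(prompt) % 3
    return latent

def two_stage_generate(total_steps, prompt_a, prompt_b, switch_at):
    latent = 0  # stand-in for the initial noise latent
    latent = denoise(latent, prompt_a, switch_at)                # first prompt
    latent = denoise(latent, prompt_b, total_steps - switch_at)  # second prompt
    return latent
```

      In a real node graph, each `denoise` call corresponds to a sampler node, with the second one starting from the first one’s output latent instead of fresh noise.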

  • radialmonster@kbin.social · 1 year ago

    I used to use InvokeAI. InvokeAI just released its version 3 beta; I’ll wait for a few more betas or an RC before using it again.
    These days I usually use Makeayo.

  • Dax87@forum.stellarcastle.net · 1 year ago (edited)

    The porn industry is doomed!

    Edit: well I think the title of this post changed or my comment ended up on the wrong post so my comment is irrelevant lol

            • 2dollarsim@lemmy.world · 1 year ago

              I would agree, but the rate of innovation in AI is so unpredictable that it could go either way.

              • pokexpert30@lemmy.pussthecat.org · 1 year ago

                I don’t really agree.

                Recent AI innovations are pretty modest; they mostly use the innovation of raw fucking power to achieve their goals.

                GPT-4 reportedly uses around 230B parameters, whereas running even a 7B LLM already takes about 16 GB of VRAM, and transformer attention scales as O(n²) in context length. I’ll let you do the maths.

                Stable Diffusion (latent diffusion, to be more precise) is about the same: the initial training took billions of teraflops, and while it was relatively cheap (~$100k), it still rides on modern GPU technology.
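                The parameter-to-VRAM arithmetic hinted at above works out roughly like this (assuming fp16 weights at 2 bytes per parameter, and ignoring activations and KV cache, which add more):

```python
# Rough VRAM needed just to hold model weights, assuming fp16
# (2 bytes per parameter); activations and KV cache cost extra.

def weight_vram_gb(params_billion, bytes_per_param=2):
    return params_billion * 1e9 * bytes_per_param / 1024**3

print(round(weight_vram_gb(7), 1))  # 7B model: ~13 GB of weights alone
print(round(weight_vram_gb(230)))   # 230B: hundreds of GB, far beyond one card
```

                That ~13 GB for weights alone is why a 16 GB card is about the floor for a 7B model at fp16, and why quantization (fewer bytes per parameter) is so popular.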

  • IntheTreetop@lemm.ee · 1 year ago

    I’m stuck with an AMD card for other purposes, so I pretty much have to use the DirectML fork. It’s okay, but it’s very slow, and despite having 12 GB of VRAM I still get out-of-memory errors all the time. Hopefully some progress will be made on those cards soon.

    But it is fun, that’s for sure.
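    For the out-of-memory errors above, mainline Automatic1111 has launch flags that trade speed for lower VRAM use; the DirectML fork is based on it and should accept the same core flags, but check the fork’s README to confirm:

```shell
# Memory-saving launch flags in AUTOMATIC1111's webui (the DirectML fork
# is based on it; confirm against the fork's README):
./webui.sh --medvram                   # offload parts of the model to save VRAM
# ./webui.sh --lowvram                 # more aggressive offloading, much slower
# ./webui.sh --opt-sub-quad-attention  # lower-memory attention option
```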