• 520@kbin.social · +8 / -32 · edited · 1 year ago

    They are not talking about the training process

    They literally say they do this “to combat the racial bias in its training data”

    To combat the racial bias in the training data, they insert words into the prompt, for example “racially ambiguous”.

    And like I said, this makes no fucking sense.

    If your training process, specifically your training data, has biases, inserting keywords into the prompt does not fix that issue. It literally does nothing to actually combat it. It might hide the issue if the model has had sufficient training to do the job with the inserted keywords, but that is not a fix, nor is it combating the issue. It is a cheap hack that does not address the underlying training issues.
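
    Roughly, the hack looks something like this (a made-up illustration of prompt-side keyword injection, not their actual code; the keyword list and function names are invented):

    ```python
    # Hypothetical sketch of the "cheap hack": patching bias at the prompt,
    # not in the training data. Terms and names are invented for illustration.
    import random

    DIVERSITY_TERMS = ["racially ambiguous", "diverse", "of varied ethnicities"]

    def inject_diversity_keywords(prompt: str) -> str:
        """Append a randomly chosen diversity keyword to the user's prompt
        before it is sent to the image model. The model and its biased
        training data are left untouched."""
        return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"

    print(inject_diversity_keywords("Homer Simpson eating a donut"))
    # e.g. "Homer Simpson eating a donut, racially ambiguous"
    ```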

    • Primarily0617@kbin.social · +47 / -1 · edited · 1 year ago

      > but that is not a fix

      congratulations you stumbled upon the reason this is a bad idea all by yourself

      all it took was a bit of actually-reading-the-original-post

        • Primarily0617@kbin.social · +42 / -1 · edited · 1 year ago

          the point of the original post is that artificially fixing a bias in training data post-training is a bad idea because it ends up in weird scenarios like this one

          your comment is saying that the original post is dumb and betrays a lack of knowledge because artificially fixing a bias in training data post-training would obviously only result in weird scenarios like this one

          i don’t know what your aim is here

        • Gamma@beehaw.org · English · +15 · edited · 1 year ago

          You started your initial rant based on a misunderstanding of what was actually said. Stumbling into the correct answer != knowing what you’re reacting to

    • lars@programming.dev · +27 · 1 year ago

      Yes. The training data has a bias, and they are using a cheap hack (prompt manipulation) to try to patch it.

    • phx@lemmy.ca · +11 · edited · 1 year ago

      Any training data almost certainly has biases. For a while, if you asked for pictures of people eating waffles or fried chicken, they’d very likely be black.

      Most of the pictures I tried of kid-type characters were blue-eyed.

      Then people review the output and say “hey, this might still be racist,” so they tweak things to “diversify” the output. This is likely the result of that, where they’ve “fixed” one “problem” and created another.

      Behold, Homer in brownface. D’oh!