I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • GoosLife@lemmy.world · 1 year ago

    If there’s something illegal in your dish, you throw it out. It’s not a question. I don’t care that you spent a lot of time and money on it. “I spent a lot of time preparing the circumstances leading to this crime” is not an excuse, and neither is “if I have to face consequences for committing this crime, I might lose money”.

    • Quokka@quokk.au · 1 year ago

      Fuck no.

      It’s illegal to be gay in many places, should we throw out any AI that isn’t homophobic as shit?

      • GoosLife@lemmy.world · 1 year ago

        No, especially because it’s not the same thing at all. You’re talking about the output; we’re talking about the input.

        The training data was illegally obtained. That’s all that matters here. They can train it on fart jokes or Trump propaganda; it doesn’t really matter, as long as the propaganda in question was legally obtained by whoever trained the model.

        Whether we should then allow chatbots to generate harmful content, and how we would regulate that by limiting acceptable training data, is a much more complex issue that can be discussed separately. To address your specific example: it would make the most sense for a chatbot to be guided towards a viewpoint that aligns with its intended userbase. This just means that certain chatbots might be more or less willing to discuss certain topics. In the same way that an AI for children probably shouldn’t be able to discuss certain topics, a chatbot made for use in a highly religious area, where homosexuality is very taboo, would most likely refuse to discuss gay marriage at all, rather than being made intentionally homophobic.

        • Quokka@quokk.au · 1 year ago

          The output only exists because of the input.

          If you fed your model only on “legal” content, that would in many places ensure it had no LGBT+-positive content.

          The legality of training data (given the dubious nature of many justice systems) is not the angle to go for.