Which of the following sounds more reasonable?

  • I shouldn’t have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn’t have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that’s blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of technological development.

  • gaun@kbin.social
    1 year ago

    What is meant by the term “AI” has definitely shifted over time. What I would have considered an AI back then is nowadays referred to as “AGI”. So they simply changed the language. LLMs are not really capable of “intelligence”; they are just automated statistics. On the other hand, what really is intelligence? The output does appear intelligent. Maybe in the end it does not matter how it is generated.

    • NXTR@kbin.social
      1 year ago

      There’s a great Wikipedia article that talks about it. Basically, AI has always been used as a fluid term to describe forms of machine decision making. A lot of the time it’s used as a marketing term (except when it’s not, like during the AI Winter). I definitely think that a lot of the talk about regulation around “AI” is essentially trying to wall off advanced LLMs to the companies who can afford to go through the regulation paperwork, while making sure those pushing for regulation now stay ahead. However, I’m not so sure calling something AI vs. LLMs will make any difference when it comes to actual intellectual property litigation, given how the legal system operates.