We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities, and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can’t “see” the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don’t always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may never have to incur.

Presented without comment.

  • jaschop@awful.systems
    4 months ago

    Doesn’t even mention the one use case I have a moderate amount of respect for, automatically generating image descriptions for blind people.

    And even those should always be labeled, since AI is categorically inferior to intentional communication.

    They seem focused on the use case “I don’t have the ability to communicate with intention, but I want to pretend I do.”

    • faercol@lemmy.blahaj.zone
      4 months ago

      AI and ML (and I’m not talking about LLMs, but about those techniques in general) have many actual uses, often when the need is “you have to make a decision quickly, and there’s a high tolerance for errors or imprecision”.

      Your example is a perfect one: it’s not as good as a human-generated caption, it can lack context, or be wrong. But it’s better than the alternative of having nothing.

      • swlabr@awful.systems
        4 months ago

        But it’s better than the alternative of having nothing.

        I’d take nothing over trillions of dollars dedicated to igniting the atmosphere for an incorrectly captioned video.

        • faercol@lemmy.blahaj.zone
          4 months ago

          Oh yeah, I’m not arguing with you on that. AI has become synonymous with LLMs, and with building the most generic models possible, which means siphoning (well, stealing actually) stupid amounts of data, and wasting a quantity of energy second only to cryptocurrencies.

          Simpler models specialized in one domain, by contrast, don’t cost as much and are more reliable. Hell, spam filters have been partially based on some ML for years.

          But all of that is irrelevant at the moment, because AI/ML is not being treated as one possible solution among other solutions not based on ML. Currently it’s something that must be pushed as much as possible because it’s a bubble that attracts investors, and I’m so looking forward to it bursting.