• 5 Posts
  • 328 Comments
Joined 2 years ago
Cake day: June 25th, 2023


  • here’s some interesting context on the class action:

    They wanted an expert who would state that 3D models aren’t worth anything because they are so easy to make. Evidently Shmeta and an ivy league school we will call “Schmarvard” had scraped data illegally from a certain company’s online library and used it to train their AI…

    this fucking bizarro “your work is worthless so no we won’t stop using it” routine is something I keep seeing from both the companies involved in generative AI and their defenders. earlier on it was the claim that human creativity didn’t exist or was exhausted sometime in the glorious past, which got Altman & Co called fascists and made it hard for them to pretend they don’t hate artists. now the idea is that somehow the existence of easy creative work means that creative work in general (whether easy or hard) has no value and can be freely stolen (by corporations only, it’s still a crime when we do it).

    not that we need it around here, but consider this a reminder to never use generative AI anywhere in your creative workflow. not only is it trained using stolen work, but making a generative AI element part of your work proves to these companies that your work was created “easily” (in spite of all proof to the contrary) and itself deserves to be stolen.


  • Like many here on awful.systems I have a pretty thick skin, but reading the above put me in a really weird mood all day.

    same here. the thing is, I think a lot of us are on awful.systems because we’ve seen far too much of how fascism operates and spreads online. this is an antifascist place; it’s so core to the mission that we don’t publish it as a policy (because a policy can be argued against and twisted and the fash kids love doing that), we just demonstrate it in a way that can’t be ignored. so seeing the first or second (I don’t keep track of these things) most popular social media platform publish a policy whose only purpose is to be used as a weapon against marginalized people, for it to be written in a matter-of-fact “this is just how it is” way, and for essentially nobody outside of the fediverse to push back on it in any real way — that is shocking.


  • guess again

    what the locals are probably taking issue with is:

    If you want a more precise model, you need to make it larger.

    this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that OpenAI makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because they’re technology and because you’re all-in on the grift