Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this.)

  • YourNetworkIsHaunted@awful.systems · 2 days ago

    Brief overlapping thoughts between parenting and AI nonsense, presented without editing.

    The second L in LLM remains the inescapable heart of the problem. Even if you accept that the kind of “thinking” (modeling based on input and prediction of expected next input) that AI does is closely analogous to how people think, anyone who has had a kid should be able to understand the massive volume of information they take in.

    Compare the information density of English text with the available data on the world you get from sight, hearing, taste, smell, touch, proprioception, and however many other senses you want to include. Then consider that language is inherently an imperfect tool used to communicate our perceptions of reality, and doesn’t actually include data on reality itself. The human child is getting a fire hose of unfiltered reality, while the in-training LLM is getting a trickle of what the writers and labellers of their training data perceive and write about. But before we get to just feeding a live camera and audio feed, haptic sensors, chemical tests, and whatever else into a machine learning model and seeing if it spits out a person, consider how ambiguous and impractical labelling all that data would be. At the very least I imagine doing so would work out to be less efficient than raising an actual human being and training them in the desired tasks.

    Human children are also not immune to “hallucinations” in the form of spurious correlations. I would wager every toddler has at least a couple of attempts at cargo cult behavior or inexplicable fears as they try to reason out a way to interact with the world based on very little actual information about it. This feeds into both versions of the above problem: the difference between reality and lies about reality cannot be meaningfully discerned from text alone, and the limited amount of information being processed means any correction is inevitably going to be slower than explaining to a child that finding a “Happy Birthday” sticker doesn’t immediately make it their (or anyone else’s) birthday.

    Human children are able to get human parents to put up with their nonsense by taking advantage of being unbearably sweet and adorable. Maybe the abundance of horny chatbots and softcore porn generators is a warped funhouse-mirror version of the same concept. I will allow you to fill in the joke about Silicon Valley libertarians yourself.

    IDK. Felt thoughtful, might try to organize it on morewrite later.