This is pretty amusing to see. Nothing really related to Linux / Steam Deck gaming, but more of a state-of-the-industry post that I thought you might also find fun. Redditors managed to trick an AI-powered news scraper.
It’s the humanless present. The AIs will get better in the future, presumably learning the thing that human journalists have known for centuries: verify your sources.
Could a language model actually independently discern if a source is trustworthy? Seems that’s something difficult to determine when it comes to possible leaks. The kinds of AIs that we have today can’t really conceptualize a world outside the texts they process, they can only check based on other texts and user input.
I mean, ChatGPT, with its knowledge cutoff and no internet connection, figured it out. See my comment below, I asked it and posted its response.
The guys who run that news website just didn’t include any checks in their algorithm. It doesn’t seem like an LLM problem at this point. A properly set up AutoGPT with the ability to look stuff up online would have no problem sorting through and fact-checking posts to decide which ones to use for an article.
It would need to be told to do so, of course. I can think of a couple of approaches. You could have it use a database to track the identities of information sources, so the AI would know whether it was coming from new or well-established sources. It could check to see if the news is appearing in other sources. A lot of this isn’t strictly large-language-model-based capability, but it would be using LLMs to interpret its inputs.
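A minimal sketch of those two approaches, assuming a toy SQLite table as the source-tracking database; the `corroborating_hits` count is a stand-in for a real web search for the same story elsewhere:

```python
import sqlite3

def reliability_score(db, source: str) -> float:
    """Look up how often this source's past claims were corroborated."""
    row = db.execute(
        "SELECT corroborated, total FROM sources WHERE name = ?", (source,)
    ).fetchone()
    if row is None:
        return 0.0  # never seen before: treat as untrusted
    corroborated, total = row
    return corroborated / total if total else 0.0

def should_publish(db, source: str, corroborating_hits: int) -> bool:
    """Publish only if the source has a track record, or the claim
    shows up independently in other places."""
    return reliability_score(db, source) >= 0.5 or corroborating_hits >= 2

# toy database: one well-established source, everything else unknown
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sources (name TEXT PRIMARY KEY, corroborated INT, total INT)")
db.execute("INSERT INTO sources VALUES ('established_site', 40, 50)")

print(should_publish(db, "established_site", 0))  # True: good track record
print(should_publish(db, "new_account", 0))       # False: unknown, uncorroborated
```

The LLM’s job would just be interpreting the scraped posts; the trust decision itself is ordinary bookkeeping like this.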
Analysis of social media through the lens of tracking source reliability would be damned useful even without AI, and if it could easily be done I think it already would be. I’ve thought about this for about five years, thinking we could track bots and disinformation based on the patterns of who promotes/upvotes it, but it’s beyond my meager means.
I think certain places (Reddit?) have been using algorithms to find and stamp out bots/vote manipulation for quite a while. I remember at least one major wave of bans for smurfed accounts participating in manipulation.
Human journalists already do this, though. All I’m suggesting is that these automated journalists should do likewise. That clearly wasn’t the case in this particular instance.
Beep bop, I’m [citation needed] bot, a large language model. The information you referenced in your post can neither be found in official Blizzard material including release notes, nor in community wikis. Have a nice day!
Exactly. An AI journalist could easily have a rule that tells it to go web-searching for other sources when something new like this pops up. If the WoW subreddit is going on about Glorbo rumors, check the other WoW fora to see if the same thing is being talked about there. Perhaps it would have found a post where someone talked about what those silly Redditors are up to with their fake Glorbo antics, or at the very least there would have been a suspicious silence.
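That rule could be as simple as refusing to trust a single community’s buzz until it shows up somewhere independent. A sketch, with hypothetical forum data standing in for real web searches:

```python
def corroborated(claim: str, forums: dict[str, list[str]], origin: str) -> bool:
    """A claim from one forum counts only if at least one *other*
    forum is also talking about it."""
    return any(
        any(claim.lower() in post.lower() for post in posts)
        for name, posts in forums.items()
        if name != origin
    )

# toy data: Glorbo hype exists only on the subreddit that invented it
forums = {
    "r/wow": ["So excited for Glorbo!", "Glorbo changes everything"],
    "other-wow-forum": ["Patch notes discussion", "Raid tuning thread"],
    "fan-site-comments": ["Mount farming tips"],
}

print(corroborated("Glorbo", forums, origin="r/wow"))  # False: suspicious silence
print(corroborated("raid", forums, origin="r/wow"))    # True: mentioned elsewhere
```

Real substring matching would obviously be too crude for production, but the “suspicious silence” signal falls out of even this naive version.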
Right now AI journalists are only being used to replace the absolute bottom-of-the-barrel human journalism, because that’s really cheap and easy, and since bottom-of-the-barrel journalism doesn’t earn much revenue, finding a cheaper way to churn it out is useful. So AI journalism is getting a bad reputation. I hope that when more refined AI journalists start putting out higher-quality material, that reputation won’t stick too badly. I’ve seen whole disciplines of technology get tarred with stereotypes that prevent them from being used in applications where they would be a genuine boon.
> The kinds of AIs that we have today can’t really conceptualize a world outside the texts they process
The LLMs we have today process “tokens”, which can represent anything. That they happen to look “more intelligent” to humans when used as “text goes in, text comes out” is a purely human bias, not a limitation of the AI.
Don’t be mistaken: LLMs can process, conceptualize, and output anything that can be represented with a token, including the initial, intermediate, or final states of other AIs. That’s how multimodal AIs with plugins work right now.
Using text (with or without emojis) as an input/output system is just a way to interact with humans, with other AIs designed to input/output text, and to feed back on (reflect on) themselves.
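As a toy illustration of that point (not how any real tokenizer is built), any discrete symbols, whether text fragments, emoji, or a made-up label for another model’s state, can be mapped to and from integer tokens:

```python
# toy vocabulary mixing text, an emoji, and invented non-text symbols
vocab = ["Hello", " world", "🙂", "<vision:cat>", "<plugin:search>"]
encode = {sym: i for i, sym in enumerate(vocab)}
decode = {i: sym for i, sym in enumerate(vocab)}

def to_tokens(symbols):
    """Map a sequence of symbols to integer token ids."""
    return [encode[s] for s in symbols]

def from_tokens(ids):
    """Map token ids back to their symbols."""
    return [decode[i] for i in ids]

seq = ["Hello", " world", "🙂", "<vision:cat>"]
ids = to_tokens(seq)
print(ids)                      # [0, 1, 2, 3]
print(from_tokens(ids) == seq)  # True: round-trips losslessly
```

The model itself only ever sees the integers; what they stand for is entirely up to whoever built the vocabulary.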
For now, though, this was a fun gag.