I wonder if there’s any data available on how much content was generated from third-party apps. In my experience, Reddit was accessed solely through an app (RIP BaconReader). Am I wrong in thinking that the users are the product and the advertisers the customer?
While third-party app users probably made up a larger proportion of contributors, Reddit is big enough to still have plenty of content. The moderators are more interesting, and it remains to be seen over time whether an erosion of quality moderation happens, which would make Reddit even shittier. Especially since Reddit keeps fumbling when it comes to providing good first-party mod tools; see the whole r/Blind fiasco.
Am I wrong in thinking that the users are the product and the advertisers the customer?
As long as profitability is the goal, you are correct.
You’re not wrong, and this article really drove that home for me.
Especially how leadership didn’t care about moderators having the tools they need to do the job they volunteered for.
They don’t care about the quality of the site, just that people keep posting so they can package up all that sweet sweet data for advertisers.
Think about all the little niche communities. I’m sure Reddit can link your username to your real identity internally. Imagine the profiles they can build and sell.
They don’t even care if you stop posting; it’s all already there.
Am I wrong in thinking that the users are the product and advertisers the customer?
I think there was/is a monetization route through the use of user data (probably why they’re pushing their app so much), as well as using all that data for things like AI language modeling.
But on that last one, it seems like the biggest players such as OpenAI and Microsoft have already scraped the site freely. Not sure whether they’ve missed a big chunk of that opportunity by now.
I have developed the impression (mostly just a hunch, not evidence based) that spez and co. are kicking themselves for being oblivious to the AI training rush and failing to monetize it. They probably didn’t even realize it was happening until we all did, via the crazy headlines in the news about what AI could do. That kind of thing may lead to knee-jerk decisions on API access.
If Reddit leadership was oblivious, their heads were under a rock. Various GPT iterations have been trained on different subreddits and have been posting to places like r/SubredditSimulator for years, and this was reported on in the media well before ChatGPT came out. Here is one report on it from four years ago:
https://www.engadget.com/2019-06-05-subreddit-simulator-gpt-2-bots.html
According to the article, r/SubredditSimulator is 7 years old. The cat is way out of the bag here.