There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.
What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.
You know that an LLM is a statistical word prediction thing, no? That LLMs “hallucinate”. That this is an inevitable consequence of how they work. They’re designed to take in a context and then sound human, or sound formal, or sound like an excellent programmer, or sound like a lawyer, but there’s no particular reason why the content that they present to you would be accurate. It’s just that their training data contains an awful lot of accurate information, which gives the output a surprisingly large amount of commonality with the truth.
You say that the current crop of LLMs are good at Wikipedia-style questions, but that’s because their authors have trained them on some of the most reliable and easy-to-verify information on the Web. A lot of that is Wikipedia-style stuff. That’s its core knowledge, what it grew up reading, the yardstick by which it was judged. And yet it still goes off on inaccurate tangents, because there’s nothing inherently accurate about statistically predicting the next word based on your training data and the context and content of the prompt.
Yes, LLMs sound like they understand your prompt and are very knowledgeable, but the output is fundamentally not a fact-based thing, it’s a synthesized thing, engineered to sound like its training data.
You do not query the LLM directly. The LLM just provides the baseline language understanding. You use the LLM to extract information out of websites and convert it into a machine readable format. You can do that with ChatGPT today:
Prompt: Extract important product information out of this text and format it as json: [copy and paste random Amazon.com website]

Answer: Here's the important product information extracted from the text and formatted as JSON:

{
  "Product Name": "kwmobile 8 Port Patch Panel - RJ45 Cat6 Shielded Network Splitter Panel with Ground Wire",
  "Price": {
    "Discounted Price": "$20.99",
    "Typical Price": "$22.99"
  },
  "Color": "Black",
  "Brand": "Kwmobile",
  "Connector Type": "RJ45",
  "Cable Type": "Ethernet",
  ...
}
That’s the power of LLMs. They aren’t a better Google, they are a way to interface with semantic information stored in human-readable text (or pictures or sound). And with that extracted information you can go and build a better Google, or just let the LLM browse the web and search for information relevant to you.
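If you want to do the same thing outside the chat window, here is a minimal sketch of what it might look like with the OpenAI Python SDK. The model name, the JSON-mode setting, and the idea of feeding it a saved copy of a product page are my own illustrative assumptions, not the only way to do it, and not an endorsement of any particular vendor.

# Sketch: the same "extract product info as JSON" idea, done
# programmatically instead of by pasting into ChatGPT.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def extract_product_info(page_text: str) -> dict:
    """Ask the LLM to turn raw product-page text into machine-readable JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        response_format={"type": "json_object"},  # ask for strict JSON output
        messages=[
            {"role": "system",
             "content": "Extract important product information from the text "
                        "the user provides and return it as a JSON object."},
            {"role": "user", "content": page_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Usage: feed it the copy-pasted text of a product page.
# info = extract_product_info(open("amazon_page.txt").read())
# print(info["Product Name"])

Whether the extracted fields are accurate is, of course, subject to the same caveats as everything else above: the model is synthesizing something that looks like the right answer, so you still have to check it.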