Top physicist says chatbots are just ‘glorified tape recorders’
Leading theoretical physicist Michio Kaku predicts quantum computers are far more important for solving mankind’s problems.
A physicist is not gonna know a lot more about language models than your average college grad.
That’s absolute nonsense. Physicists have to be excellent statisticians, and unlike data scientists, statisticians have to understand where the data is coming from, not just how to spit out simple summaries of enormously complex datasets as if those summaries had any meaning without context.
And his views are exactly in line with pretty much every expert who doesn’t have a financial stake in hyping the high-tech magic 8-ball. See “On the Dangers of Stochastic Parrots.”
I had that paper in mind when I said that. It doesn’t exhibit a very thorough understanding of how these models actually work.
A common argument is that the human brain may very well work the exact same way, ergo the common phrase, “I’m a stochastic parrot and so are you.”
Nope. Biologists also use statistical models and also know where the data is coming from etc etc. They are not experts in AI. This Michio Kaku guy is more like the African American Science Guy to me, more concerned with being a celeb.
Biologists are (often) excellent statisticians too, you’re correct. That’s why the most successful quants are biologists or physicists, despite not having trained in finance.
They’re not experts in (the badly misnamed) AI. They’re experts in the statistical models AI uses. They know an awful lot more than the likes of Sam Altman and the AI-hypers. Because they’re trained specialists, not techbro grifters.
I disagree; physics is the foundational science of all sciences. It is the science with the strongest emphasis on understanding math well enough to derive the equations that actually take form in the real world.
Therefore, if you know physics, you know everything.
Remember the time Michio Kaku went on CBS to talk about Hurricane Harvey despite not being an expert on hurricanes? He is a clout chaser who regularly steps outside his expertise to make hot takes for attention.
I also recommend this video that touches on Michio Kaku talking about things outside his expertise.
He’s not even a top physicist, just well known.
Yes. Glorified tape recorders that can provide assistance and instruction in certain domains, which is very useful and well beyond what a simple tape recorder could ever provide.
Well it’s like a super tape recorder that can play back anything anyone has ever said on the internet.
But altered almost imperceptibly such that it’s entirely incorrect.
I wouldn’t call this guy a top physicist… I mean, he can say what he wants, but you shouldn’t be listening to him. I also love that he immediately starts shilling his quantum computer book right after his statements about AI. And mind you, this guy has some real garbage takes when it comes to quantum computers. Here is a fun review if you are interested: https://scottaaronson.blog/?p=7321.
The bottom line is: you shouldn’t trust this guy on anything he says, except maybe string theory, which is actually his specialty. I wish that news outlets would stop asking this guy on; he is such a fucking grifter.
I don’t see how he has any time to be a “top physicist” when it seems he spends all his time as a commentator on TV shows that are tangentially related to his field. On top of that, LLMs are not even tangentially related.
Leading theoretical physicist Michio Kaku
I wouldn’t listen too closely to discount Neil deGrasse Tyson these days, especially in domains in which he has no qualifications whatsoever.
Just set your expectations right, and chatbots are great. They aren’t intelligent. They’re pretty dumb. But they can say stuff about a huge variety of domains.
Well, one could argue that our brain is a glorified tape recorder.
behold! a tape recorder.
holds up a plucked chicken
I understand that he’s placing these relative to quantum computing, and that he is specifically a scientist who is deeply invested in that realm, but it just seems too reductionist from a software perspective. Ultimately, yeah, we are indeed limited by the architecture of our physical computing paradigm, but that doesn’t discount the incredible advancements we’ve made in the space.
Maybe I’m being too hyperbolic over this small article, but does this basically mean any advancements in CS research are just glorified (insert elementary mechanical thing here) because they use bits and von Neumann architecture?
I used to adore Kaku when I was young, but as I got into academics, saw how attached he was to string theory long after its expiry date, and saw how popular he got off the back of pretty wild and speculative fiction, I struggle to take him too seriously in this realm.
In my experience, which comes from years in labs working on creative computation, AI, and NLP, these large language models are impressive and revolutionary, but quite frankly, for dumb reasons. The transformer was a great advancement, but seemingly only once we piled obscene amounts of data on it, amounts previously not even speculated about. Now we can train smaller bots off the data from these bigger ones (a toy sketch of that idea follows below), which is neat, but it’s still that mass of data.
To the general public: Yes, LLMs are overblown. To someone who spent years researching creativity-assistance AI and NLP: These are freaking awesome, and I’m amazed at the capabilities we have now in creating code that can do qualitative analysis and natural language interfacing, but the model is unsustainable unless techniques like Orca come along and shrink down the data requirements. That said, I’m running pretty competent language and image models on a relatively cheap consumer video card with 12GB, so we’re progressing fast.
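If “training smaller bots off the data from bigger ones” sounds vague, here is roughly the mechanic as a toy PyTorch sketch. The model shapes, the random token batch, and the single training step are all made up for illustration; real distillation setups (Orca included) operate at vastly larger scale, but the core move of pushing a small model’s output distribution toward a big model’s is the same.

```python
# Toy knowledge-distillation step: the "data" the small model learns from
# is the big model's outputs, not the original internet-scale corpus.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, ctx = 1000, 8
teacher = nn.Sequential(nn.Embedding(vocab, 512), nn.Flatten(1), nn.Linear(512 * ctx, vocab))
student = nn.Sequential(nn.Embedding(vocab, 64), nn.Flatten(1), nn.Linear(64 * ctx, vocab))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
tokens = torch.randint(0, vocab, (32, ctx))    # a toy batch of 8-token contexts

with torch.no_grad():
    teacher_logits = teacher(tokens)           # the "data from the bigger model"

student_logits = student(tokens)
# Soft-label loss: nudge the student's next-token distribution toward the teacher's.
loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                F.softmax(teacher_logits, dim=-1),
                reduction="batchmean")
loss.backward()
opt.step()
print(float(loss))
```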
He is trying to sell his book on quantum computers, which is probably why he brought it up in the first place.
Oh for sure. And it’s a great realm to research, but pretty dirty to rip apart another field to bolster your own. Then again, string theorist…
He’s a physicist. That doesn’t make him wise, especially in topics that he doesn’t study. This shouldn’t even be an article.
What’s a physicist going to know about machine learning and AI?
Is the logic really “he has a smart sounding job therefore he knows stuff about other smart sounding things”?
Idiotic headline written by slimy journalists to cater to knuckledraggers.
They could know quite a lot. ML is still a rather shallow field compared to the more established sciences; it’s arguably not even a proper science yet, perhaps closer to alchemy than chemistry. Max Tegmark is a cosmologist, and he has learned it well enough for his opinion to count; this guy, on the other hand, is famous for his bad takes and has apparently gotten a lot wrong about QC even though he wrote a whole book about it.
Kaku is a quack.
I call them glorified spreadsheets, but I see the analogy to recorders. LLMs, like most “AIs” before them, are just new ways to do line-of-best-fit analysis.
That’s fine. Glorify those spreadsheets. It’s a pretty major thing to have cracked.
It is. The tokenization and intent processing are the things that impress me most. I’ve been joking since the ’90s that the most impressive technological innovation shown on Star Trek TNG was computers that understand the intent of instructions. Now we have that… mostly.
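For anyone who hasn’t seen what tokenization actually looks like, here’s a tiny sketch. It assumes the third-party tiktoken package is installed, and the encoding name is just one common choice; the takeaway is that the model only ever sees integer IDs, never words.

```python
# Assumes `pip install tiktoken`; "cl100k_base" is just one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Computer, set a course for Starbase 12.")
print(ids)                                  # integer token IDs: all the model ever sees
print([enc.decode([i]) for i in ids])       # the text fragment each ID corresponds to
```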
To counter the grandiose claims that present-day LLMs are almost AGI, people go too far in the opposite direction. Dismissing it as being only “line of best fit analysis” fails to recognize the power, significance, and difficulty of extracting meaningful insights and capabilities from data.
Aside from the fact that many modern theories in human cognitive science are actually deeply related to statistical analysis and machine learning (such as embodied cognition, Bayesian predictive coding, and connectionism), referring to it as a “line” of best fit is disingenuous because it downplays the important fact that the relationships found in these data are not lines, but rather highly non-linear high-dimensional manifolds. The development of techniques to efficiently discover these relationships in giant datasets is genuinely a HUGE achievement in humanity’s mastery of the sciences, as they’ve allowed us to create programs for things it would be impossible to write out explicitly as a classical program. In particular, our current ability to create classifiers and generators for unstructured data like images would have been unimaginable a couple of decades ago, yet we’ve already begun to take it for granted. (A toy sketch of the linear-vs-non-linear point follows below.)
So while it’s important to temper expectations that we are a long way from ever seeing anything resembling AGI as it’s typically conceived of, oversimplifying all neural models as being “just” line fitting blinds you to the true power and generality that such a framework of manifold learning through optimization represents - as it relates to information theory, energy and entropy in the brain, engineering applications, and the nature of knowledge itself.
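To make the “not just a line” point concrete, here’s a toy sketch (in PyTorch, with arbitrary sizes and hyperparameters): a linear best fit cannot capture y = sin(3x) at all, while even a tiny non-linear network fits it closely. Real models do the same kind of thing over millions of dimensions instead of one.

```python
# A line of best fit vs. a tiny non-linear fit on the same one-dimensional data.
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.linspace(-3, 3, 256).unsqueeze(1)   # one input dimension
y = torch.sin(3 * x)                          # a non-linear target

linear = nn.Linear(1, 1)                                             # the "line of best fit"
mlp = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))   # a tiny non-linear model

for name, model in [("line", linear), ("tiny net", mlp)]:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(name, "final MSE:", float(loss))

# The line stays stuck around ~0.5 MSE (roughly the variance of y);
# the small network fits the curve closely.
```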
Ok, it’s a best fit line on an n-dimensional matrix querying a graphdb ;)
My only point is that this isn’t AGI, and too many people still fail to recognize that. Now people are becoming disillusioned with it because they’re realizing it isn’t actually creative. It’s still just a fancy comparison engine. That’s not to say it isn’t world-changing, but it’s also not Data from Star Trek.
Theoretical physicist, and a questionable one at that.
Thanks for the good article link.
More people need to learn about Racter. This is nothing new.