I hear people saying things like “ChatGPT is basically just fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.
Even if it’s predicting word by word within a bunch of constraints and structures inferred from the question / prompt, that’s still pretty interesting. Tbh, I’m more impressed by ChatGPT’s ability to appear to “understand” my prompts than by the quality of the output. Even though its writing is generally a mix of the bland, the obvious, and the inaccurate, it mostly provides a plausible response to whatever I’ve asked / said.
Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?
But that is all that’s going on. It has just been trained on so much text that the predictions “learn” the grammatical structure of language. Once you can form coherent sentences, you’re not that far from ChatGPT.
The remarkable thing is that prediction of the next word seems to be “sufficient” for ChatGPT’s level of “intelligence”. But it is not thinking or conscious, it is just data and statistics on steroids.
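To make “data and statistics on steroids” concrete, here’s a deliberately tiny sketch of next-word prediction using bigram counts. This is not how ChatGPT works internally (it uses a neural network over subword tokens, not a lookup table), but it shows the core idea of predicting the next word from statistics over training text; the toy corpus and function names are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on vastly more text than this.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often in the corpus
```

A real LLM replaces the count table with learned parameters and samples from a probability distribution rather than always taking the top word, but the training signal is the same: which word comes next.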
Try to use it to solve a difficult problem and it will become extremely obvious that it has no idea what it is talking about.
Yup. I used it to try to figure out why our Java code was getting “permission denied” on jar files while upgrading from RHEL 7 to 8, even though the files were owned by the user running the code and had 777 permissions.
It gave me some good places to check, but the actual answer was that RHEL 8 adds fapolicyd alongside SELinux (which I found myself in some tangentially related Stack Exchange post).
The magic sauce is context length within reasonable compute constraints. Phone predictive text has a context length of like 2-3 words; ChatGPT (and other LLMs) have figured out how to do predictions over thousands or tens of thousands of words of context at a time.
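Extending the count-based sketch above shows why context length matters: with one word of context the next word is often ambiguous, while a longer window pins it down. The corpus and names here are illustrative only, and real LLMs generalize with a neural network instead of exact-match lookup:

```python
from collections import Counter, defaultdict

corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer").split()

def build_model(tokens, context_len):
    """Map each window of `context_len` tokens to counts of what follows it."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - context_len):
        ctx = tuple(tokens[i:i + context_len])
        model[ctx][tokens[i + context_len]] += 1
    return model

short = build_model(corpus, 1)   # phone-keyboard style: 1 word of context
longer = build_model(corpus, 4)  # more context, less ambiguity

print(short[("to",)])                     # both "be" and "suffer" follow "to"
print(longer[("or", "not", "to", "be")])  # only "that" follows this window
```

The catch is that exact-match lookup stops working as the window grows (almost every long window is unique), which is why scaling context to thousands of words required neural models rather than bigger count tables.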
I think this explanation would be more satisfying if we had a better understanding of how the human brain produces intelligence.
I agree. We don’t actually know that the brain isn’t just doing the same thing as ChatGPT. It probably isn’t, but we don’t really know.