I hear people saying things like “ChatGPT is basically just fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.
Even if it’s predicting word by word within a bunch of constraints & structures inferred from the question / prompt, that’s still pretty interesting. Tbh, I’m more impressed by ChatGPT’s ability to appear to “understand” my prompts than I am by the quality of the output. Even though its writing is generally a mix of the bland, the obvious and the inaccurate, it mostly does provide a plausible response to whatever I’ve asked / said.
Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?
I found this example useful: this dude builds a simple LLM that writes infinite Shakespeare and walks you through each step.
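Not a substitute for the video, but to give a rough flavour of the kind of baby step those tutorials start with: a character-level bigram model that counts which character tends to follow which in some training text, then generates new text from those counts. (Toy Python sketch, my own illustration, nowhere near what the full model in the video does.)

```python
import random
from collections import defaultdict, Counter

# Toy character-level bigram model: count which character follows which
# in the training text, then generate new text from those counts.
# A real transformer conditions on long contexts, not just one character.
text = "to be or not to be that is the question "

counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def generate(start="t", length=40):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate())  # gibberish, but with the letter statistics of the training text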
ELI5: large language models like ChatGPT are really good at guessing which combinations of words are most likely to be a good response to a question. They’re so good at it that many people think they’re intelligent, even though they’re not.
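To make the “guessing the next word” idea concrete, here’s a toy sketch (my own illustration in Python, with a tiny hand-written probability table standing in for the billions of learned parameters a real model has): the model assigns probabilities to possible next words given the words so far, samples one, appends it, and repeats.

```python
import random

# Toy "language model": hand-written probabilities for the next word
# given the previous word. A real LLM learns these from huge amounts
# of text and conditions on the whole context, not just one word.
NEXT_WORD_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.5},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.6, "sat": 0.4},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
    "down": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def generate(prompt_word, max_words=10):
    words = [prompt_word]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if options is None:
            break
        # Sample the next word according to its probability --
        # the "guess the most likely continuation" idea.
        choices, weights = zip(*options.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

The surprising part is that doing this at enormous scale, with enough training text, is what produces responses that look like they “understand” the prompt.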
Thank you for sharing this. I just finished watching it.