Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.
None of them are intelligent, and all of them are geared toward predicting the next token.
All these models rely entirely on data and structure for inference and prediction. They appear intelligent, but they are not.
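To make concrete what "predicting the next token" means at its simplest, here is a toy bigram sketch. This is nothing like a real transformer (no neural network, no subword tokens, and the corpus is made up); it just shows the bare idea of picking the most likely next word from observed data:

```python
from collections import Counter, defaultdict

# Toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed successor of `word`.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, beating "mat" and "fish"
```

Real LLMs replace the count table with a learned distribution over tens of thousands of tokens, but the output step is still "pick a likely next token."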
How is good old-fashioned code that compares outputs against a database of factual knowledge "predicting the next token" to you? Or reinforcement learning and token rewards baked into the models?
I can tell you haven't actually worked with professional AI or read the research papers.
Yes, none of it is "intelligent," but I would counter that neither are human beings.