  • The argument that they can’t learn doesn’t make sense, because models have definitely become better.

    They only improve when they’re retrained on new data or their internal structure is changed. That’s an offline process: they don’t learn from the chat sessions we have with them (open a new session and the model has forgotten everything you told it in the previous one), and they can’t learn through any kind of self-directed research the way a human can.
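    A toy sketch of why that is (this is not any real chat API, just an illustration): at inference time the model’s weights are frozen, so the only “memory” is the message history the client resends with every request. Start a fresh session with an empty history and the information is simply gone.

    ```python
    # Toy illustration of a stateless chat model. The "model" here is a
    # stand-in function whose weights never change; everything it "knows"
    # about you must arrive in the history argument on every single call.

    def frozen_model_reply(history: list[str], prompt: str) -> str:
        """Stand-in for a frozen LLM: it can only use what's in `history`."""
        if any("my name is Alice" in msg for msg in history):
            return "Hello again, Alice!"
        return "I don't know who you are."

    # Session 1: the fact lives in the session's history, not in the model.
    session1 = ["my name is Alice"]
    print(frozen_model_reply(session1, "Who am I?"))  # Hello again, Alice!

    # Session 2: fresh history, same frozen model -- the "learning" is gone.
    session2 = []
    print(frozen_model_reply(session2, "Who am I?"))  # I don't know who you are.
    ```

    Real chat services work the same way under the hood: the client keeps appending to a messages list and resends the whole thing, which is why it *feels* like the model remembers within one session.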

    All of the shortcomings you’ve listed, humans are guilty of too.

    LLMs are sophisticated word generators. They don’t think or understand in any way, full stop. That’s the most important thing to grasp about them.
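    To make “word generator” concrete, here’s a deliberately tiny sketch: a bigram next-word predictor. Real LLMs are vastly more sophisticated (neural networks over tokens, not word-count tables), but the core operation has the same shape: given the words so far, emit a statistically likely next word, with no comprehension involved anywhere.

    ```python
    # Toy "word generator": counts which word tends to follow which in a
    # tiny corpus, then greedily chains the most frequent next word.
    # No meaning, no understanding -- just statistics over word sequences.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat sat on the fish".split()

    # Bigram table: following[w] counts the words seen right after w.
    following: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(word: str, n: int) -> str:
        """Chain the single most frequent next word, n times."""
        out = [word]
        for _ in range(n):
            if word not in following:
                break  # dead end: this word never appeared mid-corpus
            word = following[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(generate("the", 3))  # the cat sat on
    ```

    The output looks vaguely sentence-like purely because the statistics of the corpus make it so, which is the point of the analogy.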