We could have AI models in a couple years that hold the entire internet in their context window.

  • Martineski@lemmy.fmhy.mlM

    We could have AI models in a couple years that hold the entire internet in their context window.

    That’s a really bold claim.

    • Behohippy@lemmy.world

      Also not sure how that would be helpful. If every prompt needs to rip through those tokens first, before predicting a response, it’ll be stupid slow. Even now with llama.cpp, it’s annoying when it pauses to do the context window shuffle thing.
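A quick back-of-envelope sketch of why that prefill cost bites. Both numbers here are made-up assumptions (a guessed prompt-processing speed and a rough token count for "the entire internet"), just to show the order of magnitude:

```python
# Back-of-envelope: time to prefill an internet-scale context.
# Both figures below are hypothetical assumptions, not benchmarks.

PREFILL_TOKENS_PER_SEC = 5_000   # assumed prompt-processing throughput
CONTEXT_TOKENS = 10**15          # rough guess at "the internet" in tokens

seconds = CONTEXT_TOKENS / PREFILL_TOKENS_PER_SEC
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years of prefill per prompt")
```

Even if the throughput guess is off by a few orders of magnitude, the wait per prompt stays absurd, which is the point the comment is making.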