• Bogasse@lemmy.ml · 1 year ago

      I suppose the compression process looks like this:

      • call the model to predict the most probable next tokens (this is deterministic)
      • encode each next token by its rank in the model’s prediction

      If the model is good at predicting what the next token is, I suppose you’d need only 2 bits to encode each token (enough to index any of the top 4 predictions).
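      A minimal, runnable sketch of that loop, assuming a hypothetical toy stand-in for the model (a real setup would rank the vocabulary by an LLM’s predicted probabilities):

      ```python
      # Toy stand-in for an LLM: deterministically ranks the vocabulary given
      # the context. A real implementation would sort tokens by model probability.
      def predict_ranking(context):
          vocab = ["the", "cat", "sat", "on", "mat"]
          shift = len(context) % len(vocab)  # make the ranking context-dependent
          return vocab[shift:] + vocab[:shift]

      def compress(tokens):
          ranks, context = [], []
          for tok in tokens:
              ranks.append(predict_ranking(context).index(tok))  # token -> its rank
              context.append(tok)
          return ranks

      def decompress(ranks):
          tokens = []
          for r in ranks:
              tokens.append(predict_ranking(tokens)[r])  # same model, same ranking
          return tokens

      text = ["the", "cat", "sat", "on", "the", "mat"]
      assert decompress(compress(text)) == text  # lossless round trip
      ```

      Because both sides run the same deterministic model, the ranks alone reconstruct the text exactly; a good model makes most ranks small, which is what compresses well.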

      • bionicjoey@lemmy.ca · 1 year ago

        If you’re encoding the rankings as raw bits, how do you know when one ranking ends and the next begins? Zip compression solves this with a Huffman tree (a prefix code), where you know whether to keep reading by whether or not you’ve reached a leaf. But if there’s no reference data structure to tell you this, how do you know whether to read 4 bits ahead or 5?
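        For illustration, here is a minimal prefix-code decoder over a hand-built toy tree (hypothetical code words, not the actual DEFLATE tables). The tree itself marks where each code word ends, so no separators are needed:

        ```python
        # Leaves hold symbols; internal nodes are (left, right) pairs.
        # Code words: 0 -> "0", 1 -> "10", 2 -> "110", 3 -> "111"
        tree = (0, (1, (2, 3)))

        def decode(bits):
            symbols, node = [], tree
            for b in bits:
                node = node[int(b)]           # 0 = go left, 1 = go right
                if not isinstance(node, tuple):
                    symbols.append(node)      # hit a leaf: one symbol is complete
                    node = tree               # restart at the root
            return symbols

        print(decode("0100110111"))  # [0, 1, 0, 2, 3]
        ```

        A prefix code like this would also beat a flat 2 bits per token whenever rank 0 dominates, since the most frequent rank gets a 1-bit code word.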

    • 9point6@lemmy.world · 1 year ago

      Lossless, in compression terms, means being able to reconstruct the original bits of a piece of media exactly from its compressed bits.

      What I’m wondering is how reliable this is.
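      Concretely, that criterion is the round-trip property every lossless codec must satisfy (zlib here purely as a familiar stand-in):

      ```python
      import zlib

      # Lossless: decompressing the compressed bytes reproduces the input exactly.
      original = b"any sequence of bytes: text, images, whatever"
      assert zlib.decompress(zlib.compress(original)) == original
      ```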

    • Sethayy@sh.itjust.works · 1 year ago

      Depends on how you use it. If you just use it in place of finding repetition, it just means our current way isn’t the mathematically best and AI can find better lol.

      If you tried to “compress” a book into ChatGPT, though, yeah, it’d probably be pretty lossy.