• Asifall@lemmy.world · 1 year ago

    The mentioned but unsupported link to “general intelligence” reeks of bullshit to me. I don’t doubt a modified LLM (maybe an unmodified one as well) can beat lossless compression algorithms, but I doubt that’s very useful or impressive when you account for the model size and speed.

    If you allow the model to be really huge in comparison to the input data it’s hard to prove you haven’t just memorized the training set.
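    The point about model size can be made concrete with a back-of-the-envelope check: a fair compression ratio counts everything you need to reconstruct the data from scratch, decompressor and model included. The figures below are made-up round numbers for illustration only, not from the article:

```python
# Why model size matters: a fair ratio counts model + payload, not
# just the payload. All numbers are hypothetical round figures.

original = 1_000_000_000      # 1 GB of input data
payload = 150_000_000         # hypothetical LLM-compressed output
model = 140_000_000_000       # ~70B params at 2 bytes each

naive_ratio = payload / original
fair_ratio = (payload + model) / original
print(naive_ratio)  # 0.15 -- looks like great compression
print(fair_ratio)   # the "compressed" artifact dwarfs the input
```

    On a single 1 GB file the model cost swamps everything; it only amortizes if the same model is reused across a huge corpus, which is exactly why memorizing the training set would be cheating.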

    • Bogasse@lemmy.ml · 1 year ago (edited)

      I suppose the compression process looks like this:

      • call the model to predict the most probable next tokens (this is deterministic)
      • encode each actual next token by its ranking in the model’s prediction

      If the model is good at predicting the next token, I suppose you need only 2 bits to encode each token (enough to cover any of the top 4 predictions).
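The rank-encoding idea above can be sketched end to end. A real system would use an LLM's ranked next-token distribution; the character-bigram predictor here is just a toy stand-in to show that any deterministic predictor shared by encoder and decoder makes the scheme lossless:

```python
# Toy rank-based compression: encode each symbol by its rank in a
# deterministic predictor's ordering. The bigram frequency model below
# stands in for an LLM's next-token ranking.
from collections import Counter, defaultdict

def train(text):
    # Count which characters follow each character.
    follows = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1
    return follows

def ranking(follows, prev, alphabet):
    # Most probable successors first; unseen symbols in fixed order after.
    seen = [c for c, _ in follows[prev].most_common()]
    return seen + [c for c in sorted(alphabet) if c not in seen]

def encode(text, follows, alphabet):
    ranks = []
    for prev, cur in zip(text, text[1:]):
        ranks.append(ranking(follows, prev, alphabet).index(cur))
    return text[0], ranks  # first symbol is sent verbatim

def decode(first, ranks, follows, alphabet):
    out = [first]
    for r in ranks:
        out.append(ranking(follows, out[-1], alphabet)[r])
    return "".join(out)

text = "abracadabra abracadabra"
model = train(text)
alphabet = set(text)
first, ranks = encode(text, model, alphabet)
assert decode(first, ranks, model, alphabet) == text
# With a good predictor most ranks are 0 or 1, so they compress well.
print(ranks)
```

The better the predictor, the more the rank stream is dominated by small numbers, which a standard entropy coder can then squeeze down.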

      • bionicjoey@lemmy.ca · 1 year ago

        If you’re encoding the rankings as raw bits, how do you know when one ranking ends and the next begins? Zip compression solves this with a Huffman prefix tree, where you know whether to keep reading by whether or not you’ve reached a leaf. But if there’s no reference data structure to tell you this, how do you know whether to read 4 bits ahead or 5?
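The standard answer to this delimiting problem is a prefix-free code: no codeword is a prefix of another, so the reader always knows where a value ends without any separator. A minimal sketch using Elias gamma codes (my choice for illustration; the article doesn't specify a particular code):

```python
# Elias gamma code: a prefix-free encoding of positive integers, so a
# concatenated stream of variable-length codewords needs no separators.
# A rank r >= 0 is stored as the integer r + 1.

def gamma_encode(n):
    # n >= 1: unary run of zeros gives the length, then the binary value.
    b = bin(n)[2:]                  # binary without the '0b' prefix
    return "0" * (len(b) - 1) + b

def gamma_decode(bits, i=0):
    # Count leading zeros to learn the codeword length, then read it.
    zeros = 0
    while bits[i] == "0":
        zeros += 1
        i += 1
    n = int(bits[i:i + zeros + 1], 2)
    return n, i + zeros + 1         # decoded value and next read position

ranks = [0, 0, 3, 1, 0, 7]
stream = "".join(gamma_encode(r + 1) for r in ranks)
decoded, i = [], 0
while i < len(stream):
    n, i = gamma_decode(stream, i)
    decoded.append(n - 1)
assert decoded == ranks
```

Small ranks get short codewords (rank 0 is a single bit), so this pairs naturally with a predictor whose output is mostly rank 0 or 1; in practice an arithmetic coder would do even better.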

    • 9point6@lemmy.world · 1 year ago

      Lossless in terms of compression is being able to reconstruct the original bits of a piece of media exactly from its compressed bits.

      The thing that I’m wondering is how reliable this is.

    • Sethayy@sh.itjust.works · 1 year ago

      Depends on how you use it. If you just use it in place of finding repetition, it just means our current way ain’t the mathematically best and AI can find better lol.

      If you tried to “compress” a book into ChatGPT itself though, yeah, it’d probably be pretty lossy.

  • AutoTL;DR@lemmings.world [bot] · 1 year ago

    This is the best summary I could come up with:
    When an algorithm or model can accurately guess the next piece of data in a sequence, it shows it’s good at spotting these patterns.

    The study’s results suggest that even though Chinchilla 70B was mainly trained to deal with text, it’s surprisingly effective at compressing other types of data as well, often better than algorithms specifically designed for those tasks.

    This opens the door for thinking about machine learning models as not just tools for text prediction and writing but also as effective ways to shrink the size of various types of data.

    Over the past two decades, some computer scientists have proposed that the ability to compress data effectively is akin to a form of general intelligence.

    The idea is rooted in the notion that understanding the world often involves identifying patterns and making sense of complexity, which, as mentioned above, is similar to what good data compression does.

    The relationship between compression and intelligence is a matter of ongoing debate and research, so we’ll likely see more papers on the topic emerge soon.
    The original article contains 709 words, the summary contains 175 words. Saved 75%. I’m a bot and I’m open source!

  • droidpenguin@lemmy.world · 1 year ago

    Hmm… Wonder how an AI prediction of what someone’s photo should look like will compare to how they actually look. Guess it’s not that different from the automatic filters phones use to make everyone look better.