I got an additional 32 GB of RAM at a low, low cost from someone. What can I actually do with it?

  • some_guy@lemmy.sdf.org · 12 days ago

    The best thing about having a lot of RAM is that you can keep a ton of apps and windows open without closing anything or slowing down. I have an unreasonable number of browser windows and tabs open because that’s my equivalent of bookmarking something to come back and read later. It’s like being the type of person for whom stuff accumulates on flat surfaces because you set things down intending to deal with them later. My desk is similarly cluttered with books, bills, accessories, etc.

  • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 12 days ago

    700 Chrome tabs, a very bloated IDE, an Android emulator, a VM, another Android emulator, and a bunch of Node.js processes (plus their accompanying Chrome processes).

  • eyeon@lemmy.world · 12 days ago

    I used to have a batch file that created a RAM disk and mirrored my Diablo III install to it. The game took a bit longer to start up, but map load times were significantly shorter.

    I don’t know if any modern games would fit and still load often enough for it to matter… but you could. A rough sketch of the idea is below.
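
    Something like this Python sketch could recreate that batch file (assuming the ImDisk driver is installed on Windows; the paths, size, and executable name are hypothetical):

    ```python
    import subprocess

    # Hypothetical sketch of the RAM-disk trick. Assumes the ImDisk
    # driver is installed; all paths and sizes are example values.
    RAMDISK = "R:"
    SRC = r"C:\Games\Diablo III"      # hypothetical install location
    DST = RAMDISK + r"\Diablo III"

    # Create a 16 GB NTFS-formatted RAM disk mounted at R:
    subprocess.run(["imdisk", "-a", "-s", "16G", "-m", RAMDISK,
                    "-p", "/fs:ntfs /q /y"], check=True)

    # Mirror the game install onto the RAM disk. Note: robocopy uses
    # nonzero exit codes even on success, so no check=True here.
    subprocess.run(["robocopy", SRC, DST, "/MIR"])

    # Launch the game from the RAM disk copy.
    subprocess.run([DST + r"\Diablo III.exe"])
    ```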

  • Jesus_666@lemmy.world · 12 days ago

    Run a fairly large LLM on your CPU so you can get the finest of questionable problem solving at a speed fast enough to be workable but slow enough to be highly annoying.

    This has the added benefit of filling dozens of gigabytes of storage that you probably didn’t know what to do with anyway.
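
    If you want to try it, here’s a minimal sketch using llama-cpp-python; the model file, context size, and thread count are placeholder assumptions:

    ```python
    from llama_cpp import Llama  # pip install llama-cpp-python

    # Load a quantized GGUF model entirely into system RAM and run it
    # on the CPU. The model path and parameters are hypothetical.
    llm = Llama(
        model_path="models/mistral-7b-instruct.Q4_K_M.gguf",
        n_ctx=4096,    # context window
        n_threads=8,   # CPU threads; more RAM lets you load bigger models
    )

    out = llm("Q: What can I do with 32 GB of spare RAM? A:", max_tokens=128)
    print(out["choices"][0]["text"])
    ```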

  • zkfcfbzr@lemmy.world · 12 days ago

    I have 16 GB of RAM and recently tried running local LLMs. Turns out my RAM is a bigger limiting factor than my GPU.

    And, yeah, Docker’s always taking up 3-4 GB.

      • zkfcfbzr@lemmy.world · 12 days ago

        Fair, I didn’t realize that. My GPU is a GTX 1060 6 GB, so I won’t be running any significant LLMs on it. This PC is pretty old at this point.

        • fubbernuckin@lemmy.dbzer0.com · 12 days ago

          You could potentially run some smaller MoE models, since they don’t take up too much memory while running. I suspect the DeepSeek R1 8B distill with some quantization would work well.
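
          For instance, a minimal sketch with the ollama Python package (assumes an Ollama server is running locally; the model tag and response shape are assumptions based on current Ollama releases):

          ```python
          import ollama  # pip install ollama; assumes a local Ollama server

          # Query a quantized DeepSeek-R1 8B distill. The tag
          # "deepseek-r1:8b" is an assumption; Ollama serves these
          # distills 4-bit-quantized by default.
          resp = ollama.chat(
              model="deepseek-r1:8b",
              messages=[{"role": "user", "content": "Summarize what zram does."}],
          )
          print(resp["message"]["content"])
          ```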

          • zkfcfbzr@lemmy.world · 12 days ago

            I tried out the 8B DeepSeek distill and found it pretty underwhelming; the responses were at times borderline unrelated to the prompts. The smallest model that gave me respectable output was a 12B, which I was even able to run at a somewhat usable speed.

  • vividspecter@lemm.ee · 12 days ago

    • Compressed swap (zram); a minimal setup sketch follows this list

    • Compiling large C++ programs with many threads

    • Virtual machines

    • Video encoding

    • Many Firefox tabs

    • Games
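
    On the zram point, here’s an illustrative Python sketch of a zram-swap setup on Linux (run as root; the 16 GB size and zstd algorithm are example choices, not recommendations):

    ```python
    import subprocess

    # Illustrative zram-swap setup using the util-linux tools.
    subprocess.run(["modprobe", "zram"], check=True)

    # Allocate a 16 GB zram device with zstd compression; zramctl
    # prints the device node it created (e.g. /dev/zram0).
    dev = subprocess.run(
        ["zramctl", "--find", "--size", "16G", "--algorithm", "zstd"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    subprocess.run(["mkswap", dev], check=True)
    subprocess.run(["swapon", "--priority", "100", dev], check=True)
    ```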

    • daggermoon@lemmy.world (OP) · 12 days ago

      I actually did. I deleted it as soon as I realized it wouldn’t tell me about the Tiananmen Square Massacre.

      • Yerbouti@sh.itjust.works · 12 days ago

        But the local version is not supposed to be censored…? I’ve asked it questions about human rights in China and got a fully detailed answer, very critical of the government, something that I could not get on the web version. Are you sure you were running it locally?

  • spicy pancake@lemmy.zip · 12 days ago

    Folding@home!

    https://foldingathome.org/

    You can essentially donate your processing power to science projects that need it to run protein-folding simulations. I used to run it whenever I wasn’t actively using my PC. It does cost electricity and increases the rate of wear and tear on the device, as with any sustained high computational load. But it’s cool! :]