I got an extra 32 GB of RAM at a low, low cost from someone. What can I actually do with it?
Run a local LLM
The best thing about having a lot of RAM is that you can keep a ton of apps and windows open without closing anything or slowing down. I have an unreasonable number of browser windows and tabs open because that’s my equivalent of bookmarking something to come back and read later. It’s similar to being the kind of person for whom stuff accumulates on flat surfaces because you set things down intending to deal with them later. My desk is similarly cluttered with books, bills, accessories, etc.
700 Chrome tabs, a very bloated IDE, an Android emulator, a VM, another Android emulator, a bunch of Node.js processes (and their accompanying Chrome processes)
I used to have a batch file that created a RAM disk and mirrored my Diablo 3 install to it. The game took a bit longer to start up, but map load times were significantly shorter.
I don’t know whether any modern games would both fit and have enough load screens to make it worth caring about… but you could.
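For anyone curious, here’s roughly what that kind of batch file does, sketched in Python instead (a rough sketch, assuming Windows with the ImDisk driver installed and an elevated prompt; the drive letter, size, and game path are all made-up placeholders):

```python
import subprocess

RAMDISK = "R:"
GAME_DIR = r"C:\Games\Diablo III"  # hypothetical install location

# Create a 16 GB NTFS-formatted RAM disk mounted at R: (ImDisk syntax).
subprocess.run(
    ["imdisk", "-a", "-s", "16G", "-m", RAMDISK, "-p", "/fs:ntfs /q /y"],
    check=True,
)

# Mirror the game install onto the RAM disk. robocopy treats exit codes
# 0-7 as success, so don't use check=True here.
result = subprocess.run(["robocopy", GAME_DIR, rf"{RAMDISK}\Diablo III", "/MIR"])
if result.returncode >= 8:
    raise RuntimeError("robocopy failed")
```

After that you’d point the launcher at the copy on R: and let it load from RAM.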
I used it for virtual machines and Docker containers.
Run a fairly large LLM on your CPU so you can get the finest of questionable problem solving at a speed fast enough to be workable but slow enough to be highly annoying.
This has the added benefit of filling dozens of gigabytes of storage that you probably didn’t know what to do with anyway.
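If you want to try that, a minimal sketch with llama-cpp-python, which runs GGUF models entirely on the CPU by default (the model filename is a placeholder; pick any quantized GGUF that fits in your RAM):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./some-model-Q4_K_M.gguf",  # placeholder; any GGUF that fits
    n_ctx=4096,    # context window; larger contexts eat more RAM
    n_threads=8,   # roughly match your physical core count
)

out = llm("Q: What should I do with 32 GB of spare RAM? A:", max_tokens=64)
print(out["choices"][0]["text"])
```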
Sell it at a medium, medium cost to somebody who needs it
You can install it in a compatible computer.
Which I did
Excellent!
I have 16 GB of RAM and recently tried running LLMs locally. Turns out my RAM is a bigger limiting factor than my GPU.
And, yeah, Docker’s always taking up 3-4 GB.
You either use your CPU and RAM, or your GPU and VRAM.
Fair, I didn’t realize that. My GPU is a 1060 6 GB so I won’t be running any significant LLMs on it. This PC is pretty old at this point.
You could potentially run some smaller MoE models; since only a fraction of their weights are active per token, they stay usable on a CPU. I’d suspect the DeepSeek R1 8B distill with some quantization would work well.
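Back-of-the-envelope math on why a quantized 8B should fit (the bits-per-weight and overhead figures below are rough assumptions, not measurements):

```python
params = 8e9            # 8B parameters
bits_per_weight = 4.85  # approximate effective size of Q4_K_M quantization
overhead_gb = 1.5       # assumed headroom for KV cache and runtime

weights_gb = params * bits_per_weight / 8 / 1024**3
print(f"~{weights_gb:.1f} GiB weights, ~{weights_gb + overhead_gb:.1f} GiB total")
# -> ~4.5 GiB weights, ~6.0 GiB total: fine even on a 16 GB machine
```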
I tried out the 8B DeepSeek and found it pretty underwhelming - the responses were borderline unrelated to the prompts at times. The smallest model that gave me respectable output was the 12B, which I was even able to run at a somewhat usable speed.
Ah, that’s probably fair. I haven’t run many of the smaller models yet.
Open 10 extra tabs in Chrome
Compressed swap (zram)
Compiling large C++ programs with many threads
Virtual machines
Video encoding
Many Firefox tabs
Games
You could run a Java program, but you’d quickly run out of RAM.
Honestly, this is the answer and also the future of OSes.
Download DeepSeek’s 64B model.
I actually did. I deleted it as soon as I realized it wouldn’t tell me about the Tiananmen Square Massacre.
But the local version is not supposed to be censored…? I’ve asked it questions about human rights in China and got a fully detailed answer, very critical of the government, something that I could not get on the web version. Are you sure you were running it locally?
Folding@home!
You can essentially donate your processing power to various science projects that need it to run protein-folding simulations. I used to run it whenever I wasn’t actively using my PC. It does cost electricity and increases the rate of wear and tear on the device, as with any sustained high computational load. But it’s cool! :]
Thought this was obsolete as of like a year ago. Did they update it?
Seems like the last update was 23 Jan 2025.