Hi,
I want to run some large language models locally, something like PrivateGPT or the setup described in a Medium article I saw, on my Apple Silicon machine, both to enhance my privacy and to get some additional help.
Does anyone have recommendations or guides I could follow?
Thank you very much.
The tl;dr as I understand it is that Mac M1/M2 devices are unique in that the VRAM (GPU memory) and the normal system RAM are one and the same (unified memory). That sharing allows LLMs to run on the GPU of those chips, with the model weights held in that shared "VRAM".
Llama.cpp is the software people use to do this. I can't find the original guide/article I looked at, but here is a GitHub gist where the commenters have posted benchmarks:
https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
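For what it's worth, here is a minimal sketch of how this can look from Python using the llama-cpp-python bindings on top of llama.cpp. The model path and prompt are just placeholders, and the exact options may differ depending on the version you install, so treat it as a starting point rather than a definitive setup:

```python
# Minimal sketch, assuming the llama-cpp-python package (pip install llama-cpp-python)
# built with the Metal backend, so the GPU works out of the shared/unified RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder: any GGUF model you download
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal on M1/M2)
    n_ctx=2048,       # context window size
)

output = llm(
    "Q: What is unified memory on Apple Silicon? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```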
Alright, interesting… As I said, I'm no expert or anything; this was just my noob opinion.
Thank you for the correction and further resources!