LinuxFanatic@lemmy.world to LocalLLaMA@sh.itjust.works • New WizardCoder model posted and quantized by TheBloke! (English)
1 year ago

On TheBloke's Hugging Face repo, it says the GGML quants are not compatible with llama.cpp. Does anyone know why?
The point is that the lurkers subscribed there are going to get bored of Steam pictures and unsubscribe. If that happens to enough subs, a lot of the passive userbase will end up either spending less time on the site or leaving entirely. Since the vast majority of users are lurkers, that loss will outweigh the number of people creating these rebellion posts, and Reddit should see a net loss in traffic. At least, that's what I've gathered. Please don't shoot the messenger if I'm wrong or it's a stupid idea.