I got cancelled too and chose Hetzner instead. Will not do business with a company that can’t get their filters working decently.
Lovely! I’ll go read the code as soon as I have some coffee.
I do SDXL generation in 4 GB of VRAM, at an extreme expense of speed, by using a number of memory optimizations.
I’ve done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push VRAM use.
SDXL takes around 3 to 4 minutes per generation, including the refiner, but it works within the constraints.
The graphics cards used are hilariously bad for the task: a 1050 Ti with 4 GB and a 1060 with 3 GB of VRAM.
I have an implementation running on the 3 GB card, inside a podman container, with no RAM offloading, 1 vCPU, and 4 GB of RAM.
The graphical UI (Streamlit) runs on a laptop outside the server to save resources.
I’m working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That is the reason I’ve looked into this news; SSD-1B might be a good candidate for my dumb experiments.
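For anyone curious, this is roughly the flavour of it, as a minimal sketch using the Hugging Face diffusers library. The model id, resolution, and prompt are placeholders, and fitting into 3 to 4 GB takes more aggressive tricks than the ones shown here:

```python
# Minimal sketch of the general approach: fp16 weights plus sliced/tiled
# attention and VAE, via the diffusers library. Model id and settings are
# placeholders, not my actual setup.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,   # fp16 halves the weight footprint
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

# Trade speed for memory: compute attention and VAE decode in chunks
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

image = pipe(
    "a lighthouse on a rocky coast, golden hour",
    num_inference_steps=30,
    height=768,
    width=768,
).images[0]
image.save("out.png")
```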
Oh my Gwyn, this comment section is just amazing.
Goddammit! Don’t give that one away, I use it to impress random people at parties.
Not joking, although I understand it seems very silly at face value.
Dark Souls 3 PvP, specifically SL60+6 at gank town (after Pontiff).
It used to be my go-to wind down after a work day.
It made me smile and actually relaxed me enough to go to bed and sleep, especially after a hard day.
HateLLM will be a smash. /s
That’s wonderful to know! Thank you again.
I’ll follow your instructions, this implementation is exactly what I was looking for.
Absolutely stellar write up. Thank you!
I have a couple of questions.
Imagine I have a powerful consumer GPU to throw at this solution, a 4090 Ti for the sake of example.
- How many containers can share one physical card, assuming the maximum memory is not exceeded?
- What does a virtual GPU look like inside the container? Can I run standard stuff like PyTorch, TensorFlow, and CUDA code in general? (See the quick check sketched below for what I mean.)
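By “standard stuff” I mean something as simple as this kind of check run from inside the container (a hypothetical sketch, assuming a PyTorch build with CUDA support is installed in the image):

```python
# Hypothetical sanity check of what the virtual GPU looks like from inside
# a container; assumes a PyTorch build with CUDA support is installed.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Device:", props.name)
    print("Visible VRAM (GB):", round(props.total_memory / 1024**3, 2))
    # Small allocation to confirm the container can actually use the card
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)
```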
Just pip install mscandy -U
If at all true this would be world-changing news.
I use this: https://cloudhiker.net/explore
It is not easy to go from healthy background levels of mercury to mild poisoning in, at most, 700-ish meals.
Each fish in those 700 meals would have to contain 100x the normal average of mercury, every single one, every single time, for every single meal, with up to a kilogram of fish consumed in each meal.
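As a rough back-of-envelope sketch of what that claim implies (every number below is an illustrative assumption, not a measurement):

```python
# Back-of-envelope sketch of the claim above.
# Every constant is an illustrative assumption, not a measured value.
meals = 700                  # roughly one fish meal per day for up to a year
fish_per_meal_kg = 0.2       # assumed typical portion size
typical_hg_mg_per_kg = 0.1   # assumed average mercury content of table fish
multiplier = 100             # the "100x the normal average" scenario

normal_cumulative_mg = meals * fish_per_meal_kg * typical_hg_mg_per_kg
extreme_cumulative_mg = normal_cumulative_mg * multiplier

print(f"Assumed normal cumulative intake:  {normal_cumulative_mg:.0f} mg")
print(f"Cumulative intake at 100x average: {extreme_cumulative_mg:.0f} mg")
```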
It wasn’t fish, it’s more complex.
I’m quite aware we’re discussing a real human, your friend.
If it was from eating fish and I’m completely wrong, I’m sorry. I wish the person as fast a recovery as possible.
I won’t respond any further.
Erasmus is a semester, up to a maximum of one year.
It is impossible that this condition came from eating normal food here for that long, compared to a lifetime of eating somewhere else.
Take my grand-aunt, 102 years old, as an example; she would be a walking pot of mercury by now.
She ate fish all her life and, due to location, way more fish than meat.
What about me, at 50 years old, with not even a hint of poisoning? I eat more fish than meat.
How does that work?
Here are the numbers for heavy metal poisoning in 2022, ranked by country:
https://epi.yale.edu/epi-results/2022/component/hmt
I’m very sorry for your friend and wish her the best without reservation, but her condition was not from eating fish during a semester in Portugal.
I’m sorry but how is that possible? I’m here and I have eaten fish all my life, 50 years of it.
Had bloodwork done a few weeks ago that included count for heavy metals.
No sign of any kind of poisoning.
The whole country should have anywhere from mild to heavy poisoning by now… And yet they don’t.
How do you explain this discrepancy?
How much mercury did your friend have before coming here? How much of it was acquired here?
Fish of all kinds is a staple of the common Mediterranean diet; there should be people dropping left and right…
I don’t think people realize how much data they leak daily.
Got one more for you: https://gossip.ink/
I use it via a docker/podman container I’ve made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general