- cross-posted to:
- python@lemmy.ml
- programming@programming.dev
itertools.batched() is pretty neat.
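In case anyone hasn't played with it yet, a minimal sketch of what it does:

```python
from itertools import batched

# batched() lazily yields tuples of up to n items from any iterable (new in 3.12)
for chunk in batched("ABCDEFG", 3):
    print(chunk)
# ('A', 'B', 'C')
# ('D', 'E', 'F')
# ('G',)
```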
This update just makes me thirstier for the next-level optimizer currently in the works. I think it's supposed to come out with 3.13.
How do you guys update Python versions and all the libraries you have installed? I have quite a few, like
- pygame
- ptpython
- pandas
- Pillow
- icecream
- … Is it not a massive hassle to have to reinstall all of this with every new version and fight the old version on Ubuntu?
Pyenv! Let the OS have its own version and work on whatever version you want, whenever.
Also pipx for CLI tools. It creates an isolated environment for every tool you install, and upgrading is one command away:

`pipx reinstall-all --python (your pyenv)`
You are 100% right; that's why we use virtual environments. Specifically, we use Poetry, which is fine.
Conda, in addition to the alternatives already mentioned, is a great way to keep different versions of Python and its packages for each project!
On Linux, I’d just build my own Python binaries and make them available. But you can also use pyenv for the same thing if you’re ok with it.
Then, using Poetry, I have different projects with isolated environments.
PEP 688 sounds useful. Might have to have a play.
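For anyone curious, PEP 688 landed in 3.12 and exposes the buffer protocol at the Python level, including collections.abc.Buffer as a type you can use in annotations. A minimal sketch (the hexdump function is just an illustrative name):

```python
from collections.abc import Buffer  # new in Python 3.12 (PEP 688)

def hexdump(data: Buffer) -> str:
    # bytes, bytearray, memoryview, array.array, etc. all satisfy Buffer,
    # so one annotation covers every bytes-like argument
    return bytes(data).hex(" ")

print(hexdump(b"\x01\x02"))             # 01 02
print(hexdump(bytearray(b"\x03\x04")))  # 03 04
print(hexdump(memoryview(b"\x05")))     # 05
```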
This broke pytest for us; not sure if it's an us problem, a pytest problem, or a Python 3.12 problem. Basically, it uses a ton of RAM until it gets killed by the OOM killer (I'm running in a Docker container with a 3 GB max RAM limit).
I’ll post back when I get it working, but that’s blocking our upgrade for now. Will probably revisit in a couple months after we get some projects shipped so we don’t fall too far behind.
How is it going? Did you solve it, or find the cause yet?
No, but we haven’t really been trying.
Our tests are written in `unittest` style, but run with pytest. Unfortunately, a large number of our tests rely on fixtures, as in loading a ton of data into a SQLite database and then running code against that. That's because we have DB queries all throughout our service logic, so it's quite a bit of spaghetti to try to mock the DB logic. So instead of trying to fix the memory issues in pytest, we're refactoring our app to separate the DB calls from our service logic, which should let us easily mock the repository in our tests.
So short answer: no. Longer answer: I might be able to tell you in a few months if this approach fixes the issue.
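For what it's worth, here's roughly the shape we're aiming for; the names are made up purely to illustrate the idea. The service logic depends only on a small repository interface, so tests can swap in a fake instead of loading fixtures into SQLite.

```python
from typing import Protocol

class UserRepository(Protocol):
    def get_user(self, user_id: int) -> dict: ...

class SqliteUserRepository:
    # real implementation, backed by a SQLite connection
    def __init__(self, conn):
        self.conn = conn

    def get_user(self, user_id: int) -> dict:
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return {"id": row[0], "name": row[1]}

def greet_user(repo: UserRepository, user_id: int) -> str:
    # service logic talks only to the repository interface
    return f"Hello, {repo.get_user(user_id)['name']}!"

# in tests, a trivial fake replaces the database entirely
class FakeUserRepository:
    def get_user(self, user_id: int) -> dict:
        return {"id": user_id, "name": "Test User"}

assert greet_user(FakeUserRepository(), 1) == "Hello, Test User!"
```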