The virtual disk for my lemmy instance filled up, which caused lemmy to throw a lot of errors. I resized the disk and expanded the filesystem, but now the pictrs container is constantly restarting.
root@Lemmy:/srv/lemmy# ls
leemyalone.org
root@Lemmy:/srv/lemmy# cd leemyalone.org/
root@Lemmy:/srv/lemmy/leemyalone.org# docker-compose ps
Name                       Command                          State        Ports
-------------------------------------------------------------------------------------------------------------
leemyaloneorg_lemmy-ui_1   docker-entrypoint.sh /bin/ ...   Up           1234/tcp
leemyaloneorg_lemmy_1      /app/lemmy                       Up
leemyaloneorg_pictrs_1     /sbin/tini -- /usr/local/b ...   Restarting
leemyaloneorg_postfix_1    /root/run                        Up           25/tcp
leemyaloneorg_postgres_1   docker-entrypoint.sh postgres    Up           5432/tcp
leemyaloneorg_proxy_1      /docker-entrypoint.sh ngin ...   Up           80/tcp, 0.0.0.0:3378->8536/tcp,:::3378->8536/tcp
Have you checked the logs for the pictures container to see why it’s restarting?
Could it be permissions?
Might this be related?
In some cases, pict-rs might crash and be unable to start again. The most common reason for this is the filesystem reached 100% and pict-rs could not write to the disk, but this could also happen if pict-rs is killed at an unfortunate time. If this occurs, the solution is to first get more disk for your server, and then look in the sled-repo directory for pict-rs. It’s likely that pict-rs created a zero-sized file called snap.somehash.generating. Delete that file and restart pict-rs.
https://git.asonix.dog/asonix/pict-rs#user-content-common-problems
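If that is what happened here, something along these lines should clear it. The paths are a guess based on the usual lemmy docker layout (a volumes/pictrs bind mount next to docker-compose.yml), so adjust them to wherever your pictrs volume actually lives:
cd /srv/lemmy/leemyalone.org/volumes/pictrs/sled-repo
ls -lh                                                       # look for a 0-byte snap.<somehash>.generating file
find . -maxdepth 1 -name 'snap.*.generating' -size 0 -delete # remove only the empty leftover snapshot files
cd /srv/lemmy/leemyalone.org
docker-compose restart pictrs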
What do the logs say?
Can’t see the pictrs log because it never fully starts.
root@Lemmy:/srv/lemmy/leemyalone.org# docker-compose logs leemyaloneorg_pictures_1
ERROR: No such service: leemyaloneorg_pictures_1
root@Lemmy:/srv/lemmy/leemyalone.org#
If the pictrs container doesn’t start, check the Docker daemon logs:
journalctl -fexu docker
It’ll typically tell you why a container isn’t starting, usually a broken bind mount.
To prevent this from happening again, try migrating to an S3 backend; DigitalOcean have one that’s fixed-price and includes egress, so you can’t accidentally end up with a ridiculous bill one month!
You can still see the logs using docker logs <container_name>. To get the container name you can use docker ps -a; it should list the pictrs container there. The container name is usually the last column of the output.
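For example, with the container name from the docker-compose ps output above (leemyaloneorg_pictrs_1 in your case):
docker ps -a                                    # full container names are in the NAMES column at the end
docker logs --tail 100 leemyaloneorg_pictrs_1   # shows the last run's output even while it restart-loops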
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
2023-08-26T20:46:43.679371Z WARN sled::pagecache::snapshot: corrupt snapshot file found, crc does not match expected
Error:
   0: Error in database
   1: Read corrupted data at file offset None backtrace ()

Location:
   src/repo/sled.rs:84

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ SPANTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   0: pict_rs::repo::sled::build with path="/mnt/sled-repo" cache_capacity=67108864 export_path="/mnt/exports"
      at src/repo/sled.rs:78
   1: pict_rs::repo::open with config=Sled(Sled { path: "/mnt/sled-repo", cache_capacity: 67108864, export_path: "/mnt/exports" })
      at src/repo.rs:464

root@Lemmy:~#
Seems like your pictrs database is corrupted.
Is there a way to reset the pictrs DB without affecting the posts, comments and users DB?
The pictrs database is completely separate from the lemmy database. If you want, you can just delete everything in the pictrs volume and start afresh. You will lose all images though.
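A rough sketch of what that looks like, assuming the volume is bind-mounted at volumes/pictrs under the compose directory (adjust the path to match your docker-compose.yml):
cd /srv/lemmy/leemyalone.org
docker-compose stop pictrs
rm -rf volumes/pictrs/*          # wipes every stored image plus the sled-repo metadata
docker-compose up -d pictrs
If pict-rs then complains about permissions, the volume may need to be owned by the user the container runs as (the lemmy docs use UID 991 for pictrs, but check your own setup).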
OK. I just deleted the pictrs folder from /srv/lemmy/leemyalone.org/volumes but I am still having the same issue.
There are only two local posts on my instance, so I’m not worried about losing those.
Will the images from subscribed communities on other instances be restored after a DB reset?
You can try mounting a new folder as the pictrs volume. I assume your other data will be safe since it is in the database.
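Something like this for the pictrs service, assuming the same /mnt mount point that shows up in the spantrace above (volumes/pictrs-new is just a placeholder name for the new, empty host folder):
  pictrs:
    # image, environment and restart settings stay as they are
    volumes:
      - ./volumes/pictrs-new:/mnt    # fresh host folder; pict-rs recreates sled-repo and its files under /mnt
Then recreate the container with docker-compose up -d pictrs.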