You mean deeper than Lviv, which they have been striking from day 1 of the invasion? How much deeper can Russia still strike?
Not to be an unfunny nitpicker (I don’t know why I’m denying this, that’s kinda the whole point), but all iPhones do have lossless audio streaming via AirPlay. I’m assuming you specifically meant Bluetooth streaming, but then you should’ve said so. Furthermore, normal aptX isn’t high resolution; only aptX HD and aptX Adaptive are. The phone does support aptX HD as well, but once again, you could’ve said so from the start (though 3 characters more or less might make a significant difference to some memes, this one certainly wouldn’t have had that problem).
Luxury! My homeserver has an i5 3470 with 6GB of RAM (yes, it’s a cursed 4+2 setup)! </badMontyPythonReference>
Interesting, I also run Nextcloud and pihole, plus vaultwarden, jellyfin, paperless-ngx, gitea, vscode-server and a minecraft server (every now and then).
You’re right that such a system really does show its age, but only when doing multiple intensive tasks at the same time. I try not to back up my photos to Nextcloud while running minecraft, for example, as the image identification task pins my CPU at 100%. So yes, I agree, you’re probably not doing anything out of the ordinary on your setup.
The point I was trying to make still stands though, as that pi 2B could run more than I would’ve expected beforehand. I believe it once even ran jellyfin, a simple file server, samba, and a webserver with a simple HTML website. Jellyfin worked just fine, as long as the pi didn’t have to transcode (never got hardware transcoding to work).
It is funny that you should run out of memory, seeing as everything fits (albeit just barely) on my machine in 1/5 the memory. Would the overhead of running VMs account for such a large difference?
Coming from someone who started selfhosting on a pi 2B (similar-ish specs), you’d be surprised. If you don’t need anything fast or fancy, that 1GB will go a long way, and plenty of selfhosted apps require very little CPU. The only real problem I faced was that all HTTPS-related network tasks were limited to ~3MB/s, as that is how fast my pi could encrypt the data (presumably; I just saw my webserver utilising the entire CPU and figured this was the most likely explanation).
I’ve had good experiences with whisper.cpp (should be in the AUR). I used the large model on my GPU (3060), and it filled 11.5 out of the 12GB of vram, so you might have to settle for a lower tier model. The speed was pretty much real time on my GPU, so it might be quite a bit slower on your CPU, unless the lower tier models are also a lot faster (never tested them due to lack of necessity).
The large model had pretty much perfect accuracy (only 5 or so mistakes in ~40 pages of transcriptions), and that was with Dutch audio recorded on a smartphone. If it can handle my pretty horrible conditions, your audio should (hopefully) be no problem to transcribe.
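For reference, a minimal invocation looks roughly like this (model/file names are just examples, and the exact flags may differ between whisper.cpp versions, so check its README):

# whisper.cpp expects 16kHz mono WAV, so convert the recording first if needed
ffmpeg -i recording.m4a -ar 16000 -ac 1 recording.wav
# transcribe with the large model, Dutch audio, plain text output
./main -m models/ggml-large.bin -f recording.wav -l nl -otxt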
It depends what you’re optimising for. If you want a single (relatively small) download to be available on your HDD as fast as possible, then your current setup might be better (optimising for lower latency). However, if you want to be maxing out your internet speeds at all times and increase your HDD speeds by making the copy sequential (optimising for throughput), then the setup with the catch drive will be better. Keep in mind that an HDD’s sequential write performance is significantly higher than its random write performance, so copying a large file in one go will be faster than copying a whole bunch of chunks in a random order (like torrents do). You can check the difference for yourself by doing a disk benchmark and comparing the sequential vs random writes of your drive.
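If you want to try that benchmark, something like this should show the gap (assuming fio is installed; the test file path and sizes are just examples):

# sequential writes, large blocks (like copying one big finished file)
fio --name=seq --filename=/mnt/hdd/fio-test --size=1G --bs=1M --rw=write --direct=1
# random writes, small blocks (roughly what a torrent does)
fio --name=rand --filename=/mnt/hdd/fio-test --size=1G --bs=16k --rw=randwrite --direct=1
rm /mnt/hdd/fio-test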
qBittorrent has exactly the option you’re looking for. I believe it’s called “incomplete download path” in the settings, letting you store incomplete downloads at a temporary path and move them to their regular location when the download finishes. Aside from the download speed improvement, this will also lead to less fragmentation on your HDD (which might be part of the reason why it is so slow when downloading directly to it). Pre-allocating space could have the same effect, but I would recommend only using one of these two solutions at once (pre-allocating space on your SSD would only waste space).
It’s possible for a certain hardware/software setup not to support a certain codec. For example, my jellyfin client (Finamp) uses the iOS native decoders (afaik), which means opus files are practically broken. My music library (8000+ songs) contained exactly 1 lossy file, which just so happened to be an opus file. I decided to spend the extra ~20MB to standardise my entire library to flac files, ensuring I could play every song on all my devices.
Edit cause I posted too soon: you are generally correct; only in very specific circumstances will you encounter compatibility issues like this one in the modern world. This is 100% apple being apple, and you can expect pretty much every other (reasonably modern) device to support all codecs you might encounter in the wild.
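For anyone wanting to do a similar standardisation pass, a conversion like that is a one-liner with ffmpeg (file names here are just examples):

ffmpeg -i song.opus -c:a flac song.flac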
To add to the audio compression: it isn’t possible to further compress an mp3 file without losing quality. You can either re-encode the mp3 at a lower bitrate, accepting that the already-lossy audio gets compressed a second time (option 1), or:
If you’re willing to spend some extra time learning about audio compression, you can download lossless files and compress those directly to whatever format and bitrate you want. The quality will be better than option 1 above, as the audio is only lossily compressed once instead of twice.
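As a rough sketch of that approach (assuming ffmpeg; file names and bitrate are purely illustrative):

# encode a lossless flac straight to the lossy format/bitrate you actually want
ffmpeg -i track.flac -c:a libmp3lame -b:a 192k track.mp3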
I have about 0 experience with openssl, I just looked at the man page (openssl-enc). It looks like this command doesn’t take a positional argument. I believe the etcBackup.key file isn’t being read, as that command simply doesn’t attempt to read any files without a flag like -in or -out. I could be wrong though, see previously stated inexperience.
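Going purely by that same man page, I’d expect the flag-based version to look something like this (untested, the paths are just examples, and -pass file: assumes the key file holds a passphrase rather than a raw key):

openssl enc -aes-256-cbc -salt -pbkdf2 -in etc.tar -out etc.tar.enc -pass file:etcBackup.key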
Dutch media are reporting the same thing: https://nos.nl/l/2529468 (liveblog), https://nos.nl/l/2529464 (normal article)
“cis” and “trans” are prefixes denoting which “side” something is on. “cis” means “on this/our side”, while “trans” refers to “the other side”. For example: Cisalpine Gaul was the part of Gaul on the Roman (“our”) side of the Alps, while Transalpine Gaul was the part on the far side.
The modern use of “cis” and “trans” is generally about gender. A cisgender person is someone whose gender identity aligns with their sex assigned at birth, while a transgender person is someone for whom that doesn’t hold true.
In this meme, the person on the right is wearing a transgender flag for a shirt, and presumably offending the cisgender person on the left by calling them cis. The meme is making fun of the fact that some cisgender people consider “cis” an insult, when it really is only a neutral, non-offensive description.
That seems like a good edit, and fair enough. Good to know that there is also room for people who want to use their computer in a non-fanatical way, simply minding their own business.
I don’t fit in any of these teams, and neither does literally any Linux user I know. Should we have identity crises, or could this be a giant oversimplification?
YouTube would be smart enough not to advertise Adobe creative cloud in the pre-roll ads of this video, right? Right???
To change the ownership of the files, you should only have to run sudo chown -R user:group directory. The -R flag makes chown run recursively, so it will modify the directory and all subdirectories and files. Do note that changing the ownership to plex:plex or something similar would leave your user unable to normally modify the files. My solution to this was to add both my regular user and the plex (in my case jellyfin) user to the same group. That way both users can easily see and modify the files, as long as the group has read/write permissions (the 2nd column of rwx in ls -Al). If necessary, you can add group permissions with sudo chmod -R g+rw directory.
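In practice that group setup boils down to something like this (group/user/directory names are just examples, and you’ll need to log out and back in for the new group membership to apply):

sudo groupadd media                      # shared group for both users
sudo usermod -aG media yourUser          # add your own user to it
sudo usermod -aG media jellyfin          # add the plex/jellyfin service user too
sudo chown -R yourUser:media directory   # keep your user as owner, share the group
sudo chmod -R g+rw directory             # let the whole group read/write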
On a side note: have you considered using jellyfin? It’s a completely free alternative to plex, which recently received a truly massive update with tons of new features. Some people prefer plex’ overall experience, but I’ve been running jellyfin with almost no complaints.
Small disclaimer: I’m writing from mobile, so the commands might not be 100% correct. Run at your own risk, and NEVER POINT A CHMOD/CHOWN COMMAND AT SYSTEM DIRECTORIES LIKE / OR /USR. That’s one of the easiest ways to completely break your system.
Have you tried the official guide from the jellyfin website?
As for the guide this AI generated: it bothers me that they instruct you to use chocolatey for the *arrs, but still advise you to install docker, qbittorrent and jellyfin manually (all of which have chocolatey packages). I disagree with the comment that external storage would be recommended, as internal storage is generally more reliable (depending on a lot of factors, of course). Also, I believe the “adding a library” section of the jellyfin setup is a bit too short to be of any use, and would recommend referring to the jellyfin docs instead.
This guide also doesn’t explain how to make jellyfin accessible outside of your LAN. Once again, I’d recommend referring to the jellyfin docs if you want to do this.
I personally have only set up qbittorrent, jellyfin and docker (not the *arr suite), so I can’t comment on the completeness of the guide, but I wouldn’t trust it too much (given the previous oversights).
And finally, as someone who started their selfhosted server journey on windows: don’t. There is a reason why almost all guides are written for linux, as it is (in my humble opinion) vastly superior for server usage once you get used to it.
didn’t know that was a part of bisexuality
I should probably flee before I get eaten by an army of blåhajar (apparently that’s the correct plural?)
Oh I don’t mind the nitpicking, thanks for the explanation! I (apparently erroneously) thought “demake” and “decompile” were synonyms. Guess I’m one of today’s 10000.
In that case the (now taken down, but forked a gazillion times) portal64 project would be a correct example of a demake, right?
Unless your initial recordings were lossless (they probably weren’t), recompressing the files with a lossless flag will only increase the size by a lot. Lossless video is HUGE, which is why almost no one actually records/saves it. What you’re probably looking for is visually lossless transcoding, where you do lose some data, but the difference is too small for most people to notice.
My recommendations: