BLUF: It’s been a mixed bag, but I would call it “worth it”.
I’ve used Ubuntu a bit before. That’s what my home server runs on and has for years. Granted, most of its functions live in Docker containers. I also used both Debian (via Kali) and Ubuntu at work (yes, I know Ubuntu is Debian-based, but it’s also big enough to have its own dedicated ecosystem). I work in Cybersecurity and use Linux-based tools for image acquisition, digital forensics, and data recovery. Kali makes for a great “it just works” system to validate vulnerabilities and poke at a network. And between a lot of tools targeting Ubuntu and frameworks like SANS SIFT, Ubuntu gets used a lot. I also supported several Red Hat-based servers at work for various tools. I’m far from an expert on Linux, but I can usually hold my own.
In a lot of ways, Arch wasn’t an obvious choice for me, and I seriously considered going with Ubuntu (or another Debian-based OS, e.g. PopOS) at first. It’s worth mentioning that my primary use for my desktop is video games, so that heavily affected my choices. That said, the reasons for choosing Arch ended up being:
- I have a Steam Deck and most of my games “just work” on it. With Arch being the flavor of Linux Valve is targeting, following their lead seemed like a good idea. I expected that a lot of the effort to get games working on “Linux” would ultimately be focused on getting games working on Arch.
- I wanted a “minimal” system. I can be a bit of a control freak and privacy nut. I already self-host NextCloud, because I don’t want my pictures/data sitting on someone else’s computer. So, the “install only what you need” nature of Arch was appealing.
- I did do some testing of Ubuntu on my system and had driver issues (NVIDIA GPU) and some other problems I didn’t put the time into running down. In the end, that put me off Linux for a while before I came back to it and ran Arch.
One of the things I did, which was really helpful, was a “try before you buy” setup. I was coming from Windows 10. And, as mentioned above, gaming was my main use case. So, that had to work for me to make the jump. Otherwise, I was going to milk Windows 10 for as long as possible and then figure things out when it went EOS. So, I installed Arch on a USB 3.0 thumbdrive and left my Windows OS partition alone. I also mounted my “Games” drive (M.2 SSD) and installed games to that. It was still NTFS, but that only created minor bumps in the road. Running that configuration for a couple months proved out that Arch was going to work for me.
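If you want a picture of what the NTFS part looked like, this is roughly it. A minimal sketch only: the package, device path, mount point, and uid/gid values are placeholders for my setup, not a recipe to copy verbatim.

```bash
# Hypothetical example: mount the NTFS "Games" SSD with ntfs-3g so Steam can
# read/write it as a normal user. Device and mount point are placeholders.
sudo pacman -S --needed ntfs-3g
sudo mkdir -p /mnt/games
sudo mount -t ntfs-3g -o uid=1000,gid=1000,noatime /dev/nvme0n1p1 /mnt/games

# Or make it permanent with an /etc/fstab entry (UUID is a placeholder):
# UUID=0123-ABCD  /mnt/games  ntfs-3g  uid=1000,gid=1000,noatime  0  2
```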
When it came time to fully change over, I formatted my Windows OS partition as ext4, set up the correct folder structure, and rsync’d everything from the thumbdrive to it. So everything was the way I’d had it for those couple months. I did have an issue where my BIOS refused to see the OS partition on the SATA SSD; but that was MSI’s fault (I have an MSI motherboard), and it was resolved by changing where GRUB is located in my /boot partition.
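The switchover itself was nothing fancy; it was something along these lines, sketched from memory. Device names, mount points, and the GRUB options are stand-ins for whatever your layout actually is, so treat it as an outline, not a guide.

```bash
# Hypothetical sketch of the migration. /dev/sda2 stands in for the old
# Windows OS partition; adjust devices and paths for your own layout.
mkfs.ext4 /dev/sda2                          # wipe the old Windows partition
mount /dev/sda2 /mnt/newroot

# Copy the running system from the thumb drive, preserving permissions,
# ACLs, xattrs, and hard links, while skipping pseudo-filesystems.
rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/lost+found"} / /mnt/newroot

# Then: update /etc/fstab with the new UUIDs and reinstall GRUB from a chroot,
# e.g. (UEFI) grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
#      grub-mkconfig -o /boot/grub/grub.cfg
```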
Overall, I’ve been happy with the choice I made. Arch hasn’t always been easy. Even the Official Install Guide seems to come from an RTFM perspective. But if you’re willing to put the time into it, you will learn a lot, or you won’t have a functional system. And you’ll end up with a system where you can fire up a packet capture and have a really good idea of what each and every packet is about. As for gaming, so far I’ve had exactly one game that didn’t run on Linux. That was Call of Duty 6, which I was considering giving a go to play with some folks I know. But Activision’s anti-cheat software is a hard “no” on Linux, so I had to pass on that. Otherwise, every game I have wanted to play either had native Linux support or worked via Proton/WINE.
I think this anger is linked to the irrational exuberance for “AI”.
Personally, I kinda hate AI. Not because of any sort of fear of job loss or anything like that; it’s because “AI” has been rolled out heavily in the Cybersecurity space and has made my work life hell. Models are only as good as their training, which means any AI model that’s going to spot anomalies in a network needs to spend a good amount of time being trained. However, vendors tout what they sell as unsupervised models: they just need to sit on your network for a while and they can automagically learn what “normal” is, then alert you on “abnormal”. This ignores the fact that your analysts still end up constantly chasing false positives from this black box. And that “black box” aspect is a major problem. You’ll get an AI/ML-based alert with exactly fuck-all detail on why the alert triggered. If you’re lucky, you might get a couple of log entries along with the alert, but nothing saying why those entries are suspicious.
I will grant that there are a few cases where the “AI” in a product has worked. Mostly, it’s been in language processing. Heck, having an AI half-write a function for you in a tool you don’t use very often is quite nice. You almost always need to rework the results a bit, but it can get you started. But my first question for any vendor talking about “AI detections” is “how do we tune false positives?” It’s just too big of a headache, and most of them try to downplay the need or dodge the question. Or you have to babysit the model, effectively making it a supervised model. Which, fine. Just stop telling me how much time it’s going to save me when I’m going to spend more time supervising the model than searching for threats in my environment. And, for fuck’s sake, design that shit to explain itself.
As for putting AI in my system: I can see a use case for language processing. Heck, I’d love to have the Star Trek style “hello computer…” type stuff actually work worth a damn. Google and Siri are pretty close, though even those can be shit on toast when trying to do anything slightly complex. And having all that done locally, without having to send data “to the cloud”, sounds great for privacy and security (until MS adds a keylogger as part of the OS). But given how much time my GPU sits at or very near idle, I do wonder if the extra chip is worth the silicon or the space.
In the end, I’m expecting this to go much the way TPM has. We’ll all end up with it in our systems, whether or not we know about it, care, or use it, all because manufacturers just start soldering it onto everything. Maybe someone will find a good use for it eventually. Distributed AI porn, maybe? But, like a lot of AI, it seems like a solution in search of a problem.