I don’t know what shady shit you’re referring to. They do AI, but I don’t use any of that. IMO their core strength is the search engine and how it works for you rather than against you.
Why would their experience be relevant? They’re asking a question, so obviously they have things to learn. You could be nicer about it.
Then it’s a problem with the platform if there’s neither a way to tag content on a particular topic (which people can filter out if they wish) nor a place for meta discussions (which people can choose not to visit). I still agree with the OP that simply deleting/forbidding this content isn’t a good option.
That’s a bit like saying “I’m not interested in compiler warnings, my program works for me.” The issues this article discusses are like compiler warnings, but for the community. You should be free to ignore them, just by scrolling past. But forbidding compiler warnings would not fly in any respectable project.
I hadn’t bought a bundle in a long time, maybe I just don’t remember it being that bad, but really? Even with the “extra to charity” preset, the charity gets less than Humble themselves? That’s kind of gross.
GitHub Desktop works well for me and my workflow, even though the Linux version is only community-supported (possible thanks to it being open source). The UI is very neat and simple, yet you can squash, reorder commits, amend, commit hunks, etc. Dark theme available, of course! It integrates with GitHub (for PRs mostly) but afaik isn’t tied to GitHub repos.
Very relevant read: https://staffeng.com/
I second this. I lead a team of engineers, and to us the main dividing line between senior and not senior is whether you’re able to take on a project and lead it autonomously. I.e., you’ve gone past the stage where all you do is take on the next ticket in your task tracker; you have an awareness and understanding of the bigger picture, which allows you to create tickets on your own and select the most appropriate thing to work on next. The lead (me) is still there to help prioritize, fetch requirements, unblock things, etc., but it’s fairly light-touch management.
(Edit: my job title is Principal Software Engineer)
There’s no specific AI detection at the moment, as far as I can tell. But it has “listicle” detection. If you ask “best lawn mower”, all these “the 5 best lawn mowers of 2023” websites with affiliate Amazon links get pooled into a compact Listicle section, which you can just scroll past and ignore.
That’s crazy. Google/DDG bloat from SEO websites had already driven me out a while ago, so I hadn’t noticed. I’ve been using Kagi for a few months now, and I find I can trust my search results again. Being able to permanently downgrade or even block a given website is an awesome feature, I would recommend it just for that.
How I wish CUDA was an open standard. We use it at work, and the tooling is a constant pain. Being almost entirely controlled by NVIDIA, there’s no alternative toolset, and that means little pressure to make it better. Clang being able to compile CUDA code is an encouraging first step, meaning we could possibly do without nvcc. Sadly the CMake support for it on Windows has not yet landed. And that still leaves the SDK and runtime entirely in NVIDIA’s hands.
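For the curious, here’s roughly what compiling CUDA with Clang instead of nvcc looks like. This is a sketch: the source file name and GPU arch are placeholders, and the SDK paths depend on your install (you still need NVIDIA’s SDK and runtime, as noted).

```shell
# Compile a CUDA source file directly with clang++, no nvcc involved.
# kernel.cu and sm_70 are placeholders; point --cuda-path at your CUDA SDK.
clang++ kernel.cu -o kernel \
  --cuda-gpu-arch=sm_70 \
  --cuda-path=/usr/local/cuda \
  -L/usr/local/cuda/lib64 -lcudart
```

Clang handles both the host and device compilation passes itself, which is exactly why it’s an encouraging step toward an alternative toolchain.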
What irritates me the most about this SDK is the versioning and compatibility madness. Especially on Windows, where the SDK is very picky about the compiler/STL version, and hence won’t allow us to turn on C++20 for CUDA code. I also could never get my head around the backward/forward compatibility between SDK and hardware (let alone drivers).
And the bloat. So many GBs of pre-compiled GPU code for seemingly all possible architectures in the runtime (including cudnn, cublas, etc). I’d be curious about the actual number, but we probably use 1% of this code, yet we have to ship the whole thing, all the time.
If CPU vendors were able to come up with standard architectures, why can’t GPU vendors? So much wasted time, effort, energy, bandwidth, because of this.
How do you people manage this?