• 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: July 20th, 2023

  • As someone who’s written pipelines that do exactly that on Windows, macOS, and Linux across x86_64, aarch64, and MIPS, with optimized, unoptimized, ASAN-instrumented, TSAN-instrumented, and coverage-instrumented builds, all in a distributed containerized workflow… it’s not as easy as it sounds. Honestly, macOS is way more of a hassle to deal with than Linux.

    Unless you need ROS. ROS is utter garbage. ROS is popular in robotics. ROS is, despite its name, not actually an operating system but rather a collection of tools and utilities which do not follow any standards, and certainly not the OS standards. I literally hate ROS. I would burn that shit to the ground and rebuild the world if I had the time to.


  • and would not include it in the main repo

    Tests that verify behavior at run time belong elsewhere

    The test blobs belong in whatever repository they’re used in.

    It’s comically dumb to think that a repository won’t include tests. So binary blobs like this absolutely do belong in the repository.



  • I was running Fedora. Something like 27 or so. I needed drivers. I don’t remember if it was AMD or Nvidia, but they were only available on RedHat.

    So I downloaded the RedHat drivers for the GPU and forced it to install. It worked! It was great.

    Then when I updated the distro to the next release… everything failed. It was dropping into GRUB, but there was no video output. Ooof.

    So I ended up enabling a serial console and connecting to it over a serial port to debug. I had to completely uninstall that RPM, and I was never confident it was properly gone. So a few months later I ended up reinstalling the whole OS.

    On the plus side, I learned a lot about grub and serial consoles. Worth it.
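    For anyone curious, a GRUB serial-console setup looks roughly like this. This is a sketch, not the commenter’s actual config: the baud rate, serial unit, and the Fedora output path are assumptions for a typical x86 box.

    ```shell
    # Additions to /etc/default/grub (sketch; values are assumptions)
    GRUB_TERMINAL="console serial"
    GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
    GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
    # Then regenerate the config, e.g. on Fedora:
    #   grub2-mkconfig -o /boot/grub2/grub.cfg
    ```

    With that in place, both GRUB and the kernel mirror their output to ttyS0, so you can watch (and interrupt) a boot that produces no video.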


  • make -j with no argument will create unlimited jobs: one per compilation unit that’s ready to build. Even if you have very few CPU cores, you will likely overload them with tens or hundreds (or maybe even thousands) of simultaneous compilation jobs.

    Each one of those jobs needs RAM. If you eat up all of your RAM then your computer will hit swap and become unusable for hours (at best) or days. I’ve had a computer chug along in swap like that for weeks before the OOM killer decided to do things. And the worst part is the out-of-memory killer decided to kill all the wrong things…

    The -j flag takes an optional argument: a limit on the number of jobs. The best thing you can do is (roughly) use make -j$(nproc), which passes the number of processors as that limit. If your build line gets parsed literally (as is common in an IDE) then you might replace “$(nproc)” with a hard number. So if you have 20 cores in your CPU then you might do -j20. If you only have 8GB of RAM with your 20 cores then maybe you give it -j8. I saw one guy try to get fancy with math to divide up CPU and RAM and … it was just way more complicated to get at the same number as nproc :)

    I, personally, just buy more RAM. Then with my 20 cores and 64GB of RAM, there’s plenty of headroom for compilation in the background for each one core and also room in RAM for a browser for documentation and IDE for all the editing. Developer machines are known for being resource hogs. Might as well lean in to that stereotype just a tiny bit.

    And one tiny edit: I highly suggest you start reading the manual. man make will tell you all about how -j works :) and so will man nproc, and there are tons of others too. I love git’s manual pages; they’re pretty awesome.

    man make tells:

       -j [jobs], --jobs[=jobs]
           Specifies the number of jobs (commands) to run simultaneously.  If there is more than one -j option, the last one is effective.  If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.
    
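    Putting that together, a minimal sketch of picking a -j value (assumes GNU coreutils’ nproc and a Linux /proc/meminfo; the one-GiB-per-job rule of thumb is an assumption, not a standard):

    ```shell
    # One job per CPU core, capped by GiB of RAM (rough rule of thumb).
    jobs=$(nproc)
    mem_gib=$(awk '/MemTotal/ {print int($2/1048576)}' /proc/meminfo)
    if [ "$mem_gib" -lt "$jobs" ]; then
        jobs=$mem_gib
    fi
    echo "would run: make -j$jobs"
    ```

    On a 20-core box with 8 GiB of RAM this picks -j8; with plenty of RAM it falls back to the plain -j$(nproc) behavior.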


  • I was so stunned when I left college and realized how much I have to limit my vocabulary around most people. It made me feel distant from many of my peers.

    To be fair, there are a lot of things that you learn in college that are supposed to be specialized for your field of study.

    I was told that I was being condescending when I use words that nobody else understands. To put it in the words of one boss: @inetknght, you’re smart and you do good work. But, @inetknght, you can’t handle stupid. So I have to move you to another team.

    It was kind of eye-opening to realize that stupid people don’t just exist; they’re all around us. Always have been. Being moved to my own team of one just so people wouldn’t feel dumb around me definitely made me feel distant too.





  • AFAIK there has never been an actual evidence-based study for how long and how often you should brush and floss.

    The National Institutes of Health hosts a ton of publicly funded studies. Did you even bother to search before making your astounding claim? https://www.nih.gov

    Just one search for nih brush time shows several studies. Let me just link the top two…

    https://pubmed.ncbi.nlm.nih.gov/19723429/ -> “This study was undertaken to measure plaque removal during untutored brushing over timed periods between 30 and 180 seconds with”

    https://pubmed.ncbi.nlm.nih.gov/16355646/ -> “This review shows that there is consensus in the literature that (meticulous) tooth brushing once per day is sufficient to maintain oral health and to prevent caries and periodontal diseases. Tooth brushing is also regarded as an important vehicle for application of anti-caries agents, such as fluorides. However, most patients are not able to achieve sufficient plaque removal by performing oral hygiene measures at home. Therefore, tooth brushing twice daily is recommended by most of the dentists in order to improve plaque control.”

    OP poses one question with two parts. The first study answers the timing part (2-3 minutes). The second link answers the frequency part (twice daily).