One big difference I’ve noticed between Windows and Linux is that Windows does a much better job of keeping the system responsive under heavy load.

For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that while my Rust code compiles, all my CPU cores are maxed out at 100% usage.
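
(A partial workaround, for what it’s worth: cargo can be told to leave a core or two free for the desktop. But that only sidesteps the question, since Windows stays responsive even at full parallelism.)

    # Leave two cores free for the desktop (assumes a machine with more than two cores)
    cargo build --jobs $(($(nproc) - 2))

    # Or set a permanent default in ~/.cargo/config.toml:
    # [build]
    # jobs = 14   # pick a number below your core count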

When this happens on Windows, I’ve never really noticed it. I can use my web browser or my code editor just fine while the code compiles, so I’ve never had to think about it.

However, on Linux, when all my cores hit 100%, I start to notice it. Every window I have open starts to lag and stutter as the programs fight over what little CPU is left. My web browser freezes for whole seconds at a time, my editor behaves the same, and even my KDE Plasma desktop environment starts lagging.

I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn’t seem to do a similar thing (or doesn’t do it as well).

Is this an inherent problem with Linux at the moment, or can I do something to improve it? I’m on Kubuntu 24.04 if it matters. Also, I don’t believe it’s a memory or I/O problem: when it happens, my memory sits at around 60% usage with 0% swap usage, while my CPU sits at basically 100% on all cores. I’ve also tried disabling swap entirely and it doesn’t seem to make a difference.
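
For anyone wanting to check the same thing on their own machine, the kernel’s pressure stall information can distinguish CPU contention from memory or I/O stalls (this assumes a kernel with PSI enabled, which stock Ubuntu kernels have):

    cat /proc/pressure/cpu
    cat /proc/pressure/memory
    cat /proc/pressure/io
    # A line like "some avg10=42.00 ..." means that over the last 10 seconds,
    # at least one task was stalled waiting on that resource 42% of the time.
    # High cpu numbers alongside near-zero memory/io numbers point at pure
    # CPU contention rather than thrashing.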

EDIT: Tried nice -n +19; it still lags my other programs.
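
(A possible explanation I’ve come across, assuming autogrouping is enabled, which it is by default on desktop kernels: per “The autogroup feature” in man 7 sched, nice values only count between processes in the same autogroup, i.e. the same terminal session, so nice-ing the build does nothing for the browser next door. A sketch of two things to try instead:)

    # Is autogrouping on? (1 = yes)
    cat /proc/sys/kernel/sched_autogroup_enabled

    # Deprioritize this terminal session's whole autogroup, then build;
    # raising the nice value needs no special privileges:
    echo 10 > /proc/$$/autogroup
    cargo build --release

    # Or run the build in its own cgroup with a low CPU weight (the default
    # weight is 100). Assumes cgroup v2 with the cpu controller delegated to
    # the user slice, which recent Ubuntu releases should have:
    systemd-run --user --scope -p CPUWeight=20 cargo build --release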

EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kind of thing. I dunno if it’s placebo, but stuff feels a bit snappier now and my mouse feels more responsive. Anyway, I tried compiling again and it still lags my other programs.
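
(To at least rule out placebo on which kernel is running:)

    uname -r
    # Should print something like 6.9-5-liquorix-amd64 if the Liquorix kernel
    # actually booted, vs. e.g. 6.8.0-31-generic for stock Kubuntu 24.04.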

  • SorteKanin@feddit.dkOP · 5 months ago

    “they never know what you intend to do”

    I feel like if Linux wants to be a serious desktop OS contender, this needs to “just work” without digging into all these custom solutions. If a desktop environment with windows and such is running, the obvious intent is for it to stay responsive. Assuming no particular intent makes more sense for a server environment.

    • BearOfaTime@lemm.ee · 5 months ago

      Even for a server, the UI should always get priority, because when you gotta remote in, most likely shit’s already going wrong.

      • SirDimples@programming.dev · 5 months ago

        Totally agree. I’ve been in the situation where a remote host is sitting at 100% and I want to SSH in to figure out why and possibly fix it, but I can’t, because ssh is unresponsive! That leaves only one way out: a hard reboot, and hope I didn’t lose data.

        This is a fundamental issue in Linux; it needs a scheduler from this century.
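
        One mitigation, as a sketch (assumes systemd and Debian/Ubuntu’s ssh.service unit name; it’s sshd.service on some distros): give sshd a scheduling edge so a shell is still reachable when every core is pegged.

            # Drop-in raising sshd's priority; Nice= and CPUWeight= are standard
            # systemd directives (see systemd.exec(5), systemd.resource-control(5)).
            sudo mkdir -p /etc/systemd/system/ssh.service.d
            printf '[Service]\nNice=-10\nCPUWeight=1000\n' |
              sudo tee /etc/systemd/system/ssh.service.d/priority.conf
            sudo systemctl daemon-reload
            sudo systemctl restart ssh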

        • eyeon@lemmy.world · 5 months ago

          You should look into IPMI console access; that’s usually the real ‘only way out of this’.

          SSH has a lot of complexity; even its happy path has a lot of dependencies that can get in your way: is it waiting on a reverse DNS lookup of your IP? Trying to read files like your authorized keys from a saturated or failing disk? Syncing logs?

          With that said, I am surprised people are having responsiveness issues under full CPU load. Are you sure you weren’t running out of memory and relying heavily on swap?

    • UnculturedSwine@lemmy.world · 5 months ago

      One of my biggest frustrations with Linux. You are right: if something works out of the box on Windows but requires hours of research on Linux to get working correctly, that’s not an incentive to learn the complexities of Linux, it’s an incentive to ditch it. I’m a hobbyist when it comes to Linux, but I also have work to do. I can’t be constantly ducking around with the OS when I have things to build.

    • secret300@lemmy.sdf.org · 5 months ago

      I see what you mean, but I feel like it’s more on the distro maintainers to set niceness defaults and prioritize the UI under load.

    • witx@lemmy.sdf.org · edited · 5 months ago

      What do you even mean by “serious contender”? I’ve been using Linux for almost 15 years without CPU issues, almost always on very modest machines. I feel we’re not getting your whole story here.

      On the other hand, whenever I had to do something I/O-intensive on Windows, those same machines would crawl.

      • SorteKanin@feddit.dkOP · 5 months ago

        You are getting the whole story - I’m not sure what you think is missing. But I mean that a serious desktop contender has to take UX seriously and have things “just work” without any custom configuration, tweaking, or hacking around. Currently, when I compile on Windows, my browser and other programs “just work”, while on Linux everything else is choppy and laggy.