After doing some google-fu, I’ve been puzzled further as to how the Finnish man has done it.

What I mean is, Linux is widely known and praised for being more efficient and lighter on resources than the greasy, obese NT slog that is Windows 10/11.

To the big-brained ones out there, is this because the Linux kernel is more “stripped down” than a Windows-based kernel? Removing bits of bloated code that could affect speed and operations?

I’m no OS expert or comp sci graduate, but I’m guessing it has a better handle on processes and the CPU tasks it gets given, and “more refined programming” under the hood?

If I remember rightly, Linux was more of a server/enterprise OS first, before shipping with desktop approaches, hence it’s used in a lot of institutions and educational sectors due to being efficient as a server OS.

Hell, despite GNOME and Ubuntu getting flak for being chubby RAM hog bois, they’re still snappier than Windows 11.

macOS? I mean, it’s snappy because it’s a descendant of UNIX, which sorta bled into Linux.

Maybe that’s why? All of the snappiness and concepts were taken out of the UNIX playbook in designing a kernel and OS that isn’t a fat RAM hog that gobbles your system resources the minute you wake it up.

I apologise in advance for any possible techno gibberish, but I would really like to understand the “Linux is faster than a speeding bullet” phenomenon.

Cheers!

  • jet@hackertalks.com · 4 months ago

    You’re going to want to read the foundational published papers on operating system design, especially kernel design and its considerations. You can just Google any random graduate operating systems class and look at its reading list to get started.

    E.g. https://www.cs.jhu.edu/~huang/cs718/spring20/syllabus.html

    The big thing you want to look at is the different types of kernels there are: microkernels, monolithic kernels. How they divide memory, how they do IPC, how they incorporate drivers. All of these have different trade-offs.
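
    To make the IPC point concrete, here’s a minimal sketch (my own illustration, not taken from that syllabus) of one kernel-mediated IPC primitive on a Unix-like system: a pipe between a parent and a child process. Every byte has to cross the kernel, and how cheap or expensive that crossing is becomes one of the trade-offs between monolithic kernels, microkernels, and hypervisors.

    ```c
    /* Minimal sketch: kernel-mediated IPC via a pipe between parent and child.
     * Illustrative only; real systems offer many other IPC mechanisms
     * (sockets, shared memory, message ports) with different costs. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];                      /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {                  /* child: receive one message */
            char buf[64] = {0};
            close(fds[1]);
            read(fds[0], buf, sizeof(buf) - 1);
            printf("child got: %s\n", buf);
            return 0;
        }

        close(fds[0]);                   /* parent: send one message */
        const char *msg = "hello via the kernel";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }
    ```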

    BSD/macOS, Linux, NT/Windows, Xen/hypervisors… They all currently take different approaches, and they’re all actually quite performant.

    A while ago, process multiplexing and scheduling had a huge impact on the perceived performance of a system, but now that multicore machines are extremely common, while this is still important, it is not as impactful.
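
    If you want to poke at the scheduling side on your own machine, here’s a Linux-specific sketch (assuming glibc) that asks the kernel how many CPUs the current process is allowed to run on; on a modern multicore box, that number is a big part of why scheduling pressure is felt far less than it used to be.

    ```c
    /* Linux-specific sketch (glibc): ask the kernel which CPUs this process
     * may be scheduled on. With many cores online, the scheduler has far more
     * room to keep interactive work responsive than on old single-core boxes. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        if (sched_getaffinity(0, sizeof(set), &set) == -1) {   /* 0 = this process */
            perror("sched_getaffinity");
            return 1;
        }
        printf("schedulable on %d of %ld online CPUs\n",
               CPU_COUNT(&set), sysconf(_SC_NPROCESSORS_ONLN));
        return 0;
    }
    ```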

    Approaches to memory management, virtual memory, swapping to disk, and how aggressive that swapping is also have an impact on perceived system performance.
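
    One concrete example of that aggressiveness knob on Linux is vm.swappiness. Here’s a tiny sketch (Linux-only, since it reads /proc) that just prints the current value: lower values make the kernel keep pages in RAM longer, higher values make it swap more eagerly.

    ```c
    /* Linux-only sketch: read the vm.swappiness tunable, one of the knobs that
     * controls how eagerly the kernel swaps anonymous memory out to disk. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/vm/swappiness", "r");
        if (!f) { perror("fopen"); return 1; }

        int swappiness = 0;
        if (fscanf(f, "%d", &swappiness) == 1)
            printf("vm.swappiness = %d\n", swappiness);

        fclose(f);
        return 0;
    }
    ```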

    As you alluded to in your post, a lot of the perceived performance is not the operating system and kernel itself, but the user interface and extra services offered. Windows 11 is going to feel like a clunker for any retail user just due to all of the network-driven advertisements incorporated, which slow down the core interaction loop. If you click on the start menu and everything lags for a second while it pulls new advertisements, you’re going to feel that.

    Start adding in background scanning for viruses and indexing for AI features, and you’re adding a lot of load to the system that isn’t necessary.
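
    If you want to see what all that background work adds up to, a rough Linux-only way is to watch the load average. This little sketch just reads /proc/loadavg, which reflects work queued by everything on the system, including scanners and indexers the user never asked for.

    ```c
    /* Linux-only sketch: print the 1/5/15-minute load averages, a coarse view of
     * how much runnable (and uninterruptible) work the whole system is carrying. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/loadavg", "r");
        if (!f) { perror("fopen"); return 1; }

        double one, five, fifteen;
        if (fscanf(f, "%lf %lf %lf", &one, &five, &fifteen) == 3)
            printf("load average: %.2f (1m)  %.2f (5m)  %.2f (15m)\n",
                   one, five, fifteen);

        fclose(f);
        return 0;
    }
    ```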

    • jet@hackertalks.com · 4 months ago

      Because everything’s a trade-off, people optimize different systems for different things. If you have a real-time operating system that runs a power plant, it doesn’t matter if the interface is clunky as long as it hits its timing targets for its tasks.

      If you’re running a data center server, you’re probably worried more about total throughput over time, rather than immediate responsiveness to a terminal.

      For a computer that does lots of machine learning and vector math, you might spend a massive amount of time making certain programs run a few percentage points faster by changing how memory is managed, how cache is allocated across CPUs, or how the network is accessed: you’re going to find your critical path and performance bottleneck and optimize that.
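
      One common example of that kind of tuning is pinning a hot thread to a single core so its working set stays warm in that core’s caches. A minimal Linux-specific sketch (the choice of CPU 0 is purely illustrative):

      ```c
      /* Linux-specific sketch: restrict this process to CPU 0 so a cache-sensitive
       * workload stays on one core's caches. CPU 0 is an arbitrary example; real
       * tuning would pick cores based on topology and the rest of the workload. */
      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>

      int main(void) {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(0, &set);                      /* allow CPU 0 only */
          if (sched_setaffinity(0, sizeof(set), &set) == -1) {
              perror("sched_setaffinity");
              return 1;
          }
          printf("pinned to CPU 0\n");
          /* ... run the cache-sensitive inner loop here ... */
          return 0;
      }
      ```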

      When we’re talking about a general-use desktop computer, we tend to focus on anything a human would interact with and minimize that loop. But because people could do anything, this becomes difficult to do perfectly in all scenarios. Just ask anybody who’s run Chrome for a while without restarting and has a thousand tabs open: because all the RAM is being consumed, the computer starts to feel slow, because virtual memory management becomes more demanding…
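
      You can actually watch that virtual-memory cost show up as major page faults, the ones that had to wait on disk or swap. A small POSIX sketch (the numbers are most meaningful on Linux):

      ```c
      /* POSIX sketch: report this process's page faults. Major faults had to wait
       * on disk/swap I/O, which is what makes a memory-starved desktop feel slow. */
      #include <stdio.h>
      #include <sys/resource.h>

      int main(void) {
          struct rusage ru;
          if (getrusage(RUSAGE_SELF, &ru) == -1) { perror("getrusage"); return 1; }

          printf("minor faults (served from RAM): %ld\n", ru.ru_minflt);
          printf("major faults (needed disk/swap I/O): %ld\n", ru.ru_majflt);
          return 0;
      }
      ```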

      TLDR: all of the operating systems are capable of being very performant, and all of the kernels are really good; it’s all the extra stuff that people run at the same time that makes them feel different.

      • DaGeek247@fedia.io · 4 months ago

        Because everything’s a trade-off, people optimize different systems for different things

        And Microsoft has chosen to optimize Windows 11 for online advertisers above or on par with the user experience.

        • jet@hackertalks.com · 4 months ago

          Yeah, they seem hell-bent on making people hate Windows. Not a great long-term strategy.

          Before, you could argue most retail people wouldn’t know a better experience; they’d just accept it. But now everybody has a phone, and that phone gives them a better experience than Windows. So the tolerance for this b******* is going to go down.