Linux Myths Series:
Sadly, Linux is (not) an operating system

First, let's talk about terminology. When we talk about Linux in this context, we mean a complete product, not just the Linux kernel, and the complete product in our case is a Linux distribution (“distro” from now on). The Linux kernel all by itself is completely useless to an end user; it's an interface between your hardware and your software.

Then let's talk about the staples of an actual operating system.

Backward and forward binary compatibility

Let's explain these terms first.

Backward compatibility means software compiled in the past, or on an older version of an operating system, can still work in its future versions. For example, the vast majority of 32-bit software compiled for Windows 95 (released in 1995) still works in Windows 11. It's an extreme example, and it has cost Microsoft billions to maintain this level of compatibility, but it certainly exists.

Forward compatibility is when you compile software on, e.g., the current version of your operating system and it works on preceding releases, e.g. software compiled on Windows 11 running on Windows 7.

Here is where Linux falls short: there's no guaranteed or even implied backward or forward compatibility in Linux distros. As an extreme example, software compiled on Ubuntu 23.10 may not necessarily work on Ubuntu 24.10, and vice versa.

The Linux kernel itself does maintain near-infinite backward and forward compatibility with respect to syscalls, i.e. the kernel's low-level entry points. Unfortunately, since they are so low-level, almost nothing uses them directly: doing so would mean too much source code and too high a maintenance and development cost. These functions are wrapped, and you get either glibc (the GNU C library) or musl, neither of which guarantees forward compatibility: a binary linked against a newer glibc will generally refuse to start on a system that ships an older one. There are certain tricks to compile software against a newer glibc and make it run with older versions, but almost no one uses them.
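One of those tricks, sketched below under the assumption of GCC on x86-64, is GNU symbol versioning: the .symver directive binds a call to an older versioned glibc symbol, so the resulting binary can also run on systems shipping that older glibc. memcpy is the classic case, because its default version was bumped to GLIBC_2.14, while GLIBC_2.2.5 is the x86-64 baseline.

    /* glibc_pin.c: force memcpy to bind to an old glibc symbol version.
     * Build:  gcc glibc_pin.c -o glibc_pin
     * Check:  objdump -T glibc_pin | grep GLIBC
     */
    #include <stdio.h>
    #include <string.h>

    /* Bind memcpy to the x86-64 baseline version instead of GLIBC_2.14. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "portable", sizeof "portable");
        puts(dst);
        return 0;
    }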

In Linux you can perfectly well create applications that keep working for years, if not decades, by linking them statically. This means that when you compile software (turn it from source code into binary code that can actually be executed by your CPU), you link all the functions statically, i.e. you embed the entire implementation of every function, from the C library up to high-level libraries. This is not recommended and almost never done, for the following reasons:

  • The resulting binary will be very large
  • It's legally problematic for commercial closed-source software, because glibc's LGPL license imposes conditions on static linking
  • The functions you embed may contain vulnerabilities, which means you'll have to recompile and reship your application every time one of those vulnerabilities is found and fixed
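As a minimal sketch of what static linking looks like in practice (assuming GCC and a distro that ships static C library archives, e.g. a glibc-static or musl package):

    /* hello.c
     * Dynamic build:  gcc hello.c -o hello          (a few kilobytes,
     *                 needs a compatible glibc at runtime)
     * Static build:   gcc -static hello.c -o hello  (megabytes, but
     *                 self-contained; ldd reports "not a dynamic
     *                 executable")
     */
    #include <stdio.h>

    int main(void) {
        puts("Hello from a self-contained binary");
        return 0;
    }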

Some Linux distros used to maintain a certain level of compatibility called LSB (Linux Standard Base): if you compiled software against one LSB-compliant distribution, it was supposed to run on any other distribution supporting the same LSB version. Unfortunately, the initiative never gained real traction and was effectively abandoned nearly a decade ago.

Modern Linux distros have taken a different approach to this issue, known as Snap or Flatpak. The two are largely similar: they bundle applications together with a runtime, essentially an extra, almost complete Linux distro that gets installed alongside the software you install via Snap or Flatpak. This extra distro is frozen in time; its internal libraries may be updated when there are security fixes, but those updates always retain compatibility.

Snap and Flatpak have the same issues:

  • A lot of wasted disk space, because of the extra Linux distro(s) that have to be installed
  • The base distros/runtimes they use are updated from time to time, so instead of one extra Linux distro you can and will end up with several
  • Software packaged this way takes noticeably longer to start and uses more RAM and CPU
  • Software packaged this way often has trouble integrating with your desktop environment

Very long driver API/ABI compatibility

What's a driver? It's a piece of software, written for the operating system kernel, that allows the OS to talk to a specific hardware device.

What's an API? An API is basically a source-code-level definition of how program functions work. Here's an example function: int sum(int a, int b). What does it do? When you write sum(1, 2), it returns 3. Simple as that.
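To make that concrete, here's a minimal sketch in C. The header is the API: a contract that callers rely on. The implementation behind it can change freely as long as the contract holds.

    /* sum.h: the API, i.e. what callers are allowed to rely on. */
    int sum(int a, int b);

    /* sum.c: the implementation; free to change behind the API. */
    int sum(int a, int b) { return a + b; }

    /* main.c: a caller that needs only the API. */
    #include <stdio.h>
    int main(void) {
        printf("%d\n", sum(1, 2)); /* prints 3 */
        return 0;
    }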

What's an ABI? When you compile source code into binary code that your CPU can actually run, certain conventions are used: calling conventions, data structure layouts and offsets, symbol names, and so on. You don't need to understand the details; it just means there are different ways to compile the same source code. If you update your operating system or a compiler, these binary conventions can change, and software linked against the old version of a library may be unable to use the new version, because the conventions no longer match.
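Here's a hypothetical sketch of an ABI break that is not an API break: version 2 of a library adds a field to a struct. Source code that refers to the fields by name recompiles without changes, but a binary compiled against version 1 still reads the old offsets and now gets garbage.

    /* Version 1 of a library struct: binaries compiled against it
     * read x at offset 0 and y at offset 4. */
    struct point {
        int x;
        int y;
    };

    /* Version 2 (shown under a different name only so both compile in
     * this sketch): the field names (the API) are unchanged, but x and
     * y have moved to offsets 4 and 8. Old binaries still reading
     * offsets 0 and 4 now see the wrong data: the ABI is broken even
     * though the API is not. */
    struct point_v2 {
        int id; /* new field shifts every following offset */
        int x;
        int y;
    };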

Again, let's talk about Windows. Windows features at the very least 10 years of driver API/ABI compatibility, and in practice even more than that: many device drivers written for Windows 10 work in Windows 11, and vice versa.

Now what about Linux? There's zero in-kernel API/ABI stability for drivers. Every kernel release breaks ABI compatibility, and API compatibility may be broken as well, though not always. Regardless: it's not guaranteed, it's not maintained, and it's frequently broken. NVIDIA Linux users will attest to this.
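This is why out-of-tree driver code fills up with kernel version checks. One real example of the problem: in Linux 6.4 the class_create() helper lost its module argument, so a driver that has to build on both sides of that release ends up with something like the following sketch (demo_class and the "demo" name are made up for illustration):

    /* Fragment of a hypothetical out-of-tree driver coping with an
     * in-kernel API change between releases. */
    #include <linux/module.h>
    #include <linux/device.h>
    #include <linux/version.h>
    #include <linux/err.h>

    static struct class *demo_class;

    static int demo_create_class(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 4, 0)
        demo_class = class_create("demo");              /* new signature */
    #else
        demo_class = class_create(THIS_MODULE, "demo"); /* old signature */
    #endif
        return IS_ERR(demo_class) ? PTR_ERR(demo_class) : 0;
    }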

Drivers are independent of the kernel

In Windows, device drivers are independent of the operating system kernel and can be easily swapped. Regressions do happen even with WHQL certification, but then you can simply downgrade the driver. With Linux? Install an entirely different kernel or bust.

The issue is, sometimes you need a new kernel because it supports something that didn't work before, but now some of your older hardware is broken by a newly introduced regression. You cannot just take a working driver from, say, kernel 6.8 and use it while running kernel 6.9. And this scenario happens quite often to Linux users.

Regression testing

The Linux kernel has an insane number of regressions in every release. People don't talk about it much, but go to r/Fedora, askubuntu.com, or similar sites, and you'll read about dozens of them ... every day.

Granted, some companies do run regression testing, but only for the things they care about. They can't possibly test everything: they don't have access to all the hardware Linux supports, nor do they have the resources (money) to test it all.

In Windows, there's the WHQL program, which means that all new drivers submitted to Microsoft have to be thoroughly tested and all their features verified to work correctly. In Linux, patches are sent to the Linux kernel mailing list and are often merged as long as they compile; the kernel maintainers are under no obligation to ensure that these patches have been thoroughly tested.

For each Linux kernel release, at least a few hundred regressions are fixed. For LTS Linux kernel releases, this number is sometimes in the thousands.

Proper support channels

In Linux, if something breaks for you, who exactly are you going to report it to? Your distro's subreddit? No: that's mostly end users like you, with very basic technical skills. Unix & Linux Stack Exchange? Nope, that's not the place for regressions. The kernel Bugzilla, when you're damn sure it's a bug in the kernel? How would you even find that site? A bug tracker for your distribution? A huge number of bug reports there are abandoned, because 1) there are not enough maintainers, 2) the maintainers lack the necessary expertise, or 3) the problem is too complicated.

Granted, the situation in Windows is far from ideal either, and Microsoft support has long been ridiculed because its answer to every problem is to reinstall Windows from scratch. But since the OS gets a lot more testing, and drivers can be swapped out, deal-breaking problems occur a lot less frequently.

 
 

© 2024.

All rights reserved. You can reproduce the entire text verbatim, and you must retain the authorship and provide a link to this document.

 