
This debate about monolithic vs. micro-kernels has been had many times. Maybe this time the resolution is different, who knows. But FWIW, Linux didn't reach its success because someone made a feature comparison in a spreadsheet between it and what else was out there and somehow discovered how much better Linux was. Instead, Linux won (and continues to win) because it's the Rocky Balboa of operating systems. It may lose the first round, but it always comes back. And the reason for that is that Linux's biggest feature isn't necessarily technical. Rather, it's the community of people around it, the fact that it can tolerate a healthy dose of disagreement and infighting before eventually finding and settling on whatever solution best solves the next immediate problem, not some far-into-the-future idealistic goal. The downside to that development model is that radical changes take several iterations/years, while in a centrally-managed OS development model they can be shoved in "atomically" -- e.g. real-time, tracing, etc. You can devise many a great OS on paper and even implement them. Bootstrapping an entire ecosystem and, effectively, institutionalizing a completely open and nimble development model such as that of the Linux kernel is a whole other story.


And here's me thinking it was just because shared hosting providers didn't have to deal with insane and onerous licensing costs and VM isolation problems.


That also would have been true for BSD.

Linux probably also had good timing while the Unix wars were fought.


Yep. I remember choosing Linux in '93 because it had a huge momentum behind it (the hacker literally asked me "Linux or BSD" before handing me 4 floppies). I didn't really understand the technical or legal differences at the time, but it was clear that BSD wasn't as "hot".

In retrospect I really liked BSD for a lot of the ways it did things (more stable, excellent long-term backwards compatibility).


If I recall correctly, at the time, BSD was stuck in a lawsuit.

https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc.....

I'm wondering if it had a detrimental effect on adoption of *BSD, and thus increased Linux popularity at this early and critical time.


Many BSDheads I know from that time believe that was the case (I was asked by them "why didn't you go with BSD" and TBH I just liked the GPL license since I had recently read the section about Stallman in Hackers).

I think another issue is that Linux added new features rapidly, which probably helped adoption.


> I'm wondering if it had a detrimental effect on adoption of *BSD, and thus increased Linux popularity at this early and critical time.

Linus has said as much[1]: "If 386BSD had been available when I started on Linux, Linux would probably never had happened."

[1]: https://gondwanaland.com/meta/history/interview.html


Keep in mind, too, that path dependence plays a huge role here. Linux took off when the alternatives were Windows NT, Novell Netware, or commercial Unixes running on underpowered RISC hardware.


In the late 1990s, a huge number of Linux installs in business happened because of Samba and Apache.


Right! It takes nothing away from the Linux developers to acknowledge the enormous role of contingency in Linux' initial adoption.


"commercial Unixes running on underpowered RISC hardware."

Commercial Unix running on expensive hardware (and software) was the key opening for Linux, not the underpowered bit. Linux on x86 took a while to catch up performance-wise.


BSD was around!


It was, but it was in legal limbo.


After the legal limbo came a whole bunch of fractious disputes within and between multiple core teams -- initially FreeBSD (concentrating on PC-derived platforms) and NetBSD (cross-platform), both having some roots in the earlier, troubled 386BSD project led by Bill Jolitz, with some later forks from each group (most notably OpenBSD). Those persisted long after the legal issues were effectively settled, and really hindered the BSDs in general from keeping up with Linux-based OS distributions.


I have also read that, back then, Linux was better at supporting common hardware than the BSDs (https://news.ycombinator.com/item?id=21420338). My personal experience at the time is that I didn't even consider the BSDs; Linux had UMSDOS, which allowed me to try it out without having to repartition and/or reformat (then later I noticed that I was nearly always on Linux, so I reformatted a whole partition as ext2 and dedicated it exclusively to Linux).


Linux also had a very clear story for access to source code and a clear structure for accepting/rejecting contributions.


I think it’s a bit early to declare Linux’s market victory here.

The obvious thing for Google to do is to use this in Android, and this will solve a number of big problems for them (specifically, binary-only drivers and a better security model). They might even have some success in the server or desktop spaces as well.

I worry a bit that Linux will be the next Firefox.


>binary-only drivers

If by this you're referring to their promises of a stable driver ABI, I can't understand what problem this is supposed to solve compared to Linux. There are plenty of binary drivers already shipped on Linux. IoT device vendors don't care about a stable ABI because they just pin to their kernel version. Android device vendors don't care because either way they will stop updating their kernels after a number of years. Enterprise users don't care because they also pin to a kernel version and backport what fixes they want.

It also does not seem to solve any problem for Google because they still have to make the same stability/support promises and deal with the same accumulation of legacy code either way. Yes, this is necessary to stop fragmentation but the problem is nothing really changes here compared to what they would do if they were going to take on the cost of backporting Linux kernel fixes. The only realistic cost saving for them I could see is if the greater stability came from reducing the total amount of hardware that is supported compared to Linux. Which makes sense for them but at the same time completely eliminates the possibility of them ever seriously touting this as a Linux replacement.


> Android device vendors don't care because either way they will stop updating their kernels after a number of years

This one seems like a big problem to me. Maybe the handset manufacturer doesn’t care because they already made their profit, but this pushes the support burden onto app developers (including Google itself) who have to maintain support for these old Android devices that the manufacturers don’t care about.

This seems like a real issue to me, is there something I’m missing?


I agree that is a real issue. But my point is that Google itself is now going to have to maintain support for old Fuschia devices, because the burden is now on them to maintain this stable ABI. How is this going to solve anything compared to just making a support promise about a particular Linux version? Nothing here seems like it would improve for the app developers.


*Fuchsia


It's ok, nobody can spell fuchsia: https://blog.xkcd.com/2010/05/03/color-survey-results/

Back on topic though, I think it's just easier from an engineering perspective to maintain a stable ABI than it is to maintain a set of blessed kernel versions.

An ABI is something that can be reasonably well-defined and has a clear scope, whereas if you just say that you support kernel versions X, Y and Z, then who knows what weird undocumented behavior you'll need to maintain.


There is an effort to upstream all the Android kernel patches to make Android device kernel updates simple:

https://lwn.net/Articles/771974/

Also some vendors such as Sony have standardized kernel versions across many devices & publish kernel major version updates for a range of devices (in Sony's case called the open device program).


I agree with you on the first part: Linux's best weapon is its community and strong leadership.

But in my opinion, the second most important feature of Linux is in fact its ability to change large parts of the kernel when and if needed. And do it very very quickly.

And this is something microkernels cannot do. They are just slower when making major changes that touch many parts of their OS.


Isn't this more so related to having all relevant code in the same repo rather than about being a monolithic kernel? As long as an OS's out of tree contract with out of tree users is well defined, refactoring code within the repo shouldn't be any more difficult. The fact that Linux's contract is the very obvious division between kernel and userspace doesn't much matter.


I am sure having all code in the same repo helps but I was mainly thinking about how changes across multiple services is often very time consuming in microkernel designs and requires very careful analysis.


Fuchsia/Zircon claims not to be a microkernel. It definitely isn't a classic microkernel like Mach or L4; Zircon is still responsible for a very large number of syscalls.

However, core components such as graphics, file systems, hardware devices, etc. are moved into userland, so in that sense it follows the microkernel idea of putting as little as possible in the kernel itself.

Personally, I love the design, and hope it takes off. We really need to get away from letting kernels and devices have elevated privileges in an OS.

As far as I'm concerned, Tanenbaum won the monolith/microkernel debate — Minix 3 is a true microkernel, and is currently a popular niche OS — and the success of Linux does not diminish the argument. (The infamous failure of GNU Hurd doesn't, either.)


> Minix 3 is a true microkernel, and is currently a popular niche OS

Isn't it the most widely deployed operating system in the world? It's embedded in basically every Intel CPU


This keeps getting repeated, but I don’t understand how that would lead to it being the most popular OS there is. There are way more embedded systems than there are Intel CPUs on the market, and they often run Linux. Android phones are sold at more than four times the volume of PCs a year. I’d assume that most Intel CPUs are deployed in the data centers... that run Linux.


But the CPUs deployed in datacenters also run Minix, even the ones running Linux or Windows.


Yeah, but the point was that there are without a doubt more Linux instances running on ARM than there are Intel CPUs in total. Then even of the Intel processors that do have Minix, a sizable number are running Linux. Therefore Minix can’t be nearly as widely used as Linux.


What about the ones that are on fridges?


But they said "widely deployed", not "popular". No one is claiming Minix is the most popular OS in the world.


There are far more phones than desktops, and far more desktops than servers, so I doubt it. Especially if you count by OS instance (counting VMs) rather than by machine.


True!


That so many OS developers are working on Fuchsia and other kernels suggests that the Linux community hasn't been very successful with inclusion and handling disagreement and infighting.


Linux won because of Steve Ballmer trying to torpedo it in every conceivable manner. The brave people contributing to and using it developed a kind of Robin Hood mentality.

Without a hate figure like Steve Ballmer, its net relative momentum will decline. It just happens that so many other big players have joined the bandwagon and contribute that you don't notice that relative decline.


No, when Ballmer decided to torpedo Linux, it was starting to make inroads into the server markets. Momentum had been there for some time already.



