I'm more excited about the possibility of a high quality desktop OS rather than the kernel. The Linux kernel is great but a great open-source desktop OS doesn't exist today.
Specifically, this product doesn't exist today:
- Desktop environment that matches or surpasses Mac OS in quality, performance and UX design.
- It includes seamless synchronization between devices.
- Apps are sandboxed, similarly to Android or iOS.
- There is a clearly defined platform / SDK, like in Android or iOS. If it works in your development environment, it's guaranteed to work everywhere.
- You can easily make a commercial product, like in Android or iOS.
- It has 10% of the market (or software creators believe it's on the trajectory to get there).
I don't understand why Google hasn't done this yet. Of course, the answer could be that such a project just doesn't have the required ROI.
Google is a web company; they want people to use the web. They made a laptop/desktop operating system built around their web browser, because they want everything to be on the web. It does most of the things you list, including synchronizing between devices since your data is all "in the Cloud".
As for market share, I'm not sure what power you think Google has, but getting 10% of the desktop OS market has got to be pretty difficult for anyone. I do not know what share of the market Chrome OS has.
No. I mean a full-fledged alternative to the major operating systems. Do people in charge of Chrome OS say "In 5 years, we want developers, designers and project managers at Google to use this OS"? I don't think so - it's not their ambition to compete with Mac OS or Linux.
I'm really enjoying ChromeOS for software dev. The Linux container is tightly integrated with the rest of the OS. On top of that I can run all my favorite Android apps as well, making it an excellent OS for personal use.
One interesting example is that if I have a file I want to open, it doesn't matter if it's Chrome, Android, or Linux that has the executable to open it. I just click the file and it opens in the right app. I can also open it in any other apps via a dialogue that lists things I can open it with. The list shows all relevant Android and Linux apps.
The app launcher is similar -- all apps, regardless of how they are run, show up together.
I'm still not sure I could give up my Mac for my day job, but for personal use, I love it.
As others have mentioned, it's an officially supported feature and it's well integrated into ChromeOS itself. It's still "Beta", but it's now in a really solid state.
Perhaps you would like to share a screenshot or a pointer to some documentation. Chrome OS forces users to sign in to run apps and use extensions. Sounds like this is no better than crouton or crostini.
This feature is Crostini. I now know you either have half-knowledge or are intentionally posting in bad faith.
You don't need to turn developer mode on for this feature. That means you get verified boot, updates (including your Linux distro) as well as the security guarantees of Chrome OS. Crouton requires you to give up all of that. You can't get sync without signing in. Cloud sync is the foundation of the OS. It's what makes it stateless. If you don't like that it's not for you but stop spreading half truths about the product in bad faith.
Ok, but the Linux container is completely at odds with Google's ChromeOS pitch.
Google says that ChromeOS has "simple setup" - but not the Linux parts. They say you can "search anything on your Chromebook" - but not Linux. They talk about "Chrome sync" - doesn't apply to Linux. Etc.
I think that ChromeOS has value as a web-browser host, and also as a development machine for vim-jockeys. What's missing is the middle part: a real desktop OS.
They don't say that, because that's not their goal. Google makes money from consumers being online, searching, browsing the internet, and using Google apps. It doesn't matter if that person is using Windows, Mac, Linux, or some imaginative Google OS to connect to the internet. They profit either way.
Another data point: I was an intern at Google last year and they gave most of the interns Pixelbooks. It worked fine for development (granted, development consisted of SSHing into a dev box and using a web-based IDE).
The whole point is that you can't fight the old "full-fledged" OS with a new one. The aim of Google's web is to reduce the role of the OS to the "BIOS".
The problem with Chrome OS is that it's designed around treating your hardware like a kiosk. This is evident from design decisions like including an SSH console, but then disabling the escape sequences (see: https://codereview.chromium.org/5183004/).
ChromeOS does meet many of the criteria listed by the GP. If one doesn't mind the Google ecosystem, a Chromebook is a good computing device in the lower-to-mid price range[1].
Btw, people are already doing some interesting projects with Fuchsia like this desktop shell written in Flutter for a Fuchsia fork[2].
Chrome OS is not 100% open source and the bits that are not in Chromium OS are significant. It is also effectively tied to certain hardware. Users cannot easily install it on whatever hardware they choose.
The Chromium OS FAQ contains no link to download the source code and contains this little gem:
"Keep in mind that Chromium OS is not for general consumer use."
LOL a whole guide "secretly hidden" on the Internet, to get and build the source code.
Right above that line -
"Where can I download Chromium OS?
If you are the kind of developer who likes to build an open source operating system from scratch, you can follow the developer instructions to check out Chromium OS, build it and experiment with it. A number of sites have also posted pre-built binaries of Chromium OS. However, these downloads are not verified by Google, therefore please ensure you trust the site you are downloading these from."
What if the user is not "the kind of developer who likes to build an open source operating system from scratch"?
The fact is that this FAQ on www.chromium.org gives no pointers into Google Git to find the files to which you refer. It does not even give a link to chromium.googlesource.org, aside from the BSD license. If the reader following the "For everyone" link is not a developer, not a "UI designer" and not a "Contributor", she does not want to be treated like those types of people, she just wants the source code. If she were a developer she would have followed the "For developers" link at www.chromium.org. Interestingly, there is no "For users" link at www.chromium.org.
Not everyone looking for source code is a "developer", or thinks like one, and even if they are a developer, they may not be one "who likes to build an open source OS from scratch".
I am an end user, not a developer, and I have been building an open source OS from source code for over 15 years. This FAQ makes some bold assumptions about end users. I do not believe that calling this out is "disingenuous", a "put down", nor "snark", at least, not according to the definition of that term I found on FOLDOC. I have always run my customised systems in a configuration similar to what Google calls "developer mode" (in fact, I use a more flexible, simpler configuration), however I am not a developer. The name "developer mode" is suggestive and silly.
One does not need to be a "developer", or think like one, to read, edit, write or compile software. Google seems to prefer to pretend such users do not exist. A Chrome OS user who is not a developer, UI designer nor contributor who wants the source code gets funnelled to the Chromium OS website, and then is provided with the advice to "follow the developer instructions". No link to the source code, or even to the "For developers" section. To add to this, the user is given the caveat that "Chromium OS is not for general consumer use". This "For everyone" page is a dead end for non-developer users who want source code.
Imagine you are a "general consumer", i.e., an end-user, who is told Chrome OS is "open source". You go looking for the source code tree and tarballs and you are directed to Chromium OS. Then you arrive at www.chromium.org and are told Chromium OS is "not intended for general consumer use" and "if you are a developer who likes to build an open source operating system from scratch you can follow the developer instructions". This is extremely presumptuous.
Not everyone who wants to see the source code wants to build Chromium OS. Building "modern" web browsers like Chrome from scratch is unreasonably resource intensive. Some people may not want to read through all the opinionated developer instructions whose primary audience appears to be Google staff. Some people just want the source code to Chrome OS. That's it.
Yes, people not working for Google can get it. I can get it (with some effort). I never said otherwise. However, as I pointed out, it is "Chromium OS" not "Chrome OS" and there is a lot of cult-like mumbo jumbo for non-Google staff to wade through in order to find what one is looking for. The "open source" OS is Chromium OS not Chrome OS. That is not a "half-truth" it is a fact. Is all this a deterrent for those interested in Chrome OS source code? That is for the reader to decide. I think it could be made easier but that is only my opinion.
Hear hear. It seems to me that any such effort either has to be built on the Linux kernel (forgoing much of the horrid userland) or rely on something like Fuchsia that's backed by a large corporate entity becoming so widely used as to have significant driver buy-in from hardware manufacturers.
Unfortunately, I don't believe that relying on a for-profit third party entity like Google is a good idea due to the conflict of interest, even if their OS is FOSS. Just look at what Google has done throwing its weight around as the big-man in browsers, even though the browser engine is FOSS.
> Desktop environment that matches or surpasses Mac OS in quality, performance and UX design.
I've got Plasma set up to take advantage of a decade of Mac muscle memory, and think it does "macOS" better than macOS itself.
> Apps are sandboxed, similarly to Android or iOS
Install firejail, and its wrapper scripts. The wrapper will automatically wrap common commands and apps in a sandbox, and you can use the firejail CLI to launch apps that it doesn't wrap.
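To make that concrete, here's a minimal sketch of firejail usage (this assumes firejail is installed; "firefox" is just an example target, substitute any app you want to sandbox):

```shell
# Minimal firejail sketch (assumes firejail is installed; "firefox" is
# just an example target -- substitute any app you want to sandbox).
if command -v firejail >/dev/null 2>&1; then
    firejail firefox &            # run under the app's default profile
    firejail --private firefox &  # run with a throwaway home directory
    firejail --net=none firefox & # run with no network access at all
    firejail --list               # show processes currently sandboxed
else
    echo "firejail not installed"
fi
```

The wrapper scripts mentioned above do the first form automatically for known apps, so usually you only reach for the CLI flags when you want a tighter profile.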
I agree with your other points, however I lament that BSD-style licensing will prevent a lot of cool things from having their sources see the light of day or be upstreamed.
- Latte-dock with auto-hide for a dock, and custom KWin button arrangements to mimic macOS.
- A top panel with Window Title + Window Appmenu Plasmoids for a global menu.
- System Tray and Event Calendar Plasmoids on the same panel; the latter has a drop-down calendar like in macOS.
- KRunner, or either the Search or Application Launcher Plasmoids, to replace Spotlight.
- The Screen Edges setting can mimic macOS hot corners.
- Widgets on the desktop are a compromise for a Dashboard replacement, as is the Launchpad Plasma Plasmoid.
- Workspaces, Activities and various Screen Edges views can replace Mission Control.
- Dolphin can be configured to look like Finder, and you can set Finder keyboard shortcuts, too.
- There's a setting somewhere to make menus transparent and blurred, as well.
Most of these are extensible or scriptable, and have actions that can be triggered with custom shortcuts. KRunner in particular is extensible. Plasma lets you change or set shortcuts for everything, so, for example, you can mimic the Mac screenshot shortcuts well, since KDE's Spectacle is comparable to the latest updates to screenshot functionality in macOS.
Oh, and symlink 'xdg-open' to 'open', or create an alias.
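Concretely, that's a one-liner either way (a sketch; it assumes ~/.local/bin is on your PATH):

```shell
# Give Linux a macOS-style `open` command.
mkdir -p "$HOME/.local/bin"
ln -sf "$(command -v xdg-open || echo /usr/bin/xdg-open)" "$HOME/.local/bin/open"
# ...or, for interactive shells only, put an alias in your ~/.bashrc:
alias open='xdg-open'
```

After that, `open somefile.pdf` behaves like it does on a Mac, dispatching to whatever app is registered for the file type.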
>I don't understand why Google hasn't done this yet.
If Google or some similar company were to make a desktop OS, it would probably be packed with so many monetization "features" that the free software community wouldn't be very interested and may even see it as a threat. Chrome OS already demonstrated most of this pattern.
The elementary project has done some great work with their UI, but the limitations of a decentralized funding model mean that it just isn't as complete as the major OSes. Meanwhile the jousting between Red Hat and Canonical over control of the GNOME ecosystem alienates a lot of users. Meanwhile corporate control of Qt has limited KDE's popularity. In all three cases the organizational problems associated with developing a truly freedom-preserving OS in a profit-oriented world crop up in different ways. Google taking over doesn't sound like a solution to me.
Some kind of alliance between free software developers and hardware manufacturers has long been dreamt about but hasn't materialized over the last two decades. All too often, Linux OEMs ship a half-baked homegrown distribution and tie their hardware to it. We've also hoped that democratic countries will decide that a free and open-source operating system would be good for national security and fund development. In practice, interest has mostly come from Russia and China, with user freedom off the priority list.
Google isn't a deity. They don't exist to serve humanity but rather their investors. The organization problem continues to be difficult.
I don't know if you've tried Plasma but it's an excellent DE that hits your "seamless synchronization" and "better UX than MacOS" requirements.
Sandboxed apps are a no-go on a productive desktop. Have you tried using Snaps? They're sandboxed by default, and it's extremely annoying having barriers in the way of sharing data between them. Sandboxing is the sworn enemy of composition. It's like keeping all the interior doors of your house locked.
Clearly defined platform / guaranteed to work: Linux-the-kernel provides a binary-compatible ABI. This is entirely sufficient, if you bundle everything including libc. If you don't want to do that, well glibc is backwards-compatible so you can just build against some reasonably old version of glibc and it'll work on any recent distro. This isn't some obscure method - lots of software is binary-distributed this way (Firefox, Blender).
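A quick way to sanity-check that claim before shipping a binary is to look at which versioned glibc symbols it actually references; the newest one sets the minimum glibc it will run on. Here /bin/ls just stands in for your own binary:

```shell
# Print the newest glibc symbol version a binary links against.
# That version is the minimum glibc the binary needs at runtime,
# so keeping it low means the binary runs on older distros too.
BIN=/bin/ls    # substitute your own binary
objdump -T "$BIN" 2>/dev/null | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1
```

If that prints, say, GLIBC_2.17, the binary should load on any distro shipping glibc 2.17 or newer, which is how Firefox- and Blender-style "one tarball for all distros" releases work in practice.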
Easily make a commercial product: well, I'm currently using Pointwise and Mathematica on Linux, so it doesn't appear to be that hard.
KDE does most of what you're talking about. Qt is a stable, established SDK, KDE Connect offers seamless integration, Plasma has a very good UX, and so on. Only thing missing is market share and a flagship distro.
If something works in your Android dev environment, it is absolutely not guaranteed to work everywhere, or work the same everywhere. Differences exist between vendors, and even between different models from the same vendor.
I don't think that's very true. Smartphone-like sandboxing is widely praised for its security and stability benefits; UWP failed due to a lot of other factors (e.g. the Windows 10 Store, lack of Windows 7 support, etc.)
It wasn't _just_ the sandboxing. It took an unreasonably long time for MS to loosen restrictions on "sideloading" (read: distributing sandboxed apps outside of the Microsoft Store), and the deployment story for line-of-business applications was abysmal for years.
I'm not saying the AppContainer sandbox is perfect, but I suspect it would have gone over a lot better if the early distribution story for UWP had been less of a disaster.
I don't think anyone rejected it because of sandboxing. It is a new API that would only work on windows with that installed. Learning something completely new for a niche target is not very enticing. Also, if you can write something to be sandboxed, there is a good chance it can be a web page instead.
Sandboxing also needs to come from the other direction.
Requiring programs to be written specifically for sandboxing defeats the purpose. What is needed is the ability to contain all of a program's files in one place, isolate its access to the file system, and gate other resources behind permissions.
Developer push back first started with windows 8 with fears of lockdown. Valve's CEO was the most vocal because it looked like MS's end game would be to crush steam.
Microsoft didn't help matters with the 99 dollar developer fee - not high per se, but in what had been a free-for-all operating system, pushback was inevitable.
Microsoft basically said to use this shiny new crippled toy, you must use our store and pay money.
Microsoft also fanned the fire when they announced that only one Metro browser was allowed. That is, you could only use the Metro version of your default desktop browser; others would be disabled.
For me as a user, I hated (and still hate) UWP, Metro and all its incarnations, because:
UWP apps are slower than regular apps. In the early days they crashed a lot. They wasted desktop real estate with too much whitespace. Initially they were not resizable like normal Windows applications. Oh... and the hidden settings menu sucked.
Microsoft also consistently undermined the new format by limiting UWP to the latest operating system. Metro apps couldn't run on Windows 7. Most Universal apps made for Windows 10 won't work on Windows 8.1 and lower. Some wouldn't even work on some Windows 10 versions.
For developers, having to maintain multiple versions written with different API's for "one operating system" is crazy.
UWP apps are basically unusable without mouse or touch screen. Almost all the shortcuts we know and love don't work.
This flat UI nonsense that makes it difficult for users to detect the active menu or even clickable items was started by Microsoft with Windows 8. Google and Apple take the credit because Microsoft failed with mobile.
If you believe in such a thing as a Windows way, then UWP was absolutely not the Windows way of doing things. Windows developers, used to the Windows way, had only one reasonable answer.
So they thought. Hence Project Reunion is making explicit what has already been slowly happening since 2018: exposing the UWP model to Win32, including sandboxing.
In a couple of years every Win32 app will think it still owns the OS, while they are actually playing on their little virtualized OS, using the same pico process model as WSL.
But that would basically make it an awkward Linux distribution that doesn't have much benefits over the existing ones. It seems like an afterthought, a second class citizen.
Where's the documentation for Chrome OS SDK (for native apps)?
Linux Files and Apps are first-class citizens in the OS. They appear in all menus and settings. You'd develop apps as usual for a Debian distribution. Nothing needs to change.
I use ChromeOS as my daily driver but even with Linux and Android app support it doesn't work as a dev machine. The hardware isn't there and it's just not positioned well enough for vendors to take it seriously.
I think the missing keystone within the "UX design" category is physical touchpad quality and its driver. The Mac touchpad is so accurate, responsive, jitter-free and effortless that I've yet to come across its like in Windows or *nix land. I would more than love to be proved wrong in this thread!
I agree. I'm not sure why the mac touchpad is so far and away better than everything else. Touch screen and gestures are just as good on android phones as apple phones. Idk
I laughed to myself that maybe Linux laptops should just use a slimmed-down Android phone as a touchpad. But then, thinking about it, maybe it'd actually work. It could double as a secure enclave, similar to the Touch Bar on MacBooks.
Sandboxing in Linux is developing with both Snaps and Flatpaks.
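For the Flatpak half, a sketch of what that looks like in practice (this assumes flatpak is installed; org.gimp.GIMP is just an example app id):

```shell
# Flatpak sandboxing sketch (assumes flatpak is installed;
# org.gimp.GIMP below is just an example app id).
if command -v flatpak >/dev/null 2>&1; then
    flatpak list --app     # apps currently installed, each one sandboxed
    # Sandbox permissions are explicit and adjustable per app, e.g.
    # revoking home-directory access for a single app:
    # flatpak override --user --nofilesystem=home org.gimp.GIMP
else
    echo "flatpak not installed"
fi
```

The point being that, unlike a classic distro package, the permission set travels with the app and the user can tighten or loosen it without touching the app itself.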
Linux really does miss an SDK though. Realistically, if you target Ubuntu you'll reach the majority of Linux users, but the community of power users (ugh, hate that phrase but can't think of a better one) and developers is scattered across any number of desktops and Linux variants.
We build and package Ardour, a cross-platform DAW, as a single installable for every version of Linux except NixOS. Same installer, same package. Every version of Linux (except NixOS).
> There is a clearly defined platform / SDK, like in Android or iOS. If it works in your development environment, it's guaranteed to work everywhere.
Are iOS/Android SDKs good examples of stable platforms? They are just as much a moving target as anything else and the only thing certain about them is that they _will_ require maintenance.
I really doubt Google has any interest in building a "great open-source desktop OS". Everything they've done with Android suggests they'll throw releases over the wall, but keep any interesting or useful bits proprietary & closely tied to Google cloud services.
Each component uses its own resources and logic to make up "verified boot". For example, there is dm-verity for block-level hashes, and then there is key signing for images.
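The dm-verity piece can be illustrated with the userspace tooling (a rough sketch; it assumes veritysetup from the cryptsetup package is installed, and data.img / hashtree.img are made-up file names):

```shell
# Rough illustration of dm-verity, one ingredient of verified boot.
# Assumes veritysetup (from cryptsetup) is available; the file names
# data.img and hashtree.img are invented for this example.
dd if=/dev/urandom of=data.img bs=1M count=4 2>/dev/null
if command -v veritysetup >/dev/null 2>&1; then
    # Build a Merkle hash tree over the image; this prints the root
    # hash that a signed boot image would embed. At boot, every block
    # read is checked against the tree, so offline tampering is caught.
    veritysetup format data.img hashtree.img
else
    echo "veritysetup not installed"
fi
```

Key signing then covers the other half: the root hash itself is protected by a signature chain back to firmware, so neither the data nor the hash tree can be silently swapped.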
You said Chrome OS was open. In another comment, I pointed out that only Chromium OS is open. I asked you for a link to the Chrome OS source code. You provided a link to Chromium OS source code. Thank you.
Has Google ever created any non-web product that it supported for more than a token amount of time, that it continued to iterate on for years, and that was not linked to Google properties?
A good functioning Desktop experience for Linux is impossible.
- It requires a massive centralized investment. Microsoft/Apple/Google can do that. But Linux is run by amateurs (not talking about skills but rather guys who are not on a payroll).
- Linux is for people who have rather weird setups. Apple uses limited hardware (i.e., you have a few MacBook Pro models, etc.). Windows is supported by manufacturers because it is the standard of desktop computing for most of the world (and still can be a mess). Linux, on the other hand, has to do all that work alone.
- Linux's professional and profitable market is still in servers/deployment, where you don't need a desktop space.
Ah, Fuchsia... another of Google's solutions looking for a problem.
I remember hearing about it and immediately asking "what is the market for this?" and not being satisfied with the answer.
On the one side you have Samsung, who, unhappy as they are with their dependence on Google, are still not likely to eat the cost of moving from Android to something else that has no strategic benefit to them (as far as I could tell, anyway).
So what does that leave? Google making its own devices. I don't know this to be the case, but I strongly suspect the Pixel line was (and may still be?) intended to be the spark for Google becoming a vertically integrated first-party seller of mobile devices. What I mean is these aren't just proof-of-concept or developer devices.
The problem there is by doing that they'll hurt their relationships with Android OEMs and it'd take a long time to fill that void, if that's even possible.
So where else? The obvious answer is... Chromecast. This is a hardware product Google has had a good amount of success with. It's already first-party.
A recently leaked report [1] seems to indicate that the next Chromecast will be Android TV. That is (IMHO) bad news for Fuchsia.
You have to also take into account that during the time Fuchsia has been around (which must be 4+ years by now?) there's been a change at the top with Sundar replacing Larry. That's always a dangerous time for high-profile and high-cost (literally $billions) projects with no customers. Sundar came from Chrome. Fuchsia didn't. You have to ask questions about who was the original (SVP+) cheerleader who got it off the ground and funded it. Was Larry on board? Does that sponsor enjoy the same support under Sundar that they did under Larry?
I don't know the answer to these questions, just that they matter. Marissa Mayer's departure from Google was largely a product of a change at the top: she enjoyed Eric's favour, but all signs pointed to Larry not being as big a fan.
Fuchsia probably should've been called Graphene. Graphene can do everything but leave the lab, after all.
Traditionally, large money-making companies liked having big, obvious loss areas, so that when the hard times come, they can just cut that and restore the illusion of growth on Wall Street. It only buys you a couple of quarters of fake growth, but for Wall Street, that's often enough.
People like to claim these things anecdotally but if you sit down for a few minutes and think about it then it doesn't really make sense. By doing something like that you'd sacrifice current growth, and since money today is worth more than money in the future cutting dead weight now would be more valuable to the shareholders (not to mention the fact that the losses accumulate each year as cash being drained from your accounts).
The reality is probably that companies are making a bet on some far-fetched technology. For example, Fuchsia or the developers behind it could lead to something insanely profitable, but there's only a 20% chance. If you're in a good year then the investment is worthwhile because the expected value is still positive. If you're in a bad year you can't afford to wait for the investment to pan out (not to mention your risk model goes totally out of whack; if the market is doing poorly consumers won't shell out cash for this new tech) so you have to cut costs in that area.
Shareholders are not rational, and CEOs believe shareholders to be even more irrational than they actually are. There's plenty of research showing that CEOs do things they think look good to investors, despite evidence to the contrary. What comes to mind is research on FIFO vs. LIFO inventory accounting. One understates income with the benefit of a lower tax liability, and the other overstates income to investors at the cost of paying more in taxes. Statistically, investors punish CEOs for leaving tax cuts on the table by an amount greater than the value of the supposed tax savings, while CEOs insist the opposite is true and that investors are rewarding them for overstating income. Managing your books is widespread and commonplace. People downsize, hide liabilities, massage earnings, and do irrational things regardless of long-term consequences, often on the basis of perceived rather than actual consequences.
In this particular case, though, you're probably right. Android itself was one of those wacky ideas that no one expected to gain any traction. Google funds moonshot projects as a matter of company culture and self-identity, and it seems to actually make them money. Of course, the possibility still exists to cut them for short-term gain, and someone surely factors that into the equation.
I like the insight in this comment, but I worry that you subscribe to the idea that there is any kind of sanity in business decisions in a large org. As a researcher pleb mucking around in the undergrowth of a university organisation, many of the decisions admin makes don't make any strategic sense if the university is considered as a rational machine looking out for its own good. If instead the university is seen as being controlled by a bunch of self-serving sociopaths looking to get a leg-up on the competition, the insane decisions admin makes suddenly become a lot more understandable.
Ah, the "senior-engineer retention project". And when those senior engineers are needed elsewhere, or get bored and retire or go to another company, only the senior engineers that were not that good keep on working on that thing. That's usually when the useless-by-design project starts getting pressure to show that it's actually useful, and when it surprisingly isn't, it gets the axe.
And while that happens, the cream of the top talent has already left, or leaves because of the bullshit that comes with the pressure to show marketability for a research project.
People seem to forget this is Google. Yes, this take might be a bit cynical, but given Google's track record of projects that sound great but then burn and die, how sure are we that this won't be another one? Google working on something and it becoming the next Chrome or Android isn't a sure bet given history. It's more like a >60% probability it will become abandoned at some point after enough careers within the company are advanced.
I don't know if this is Google's intention, but I could certainly see Fuchsia being used for IoT devices.
IoT devices have a big problem with software updates. The people who make them are primarily interested in selling hardware, not supporting it for the many years it will probably be in your home. (You might replace a smartphone every two years, but how often do you replace a light switch or thermostat?)
Google says Fuchsia is built to be easy to update. Different components are able to be upgraded independently. There is a stable binary API for drivers.
Imagine a world where IoT devices are built to some Fuchsia-based standard which allows them to continue to be updated even when the manufacturer inevitably abandons the devices and their users. You could get more out of the devices, and you could worry less about security.
It wouldn't solve 100% of the problem, though, because there would always be some device-specific code, including device drivers and any software that enables unique capabilities of the device.
Apparently Google is already running Fuchsia (or at least Zircon) internally on several smart home devices, such as the Home Hub. Here [1] is an article from last year, still relevant.
Which several ones? The old puck-shaped access points don't run it, and the Hub is the only other device with a processor that's capable of running it.
What was the market for Linux when it came out? The thing was a copy of an existing OS, made by a guy in his spare time.
What was the market for PHP when it came out? It was just a few helpers to write web pages, simpler to use than Perl, which already existed at the time.
What was the market for Go when it came out? Yes, it did have a few modern things built in, but there were already tons of widely deployed languages.
And yet here we are. I don't need to tell you how successful each of those is today. I don't mean to say that Fuchsia will definitely be useful or deployed in any large fashion; just that "there is no market" is not a valid counterpoint to the development of a new project.
> What was the market for Linux when it came out? The thing was a copy of an existing OS, made by a guy in his spare time.
> What was the market for PHP when it came out? It was just a few helpers to write web pages, simpler to use than Perl, which already existed at the time.
The markets were clearly there (and huge!) for a free Unix-like OS and for a better replacement for CGI scripts. The fact that some people didn't see the potential for Linux or PHP is wholly separate from their market opportunities.
The question here remains: where is the market today for a new micro-kernel OS? The likely answer would have to be device makers... but which ones can actually benefit from Fuchsia?
> On the one side you have Samsung, who are unhappy enough with their dependence on Google that they'll not likely eat the cost of moving from Android to something else...
Strange argument. Maybe they will move all their devices to their own superior OS Tizen. Or maybe they tried and it did not turn out very well.
So whichever side Samsung is on, it does not affect Google. There are many phone makers, but not that many phone OS makers. And with Chinese manufacturers already pounding them, they have no leverage over Google.
> Maybe they will move all their devices to their own superior OS Tizen. Or maybe they tried and it did not turn out very well.
Google (via the Open Handset Alliance) managed to enforce the current status quo. Manufacturers can't simultaneously ship their own OS like forked versions of Android, or presumably Tizen, and still be allowed to use Google Play Services. It's all-Google or nothing. I don't know whether Tizen is superior or not but one could assume it isn't simply due to the lacking ecosystem. Windows Phone was a good OS. It was shot in the foot by a weak app ecosystem.
> So whichever side Samsung is it does not affect Google.
Samsung has 20% of the market and a lot more of the high end market. Going with "not Google" means a lot of users no longer using Google Play Store and services. It most certainly would "affect" Google.
It’s a game of chicken: one Samsung can’t win, but also one Google probably doesn’t want to inflict on them. Look at how well Huawei is doing outside China without Google services (they’re dead in the water). Samsung doesn’t want that and can’t build an app ecosystem on their own. If Google pulls Android, and more importantly the Android SDKs and dev tools, toward Fuchsia, I guarantee you that Samsung will follow.
At the same time, I think Google would be foolish to intentionally antagonize one of their most successful partners. If Samsung’s bungled reaction were to try to go it alone with Tizen, or to hold out on upgrading their phones to a new Android, it would only help Apple and iOS, and that’s the last thing Google wants.
> It’s a game of chicken and one Samsung can’t win
Indeed, they most likely can't (easily) make Tizen a commercial success to rival Android, and even using an Android fork would still be far from ideal without Google's Play Services.
But Google probably can't legally sustain this position for long and hold OEMs hostage, especially with all the antitrust scrutiny they are facing now. Conditioning access to Play Services on using Google exclusively on every device sounds like something that could be considered anti-competitive.
My understanding was not that a manufacturer had to use a Google OS on all their devices, but that they could not ship Android without Google services. So Samsung can (and has in minuscule quantities) ship a Tizen phone. Or a Windows phone. They just cannot ship an AOSP phone.
Everyone in China ships AOSP phones. You’re completely free to. But everyone wants Google services, because everywhere outside China they’re ubiquitous.
And Tizen will never be a successful software ecosystem on the order of Android or iOS. Samsung is all thumbs when it comes to software. They don’t understand how to build and sustain a software platform, or what third-party devs need.
> Samsung has 20% of the market and a lot more of the high end market. Going with "not Google" means a lot of users no longer using Google Play Store and services. It most certainly would "affect" Google.
That will make Samsung phones very pretty door stops. Users do not care about phones. Users care about apps that run on those phones. Samsung has no apps that users care about ( IG/Facebook/TikTok/Snapchat/Banking/News/Tinder/etc ) without Google Play.
That's the point I made in the comment. Having no apps will shoot even good OSes in the foot. This being said all Samsung needs is for an antitrust investigation to break the exclusivity clause.
Samsung doesn't need to drop Android, they need to be able to sell phones with other OSes. Tizen probably fits the bill for low-end, cheap phones. With the exclusivity clause in place Samsung would have to bridge the canyon between the current Tizen ecosystem and where Samsung would want it in one step. Removing the limitation just gives them the chance to take it one step at a time with building their ecosystem (not a pleasant thought looking at the state of their software but hey...).
I would argue that it is not possible to slowly build that ecosystem. Samsung apps are crap, and the popular apps that people want give developers no motivation to support yet another platform.
That's why Windows Phone failed -- even though Microsoft offered shops with reasonably popular apps both money and placement to make them available in its store, in the end the cost of supporting another platform was too high.
Yeah, all those banking apps, streaming clients and even games that require a locked bootloader and a signed image are basically another wave of trusted computing. Totally disgusting.
Everything out of PARC in the 70s was a solution looking for a problem. Fuchsia, in my mind, has always been a research project. I find it weird that all your counterexamples were consumer-facing, when Google is trying to grow an enterprise cloud business and we are seeing a push for ARM in the cloud. Fuchsia could have datacenter impacts for Google internally.
This debate about monolithic vs. micro-kernels has been had many times. Maybe this time the resolution is different, who knows. But FWIW, Linux didn't reach its success because someone made a feature comparison in a spreadsheet between it and whatever else was out there and somehow discovered that Linux was so much better. Instead, Linux won (and continues to win) because it's the Rocky Balboa of operating systems. It may lose the first round, but it always comes back. And the reason for that is that Linux's biggest feature isn't necessarily technical. Rather, it's the community of people around it, the fact that it can tolerate a healthy dose of disagreement and infighting before eventually finding and settling on whatever best solves the next immediate problem, not some far-into-the-future idealistic goal. The downside to that development model is that radical changes take several iterations/years, while in a centrally-managed OS development model they can be shoved in "atomically" -- e.g. real-time, tracing, etc. You can devise many a great OS on paper and even implement them. Bootstrapping an entire ecosystem and, effectively, institutionalizing a completely open and nimble development model such as that of the Linux kernel is a whole other story.
And here's me thinking it was just because shared hosting providers didn't have to deal with insane and onerous licensing costs and VM isolation problems.
Yep. I remember choosing Linux in '93 because it had a huge momentum behind it (the hacker literally asked me "Linux or BSD" before handing me 4 floppies). I didn't really understand the technical or legal differences at the time, but it was clear that BSD wasn't as "hot".
In retrospect I really liked BSD for a lot of the ways it did things (more stable, excellent long-term backwards compatibility).
Many BSDheads I know from that time believe that was the case (I was asked by them "why didn't you go with BSD" and TBH I just liked the GPL license since I had recently read the section about Stallman in Hackers).
I think another issue is that linux added new features rapidly which probably helped adoption.
Keep in mind, too, that path dependence plays a huge role here. Linux took off when the alternatives were Windows NT, Novell Netware, or commercial Unixes running on underpowered RISC hardware.
"commercial Unixes running on underpowered RISC hardware."
Commercial Unix running on expensive hardware (and software) was the key opening for Linux, not the underpowered bit. Linux on X86 took a while to catch up performance wise.
After the legal limbo came a whole bunch of fractious disputes within and between multiple core teams -- initially FreeBSD (concentrating on PC-derived platforms) and NetBSD (cross-platform), both having some roots in the earlier, troubled 386BSD project led by Bill Jolitz, with some later forks from each group (most notably OpenBSD). Those persisted long after the legal issues were effectively settled, and really hindered the BSDs in general from keeping up with Linux-based OS distributions.
I have also read that, back then, Linux was better at supporting common hardware than the BSDs (https://news.ycombinator.com/item?id=21420338). My personal experience at the time is that I didn't even consider the BSDs; Linux had UMSDOS, which allowed me to try it out without having to repartition and/or reformat (then later I noticed that I was nearly always on Linux, so I reformatted a whole partition as ext2 and dedicated it exclusively to Linux).
I think it’s a bit early to declare Linux’s market victory here.
The obvious thing for Google to do is to use this in Android, and this will solve a number of big problems for them (specifically, binary-only drivers and a better security model). They might even have some success in the server or desktop spaces as well.
I worry a bit that Linux will be the next Firefox.
If by this you're referring to their promises of a stable driver ABI, I can't understand what problem this is supposed to solve compared to Linux. There are plenty of binary drivers already shipped on Linux. IOT device vendors don't care about a stable ABI because they just pin to their kernel version. Android device vendors don't care because either way they will stop updating their kernels after a number of years. Enterprise users don't care because they also pin to a kernel version and backport what fixes they want.
It also does not seem to solve any problem for Google because they still have to make the same stability/support promises and deal with the same accumulation of legacy code either way. Yes, this is necessary to stop fragmentation but the problem is nothing really changes here compared to what they would do if they were going to take on the cost of backporting Linux kernel fixes. The only realistic cost saving for them I could see is if the greater stability came from reducing the total amount of hardware that is supported compared to Linux. Which makes sense for them but at the same time completely eliminates the possibility of them ever seriously touting this as a Linux replacement.
> Android device vendors don't care because either way they will stop updating their kernels after a number of years
This one seems like a big problem to me. Maybe the handset manufacturer doesn’t care because they already made their profit, but this pushes the support burden onto app developers (including Google itself) who have to maintain support for these old Android devices that the manufacturers don’t care about.
This seems like a real issue to me, is there something I’m missing?
I agree that is a real issue. But my point is that with the proposed solution, Google itself is going to have to maintain support for old Fuchsia devices, because the burden is now on them to maintain this stable ABI. How does this solve anything compared to just making a support promise about a particular Linux version? Nothing here seems like it would improve for app developers.
Back on topic though, I think it's just easier from an engineering perspective to maintain a stable ABI than it is to maintain a set of blessed kernel versions.
An ABI is something that can be reasonably well-defined and has a clear scope, whereas if you just say that you support kernel versions X, Y and Z, then who knows what weird undocumented behavior you'll need to maintain.
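To make the "an ABI can be well-defined" point concrete, here's a minimal Rust sketch (all names here are hypothetical illustrations, not Fuchsia's actual driver interface) of the kind of contract a stable ABI pins down: fixed-layout structs and a frozen entry-point signature, both of which can be checked mechanically, unlike "whatever kernel version X happened to do":

```rust
// Hypothetical ABI contract sketch. #[repr(C)] fixes field order and
// padding, so the struct layout becomes part of a verifiable contract.
#[repr(C)]
pub struct DeviceInfo {
    pub abi_version: u32, // bumped only on incompatible changes
    pub device_id: u32,
    pub flags: u64,
}

// A frozen entry point: changing this signature is an ABI break,
// which is easy to detect, in contrast to subtle behavior changes.
#[no_mangle]
pub extern "C" fn device_query(info: *mut DeviceInfo) -> i32 {
    if info.is_null() {
        return -1;
    }
    // Writing through the raw pointer is the only unsafe step here.
    unsafe {
        (*info).abi_version = 1;
        (*info).device_id = 0x1234;
        (*info).flags = 0;
    }
    0
}

fn main() {
    // The layout itself can be asserted in a test suite.
    assert_eq!(std::mem::size_of::<DeviceInfo>(), 16);
    let mut info = DeviceInfo { abi_version: 0, device_id: 0, flags: 0 };
    assert_eq!(device_query(&mut info), 0);
    assert_eq!(info.abi_version, 1);
}
```

The contrast with "we support kernel versions X, Y and Z" is that nothing forces version X's undocumented behavior into a checkable form like the assertions above.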
Also, some vendors such as Sony have standardized kernel versions across many devices and publish kernel major version updates for a range of devices (in Sony's case called the Open Device Program).
I agree with you on the first part: Linux's best weapon is its community and strong leadership.
But in my opinion, the second most important feature of Linux is in fact its ability to change large parts of the kernel when and if needed. And do it very very quickly.
And this is something microkernels cannot do. They are just slower when making major changes that touch many parts of the OS.
Isn't this more so related to having all relevant code in the same repo rather than about being a monolithic kernel? As long as an OS's out of tree contract with out of tree users is well defined, refactoring code within the repo shouldn't be any more difficult. The fact that Linux's contract is the very obvious division between kernel and userspace doesn't much matter.
I am sure having all code in the same repo helps but I was mainly thinking about how changes across multiple services is often very time consuming in microkernel designs and requires very careful analysis.
Fuchsia/Zircon claims not to be a microkernel. It definitely isn't a classic microkernel like Mach or L4; Zircon is still responsible for a very large number of syscalls.
However, core components such as graphics, file systems, hardware devices, etc. are moved into userland, so in that sense it follows the microkernel idea of putting as little as possible in the kernel itself.
Personally, I love the design, and hope it takes off. We really need to get away from letting kernels and devices have elevated privileges in an OS.
As far as I'm concerned, Tanenbaum won the monolith/microkernel debate — Minix 3 is a true microkernel, and is currently a popular niche OS — and the success of Linux does not diminish the argument. (The infamous failure of GNU Hurd doesn't, either.)
This keeps getting repeated, but I don’t understand how that would lead to it being the most popular OS there is. There are way more embedded systems than there are Intel CPUs on the market, and they often run Linux. Android phones sell more than four times the number of PCs a year. I’d assume that most Intel CPUs are deployed in data centers... which run Linux.
Yeah, but the point was that there are without a doubt more Linux instances running on ARM than there are Intel CPUs in total. Then even among the Intel processors that do have Minix, a sizable number are running Linux. Therefore Minix can’t be nearly as widely used as Linux.
There are far more phones than desktops, and far more desktops than servers, so I doubt it. Especially if you count by OS instance (counting VMs) rather than by machine.
That so many OS developers are working on Fuchsia and other kernels suggests that the Linux community hasn't been very successful with inclusion and handling disagreement and infighting.
Linux won because of Steve Ballmer trying to torpedo it in every conceivable manner. The brave people contributing to and using it developed a kind of Robin Hood mentality.
Without a hate figure like Steve Ballmer, its net relative momentum will decline. It just happens that so many other big players have joined the bandwagon and contribute that you don't notice that relative decline.
I didn't see the article address power management in the context of things that might be running idle in the background. That would seem to be a major incentive for an OS that's going to be used on mobile, which needs to respond to changes in environment on the move (changing WLAN, network, beacons, etc.).
> Traditional power management (PM) is aimed at conserving the power of computers that are usually left on. The general-purpose approach to PM for desktop PCs -- or even for "mobile" PCs, such as laptops -- doesn't take into account the specific demands of embedded systems, which can be off (or on standby) much of the time, yet must respond to external events in predictable ways.
I don't know if it's inherently more efficient to implement this type of thing with a microkernel, but given that iBeacons and similar effectively "failed" ( https://venturebeat.com/2018/10/27/why-android-nearby-ibeaco...) due to power and sensitivity issues, this could be a big enough incentive to start a new OS.
I'm very excited someone is taking a shot at trying something new on the kernel side. But I can't help wondering what a future would look like where 90% of hardware runs on a Google-owned operating system.
Maybe Google will take a bite out of Windows' market share? It really sucks that we only have three options now - Windows, Mac and Linux. Two of these are owned by megacorps and the last one isn't as user-friendly as the others.
So is a world with 4 options, 3 owned by megacorps, that much better? I guess you could argue competition is always good for consumers, but it starts to just be an oligopoly rather than real competition.
No one except megacorps is going to build and maintain any alternative. Yes, it'd be much better than maintaining the status quo where the two user-friendly operating systems are either hardware-locked or riddled with bugs, vulnerabilities, and Candy Crush.
Yeah, I'm really happy someone with enough clout to possibly make it reality is doing it, but I don't really want a future where everything runs on google stuff (like with android, now)
Would that really ever happen? Microsoft - a company so dominant it got an antitrust lawsuit - hasn't even been able to get 90% of the market share. No operating system since the very early days of computing has.
Not only that, but other older operating systems are always going to have an advantage here anyway: they'll have support, ecosystems, documentation, and people with years of experience with them.
Google isn't even a monopoly in search for heaven's sake! Only 87% of people use Google Search. If they can't get a monopoly there - and you really can't get a monopoly in almost any industry without government help - what makes you think they can get a near-monopoly in operating systems?
A monopoly is the sole supplier of a product or service, and the fact that Bing exists means search is not a monopoly. In the context of antitrust, the United States Department of Justice does not use that term directly and instead talks about power: the Supreme Court has defined market power as "the ability to raise prices above those that would be charged in a competitive market," and monopoly power as "the power to control prices or exclude competition" [0]. Google does not possess monopoly power over search, as they do not exclude you from using Bing, nor do they control prices (Bing in fact sets a lower price than Google's free, in that they will pay you to use their search engine). They do have market power, though.
Bing has under 3% market share vs. 92.06% for Google. As an advertiser or app publisher, Google has "the power to control prices or exclude competition" over you.
From Section IV of the Justice Department's article:
The Supreme Court has noted the crucial role that defining the relevant market plays in section 2 monopolization and attempt cases. The market-definition requirement brings discipline and structure to the monopoly-power inquiry, thereby reducing the risks and costs of error. The relevant product market in a section 2 case, as elsewhere in antitrust, "is composed of products that have reasonable interchangeability for the purposes for which they are produced--price, use and qualities considered." Thus, the market is defined with regard to demand substitution, which focuses on buyers' views of which products are acceptable substitutes or alternatives.
For search advertising, they will likely find other forms of advertising, such as Facebook, to be a good enough substitute, and thus the market definition will include other forms of advertising, not just search. For app publishing, you can publish your app for Android yourself, which Fortnite did.
iOS app distribution, on the other hand, can probably be argued to be such a market, and in fact Apple is getting sued for its "abusive monopoly in iOS app/in-app distribution services" [0].
"For search advertising, they will likely find other forms of advertising such as Facebook as a good enough substitute and thus the market definition will include other forms of advertising and not just search."
Nope, they won't, because 92% of people search the web on Google.
He just said that they could find other places online to advertise ("other forms of advertising such as Facebook"), not that they could find other SEARCH places.
Do companies really care whether it's on Google Search or some other place, as long as they get eyes? And really, Google doesn't own 92% of the web, lol.
Of course they have the power to control prices or exclude competition on their own product (the app store) - that's how markets work: every company has a monopoly over its own product. And as for Android phones, which aren't their product directly, there are many other ways to publish apps.
I don't understand why people think monopoly means "big company," especially when size is contingent on continuing to serve customers better than competitors.
This is fundamentally untrue. Monopoly has nothing to do with product or service quality. This is proven by Google, which both has bad products and terrible service. Monopoly is built by using network effects and illegal business arrangements to gut competitors such that a better product cannot win in the market.
Network effects aren't part of the quality of the product? Seems to me like they are. Interestingly as well, the most network-effect-ridden sectors of the internet, namely social media platforms, also seem to be the most diverse. There are a heck of a lot of social media platforms out there!
Furthermore, what about Google Search allows it to take advantage of a network effect? It's a product designed to be used by a single person - it's not a social network. You can switch without incurring network effect problems. The real reason nobody is switching is that Google Search really is better than the alternatives: Bing, DuckDuckGo, etc, all have a plethora of users complaining about worse searches, even here. If someone were to make a search engine algorithm that led to a genuinely better experience (not for ideological or moral reasons, but actually better at finding what you want on a regular basis in a way the average person would care about), and had equally good (or better) user interface design, do you really think it would be that hard to switch?
Face it, for the Average Joe, the experience of using Google is far better than any other option. That's why they're all using Google. They don't care so much about their data, they just want search results. In essence, they're paying for good search results by selling their search history.
What illegal business arrangements has Google used to "gut competitors"? Nothing that Google seems to have done, besides being well known - as far as I know - has prevented other players from entering the market. This is definitely [citation needed] in this case.
Furthermore, sure I made a hasty generalization, but the best way to keep competitors out of the market really is using the government. What else are you going to do? Buy every competitor that comes your way? That's not going to work for long, especially as you get worse in services (that's the whole point of monopoly power) so each competitor sees a bigger and bigger chance of reward in the amount of pent up demand (people who would really like to switch to something else).
In conclusion, Google Search has not made it impossible for a better product to win in the market, it has not to my knowledge used illegal businesses, has certainly not gutted competitors except through actual market competition, and can't take advantage of network effects. It does not have bad products (I love most of Google's products, everything else is basically sh!t) nor does it have terrible service - even moreso for the Average Joe, as I said, who doesn't have an ideological commitment to hate Google.
One could argue that a monopoly can be kept by using network effects and "illegal business arrangements" (whatever that means), but that's not how a company gets big enough in the first place to do those things. The reason Google is big enough now to make everyone start freaking out is that their Search was that much better.
You looked at the wrong side of Google Search, which is why you didn't understand it. Searchers are the product being sold; Google Search's business is ads, and its customers are advertisers.
Due to their monopoly, businesses can't switch: You might hate Google. Google might be your direct competitor. No matter what, you have to give Google money, and Google gets to make all of your web design decisions, because if Google doesn't approve of you, you don't exist.
Google isn't good at this, it's just that you really can't leave. They have too much search data solely based on the fact everyone already uses them. Why does everyone already use them? They're the default. Through investing in web browsers and mobile OSes, Google is the default everywhere already. And the majority don't ever change the default. So no matter how bad they are at search, they remain on top.
If you include network effects as part of the quality of a product, you torpedo any argument that the free market is optimal for all parties: in products with strong network effects, you will always end up with a monopoly in all but name while refusing to call it one, which leads to bad outcomes for the vast majority of actors.
It seems such a waste to spend all the effort to write a whole new OS and all these drivers with the same old buffer overflow bugs we've been fighting since the dawn of time. It doesn't have to be this way anymore!
Sure, but if people don't make any serious attempts at creating something completely new, how do we know we're not stuck in a local minimum in terms of what OS tech we could have?
Or do you mean in terms of not using a "safer" language like Rust? I assume it's because it's not what the devs working on it know.
Indeed. I wonder why they didn't go with Rust? They seem to be trying to make the perfect architecture from scratch without worries about complexity or how experimental it is or even how long it'll take, and yet they don't go with a language that'll solve a whole other class of problems? Seems like an odd choice.
Maybe they're just making use of the existing C++ talent pool at Google.
Lots of the userland pieces are in Rust. I don't think Rust gives you much advantage for kernel code (esp. in a microkernel), because most of it is unsafe anyway. And Rust has some missing pieces for this kind of code (for example, using custom allocators on a per-data structure basis is still difficult).
I'm not a developer on it, but I've been following the development pretty closely for a while. IIRC, the lead developer Jeremy Soller states that even though the kernel code does require unsafe blocks, it's actually less common than you might think.
I can't find the source, but for some reason I have it in my head that the percentage of Redox kernel code that is unsafe sits at somewhere around 30%, which, especially considering the actual LoC of the microkernel vs. a monolith, means much easier code coverage. Again, the exact number might be wrong, but I do know that the kernel is significantly less than 100% unsafe code.
I've heard this kind of argument a lot and it's getting tiresome when we just keep seeing the same old preventable bugs being a problem time and time again. Yes, a kernel written in a safe language will still have security bugs. But it would absolutely have a very large positive impact. And any problems with safe languages can be worked around with a bit of imagination, especially with the level of effort already required to write a whole OS with a large number of hardware drivers.
Rust would help with drivers, not so much with the kernel itself. Drivers run in user space in Fuchsia, so you should be able to use any language you want.
If you turn off all the safety checks, is Rust still safe?
It seems a microkernel architecture allows the kernel to be as small as possible, with the scheduler, drivers and everything else written in a safe language in user space. In that case, is it anything other than vanity that says the kernel is better off being written in a "safe" language with all the safeties turned off, compared to an unsafe language?
1. Fuchsia is not a microkernel, according to themselves. [1]
2. You don't have to turn off safety checks in all of the code. Only having to audit the unsafe code is still a huge win. You're vastly overstating the amount of code that can't be safe.
3. Fuchsia has many components outside of the kernel that should have been written in safe languages but weren't.
4. It's funny that you use the word vanity. I think it perfectly describes the attitude that you're smart enough to write a nontrivial project in C/C++ without any of the many classes of preventable bugs that safe languages fix. Or that a few parts of your code requiring unsafe behavior somehow elevate the whole project into an elite class that doesn't need safety.
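Points 2 and 4 can be illustrated with a toy Rust sketch (not Fuchsia or Redox code; all names are made up for illustration): `unsafe` doesn't switch the checks off globally, it only unlocks a few extra operations inside a marked block, and a safe wrapper confines the audit surface to those few lines.

```rust
/// A toy buffer exposing a safe API over one small unsafe block.
/// Only the marked block needs auditing; the rest stays fully checked.
pub struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    pub fn new(len: usize) -> Self {
        Buffer { data: vec![0; len] }
    }

    /// The bounds check here is what makes the unchecked access
    /// below sound; this is the only code an auditor must read.
    pub fn read(&self, idx: usize) -> Option<u8> {
        if idx < self.data.len() {
            // SAFETY: idx was just checked against len.
            Some(unsafe { *self.data.get_unchecked(idx) })
        } else {
            None
        }
    }
}

fn main() {
    let buf = Buffer::new(4);
    assert_eq!(buf.read(3), Some(0));
    assert_eq!(buf.read(4), None); // out of bounds: no UB, just None
    // Note: borrow and type checks still apply inside `unsafe` blocks;
    // only raw-pointer derefs, unchecked calls, etc. become available.
}
```

So "turning off all the safety checks" never actually happens; the checked/unchecked boundary is explicit and small, which is exactly what makes auditing tractable.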
Ex-BeOS developers are the core team behind Fuchsia (Travis Geiselbrecht and Brian Swetland). Travis wrote NewOS, which was adapted to be the Haiku kernel (and Travis still hangs around the Haiku IRC channels every now and then). I wouldn't be surprised to see Haiku R2 transition to the Fuchsia kernel, since they share the same DNA.
I thought it was going to be some boring material UI screenshots, but this is so much more interesting! Susceptible to the usual C bugs, albeit with minor impact. I thought they were switching most of the kernel and drivers to Rust, for some reason.
Edit: not a Rust fanboy by any means, hell, I've never even opened a Rust file.
> I thought they were switching most of the kernel and drivers to Rust, for some reason.
Note that zircon itself is not allowed to contain any Rust [0]. It's not exactly specified what they mean by the kernel, but it seems that this includes not just the microkernel but also everything that lives in the zircon top level directory, which is enough to boot the system. tokei says there is not a single line of Rust in that entire directory (but about 1 million lines of C/C++).
As of commit cb20372465f875ff4fbf2a04f0951430207f7b7a, according to tokei, in all of fuchsia there are 792k lines of C, 880k lines of C header, 1.711M lines of C++, 10k lines of C++ header, and 2.585M lines of Rust. In the zircon subdir, there are 569k lines of C, 300k lines of C header and 473k lines of C++.
A couple of years ago it was believed that Fuchsia would replace Android and ChromeOS. Then, IIRC, it was said in a Google IO that it was just some sort of experiment to test new OS ideas.
What do you think is Google's masterplan for Fuchsia?
I believe the idea behind it could be the same as for the Midori project inside Microsoft years ago:
"Midori was a research/incubation project to explore ways of innovating throughout Microsoft’s software stack. This spanned all aspects, including the programming language, compilers, OS, its services, applications, and the overall programming models. We had a heavy bias towards cloud, concurrency, and safety. The project included novel “cultural” approaches too, being 100% developers and very code-focused, looking more like the Microsoft of today and hopefully tomorrow, than it did the Microsoft of 8 years ago when the project began." [1]
Later, some of these ideas became core features in Windows.
> It never shipped. A lot of beautiful code and proven concepts, and it just got tossed in the dustbin.
It doesn't sound like it was ever planned to be shipped. It was for experimentation, research and development. And some of that work made its way into Windows. Seems like a success to me.
edit:
> Midori was the code name for a managed code operating system being developed by Microsoft with joint effort of Microsoft Research. It had been reported[2][3] to be a possible commercial implementation of the Singularity operating system, a research project started in 2003 to build a highly dependable operating system in which the kernel, device drivers, and applications are all written in managed code. It was designed for concurrency, and could run a program spread across multiple nodes at once.[4] It also featured a security model that sandboxes applications for increased security.[5] Microsoft had mapped out several possible migration paths from Windows to Midori.[6] The operating system was discontinued some time in 2015, though many of its concepts were rolled into other Microsoft projects.
> Fuchsia's goal is to power production devices and products used for business-critical applications. As such, Fuchsia is not a playground for experimental operating system concepts. Instead, the platform roadmap is driven by practical use cases arising from partner and product needs.
At the very least I expect to see it used in "IoT" devices like Nest thermostats and stuff like that. Whether it ever replaces ChromeOS or Android is impossible to determine at this point though. Almost every attempt at running Android apps on another platform for compatibility has been a failure. Microsoft seems to have given up on it entirely.
Wouldn't that be where Flutter could come in handy? If the same codebase cross-compiles to Android and Fuchsia (hell, even iOS), lack of apps won't be an issue.
The Android team doesn't seem fond of Dart/Flutter, but that's not surprising. Who wants to voluntarily sign their own death certificate, so to speak?
> Then, IIRC, it was said in a Google IO that it was just some sort of experiment to test new OS ideas.
That's a form of public denial on their side. Why would they upstream Chromium, LLVM, Rust and Android ART support then? It has had first-class support for Flutter, as a way of running all Flutter apps, from day 0.
Totally agree. Steve Jobs bashed smartphones and the idea of making them in the early 2000s, and said that 7-inch tablets made no sense, then shipped the iPad Mini. MS bashed phones without physical keyboards, then shipped the Zune HD and WP7. Companies will say anything as long as it gets the job done, and the job is to maintain sales.
It's a thorough experiment, sure. I'm not sure that proves secret master plans. That said, if it finds market fit as they go along, I'm sure they'd use it.
Licensing is important for an OS; it's far too important not to be covered by a copyleft license, the way the Linux kernel and most of the usual distro userspace are.
Sometimes one can just wonder how many of these decisions to do something boil down to one person being in the right place on the totem pole to
a) sell the product to their superiors and get their buy-in, and
b) pad their own resume as "the person who created X",
rather than to any technological or valid business reasons.
I fully welcome some more competition on the operating system front. I am sad to see Windows NT, Linux and macOS be the only dominant operating systems.
My personal, perhaps very unpopular, view is that Windows NT has a better technical implementation than Linux.
Linux does most things well, from small devices to big iron. That is true now. But when it started, it was a learning experiment, and a damn good one too. An amazing achievement. A lot of work by hordes of people has since been built on top of it: replacing things, expanding things, hardening things, adding drivers, etc. It has come a long way, but to me, as an operating system, it is technically not that inspiring. I wish we had maybe 10 competitive operating systems, some brand new off the presses.
I have run Linux in one form or another since the first Yggdrasil release at the end of 1992 (not very early). It was amazing. Running it on my PC at home, it was faster than the terminals at school (we all shared a few servers, so there was always a lot going on; I am sure if you had one all to yourself it would have been faster. I guess that is why some people spent nights there if they had demanding tasks to run, since they were not a priority for access to the better, newer and far more dedicated hardware - that was a complicated process). I could do 90% of what was needed at home, woot.
I was very happy when I got my paws on the first Windows NT release, back in 1994 I think. I had preordered it and could not wait to install it: a processor-independent, multiprocessing, multi-user operating system. After having suffered through the pain that was Windows, with its kinda-sorta-maybe-a-little multitasking, this was finally an improvement. Of course, a lot of my software refused to run on it, or it refused to run my software.
The first Mac I had with macOS was also very cool; since then, OpenBSD. I had NeXTSTEP when they released it for Intel. Very cool.
Anyway, I have been waiting ever since for a new operating system that levels up the game as much as Windows NT did over Windows 3.11, 95, 98 and Me. I have not yet had that privilege.
I had good hopes for Plan 9, QNX, and a reimplementation of BeOS that is still ongoing. (I might have the wrong name on that one. I remember a demo at the university of a BeBox running BeOS, and how well it multitasked what back then seemed like very CPU-intensive graphics manipulation, while playing a video and some other stuff - at the time... I never got a BeBox and haven't run its OS since.)
There have been, and are, some solid research OSs, but they have never made it out into the real world. Windows NT, Linux and macOS cannot be all we are given. What replaces them? Where is the Windows NT of Windows NT? Will it take quantum computers becoming the norm before we get it? Maybe Linux already runs on those too (if quantum computers ever become viable, or even a good fit, for everyday boring computer stuff).
Give me a new operating system, written from scratch, that implements every security feature it should have, is hardened, and eliminates even the possibility of buffer overflows and assorted attacks - or that designs its exposed elements to fail gracefully and non-destructively, without leaking data or allowing injection of data. I forgot about Qubes OS; that is very interesting. Maybe that is it.
The BeOS "reimplementation" is called Haiku. It is a pretty remarkably mature project, though unfortunately not what I'm looking for in a new OS personally.
The reason we don't see new OSs is pretty simple: there is entirely too much hardware you have to accommodate to be usable. There are klocs of actual kernel code in Linux, and Mlocs of driver code.
That becomes extremely problematic since many devices these days ship with the bare minimum firmware to turn themselves on, and require the host to upload firmware to actually work. The "drivers" on disk are then the modules required by the OS plus a big blob of firmware.
Other times devices are just a thin PHY interface and all of the work is done in software inside the driver. There's nothing on the device to even run onboard firmware.
So many devices can't store their own drivers for the host to use, let alone store drivers for multiple operating systems.
There's a bit of that already with USB device classes. A keyboard might have a bunch of special features, but as long as it implements the USB HID keyboard class, it will have base-level keyboard functionality.
For some things like HIDs this can be pretty workable. For more complicated peripherals it is far more challenging if not impractical.
That's not to say device drivers should be an insane Wild West of compatibility shims and edge cases but some blanket solution doesn't solve all problems.
Oh, like UEFI or openfirmware drivers? That's... a pretty neat idea, actually. That would lower the barrier to entry, certainly. About the only issue I see is then trusting those drivers (I don't really trust hardware mfgs, and you'd have to update them somehow), but that seems doable.
> My personal, perhaps very unpopular, view is that Windows NT has a better technical implementation than Linux.
I actually don't think it's that uncommon to see people who think the NT kernel is a superior implementation to the Linux kernel. There does seem to be a sentiment that the Linux kernel has a history of adopting features later than other OSes, and often with a worse implementation (epoll is one of the more infamous cases).
About the NT Kernel I'm just gonna repost what was said several years ago by someone who worked on it:
>I'm a developer in Windows and contribute to the NT kernel. (Proof: the SHA1 hash of revision #102 of [Edit: filename redacted] is [Edit: hash redacted].) I'm posting through Tor for obvious reasons.
>
>Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening. The cause of the problem is social. There's almost none of the improvement for its own sake, for the sake of glory, that you see in the Linux world.
>
>Granted, occasionally one sees naive people try to make things better. These people almost always fail. We can and do improve performance for specific scenarios that people with the ability to allocate resources believe impact business goals, but this work is Sisyphean. There's no formal or informal program of systemic performance improvement. We started caring about security because pre-SP3 Windows XP was an existential threat to the business. Our low performance is not an existential threat to the business.
>
>See, component owners are generally openly hostile to outside patches: if you're a dev, accepting an outside patch makes your lead angry (due to the need to maintain this patch and to justify in in shiproom the unplanned design change), makes test angry (because test is on the hook for making sure the change doesn't break anything, and you just made work for them), and PM is angry (due to the schedule implications of code churn). There's just no incentive to accept changes from outside your own team. You can always find a reason to say "no", and you have very little incentive to say "yes".
>
>There's also little incentive to create changes in the first place. On linux-kernel, if you improve the performance of directory traversal by a consistent 5%, you're praised and thanked. Here, if you do that and you're not on the object manager team, then even if you do get your code past the Ob owners and into the tree, your own management doesn't care. Yes, making a massive improvement will get you noticed by senior people and could be a boon for your career, but the improvement has to be very large to attract that kind of attention. Incremental improvements just annoy people and are, at best, neutral for your career. If you're unlucky and you tell your lead about how you improved performance of some other component on the system, he'll just ask you whether you can accelerate your bug glide.
>
>Is it any wonder that people stop trying to do unplanned work after a little while?
>
>Another reason for the quality gap is that that we've been having trouble keeping talented people. Google and other large Seattle-area companies keep poaching our best, most experienced developers, and we hire youths straight from college to replace them. You find SDEs and SDE IIs maintaining hugely import systems. These developers mean well and are usually adequately intelligent, but they don't understand why certain decisions were made, don't have a thorough understanding of the intricate details of how their systems work, and most importantly, don't want to change anything that already works.
>
>These junior developers also have a tendency to make improvements to the system by implementing brand-new features instead of improving old ones. Look at recent Microsoft releases: we don't fix old features, but accrete new ones. New features help much more at review time than improvements to old ones.
>
>(That's literally the explanation for PowerShell. Many of us wanted to improve cmd.exe, but couldn't.)
>
>More examples:
>
>* We can't touch named pipes. Let's add %INTERNAL_NOTIFICATION_SYSTEM%! And let's make it inconsistent with virtually every other named NT primitive.
>* We can't expose %INTERNAL_NOTIFICATION_SYSTEM% to the rest of the world because we don't want to fill out paperwork and we're not losing sales because we only have 1990s-era Win32 APIs available publicly.
>* We can't touch DCOM. So we create another %C#_REMOTING_FLAVOR_OF_THE_WEEK%!
>* XNA. Need I say more?
>* Why would anyone need an archive format that supports files larger than 2GB?
>* Let's support symbolic links, but make sure that nobody can use them so we don't get blamed for security vulnerabilities (Great! Now we get to look sage and responsible!)
>* We can't touch Source Depot, so let's hack together SDX!
>* We can't touch SDX, so let's pretend for four releases that we're moving to TFS while not actually changing anything!
>* Oh god, the NTFS code is a purple opium-fueled Victorian horror novel that uses global recursive locks and SEH for flow control. Let's write ReFs instead. (And hey, let's start by copying and pasting the NTFS source code and removing half the features! Then let's add checksums, because checksums are cool, right, and now with checksums we're just as good as ZFS? Right? And who needs quotas anyway?)
>* We just can't be fucked to implement C11 support, and variadic templates were just too hard to implement in a year. (But ohmygosh we turned "^" into a reference-counted pointer operator. Oh, and what's a reference cycle?)
Assume most (not all) vulnerabilities are C-style use-after-free and buffer overflows. If the kernel were written in Rust, these vulnerabilities would not be issues, right? Meaning microkernels only make sense in a C world. What am I missing?
As you implicitly note yourself: even if "most (not all) vulnerabilities are C-style use after free and buffer overflows", some are not, and if you can reasonably do something to defend against the things that aren't memory issues, then that will catch those. Also, even Rust lets you write "unsafe" code, and an OS will almost certainly contain some; even if it's minimal, even if it's reviewed, you want any extra protection you can get.
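To make the "even Rust has unsafe" point concrete, here is a minimal, hypothetical sketch of the pattern a Rust kernel would use: the raw-pointer access a device driver needs lives inside a small `unsafe` block, wrapped in a safe API. The `UartRegs`/`Uart` names are illustrative, not from any real OS; the point is that the unsafe core still exists and still needs review.

```rust
// Simulated MMIO register block. In a real kernel this would be a
// physical address handed to the driver by the platform; here it is
// just a struct on the stack so the sketch is runnable.
struct UartRegs {
    data: u8,
    status: u8,
}

struct Uart {
    regs: *mut UartRegs, // raw pointer: only dereferenced inside `unsafe`
}

impl Uart {
    /// Safety: caller must guarantee `regs` points at valid, exclusively
    /// owned register memory for the lifetime of the Uart. After this
    /// one obligation is met, users only ever see safe methods.
    unsafe fn new(regs: *mut UartRegs) -> Self {
        Uart { regs }
    }

    fn write_byte(&mut self, b: u8) {
        // The `unsafe` is contained here; callers cannot misuse the pointer.
        unsafe {
            std::ptr::write_volatile(&mut (*self.regs).data, b);
        }
    }
}

fn main() {
    let mut fake = UartRegs { data: 0, status: 0 };
    let _ = fake.status; // silence dead-code warning in this toy example
    // Safe in this toy program: `fake` outlives the Uart.
    let mut uart = unsafe { Uart::new(&mut fake as *mut UartRegs) };
    uart.write_byte(b'!');
    assert_eq!(fake.data, b'!');
}
```

The safe wrapper shrinks the audit surface but does not eliminate it, which is the argument for keeping defense in depth (sandboxing, microkernel isolation) even in a memory-safe kernel.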
Spooky. I predicted the double descriptor read bug before actually seeing the code. I am not an expert, and I definitely haven't written my own USB stack. Still, I wonder why the original code had this problem, given that this seems to be the classic example of how to attack a USB stack. Somehow it reminds me of what happened when Cisco started to ship an HTTP server with some switches. One of the first bugs was a buffer overflow on URLs longer than 255 bytes...
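For readers who haven't seen it, the "double descriptor read" is a double-fetch (TOCTOU) bug: the host validates a device-supplied length, then reads it again when actually using it, and a malicious device can change the value between the two reads. Here is a hedged, self-contained sketch of the pattern; the names and the atomic standing in for device memory are illustrative, not taken from any real USB stack.

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Shared "device memory": a malicious device can rewrite this at any time.
static DESCRIPTOR_LEN: AtomicU8 = AtomicU8::new(8);

const BUF_SIZE: usize = 16;

/// Buggy pattern: two separate fetches of the same device-controlled field.
fn copy_descriptor_buggy() -> usize {
    let len = DESCRIPTOR_LEN.load(Ordering::SeqCst) as usize; // fetch #1
    if len > BUF_SIZE {
        return 0; // validation passes against the *first* value...
    }
    // ...the device flips the value between the two fetches (simulated):
    DESCRIPTOR_LEN.store(200, Ordering::SeqCst);
    // fetch #2: the length actually used can now exceed BUF_SIZE.
    // In a C stack this would drive a memcpy past the buffer's end.
    DESCRIPTOR_LEN.load(Ordering::SeqCst) as usize
}

/// Fixed pattern: fetch once into a local copy, validate and use that copy.
fn copy_descriptor_fixed() -> usize {
    let len = DESCRIPTOR_LEN.load(Ordering::SeqCst) as usize; // single fetch
    if len > BUF_SIZE { 0 } else { len }
}

fn main() {
    let used = copy_descriptor_buggy();
    assert!(used > BUF_SIZE); // the length used exceeds the validated bound

    DESCRIPTOR_LEN.store(8, Ordering::SeqCst);
    let used = copy_descriptor_fixed();
    assert!(used <= BUF_SIZE); // single fetch: validated value is the one used
}
```

The fix is always the same shape: copy device- or user-controlled data into host memory once, and validate and use only that snapshot.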