I haven’t watched this talk, but I worked on Fuchsia from the start (I’m no longer at Google) and want to clear up some common questions and misconceptions:
1. Fuchsia is a general-purpose operating system built to support Google’s consumer hardware.
2. It’s not designed to compete with or replace Android. Its goal is to replace Linux, which Android is built on. One big challenge it addresses is Linux’s driver problem. If Fuchsia succeeds, Android apps could run on it.
Fuchsia isn’t trying to replace Android. Its survival for over a decade—through layoffs and with hundreds still working on it—says a lot.
I can’t predict Fuchsia’s future, but it’s already running on millions of devices. The git logs show big strides in running Linux programs on Fuchsia without recompilation, as well as progress on the Android runtime. The best way to predict Fuchsia’s future is to look at its hardware architecture support and which runtime is getting attention.
Fuchsia’s success will likely depend more on market forces than on technical innovation. Linux is “good enough” for most needs, and its issues may not justify switching. The choice between sticking with Linux or moving to Fuchsia often favors Linux.
Still, I hope Fuchsia succeeds.
spankalee 9 hours ago [-]
I was never that close to the Fuchsia project, but knew quite a few people who worked on it.
My understanding from them was, as much as I can remember it now, something like:
1. That yes, Fuchsia was originally intended, by at least some in senior leadership on the team, to replace both Android and ChromeOS. This is why Fuchsia had a mobile shell (or two?) at one point.
2. The Android team wasn't necessarily on board with this. They took a lot of ideas from Fuchsia and incorporated them into Android instead.
3. When Platforms were consolidated under Hiroshi it brought the Android and Fuchsia teams closer together in a way that didn't look great for Fuchsia. Hiroshi had already been in charge of Android and was presumed to favor it. People were worried that Hiroshi was going to kill Fuchsia.
4. Fuchsia pivoted to Nest devices, and a story of replacing just the kernel of Android, to reduce the conflict with the Android team.
4a. The Android team was correct on point (2) because it's either completely infeasible or completely dumb for Google to launch a separate competitor to Android, with a new ecosystem, starting from scratch.
To work around the ecosystem problem, originally Android apps were going to be run in a Linux VM, but that was bad for battery and performance. Starnix was started to show that Fuchsia could run Linux binaries in a Fuchsia component.
5. Android and ChromeOS are finally merging, and this _might_ mean that Android gets some of the auto-update ability of ChromeOS? Does that make the lower layer more suitable for Nest devices and push Fuchsia out there too?
Again, I was pretty removed from the project, but it seemed too simplistic to say that Fuchsia was either never intended to replace Android, or always intended to replace Android. It changed over time and across management structures.
raggi 6 hours ago [-]
You got the high drama stories with the timelines re-arranged to fit the narrative :D
Fuchsia's underlying goals are to be a great platform for computing. This is distilled in its current incarnation into a short tagline on fuchsia.dev: simple, secure, updatable, performant.
The details of how, when, and where Fuchsia might fit / gets exercised are nuanced and far more often about other factors than those which make great stories. Maybe some of the good stories will be told one day, but that'll need someone from the team to finish a book and take it through the Google process to publish :D
In the meantime, here's Chris's interview: https://9to5google.com/2022/08/30/fuchsia-director-interview...
Look, I understand the context here, but if you're going to go around saying "a great platform for computing", that's not really telling me much about the project.
lukan 1 hour ago [-]
It is an OS.
Intended to run on mainstream devices.
Main difference from Linux: a stable driver API. So vendors could make their blobs and support them more easily, without open sourcing, as Linux demands.
cflewis 9 hours ago [-]
I worked on Fuchsia engprod for a while. I am still employed at Google and can't talk about anything that isn't publicly available already (which really means anything gleaned from commits to the Git repo).
I think the best way to look at it is like any software: there's Fuchsia The Artifact (thing that is made) and Fuchsia The Product (how thing is used, and how widely). I don't know anything about operating systems, but my understanding is that the engineers are very happy with Fuchsia The Artifact. Fuchsia The Product has had some wandering in the wilderness years.
pfannkuchen 6 hours ago [-]
> Fuchsia pivoted to Nest devices, and a story of replacing just the kernel of Android, to reduce the conflict with the Android team
This is like a textbook example of weak leadership of an executive team.
The power jockeying of a fiefdom’s chieftain (power reduction mitigation in this case) is allowed to drive the organizational structure and product strategy.
IshKebab 5 hours ago [-]
Yeah the "was never meant to replace" here sounds exactly like the placation we got with wasm - "it's not meant to replace JavaScript" (it totally is).
josephg 1 hour ago [-]
> "it's not meant to replace JavaScript"
The word meant is doing a lot of heavy lifting here. Meant - by who? The technology itself doesn’t want anything.
Do some people want to use wasm instead of JavaScript for websites? Yes. Will JS ever be removed from web browsers? Probably not, no. Wasm isn’t a grand design with a destiny it’s “meant to” reach. It’s actually just some code written by a bunch of people trying to solve a bunch of disparate problems. How well wasm solves any particular problem depends on the desires and skills of the people in the room, pushing the technology forward.
It’s kind of like that for everything. Rust was never meant to be a high performance systems language by its original creator. But the people in the room pushed it in that direction. Fuchsia could replace Linux in Android. I’m sure some people want that to happen, and some people don’t. There’s no manifest destiny. What actually happens depends on a lot of arguing in meeting rooms somewhere. How that turns out is anyone’s guess!
mdhb 2 hours ago [-]
Is it? I don’t think those statements are incompatible in either example. In both scenarios we are looking at very meaningful leaps forward in terms of the underlying architecture and what that enables that simply aren’t possible within the boundaries of what is out there currently. I don’t think that’s the same thing as “meant to replace” at all though.
tgma 3 hours ago [-]
> Fuchsia isn’t trying to replace Android. Its survival for over a decade—through layoffs and with hundreds still working on it—says a lot.
Says a lot about managing to cling onto another product as a dependency to save the team from cancelation. Gotta thank the Directors for playing politics well. (Dart also played that game.)
Does not say much about necessity. Won't be surprised if it gets DOGE'd away at some point.
knifie_spoonie 12 hours ago [-]
Thanks for the info. For those of us not familiar with it, what were the main motivations for building Fuchsia instead of just using Linux?
TheDong 11 hours ago [-]
They did say:
> One big challenge it addresses is Linux’s driver problem
Android devices have been plagued by vendors having out-of-tree device drivers that compile for Linux 3.x, but not 4.x or 5.x, and so the phone is unable to update to a new major Android version with a new Linux kernel.
A micro-kernel with a clearly defined device driver API would mean that Google could update the kernel and android version, while continuing to let old device drivers work without update.
That's consistently been one of the motivating factors cited, and linux's monolithic design, where the internal driver API has never been anything close to stable, will not solve that problem.
aidenn0 11 hours ago [-]
> A micro-kernel with a clearly defined device driver API would mean that Google could update the kernel and android version, while continuing to let old device drivers work without update.
A monolithic kernel with a clearly defined device driver API would do the same thing. Linux is explicitly not that, of course. Maintaining backwards-compatibility in an API is a non-trivial amount of work regardless of whether the boundary is a network connection, IPC, or function call.
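To make that concrete, here's a minimal Rust sketch of what a frozen driver ABI boundary can look like (purely illustrative; every name here is made up, and Fuchsia's real driver interfaces are defined quite differently):

    // Illustrative sketch of a frozen driver ABI (all names hypothetical).
    // The loader commits to this exact layout forever; new features go
    // into a DriverOpsV2 that old drivers simply don't implement.

    pub const DRIVER_ABI_V1: u32 = 1;

    /// C-compatible function table a driver exports. Because the layout
    /// is frozen, a binary built against V1 years ago still loads today.
    #[repr(C)]
    pub struct DriverOpsV1 {
        pub abi_version: u32,
        pub init: extern "C" fn(device_id: u32) -> i32,
        pub read: extern "C" fn(buf: *mut u8, len: usize) -> isize,
        pub write: extern "C" fn(buf: *const u8, len: usize) -> isize,
        pub shutdown: extern "C" fn(),
    }

    /// The loader checks the version before calling anything, and keeps
    /// compatibility shims for every version it has ever shipped.
    pub fn load_driver(ops: &DriverOpsV1) -> Result<(), &'static str> {
        match ops.abi_version {
            DRIVER_ABI_V1 => match (ops.init)(0) {
                0 => Ok(()),
                _ => Err("driver init failed"),
            },
            _ => Err("unknown driver ABI version"),
        }
    }

Windows' driver model works roughly along these lines, which is one reason old Windows drivers keep loading; Linux deliberately refuses to freeze any in-kernel interface like this.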
amluto 10 hours ago [-]
> A monolithic kernel with a clearly defined device driver API would do the same thing.
Maybe, but I doubt it. History has shown pretty clearly that driver authors will write code that takes advantage of its privileged state in a monolithic kernel to bypass the constraints of the driver API. Companies will do this to kludge around the GPL, to make their Linux driver look more like the Windows driver, because they were lazy and it was easier than doing it right, and for any number of other reasons. The results include drivers failing if you look at the rest of the system funny and making the entire system wildly insecure.
If you want a driver that isn't subject to competent code review to abide by the terms of the box in which it lives, then the system needs to strictly enforce the box. Relying on a header file with limited contents will not do the job.
est31 10 hours ago [-]
It's still possible that drivers might be so buggy that a newer OS version interacts with them in a slightly different way which is still legal by the API definition, but which makes them crash or stop working.
vlovich123 8 hours ago [-]
That can be treated as an OS bug that’s fixed by updating the kernel to the latest version that fixes compat with that driver, which you can do because the driver remains unchanged. With Linux, even with DKMS, you’d need to backport your fixes to that old kernel in addition to maintaining the latest kernel version. And on mobile DKMS is not a thing.
raverbashing 4 hours ago [-]
> driver authors will write code that takes advantage of its privileged state in a monolithic kernel to bypass the constraints of the driver API.
Well, your job is shipping the driver. If the API is limited, and/or your existing drivers on Windows or other OSes do something the Linux driver doesn't, then you have a problem.
Linux kernel pros: it evolves organically
Linux kernel cons: it evolves organically
plagiarist 10 hours ago [-]
I would love to have an open source microkernel OS that works as well as Linux on modern hardware even if the API wasn't stable. I am making assumptions that you could have ZFS and secure boot at the same time without jumping through hoops, containerization without needing fictitious UIDs for every user, and other things of that nature. The monolithic kernel is very frustrating with some things.
wilsonjholmes 25 minutes ago [-]
Why does a monolithic kernel make those features have "hoops to jump through" compared with how a micro-kernel would handle those features?
throwaway48476 10 hours ago [-]
The point of linux is to upstream drivers so that devices just work.
kyrra 10 hours ago [-]
The problem is the release cadence, especially around mobile devices. Driver packages for them tend to be worked on right up to shipping, because they are developed in parallel with the hardware.
Android using the absolute newest head-or-tip version of the Linux kernel sounds like a QA nightmare of its own.
A mobile SoC has to have everything needed to start up the phone, as there is no BIOS-like system that the drivers can work through. Maybe this is a problem that could be solved, but it hasn't been yet.
throwaway48476 6 hours ago [-]
In PC land, Linux driver support is not always there day one, as vendors are working on the drivers up to and sometimes after the release. Somehow the mobile vendors aren't capable of this.
pjmlp 5 hours ago [-]
OEMs would rather sell new devices that do updates for free.
The same happens in PC land with laptops: you seldom get drivers from Microsoft for laptop-specific components, those come from the OEM, and you get what you get.
It’s also a totally different security architecture that is considered actually defendable rather than the cat and mouse game we have going on today. It’s actually well designed for modern threats.
s3graham 11 hours ago [-]
I think the reasons have probably changed over time, but my recollection is mostly to have a stable Windows-style driver API so that kernel and drivers can be maintained separately. Making such an API on top of Linux was prototyped, but was unsuccessful.
(Historically, that's one big reason that there's lots of Android phones that get a fork of whatever release was current some months before they shipped, and never get substantial updates.)
bobthecowboy 11 hours ago [-]
I'm sure there's technical reasons, but from Google's perspective, one benefit has got to be the non-copyleft license.
spankalee 9 hours ago [-]
I don't think this was ever really a concern. Google and device manufacturers already have ways of publishing non-GPL portions of a complete Android distribution.
okanat 10 hours ago [-]
Google is the owner of Fuchsia's copyrights. Licensing doesn't matter for them.
saidinesh5 9 hours ago [-]
It might not matter to Google, but it would definitely matter to the hardware vendors who'd write drivers and ship devices with Fuchsia.
So many GPL violations in the Android world currently
SkiFire13 5 hours ago [-]
IMO the fact there are so many GPL violations just goes to show they don't care about the GPL.
IshKebab 5 hours ago [-]
The reasons are pretty obvious IMO:
1. Control. It's pretty awkward if your main product depends on an open source community who might say "no" (or "fuck off you worthless imbecile") to half the things you want. You'll end up with a fork (they did!) which has serious downsides.
2. Stable driver ABI.
3. Modern security design. A microkernel, and Rust is used extensively.
jaypatelani 3 hours ago [-]
I think if Google had collaborated with NetBSD it would have been more successful than creating a new OS from scratch.
rjsw 1 hour ago [-]
How would that help? Someone still has to write drivers for devices.
ignoramous 3 hours ago [-]
Coincidentally, the Android ABI is scaffolding on *BSD by way of Bionic.
> The best way to predict Fuchsia’s future is to look at its hardware architecture support and which runtime is getting attention.
Having tea leaves instead of a public strategy and roadmap is what's causing the FUD in the first place. Google probably has good reasons for not making any promises but that hedging comes with a cost.
bsimpson 10 hours ago [-]
FUD for who?
It feels like a quirk that some of its originators are open source hackers, which is how Fuchsia ended up being published externally at all. Google definitely doesn't want to attract more killedbygoogle headlines for its experimental projects, and I haven't seen any public Fuchsia evangelization.
If your target platforms are your own smart displays and maybe replacing the Linux kernel in a stack that already doesn't use the Linux userspace, why would you want to spend effort supporting third parties while you're still working on fundamentals?
ckocagil 11 hours ago [-]
Instead of a moonshot microkernel, why didn't Google just build and maintain a new Linux driver API/ABI layer with backwards compatibility and security? Not an easy endeavor, but is it harder than Fuchsia?
okanat 10 hours ago [-]
It is more of a moonshot to design a stable API while the Linux devs are constantly pulling the rug out from under you.
Microkernels provide nice secure API boundaries and optimizations to reduce performance impact when crossing them on modern CPUs.
The monolithic design forces you to stay in either user or kernel mode as much as possible so as not to lose performance. Add in the API and ABI instability, and such a layer becomes near impossible to maintain.
It will require a hard fork of Linux, which won't be Linux anymore. Monolithic design is the artifact of low-register-count CPUs of the past. If you are going to create a hard fork of a kernel, why not use a more modern design anyway?
ranguna 2 hours ago [-]
You have to wonder why the Linux devs are "pulling the rug out from under you"
saidinesh5 9 hours ago [-]
Google kind of does this with Android. Most of the magic sauce for a lot of hardware is in user space.
OpenWRT was born because companies were forced to give the source code back to users.
koolala 15 minutes ago [-]
I always hoped they would release Fushsia XR but they went with Android XR.
ggm 12 hours ago [-]
I wish message-passing and capability-based OSes had taken off more strongly in the 80s, because I sometimes feel like there are design goals and approaches latent in what we do now which we could have advanced before now, beyond research models.
raggi 6 hours ago [-]
I don't think they were as tenable then as they are now.
When I was doing performance work on the platform one of the notable things was how slow some of the message passing was, but how little that mattered because of how many active components there are computing concurrently and across parallel compute units. It'd still show up where latency mattered, but there are a ton of workloads where you also basically hide or simply aren't worried about latency increases on that scale.
A counter case though, as an example, is building the system using a traditional C-style build system that basically spams stat(2) at mhz or these days ghz speeds. That's basically a pathological case for message passing at the filesystem layer, and it's a good example of why few microkernels which aimed at self-hosting made it over the line. It's probably possible to "fix" using modern techniques, but it's much easier to fix by adjusting how the compilation process works - a change that has major efficiency advantages even on monolithic kernels. Alas, the world moves slow on these axes, no matter how much we'd rather see everything move all at once!
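To put rough numbers on the pathological case (a hypothetical sketch; the file names are made up): a make-style freshness check does one stat per dependency, and on a message-passing filesystem every one of those is an IPC round trip instead of a cheap syscall.

    use std::fs;

    /// Make-style up-to-date check: stat the target, then stat every
    /// dependency. A large C build repeats this for thousands of objects,
    /// each with dozens of headers; on a microkernel filesystem each
    /// fs::metadata call below is an IPC round trip, not just a syscall.
    fn needs_rebuild(target: &str, deps: &[&str]) -> std::io::Result<bool> {
        let target_mtime = match fs::metadata(target) {
            Ok(meta) => meta.modified()?,
            Err(_) => return Ok(true), // target missing: rebuild
        };
        for dep in deps {
            if fs::metadata(dep)?.modified()? > target_mtime {
                return Ok(true); // a dependency is newer than the target
            }
        }
        Ok(false)
    }

    fn main() -> std::io::Result<()> {
        // Hypothetical inputs; real builds do this per translation unit.
        let rebuild = needs_rebuild("app.o", &["app.c", "app.h", "util.h"])?;
        println!("rebuild needed: {rebuild}");
        Ok(())
    }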
milesrout 3 hours ago [-]
This is why Linux works and is good and most other kernels are slow and don't work. Linux has been optimised over the years based on how software actually works, rather than fantasies about how it would work if we got to rewrite the world.
plagiarist 10 hours ago [-]
Me too. Containers are so janky compared to what we could have.
est31 10 hours ago [-]
iOS is capability-based, no?
Edit, to explain: in iOS, everything revolves around Mach ports, which are capabilities.
Yes. And without overstating it, iOS is an amazingly robust, very secure OS. It has high-trust and low-trust models, a secure zone, special-purpose hardware, and an operating system designed around minimum-access-rights models which manages to keep going despite app authors' worst intentions.
At one level, it proves the model. The shame is that Mach otherwise has kind of not taken off. GNU the OS was going to be Mach at the core at one point, IIRC.
supriyo-biswas 8 hours ago [-]
I'd have to disagree -- the lack of OS-level sandboxing primitives such as seccomp-bpf and SELinux[1] means that exploits happen rather regularly in iOS ([2], among others).
iOS has a perfectly good sandboxing model that is literally called "the sandbox". You will note that the impact of that bug is limited to the process it is triggered in for precisely this reason.
ggm 7 hours ago [-]
Not to deny that, but it didn't (for example) break the Secure Enclave. Key exfiltration didn't happen AFAIK.
mdhb 2 hours ago [-]
iOS is without question one of the most secure OS out there today with any amount of real world use but the gap between what it is and what state of the art looks like is also insane. Fuchsia is actually quite well aligned with something that’s actually defendable in the real world and across time.
dd_xplore 8 hours ago [-]
It may be robust, but it's very, very limited in capability. There's no depth to GUI interactions.
bobajeff 14 hours ago [-]
I'm surprised this is still being worked on; I was under the impression that Google had abandoned this.
Also, I would be interested to see a comparison to the wasm component model as it also seems to want to do the same things docker containers do.
dexterdog 12 hours ago [-]
It never really got released and nobody is depending on it, so why would Google abandon it yet?
curling_grad 11 hours ago [-]
They are running on Google's Nest Hub devices[0], so I guess this counts as a release?
Fuchsia has been on life support for a few years now, but not completely dead yet
jeffbee 12 hours ago [-]
There were 25 changes updated on the Fuchsia Gerrit in the last 15 minutes. It is much less dead than 99% of open source software projects.
bee_rider 10 hours ago [-]
Are those 15 minutes representative? That seems pretty high if so.
surajrmal 8 hours ago [-]
During California working hours it might actually be a bit low honestly as you would see several revisions uploaded for code review per commit and there are on average 200-250 commits submitted daily to the primary fuchsia repo.
michaelmrose 11 hours ago [-]
The two biggest factors are how committed to this is Google and who is interested in developing and using this if Google drops it.
What do you think the answers to those are?
surajrmal 8 hours ago [-]
Factors for what? Deciding whether it's in life support? Deciding if it's okay for you to depend on it?
mdhb 2 hours ago [-]
This is completely made up bullshit masquerading as a fact.
mdhb 14 hours ago [-]
I think the lack of public information about their future plans for the project combined with the “killed by Google” meme got smashed together here and that is actually a really common perception but also one that is completely made up out of thin air.
It has been under heavy heavy development for many years now.
The fact that they are now starting to talk about it publicly is probably a sign that they are looking to move beyond just IoT in the future.
For example, I know it’s coming to Android (not necessarily as a replacement but as a VM), and I know there are some plans around consolidating ChromeOS and Android as well. I expect that is also going to be another place we might see it before too long.
I know they are also working on a full Linux compatibility layer called Starnix [1], where AFAIK the goal is that you can run all your Linux workloads on Fuchsia without any changes. You can probably extrapolate from there that the end state is roughly: anywhere Linux runs currently is a good potential fit for Fuchsia, and it will come with a lot of additional security guarantees on top that will make it particularly attractive.
I think the problem is that "lack of public information about their future plans" is hard to differentiate from "no future plans exist". For a company that's in the past been known for its willingness to go for low-likelihood but high-impact "moonshot" projects, and seemingly open with some of its long-term plans (like everyone knowing that they were working on self-driving cars years before self-driving became a topic that people wouldn't be shocked to hear, like, their grandparents talking about), it's honestly pretty weird that the only device they've shipped with it was the Nest Hub, which apparently used to run on the same thing as a Chromecast (so which I don't even think was ChromeOS?).
If this were something that they were planning on using mainly for internal stuff, like for some sort of competitive advantage in data centers or something, I could understand the radio silence on future plans, but it's hard for me to imagine that's their main purpose when they're publicly putting it on stuff like the Nest Hub and Chromebooks (they didn't sell any with it afaik, but they published a guide for putting it on them). It really feels like they just don't know exactly what to do with it, and they're trying to figure that out as well. As for ChromeOS and Android, those already feel like a pretty good example of them not having a super clear initial product strategy for how they overlap (and more importantly, how they _don't_), so while having some sort of consolidation would make sense, it's not clear to me how Fuchsia would help with that rather than just make things even murkier if they start pushing it more. I'd expect that consolidating them would start with the lower-level components rather than the UI, and my understanding is that Fuchsia (as opposed to Zircon, which is the kernel) has quite a lot of UI-related stuff in it, specifically with Flutter. I'm not saying you're wrong, since it sounds like you might have more relevant knowledge than me, but I can't help but wonder how much of this has really been planned in the long term rather than just been played by ear by those with decision-making power.
surajrmal 11 hours ago [-]
Fuchsia's UI layers are roughly equivalent to Wayland. There is a compositor, but no special alignment to any particular UI toolkit like flutter.
Fuchsia is not itself a consumer product, it's an open source project meant to be used to build a product. There is no application runtime for app developers to care about or UI for an end user to see. It would be strange to talk about things like mesa or the Linux kernel the way you are talking about fuchsia. There are software layers it does need to integrate with, but unless you work on those things, it's not really interesting to you.
Companies don't really discuss products they build using these open source building blocks while contributing to those projects until after the product launches either. It shouldn't really matter where and how it gets used to the end consumer, only that when it is used there are tangible benefits (more stable, less security problems, etc). I don't really understand why folks are so keen to understand what internal plans for using it may or may not be.
shocking63 6 hours ago [-]
To me, one of the best features of ChromeOS is that it runs both Android and Linux. I have a number of telescopes that are controlled via Android apps, and being able to run astronomy processing apps like Siril on the same platform is wonderful.
raggi 5 hours ago [-]
> The fact that they are now starting to talk about it publicly is probably a sign that they are looking to move beyond just IoT in the future.
You're reading too much into a conference presentation.
The team has been allowed to make conference presentations for many years, it's just that most folks haven't wanted to put in the personal effort. A few have in the past, one I know of was Petr: https://www.youtube.com/watch?v=DYaqzEbU0Vk
IAmLiterallyAB 9 hours ago [-]
There used to be patches in AOSP for Fuchsia but they all got reverted a couple years ago. I believe Starnix is the new strategy to get Android working on Fuchsia, if they are going to try for it.
refulgentis 12 hours ago [-]
This is a potpourri of stuff over 7 years, sprinkled on a base of confirmation bias re: the common wisdom has a "perception...completely made up out of thin air", and a misunderstanding that speaking publicly in this individual talk represents a step change or something new.
I would bet, very, very, many dollars it is not coming to Android in any form, Starnix isn't coming soon if ever, and they're not looking to move beyond IoT. Long story short, it shipped on the Nest Hub, didn't get a great rep, and Nest Hubs haven't been touched in years because they're not exactly a profit center.
Meanwhile, observe Pixel Tablet release in smart display factor, Chrome OS being merged with Android, and the software-minded VP who championed the need for the project, moving on, replaced by the hardware VP.
When you mash all that together, what you get is: the future is Android. Or, there is no future. Depending on how you look at it.
Personally, as an outsider, losing what ChromeOS was becoming (Google's spin on desktop Linux) is the saddest part of this for me.
refulgentis 11 hours ago [-]
God yes. It is/was so so so good. 2000s macOS values. I had to give it up when I left for a MacBook Pro*, and I still miss it.
* had to give it up? TL;DR: A key part of my workflow was being able to remote desktop into a Linux tower for heavy builds. Probably could have made it work anyway, obviously you wouldn't try building Android on a laptop, but a consumer app would be fine. I left to try and pick up some of the work I saw a lot of smart people do towards something better. And monetizing that in the short-term requires supporting iOS/macOS, which only compile on Mac
fishtoucher 12 hours ago [-]
Fuchsia may not be outright dead, but it's definitely on life support and would've been killed a long time ago if senior people at Google weren't personally backing it.
It had great foundations but without a concrete use case or product development was constantly pulled in different directions. It seemed like every year a new niche for Fuchsia was on the horizon, 6 months of development time would be dedicated to it, an extremely hacky demo would get the public hyped up, and then the whole thing would be abandoned because it didn't make any business sense.
Starnix, for example, has been completely deprecated. There was even a newer system to replace it which also got cancelled.
* My knowledge is a couple years old at this point and I haven't kept up with recent developments so maybe the future is brighter than I think.
surajrmal 11 hours ago [-]
To wit, Starnix has never been cancelled. Source: I work on Fuchsia.
tbodt 11 hours ago [-]
I work on Starnix and I've never heard of anything meant to replace it. What are you talking about?
intexpress 7 hours ago [-]
They might be thinking about POSIX Lite losing favor
aurelien 1 hour ago [-]
AFAIK Fuchsia stays specific to Google hardware, in contrast to Linux, which is intended to work for everyone.
thornjm 14 hours ago [-]
Would appreciate anyone summarising the key differences here as I can't watch the video at the moment.
__MatrixMan__ 14 hours ago [-]
It seems like Fuchsia components have less that they can assume about their environment and require the caller to be more explicit about what the component can do ("capabilities"). So for instance a docker container might just decide--without the user's say-so--that it wants to write a debug log file to /foo/bar/baz and then it would be up to the user to go find that file if they care. By contrast a Fuchsia component would not by default have the capability to write anywhere, so the user would have to pass in a handle that says "write your logs to this place" if they wanted logs to exist at all.
Linux folk are familiar with working with file descriptors--one just writes to stdout and leaves it to the caller to decide where that actually goes--so that was the example used but it seems like this sort of thing is done with other resources too.
It looks like a design that limits the ways programs can be surprising because they're not capable of doing anything that they weren't explicitly asked to do. Like, (I'm extrapolating here) they couldn't phone home all sneaky like because the only way for them to be able to do that is for the caller to hand them a phone.
It's got strong "dependency injection" vibes. I like it.
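In Rust-ish terms, the difference looks roughly like this (a hypothetical sketch, not Fuchsia's actual API, which expresses this with handles and FIDL protocols rather than `Write` traits):

    use std::io::Write;

    // Ambient authority (the Docker-ish world): the component decides on
    // its own to write somewhere the user never granted.
    #[allow(dead_code)]
    fn run_ambient() -> std::io::Result<()> {
        let mut log = std::fs::File::create("/foo/bar/baz/debug.log")?; // surprise!
        writeln!(log, "started")
    }

    // Capability style: the component can only write where the caller said.
    // `log` is whatever the parent chose to pass in: a file, a pipe, a
    // network sink, or nothing at all if logging was never granted.
    fn run_with_capability(log: &mut dyn Write) -> std::io::Result<()> {
        writeln!(log, "started")
    }

    fn main() -> std::io::Result<()> {
        // The caller injects the capability explicitly.
        let mut sink = Vec::new(); // or a File opened at a path the user picked
        run_with_capability(&mut sink)
    }

The second function literally cannot write anywhere the caller didn't choose, which is the dependency-injection vibe.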
ferfumarma 12 hours ago [-]
It's a lot like Sandstorm, the web hosting platform that Kenton Varda created. It failed as a corporation, but is still open source. It's a shame: it was before its time and still holds up incredibly well.
vlovich123 13 hours ago [-]
Sure, but it is allowed, at least as far as I understand, to phone home if it otherwise needs network access. In practice it’s really hard to prevent unauthorized semantic network access once you allow any network access.
The main benefit is that kernel space is drastically smaller, which means that the opportunity for a kernel-level exploit is minimal, vs something like the Linux kernel where a single device driver exploit compromises your entire machine.
bestorworse 13 hours ago [-]
The joy of having a properly implemented capability system is that, well, you can create arbitrary capabilities.
You don't need to give a process/component the “unrestricted network access capability” -- you could give it a capability to eg “have https access to this (sub)domain only” where the process wouldn't be able to change stuff like SSL certificates.
EDIT: and to be clear, fuchsia implements capabilities very well. Like, apart from low-level stuff, all capabilities are created by normal processes/components. So all sorts of fine-grained accesses can be created without touching the kernel. Note that in fuchsia a process that creates/provides a capability has no control on where/to who that capability will be available -- that's up to the system configuration to decide.
vlovich123 9 hours ago [-]
Ok, give me access to a subdomain I control and I’m phoning home and there’s no way you can restrict mysubdomain.foo.com/phonehome vs mysubdomain.foo.com/normal - and even if you tried to do path restrictions, I can arbitrarily side-channel phoning home with normal access (which by the way you can’t unless you’re sniffing the encrypted HTTP session somehow).
Also imagine you are trying to run a browser. It’s implicitly going to be able to perform arbitrary network access, and there’s no way you can restrict it from phoning home aside from trying to play whack-a-mole blocking access to specific subdomains you think are its phone-home servers.
That’s why I said “semantic” capabilities aren’t a thing and I’m not aware of anyone who’s managed to propose a workable system.
__MatrixMan__ 7 hours ago [-]
I imagine one could create a capability such that the app gets a way to shove bits in and a way to get bits out, but no knowledge of the IP address or anything like that. A phone (or set of phones) that are already connected and have no keypad.
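A tiny sketch of that shape (illustrative only; the point is that the parent, not the component, holds the real networking capability and picks the endpoint):

    use std::io::{Read, Write};
    use std::net::TcpStream;

    // The component only sees an opaque, already-connected byte pipe.
    // It never learns the address, can't redial, and can't open new ones.
    fn component_main(mut pipe: impl Read + Write) -> std::io::Result<()> {
        pipe.write_all(b"hello")?;
        let mut buf = [0u8; 64];
        let n = pipe.read(&mut buf)?;
        println!("got {n} bytes back");
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        // The *parent* dials the one endpoint it chose to grant, then
        // hands over only the connected stream.
        let stream = TcpStream::connect("example.com:80")?;
        component_main(stream)
    }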
Jyaif 6 hours ago [-]
> there’s no way you can restrict mysubdomain.foo.com/phonehome vs mysubdomain.foo.com/normal
Of course you can!
With capabilities you can tell a program: "if you want to communicate with the external world, here's the only function you can use":
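Something in this hypothetical shape (the trait and names here are invented for illustration; any real system would define its own):

    /// Hypothetical capability handed to a component: the only way it can
    /// reach the outside world. No sockets, no DNS, no arbitrary URLs --
    /// just one narrow operation against an endpoint the caller chose.
    pub struct UploadError;

    pub trait ReportUploader {
        /// Ship an opaque report. The component controls the payload
        /// bytes and nothing else: not the destination, not the protocol.
        fn upload_crash_report(&self, payload: &[u8]) -> Result<(), UploadError>;
    }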
Ok great. Now I put the phone home stuff within payload. It’s a game of whackamole you’re bound to lose. Like I said - if I control both endpoints, it’s going to be very hard for you to simultaneously give me a pipe connecting them while controlling the set of messages I’m allowed to send.
dekhn 13 hours ago [-]
In my experience lots of folks simply won't work with capability systems no matter how good the implementation is or whatever level of security and configuration granularity is provided.
For many people it's just extra friction in search of a use case.
__MatrixMan__ 7 hours ago [-]
I'm just hearing about capability systems today, so your experience is undoubtedly richer than mine, but I'd estimate that we're just scratching the surface re: ways to harm somebody by making their tech behave in surprising ways.
Maybe once those harms are all grown up, we'll find that fancier handcuffs for our software is worth a bit more than "just extra friction."
surajrmal 12 hours ago [-]
It makes testing a lot easier honestly. Also keep in mind that mobile apps and web apps are fairly capability oriented these days, so I wouldn't say no one will work with it...
onjectic 8 hours ago [-]
I am curious what your experience is with capability-based security? They are still incredibly niche (unfortunately), so I’ve never had a chance to work with one at a job.
surajrmal 13 hours ago [-]
Most components don't need to talk to the network though and therefore do not. The ones that do can do powerful things but creating narrower capabilities to restrict what they can do is very much feasible.
vlovich123 9 hours ago [-]
I’m not against capabilities. I’m just highlighting it’s for the developers to implement protections against malicious intrusions against the OS, not for users to protect against developers doing malicious things.
surajrmal 8 hours ago [-]
While there is no direct UX exposing this to an end user, it could hypothetically be used as the basis of such a UI. The parent of a component gets to ultimately decide what capabilities it routes to a child component. It's not like Landlock, where the process decides to sandbox itself after it's already running. Similar to a user constructing a VM to run a hypothetically malicious program, the same could be done in a much more lightweight way with a Fuchsia component.
vlovich123 8 hours ago [-]
All I said is that in the general case you’re not going to be able to rely on capabilities to do things like prevent phoning home, or otherwise doing things you semantically define as harmful. This isn’t a UX issue; this is a technical issue. Capabilities, outside of very rare circumstances, can’t enforce it no matter how you structure this. The only rare circumstance is when you can restrict access to servers that aren’t owned by the same people who wrote the component. As soon as you grant any access at all, they can implement phoning home in ways that capabilities can’t prevent.
stouset 7 hours ago [-]
Your perspective is coming from a very rigid all-or-nothing mentality and I don’t think it’s wise to see things that way.
Sure, a web browser that needs to open arbitrary network connections can be built to phone home. But nearly none of the components it’s built out of can. The image decoding and rendering libraries can’t touch the network, the rendering engine can’t touch the network, and nor can the dozens of other subcomponents it needs to work.
Your installed editor extensions can’t phone home even if the editor itself can. Or perhaps even the editor itself wouldn’t be able to, if extensions are installed out of band.
Your graphics driver vendor can’t phone home, your terminal can’t phone home, and on and on and on.
A solution doesn’t have to be perfect for it to be an improvement, so stop acting like it does.
vlovich123 5 hours ago [-]
But your editor extensions can’t phone home only if your editor sandboxes them into a separate process. Hint: VSCode doesn’t do such sandboxing and neither do most editors that I can think of.
Anyway, you’ve just proven my point with “install extensions out of band”: you’ve conceded that it’s a losing position technically and are arguing for alternative UX solutions. I’m not pretending it has to be perfect. Like I said, capabilities are great for creating a secure OS and writing more secure software more generally. But the threat model they protect against is not software that phones home; it’s the size of the exploit opened up by a compromise.
Think about it this way, Android apps and iOS apps are largely sandboxed through a primitive capabilities system already, not super fine-grained capabilities but still the same concept. Would you care to claim that privacy and malware isn’t a problem on these systems or that the permissions model has meaningfully curtailed anything but the most egregious of problems?
stouset 4 hours ago [-]
Your editor doesn’t do it because handling, delegating, and slicing up capabilities isn’t a core part of the OS.
vlovich123 2 hours ago [-]
Firstly, VSCode runs on 3 major OSes that don’t have this capability, and such software ends up the way it is partially because of targeting the lowest common denominator. Only a Fuchsia-first editor would do this.
Secondly, the editor also does it this way for reasons other than support within the OS: even with components, it would need to design a capabilities model for extensions and a sandbox process to maintain the permissions. It’s much easier to just run the extensions in-process and not think about it.
nicce 13 hours ago [-]
Sounds like it just has AppArmor/seccomp/SELinux policies built in. You can usually achieve the same with those.
surajrmal 8 hours ago [-]
The difference is that those solutions are mandatory access control. Fuchsia doesn't have a global namespace that everyone shares. Each component gets its own view of the world based on what is passed to it. This is often easier to work with than MAC. It's similar to writing a program without relying on globals for state, instead passing everything into every function that needs it.
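The analogy in miniature (illustrative Rust, not actual Fuchsia namespace code; the paths and entries are made up):

    use std::collections::HashMap;

    /// A component's entire "world": only what its parent chose to map
    /// in. There is no shared global / tree to fall back on.
    struct Namespace {
        entries: HashMap<String, String>, // path -> what it resolves to
    }

    impl Namespace {
        fn resolve(&self, path: &str) -> Option<&String> {
            self.entries.get(path) // absent = capability was never routed
        }
    }

    fn main() {
        // MAC (SELinux-style): everyone sees one tree, policy says "deny".
        // Namespace-style: the denied thing simply doesn't exist for you.
        let child_view = Namespace {
            entries: HashMap::from([
                ("/data".to_string(), "isolated storage".to_string()),
                ("/svc/logger".to_string(), "the one protocol offered".to_string()),
            ]),
        };
        assert!(child_view.resolve("/etc/passwd").is_none()); // not denied: nonexistent
        assert!(child_view.resolve("/svc/logger").is_some());
    }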
warkdarrior 14 hours ago [-]
From the slide deck, it seems that Fuchsia components have the following characteristics, which make them different from Linux containers:
* Capability-centric design
* Single machine scope
* Tree of sandboxes
* Weaker inter-sandbox fault tolerance
* Standardized IPC system
* Model powers low-level OS features
* More detailed inputs/outputs from sandbox
* Configuration and building in separate files
* Sandboxes can encapsulate other sandboxes
jackpeterfletch 14 hours ago [-]
Is it similar to NixOS? Recent convert, would be interested to read a comparison to fuchsia from someone in the know of both.
If it’s anywhere close Google might be sat on a huge opportunity to tread the same ground while solving the ergonomic issues that NixOS has. (I’ve never been more happy with a distro, but I’ll admit it took me months to crack)
gryn 13 hours ago [-]
NixOS is built on the Linux kernel; Fuchsia is built on a new (micro-ish) kernel called Zircon. They are not interchangeable.
They are working on some components/layers to run things from Linux, but you would not expect everything built for Linux to work directly, or as well as things designed from the get-go with Fuchsia in mind.
jackpeterfletch 4 hours ago [-]
Thanks - I figured it's a step away in terms of target platform.
I meant a little more in the way that software is packaged and run. My understanding is that there's a similar mechanism for storing and linking shared libraries, meaning multiple versions can coexist and be independently linked depending on the requirements of the calling package.
cletus 9 hours ago [-]
Xoogler here. I never worked on Fuchsia (or Android) but I knew a bunch of people who did and in other ways I was kinda adjacent to them and platforms in general.
Some have suggested Fuchsia was never intended to replace Android. That's either a much later pivot (after I left Google) or it's historical revisionism. It absolutely was intended to replace Android and a bunch of ex-Android people were involved with it from the start. The basic premise was:
1. Linux's driver situation for Android is fundamentally broken and (in the opinion of the Fuchsia team) cannot be fixed. Windows, for example, spent a lot of effort isolating issues within drivers to avoid kernel panics. Also, Microsoft created a relatively stable ABI for drivers. Linux doesn't do that. The process of upstreaming drivers is tedious and (IIRC) it often doesn't happen; and
2. (Again, in the opinion of the Fuchsia team) Android needed an ecosystem reset. I think this was a little more vague and, from what I could gather, meant different things to different people. But Android has a strange architecture. Certain parts are in the AOSP but an increasing amount was in what was then called Google Play Services. IIRC, an example was an SSL library. AOSP had one. Play had one.
Fuchsia, at least at the time, pretty much moved everything (including drivers) from kernel space into user space. More broadly, Fuchsia can be viewed in a similar way to, say, Plan 9 and microkernel architectures as a whole. Some think this can work. Some people who are way more knowledgeable and experienced in OS design seem to be pretty vocal saying it can't because of the context switching. You can find such treatises online.
In my opinion, Fuchsia always struck me as one of those greenfield vanity projects meant to keep very senior engineers. Put another way: it was a solution in search of a problem. You can argue the flaws in Android architecture are real but remember, Google doesn't control the hardware. At that time at least, it was Samsung. It probably still is. Samsung doesn't like being beholden to Google. They've tried (and failed) to create their own OS. Why would they abandon one ecosystem they don't control for another they don't control? If you can't answer that, then you shouldn't be investing billions (quite literally) into the project.
Stepping back a bit, Eric Schmidt when he was CEO seemed to hold the view that ChromeOS and Android could coexist. They could compete with one another. There was no need to "unify" them. So often, such efforts to unify different projects just lead to billions of dollars spent, years of stagnation and a product that is the lowest common denominator of the things it "unified". I personally thought it was smart not to bother but I also suspect at some point someone would because that's always what happens. Microsoft completely missed the mobile revolution by trying to unify everything under Windows OS. Apple were smart to leave iOS and MacOS separate.
The only fruit of this investment and a decade of effort by now is Nest devices. I believe they tried (and failed) to embed themselves with Chromecast
But I imagine a whole bunch of people got promoted and isn't that the real point?
raggi 5 hours ago [-]
This is probably the most complete story told publicly, but there was a lot of timeline with a lot of people in it, so as with any such complicated history "it depends who you ask and how you frame the question": https://9to5google.com/2022/08/30/fuchsia-director-interview...
murderfs 4 hours ago [-]
I remember reading the fuchsia slide deck and being absolutely flabbergasted at the levels of architecture astronautics going on in it. It kept flipping back and forth between some generic PM desire ("users should be able to see notifications on both their phone and their tablet!") to some ridiculous overcomplication ("all disk access should happen via a content-addressable filesystem that's transparently synchronized across every device the user owns").
The slide with all of the "1.0s" shipped by the Fuchsia team did not inspire confidence, as someone who was still regularly cleaning up the messes left by a few select members, a decade later.
sigmonsays 14 hours ago [-]
What are the target use cases?
like mobile, servers, desktops, tablets?
bestorworse 14 hours ago [-]
It's technically a general-purpose OS. They had a workstation build target sometime ago which was used for the desktop use-case. They've shipped only for an IoT device so far (Google Nest Hub).
Main goal would be to replace the core of AOSP considering the main work that's being done, but it seems like Google isn't convinced it's there yet.
dekhn 14 hours ago [-]
Hasn't this project been running for (checks notes) almost ten years now? Isn't that enough runway to determine that it's never going to replace AOSP at this rate?
bestorworse 13 hours ago [-]
Yep. It's anyone's guess what's been going on there. Lots of theories out there. IMO Google doesn't consider this a high priority, and the cost of keeping development going, considering the engineers working on it, is low enough.
Also note that swapping out the core of a widely used commercial OS like AOSP would be no easy feat. Imagine trying to convince OEMs, writing drivers practically from scratch for all the devices (based on a different paradigm), the bugs due to incompatibility, etc.
alex-mohr 13 hours ago [-]
As far as I could tell, its main goal was to have fun writing an OS. At that, it seems to have succeeded for a number of the people involved?
In terms of impact or business case, I'm missing what the end goal for the company or execs involved is. It's not re-writing user-space components of AOSP, because that's all Java or Kotlin. Maybe it's a super-longterm super-expensive effort to replace the Linux underlying Android with Fuchsia? Or for ChromeOS? Again, seems like a weird motivation to justify such a huge investment in both the team building it and a later migration effort to use it. But what else?
dekhn 13 hours ago [-]
When I worked at $GOOG my manager left the team to work on Fuchsia and he described it as a "senior engineer retention project", but also the idea was to come up with a kernel that had a stable interface so that vendors could update their hardware more easily compared to linux.
Many things that google did when I was there was simply to have a hedge, if the time/opportunity arose, against other technologies. For example they kept trying to pitch non-Intel hardware, at least partly so they could have more negotiation leverage over Intel. It's amazing how much wasted effort they have created following bad ideas.
andrekandre 12 hours ago [-]
> the idea was to come up with a kernel that had a stable interface so that vendors could update their hardware more easily
interesting... if that was a big goal, i wonder why they didn't go with some kind of adapter/meta-driver (and just maintain that) to the kernel that has a stable interface.
maybe long-term not viable i guess...?
cmrdporcupine 12 hours ago [-]
The problem with Fuchsia is it went from that to "We're taking all your headcount and rewriting your entire project on Fuchsia" and then started making deadline promises to upper management that it couldn't fulfill.
They seemed to have unlimited headcount to go rewrite the entire world to put on display assistant devices that had already shipped and succeeded with an existing software stack that Google then refused to evolve or maintain.
Fuchsia itself and the people who started it? Pretty nifty and smart. Fuchsia the project inside Google? Fuck.
mmooss 13 hours ago [-]
How long does it take to develop a general purpose, fully capable OS, from scratch? Not a *NIX / POSIX variant, but brand new?
(IIUC, it's brand new?)
toast0 5 hours ago [-]
It really depends. If you have a good, small enough team, and a clear design, with a well defined and limited scope, it shouldn't take that long.
If your team is too large, and especially if you don't know what the use case is, it can take a very long time. You asked for general purpose and fully capable, so you're probably in this case, but I think the desired use cases for Fuchsia could be scoped to way less than general purpose and fully capable: a ChromeOS replacement needs only to run Chrome (which isn't easy, but...), and an Android replacement needs only to run Android apps (again, not easy), and the embedded devices only run applications authored by Google with probably a much smaller scope.
But it also depends on what 'from scratch' means. Will you lean on existing development tools, hosted on an existing OS? Will you borrow libraries where the scope and license are appropriate? Are you going to build your own bootloader or use an existing one?
bestorworse 13 hours ago [-]
Yeah, it's brand new as far as you would consider in practice (they use existing libraries and the like).
The answer is: not much time. The real question is how long to develop good-quality drivers for a given platform (say, an x64 laptop)? How long to port/develop applications so that the OS is useful? How long to convince OEMs, app developers, and such folks to start using your brand-new OS? It's a bootstrap problem.
mmooss 13 hours ago [-]
> The answer is not much time.
That would be surprising. Where do you get that? I don't mean toy OSes or experiments. Linux, MacOS and Windows are still in development and I can't imagine the number of hours invested.
> they use existing libraries and the like
Where can I find out about that? Thanks.
wqaatwt 12 hours ago [-]
IIRC it didn't take that long to develop the first production versions of macOS? A couple of years maybe?
It's not like Fuchsia was supposed to be a "fully capable OS developed from scratch", either? I mean it's "just" the kernel and other low-level components; most of the software stack would remain the same as Android/Linux, at least for the time being.
RossBencina 8 hours ago [-]
> first production versions of macOS? A couple of years maybe?
Ok, I'll bite. If we're talking classic Macintosh OS, perhaps.[0] macOS? No way. The first Mac OS X was released in 2001, and was in development between 1997 and 2001 according to Wikipedia.[1] But the bulk of the OS already existed 1997. Mac OS X was a reskin of NeXTStep. NeXTStep was released in 1989, final release 1995, final preview 1997 (just before Apple sold out to NeXT).[2] NeXTStep was in production for quite some time before the x86 version shipped (around '95 from memory). In case you are wondering, I can assure you that NeXTStep was a very capable OS. NeXTStep was in development for a couple of years before the first hardware shipped in 1989. NeXTStep was built on top of Mach and BSD 4.3 userspace. Mach's initial release was 1985.[3]. Not sure how long the first release of Mach took to develop. You can check BSD history yourself. But I'd say, conservatively, that macOS took at least 14 years to develop.
> IIRC it didn't take that long to develop first production versions of macOS?
If you mean the early 1980s OS, that is not comparable. It probably ran in something like 512K of memory off of a 5.25" floppy disk (or a tape?).
> It's not like Fuschia was supposed to be a "fully capable OS developed from scratch", either? I mean it's "just" the kernel and other low level components
I don't know the answer, but doesn't the second sentence describe Linux?
fredoralive 39 minutes ago [-]
The original Mac has 128kB RAM, a 64kB ROM with a fair chunk of the OS in it, and used 400kB single-sided 3.5" discs. The paltry RAM is generally considered to be the main problem, but the Mac team were working to a target price of $1500 (which they missed), and that's all they could afford, with the largish ROM being a compensation. A quick unscientific look at Byte's January 1984 issue seems to show 128kB as the base level for IBM PC clones at the time as well, but they don't have a GUI.
In comparison, the Lisa OS required 1MB RAM and a 5MB hard disc, hence the eye watering $10,000 introductory price.
Development on the Mac apparently started in 1979, with release in 1984, although the early Jeff Raskin-era machine was quite different to the final Steve Jobs-led product.
kllrnohj 13 hours ago [-]
Android is very unapologetically Linux and it's unlikely anyone seriously proposed doing anything other than use Linux.
Fuchsia more likely was for all the stuff that Google kept experimenting with using Android just because it was there rather than because it was a good fit - wearables, IoT, AR/VR, Auto, etc...
rjsw 59 minutes ago [-]
Other operating systems have emulated Linux so that you can run Linux userland applications on top of a different kernel, WSL1 and FreeBSD are good examples.
yjftsjthsd-h 13 hours ago [-]
> wearables, IoT, AR/VR, Auto, etc...
Why would Android be a poor fit for those?
kllrnohj 10 hours ago [-]
I didn't say it was, although for wearables & IoT Android is pretty large.
MobiusHorizons 12 hours ago [-]
I believe the only product that currently ships with Fuchsia is the Google Nest Hub. I could also imagine it running on meeting room hardware for Google Meet, although I don't believe that is true today. I would imagine this is largely a defense-in-depth type of security measure, where it limits the blast radius of vulnerabilities in services. Beyond that, it is not hard to imagine use-cases that would benefit from running less-trusted code, especially if that code comes from third parties like an app store or some sort of company-specific add-on.
1. Fuchsia is a general-purpose operating system built to support Google’s consumer hardware.
2. It’s not designed to compete with or replace Android. Its goal is to replace Linux, which Android is built on. One big challenge it addresses is Linux’s driver problem. If Fuchsia succeeds, Android apps could run on it.
Fuchsia isn’t trying to replace Android. Its survival for over a decade—through layoffs and with hundreds still working on it—says a lot.
I can’t predict Fuchsia’s future, but it’s already running on millions of devices. The git logs show big strides in running Linux programs on Fuchsia without recompilation, as well as progress on the Android runtime. The best way to predict Fuchsia’s future is to look at its hardware architecture support and which runtime is getting attention.
Fuchsia’s success will likely depend more on market forces than on technical innovation. Linux is “good enough” for most needs, and its issues may not justify switching. The choice between sticking with Linux or moving to Fuchsia often favors Linux.
Still, I hope Fuchsia succeeds.
In the meantime, here's Chris's interview: https://9to5google.com/2022/08/30/fuchsia-director-interview...
Main difference from Linux: a stable driver API, so vendors could build their binary blobs and support them more easily, without open-sourcing them the way Linux demands.
I think the best way to look at it is like any software: there's Fuchsia The Artifact (thing that is made) and Fuchsia The Product (how thing is used, and how widely). I don't know anything about operating systems, but my understanding is that the engineers are very happy with Fuchsia The Artifact. Fuchsia The Product has had some wandering in the wilderness years.
This is like a textbook example of weak leadership of an executive team.
The power jockeying of a fiefdom’s chieftain (power reduction mitigation in this case) is allowed to drive the organizational structure and product strategy.
The word "meant" is doing a lot of heavy lifting here. Meant by whom? The technology itself doesn't want anything.
Do some people want to use wasm instead of JavaScript for websites? Yes. Will JS ever be removed from web browsers? Probably not, no. Wasm isn’t a grand design with a destiny it’s “meant to” reach. It’s actually just some code written by a bunch of people trying to solve a bunch of disparate problems. How well wasm solves any particular problem depends on the desires and skills of the people in the room, pushing the technology forward.
It’s kind of like that for everything. Rust was never meant by its original creator to be a high-performance systems language, but the people in the room pushed it in that direction. Fuchsia could replace Linux in Android. I’m sure some people want that to happen, and some people don’t. There’s no manifest destiny. What actually happens depends on a lot of arguing in meeting rooms somewhere. How that turns out is anyone’s guess!
Says a lot about managing to cling onto another product as a dependency to save the team from cancellation. Gotta thank the Directors for playing politics well. (Dart also played that game.)
Does not say much about necessity. I won't be surprised if it gets DOGE'd away at some point.
> One big challenge it addresses is Linux’s driver problem
Android devices have been plagued by vendors having out-of-tree device drivers that compile for Linux 3.x but not 4.x or 5.x, so the phone is unable to update to a new major Android version with a new Linux kernel.
A micro-kernel with a clearly defined device driver API would mean that Google could update the kernel and android version, while continuing to let old device drivers work without update.
That's consistently been one of the motivating factors cited, and Linux's monolithic design, where the internal driver API has never been anything close to stable, will not solve that problem.
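For illustration only, the classic shape of such a contract is a versioned ops table; this is a made-up miniature, not Fuchsia's actual driver interface (which is defined in FIDL and spoken over IPC):

```c
#include <stddef.h>
#include <stdint.h>

#define DRIVER_API_V1 1

/* Hypothetical stable driver ABI for a block device. The kernel (or
 * driver host) only ever calls through this table, so as long as the
 * v1 layout and field meanings never change, an old driver binary
 * keeps loading under new OS releases. */
struct block_driver_ops_v1 {
    uint32_t api_version; /* must be DRIVER_API_V1 */
    int (*read)(uint64_t lba, void *buf, size_t len);
    int (*write)(uint64_t lba, const void *buf, size_t len);
    /* A v2 may append new entry points; it must never reorder or
     * remove these. */
};
```

The promise is simply that the v1 layout is frozen; new capabilities arrive as appended fields or a v2 table, which is exactly the discipline Linux's internal driver API has never offered.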
A monolithic kernel with a clearly defined device driver API would do the same thing. Linux is explicitly not that, of course. Maintaining backwards-compatibility in an API is a non-trivial amount of work regardless of whether the boundary is a network connection, IPC, or function call.
Maybe, but I doubt it. History has shown pretty clearly that driver authors will write code that takes advantage of its privilege state in a monolithic kernel to bypass the constraints of the driver API. Companies will do this to kludge around the GPL, to make their Linux driver look more like the Windows driver, because they were lazy and it was easier than doing it right, and for any number of other reasons. The results include the drivers failing if you look at the rest of the system funny and making the entire system wildly insecure.
If you want a driver that isn't subject to competent code review to abide by the terms of the box in which it lives, then the system needs to strictly enforce the box. Relying on a header file with limited contents will not do the job.
Well, your job is shipping the driver. If the API is limited, and/or your existing drivers on Windows or other OSes do something the Linux driver doesn't, then you have a problem.
Linux kernel pros: it evolves organically
Linux kernel cons: it evolves organically
Android using the absolute newest head/tip version of the Linux kernel sounds like a QA nightmare of its own.
A mobile SoC has to have everything needed to start up the phone, as there is no BIOS-like system that the drivers can work through. Maybe this is a problem that could be solved, but it hasn't been yet.
The same happens in PC land with laptops: you seldom get drivers from Microsoft for laptop-specific components. Those come from the OEM, and you get what you get.
For example, https://download.lenovo.com/eol/index.html
(Historically, that's one big reason there are lots of Android phones that get a fork of whatever release was current some months before they shipped, and never get substantial updates.)
So many GPL violations in the Android world currently
1. Control. It's pretty awkward if your main product depends on an open source community who might say "no" (or "fuck off you worthless imbecile") to half the things you want. You'll end up with a fork (they did!) which has serious downsides.
2. Stable driver ABI.
3. Modern security design. A microkernel, and Rust is used extensively.
https://android.googlesource.com/platform/bionic/ (cf. "What's in libc") / https://github.com/GrapheneOS/platform_bionic/tree/15/docs
Having tea leaves instead of a public strategy and roadmap is what's causing the FUD in the first place. Google probably has good reasons for not making any promises but that hedging comes with a cost.
It feels like a quirk that some of its originators are open-source hackers, which is how Fuchsia ended up being published externally at all. Google definitely doesn't want to attract more killedbygoogle headlines for its experimental projects, and I haven't seen any public Fuchsia evangelization.
If your target platforms are your own smart displays and maybe replacing the Linux kernel in a stack that already doesn't use the Linux userspace, why would you want to spend effort supporting third parties while you're still working on fundamentals?
Microkernels provide nice, secure API boundaries, and modern CPUs have optimizations that reduce the performance impact of crossing them.
The monolithic design forces you to stay in either user or kernel mode as much as possible so as not to lose performance. Add the API and ABI incompatibility on top, and it becomes near impossible to maintain.
It would require a hard fork of Linux, which wouldn't be Linux anymore. Monolithic design is an artifact of the low-register-count CPUs of the past. If you are going to create a hard fork of a kernel, why not use a more modern design anyway?
https://siliconsignals.io/blog/implementing-custom-hardware-...
That's why you used to avoid touching the vendor partition when flashing a custom ROM, etc.
Maybe one could run a Fuchsia-like thing inside Linux and use Linux to provide the Linux userland ABI, but that might be challenging to maintain.
https://github.com/vsrinivas/fuchsia/blob/main/LICENSE
OpenWRT was born because companies were forced to give the source code back to users.
When I was doing performance work on the platform one of the notable things was how slow some of the message passing was, but how little that mattered because of how many active components there are computing concurrently and across parallel compute units. It'd still show up where latency mattered, but there are a ton of workloads where you also basically hide or simply aren't worried about latency increases on that scale.
A counter case though, as an example, is building the system using a traditional C-style build system that basically spams stat(2) at MHz, or these days GHz, speeds. That's basically a pathological case for message passing at the filesystem layer, and it's a good example of why few microkernels that aimed at self-hosting made it over the line. It's probably possible to "fix" it using modern techniques, but it's much easier to fix by adjusting how the compilation process works - a change that has major efficiency advantages even on monolithic kernels. Alas, the world moves slowly on these axes, no matter how much we'd rather see everything move all at once!
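As a rough illustration of why, consider the freshness check at the heart of make-style builds (a sketch; `needs_rebuild` is invented for this example):

```c
#include <sys/stat.h>

/* make-style freshness check: one stat(2) per file. On a monolithic
 * kernel each call is a cheap trap; on a microkernel each one can be
 * a full IPC round-trip to the filesystem server. */
int needs_rebuild(const char *src, const char *obj) {
    struct stat s, o;
    if (stat(src, &s) != 0) return -1; /* source missing: error      */
    if (stat(obj, &o) != 0) return 1;  /* output missing: rebuild    */
    return s.st_mtime > o.st_mtime;    /* rebuild if source is newer */
}
```

Multiply that by tens of thousands of files per build and the message-passing overhead dominates, even though each individual round-trip is cheap.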
Edit, to explain: in iOS, everything revolves around Mach ports, which are capabilities.
https://docs.darlinghq.org/internals/macos-specifics/mach-po...
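For the curious, the underlying primitive is small. This sketch uses the standard Mach calls to allocate a receive right; only a task that has been handed a corresponding send right can message it, which is what makes a port a capability:

```c
#include <mach/mach.h>
#include <mach/mach_error.h>
#include <stdio.h>

int main(void) {
    mach_port_t port = MACH_PORT_NULL;

    /* Create a receive right in our own IPC space. */
    kern_return_t kr = mach_port_allocate(mach_task_self(),
                                          MACH_PORT_RIGHT_RECEIVE, &port);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "mach_port_allocate: %s\n", mach_error_string(kr));
        return 1;
    }

    /* Nothing can send to this port until we explicitly hand out a
     * send right (e.g. via mach_port_insert_right or a port message). */
    printf("holding receive right %u\n", (unsigned)port);
    return 0;
}
```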
At one level, it proves the model. The shame is that Mach otherwise has kind of not taken off. GNU the OS was going to have Mach at the core at one point, IIRC.
Also, I would be interested to see a comparison to the wasm component model as it also seems to want to do the same things docker containers do.
What do you think the answers to those are?
It has been under heavy heavy development for many years now.
The fact that they are starting to talk about it publicly now is probably a sign that they are looking to move beyond just IoT in the future.
For example, I know it’s coming to Android (not necessarily as a replacement, but as a VM), and I know there are some plans around consolidating ChromeOS and Android as well. I expect that is also going to be another place we might see it before too long.
I know they are also working on a full Linux compatibility layer called Starnix [1], where AFAIK the goal is that you can run all your Linux workloads on Fuchsia without any changes. You can probably extrapolate from there that the end state is roughly: anywhere Linux runs currently is a good potential fit for Fuchsia, with a lot of additional security guarantees on top that will make it particularly attractive.
[1] https://fuchsia.dev/fuchsia-src/concepts/components/v2/starn...
If this were something that they were planning on using mainly for internal stuff, like for some sort of competitive advantage in data centers or something, I could understand the radio silence on future plans, but it's hard for me to imagine that's their main purpose when they're publicly putting it on things like the Nest Hub and Chromebooks (they didn't sell any with it AFAIK, but they published a guide for putting it on them). It really feels like they just don't know exactly what to do with it, and they're trying to figure that out as well.
As for ChromeOS and Android, those already feel like a pretty good example of them not having a super clear initial product strategy for how they overlap (and more importantly, how they _don't_), so while some sort of consolidation would make sense, it's not clear to me how Fuchsia would help with that rather than just make things even murkier if they start pushing it more. I'd expect that consolidating them would start with the lower-level components rather than the UI, and my understanding is that Fuchsia (as opposed to Zircon, which is the kernel) has quite a lot of UI-related stuff in it, specifically with Flutter. I'm not saying you're wrong, since it sounds like you might have more relevant knowledge than me, but I can't help but wonder how much of this has really been planned in the long term rather than just played by ear by those with decision-making power.
Fuchsia is not itself a consumer product, it's an open source project meant to be used to build a product. There is no application runtime for app developers to care about or UI for an end user to see. It would be strange to talk about things like mesa or the Linux kernel the way you are talking about fuchsia. There are software layers it does need to integrate with, but unless you work on those things, it's not really interesting to you.
Companies don't really discuss products they build using these open source building blocks while contributing to those projects until after the product launches, either. It shouldn't really matter to the end consumer where and how it gets used, only that when it is used there are tangible benefits (more stable, fewer security problems, etc.). I don't really understand why folks are so keen to know what the internal plans for using it may or may not be.
You're reading too much into a conference presentation.
The team has been allowed to make conference presentations for many years, it's just that most folks haven't wanted to put in the personal effort. A few have in the past, one I know of was Petr: https://www.youtube.com/watch?v=DYaqzEbU0Vk
I would bet, very, very, many dollars it is not coming to Android in any form, Starnix isn't coming soon if ever, and they're not looking to move beyond IoT. Long story short, it shipped on the Nest Hub, didn't get a great rep, and Nest Hubs haven't been touched in years because they're not exactly a profit center.
Meanwhile, observe the Pixel Tablet release in a smart-display form factor, ChromeOS being merged with Android, and the software-minded VP who championed the need for the project moving on, replaced by a hardware VP.
When you mash all that together, what you get is: the future is Android. Or, there is no future. Depending on how you look at it.
(disclaimer: former Googler on Pixel team, all derivable from open source info. I wish it wasn't the case, but it is :/ https://arstechnica.com/gadgets/2023/01/big-layoffs-at-googl... https://9to5google.com/2023/07/25/google-abandons-assistant-... https://9to5google.com/2024/11/18/chrome-os-migrating-androi..., note 7d views on starnix bugs, all 1 or 0, with the exception of a 7 and 4 https://issues.fuchsia.dev/issues?q=status:open%20componenti...)
* had to give it up? TL;DR: A key part of my workflow was being able to remote desktop into a Linux tower for heavy builds. Probably could have made it work anyway, obviously you wouldn't try building Android on a laptop, but a consumer app would be fine. I left to try and pick up some of the work I saw a lot of smart people do towards something better. And monetizing that in the short-term requires supporting iOS/macOS, which only compile on Mac
* My knowledge is a couple years old at this point and I haven't kept up with recent developments so maybe the future is brighter than I think.
Linux folk are familiar with working with file descriptors--one just writes to stdout and leaves it to the caller to decide where that actually goes--so that was the example used, but it seems like this sort of thing is done with other resources too.
It looks like a design that limits the ways programs can be surprising because they're not capable of doing anything that they weren't explicitly asked to do. Like, (I'm extrapolating here) they couldn't phone home all sneaky like because the only way for them to be able to do that is for the caller to hand them a phone.
It's got strong "dependency injection" vibes. I like it.
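A minimal sketch of that vibe in plain POSIX terms (the function and message are invented for the example):

```c
#include <string.h>
#include <unistd.h>

/* The callee holds exactly one capability: the descriptor it was
 * handed. No open(), no connect(), no way to pick its own target. */
int report(int out_fd, const char *msg) {
    return write(out_fd, msg, strlen(msg)) < 0 ? -1 : 0;
}

int main(void) {
    /* The caller decides the destination; redirect stdout and the
     * callee never knows if it's a terminal, a file, or a pipe. */
    return report(STDOUT_FILENO, "hello from a sandboxed component\n");
}
```

Fuchsia generalizes the same move from file descriptors to arbitrary handles and protocols, so "hand them a phone" is literally how a component gets network access.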
The main benefit is that kernel space is drastically smaller, which means the opportunity for a kernel-level exploit is minimal, versus something like the Linux kernel, where a single device-driver exploit compromises your entire machine.
You don't need to give a process/component the “unrestricted network access capability” -- you could give it a capability to eg “have https access to this (sub)domain only” where the process wouldn't be able to change stuff like SSL certificates.
EDIT: and to be clear, Fuchsia implements capabilities very well. Like, apart from low-level stuff, all capabilities are created by normal processes/components. So all sorts of fine-grained accesses can be created without touching the kernel. Note that in Fuchsia a process that creates/provides a capability has no control over where/to whom that capability will be available - that's up to the system configuration to decide.
Also imagine you are trying to run a browser. It’s implicitly going to be able to perform arbitrary network access, and there’s no way you can restrict it from phoning home aside from trying to play whack-a-mole blocking access to specific subdomains you think are its phone-home servers.
That’s why I said “semantic” capabilities aren’t a thing and I’m not aware of anyone who’s managed to propose a workable system.
Of course you can!
With capabilities you can tell a program: "if you want to communicate with the external world, here's the only function you can use:
`void postToMySubDomainSlashWhatever(char* payload, size_t size)`"
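Sketched out (everything here is hypothetical: `send_https` stands in for a privileged broker process that owns the sockets, TLS stack, and certificate store; the sandboxed component is only ever handed this one entry point):

```c
#include <stddef.h>

/* Implemented by the broker; the component has no socket access of
 * its own, so it cannot reach any other host or change TLS settings. */
extern int send_https(const char *host, const char *path,
                      const char *payload, size_t size);

/* The one capability granted to the component. Host and path are
 * hard-wired by whoever granted the capability, not by the component. */
void postToMySubDomainSlashWhatever(char *payload, size_t size) {
    (void)send_https("api.example.com", "/whatever", payload, size);
}
```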
For many people it's just extra friction in search of a use case.
Maybe once those harms are all grown up, we'll find that fancier handcuffs for our software are worth a bit more than "just extra friction."
Sure, a web browser that needs to open arbitrary network connections can be built to phone home. But nearly none of the components it’s built out of can. The image decoding and rendering libraries can’t touch the network, the rendering engine can’t touch the network, and nor can the dozens of other subcomponents it needs to work.
Your installed editor extensions can’t phone home even if the editor itself can. Or perhaps even the editor itself wouldn’t be able to, if extensions are installed out of band.
Your graphics driver vendor can’t phone home, your terminal can’t phone home, and on and on and on.
A solution doesn’t have to be perfect for it to be an improvement, so stop acting like it does.
Anyway, you’ve just proven my point with "install extensions out of band" - you’ve conceded that it’s a losing position technically and are arguing for alternative UX solutions. I’m not pretending it has to be perfect. Like I said, capabilities are great for creating a secure OS and writing more secure software more generally. But the threat model it’s protecting against is not software that phones home; it’s the size of the exploit opened up by a compromise.
Think about it this way: Android apps and iOS apps are largely sandboxed through a primitive capabilities system already - not super fine-grained capabilities, but still the same concept. Would you care to claim that privacy and malware aren’t problems on these systems, or that the permissions model has meaningfully curtailed anything but the most egregious of problems?
Secondly, the editor also does it this way for reasons other than support within the OS, because even with components it would need to design a capabilities model for extensions and a sandbox process to maintain the permissions - it’s much easier to just do the extensions in-process and not think about it.
* Capability-centric design
* Single machine scope
* Tree of sandboxes
* Weaker inter-sandbox fault tolerance
* Standardized IPC system
* Model powers low-level OS features
* More detailed inputs/outputs from sandbox
* Configuration and building in separate files
* Sandboxes can encapsulate other sandboxes
If it’s anywhere close, Google might be sitting on a huge opportunity to tread the same ground while solving the ergonomic issues that NixOS has. (I’ve never been more happy with a distro, but I’ll admit it took me months to crack.)
They are working on some components/layers to run things from Linux, but you would not expect everything to work directly, or as well as things designed from the get-go with Fuchsia in mind.
I meant it a little more in the way that software is packaged and run. My understanding is that there's a similar mechanism for storing and linking shared libraries that means multiple versions can coexist and be independently linked, depending on the requirements of the calling package.
Some have suggested Fuchsia was never intended to replace Android. That's either a much later pivot (after I left Google) or it's historical revisionism. It absolutely was intended to replace Android and a bunch of ex-Android people were involved with it from the start. The basic premise was:
1. Linux's driver situation for Android is fundamentally broken and (in the opinion of the Fuchsia team) cannot be fixed. Windows, for example, spent a lot of time on this to isolate faults within drivers and avoid kernel panics. Also, Microsoft created a relatively stable ABI for drivers. Linux doesn't do that. The process of upstreaming drivers is tedious and (IIRC) it often doesn't happen; and
2. (Again, in the opinion of the Fuchsia team) Android needed an ecosystem reset. I think this was a little more vague and, from what I could gather, meant different things to different people. But Android has a strange architecture. Certain parts are in the AOSP but an increasing amount was in what was then called Google Play Services. IIRC, an example was an SSL library. AOSP had one. Play had one.
Fuchsia, at least at the time, pretty much moved everything (including drivers) from kernel space into user space. More broadly, Fuchsia can be viewed in a similar way to, say, Plan 9 and micro-kernel architectures as a whole. Some think this can work. Some people who are way more knowledgeable and experienced in OS design seem to be pretty vocal saying it can't because of the context switching. You can find such treatises online.
In my opinion, Fuchsia always struck me as one of those greenfield vanity projects meant to keep very senior engineers. Put another way: it was a solution in search of a problem. You can argue the flaws in Android architecture are real but remember, Google doesn't control the hardware. At that time at least, it was Samsung. It probably still is. Samsung doesn't like being beholden to Google. They've tried (and failed) to create their own OS. Why would they abandon one ecosystem they don't control for another they don't control? If you can't answer that, then you shouldn't be investing billions (quite literally) into the project.
Stepping back a bit, Eric Schmidt when he was CEO seemed to hold the view that ChromeOS and Android could coexist. They could compete with one another. There was no need to "unify" them. So often, such efforts to unify different projects just lead to billions of dollars spent, years of stagnation and a product that is the lowest common denominator of the things it "unified". I personally thought it was smart not to bother but I also suspect at some point someone would because that's always what happens. Microsoft completely missed the mobile revolution by trying to unify everything under Windows OS. Apple were smart to leave iOS and MacOS separate.
The only fruit of this investment and a decade of effort so far is Nest devices. I believe they tried (and failed) to embed themselves in Chromecast.
But I imagine a whole bunch of people got promoted and isn't that the real point?
The slide with all of the "1.0s" shipped by the Fuchsia team did not inspire confidence, as someone who was still regularly cleaning up the messes left by a few select members, a decade later.
like mobile, servers, desktops, tablets?
Main goal would be to replace the core of AOSP considering the main work that's being done, but it seems like Google isn't convinced it's there yet.
Also note that swapping the core of a widely used commercial OS like AOSP would be no easy feat. Imagine trying to convince OEMs, writing drivers practically from scratch for all the devices (based on a different paradigm), the bugs due to incompatibility, etc.
In terms of impact or business case, I'm missing what the end goal for the company or the execs involved is. It's not rewriting user-space components of AOSP, because that's all Java or Kotlin. Maybe it's a super-long-term, super-expensive effort to replace the Linux underlying Android with Fuchsia? Or for ChromeOS? Again, it seems like a weird motivation to justify such a huge investment in both the team building it and a later migration effort to use it. But what else?
Many things Google did when I was there were simply to have a hedge, if the time/opportunity arose, against other technologies. For example, they kept trying to pitch non-Intel hardware, at least partly so they could have more negotiation leverage over Intel. It's amazing how much wasted effort they have created following bad ideas.
Maybe it's not viable long-term, I guess...?
They seemed to have unlimited headcount to go rewrite the entire world to put on display assistant devices that had already shipped and succeeded with an existing software stack that Google then refused to evolve or maintain.
Fuchsia itself and the people who started it? Pretty nifty and smart. Fuchsia the project inside Google? Fuck.
(IIUC, it's brand new?)
If your team is too large, and especially if you don't know what the use case is, it can take a very long time. You asked for general purpose and fully capable, so you're probably in this case, but I think the desired use cases for Fuchsia could be scoped to way less than general purpose and fully capable: a ChromeOS replacement needs only to run Chrome (which isn't easy, but...), and an Android replacement needs only to run Android apps (again, not easy), and the embedded devices only run applications authored by Google with probably a much smaller scope.
But it also depends on what 'from scratch' means. Will you lean on existing development tools, hosted on an existing OS? Will you borrow libraries where the scope and license are appropriate? Are you going to build your own bootloader or use an existing one?
The answer is: not much time. The real question is how long it takes to develop good-quality drivers for a given platform (say, an x64 laptop). How long to port/develop applications so that the OS is useful? How long to convince OEMs, app developers, and such folks to start using your brand new OS? It's a bootstrap problem.
That would be surprising. Where do you get that? I don't mean toy OSes or experiments. Linux, MacOS and Windows are still in development and I can't imagine the number of hours invested.
> they use existing libraries and the like
Where can I find out about that? Thanks.
It's not like Fuchsia was supposed to be a "fully capable OS developed from scratch", either? I mean, it's "just" the kernel and other low-level components; most of the software stack would remain the same as Android/Linux, at least for the time being.
Ok, I'll bite. If we're talking classic Macintosh OS, perhaps.[0] macOS? No way. The first Mac OS X was released in 2001, and was in development between 1997 and 2001 according to Wikipedia.[1] But the bulk of the OS already existed in 1997. Mac OS X was a reskin of NeXTStep. NeXTStep was released in 1989, final release 1995, final preview 1997 (just before Apple sold out to NeXT).[2] NeXTStep was in production for quite some time before the x86 version shipped (around '95 from memory). In case you are wondering, I can assure you that NeXTStep was a very capable OS. NeXTStep was in development for a couple of years before the first hardware shipped in 1989. NeXTStep was built on top of Mach and the BSD 4.3 userspace. Mach's initial release was 1985.[3] Not sure how long the first release of Mach took to develop. You can check BSD history yourself. But I'd say, conservatively, that macOS took at least 14 years to develop.
[0] check https://folklore.org/
[1] https://en.wikipedia.org/wiki/Mac_operating_systems
[2] https://en.wikipedia.org/wiki/NeXTSTEP
[3] https://en.wikipedia.org/wiki/Mach_(kernel)
If you mean the early 1980s OS, that is not comparable. It probably ran in something like 512K of memory off of a 5.25" floppy disk (or a tape?).
> It's not like Fuschia was supposed to be a "fully capable OS developed from scratch", either? I mean it's "just" the kernel and other low level components
I don't know the answer, but doesn't the second sentence describe Linux?