As the famously misquoted saying goes: build it, and they will come. Since the quote is always taken out of context, allow me to add: brag about it, and they will laugh. This is vaporware. This, whatever this is (more on that below), will not be ready in six months, and when something finally is demoed at some future date, we will all laugh at how it delivers none of the grandiose promised bag of features, and how whatever little it does do, it does badly.
Hasn't Canonical learned not to overpromise and underdeliver? The examples are many: Juju, the Ubuntu Phone, etc.
Oh, and speaking of Juju and what Canonical hasn't learned: it appears it still hasn't learned that when it announces something, it has to explain in clear terms what the thing does. In the same vein as "Juju is DevOps distilled", LXD is everything and nothing. It's a hypervisor, but it's not really a hypervisor. So I did my own research, and after heroic effort I think I understand what this supposedly is. Mind you, I am not sure, but here goes nothing.
LXD is tooling over Linux namespaces, cgroups, and associated technology that manages containers and can move them between different hosts, using CRIU or something like CRIU (is CRIU even production-ready?).
Now that is one sentence that I think everyone understands. Instead of it we got vague statements evoking delusions of grandeur, unwarranted comparisons to virtual machines, inappropriate and misleading use of the word hypervisor, and mentions of Ceph and OpenStack. And we still haven't learned what the supposed hardware integration is. They say they can't tell us because of contractual obligations. That sounds lame, but whatever; if they can't tell us what it is, why mention it at all? To tick another buzzword off the list?
Now, what is the purpose of this thing? I genuinely don't know. What's wrong with Docker? Docker can't move containers. Surely that could be solved (if it can be solved at all) more easily than building this whole contraption from scratch. Docker is meant to run processes, while they seem to want to run full Linux distributions. Is that it? I don't know. Why do they even want to run full Linux distributions? We don't know. It might have something to do with the fact that this is what Juju expects. It might not; who knows.
I'll stop speculating here because we have very little to work with.
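Since CRIU keeps coming up: to ground the speculation a little, the checkpoint/restore cycle that a CRIU-based migration would build on looks roughly like the sketch below. The subcommands are CRIU's real `dump`/`restore`; the target PID, the image directory, and the copy step are illustrative placeholders, and actually running this needs root and a CRIU-capable kernel.

```shell
# Checkpoint a running process tree into an image directory.
# $TARGET_PID and /tmp/ckpt are illustrative placeholders.
mkdir -p /tmp/ckpt
criu dump -t "$TARGET_PID" -D /tmp/ckpt --shell-job

# ...ship /tmp/ckpt to the destination host (rsync, scp, ...)...

# On the destination, resurrect the process tree from the images.
criu restore -D /tmp/ckpt --shell-job
```

Everything a real migration layer adds on top of those two commands (coordinating the copy, re-plumbing the network) is exactly the part nobody has explained.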
I'm curious why you say that Juju overpromised and underdelivered. I definitely agree that the marketing of the technology detracts from making it clear what the technology does, but Juju is still pretty cool. Perhaps you're thinking of MAAS (which is basically unusable)?
No, I mean Juju. Juju promised that it would be ready years ago. It wasn't. Then it promised that the Go rewrite would take only a short while and that it would be production-ready in very short order. I would not call Juju production-ready today, after all these years. Juju promised total independence from the underlying provider: EC2, MAAS, OpenStack, LXC, etc. In reality, the one that sucks least is EC2, and the gap between it and the others is pretty massive.
Juju promised that it would change the way people build infrastructure. It changed nothing. Sure, the reasons for that are complex, and not only technical but social and political as well. But that's kind of my point: Juju promised things it could never have had direct influence over.
Juju has first-class support for Azure, Joyent, HP, MAAS, and whatever OpenStack you want, in addition to EC2. They're all at feature parity (plus or minus some pretty minor stuff).
There are a lot of features still to be added, for sure... but calling it not production ready is disingenuous, IMO.
Spoken like a Juju developer. The list of problems with Juju is not short, but the primary two are whatever the opposite of synergistic is: you are completely locked into Ubuntu at every step, and the support available from Canonical is utterly unacceptable. In this space, "the code is there" is step 1, and it took Juju an awfully long time to get there. Step 2 is "the system is sustainable", and that's currently where Juju falls over.
As for the Ubuntu lock-in: we are actually going to announce support for deploying to Windows very soon (the code is all there now; we just need to work through some issues with the public clouds and Microsoft). And we're actively working on support for other Linuxes (yes, including some in the Red Hat family), which will land in the next few months.
I'm sorry you've had trouble with support. I would love to hear what problems you've seen, and how you think we can do better in that regard. The Juju team is still pretty small, so it can be difficult for us to cover all the places where people might want help, but we generally try to give the best support we can, when given the opportunity.
I'm not sure what you mean about the system not being sustainable, if you could explain, that would be helpful.
> Perhaps you might read a bit about CPU VT extensions, No Execute Bits, and similar hardware security technologies. Use your imagination a bit, and you can probably converge on a few key concepts that will significantly extend the usefulness of Linux Containers.
This is the most interesting part. Anyone want to guess? I'm having a hard time - from a hardware perspective contained processes are currently no different from any other processes, taking advantage of the standard user/kernel divide that hardware has supported for decades; they're merely namespaced differently by the kernel. How do you inject hardware into that?
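To make that concrete: the kernel exposes each process's namespace membership as identifiers under /proc, and two ordinary processes simply share them; there is no hardware state involved. A quick Linux-only sketch:

```shell
# /proc/<pid>/ns/pid names the PID namespace a process lives in.
# An ordinary parent and child report the same identifier:
# "containment" is just the kernel handing out a different one.
parent_ns=$(readlink /proc/self/ns/pid)
child_ns=$(sh -c 'readlink /proc/self/ns/pid')
echo "parent: $parent_ns"
echo "child:  $child_ns"
```

Until someone explains otherwise, the hardware's only contribution here is the same user/kernel divide every process already gets.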
I was going to guess they were using the improved support for things like shadow pagetables to help with live migration - but I'm not sure why they couldn't just do that with the existing user/kernel divide, as you say.
Another possibility that occurs to me is that you might use the VT-d device-side stuff to provide some restricted directed access to containers (as you can for full VMs). I'm struggling to think of a device where that would be a significant improvement on what was already available, though...
I've spent a lot of time looking at this area in a cross-platform way (i.e., distribution-neutral, even OS-neutral, even in 'the service's code execution environment might be a language VM instead of an OS' ways; it smacks of Erlang, Clojure, or embedded development). Ubuntu's contributions may be interesting, but a few things in this statement stuck out to me:
(1) Serge Hallyn moved to Ubuntu a fair while back. He was one of the two original LXC developers; we were in touch around 2009/2010, before he moved. The kernel features were, IIRC, originally funded by IBM with a view towards more efficient allocation of resources on their mega-scale platforms, but apparently failed to get adequate traction against the rest of the industry, which is parallelizing on commodity hardware.
(2) CRIU's DOOM demo is a loaded one. Moving containers around is very cute, and it has definite appeal to less technical management, who perceive it as adding flexibility. The real tradeoff, however, is random weirdness across your application or service's execution (from probably-OK latency spikes to probably-not-OK network stream accounting interruptions on third-party devices, plus the associated layer 2 and 3 security hiccups) in exchange for assumed portability. These days that's a crutch, albeit a useful one - particularly for very specific cases such as legacy applications you can't rewrite that require very high uptime - but not something to use if you can avoid it. Removing the monolithic assumptions in your service and incorporating node failure and a distributed execution model is generally a much better idea: cheaper hardware, lower blood pressure, lower TCO, higher uptime.
(3) "Our primary design goal with LXD is to extend containers into process based systems that behave like virtual machines." What does that actually mean? For one, workflow considerations are not at the fore. It also means hybrid paravirt/container or hybrid third-party-cloud/local infrastructures are still going to need another management layer to abstract the various portions of infrastructure into a common interface (as will services relying in whole or in part on non-Linux OSs or execution environments). These are a big part of the infrastructure and process challenge in many, if not most, modern environments. As far as the 'extending' part goes, LXC already offers mostly everything, though the plethora of paradigms available for security, storage, resource allocation, and QoS represents a relatively imposing research and execution challenge for even the most hardened Linux users: there's some ease-of-use work to be done, but how do you achieve that without sacrificing flexibility?
(4) "our customers are, in fact, asking us for complete operating systems that boot and function within a Linux Container's execution space, natively [...]" Ubuntu boots unmodified in Linux Containers today. That is the very easiest thing to achieve, and it has in fact been considered the 'lazy way' of using containers, as opposed to writing custom applications with carefully crafted chroots and minimalistic dynamically-linked library dependency trees, refining custom kernel security toolkit policies for your application to back up carefully considered capability restrictions for the container, network namespaces with additional restrictions, custom cgroup-based multi-subsystem resource caps, etc. The fact that some distros assume they are in full control of the hardware during their init process is largely resolved these days... and I'd wager with little to no thanks to LXD, except perhaps internal lobbying within Ubuntu.
(5) "Cloud Solutions Product Manager" version: LXD is a whole new semantic ... meaningless. Coder version from https://github.com/lxc/lxd : "a REST API, command line tool and OpenStack plugin based on liblxc". 'Nuff said.
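For scale on the 'lazy way' point in (4): booting a complete distro in a container has long been a two-command affair with plain LXC. A sketch assuming the stock lxc-* tools and the standard ubuntu template (the container name is illustrative, and this needs root):

```shell
# Build a container from the Ubuntu template (fetches a full rootfs)
# and boot the distribution's init inside it, detached.
lxc-create -t ubuntu -n demo
lxc-start -n demo -d

# Get a shell inside the booted system:
lxc-attach -n demo
```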
Yeah, but terminal.com is not the enabling technology; that's the front-end service you provide. LXC will let anyone do this on their own hosts for free.