Hacker News

Need more details. Matching AWS service for service would be overkill, I imagine, so what services does this need to have? Is hardware virtualisation a must-have?

The way I made systems/applications before AWS existed was to create bootable images. I could already use a hypervisor, Xen, because the OS fully supported both guest and host, all virtualisation modes, before Linux did, and before AWS existed. But because I am not a hosting provider I saw little need for virtualisation. The OS I used also had "unikernel" capability, before Docker, etc. existed. As it happened, this inexpensive self-determination was not to be the future; instead we got "the cloud". Sharing servers with other "tenants". Less expensive for the hosting provider, but more expensive and more limited for the subscriber. No argument, the "limited" part has improved since then, but it is still expensive (for continuous use, that is; spot pricing was a neat benefit of "the cloud").

Anyway, I "deploy" images to "local bare metal", which is a laptop, RPi, or some other smaller form factor. I can use unikernel capability to run some kernel drivers in userspace. A basic filesystem is embedded in the kernel, and a larger filesystem sits on the USB stick in compressed format. Updates are easy enough. I put two kernels on the USB stick: one is the running kernel and the other is the update kernel. Same for the larger filesystem, which may contain servers and configuration files. I can update, or go back to the last working kernel/configuration, by selecting one or the other in the boot menu, or by renaming the larger compressed filesystem.
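The two-slot update scheme above can be sketched as a small script. This is a minimal illustration, not the commenter's actual setup: the file names (`kernel-a`, `rootfs-a.img.gz`, `active`) and the `/tmp/demo-boot` directory are all hypothetical stand-ins for the real boot-stick layout, and the "active" marker stands in for whatever the boot menu actually selects.

```python
from pathlib import Path

# Hypothetical A/B slot layout on the boot stick (all names illustrative).
boot = Path("/tmp/demo-boot")
boot.mkdir(parents=True, exist_ok=True)

# Seed the known-good slot A and mark it as the default boot choice.
(boot / "kernel-a").write_text("kernel v1")
(boot / "rootfs-a.img.gz").write_text("rootfs v1")
(boot / "active").write_text("a")

# Stage an update into the inactive slot B; slot A stays untouched,
# so a bad update can never clobber the last working system.
(boot / "kernel-b").write_text("kernel v2")
(boot / "rootfs-b.img.gz").write_text("rootfs v2")

# Switching the default is a one-line change; rollback writes "a" back.
(boot / "active").write_text("b")
print((boot / "active").read_text())
```

The point of the design is that the update and the rollback are the same cheap operation: flip which slot is selected.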

Here is someone running a search engine out of his living room. AFAIK, his setup survived a sustained HN thundering-herd hug of death without a hiccup.

https://news.ycombinator.com/item?id=28552805

The expense of AWS is an obvious point of discussion, but another one not mentioned here is control. When I create images for bare metal I do not need to jump through any hoops as I would in order to create an image that will run on AWS. Nor do I need to fiddle with all the AWS knobs. There are no silly marketing names for every program I run. I know the system I am creating as well as I know the OS and the software I choose. That is much better than how well I know every aspect of AWS, which just gets more and more complex every year. AWS documentation is as cringeworthy as it is voluminous. The ever-increasing complexity of AWS, including the "tooling", is, IMO, How To Create Lock-In 101.



> The ever-increasing complexity of AWS, including the "tooling", is, IMO, How To Create Lock-In 101

That is an interesting point. Complexity creates lock-in. Why? Because when you are interacting with a complex system, you come to depend on it working in the complex ways it does. It is unlikely that anybody else could duplicate the particular features of AWS your application depends on.

This all runs counter to the idea of "encapsulation". You should be able to use a system via a well-defined interface. Once the interface is well-defined, other providers can provide their own implementation of the same interface.
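To make the encapsulation point concrete, here is a toy sketch. The `BlobStore` interface and `MemoryStore` backend are invented for illustration (this is not any real cloud SDK): the application code depends only on the narrow interface, so any provider could supply its own implementation and the application would not notice.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """A well-defined interface: callers see only put/get."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class MemoryStore(BlobStore):
    """One possible implementation; a rival provider could ship another."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive(store: BlobStore, name: str, payload: bytes) -> bytes:
    # Application logic written against the interface,
    # not against one vendor's sprawling API surface.
    store.put(name, payload)
    return store.get(name)

print(archive(MemoryStore(), "report", b"hello"))
```

Swapping providers then means swapping one constructor, which is exactly the portability that a maze of vendor-specific services forecloses.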

So, AWS is basically bad software engineering, lacking encapsulation?


I would absolutely love to read you blogging about this in detail. It's a lost art, and one I've never been good at. I've been wanting to learn lately, but people like you always fly under the radar and don't document their knowledge. :(



