> If your business invests in physical servers anticipating strong growth next year, then later finds out it's actually heading into a recession and those servers are no longer needed, that's a sunk cost.
Yes, but that sunk cost is probably still lower than what you paid AWS for the option to scale up and down.
This. And I think people tend not to understand how little actual hardware they are paying for when using AWS et al.
A really cheap server leasing deal will cost you yearly about as much as the purchase price of the server. With opaque AWS services, a single month of subscription fees probably covers the hardware you are indirectly using.
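The claim above is easy to sanity-check with a break-even calculation. A minimal sketch; all of the prices here are hypothetical placeholders, not real quotes from AWS or any leasing provider:

```python
# Back-of-the-envelope: after how many months does buying a server outright
# beat an equivalent cloud bill? All figures below are made-up examples.

def months_to_break_even(purchase_price: float,
                         monthly_cloud_cost: float,
                         monthly_ownership_cost: float) -> float:
    """Months after which the purchase price is recouped, assuming the
    cloud bill exceeds the monthly cost of owning (power, space, upkeep)."""
    monthly_saving = monthly_cloud_cost - monthly_ownership_cost
    if monthly_saving <= 0:
        raise ValueError("cloud is not more expensive per month; no break-even")
    return purchase_price / monthly_saving

# Hypothetical: a $6,000 server vs. a $1,000/month cloud bill for
# comparable capacity, with $250/month for power, space, and upkeep.
print(round(months_to_break_even(6000, 1000, 250), 1))  # 8.0
```

Plug in your own quotes; the point of the parent comment is that for steady workloads the break-even tends to land within the first year.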
I worked for a global company that maintained its own "cloud" of VMs that we'd use for development purposes.
They were entirely unusable.
Opening a relatively small file in Notepad could take multiple minutes, and click and typing response times in the OS were measured in seconds.
Despite wasting thousands of developer hours each year, they refused to upgrade their data center. Probably because doing so would have been a major budget fight that required an executive to actually advocate for something instead of making their characteristic animalistic grunts of agreement.
For better or worse I haven't seen the same issue with cloud expenditure. It seems to be perceived as a necessary expense, rather than the engineering department getting ideas above their station.
I just spent the better part of two years advocating, pushing, and fighting to add new bandwidth to our datacenter.
Thankfully, after they understood the problem, it only took 8 months of procurement, techs going to the data center 10+ times amid endless screw-ups, and everyone pointing the finger at each other.
While the cloud sucks in many ways, the traditional setup has big problems as soon as you hit a midsize company, in my experience.
A cloud vendor (who will remain nameless, as I signed an NDA that specifically prevents me from disparaging them, but one of the big three) ran out of capacity for me, and it took 3 months before they managed to fix it. That was with a couple million dollars a month in spend.
Cloud is still servers; you just depend on someone else's capacity management skills, and you hope there isn't a rush to populate a location (like when a region goes down and everyone's auto-provisioners move to yours).
Barring exceptional circumstances, I don't have to fight that fight at the cloud provider though. Their business is more likely to be amenable to maintaining and expanding reasonable levels of capacity.
I have to deal with a grumpy finance guy that thinks my whole department is overpaid already, especially so if we might use the dreaded `CapEx` word.
I think the main point here is that there is no limit to incompetence. And sure, having your own servers allows for some goofs that won't happen with cloud (the opposite is also true). But your org had the means to fix the issue, and they chose not to. That fundamentally has nothing to do with the technology choice.
Or maybe there's a middle ground. It's not datacenter or cloud: there are providers offering physical servers for rent, for example, and lots of combinations in between.
You can also lease servers from providers like OVH, so you don't even need to bother with the "drive to the datacenter and install it" part. It's more expensive, but still far cheaper than cloud.
Most companies will pay way more for the engineers maintaining their on-premises infrastructure than they would for AWS. On-premises still makes sense when you reach a certain scale.
They kind of need to be there anyway; physically maintaining servers turns out to be a minuscule part of the whole maintenance. If you really care about uptime, you still need people on call who can intervene as necessary.
It's not a minuscule part at a small company. My point was that on-premises makes sense after a certain scale.
Once you have on-premises, you need people who know switches, routers, rackmount servers, hardware, virtualization, etc., plus keeping all of that properly maintained (security patches, IaC, periodic updates, analyzing performance, making sure it's properly architected, etc.).
I often see people saying it's the same cost or less, but it's really not, unless you have no idea what you should be doing.
I don't know. I worked at a few companies that did this early in my career (early 2000s), and it was just the devs or the office IT sysadmin who did this sort of thing. There are lots of people who know enough about switches and routers to get them up and running.
Virtualization, IaC, analyzing performance, right architecture etc is all for later, when you've grown enough to need that.
> Virtualization, IaC, analyzing performance, right architecture etc is all for later, when you've grown enough to need that.
Yeah, I think it might be a different perspective about when that all should be done.
I tend to do that right from the beginning, because I often see it snowball later on and nobody ever fixes it or does it "properly" (in my opinion, possibly not the right one).