>Now a good company would concentrate risk on their differentiating factor or the specific part they have competitive advantage in.
Yes, but one differentiating factor is always price, and you don't want to lose all your margins to some infrastructure provider.
Think of a ~5000 employee startup. Two scenarios:
1. if they win the market, they capture something like 60% margin
2. if that doesn't happen, they simply lose: the VC funding runs out and everyone moves on
In this dynamic, infrastructure costs don't change the bottom line of profitability. But the risk involved in rolling out their own infrastructure can threaten the main product's very existence.
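The asymmetry above can be put into a toy expected-value model. Every number below is invented for illustration; none of it comes from any real company:

```python
# Toy expected-value model of the VC dynamic; all numbers are made up.
p_win = 0.1                     # assumed chance the startup wins its market
revenue_if_win = 1_000_000_000  # assumed revenue in the win scenario ($)

def expected_profit(margin):
    # The lose branch returns ~0 (the fund runs out), so only the win branch counts.
    return p_win * revenue_if_win * margin

ev_cheap_infra = expected_profit(0.60)   # keep ~60% margin on cheap infra
ev_pricey_infra = expected_profit(0.55)  # hand 5 points of margin to a cloud bill

# The gap is small next to the all-or-nothing bet on winning at all.
print(ev_cheap_infra, ev_pricey_infra)
```

With these invented numbers, the cloud premium shaves the expected outcome by under 10%, while a self-built infrastructure project that sinks the product zeroes it. That's the risk asymmetry.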
>Unless on premises helps the bottom line of the main product that the company provides, these decisions don't really matter in my opinion.
Well, exactly. But the degree to which the price of a specific input affects your bottom line depends on your product.
During the dot com era, some VC funded startups (such as Google) made a decision to avoid using Windows servers, Oracle databases and the whole super expensive scale-up architecture that was the risk-free, professional option at the time. If they hadn't taken this risk, they might not have survived.
[Edit] But I think it's not just about cloud vs on-premises. A more important question may be how you're using the cloud. You don't have to lock yourself into a million proprietary APIs and throw petabytes of your data into an egress jail.
But most importantly, there's the pull that companies running their own on-premises infrastructure have on the best talent.
If you don’t, you’ll be stuck trying to figure out data centres: hiring tons of infrastructure experts, managing power consumption. And for what? You won’t sell any more nails.
If you’re a company like Google, having better data centres does relate to your products, so it makes sense to focus on them and build your own.
Capex needs lead time. A couple of years, at least.
If you are willing to put in the work, your mundane computer is always better than the shiny one you don't own.
Of course, creating a VM is still a Terraform commit away (you're not using ClickOps in prod, surely).
If you want a custom server, one or a thousand, it's at least a couple of weeks.
If you want a powerful GPU server, that's rack + power + cooling (and a significant lead time). A respectable GPU server means ~2 kW of power dissipation and considerable heat.
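Back-of-the-envelope arithmetic for what ~2 kW per box implies. The electricity price and PUE below are assumptions for illustration, not figures from the thread:

```python
# One ~2 kW GPU server: energy, cooling load, and power bill (constants assumed).
power_kw = 2.0
hours_per_month = 24 * 30
pue = 1.5              # assumed Power Usage Effectiveness (cooling overhead)
usd_per_kwh = 0.12     # assumed electricity price

energy_kwh = power_kw * hours_per_month * pue   # wall energy incl. cooling
monthly_cost = energy_kwh * usd_per_kwh
btu_per_hour = power_kw * 1000 * 3.412          # heat the cooling must remove

print(f"{energy_kwh:.0f} kWh/month, ${monthly_cost:.0f}/month, {btu_per_hour:.0f} BTU/h")
```

So each such server is roughly a space heater and a half running around the clock, which is why rack, power and cooling dominate the lead time.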
If you want a datacenter of any size, now that's a year at least from breaking ground to power-on.
But we are talking about a cost difference of tens of times, maybe a few hundred. The cloud does not come out ahead "most of the time".
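Illustrative arithmetic behind a claim like this. Every price here is a made-up assumption, not a quote from any provider, and real comparisons hinge heavily on utilization:

```python
# Hypothetical 3-year comparison for one beefy box; all prices invented.
months = 36

# On-prem: buy once, then power/colo/remote-hands.
server_capex = 15_000          # assumed purchase price ($)
onprem_opex_per_month = 200    # assumed power + colo + maintenance ($)
onprem_total = server_capex + onprem_opex_per_month * months

# Cloud: a comparable on-demand instance at an assumed hourly rate.
cloud_per_hour = 10.00
cloud_total = cloud_per_hour * 24 * 30 * months

ratio = cloud_total / onprem_total
print(f"on-prem ${onprem_total:,}, cloud ${cloud_total:,.0f}, ratio {ratio:.1f}x")
```

With these invented numbers the gap is roughly an order of magnitude for a continuously loaded machine; egress fees and managed services can push it further, while low utilization or reserved pricing narrows it.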
Scale up, prove the market and establish operations on the credit card; if it doesn’t work, the money moves on to more promising opportunities. If the operation is profitable, you transition away from the too-expensive cloud to increase profitability, using the operation’s incoming revenue to pay for it (freeing up more money to chase more promising opportunities).
Personally I can’t imagine anything outside of a hybrid approach, if only to maintain power dynamics with suppliers on both sides. Price increases and forced changes can be met with instant redeployments off their services/stack, creating room for more substantive negotiations. When investments come in the form of saving time and money, it’s not hard to get everyone aligned.
I think the primary reason people over-fixate on the cloud is that they can't do math. So renting is a hedge.
Even spending 10k recurring can be easier administratively than spending 10k on a one-time purchase that depreciates over a 3-year cycle, because in some organisations you don’t have to go into meetings to debate whether it’s actually a 2- or 4-year depreciation, or discuss the opportunity cost of locking up capital for 3 years, etc.
Getting things done is mostly a matter of getting through bureaucracy. Projects fail because of getting stuck in approvals far more often than they fail because of going overbudget.
Of course not.