> but two server rooms in different locations, with resilient power and network is a bit too much effort IMHO

I worked at a company with two server farms in Italy (essentially a main one and a backup), located in two different regions, and we had a total of 5 employees taking care of them.

We never heard from them and didn't even know their names, but we had almost 100% uptime and terrific performance.

There was one single person out of 40 developers whose main responsibility was deploys, and that's it.

It cost the company 800k euros per year to run both server farms (hardware, salaries, energy), and it spared the company around 7-8M in cloud costs.

Now I work for clients that spend multiple millions on cloud for a fraction of the output and traffic, and that employ, I think, 15+ DevOps engineers.

reply
It depends on the complexity of your infra.

Running full-scale Kubernetes, with multiple databases and services and an expected 99.99% uptime, likely can't be handled by one person.
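
To put numbers on those targets, here's a quick sketch of the downtime budget each uptime figure implies (plain arithmetic, nothing vendor-specific):

    # Downtime allowed per year for common uptime targets.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for target in (0.99, 0.999, 0.9999, 0.99999):
        budget = (1 - target) * MINUTES_PER_YEAR
        print(f"{target:.3%} uptime -> ~{budget:.0f} minutes of downtime per year")

99.99% works out to roughly 53 minutes a year, which is why a single on-call person rarely cuts it.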

reply
Takes a team of 3-4 in my experience. One person doesn't cut it once the talk of nines of uptime starts, no matter the scale (and no matter whether it's cloud, dedicated, or on-premises).
reply
> I would rather pay a competent cloud provider than be responsible for reliability issues.

Why do so many developers and sysadmins think they're not competent for hosting services? It is a lot easier than you think, and it's also fun to solve technical issues you may have.

reply
The point was about redundancy / geo spread / HA. It’s significantly more difficult to operate two physical sites than one. You can only be in one place at a time.

If you want true reliability, you need redundant physical locations, power, networking. That’s extremely easy to achieve on cloud providers.

reply
You can just rent rack space in a datacenter and have that covered. It's still much cheaper than running it in the cloud.

It doesn't make sense if you only have a few servers, but if you are renting the equivalent of multiple racks of servers from a cloud provider and running them for most of the day, on-prem is staggeringly cheaper.

We have a few racks and do a "move to cloud" calculation every few years, and without fail the cloud comes out at least 3x the cost.
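
For anyone curious, the shape of that calculation is roughly the sketch below (Python; every figure is a made-up placeholder rather than our real numbers, so plug in your own quotes and bills):

    # Back-of-envelope colo vs. cloud comparison.
    # All figures are illustrative placeholders (EUR/month).
    racks = 3
    rack_rental = 1500          # space, power, cooling, remote hands per rack
    hardware_amortized = 2500   # servers per rack, amortized over ~5 years
    staff_share = 8000          # portion of an admin's salary

    colo_total = racks * (rack_rental + hardware_amortized) + staff_share

    cloud_equivalent = 60000    # comparable instances, storage, egress

    print(f"colo:  ~{colo_total} EUR/month")
    print(f"cloud: ~{cloud_equivalent} EUR/month ({cloud_equivalent / colo_total:.1f}x)")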

And before the "but you need to do more work" whining I hear from people who have never done it: it's not much more work than navigating the forest of cloud APIs and dealing with random black-box issues in the cloud that you can't really debug, only work around.

reply
How much does your single site go down?

On cloud it's out of your control when an AZ goes down. When it's your server you can do things to increase reliability. Most colos have redundant power feeds and internet. On prem that's a bit harder, but you can buy a UPS.

If your head office is hit by a meteor, your business is over anyway. You don't need to prepare for that.

reply
You don't need full "cloud" providers for that, colocation is a thing.
reply
or just be good at hiding the round-trip latency
reply
I'd also add this question: why do so many developers and sysadmins think that cloud companies always hire competent/non-lazy/non-pissed-off employees?
reply
> Why do so many developers and sysadmins think they're not competent for hosting services? It is a lot easier than you think, and it's also fun to solve technical issues you may have.

It is a different skillset. SRE is also under-valued and under-paid (unless one is at a FAANG).

reply
It’s all downside. If nothing goes wrong, then the company feels like they’re wasting money on a salary. If things go wrong they’re all your fault.
reply
Correct
reply
SRE has also lost nearly all meaning at this point, and more or less is equivalent to "I run observability" (but that's a SaaS solution too).
reply
Maybe you find it fun. I don't; I prefer building software, not running and setting up servers.

It's also nontrivial once you go past a certain level of complexity and volume. I have made my career building software, and part of that requires understanding the limitations and specifics of the underlying hardware, but at the end of the day I simply want to provision and run a container. I don't want to think about the security and networking setup; it's not worth my time.

reply
Because when I’m running a busy site and I can’t figure out what went wrong, I freak out. I don’t know whether the problem will take 2 hours or 2 days to diagnose.
reply
Usually you can figure out what went wrong pretty quickly. Freaking out doesn't help with the "quickly" part though.
reply
> Why do so many developers and sysadmins think they're not competent for hosting services?

Because those services solve the problem for them. It is the same thing with GitHub.

However, as predicted half a decade ago when GitHub started becoming unreliable [0], and as price increases begin to happen, you can see that self-hosting starts to make more sense: you get complete control of the infrastructure and of your costs, and it has never been easier to self-host.

> it's also fun to solve technical issues you may have.

What you have just seen with coding agents is going to have the same effect on "developers": their skills will decline the moment they become over-reliant on coding agents, and they won't be able to write a single line of code to fix a problem they don't fully understand.

[0] https://news.ycombinator.com/item?id=22867803

reply
At a previous job, the company had its critical IT infrastructure in its own data centers. It was not in the IT industry, but it was large and rich enough to justify two small data centers, notably with batteries, diesel generators, 24/7 teams, and some advanced security (for valid reasons).

I agree that solving technical issues is very fun, and hosting services is usually easy, but having resilient infrastructure is costly and I simply don't like to be woken up at night to fix stuff while the company is bleeding money and customers.

reply
> Maintaining one server room in the headquarters is something, but two server rooms in different locations, with resilient power and network is a bit too much effort IMHO.

Speaking as someone who does this, it is very straightforward. You can rent space from people like Equinix or Global Switch for very reasonable prices. They then take care of power, cooling, cabling plant etc.

reply
Yes, we still use Azure for user-facing services and the website. They don't need GPUs or other expensive resources, so it's not as worth it to bring those in-house.

We also rely on GitHub. It has historically been a good service, but it's getting worse.

reply
Unfortunately we experienced an issue where our Slurm pool was contaminated by a misconstrained Postgres Daemon. Normally the contaminated slurm pool would drain into a docker container, but due to Rust it overloaded and the daemon ate its own head. Eventually we returned it to a restful state so all's well that ends well.

(hardware engineer trying to understand wtaf software people are saying when they speak)

reply
I don't get why almost everyone insists on comparing cloud to on-premises and not to dedicated servers. Why would anyone run their own DC infra when there's Hetzner and many others?
reply