> even the internal talent to know whether they are hiring a good infrastructure engineer or not during the interview process.

This is really the core problem. Every time I’ve done the math on a sizable cloud vs on-prem deployment, there is so much money left on the table that the org could afford to pay FAANG-level salaries for several good SREs. But we have never been able to find people to fill the roles, or even to know whether we had found them.

The numbers are so much worse now with GPUs. The cost of reserved instances (let alone on-demand) for an 8x H100 pod, even with NVIDIA Enterprise licenses included, leaves tens of thousands of dollars per pod for the salaries of the employees managing it. Assuming one SRE can manage at least four racks, the hardware pays for itself - if you can find even a single qualified person.

reply
I work in SRE and the way you describe it would give me pause.

The first is that SRE team size primarily scales with the number of applications and the level of support. It does scale with hardware, but sublinearly, whereas it usually scales superlinearly with the number of applications. It takes far less effort to manage 100 instances of a single app than 1 instance each of 100 separate apps (presuming SRE has any support responsibilities for the app). Talking purely in terms of hardware would make me concerned that I’m looking at an impossible task.

The second (which you probably know, but it interacts with my next point) is that you never have single-person SRE teams, because of oncall. Three is basically the minimum; four if you want to avoid oncall burnout.

The last is that I don’t know many SREs (maybe none at all) that are well-versed enough in all the hardware disciplines to manage a footprint the size we’re talking. If each SRE is 4 racks and a minimum team size is 4, that’s 16 racks. You’d need each SRE to be comfortable enough with networking, storage, operating system, compute scheduling (k8s, VMWare, etc) to manage each of those aspects for a 16 rack system. In reality, it’s probably 3 teams, each of them needs 4 members for oncall, so a floor of like 48 racks. Depending on how many applications you run on 48 racks, it might be more SREs that split into more specialized roles (a team for databases, a team for load balancers, etc).

Numbers obviously vary by level of application support. If support ends at the compute layer with not a ton of app-specific config/features, that’s fewer folks. If you want SRE to be able to trace why a particular endpoint is slow right now, that’s more folks.
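
A quick back-of-envelope version of the sizing above (all constants are the rough figures from this comment, not universal rules):

```python
# SRE team sizing floors, using this comment's rough figures as assumptions.

RACKS_PER_SRE = 4      # assumed comfortable hardware load per engineer
MIN_TEAM_SIZE = 4      # oncall floor that avoids burnout
SPECIALTY_TEAMS = 3    # e.g. networking, storage, compute scheduling

# Smallest footprint where a single generalist team "pencils out":
single_team_floor = RACKS_PER_SRE * MIN_TEAM_SIZE
print(single_team_floor)  # 16 racks

# Floor once the work splits into specialized teams, each with its own oncall:
multi_team_floor = single_team_floor * SPECIALTY_TEAMS
print(multi_team_floor)   # 48 racks
```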

reply
> The last is that I don’t know many SREs (maybe none at all) that are well-versed enough in all the hardware disciplines to manage a footprint the size we’re talking. If each SRE is 4 racks and a minimum team size is 4, that’s 16 racks. You’d need each SRE to be comfortable enough with networking, storage, operating system, compute scheduling (k8s, VMWare, etc) to manage each of those aspects for a 16 rack system. In reality, it’s probably 3 teams, each of them needs 4 members for oncall, so a floor of like 48 racks. Depending on how many applications you run on 48 racks, it might be more SREs that split into more specialized roles (a team for databases, a team for load balancers, etc).

That's vastly overstating it. You hit the nail on the head in the previous paragraphs: it's the number of apps (or, more generally speaking, environments) that you manage; everything else is secondary.

And that is especially true with modern automation tools. Doubling the rack count means a big chunk of up-front time moving hardware, of course, but after that there is almost no difference in the time spent maintaining them.

In general, time spent per server goes down as you grow, because you lean more on automation and some tasks can be batched together better.

Like, at a previous job, servers were installed manually, because it was a rare event.

At my current job it's just "boot from network, pick the install option, enter the hostname, press enter". Doing a whole-rack (re)install would take maybe an hour; everything else in the install is automated. You write a manifest for one type/role once, test it, and then it doesn't matter whether it's 2 or 20 servers.

If we grew the server fleet, say, 5-fold, we'd hire... one extra person onto a team of 3. If the number of different applications went up 5-fold, we'd probably have to triple the team size - though there are still some things that could be streamlined further.

Tasks like "go replace a failed drive" might be more common, but we usually batch them once a week (there's enough redundancy) for all the servers that might have died. If we had 5x the number of servers, the time would be nearly the same, because getting there dominates the 30 seconds needed to swap one drive.

reply
Noteworthy: the number of apps isn't affected by whether the machines are in your datacenter or Amazon's.
reply
So your definition of SRE is anybody that works on infra?
reply
I disagree with on-prem being ideal for GPU for most people.

If you're doing regular inference for a product with very flat throughput requirements (and you're doing on-prem already), on-prem GPUs can make a lot of sense.

But if you're doing a lot of training, you have very bursty requirements. And the H100s are specifically for training.

If your H100 fleet ends up less than ~38% utilized over time, you're losing money versus renting.

If you have batch throughput you can run on the H100s when you're not training, you're probably closer to a case for on-prem.

But the other thing to keep in mind is that AWS is not the only provider. It is a particularly expensive provider, and you can buy capacity from other neoclouds if you are cost-sensitive.
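
For what it's worth, a ~38% figure falls out of a simple rate ratio. A sketch, with both hourly rates as illustrative assumptions (roughly amortized self-hosted cost vs committed cloud pricing):

```python
# Where a ~38% break-even utilization can come from. Both rates are
# illustrative assumptions: ~$10/hr all-in for a self-hosted 8xH100 pod
# (amortized hardware plus power/cooling) vs ~$26/hr for committed cloud.

ONPREM_PER_HOUR = 10.0   # assumed: incurred whether or not the pod is busy
CLOUD_PER_HOUR = 26.0    # assumed: incurred only for hours actually used

# On-prem breaks even when utilization * cloud_rate equals the on-prem rate:
breakeven = ONPREM_PER_HOUR / CLOUD_PER_HOUR
print(f"{breakeven:.0%}")  # 38% -- below this utilization, renting is cheaper
```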

reply
Self-hosted 8xH100 is ~$250k, depreciated across three years => $80k/year, with power and cooling => $90k/year (~$10/hour total).

AWS charges $55/hour for EC2 p5.48xlarge instance, which goes down with 1 or 3 year commitments.

With 1 year commitment, it costs ~$30/hour => $262k per year.

3-year commitment brings price down to $24/hour => $210k per year.

This price does NOT include egress and other fees.

So, yeah, there is a $120k-$175k difference that can pay for a full-time on-site SRE, even if you only need one 8xH100 server.

Numbers get better if you need more than one server like that.
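
Redoing the arithmetic above in one place (same figures as the comment; the cloud rates are the quoted approximations, not current list prices):

```python
# Re-running the on-prem vs AWS comparison with the comment's own figures.

HOURS_PER_YEAR = 24 * 365            # 8760

onprem_yearly = 90_000               # ~$250k over 3 years + power/cooling

cloud_1yr = 30 * HOURS_PER_YEAR      # 1-year commitment at ~$30/hr
cloud_3yr = 24 * HOURS_PER_YEAR      # 3-year commitment at ~$24/hr

print(cloud_1yr)                     # 262800 -- ~$262k/year
print(cloud_3yr)                     # 210240 -- ~$210k/year
print(cloud_1yr - onprem_yearly)     # 172800 -- the ~$175k end of the range
print(cloud_3yr - onprem_yearly)     # 120240 -- the ~$120k end of the range
```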

reply
$120K isn't going to cover the fully loaded costs of an SRE who can set up and run that.

Hiring 1 person to run the infrastructure means that 1 person is on-call 24/7 forever.

If there's an issue with the server while they're sick or on vacation, you just stop and wait.

If they take a new job, you need to find someone to take over or very quickly hire a replacement.

There's a second bus factor: What happens when that 8xH100 starts to get flakey? You can't move the jobs to another server because you only have one. You can start diagnosing things and replacing parts and hope it gets to the root issue, but that's more downtime.

Going on-prem like this is highly risky. It works well until the hardware starts developing problems or the person in charge gets a new job. The weeks and months lost to dealing with the server start to become a problem. The SRE team starts to get tired of having to do all of their work on weekends because they can't block active use during the week. Teams start complaining that they need to use cloud to keep their project moving forward.

reply
> $120K isn't going to cover the fully loaded costs of an SRE who can set up and run that.

> Hiring 1 person to run the infrastructure means that 1 person is on-call 24/7 forever.

> If there's an issue with the server while they're sick or on vacation, you just stop and wait.

Very much depends on what you're doing, of course, but "you just stop and wait" during sickness/vacation is sometimes actually good enough uptime -- especially if it keeps costs down. I've had that role before... That said, it's usually better to have two or three people who know the systems (even if they're not dedicated to them full time), to reduce the bus factor.

reply
> There's a second bus factor: What happens when that 8xH100 starts to get flakey? You can't move the jobs to another server because you only have one.

You can still use cloud for excess capacity when needed. E.g. use on-prem for base load, and spin up cloud instances for peaks in load.

reply
> There's a second bus factor: What happens when that 8xH100 starts to get flakey? You can't move the jobs to another server because you only have one. You can start diagnosing things and replacing parts and hope it gets to the root issue, but that's more downtime.

They come with a warranty, often with a technician guaranteed to arrive within a few hours or at most a day. Also, if SHTF, just renting cloud capacity to cover the gap isn't hard.

reply
If a business requires at least a quarter-million dollars' worth of hardware for its basic operation, yet can't pay the market rate for someone to operate it - maybe the fundamentals of that business aren't okay?
reply
> There's a second bus factor: What happens when that 8xH100 starts to get flakey?

These come in a non-flakey variant?

reply
It's called a warranty.

And the other argument: every company I've ever known to use AWS has an AWS sysadmin (sorry, "devops"), same for Azure. Even for small deployments. And departments want their own person/team.

reply
>If there's an issue with the server while they're sick or on vacation, you just stop and wait.

You can ask AI to troubleshoot and fix the issue.

reply
Out of all the comments on numbers, SREs, and scaling, here's a response that meets numbers with numbers!

> $120K isn't going to cover the fully loaded costs of an SRE who can set up and run that.

Literally this. I can do SRE on-prem and cloud, and my 50/30/20 budget break-even point (as in, needs and savings but no wants - so 70%) is $170k before taxes. Rent is astonishingly high right now, and the sort of mid-career professional you want to handle SRE for your single DC is going to take $150k in this market before fucking off to the first $200k job they get.

Know your market, and pay accordingly. You cannot fuck around with SREs.

> Hiring 1 person to run the infrastructure means that 1 person is on-call 24/7 forever.

This is less of an issue than you might think, but strongly dependent upon the quality of talent you’ve retained and the budget you’ve given them. Shitbox hardware or cheap-ass talent means you’ll need to double or triple up locally, but a quality candidate with discretion can easily be supported by a counterpart at another office or site, at least short-term. Ideally though, yeah, you’ll need two engineers to manage this stack, but AWS savings on even a modest (~700 VMs) estate will cover their TC inside of six months, generally.
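
As a sanity check on that payback claim, here's the per-VM monthly savings it implies. The fully loaded TC figure is my assumption, not the commenter's:

```python
# Per-VM monthly savings needed for 2 engineers' TC to pay back in 6 months.

engineers = 2
loaded_tc_per_year = 250_000        # assumed fully loaded cost per engineer
vms = 700                           # the "modest estate" from the comment
payback_months = 6

tc_to_recover = engineers * loaded_tc_per_year * (payback_months / 12)
per_vm_per_month = tc_to_recover / (vms * payback_months)
print(round(per_vm_per_month, 2))   # 59.52 -- dollars saved per VM per month
```

Roughly $60/VM/month is a modest delta between cloud VM pricing and amortized on-prem capacity, which makes the six-month claim at least plausible.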

> There's a second bus factor: What happens when that 8xH100 starts to get flakey? You can't move the jobs to another server because you only have one. You can start diagnosing things and replacing parts and hope it gets to the root issue, but that's more downtime.

This strikes at another workload I neglected to mention, and one I highly recommend keeping in the public cloud: GPUs.

GPUs on-prem suck. Drivers are finicky, firmware is flaky, vendor support is inconsistent, and SR-IOV is a pain in the ass to manage at scale. They suck harder than HBAs, which I didn’t think was possible.

If you’re consuming GPUs 24x7 and can afford to support them on-prem, you’re definitely not here on HN killing time. For everyone else, tune your scaling controls on your cloud provider of choice to use what you need, when you need it, and accept the reality that hyperscalers are better suited for GPU workloads - for now.

> Going on-prem like this is highly risky.

Every transaction is risky, but the risk calculus for “static” (ADDS) or “stable” (ERP, HRIS, dev/test) work makes on-prem uniquely appealing when done right. Segment out your resources (resist the urge for HPC or HCI), build sensible redundancies (on-prem or in the cloud), and lean on workhorse products over newer, fancier platforms (bulletproof hypervisors instead of fragile K8s clusters), and you can make the move successful and sensible. The more cowboy you go with GPUs, K8s, or local Terraform, the more delicate your infra becomes on-prem - and thus the riskier it is to keep there.

Keep it simple, silly.

reply
This is factually not how it played out in my experience.

The company needed the same exact people to manage AWS anyway. And the cost difference was so high that it could have paid for 5 more people, which we didn't even need.

Beyond the cost, not needing to worry about bandwidth limits and having so much extra compute made a very big difference.

Imo the cloud stuff is just too full of itself if you are trying to solve a problem that requires compute like hosting databases or similar. Just renting a machine from a provider like Hetzner and starting from there is the best option by far.

reply
> The company did need the same exact people to manage AWS anyway.

That is incorrect. On AWS you need a couple of DevOps people who will string together the already existing services.

With on premise, you need someone that will install racks, change disks, setup high availability block storage or object storage, etc. Those are not DevOps people.

reply
> With on premise, you need someone that will install racks, change disks, setup high availability block storage or object storage, etc. Those are not DevOps people.

We have 7 racks and 3 people. The things you mentioned aren't even 5% of the workload.

There are things you figure out once, bake into automation, and just use.

You install a server once and remove it after 5-10 years, depending on how you want to depreciate it. Drives die rarely enough that it's roughly a once-every-2-months event at our size.

The biggest expense is setting up automation (if I were redoing our core infrastructure from scratch, I'd probably need a good 2 months of grind), but after that it's smooth sailing. The biggest disadvantage is "we need a bunch of compute, now", but depending on the business that might never be a problem, and you have enough savings to overbuild a little and still be ahead. Or just get the temporary compute from the cloud.
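
The "dead drive roughly every 2 months" cadence is easy to sanity-check against published annualized failure rates. Drive counts and the AFR below are my assumptions, not the commenter's numbers:

```python
# Does "a dead drive roughly every two months" pencil out for ~7 racks?

racks = 7
drives_per_rack = 60         # assumed: mix of compute and storage nodes
afr = 0.015                  # assumed ~1.5% annualized failure rate

failures_per_year = racks * drives_per_rack * afr
months_between_failures = 12 / failures_per_year
print(round(months_between_failures, 1))  # 1.9 -- months between failures
```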

reply
Moving the physical hardware around is a truly tiny part of the actual job; it's really not relevant. (Especially nowadays - see the top-level comment about how you can do an insane amount, probably more than the median cloud deployment, with a fraction of a rack.)
reply
To be clear, I'm not writing about on-premises. I mean the difference between managed cloud and renting dedicated servers.
reply
Even if you do include physical server setup and maintenance, one or two days per month is probably enough for a couple hundred rack units.
reply
"Those are not DevOps people."

Real DevOps people are competent from the physical layer to the software layer.

Signed,

Aerospace Devop

reply
People will install racks and swap drives for significantly less money than DevOps, lol. People who can build LEGO sets are cheaper than software developers.
reply
Ops people are typically more useful given you probably already have devs.
reply
You need the exact same people to run the infra in the cloud. If they don't have IT at all, they aren't spinning up cloud VMs. You're mixing together SaaS and actual cloud infra.
reply
I'm one of those people, and I don't agree.

Before I drop 5 figures on a single server, I'd like to have some confidence in the performance numbers I'm likely to see. I'd expect folks who are experienced with on-prem to have good intuition about this - after a decade of cloud-only work, I don't.

Also, cloud networking offers a bunch of really nice primitives which I'm not clear how I'd replicate on-prem.

I've estimated our IT workload would roughly double if we added physically racking machines, replacing failed disks, monitoring backups/SMART errors, etc. That's... not cheap in staff time.

Moving things on-prem starts making financial sense around the point your cloud bills hit the cost of one engineer's salary.

reply
> Also, cloud networking offers a bunch of really nice primitives which I'm not clear how I'd replicate on-prem.

Like what?

reply
This is not the case. We had to double staff count going from three cages to AWS. And AWS was a lot more expensive. And now we're stuck.

On top of that no one really knows what the fuck they are doing in AWS anyway.

reply
> The main cost with on-prem is not the price of the gear but the price of acquiring talent to manage the gear. Most companies simply don't have the skillset internally to properly manage these servers, or even the internal talent to know whether they are hiring a good infrastructure engineer or not during the interview process.

That's partially true; managing cloud also takes skill, which most people forget, with the end result being "well, we saved on hiring sysadmins, but had to hire more devops guys". Hell, I manage mostly physical infrastructure (a few racks, a few hundred VMs) and a good 80% of my work is completely unrelated to that; it's just devops gluing stuff together and helping developers set their stuff up, which isn't all that different from what it would be in cloud.

> And remember, you want 24/7 monitoring, replication for disaster recovery, etc.

And remember, you need that for cloud too. There are plenty of cloud disaster stories where someone copy-pasted a tutorial thinking that was enough, then: surprise.

There is also the halfway option of getting dedicated servers from, say, OVH and running your infra on those; you cut the hardware management out of the required skillset and you don't have the CAPEX to deal with.

But yes, if it's less than a rack or so, it's probably not worth looking at on-prem unless you have a really specific use case that is much cheaper there (I mean less than half the usual cost).

reply
As opposed to talent to manage the AWS? Sorry, AWS loses here as well.
reply
I know AWS's reputation as a business and what the devs who work there say, so I have no argument against your point, except that they do manage to make it work. Somewhere in there must be some unsung heroes keeping the whole thing online.
reply
The point being that AWS runs AWS, they don't run your business on AWS. You still need someone to actually set up AWS to do what you want, much like you would need someone to run your on-premises servers. And in my experience, the difference is not much.
reply
> price of acquiring talent to manage the gear

Is it still a problem in 2026, when unemployment in IT is rising? The reasons can be argued (the end of ZIRP, or AI), but hiring should be easier than at any time during the last 10 years.

reply
Hiring people is still fucked in 2026 in my experience. HR processes are extremely dysfunctional at many organizations...
reply
People with that set of skills are never looking for a job for long.
reply
Hiring in 2026 is 100x harder than ever before.
reply
What about the cost of k8s and AWS experts etc.?
reply
> main cost with on-prem is not the price of the gear but the price of acquiring talent to manage the gear

Not quite. If you hire bad talent to manage your 'cloud gear', you'll find out what the mistakes that would cost you nothing on-premises can cost you in the cloud. Sometimes - a lot.

reply
Managing AWS is a ton of work anyway
reply
Given how good Apple Silicon is these days, why not just buy a spec'd-out Mac Studio (or a few) for $15k (512 GB RAM, 8 TB NVMe), and maybe pay only for S3 to sync data across machines? No talent required to manage the gear. EC2 costs for similar hardware would break even in something ridiculous like 4 months.
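
The payback arithmetic here is simple enough to sketch. The EC2 hourly rate below is a placeholder assumption for a comparable large-memory instance, not a quoted price:

```python
# Rough payback check for the Mac Studio idea.

mac_studio_cost = 15_000
ec2_per_hour = 5.0                     # assumed rate for comparable hardware
ec2_per_month = ec2_per_hour * 24 * 30

months_to_break_even = mac_studio_cost / ec2_per_month
print(round(months_to_break_even, 1))  # 4.2 -- months to break even
```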
reply