I was just curious, so I actually tested this.

Using fio:

Hetzner (CX23, 2 vCPU, 4 GB): ~3900 IOPS (read/write), ~15.3 MB/s, avg latency ~2.1 ms, 99.9th percentile ~5 ms, max ~7 ms

DigitalOcean (SFO1, 2 GB RAM, 30 GB disk): ~3900 IOPS (same!), ~15.7 MB/s (same!), avg latency ~2.1 ms (same!), 99.9th percentile ~18 ms, max ~85 ms (!!)

Using sequential dd:

Hetzner: 1.9 GB/s, DO: 850 MB/s
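
The exact dd invocation isn't given above, so here's a typical sequential-write test of that kind, as an assumed reconstruction; conv=fdatasync forces a flush at the end so the page cache doesn't inflate the result:

```shell
# Hypothetical sequential-write benchmark (flags assumed, not from the
# original comment). Writes 1 GiB, flushes, then reports the rate.
dd if=/dev/zero of=ddtest bs=1M count=1024 conv=fdatasync
rm -f ddtest
```

Measuring sequential reads the same way would need the page cache dropped first (or iflag=direct) to be meaningful.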

These are low-end plans on both, but the Hetzner instance is €4 and the DO instance is $18.

reply
I love Hetzner so much. I'm not affiliated; I'm just a really happy customer. These guys do everything right.
reply
As long as you never have to interact with them. If you run into issues they have caused themselves, you'll find yourself dealing with a unique mix of arrogance and incompetence.
reply
I've been using Hetzner for ~20 years and every single support interaction I've ever had with them has been top tier. Never AI bots, always humans who are helpful, courteous and prompt. I can't think of a single company, let alone hosting company, whose customer service has been so consistently good.
reply
It certainly helps that the service never does anything wonky that would require a support interaction in the first place.
reply
Just for comparison, I use the cheapest netcup root server:

RS 1000 G12: AMD EPYC™ 9645, 8 GB DDR5 RAM (ECC), 4 dedicated cores, 256 GB NVMe

Costs €12.79.

Results with the following command:

fio --name=randreadwrite \
    --filename=testfile \
    --size=5G \
    --bs=4k \
    --rw=randrw \
    --rwmixread=70 \
    --iodepth=32 \
    --ioengine=libaio \
    --direct=1 \
    --numjobs=4 \
    --runtime=60 \
    --time_based \
    --group_reporting

IOPS: read 70.1k, write 30.1k (~100k total)

Throughput: read 274 MiB/s, write 117 MiB/s

Latency: read avg 1.66 ms, P99.9 2.61 ms, max 5.644 ms; write avg 0.39 ms, P99.9 2.97 ms, max 15.307 ms
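
As a sanity check, random-I/O throughput should simply be IOPS × block size. Plugging the 4 KiB block size and the IOPS above into awk reproduces the reported bandwidth:

```shell
# Bandwidth implied by the reported IOPS at --bs=4k
# (IOPS figures copied from the fio results above).
awk 'BEGIN {
  bs = 4096                                  # bytes per I/O
  printf "read:  %.0f MiB/s\n", 70100 * bs / 1048576
  printf "write: %.0f MiB/s\n", 30100 * bs / 1048576
}'
# prints read: 274 MiB/s and write: 118 MiB/s -- matching the
# 274/117 MiB/s fio reported, within rounding
```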

reply
Nice. On a Hetzner AX41-NVMe (~€50, from 2020), non-RAID, I get:

IOPS: read 325k, write 139k

Throughput: read 1271 MB/s, write 545 MB/s

Latency: read avg 0.3 ms, P99.9 2.7 ms, max 20 ms; write avg 0.14 ms, P99.9 0.35 ms, max 3.3 ms

So roughly 100× the IOPS and throughput of the cloud VMs.

reply
That is a bit of an unfair comparison. The Hetzner and DO instances are shared hosting; you are using dedicated resources.

Using a Netcup VPS 1000 G12 is more comparable.

read: IOPS=18.7k, BW=73.1MiB/s

write: IOPS=8053, BW=31.5MiB/s

Latency: read avg 5.39 ms, P99.9 85.4 ms, max 482.6 ms; write avg 3.36 ms, P99.9 86.5 ms, max 488.7 ms

reply
Hetzner has dedicated resources too, but they also have 2 levels of shared resources, "Cost-Optimized" and "Regular Performance". The 3900 IOPS CX23 above is "Cost-Optimized".

Here are some "Regular Performance" shared resource stats

Hetzner CPX11 (Ashburn, 2 CPUs, 2GB, 5.49€ or $6.99/month before VAT)

read: IOPS=36.7k, BW=144MiB/s, avg/p99.9/max 2.4/6.1/19.5ms

write: IOPS=15.8k, BW=61.7MiB/s, avg/p99.9/max 2.4/6.1/18.7ms

Hetzner CPX22 (Helsinki, 2 CPUs, 4GB, 7.99€ or $9.49/month before VAT)

read: IOPS=48.2k, BW=188MiB/s, avg/p99.9/max 1.9/5.7/10.8ms

write: IOPS=20.7k, BW=80.8MiB/s, avg/p99.9/max 1.8/5.8/10.9ms

Hetzner CPX32 (Helsinki, 4 CPUs, 8GB, 13.99€ or $16.49/month before VAT)

read: IOPS=48.3k, BW=189MiB/s, avg/p99.9/max 1.9/6.2/36.1ms

write: IOPS=20.7k, BW=81.0MiB/s, avg/p99.9/max 1.8/6.3/36.1ms

reply
Storage performance is practically always a shared resource, and that's what y'all are talking about here...
reply
>3000 IOPS

If that's true, I wonder if this is a deliberate decision by cloud providers to push users towards microservice architectures with proprietary cloud storage like S3, so you can't do on-machine dbs even for simple servers.

reply
It's probably a combination of high-density storage nodes getting I/O bound and SSDs having finite write endurance. Anything that improves the first problem costs them money and makes the second problem worse, and fixing the second costs them money again. So why would they make the default something that costs them more twice over, when most people don't need it?

Instead they make the default "meager IOPS" and then charge more to the people who need more.
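
Rough arithmetic supports this: a 4 KiB IOPS cap bounds the write rate, which bounds how fast a tenant can burn through an SSD's rated endurance. The inputs here are illustrative assumptions (the ~3900 IOPS figure from the thread, plus a hypothetical 1 TB drive rated for 600 TB written):

```shell
# Back-of-envelope endurance math; all inputs are assumptions.
awk 'BEGIN {
  iops = 3900; bs = 4096; tbw = 600   # IOPS cap, block size, rated TBW
  mbs = iops * bs / 1e6               # sustained write rate in MB/s
  tb_day = mbs * 86400 / 1e6          # TB written per day, if saturated
  printf "%.1f MB/s = %.2f TB/day; %d TBW lasts ~%.0f days\n",
         mbs, tb_day, tbw, tbw / tb_day
}'
# prints 16.0 MB/s = 1.38 TB/day; 600 TBW lasts ~435 days
```

So even a capped tenant writing flat out could wear through a drive's rated endurance in little over a year, which is one reason providers meter IOPS.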

reply
Many cloud vendors have you pay through the nose for IOPS and bandwidth.

Edit: I posted this before reading; he points out the same thing.

reply
Yes, but you can't directly compare SAN-style storage with a local NVMe. I agree that it's too expensive, though not nearly as insane as the bandwidth pricing. If you go to a vendor and ask for a petabyte of storage that is fully redundant, with the ability to take PIT-consistent multi-volume snapshots, be ready to pay up. And that is what's being offered here.

And yes, I/O typically happens in 4 KB blocks, so you need a decent amount of IOPS to reach full bandwidth.
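
The same point in numbers: saturating even a modest bandwidth target at 4 KiB per I/O takes a lot of operations. A quick sketch (the 1 GB/s target is just an illustrative number, not from the thread):

```shell
# IOPS required to hit a given bandwidth at 4 KiB per I/O.
awk 'BEGIN {
  target = 1e9                 # 1 GB/s, illustrative target
  bs = 4096                    # typical 4 KiB block
  printf "%.0f IOPS needed\n", target / bs
}'
# prints 244141 IOPS needed
```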

reply
Sure, but a petabyte of block storage with redundancy and PIT backups is a poor abstraction to build on, in large part because it can't be built without paying a wild amount of money, taking a huge performance hit, or both. If you do your PIT recovery at a higher layer, you have to work a bit harder, but you get far better cost, performance and recovery.

That latter part is a big deal, too. If I buy 1PB of block storage, I’m decently likely to be running a fancy journaled or WAL-ed or rollback-logged thing on top, and that thing might be completely unable to read from a read only snapshot. So actually reading from a PIT snapshot is a pain regardless of what I paid for it. Even using EBS or similar snapshots is far from being an amazing experience.

reply
> Cloud vendor pricing often isn't based on cost.

Business 101 teaches us that pricing isn't based on cost. Call it top-down vs. bottom-up pricing, but the first-principles formula "it costs me $X to make a widget, so I sell it for $Y = 1.y * $X" is not how pricing works in practice.

reply
Just to spell this out more clearly for the back row of the classroom:

The price is what the customer will pay, regardless of your costs.

reply
Economics teaches us that a big difference between cost and price attracts competition which should make the price trend towards the cost.
reply
A big difference between cost and price is often won at the expense of many years of concerted R&D, though.
reply
Practice taught me that "should" is doing a lot of heavy lifting here, and it's often not the case, even across long time periods (years) that should allow competitors to emerge.

For example, I calculated the cost of a solar install to be approximately: materials + labour + generous overhead + very tidy profit = €10,000.

In practice I keep getting offers for ~€14,000, which get reduced to €10,000 with a government subsidy, and my requests for an itemized invoice are always met with radio silence.

reply
Only if the barrier of entry is low.

Which it won't be, if at every turn you choose the hyperscaler.

reply
Economics has a lot of other lessons teaching us why prices of major clouds have remained somewhat expensive relative to cost
reply
If this is the case, cheap bandwidth for AWS, when?
reply
Exactly.
reply
That's not business 101.
reply
> That's not business 101.

It kinda is, but obscured by GP's formula.

More simply: if it costs you $X to produce a product and the market is willing to pay $Y (which has no relation to $X), why would you price it as a function of $X?

If it costs me $10 to make a widget and the market is happy to pay $100, why would I base my pricing on $10 * 1.$MARGIN?

reply
Exactly. The mechanism by which the price ends up as X plus margin is just competition. Others enter the market and compete with you until the returns are driven down to the rental rate of capital. Any barriers to entry result in higher margins.

But that is an equilibrium result, and famously does not apply to monopolies, where elasticity of substitution will determine the premium over the rental rate of capital.

reply