And we are talking about static content. You will be bottlenecked by bandwidth before you are ever bottlenecked by your laptop.
While I've also got enough other stuff running that my 15-minute load average is at 4 and 83% of my RAM is used (ignoring buffers/caches).
I went and grabbed a random benchmarking tool and pointed it at it with 125 concurrent connections.
Sustained an average of 13,914 reqs/s. Highest latency was 53.21 ms.
If there are 10,000 people online at any given time hitting the API on average once every 3 seconds (which I believe are generous numbers), you'd only be around 3.3k reqs/s, or about 24% of what my laptop could serve even before any sort of caching, CDN, or anything else.
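For reference, the arithmetic behind those numbers (just a quick sketch of the estimate above, nothing more):

    # 10,000 users online, each hitting the API once every 3 seconds on average
    users = 10_000
    seconds_between_requests = 3
    estimated_rps = users / seconds_between_requests    # ~3,333 reqs/s

    measured_rps = 13_914                                # what the laptop sustained above
    print(f"{estimated_rps:.0f} reqs/s is {estimated_rps / measured_rps:.0%} of measured capacity")
    # -> 3333 reqs/s is 24% of measured capacity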
So... if a laptop can't serve that sort of request load, it sounds more like an indictment of the site's software than anything.
Considering the content is essentially static, this is actually viable. Search functions might be a bit problematic, but that's a solvable problem.
Of course, you pay in engineering skill and resources.
Or if you ping-pong across containers to handle a single request; that will certainly make a laptop unable to handle this load.
I guess you would need some sort of search-term-to-document-ID mapping that gets downloaded to the browser, but maybe there's something more efficient than trying to figure out in advance what everyone might be searching for?
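For what it's worth, that mapping could be pre-built at deploy time as a plain inverted index. A minimal sketch, assuming plain-text documents on disk and shipping the result to the browser as a static JSON file (the file and directory names here are made up):

    # Hypothetical: build a term -> document-ID index ahead of time and
    # serve it as a static JSON file alongside the documents.
    import json, re
    from collections import defaultdict
    from pathlib import Path

    index = defaultdict(set)
    for doc_id, path in enumerate(sorted(Path("docs").glob("*.txt"))):
        for term in set(re.findall(r"[a-z0-9]+", path.read_text().lower())):
            index[term].add(doc_id)

    Path("search-index.json").write_text(
        json.dumps({term: sorted(ids) for term, ids in index.items()})
    )

The browser would then intersect the ID lists for each query term, so you only need the corpus vocabulary in advance, not a guess at every possible query.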
And how would you do searching for phrases or substrings? I've no idea if that's doable without a server-side database holding the whole document store to search through.
There might be some piece I'm missing, but the first thing that comes to mind would be using that, possibly with the full-text search extension, to handle searching the metadata.
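Assuming "that" refers to SQLite, here's a rough illustration of what metadata search with the FTS5 extension could look like (the schema and query are invented for the example, not the site's actual setup):

    # Illustrative only: put document metadata in an FTS5 virtual table and
    # answer keyword/phrase queries against it. Needs SQLite built with FTS5.
    import sqlite3

    db = sqlite3.connect("metadata.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(doc_id, title, subject)")
    db.execute("INSERT INTO docs VALUES (?, ?, ?)",
               ("42", "Example memo", "quarterly budget review"))
    db.commit()

    # FTS5 treats a double-quoted string as a phrase query.
    for doc_id, title in db.execute(
            "SELECT doc_id, title FROM docs WHERE docs MATCH ?", ('"budget review"',)):
        print(doc_id, title)

That would also cover the phrase-search question above, at least for metadata rather than full document bodies.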
At that point you'd still be paying S3 egress costs, but I'd be very surprised if it wasn't at least an order of magnitude less expensive than Vercel.
And since it's just static file hosting, it could conceivably be moved to a VPS (or a pair of them) running nginx or Caddy or whatever, if the AWS egress were too pricey.
Some cloud products have distorted an entire generation of developers' understanding of how services can scale.
I’d interpret “thousands of people hitting a single endpoint multiple times a day” as something like 10,000 people making ~5 requests per 24 hours. That’s roughly 0.6 requests per second.
Part of why this is a problem is that consumer-grade NICs tend to offload quite a lot of work onto the CPU that higher-end server NICs handle themselves, as a laptop isn't really expected to keep up with 10K concurrent TCP connections.
And if it doesn't, spin up another $30 instance and add another round-robin (RR) entry to the DNS.
Serving static content scales horizontally perfectly.
They will, however, sell you a 10 Gbps uplink with (very reasonably priced) metered bandwidth.
I've hosted side projects on Hetzner for years and have never experienced anything like that. Do you have any references of projects to which it happened?
They offer unlimited bandwidth with their dedicated servers under a “fair usage” policy.
If the bandwidth costs were higher than what you pay monthly, they would simply drop you.
You are probably using very little bandwidth, so it doesn’t matter in your case.
However, I assume Jmail consumes a very large amount of bandwidth.
But not because of being "not a profitable customer". Mind sharing some links here?