Isn’t it just serving static content, with the content fitting in RAM? If so, even your laptop could serve it just fine.
reply
A laptop would have a hard time serving thousands of people hitting a single endpoint multiple times a day.
reply
It shouldn't. The issue is that most developers would rather spin up another instance of their server than solve the performance issue in their code, so now it's a common belief that computers are really slow to serve content.

And we are talking about static content. You will be bottlenecked by bandwidth before you are ever bottlenecked by your laptop.

reply
To be fair, computers are slow if you intentionally rent slow & overpriced ones from really poor-value vendors like cloud providers. People who started their careers in this madness might be genuinely unaware of how fast modern hardware has become.
reply
I just fired up a container on my laptop... running on kubernetes... running in a linux VM. It's lightly dynamic (no database or filesystem I/O).

That's with enough other stuff running that my 15-minute load average is at 4 and 83% of RAM is used, ignoring buffers/caches.

I went and grabbed a random benchmarking tool and pointed it at it with 125 concurrent connections.

Sustained an average of 13,914 reqs/s. Highest latency was 53.21 ms.

If there are 10,000 people online at any given time hitting the API on average once every 3 seconds (which I believe are generous numbers), you'd only be around 3.3k reqs/s, or about 24% of what my laptop could serve even before any sort of caching, CDN, or anything else.
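
Running those numbers as a quick Python sanity check (figures taken straight from this comment):

    users = 10_000           # people online at any given time
    load = users / 3         # one request per user every 3 s -> ~3,333 req/s
    bench = 13_914           # what the laptop sustained above
    print(f"{load:.0f} req/s is {load / bench:.0%} of the benchmark")
    # -> 3333 req/s is 24% of the benchmark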

So... if a laptop can't serve that sort of request load, it sounds more like an indictment of the site's software than anything.

reply
With a 2025 tech stack, yes. With a 2005 tech stack, no. Don't use any containers, no (or only limited) server-side dynamic scripting languages, no microservices or anything like that.

Considering the content is essentially static, this is actually viable. Search functions might be a bit problematic, but that's a solvable problem.
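
As a toy sketch of how little machinery that stack needs (stdlib Python stands in here for the 2005-era httpd; the port and cache lifetime are arbitrary):

    # Serves the current directory; any HTTP cache or CDN in front can
    # hold files for a day thanks to the Cache-Control header.
    from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

    class CachedHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            self.send_header("Cache-Control", "public, max-age=86400")
            super().end_headers()

    ThreadingHTTPServer(("", 8080), CachedHandler).serve_forever()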

Of course, you pay for it in engineering skill and resources.

reply
SRE here. Containers aren't causing any performance problems.
reply
Maybe the perception comes from all the Mac and Windows devs having to run a Linux VM to use containers.
reply
Containers themselves don't, but a lot of the ecosystem structures around them do. Like having reverse proxies (or even just piles of ethernet bridges) in front of everything.

Or if you ping-pong across containers to handle a single request. That will certainly make a laptop unable to handle this load.

reply
Is there any feasible way to implement search client-side on a database of this scale?

I guess you would need some sort of search-term-to-document-id mapping that gets downloaded to the browser, but maybe there's something more efficient than trying to figure out in advance what everyone might be searching for?

And how would you search for phrases or substrings? I've no idea if that's doable without a server-side database that has the whole document store to search through.
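
For illustration, the naive version of that mapping is just an inverted index (documents here are made up; note it handles neither phrases nor substrings, which is exactly the hard part):

    from collections import defaultdict

    docs = {1: "fast static hosting", 2: "static content on a laptop"}

    # Build the term -> document-id mapping once; ship it to the browser.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def search(query):
        # AND semantics: only documents containing every query term.
        terms = query.lower().split()
        return set.intersection(*(index[t] for t in terms)) if terms else set()

    print(search("static laptop"))  # -> {2}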

reply
there have been demos of using SQLite client-side, with the database hosted in S3 and HTTP range requests used to fetch only the rows the query needs.
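
the underlying trick is just an HTTP Range header; libraries like phiresky's sql.js-httpvfs wrap it in a virtual filesystem for SQLite's WASM build. a minimal sketch of the fetch itself (URL is hypothetical, and the host has to support range requests, which S3 and most static file servers do):

    from urllib.request import Request, urlopen

    # Fetch only the first 4 KiB page of a remote SQLite file.
    req = Request("https://example.com/mail-index.sqlite3",
                  headers={"Range": "bytes=0-4095"})
    page = urlopen(req).read()
    assert page[:16] == b"SQLite format 3\x00"   # SQLite's header magic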

there might be some piece I'm missing, but the first thing that comes to mind would be using that, possibly with the full-text search extension, to handle searching the metadata.
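
the full-text piece could be plain FTS5, assuming the SQLite build includes it (most do); the table and columns here are made up:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE mail USING fts5(subject, body)")
    db.execute("INSERT INTO mail VALUES ('re: hosting', 'laptops are fast')")
    print(db.execute(
        "SELECT rowid, subject FROM mail WHERE mail MATCH 'laptops'"
    ).fetchall())  # -> [(1, 're: hosting')]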

at that point you'd still be paying S3 egress costs, but I'd be very surprised if it wasn't at least an order of magnitude less expensive than Vercel.

and since it's just static file hosting, it could conceivably be moved to a VPS (or a pair of them) running nginx or Caddy or whatever, if the AWS egress was too pricey.

reply
Theoretically, just thinking about the problem... you could probably embrace offline-first and sync to IndexedDB? After that, search becomes a simple local query. Obviously it comes with its own challenges, depending on your user base (e.g. not a good idea if it's only a temporary login, etc.)
reply
There are several implementations that back an SQLite3 database with lazily loaded, then cached, network storage, including multiple that work over HTTP (IIRC usually with range requests). Those basically just work.
reply
No it won't. This is static content we're talking about. The only thing limiting you is your network throughput and maybe disk I/O (assuming it doesn't all fit in RAM, even compressed). Even for an "around the globe" round-trip latency, we're still talking a few hundred ms.

Some cloud products have distorted an entire generation of developers' understanding of how services can scale.

reply
I think it’s more helpful to discuss this in requests per second.

I’d interpret “thousands of people hitting a single endpoint multiple times a day” as something like 10,000 people making ~5 requests per 24 hours. That works out to roughly 0.6 requests per second.

reply
A 6-core server or laptop can easily serve 100K requests per second; that's 259B requests per month, or 576x their current load.
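
The arithmetic, for anyone checking (259B / 576 ≈ 450M requests/month, which matches the figure cited elsewhere in the thread):

    rps = 100_000                    # claimed per-machine throughput
    per_month = rps * 86_400 * 30    # = 259,200,000,000 (~259B)
    current = 450_000_000            # ~450M requests/month
    print(per_month / current)       # -> 576.0
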
reply
A laptop from 10 years ago should be able to comfortably serve that. Computers are really really fast. I'm sorry, thousands of users or tens of thousands of requests a day is nothing.
reply
It all depends of course, but generally no, a laptop could handle that just fine.
reply
There may be a risk of running into thermal throttling in such a use-case, as laptops are really not designed for sustained loads of any variety. Some deal with it better than others, but few deal with it well.

Part of why this is a problem is that consumer-grade NICs push onto the CPU a lot of work that higher-end server NICs handle themselves, since a laptop isn't really expected to keep up with 10K concurrent TCP connections.

reply
Lol yes? It's all reads. If it can all fit in ram, great. Otherwise an SSD will do fine too.
reply
You could probably serve it from the quad-core ARM64 inside the SSD controller, if you were trying "for the lulz".
reply
If it's mostly static, just cache it at the HTTP level, e.g. with Cloudflare, which I believe wouldn't even charge for 450M requests, at least on the $20 plan.
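
An easy way to verify the cache is actually doing the work (URL is hypothetical; CF-Cache-Status is the header Cloudflare adds to responses):

    from urllib.request import urlopen

    # Request the same URL twice; once cached, Cloudflare answers from
    # the edge and the origin never sees the second request.
    for attempt in range(2):
        resp = urlopen("https://example.com/some/static/page")
        print(attempt, resp.headers.get("CF-Cache-Status"))
    # typically MISS on the first request, then HIT (if cacheable)
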
reply
yes

and if it doesn't keep up, spin up another $30 instance and add another RR entry to the DNS

serving static content scales horizontally perfectly

reply
I would use a $100/mo box with a much better CPU and more RAM, but I think the pinch point might be the 1Gbps unmetered networking that Hetzner provide.

They will sell you a 10Gbps uplink however, with (very reasonably priced) metered bandwidth.

reply
For sure, even cheaper if you cache effectively.
reply
No. Hetzner would terminate your server because you wouldn't be a profitable customer.
reply
A profitable customer? How would Hetzner know if you're profitable or not?

I've hosted side projects on Hetzner for years and have never experienced anything like that. Do you have any references of projects to which it happened?

reply
Because you are using an incredibly large amount of bandwidth for €30 a month.

They offer unlimited bandwidth with their dedicated servers under a “fair usage” policy.

The bandwidth costs would be higher than what you pay monthly, so they would simply drop you.

You are probably using very little bandwidth, so it doesn’t matter in your case.

However, I assume Jmail consumes a very large amount of bandwidth.

reply
We handle 200x their request load on two Hetzner servers.
reply
I have heard of hetzner terminating customer relationships if too many legal complaints are filed against your VPSes.

But not because of being "not a profitable customer". Mind sharing some links here?

reply
I am not sure how one even gets 250TB/mo through a 1Gbps link. In any case, completely saturating your networking for the full month is outside most people's definition of "fair use".
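
For reference, the theoretical ceiling of that link over a 30-day month:

    link = 1e9 / 8                      # 1 Gbps = 125 MB/s
    month = 60 * 60 * 24 * 30           # seconds in 30 days
    print(link * month / 1e12, "TB")    # -> 324.0 TB
    # so 250 TB/mo means ~77% sustained utilisation, day and night
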
reply
Yeah, but they still advertise unlimited traffic: "All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic" https://docs.hetzner.com/robot/general/traffic/
reply