On the one hand I get it, TLS is pretty heavy, and it makes sense to take advantage of a VPS or Cloudflare or however you want to do it.
But once you are spinning up a VPS, the question is ... why the Pi? The VPS in the article has less RAM, but more storage. If you're already doing TLS termination on the VPS (the most RAM intensive part), you might as well just do the whole shebang there.
I know this is all for fun, I'm just wondering -- is the Pi Zero really too slow to handle TLS, especially with an optimized TLS library? In this setup, the Pi is already being directly exposed to the Internet anyway; there's no VPN being used. That ARM11 isn't "fast", but surely a 1 GHz ARM11 can handle an optimized TLS library serving some subset of TLS 1.2.
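One way to get a rough answer without guessing: OpenSSL ships a built-in benchmark, and running it on the Pi itself shows how fast the crypto primitives TLS 1.2 depends on actually are on that CPU. A minimal sketch (the `-seconds 1` flag just shortens each run; algorithm choices here are illustrative):

```shell
# Raw symmetric-cipher and hash throughput -- this dominates the cost
# of an established TLS connection (bulk encryption + record MACs).
openssl speed -seconds 1 aes-128-cbc sha256

# RSA 2048 sign/verify rate approximates per-handshake cost with an
# RSA certificate; ECDSA (prime256v1) is usually much cheaper on ARM.
openssl speed -seconds 1 rsa2048
```

If the AES/SHA numbers come out well above your uplink bandwidth and the handshake rate exceeds your expected new-connections-per-second, the Pi can plausibly terminate TLS itself.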
What was supposed to be a cool achievement is rendered pointless when one of the key elements is offloaded elsewhere.
While I might argue that most people are probably hosting and running PHP on the same server, it's not the typical approach for any custom software at this point.
Edit: No, the article mentions listening on port 80 at home. I thought they'd be SSH tunneling or something. That is unusual, but I guess for a static website it doesn't really matter.
It sorta does matter. Either the actual raspi does nothing of value or the traffic has value that should be protected.
Sure, I've heard the argument that public HTTP traffic doesn't need encryption, but if it has any value then both parties have an interest in it being unmanipulated, uncensored, validated, or all of the above. Even if it's just preventing the ISP from injecting dumb ads.
Don't have a Pi plugged in right now to check, but I have a fresh x86_64 Linux install and it's using something like 600 MB of RAM - a server install, and I've only gotten around to installing and configuring OpenSSH and Samba so far.
Oh, and it's Devuan, so there's not even systemd to blame. I think it was close to 1 GB with a systemd distro.
Also, all web pages are served from RAM. It's automatic that modern OSes will cache this stuff on first access.
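That claim is easy to check on any Linux box: the kernel's page cache is reported in `/proc/meminfo`, and reading a file pulls it into RAM automatically. A throwaway sketch (file size and path are arbitrary):

```shell
# Page cache size before: the "Cached:" line counts file data held in RAM.
grep '^Cached:' /proc/meminfo

# Create and read a ~50 MB file; the kernel caches it as a side effect.
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1M count=50 2>/dev/null
cat "$f" > /dev/null      # now resident; subsequent reads skip the disk

# Page cache size after: typically ~50 MB higher than before.
grep '^Cached:' /proc/meminfo
rm -f "$f"
```

The same thing happens to a static site's docroot: after the first request, the whole thing is effectively served from RAM, which is why even a Pi Zero's SD card speed barely matters.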
I retired my 486 in ‘95 or thereabouts…
It had a second life doing stuff like delivering mail, handling IRC, serving web pages, and whatever else a few of us wanted from it. The performance was fine.
(The Pentium-ish machines stayed on desktop duty where GUIs devoured resources.)
Kind of irrelevant, since operating systems and web pages in the '90s had significantly smaller footprints; the web was mostly plain text back then. Windows XP with its GUI would run Max Payne on 128MB of RAM. You could do a lot with 128MB back then, but you can't do modern stuff like that today.
Yesterday I one-shotted several interactive pages that Qwen built out of straight HTML and JavaScript. I handed it my API (source code, not even a Swagger spec, via an MCP that Qwen wrote for me), asked for a frontend, and it delivered. One page at a time to keep context down, and I might've gotten lucky on the first draw, but after the first one I told it to make the next ones like the first.
Can't say I've had that experience with backend languages & frameworks, including writing that same API, but perhaps I'm off the beaten path with those, or perhaps there's a greater breadth of things to do vs. a narrower set of acceptance criteria? IDK.
Here I was sweating that I'd have to research and learn a current-day frontend framework. It felt like a magic wand using consumer-grade AI. HTML and plain old Javascript was plenty.
Tangent but apropos of other contemporary threads on HN, it puts a spin on supply chain threats. There's no NPM or anything, except perhaps whatever mysteries are baked into the model.
HTML, CSS, JavaScript, images.
In this case, they are static elements, which can even be cached locally to share more easily.
If someone wants a massive build system to render a static HTML page, that's on them, and their personal interpretation. Increasingly, and maybe more often than not, there is more than one way to get the same outcome.
The fact that there are hundreds of downloads for a single web page is up to the constructor of that page. Still, these things can be reasonably cached. For example, host it on the Pi, then put Cloudflare in front of it or something.
The Pi Zero might not be for you, and it's an easy target to undermine. Which criticisms would go away if it were a regular Pi?
Maybe you misunderstood. Which criticism did I make of the Pi Zero? I criticized present-day software.