I.e., yes, you could run a full-on corporate CA, issue TLS certificates for your domains, manually rig up WireGuard, and run your own internal corporate VPN... or you just accept that your grand total of one concurrent user on an intranet is probably better served by setting up Tailscale and a wildcard Let's Encrypt certificate so that the browser shuts up. (Which is still not great, but the argument over HTTPS and intranets is not for right now.)
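For reference, a minimal sketch of the wildcard-cert route, assuming certbot with the Cloudflare DNS plugin; the domain and credentials path are placeholders, swap in whatever your DNS provider's plugin needs:

```
# Wildcard certs require a DNS-01 challenge; plain HTTP-01 can't validate *.example.com.
# Assumes the certbot-dns-cloudflare plugin is installed and an API token lives in the ini file.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d '*.internal.example.com'
```

One renewal cron on one box, and every intranet service behind it gets a browser-trusted cert without running a CA.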
Same with other deployment tools like Docker: yes, there are a ton of fancy ways to do persistent storage for serverless setups, but get real: you're throwing the source folder in /opt/ and you have exactly one drive on that server. Save yourself the pain and just bind mount it to somewhere on your filesystem. Being able to back the folder up with plain cp/rsync/rclone/scp is a lot easier than fiddling with Docker's ambiguous mess of overlay2 subfolders.
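As a sketch, the bind-mount version looks something like this; the service, image, and paths here are made up for illustration:

```yaml
# docker-compose.yml -- bind mount instead of a named volume
services:
  app:
    image: nginx:alpine
    volumes:
      # host path : container path -- the data sits in a plain directory you can
      # back up with cp/rsync/rclone/scp, no overlay2 spelunking required
      - /opt/app-data:/usr/share/nginx/html
```

Then something like `rsync -a /opt/app-data/ backup-host:/backups/app-data/` treats it as just another directory.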
Every overengineered decision of today is tomorrow's "goddammit, I need to ssh into the server again for an unexpected edge case".
The trick is twofold: if it isn't 'declare and deploy', don't run it; if it isn't in your backup/restore pipeline, don't run it.
pfSense and Home Assistant are huge pains in the ass. Everything else is easy breezy.
Proxmox/PBS/TrueNAS/Talos/LINSTOR/DRBD are all amazing.
I'm thinking about ditching pfSense for Tailscale/Cloudflare Tunnels, but it's not worth the time atm. I don't have a viable alternative for Home Assistant.
You might think you're protected with UPSes and whatnot, but nothing will stop the electromagnetic effects if a strike hits within a few feet. Every piece of copper is going to get lit up. No solution is 100% guaranteed here, but EC2 and snapshots are a hell of a lot more likely to survive a single event like that.