Backups, litestream gives you streaming replication to the second.
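For context, a minimal Litestream config might look like this (a sketch: the bucket name and paths are placeholders; `sync-interval` is a real replica option and defaults to one second):

```yaml
# /etc/litestream.yml — hypothetical paths and bucket name
dbs:
  - path: /var/lib/app/db.sqlite3
    replicas:
      - url: s3://my-bucket/db     # ship WAL segments to object storage
        sync-interval: 1s          # how often changes are pushed (default 1s)
```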

Deployment: caddy holds open incoming connections whilst your app drains the current request queue and restarts. This is all sub-second and imperceptible. You can do fancier things than this with two versions of the app running on the same box, if that's your thing. In my case I can also hot-patch the running app, as it's on the JVM.
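A sketch of the Caddy side of this, using its real `lb_try_duration`/`lb_try_interval` load-balancing options to hold requests while the backend restarts (the hostname and port are made up):

```Caddyfile
example.com {
    reverse_proxy localhost:8080 {
        # If the backend is unreachable (e.g. mid-restart), keep retrying
        # the request for up to 5s instead of failing it immediately.
        lb_try_duration 5s
        lb_try_interval 250ms
    }
}
```

As long as the app restarts inside that window, clients just see a slightly slower response rather than an error.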

Server hard drive failing etc you have a few options:

1. Spin up a new server/VPS and restore the backup with litestream (the application does this automatically on start).

2. If your data is truly colossal have a warm backup VPS with a snapshot of the data so litestream has to stream less data.
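The restore-on-start step in option 1 can be sketched with Litestream's `restore` subcommand (the flags shown are real; the bucket and paths are placeholders):

```sh
# Restore only if the local DB is missing and a replica exists,
# then start replicating alongside the app.
litestream restore -if-db-not-exists -if-replica-exists \
  -o /var/lib/app/db.sqlite3 s3://my-bucket/db
litestream replicate -config /etc/litestream.yml
```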

Pretty easy to get 3 to 4 9s of availability this way (which is more than GitHub, Anthropic, etc.).
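For reference, the downtime budget behind those nines is simple arithmetic; a quick sketch:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def max_yearly_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability %."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

for pct in (99.9, 99.99):
    print(f"{pct}% -> {max_yearly_downtime_minutes(pct):.1f} min/year")
# 99.9% ("three nines") allows roughly 8.8 hours of downtime a year;
# 99.99% ("four nines") allows roughly 53 minutes.
```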

reply
My understanding is that Litestream can lose data if a crash occurs before the changes are replicated to object storage. Doesn't that make it an unfair comparison to, say, Postgres on RDS?
reply
Last I checked, RDS uploads transaction logs for DB instances to Amazon S3 every five minutes. Litestream by default does it every second (you can go sub-second with Litestream if you want).
reply
Your understanding is very wrong. Please read the docs, or better yet, the actual code.
reply
> Backups, litestream gives you streaming replication to the second.

You seem terribly confused. Backups don't buy you high availability. At best, they buy you disaster recovery. If your node goes down in flames, your users don't continue to get service because you have an external HD with last week's db snapshots.

reply
If anything, backups are the key to high availability.

Streaming replication lets you spin up new nodes quickly with sub-second data loss in the event of anything happening to your server. It makes having a warm standby/failover trivial (if your dataset is large enough to warrant it).

If your backups are week-old snapshots, you have bigger problems to worry about than HA.

reply
No offense, but you wait. Like everyone has been doing for years on the internet, and still does:

- When AWS/GCP goes down, how do most handle HA?

- When a database server goes down, how do most handle HA?

- When Cloudflare goes down, how do most handle HA?

The downtime here is the server crashing, routing failing, or some other issue with the host. You wait.

One may run Pingdom or something similar to alert you.

reply
> When AWS/GCP goes down, how do most handle HA?

This is a disingenuous scenario. SQLite doesn't buy you uptime if you deploy your app to AWS/GCP, and you can just as easily deploy a proper RDBMS such as Postgres to a small provider, or self-host one.

Do you actually have any concrete scenario that supports your belief?

reply
> SQLite doesn't buy you uptime if you deploy your app to AWS/GCP

This is... not true of many hyperscaler outages? Frequently, outages leave individual VMs running and affect only the higher-order services typically used in more complex architectures. Folks running SQLite on an EC2 instance often won't be affected.

And obviously, don't use us-east-1. This One Simple Trick can improve your HA story.

reply