The one that gets me a lot, which is similar in practice to your point, is needing server redundancy even when one server is otherwise plenty for the task. As soon as you're not running in one place, you need network-accessible data storage, and that pushes pretty hard toward a network-accessible database. S3 works sometimes, and the recent work on atomically claiming objects has smoothed some of the worst rough edges, but it still doesn't take much to disqualify it, at least as the only store.
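The "atomically claim" pattern is create-if-absent: exactly one worker wins. Here's a minimal local sketch of the same semantics using `O_CREAT | O_EXCL` on the filesystem; the marker filename is illustrative, and S3's equivalent is a conditional write rather than this call.

```python
import os
import tempfile

def try_claim(path: str) -> bool:
    """Atomically claim a work item by creating its marker file.

    O_CREAT | O_EXCL makes the create fail if the file already exists,
    so exactly one worker can win the claim.
    """
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

# Hypothetical job marker, just for the demo.
claim_path = os.path.join(tempfile.mkdtemp(), "job-42.claim")
first = try_claim(claim_path)   # this worker wins the claim
second = try_claim(claim_path)  # a second worker loses it
```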
reply
In short, once you need reliability, your complexity necessarily grows due to the redundancy and failover you need to introduce.

If your downtime does not cost much, you can host many things on a single tiny computer.

reply
SQLite has become my new go-to when starting any project that needs a DB. Performance is excellent, and if anything is ever successful enough to outgrow SQLite, it wouldn't be hard to swap in Postgres. Not having to maintain, back up, and manage a separate database server is cheaper and easier.
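For context, the whole "database server" here is one stdlib import. A minimal sketch, with an illustrative table (not from the comment above); a real project would pass a file path instead of `:memory:`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in a real project
conn.execute("PRAGMA journal_mode=WAL")  # better write concurrency for file-backed DBs
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

row = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchone()
```

No server process, no connection strings, no credentials: the entire operational surface is one file on disk.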
reply
Backups are super-simple as well.

I'm also a convert.
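On the backup point: Python's `sqlite3` exposes SQLite's online backup API, which copies a live database safely. A small sketch (the table and the in-memory destination are illustrative; in practice the destination would be a file like `backup.db`):

```python
import sqlite3

# Source database with some data in it.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (1), (2)")
src.commit()

# Online backup: copies the whole database, safe against concurrent writers.
dest = sqlite3.connect(":memory:")  # in practice: sqlite3.connect("backup.db")
src.backup(dest)

copied = dest.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

`VACUUM INTO 'backup.db'` is another one-liner that produces a compacted copy.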

reply
Seeing the Rust 1M benches was an amazing reminder of how fast things really are.
reply
The reality is that things will be blazing fast in any language if you look them up by PK in HashMaps.
reply