upvote
Sqlite smokes postgres on the same machine even with domain sockets [1]. This is before you get into using multiple sqlite databases.

What features does postgres offer over sqlite in the context of running on a single machine with a monolithic app? Application functions [2] mean you can extend it however you need with the same language you use to build your application. It also has a much better backup and replication story thanks to litestream [3].
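For example, Python's built-in sqlite3 driver exposes application-defined functions directly; a minimal sketch (the `reverse` function name is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Any plain Python function can be registered as a SQL function.
def reverse_text(s):
    return s[::-1]

# deterministic=True lets SQLite use it in indexes etc. (Python 3.8+)
conn.create_function("reverse", 1, reverse_text, deterministic=True)

print(conn.execute("SELECT reverse('sqlite')").fetchone()[0])  # → etilqs
```

The same mechanism works from C, Go, Rust, etc., since it's a core SQLite API (sqlite3_create_function), not a Python-specific feature.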

- [1] https://andersmurphy.com/2025/12/02/100000-tps-over-a-billio...

- [2] https://sqlite.org/appfunc.html

- [3] https://litestream.io/

The main problem with sqlite is that the defaults are not great, and you should really use it with separate read and write connections, with the application managing the write queue rather than letting sqlite handle it.
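A rough sketch of that pattern in Python's stdlib sqlite3 (names like `write_q` and the specific pragmas are illustrative, not the only reasonable choices):

```python
import queue
import sqlite3
import threading

DB = "app.db"

# SQLite allows only one writer at a time, so serialising all writes
# through one queue in the application avoids SQLITE_BUSY retries.
write_q = queue.Queue()

def writer():
    # The connection must be created in the thread that uses it.
    conn = sqlite3.connect(DB)
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    conn.execute("PRAGMA synchronous=NORMAL")  # common WAL setting
    while True:
        sql, params = write_q.get()
        conn.execute(sql, params)
        conn.commit()
        write_q.task_done()

threading.Thread(target=writer, daemon=True).start()

# Any number of read connections, e.g. one per request/thread.
read_conn = sqlite3.connect(DB)

write_q.put(("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)", ()))
write_q.put(("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("greeting", "hi")))
write_q.join()  # block until the queued writes have been applied

print(read_conn.execute("SELECT v FROM kv WHERE k=?", ("greeting",)).fetchone()[0])
```

In a real app you'd batch queued writes into one transaction per wakeup rather than committing per statement, which is where most of the write throughput comes from.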

reply
> Sqlite smokes postgres on the same machine even with domain sockets [1].

SQLite on the same machine is akin to calling fwrite. That's fine. It's also a system constraint, as it forces a one-database-per-instance design with no data shared across nodes. This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundred TPS and you need to serve traffic beyond your local region, you have no alternative other than to run more than one instance of your service in parallel. You can continue to shoehorn your one-database-per-service pattern onto the design, but you're now compelled to find "clever" strategies to sync state across nodes.

Those who know better to not do "clever" simply slap a Postgres node and call it a day.

reply
https://antonz.org/sqlite-is-not-a-toy-database/ — 240K inserts per second on a single machine in 2021. The problem you describe is real, but the TPS ceiling is wrong by three orders of magnitude on modern hardware.
reply
> SQLite on the same machine is akin to calling fwrite.

Actually 35% faster than fwrite [1].

> This is also a system constraint as it forces a one-database-per-instance design

You can scale incredibly far on a single node and have much better uptime than github or anthropic. At this rate, maybe even AWS/cloudflare.

> you need to serve traffic beyond your local region

Postgres still has a single node that can write, so most of the time you end up region sharding anyway. Sharding SQLite is straightforward.

> This is fine if you're putting together a site for your neighborhood's mom and pop shop, but once you need to handle a request baseline beyond a few hundreds TPS

It's actually pretty good for running a real-time multiplayer app with a billion datapoints on a $5 VPS [2]. There's nothing clever going on here: all the state is on the server and the backend is fast.

> but you're now compelled to find "clever" strategies to sync state across nodes.

That's the neat part: you don't. For most things that are not uplink limited (being a CDN, Netflix, Dropbox), a single node is all you need.

- [1] https://sqlite.org/fasterthanfs.html

- [2] https://checkboxes.andersmurphy.com

reply
I wonder what percentage of services run on the Internet exceed a few hundred transactions per second.
reply
I’ve seen multimillion-dollar “enterprise” projects get nowhere close to that. Of course, they all run on scalable, cloud-native infrastructure costing at least a few grand a month.
reply
I think the better question to ask is what services peak at a few hundred transactions per second?
reply
Looks like the overhead is not insignificant:

    Running 100,000 `SELECT 1` queries:
    PostgreSQL (localhost): 2.77 seconds
    SQLite (in-memory): 0.07 seconds
(https://gist.github.com/leifkb/1ad16a741fd061216f074aedf1eca...)
reply
I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries. If you’re doing CRUD, simulating that workflow may favor Postgres given the transactional read/write work that needs to take place across multiple concurrent connections.
reply
> I love them both too but that might not be the best metric unless you’re planning to run lots of little read queries.

Exactly. Back in the real world, anyone who is faced with that sort of use case will simply add a memory cache and not bother with the persistence layer.

reply
This is mostly about the cost of cross-process communication. With SQLite you can guarantee no context switching. Postgres running on the same box gets you close but not all the way: it's still in a different process.
reply
This. Run an app on the same box as PG and you can easily be plagued by out of memory etc (as there's memory contention between the two processes).
reply
Most important is that the local SQLite gets proper backups, so a restore goes without issues.
reply
It gets proper backups if you back it up the right way: https://sqlite.org/backup.html
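Python's stdlib exposes that online backup API as `Connection.backup`; a minimal sketch (file names are illustrative):

```python
import sqlite3

src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
src.commit()

# The online backup API copies pages consistently even while the source
# database is in use; a plain `cp app.db backup.db` can capture a torn,
# mid-transaction state.
dst = sqlite3.connect("app-backup.db")
src.backup(dst)
dst.close()
src.close()
```

The equivalent from the shell is `sqlite3 app.db ".backup app-backup.db"`, which uses the same API under the hood.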
reply
Would be nice to see PGLite[1] compared too

1: https://pglite.dev/

reply
A total performance delta of <3s on ~300k transactions is indeed the definition of irrelevant.

Also:

> PostgreSQL (localhost): (...) SQLite (in-memory):

This is a rather silly example. What do you expect to happen to your data when your node restarts?

Your example makes as much sense as comparing Valkey with Postgres and proceeding to proclaim that the performance difference is not insignificant.

reply
Why are you comparing PostgreSQL to an in-memory SQLite instead of a file-based one? Wow, memory is faster than disk, who would have thought?
reply
Because it doesn't make a difference, because `SELECT 1` doesn't need to touch the database:

    Running 100,000 `SELECT 1` queries:
    PostgreSQL (localhost): 2.71 seconds
    SQLite (in-memory): 0.07 seconds
    SQLite (tempfile): 0.07 seconds
(https://gist.github.com/leifkb/d8778422d450d9a3f103ed43258cc...)
reply
Why are you doing meaningless microbenchmarks?
reply
> Because it doesn't make a difference, because `SELECT 1` doesn't need to touch the database:

I hope you understand that your claim boils down to stating that SQLite is faster at doing nothing at all, which is a silly case to make.

reply
The original claim being discussed is about the overhead of an in-process database vs. a database server in a separate process, not about whether SQLite or PostgreSQL have a faster database engine.
reply
It is insignificant if you're doing 100k queries per day, and you gain a lot for your 3 extra seconds a day.
reply
What a useful "my hello-world script is faster than your hello-world script" example.
reply
I have used SQLite with extensions in extreme throughput scenarios. We’re talking millions of documents per second running through it in order to do disambiguation. I won’t say this wouldn’t have been possible with a remote server, but it would have been a significant technical challenge. Instead we packed up the database on S3, and each instance got a fresh copy and hammered away at the task. SQLite is the time-tested alternative for when you need performance, not features.
reply
> It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports or whatever on the live db from a different box, and much easier if it comes time to setup a read replica, HA, or run the DB on a different box from the app.

Isn't this idea of spending a bit more effort and overhead to get YAGNI features exactly what TFA argues against?

reply
I've been doing that for decades. People seem to simply not know about Unix architecture.

What I like about sqlite is that it's simply one file.

reply
That's just swapping another enterprise-focused concern into the mix. Your database connection latency is absolutely not a concerning part of your system.
reply
Author's own 'auth' project works with sqlite and postgres.
reply
I mean, you’re not wrong about the facts, but it’s also pretty trivial to migrate the data from SQLite into a separate Postgres server later, if it turns out you do need those features after all. But most of the time, you don’t.
reply
I bet that takes more time than the 5 extra minutes it takes to set up Postgres on the same box upfront.
reply
ORDERS OF MAGNITUDE NEWS
reply
IIRC TCP/IP through localhost actually benchmarked faster than Unix sockets because it was optimized harder. Might've been fixed by now. Unix sockets give you the advantage of authentication based on the user ID of who's connecting.

My experience with sqlite for server-based apps has been that as your app grows, you almost always eventually need something bigger than sqlite and need to migrate anyway. For a server-based app, where minimizing deployment complexity isn't an extremely important concern, and with mixed reads and writes, it's rarely a bad idea to use Postgres or MariaDB from the start. Yes, there are niche scenarios where sqlite on the server might be better, but they're niche.

reply