I think it's even more complicated.

The reduction in latency from an in-process database, combined with modern NVMe storage, means SQLite can be substantially faster than any solution that requires a trip through the network stack.

I've got services in production for several financial institutions right now that resolve most SQL queries within 100-1000 microseconds, simply by using SQLite on reasonable hardware.
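
For anyone who wants to reproduce the order of magnitude: here's a minimal sketch (Python stdlib only; the accounts table and WAL pragma are my assumptions, not the parent's actual setup) timing a point query against a local SQLite file.

    import sqlite3
    import time

    conn = sqlite3.connect("app.db")
    conn.execute("PRAGMA journal_mode=WAL")  # common production setting

    # Hypothetical schema/data so the snippet is self-contained.
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.execute("INSERT OR IGNORE INTO accounts VALUES (42, 100.0)")
    conn.commit()

    t0 = time.perf_counter()
    row = conn.execute("SELECT balance FROM accounts WHERE id = ?", (42,)).fetchone()
    elapsed_us = (time.perf_counter() - t0) * 1_000_000
    print(f"query took {elapsed_us:.0f} us -> {row}")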

How many more users could you support if the amount of time you had to await IO for each was reduced by ~2 orders of magnitude?

The obvious caveat here is resilience. We addressed this with application-level protocols and additional instances.



Another angle is unbounded horizontal scalability across tenants. If your application can be divided cleanly between customers, and one customer can be served by one instance (see the question above about cutting IO wait by two orders of magnitude), then scaling to more customers is extremely simple and linear.
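
A minimal sketch of what that tenant-per-database routing could look like, assuming one SQLite file per customer; the path layout and tenant_id scheme are hypothetical, not from the parent comment:

    import sqlite3
    from pathlib import Path

    DATA_DIR = Path("/var/lib/app/tenants")
    DATA_DIR.mkdir(parents=True, exist_ok=True)

    def db_for_tenant(tenant_id: str) -> sqlite3.Connection:
        # One database file per tenant: no cross-tenant contention,
        # and onboarding a new customer is just creating a new file.
        return sqlite3.connect(DATA_DIR / f"{tenant_id}.db")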


Agreed, but a fairer comparison would be SQLite vs PostgreSQL installed on the same machine.


Even there you'll see the same: microseconds in SQLite vs milliseconds for PostgreSQL. PostgreSQL is simply not the right choice if you want in-process speeds.


What stops you from running Postgres on the same VM/machine as your app? You avoid the network and still reap all the benefits of the NVMe storage.


It still goes through the network stack, just not the network hardware.

Unix sockets are faster, but they still require some system calls that an in-process database can do without.
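
To make the contrast concrete, a small sketch: the SQLite call below executes entirely in-process, while a Postgres query over a unix socket still pays per-query syscalls. The psycopg connection string in the comment is just an illustration of the socket path convention, not anyone's actual config.

    import sqlite3

    # In-process: the SQL is parsed and executed inside this process.
    # No client/server protocol, no socket, no context switch per query
    # (syscalls happen only for actual disk IO).
    row = sqlite3.connect(":memory:").execute("SELECT 1").fetchone()
    print(row)  # (1,)

    # Postgres over a unix socket is still a separate server process, so
    # each query costs at least a write() + read() on the socket plus two
    # context switches -- faster than TCP, but not free.
    # (e.g. psycopg.connect("host=/var/run/postgresql dbname=app"))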


Can you share more info about resilience, app protocols and additional instances?


> How many more users could you support if the amount of time you had to await IO for each was reduced by ~2 orders of magnitude?

None, or practically not many, when using async (or green-thread-style) backend stacks. The waiting connections/users are just waiting; they don't block any new connections.

If the network round trip to Postgres (in my experience around 3-10 ms, though heavily dependent on your server infra) were a concern, Postgres could be placed on the same server as the backend, though I would not recommend it. But this relatively small IO overhead usually isn't a concern for most apps.
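
A toy asyncio sketch of that point, with made-up numbers: 1000 requests all awaiting a simulated 5 ms round trip complete concurrently rather than serially, so the waiters don't block each other (per-request latency stays, throughput doesn't collapse).

    import asyncio
    import time

    async def fake_db_query() -> str:
        await asyncio.sleep(0.005)  # stand-in for a ~5 ms Postgres round trip
        return "row"

    async def handle_request(n: int) -> str:
        # This await yields to the event loop; it blocks nobody else.
        return await fake_db_query()

    async def main() -> None:
        t0 = time.perf_counter()
        # 1000 concurrent "users", each waiting on IO at the same time.
        await asyncio.gather(*(handle_request(i) for i in range(1000)))
        # Finishes in a few hundredths of a second, not the ~5 s that
        # serial execution would take.
        print(f"1000 requests in {time.perf_counter() - t0:.3f}s")

    asyncio.run(main())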


Thank you for sharing! Anecdotes like this are very useful.

Can you share more about the context? How big/gnarly are the tables? How frequently/concurrently are they written to? Based on your experience here, when wouldn't you want to use this approach?



