The reduction in latency brought on by in-process databases, combined with modern NVMe storage, means that SQLite is a substantially faster approach than any other solution which requires a trip through the network stack.
I've got services in production for several financial institutions right now that are resolving most SQL queries within 100-1000 microseconds by simply using SQLite on reasonable hardware.
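A minimal sketch of how you'd verify that kind of number yourself: time a tight loop of indexed point lookups against an in-process SQLite database. The table name, row count, and loop size here are illustrative, not from the production setup described above.

```python
import sqlite3
import time

# In-memory DB for the sketch; a file on NVMe behaves similarly for reads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    ((i, float(i)) for i in range(100_000)),
)
conn.commit()

# Time N indexed point lookups; no network stack involved.
N = 10_000
start = time.perf_counter()
for i in range(N):
    conn.execute("SELECT balance FROM accounts WHERE id = ?", (i,)).fetchone()
elapsed = time.perf_counter() - start

print(f"avg query latency: {elapsed / N * 1e6:.1f} us")
```

On most modern hardware a loop like this lands in the single-digit to low-hundreds-of-microseconds range per query, which is the gap being contrasted with a 3-10 ms network round trip.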
How many more users could you support if the amount of time you had to await IO for each was reduced by ~2 orders of magnitude?
Obvious caveats here being the resilience angle. We addressed this with application-level protocols and additional instances.
Another angle is unbounded horizontal scalability across tenants. If your application can be divided cleanly between different customers, and one customer can be served by one instance (per the question above about cutting IO wait by ~2 orders of magnitude), then scaling to more customers is extremely simple and linear.
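A hypothetical sketch of that one-database-per-tenant layout: each customer gets a private SQLite file, so adding a tenant adds a file (or an instance), not load on a shared database server. The names (`tenant_db`, `DATA_DIR`) are illustrative.

```python
import sqlite3
import tempfile
from pathlib import Path

# One SQLite file per tenant; directory location is illustrative.
DATA_DIR = Path(tempfile.mkdtemp(prefix="tenants_"))

_connections: dict[str, sqlite3.Connection] = {}

def tenant_db(tenant_id: str) -> sqlite3.Connection:
    """Return (and cache) the connection to one tenant's private database."""
    if tenant_id not in _connections:
        conn = sqlite3.connect(DATA_DIR / f"{tenant_id}.db")
        # WAL mode allows concurrent readers alongside a writer per tenant.
        conn.execute("PRAGMA journal_mode=WAL")
        _connections[tenant_id] = conn
    return _connections[tenant_id]

# Each tenant's data and write contention are fully isolated.
tenant_db("acme").execute(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)"
)
```

Because tenants never share a database file, there is no cross-tenant lock contention, and capacity planning reduces to instances-per-customer arithmetic.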
> How many more users could you support if the amount of time you had to await IO for each was reduced by ~2 orders of magnitude?
None, or practically speaking not many, when using an async (or green-threaded) backend stack. The waiting connections/users are just waiting and don't block any new connections.
If the network round trip to Postgres (in my experience around 3-10 ms, though highly dependent on your server infra) were a concern, Postgres could be placed on the same server as the backend, though I would not recommend it. But this relatively small IO overhead usually isn't a concern for most apps.
Thank you for sharing! Anecdotes like this are very useful.
Can you share more about the context? How big/gnarly are the tables? How frequently/concurrently are they written to? Based on your experience here, when wouldn't you want to use this approach?