
I've done the SQLite benchmark a few times. You can insert somewhere in the range of 10-20k rows per second if you are using NVMe (i.e. roughly 50-100 µs per row). Requires some pragmas (e.g. journal_mode=WAL). This is 100% serialized throughput.
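For reference, a minimal sketch of that kind of serialized benchmark (not the actual benchmark code; the file and table names are made up, and the pragma choices are one reasonable configuration among several):

    # One row per transaction, WAL mode: measures fully serialized throughput.
    import sqlite3, time

    conn = sqlite3.connect("bench.db", isolation_level=None)  # autocommit
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")
    conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")

    n = 10_000
    start = time.perf_counter()
    for i in range(n):
        # in autocommit mode each execute is its own transaction,
        # so every row pays the full commit cost
        conn.execute("INSERT INTO t (v) VALUES (?)", (f"row-{i}",))
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:,.0f} rows/sec ({elapsed / n * 1e6:.0f} us/row)")

Note that synchronous=NORMAL relaxes durability on commit in WAL mode; with synchronous=FULL the per-row numbers will be noticeably lower.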

No clue what Postgres would manage, but I suspect it would have about an order of magnitude higher latency in the happy case.



> No clue what Postgres would manage, but I suspect it would have about an order of magnitude higher latency in the happy case.

Unless you’re talking 1 versus 10 microseconds (or less), I don’t think Postgres will have an order of magnitude higher latency. And if we are talking this range, why would it matter for a web app where the client’s latency is almost certainly >1 millisecond?


Because it changes the kinds of things you can build: https://www.sqlite.org/np1queryprob.html

With SQLite, it's often practical to issue several queries in situations where that would be too slow for a traditional client-server database.
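A sketch of the N+1 pattern the linked page describes (the schema here is hypothetical, purely for illustration): with SQLite's in-process, microsecond-scale query latency, a loop like this is fine, whereas paying a network round trip per iteration against a client-server database would be painfully slow.

    import sqlite3

    conn = sqlite3.connect("app.db")
    posts = conn.execute("SELECT id, title FROM posts LIMIT 50").fetchall()
    for post_id, title in posts:
        # one extra query per post: harmless in-process, costly over a network
        comments = conn.execute(
            "SELECT body FROM comments WHERE post_id = ?", (post_id,)
        ).fetchall()
        print(title, len(comments))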


I'm pretty sure SQLite is capable of significantly higher throughput than 10-20k rows per second depending on the workload. Inserts can be much faster if they are batched in large transactions and prepared statements are used to avoid SQL parsing for each row. Of course, this only works if your workload can be batched.
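A sketch of that batching approach, under the same assumed schema as above: everything goes into one transaction, and executemany reuses a single prepared statement so the SQL is parsed once rather than per row.

    import sqlite3, time

    conn = sqlite3.connect("bench.db", isolation_level=None)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")

    rows = [(f"row-{i}",) for i in range(100_000)]
    start = time.perf_counter()
    conn.execute("BEGIN")
    conn.executemany("INSERT INTO t (v) VALUES (?)", rows)  # one parse, many binds
    conn.execute("COMMIT")  # single commit amortizes the fsync across all rows
    elapsed = time.perf_counter() - start
    print(f"{len(rows) / elapsed:,.0f} rows/sec")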



