> How many more users could you support if the amount of time you had to await IO for each was reduced by ~2 orders of magnitude?
None - or practically very few, when using an async (or green-threaded) backend stack. The waiting connections / users are just waiting; they don't block new connections from being accepted.
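A minimal sketch of that point using Python's asyncio (the numbers here are made up for illustration): a thousand "users" all awaiting a simulated database round trip at once complete in roughly one round trip's worth of wall time, because coroutines waiting on IO don't block each other.

```python
import asyncio
import time

async def handle_request(i: int) -> str:
    # Simulate a ~5 ms database round trip; while this coroutine
    # awaits, the event loop is free to serve other connections.
    await asyncio.sleep(0.005)
    return f"response {i}"

async def main() -> None:
    start = time.perf_counter()
    # 1000 concurrent "users", all waiting on IO at the same time.
    results = await asyncio.gather(*(handle_request(i) for i in range(1000)))
    elapsed = time.perf_counter() - start
    # Total wall time is on the order of one round trip,
    # not 1000 * 5 ms, since the waits overlap.
    print(f"{len(results)} requests in {elapsed:.3f}s")

asyncio.run(main())
```

The same shape applies to green-threaded runtimes (Go goroutines, JVM virtual threads, etc.): the scheduler parks whatever is waiting on IO and runs something else.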
If the network round trip to Postgres (in my experience around 3-10 ms, though highly dependent on your server infrastructure) were a concern, Postgres could be placed on the same server as the backend, though I wouldn't recommend it. But this relatively small IO overhead usually isn't a concern for most apps.