
This is a very interesting notion. I wonder what the performance constraints are for running SQL queries against MongoDB this way; has anyone attempted it?


We measured performance across several SQL queries, and found that it depends. Assuming that the working set fits into memory:

* If the query scans a single table and has no filter clauses, then the cost of converting BSON documents into PostgreSQL tuples becomes the bottleneck.

* If the query scans a single table, has no filter clauses, and touches only a few columns, then the cost of reading the data from MongoDB (over TCP) becomes the bottleneck.

* If the query joins several tables or has complex sub-selects, then the lack of accurate data statistics forces the planner to choose bad execution plans, and that becomes the bottleneck.

For most queries, though, we found that performance was reasonable: PostgreSQL processed around 200K-400K documents per second per CPU core.
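For context, this kind of SQL access to MongoDB typically goes through a PostgreSQL foreign data wrapper. A minimal sketch, assuming the mongo_fdw extension is installed (the server, database, collection, and column names below are hypothetical):

```sql
-- Assumes the mongo_fdw extension; names below are illustrative.
CREATE EXTENSION mongo_fdw;

CREATE SERVER mongo_server
    FOREIGN DATA WRAPPER mongo_fdw
    OPTIONS (address '127.0.0.1', port '27017');

CREATE USER MAPPING FOR CURRENT_USER SERVER mongo_server;

-- Map a MongoDB collection to a foreign table; each listed
-- document field becomes a column.
CREATE FOREIGN TABLE orders (
    _id      name,
    customer text,
    total    numeric
)
SERVER mongo_server
OPTIONS (database 'shop', collection 'orders');

-- Plain SQL now reads from MongoDB; EXPLAIN shows the chosen plan.
EXPLAIN SELECT customer, sum(total)
FROM orders
GROUP BY customer;
```

Queries like the last one exercise exactly the paths described above: every matching document is fetched over TCP and converted from BSON into PostgreSQL tuples before the aggregation runs.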




