
If you use a bulk insert pattern you might be able to get ~50K inserts per second. If you have to insert one record at a time, it will likely drop down to ~3K or so.
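
A minimal sketch of the difference, using SQLite (which the comment below says should behave similarly); the table name and data here are made up for illustration:

```python
import sqlite3, time

# Hypothetical example: the bulk pattern batches many rows into one
# prepared statement and one transaction, instead of paying the
# per-statement overhead for each record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (symbol TEXT, ts INTEGER, price REAL)")
rows = [("ACME", i, 100.0 + i % 7) for i in range(100_000)]

# One-at-a-time pattern (a small slice, to keep the demo quick).
t0 = time.perf_counter()
with conn:
    for r in rows[:1000]:
        conn.execute("INSERT INTO ticks VALUES (?, ?, ?)", r)
one_at_a_time = time.perf_counter() - t0

# Bulk pattern: one statement, one transaction, many rows.
t0 = time.perf_counter()
with conn:
    conn.executemany("INSERT INTO ticks VALUES (?, ?, ?)", rows)
bulk = time.perf_counter() - t0

print(f"1000 single inserts: {one_at_a_time:.4f}s, "
      f"{len(rows)} bulk-inserted rows: {bulk:.4f}s")
```

Server databases get the same effect from multi-row `INSERT ... VALUES` or `COPY`; the absolute numbers depend heavily on hardware and schema.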

The key is a covering index: you don't want to hit the actual table at all. You should also re-organize the index periodically to keep query times optimal.
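
To illustrate (a hypothetical schema, again in SQLite): when the index contains every column the query touches, the planner can answer the query from the index alone and never visit the table rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticks (symbol TEXT, ts INTEGER, price REAL)")
# Index covers every column the query below needs.
conn.execute("CREATE INDEX idx_cover ON ticks (symbol, ts, price)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT ts, price FROM ticks "
    "WHERE symbol = 'ACME' ORDER BY ts"
).fetchall()
print(plan)  # the plan detail should mention 'COVERING INDEX'
```

In SQLite the periodic re-organization would be `REINDEX idx_cover;` (server databases have their own equivalents, e.g. `REINDEX` in Postgres).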

This is suboptimal in many ways, as the records are tiny yet there is per-row overhead. However, it works fine up to the low billions of rows in general. I think it would work fine in SQLite as well. Redis is a very different animal, however, so a different strategy would be needed.

Migrating the same schema to ClickHouse when you need it (I have no affiliation with them) will give you a ~50X smaller data footprint, ~10X higher ingest rates, and roughly 2X faster queries. However, if you don't really need that, don't bother adding another component to your infrastructure (imo). Also, ClickHouse is much more of a purpose-built Ferrari, i.e. you can't expect it to handle all kinds of "off schema" queries (e.g. give me all the stock values on Jan 10 @ 10:32:02.232am). In a relational DB you can add indexes and get reasonable results. In ClickHouse, schema == query pattern, essentially. Naturally, you should know what you are doing with both models, but with ClickHouse, don't treat it like a "magic data box" that can handle anything you throw at it (beyond a certain scale at least).
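
To make "schema == query pattern" concrete, here is a hedged sketch of what such a table might look like in ClickHouse (table and column names are made up): the `ORDER BY` key in a MergeTree table effectively *is* the index, so queries that filter on a prefix of `(symbol, ts)` are fast, while anything else falls back to scanning.

```sql
-- Hypothetical ClickHouse schema for tick data.
CREATE TABLE ticks
(
    symbol LowCardinality(String),
    ts     DateTime64(3),
    price  Float64
)
ENGINE = MergeTree
ORDER BY (symbol, ts);
```

There is no equivalent of "just add another index" for a new access pattern; you typically add a materialized view or a second table sorted differently.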


