
Actually, we're very much pushing state-of-the-art :)

It's just that we're tackling a different problem - low latency applications - and when I say low latency I mean in the microseconds range. Big spatial data is a very interesting problem, and tracking a large number (though less than billions) of moving objects is another - though very different - interesting problem.

For working sets that don't fit in one machine's RAM we offer a cluster.



While I do not currently work on ultra-low-latency spatial databases (more like milliseconds), I have in the past, so I have some idea of what is out there. :-) I am not all that familiar with the design of your system, so I was mostly working off the scale numbers offered.

The best example I can think of is an ultra-low-latency in-memory prototype I designed in 2009 on a parallel cluster. The working set was several billion irregular 3-cubes ranging in (metaphorical) size from birds to hurricanes. The average CPU cost of an access operation was sub-microsecond, so the latency was mostly interconnect-related (a slow but proper low-latency supercomputing fabric). The current work I do uses complex geodetic polygon geometries, so the computational cost of operations is quite a bit higher, but the cost of the access method itself is below the noise floor of the network fabric.
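To make the "sub-microsecond access" point concrete: an in-memory index over axis-aligned 3-cubes can answer overlap queries with only a handful of hash lookups. This is a minimal illustrative sketch of a uniform-grid index, not the prototype described above; the class name, cell size, and API are all assumptions for the example.

```python
# Minimal uniform-grid index for axis-aligned 3-cubes (AABBs).
# Each cube is registered in every grid cell it overlaps; a query
# then touches only the cells covered by the query box.
from collections import defaultdict
from itertools import product

class GridIndex:
    def __init__(self, cell_size=1.0):
        self.cell = cell_size
        self.buckets = defaultdict(set)   # cell coordinate -> object ids
        self.boxes = {}                   # object id -> (min_xyz, max_xyz)

    def _cells(self, lo, hi):
        # All integer cell coordinates the box (lo, hi) overlaps.
        ranges = [range(int(lo[i] // self.cell), int(hi[i] // self.cell) + 1)
                  for i in range(3)]
        return product(*ranges)

    def insert(self, obj_id, lo, hi):
        self.boxes[obj_id] = (lo, hi)
        for c in self._cells(lo, hi):
            self.buckets[c].add(obj_id)

    def query(self, lo, hi):
        """Return ids of all boxes overlapping the query box."""
        candidates = set()
        for c in self._cells(lo, hi):
            candidates |= self.buckets.get(c, set())
        hits = []
        for oid in candidates:
            blo, bhi = self.boxes[oid]
            # Standard AABB overlap test on each axis.
            if all(blo[i] <= hi[i] and bhi[i] >= lo[i] for i in range(3)):
                hits.append(oid)
        return hits

idx = GridIndex(cell_size=10.0)
idx.insert("bird", (0, 0, 0), (1, 1, 1))
idx.insert("storm", (50, 50, 0), (90, 90, 10))
print(idx.query((0, 0, 0), (5, 5, 5)))  # only the small box nearby
```

The per-query cost here is dominated by a few dictionary lookups, which is why this style of structure can keep the access method below the latency of the interconnect; real systems tune cell size to object-size distribution or use hierarchical variants.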

You are correct, though, that if you are mostly dealing with tracking points or cubes, then in-memory storage is sufficient for many applications. It is the sensing data that really kills you... :-)





