
If you have a polling queue, as the number of workers grows too large, the overhead of maintaining connections from (worker) -> (stuff in the middle) -> (db or whatever keeps queue state) becomes too high. In the extreme case, you reach a point of diminishing returns, where adding more workers hurts both throughput and latency.

That is, initially you can decrease latency by sacrificing utilization: add more workers that usually sit idle. This increases throughput during heavy load and decreases latency. Until it doesn't, because the overhead from all the extra workers causes a slowdown up the stack (either the controller that's handling messages, or the db that holds the queue state, or whatever is up there). Past that point, adding workers increases latency and decreases throughput, because of the extra time wasted on overhead.
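The tipping point is easy to see in a toy model. All the numbers below are hypothetical: assume the queue backend can serve a fixed number of queries per second, each worker burns some of that capacity just by polling, and a busy worker processes items at a fixed rate. Throughput first climbs with worker count, then falls as polling overhead eats the backend:

```python
# Toy model (hypothetical numbers) of polling overhead vs. worker count.
DB_CAPACITY = 1000.0   # queries/sec the queue backend can serve in total
POLL_COST = 8.0        # queries/sec of polling overhead per worker, busy or idle
WORK_RATE = 50.0       # items/sec one busy worker can process

def throughput(workers: int) -> float:
    # Backend capacity left over after every worker's polling overhead.
    usable = max(DB_CAPACITY - workers * POLL_COST, 0.0)
    # Actual throughput is limited by both the workers and the backend.
    return min(workers * WORK_RATE, usable)

best = max(range(1, 130), key=throughput)
print(best, throughput(best))        # peaks at 18 workers, ~856 items/sec
print(throughput(60))                # 60 workers: only 520 items/sec
```

Past the peak, every extra worker strictly reduces usable backend capacity, which is the "more workers hurts" regime described above.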

It depends on a lot of different variables.

If you have work items that vary significantly in cost, that adds even more problems (e.g. if your queue items can vary 3+ orders of magnitude in processing cost).



Well, yes, but you can solve that in a lot of ways.
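One of the common fixes is routing items into separate queues by estimated cost, so huge items never sit ahead of cheap ones. A minimal sketch, with entirely hypothetical thresholds and queue names:

```python
# Cost-class routing: keep cheap items out from behind expensive ones.
# Thresholds and queue names are made up for illustration.

def route(estimated_cost: float) -> str:
    if estimated_cost < 10.0:
        return "fast"
    if estimated_cost < 1000.0:
        return "medium"
    return "slow"

queues = {"fast": [], "medium": [], "slow": []}
for cost in [0.5, 3.0, 120.0, 5000.0, 7.0]:
    queues[route(cost)].append(cost)
print(queues)   # cheap items land in "fast", the 5000-unit item in "slow"
```

Each class then gets its own worker pool sized for its traffic, which also sidesteps the tail-latency problem above. Other options include work stealing, preemption/chunking of large items, or per-item deadlines.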




