
Mostly a reminder/clarification of things I knew, but a good and welcome one, well stated, since I probably sometimes forget. (I don't do performance work a lot.)

But this:

> If you must use latency to measure efficiency, use mean (avg) latency. Yes, average latency

Not sure if I ever thought about it before, but after following the link[1] where OP talks more about it, they've convinced me. Definitely want mean latency at least in addition to median, not median alone.

[1]: https://brooker.co.za/blog/2017/12/28/mean.html
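
If it helps, here's a minimal sketch (made-up numbers, not from the linked post) of the property that makes the mean attractive: it composes cleanly across shards and ties directly to total work, whereas medians don't combine.

    # Sketch: mean latency composes across servers, median does not.
    import statistics

    # Hypothetical per-request latencies (ms) from two servers
    server_a = [10, 12, 11, 400]   # one slow outlier
    server_b = [20, 22, 21, 19]

    combined = server_a + server_b

    # The combined mean can be recovered from per-server means and counts...
    mean_combined = statistics.mean(combined)
    recombined = (statistics.mean(server_a) * len(server_a) +
                  statistics.mean(server_b) * len(server_b)) / len(combined)
    assert abs(mean_combined - recombined) < 1e-9

    # ...but the combined median cannot, in general, be derived from the two medians.
    print(statistics.median(server_a), statistics.median(server_b),
          statistics.median(combined))

    # The mean also ties directly to total work: total time = mean * request count.
    assert abs(sum(combined) - mean_combined * len(combined)) < 1e-9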



> at least in addition to median

There was an interesting article here not long ago that made the point that median is basically useless. If you load 5 resources on a page load, the odds of all of them being faster than the median (so that it represents the user experience) are about 3%. You need a very high percentile to get any useful information, probably with a number of 9s.
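
For anyone who wants to check the arithmetic, that 3% is just 0.5^5. A tiny sketch, assuming the 5 resource latencies are independent:

    # Probability that all n independent resources come in under the median latency.
    # With n = 5 this is 0.5**5 ~= 3.1%, matching the figure above.
    def p_all_under_median(n: int) -> float:
        return 0.5 ** n

    for n in (1, 2, 5, 10):
        print(n, f"{p_all_under_median(n):.1%}")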


Median for a particular action/page might be more useful.


No doubt about that (even then, you will probably want the 90th or 99th percentile, depending on how many interactions you expect a person to have).

The real median is just very hard to measure, and an easier 99.99th percentile (with more 9's as needed) is almost as good.
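
Roughly, if you want a fraction q of sessions in which every one of n interactions beats the reported number, the percentile you report has to be q^(1/n), which is where the extra 9's come from. A quick sketch, assuming independent interactions:

    # Percentile to track so that a fraction `target` of sessions have
    # *all* n interactions under it, assuming independence between interactions.
    def required_percentile(n: int, target: float = 0.95) -> float:
        return target ** (1 / n) * 100

    for n in (1, 5, 20, 100):
        print(f"{n:>4} interactions -> p{required_percentile(n):.2f}")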


Can you say more about why you say the "real median" is hard to measure? It doesn't seem hard to measure to me, or any harder than a 99.99 percentile. Why is 50th percentile harder to measure than 99.99th?


> If we're expecting 10 requests per second at peak this holiday season, we're good.

The problem is that system engineers sometimes don't know what to expect, but they still need a plan for that case.



