
I did mass-scale performance benchmarking on highly optimized workloads using lock-free queues and fibers, and pinning to a core was almost never faster. There were a few topologies where it was, but they were outliers.

This was on a wide variety of Intel, AMD, and ARM processors, including NUMA systems, with different architectures, OSes, and memory configurations.

Part of the reason is hyper-threading (or Threadripper-type architectures), but even pinning to groups of cores usually wasn't faster.

This was even more so the case when you had competing workloads stealing cores from the OS scheduler.
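
For anyone who hasn't done this: "pinning to a core" here means hard CPU affinity. A minimal Linux sketch of the pinned variant (pthread_setaffinity_np is a GNU extension, and the core number is an arbitrary choice):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        (void)arg;
        /* ... the lock-free queue / fiber workload would run here ... */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        if (pthread_create(&t, NULL, worker, NULL) != 0)
            return 1;

        /* Pin the worker to core 2: the "locked" variant being
           compared against letting the scheduler float the thread. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);
        int rc = pthread_setaffinity_np(t, sizeof(set), &set);
        if (rc != 0)
            fprintf(stderr, "pthread_setaffinity_np: %d\n", rc);

        pthread_join(t, NULL);
        return 0;
    }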



Most high-performance workloads are limited by memory bandwidth these days. Even in HPC it became the primary bottleneck for a large percentage of workloads back in the 2000s. High-performance data infrastructure is largely the same; you can drive 200 GB/s of I/O on a server in real systems today.

The memory-bandwidth-bound case is where thread-per-core tends to shine. It is the problem thread-per-core was invented to solve in HPC, where it empirically had significant performance benefits. Today we use it in high-scale databases and other I/O-intensive infrastructure when performance and scalability are paramount.

That said, it is an architecture that does not degrade gracefully. I've seen more thread-per-core implementations in the wild that were broken by design than ones that were implemented correctly. It requires a commitment to rigor and thoroughness in the architecture that most software devs are not used to.
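
For context, the basic shape of thread-per-core is one pinned OS thread per core, each owning its own shard of state, with nothing shared on the hot path. A minimal Linux sketch (details elided; real implementations also partition NICs, memory, and queues per core):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <unistd.h>

    #define MAX_CORES 256

    /* Each shard is private to one core: no locks, no shared
       mutable state on the hot path. */
    struct shard {
        long core;
        /* ... per-core queues, allocator, connections, etc. ... */
    };

    static void *shard_main(void *arg) {
        struct shard *s = arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((int)s->core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        /* ... event loop consuming only this shard's work ... */
        return NULL;
    }

    int main(void) {
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        if (n > MAX_CORES) n = MAX_CORES;

        pthread_t tids[MAX_CORES];
        struct shard shards[MAX_CORES];
        for (long i = 0; i < n; i++) {
            shards[i].core = i;
            pthread_create(&tids[i], NULL, shard_main, &shards[i]);
        }
        for (long i = 0; i < n; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }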


I think the workload might be as big a factor as (if not a bigger one than) the uniqueness of the topology itself in how much pinning matters. If your workload is purely compute-limited, it doesn't matter. Same if it's genuinely I/O-limited. If it's memory-bandwidth-limited, then it depends on things like how much fits in per-core cache vs. shared cache vs. going to RAM, and how RAM is actually fed to the cores.
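
A crude way to see those regimes is to stream-sum buffers of increasing size and watch throughput fall off as you spill out of each cache level. A rough single-threaded sketch (the sizes and pass counts are arbitrary; a serious benchmark would pin the thread, defeat prefetching, and control for turbo and NUMA placement):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Stream-sum a buffer repeatedly. While it fits in a core's
       private cache you measure cache bandwidth; past the LLC
       you measure RAM. */
    static double sweep(size_t bytes) {
        size_t n = bytes / sizeof(long);
        long *buf = malloc(bytes);
        if (!buf) return 0.0;
        for (size_t i = 0; i < n; i++) buf[i] = (long)i;

        size_t passes = (256u << 20) / bytes;   /* ~256 MB of traffic */
        if (passes < 4) passes = 4;

        long acc = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t p = 0; p < passes; p++)
            for (size_t i = 0; i < n; i++) acc += buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        volatile long sink = acc; (void)sink;   /* keep the loop alive */
        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        free(buf);
        return (double)bytes * passes / secs / 1e9;   /* GB/s */
    }

    int main(void) {
        /* from 32 KB (L1-ish) up well past the LLC */
        for (size_t kb = 32; kb <= 256 * 1024; kb *= 4)
            printf("%8zu KB: %6.1f GB/s\n", kb, sweep(kb * 1024));
        return 0;
    }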

A really interesting niche is all of the performance considerations around the design/use of VPP (Vector Packet Processing) in the networking context. It's just one example of a single niche, but it can give a good idea of how both "changing the way the computation works" and "changing the locality and pinning" can come together at the same time. I forget the username but the person behind VPP is actually on HN often, and a pretty cool guy to chat with.
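
The core VPP trick, very roughly: instead of pushing each packet through the whole pipeline one at a time (which thrashes the I-cache between stages), you push a whole batch through one stage at a time, so each stage's code and tables stay hot. This is not VPP's actual API, just the shape of the idea with stage bodies elided (IIRC VPP frames hold up to 256 packets):

    #include <stddef.h>

    #define VEC 256   /* packets per batch */

    struct pkt {
        unsigned char *data;
        size_t len;
    };

    /* Each stage runs over the whole batch before the next stage
       starts, so its code and state stay in cache. */
    static void stage_decap(struct pkt *v, int n)   { (void)v; (void)n; /* strip headers */ }
    static void stage_lookup(struct pkt *v, int n)  { (void)v; (void)n; /* FIB lookup    */ }
    static void stage_rewrite(struct pkt *v, int n) { (void)v; (void)n; /* write headers */ }

    static void process_vector(struct pkt *v, int n) {
        stage_decap(v, n);     /* vs. per-packet: decap, lookup,   */
        stage_lookup(v, n);    /* rewrite, repeat, which re-faults */
        stage_rewrite(v, n);   /* each stage's code per packet     */
    }

    int main(void) {
        struct pkt vec[VEC] = {0};
        process_vector(vec, VEC);
        return 0;
    }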

Or, as vacuity put it, "there are no hard rules; use principles flexibly".


Thanks for sharing. Aside from what the other replies to you have shared, I admittedly have less experience, and I'm mainly interested in the OS perspective. Balancing global and local optimizations is hard, so the OS deserves some leeway, but as I see it, mainstream OSes tend to be awkward no matter what. It's long past time for OS schedulers to consider high-level metadata to get a rough idea of the idiosyncrasies of the workload. In the extreme case, designing the OS from the ground up to minimize cross-core contention[0] gives the most control, maximizing potential performance. As jandrewrogers says in a sibling reply, this requires a commitment to rigor, treacherous and nonportable as it is. In any case, with improved infrastructure ("with sufficiently smart compilers"...), thread-per-core gains power.

[0] https://news.ycombinator.com/item?id=45651183
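
To be fair, Linux already has one narrow channel for this kind of metadata: SCHED_DEADLINE, where a thread declares its runtime/deadline/period instead of letting the default scheduler guess. A minimal sketch with made-up numbers (glibc has no wrapper, so the struct and policy constant are declared by hand, following the sched_setattr(2) man page):

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* No glibc wrapper exists, so declare these ourselves,
       as the sched_setattr(2) man page does. */
    #define SCHED_DEADLINE 6

    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;      /* for SCHED_OTHER/BATCH  */
        uint32_t sched_priority;  /* for SCHED_FIFO/RR      */
        uint64_t sched_runtime;   /* for SCHED_DEADLINE, ns */
        uint64_t sched_deadline;
        uint64_t sched_period;
    };

    int main(void) {
        struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = SCHED_DEADLINE,
            .sched_runtime  =  5 * 1000 * 1000,   /* need 5 ms of CPU...   */
            .sched_deadline = 20 * 1000 * 1000,   /* ...within every 20 ms */
            .sched_period   = 20 * 1000 * 1000,
        };
        /* Declare the workload's shape to the kernel instead of
           letting the default scheduler infer it. */
        if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0)
            perror("sched_setattr");
        /* ... periodic work loop ... */
        return 0;
    }

It needs CAP_SYS_NICE or root, and it's per-thread plumbing rather than the high-level hints I'd actually want, but it shows the kernel can act on a declared workload shape.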


What type of workloads?



