Part of why that happens is Intel selling chips closer to the red line. You need cooling similar to what used to be exclusive to overclocking just to keep the stock CPU cool.
Yep. We're apparently finding out that it's mostly a waste of electricity to get an extra 5% performance due to how far outside the efficiency sweet spot chips are being pushed.
Not just Intel either. AMD has joined the game as of Zen 4, and NVIDIA's been playing it with their GPUs forever as well.
Zen 4 desktop CPUs appear to lose (as expected) virtually no single-core performance, and maybe 5% of multi-core performance on CPU-bound workloads, when you reduce the power limit to cut total power consumption -- by over 100W in the case of the new 7950X! Granted, Intel's been doing that forever -- rein in Alder Lake and its power consumption also comes way down, again for barely a performance hit in CPU-bound multi-core tasks.
-----
Enthusiast-grade CPUs and GPUs are basically sold in the equivalent of a TV's retail "demo mode" now -- where a TV has brightness, contrast and saturation maxed out in a way you'd NEVER use at home, just to grab a customer's attention as they walk by. They're pushed so far outside their efficiency sweet spot just to get that extra 5% and "win benchmarks" that, outside of specific use cases (and even if you actually need that 5%!), you're consuming 50-100% more electricity for utterly marginal gains.
What a waste of resources! All so children (or people who still act like them) can feel better about a purchase as they argue on the internet over nothing worth arguing about.
If you truly wanted to maximize performance per watt, you'd pick a very different design, more reminiscent of GPUs. But then single-thread and low-thread-count performance would really suck. So it will always be a carefully tuned tradeoff.
Nope. Not even. Again, as the grandparent post stated, everything is being sold with the default configuration being redlined.
You absolutely can rein it back in to sanity, get better performance per watt than the previous gen, and still be noticeably faster than the previous gen.
-----
With AMD, apparently we're going to see BIOS updates from board manufacturers to make doing that simple. Set it to a lower power limit in a few keystrokes, still have something blazing fast (faster than previous gen), but use 145W instead of 250W. Or go even lower, still be a bit faster than previous gen while using around 88W on a 7950X instead of the 118W a 5950X did.
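To make the tradeoff concrete, here's a minimal sketch of the perf-per-watt arithmetic using the wattage figures above. The performance numbers are assumptions for illustration (stock = 1.0, roughly 5% multi-core loss at the lower limit, per the earlier comment), not measurements:

```python
# Rough perf-per-watt comparison for a hypothetical 7950X at stock vs. a
# reduced power limit. Performance is normalized so stock = 1.0; the ~5%
# loss at 145W is an assumption taken from the discussion, not a benchmark.

def perf_per_watt(relative_perf: float, watts: float) -> float:
    """Performance delivered per watt consumed."""
    return relative_perf / watts

stock = perf_per_watt(1.00, 250)  # stock power limit
eco   = perf_per_watt(0.95, 145)  # lower limit, ~5% slower in multi-core

improvement = eco / stock - 1
print(f"perf/W gain at the 145W limit: {improvement:.0%}")  # → roughly 64%
```

Even with the pessimistic 5% loss baked in, cutting the limit from 250W to 145W comes out well ahead on efficiency.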
Intel -- who has been redlining their CPUs for years now -- even touted Raptor Lake's efficiency at lower power levels: again, cut power consumption by 40-50% for only a tiny performance hit. They actually devoted entire slides in their recent presentation to highlighting this!
NVIDIA is no different, and has been for years. Ampere's stock voltages were well outside the sweet spot. Undervolt, cut power consumption by 20-25%, and performance is UNCHANGED.
-----
Sure, there's more efficient stuff. Take last generation's 8-core Ryzen 7 PRO 5750GE. About 80% of the performance of an Intel Core i5-12600K, but only uses 38W flat out instead of 145W.
You don't even really need to rein it back, modern processors will throttle back automatically depending on how effective the cooling is. Anyway, the issue with manual undervolting is that it may adversely impact reliability if you ended up with a slightly substandard chip that would still work fine at stock settings. That's why it can't just be a default.
>You don't even really need to rein it back, modern processors will throttle back automatically depending on how effective the cooling is
This isn't about thermals. This is about power consumption.
I'm not suggesting reining in a CPU's power limits because it's "too hot".
I'm suggesting getting 95% of the performance for 59% of the power consumption. Because it's not worth spending roughly 70% more on electricity for a 5% performance increase. Again, even the manufacturers themselves know this and are admitting it. Look at this slide from Intel: https://cdn.arstechnica.net/wp-content/uploads/2022/09/13th-... It purports identical performance to the previous gen at 25% of the power consumption. They KNOW the default configuration (which you can change in a few keystrokes) is total garbage in terms of power efficiency.
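For anyone who wants to sanity-check those percentages, here's a quick sketch. The 95%/59% figures are from this comment; everything else is just arithmetic:

```python
# If a lower power limit gives 95% of stock performance at 59% of stock
# power, how much extra power does the stock configuration burn, and for
# how much extra performance? (Figures are from the comment, not measured.)

reduced_perf, reduced_power = 0.95, 0.59

extra_perf  = 1 / reduced_perf - 1    # stock performance gain vs. reduced
extra_power = 1 / reduced_power - 1   # stock power cost vs. reduced

print(f"stock vs. reduced: +{extra_perf:.1%} perf for +{extra_power:.1%} power")
```

Run it and the stock configuration comes out to roughly a 5% performance gain for roughly 70% more power, which is the tradeoff being complained about.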
-----
I guarantee you server CPUs aren't going to be configured to be this idiotic out of the box. Because in the datacenter, perf/W matters and drives purchasing decisions.