2022.06.15 08:44:22 (1536962803222708224) from Daniel J. Bernstein, replying to "Ruben Kelevra (@RubenKelevra)" (1536957907622928384):
"Obviously"? Have you measured the difference? Would you call it "extreme"? If it turns out that you're waiting primarily for the CPU to finish web-page computations (and not the network), wouldn't the best way to reduce latency be to split those computations across _all_ cores?
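The "split those computations across _all_ cores" point can be sketched in a few lines. This is a hedged illustration, not anything from the thread: `render_chunk` is a hypothetical stand-in for a parallelizable slice of page-rendering work, and the chunk sizes are arbitrary.

```python
# Sketch: if page-load latency is CPU-bound, splitting the work across
# all cores cuts wall-clock time without raising any core's clock.
# render_chunk is a hypothetical stand-in for one slice of the work.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def render_chunk(n):
    # Stand-in CPU-bound work: sum of squares over a range.
    return sum(i * i for i in range(n))

CHUNKS = [200_000] * 8  # eight equal slices of "page work" (arbitrary)

t0 = time.perf_counter()
serial = [render_chunk(n) for n in CHUNKS]
serial_s = time.perf_counter() - t0

t0 = time.perf_counter()
with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
    parallel = list(pool.map(render_chunk, CHUNKS))
parallel_s = time.perf_counter() - t0

print(f"serial {serial_s:.3f}s  parallel {parallel_s:.3f}s")
```

On a multicore machine the parallel run finishes in a fraction of the serial wall-clock time for the same total cycles, which is the latency argument being made above.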
2022.06.15 08:10:26 (1536954264953573376) from Daniel J. Bernstein:
As someone who happily runs servers and laptops at constant clock frequencies (see https://bench.cr.yp.to/supercop.html for Linux advice) rather than heat-the-hardware random frequencies, I dispute the claim in https://www.hertzbleed.com that this has an "extreme system-wide performance impact".
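For readers unfamiliar with what "constant clock frequencies" looks like in practice on Linux: one common mechanism is the cpufreq sysfs interface, where pinning the minimum and maximum scaling frequency to the same value fixes the clock. The sketch below only *prints* the privileged shell commands so they can be inspected; the target frequency is an arbitrary example, the exact files vary by driver (e.g. intel_pstate also has a separate turbo knob), and the authoritative advice is the SUPERCOP page linked above.

```python
# Hedged sketch: generate (not execute) the shell commands that would pin
# every core to one frequency via the Linux cpufreq sysfs interface by
# setting scaling_min_freq == scaling_max_freq. Writing these files
# requires root; exact behavior depends on the cpufreq driver in use.
import glob
import os

def pin_commands(khz):
    cmds = []
    for base in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        cmds.append(f"echo {khz} > {base}/scaling_max_freq")
        cmds.append(f"echo {khz} > {base}/scaling_min_freq")
    return cmds

for cmd in pin_commands(2_400_000):  # 2.4 GHz, an arbitrary example value
    print(cmd)
```

On a machine without a cpufreq-capable kernel (or inside many containers) the glob matches nothing and the sketch prints no commands.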
2022.06.15 08:24:55 (1536957907622928384) from "Ruben Kelevra (@RubenKelevra)":
Well, that depends on the workload. If you want maximum CPU cycles out of a CPU over its life span, you may be right. But starting a browser or loading a webpage needs only one or two cores, which obviously complete the task much faster at 3.8 GHz than at, say, 2.4 GHz.