The microblog: 2022.06.15 08:10:26

2022.06.15 08:10:26 (1536954264953573376) from Daniel J. Bernstein:

As someone who happily runs servers and laptops at constant clock frequencies (see for Linux advice) rather than heat-the-hardware random frequencies, I dispute the claim in that this has an "extreme system-wide performance impact".
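[For readers looking for the Linux mechanics: a minimal sketch of pinning cores to a fixed frequency. The 2000MHz value is an arbitrary placeholder, and the exact knobs depend on the kernel's cpufreq driver; this is not necessarily the advice linked above.]

```shell
# Sketch: pin all cores to one fixed frequency on Linux (requires root).
# Paths and availability vary by kernel version and cpufreq driver.

# Option 1: cpupower, from the kernel's linux-tools package
cpupower frequency-set --min 2000MHz --max 2000MHz

# Option 2: write the sysfs knobs directly, per CPU (values in kHz)
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  echo 2000000 > "$cpu/cpufreq/scaling_min_freq"
  echo 2000000 > "$cpu/cpufreq/scaling_max_freq"
done

# Disable turbo entirely (intel_pstate driver only)
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo
```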

2022.06.15 08:19:36 (1536956569220308992) from Daniel J. Bernstein:

Using all server cores _while keeping the hardware alive for a long time_ is what gets the most computation done per dollar. My experience running >100 servers of many different types is that the best clock frequencies for this are at or below base frequency, no Turbo Boost.
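[The computation-per-dollar point can be made with back-of-the-envelope arithmetic. A sketch under a simplified model: dynamic power is roughly C·V²·f, and voltage scales roughly linearly with frequency in the DVFS range, so power grows roughly as f³ while compute-bound throughput grows only as f. Real chips have static power, voltage floors, and thermal limits that this ignores.]

```python
# Simplified DVFS model: P ~ k * f^3 (dynamic power with V ~ f),
# throughput ~ f for compute-bound work. Illustrative only.

def relative_power(freq_ratio):
    """Power at freq_ratio * base frequency, relative to base."""
    return freq_ratio ** 3

def perf_per_watt(freq_ratio):
    """Throughput per watt relative to base; equals 1 / freq_ratio^2."""
    return freq_ratio / relative_power(freq_ratio)

for r in (0.8, 1.0, 1.5, 2.0):
    print(f"{r:.1f}x base: {relative_power(r):.2f}x power, "
          f"{perf_per_watt(r):.2f}x throughput/watt")
```

Under this model, running at 0.8x base gets about 1.56x the throughput per watt of base frequency, while a 2x turbo bin gets only a quarter of it, which is one way to see why "at or below base frequency" can win on computation per dollar when power and cooling are part of the bill.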

2022.06.15 08:26:46 (1536958374041923589) from Daniel J. Bernstein:

Meanwhile I'm rarely waiting for my laptop, even with it running at very low speed. I'm happy with the laptop staying cool and quiet. Yes, I know there are some people using monster "laptops" where I'd use a server, but are they really getting "extreme" benefits from Turbo Boost?

2022.06.15 08:32:25 (1536959798654009345) from Daniel J. Bernstein:

It's easy to find Intel laptops where the nominal top Turbo Boost frequency is more than twice the base frequency. These laptops can't run at anywhere near that top frequency for optimized computations running on all cores. Where's the "extreme system-wide performance impact"?
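[Anyone can check the base-versus-turbo gap on their own Linux laptop. A sketch; the `base_frequency` file exists only under the intel_pstate driver, and all values are in kHz.]

```shell
# Inspect the advertised frequency range of cpu0 (paths vary by driver)
cat /sys/devices/system/cpu/cpu0/cpufreq/base_frequency    # base, intel_pstate only
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq  # top turbo bin
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq  # what you actually get now
```

Comparing `cpuinfo_max_freq` against `base_frequency` shows the nominal ratio; watching `scaling_cur_freq` during an all-core optimized computation shows how far below the top turbo bin the machine actually settles.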

2022.06.15 08:38:21 (1536961290656047104) from Daniel J. Bernstein:

What I find particularly concerning about these unquantified claims of an "extreme" impact is that, in context, these claims are trying to stop people from considering a straightforward solution to a security problem. If the costs are supposedly unacceptable, let's hear numbers.