The microblog: 2020.02.04 16:49:22

2020.02.04 16:49:22 (1224721615717617665) from Daniel J. Bernstein, replying to "John Schanck (@susurrusus)" (1224707230550974464):

You have the attack ordering backwards. One can massively slow down the obvious attack by replacing RAM with disk, and it still takes (for n=53, as per IBM vs. Google) only a few days, whereas the "better" attack takes 10000 years. RAM + special-purpose chips would take milliseconds.
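A back-of-envelope version of this comparison, as a sketch. The 18 PB figure follows from the 16-bit-per-entry table mentioned later in the thread; the bandwidth figures are illustrative assumptions of mine, not numbers from the tweets:

```python
# Rough cost model for one pass over the full output table at n = 53.
n = 53
bytes_per_entry = 2                     # 16-bit probabilities
table_bytes = 2 ** n * bytes_per_entry  # ~18 PB

# Assumed throughputs (illustrative, not measured):
disk_bandwidth = 100e9      # 100 GB/s aggregate over a disk array
ram_asic_bandwidth = 1e19   # massively parallel RAM + special-purpose chips

disk_days = table_bytes / disk_bandwidth / 86400  # on the order of days
asic_ms = table_bytes / ram_asic_bandwidth * 1000 # on the order of milliseconds
```

Under these (assumed) throughputs, the disk version lands at roughly two days and the RAM-plus-ASIC version at roughly two milliseconds, matching the orders of magnitude claimed in the tweet.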

2020.02.04 16:52:48 (1224722481929428992) from Daniel J. Bernstein:

Regarding the algorithm details, it should be obvious that recomputing a path is slower than caching it, and all of the low-memory algorithms have _tons_ of recomputation. Aaronson is talking about (and claims that Google has) generic enough circuits that 2^n is the speed record.
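A toy illustration of why recomputation loses to caching. This is a stand-in recursion (not any real circuit simulator): the uncached version re-derives every sub-path from scratch, exponentially many times, while the cached version computes each (depth, state) sub-result once:

```python
from functools import lru_cache

calls = {"plain": 0, "cached": 0}

def amp_plain(depth, state):
    # low-memory style: recompute every sub-path from scratch
    calls["plain"] += 1
    if depth == 0:
        return 1
    return amp_plain(depth - 1, state) + amp_plain(depth - 1, state ^ 1)

@lru_cache(maxsize=None)
def amp_cached(depth, state):
    # high-memory style: cache each (depth, state) sub-result once
    calls["cached"] += 1
    if depth == 0:
        return 1
    return amp_cached(depth - 1, state) + amp_cached(depth - 1, state ^ 1)

amp_plain(20, 0)   # ~2 million recursive calls
amp_cached(20, 0)  # a few dozen recursive calls
```

The gap widens exponentially with depth, which is the "tons of recomputation" the low-memory algorithms pay for.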

2020.02.04 09:42:06 (1224614090326278144) from Daniel J. Bernstein:

After computing the distribution of 2^n outputs (16-bit accuracy is overkill), the attacker samples from the distribution (don't even have to bother adding fake errors given the protocol definition), using a hidden PRNG seed. This directly violates the "certified randomness".
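A minimal sketch of the attack being described, at toy size. The weights here are a stand-in for the precomputed circuit probabilities, and the seed value is hypothetical; the point is that the samples follow the advertised distribution while being fully reproducible by the attacker:

```python
import random

n = 8  # toy size; the discussion above concerns n = 53

# Step 1: precompute the full 2^n-entry output distribution once.
# (Stand-in weights here; a real attacker would use the simulated
# circuit probabilities, to 16-bit accuracy.)
rng = random.Random(0)
weights = [rng.random() for _ in range(2 ** n)]

# Step 2: answer every challenge by sampling from that table with a
# PRNG seed known only to the attacker.
hidden_seed = 0xDEADBEEF  # hypothetical value
sampler = random.Random(hidden_seed)
outputs = sampler.choices(range(2 ** n), weights=weights, k=10)

# Anyone holding the seed can replay the "random" outputs exactly,
# so nothing is certified as random.
replay = random.Random(hidden_seed).choices(range(2 ** n), weights=weights, k=10)
assert outputs == replay
```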

2020.02.04 10:21:01 (1224623886538608640) from Daniel J. Bernstein:

I think I've diagnosed the conceptual mistake that led to this broken protocol. The primary argument for algorithms using vastly more than 2^n time, but less space, is that the legitimate user can't afford 2^n RAM. Conceptual mistake: thinking that attackers can't afford 2^n RAM.
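To put the "can attackers afford 2^n RAM?" question in concrete terms, the arithmetic is simple (the affordability judgment in the comment is my gloss, not a quote from the thread):

```python
# Storage needed for the full output table at n = 53.
n = 53
table_bytes = 2 ** n * 2        # 16-bit entries
petabytes = table_bytes / 1e15  # ~18 PB
# ~18 PB is large for a legitimate user, but supercomputer-scale
# rather than out of reach for a serious attacker.
```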

2020.02.04 15:52:02 (1224707187571818504) from "John Schanck (@susurrusus)":

Where are you getting "vastly more than 2^n time, but less space"? The standard parallel attack has cost exp(Õ(min(n, depth))). The circuits that near-term quantum devices can execute are of depth o(n).

2020.02.04 15:52:12 (1224707230550974464) from "John Schanck (@susurrusus)", replying to "John Schanck (@susurrusus)" (1224707187571818504):

There's an existing security claim relative to the standard parallel attack. So where's the conceptual error? Obviously there's room for debate about whether ms/round latency is low enough, but that was already in scope prior to your tweet.