Discussion
A Cryptography Engineer’s Perspective on Quantum Computing Timelines
pdhborges: What do you recommend as reading material for someone that was in college a while ago (before AEAD got popular) to get up to speed with the new PQ developments?
tux3: This is a good take, there's really not much to argue about.

> [...] the availability of HPKE hybrid recipients, which blocked on the CFRG, which took almost two years to select a stable label string for X-Wing (January 2024) with ML-KEM (August 2024), despite making precisely no changes to the designs. The IETF should have an internal post-mortem on this, but I doubt we’ll see one

My kingdom for a standards body that discusses and resolves process issues.
OhMeadhbh: The rebuttal: https://eprint.iacr.org/2025/1237
schmichael: That's not a rebuttal. The post references the paper and a rebuttal to it from an expert in the field.
vonneumannstan: This seems like something uniquely suited to the startup ecosystem, i.e. offering PQ Encryption Migration as a Service. PQ algorithms exist, and now there's a large lift required to get them into the tech, with substantial possible value.
hlieberman: … really? This is simultaneously so far down in the plumbing and so resistant to having its impact measured that I can’t imagine anyone building a company off of this who isn’t already deep in the weeds (lookin’ at you, WolfSSL).

The idea that a startup would be competitive in the VC “the only thing that matters are the feels” environment seems crazy to me.
OhMeadhbh: Yeah... I spent the 90s working for RSADSI and Certicom implementing algorithms. Crypto is a vitamin, not an aspirin. Hardly anyone is capable of properly assessing risk in general, much less the technical world of information risk management. Telling someone they should pay you money to reduce the impact of something that may or may not happen in the future is not a sales win.
alephnerd: Aligns with what I've been hearing in my network as well.
janalsncm: Building out a supercomputer capable of breaking cryptography is exactly the kind of thing I expect governments to be working on now. It is referenced in the article, but the analogy to the Manhattan Project is clear.

Prior to 1940 it was known that clumping enough fissile material together could produce an explosion. There were engineering questions around how to purify uranium, how to actually construct the weapon, etc. But the phenomenon was known.

This is an almost perfect analogy. We know exactly what could happen if we clumped enough qubits together. There are hard engineering challenges in actually doing so, and governments are pretty good at clumping dollars together when they want to.
adrian_b: It should be noted that if there is indeed not much time left until a usable quantum computer becomes available, the priority is the deployment of FIPS 203 (ML-KEM) for the establishment of the secret session keys used in protocols like TLS or SSH.

ML-KEM is intended to replace the traditional and elliptic-curve variants of the Diffie-Hellman algorithm for creating a shared secret value. When FIPS 203, i.e. ML-KEM, is not used, adversaries may record data transferred over the Internet and might become able to decrypt it after some years.

On the other hand, there is much less urgency to replace the certificates and digital signature methods used today, because in most cases it would not matter if someone became able to forge them in the future: they cannot go into the past and use that for authentication.

The only exception is where digital documents completely replace traditional paper documents with legal significance, like documents proving ownership of something, which would be digitally signed; forging those in the future could be useful to somebody, so a future-proof signing method would make sense there.

OpenSSH, OpenSSL and many other cryptographic libraries and applications already support FIPS 203 (ML-KEM), so it could be easily deployed, at least for private servers and clients, without also replacing the existing methods used for authentication, e.g. certificates, where using post-quantum signing methods would add a lot of overhead due to much bigger certificates.
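For instance, a quick way to check whether an OpenSSH build already offers the hybrid ML-KEM key exchange and to pin it for a connection (a minimal sketch, assuming OpenSSH 9.9 or later, which added the mlkem768x25519-sha256 method; user and host are placeholders):

    # List supported key exchange methods and look for the ML-KEM hybrid
    ssh -Q kex | grep -i mlkem

    # Pin the hybrid method for one connection; it can also be set via
    # KexAlgorithms in ssh_config or sshd_config
    ssh -o KexAlgorithms=mlkem768x25519-sha256 user@host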
FiloSottile: That was my position until last year, and pretty much the consensus in the industry. What changed is that the new timeline might be so tight that (accounting for specification, rollout, and rotation time) the time to switch authentication has also come.

ML-KEM deployment is tangentially touched on in the article because it's both uncontroversial and underway, but:

> This is not the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.

> For key exchange, the migration to ML-KEM is going well enough but: 1. Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years. [...]

Your comment is essentially the premise of the other article.
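On the client side, treating a non-PQ key exchange as a failure rather than a warning can be as simple as restricting the negotiable groups (a minimal sketch, assuming Go 1.24+, where crypto/tls exposes the X25519MLKEM768 hybrid; example.com is a placeholder):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Offer only the hybrid PQ group, so a peer that cannot do a
        // post-quantum key exchange fails the handshake instead of
        // silently negotiating a classical one.
        cfg := &tls.Config{
            CurvePreferences: []tls.CurveID{tls.X25519MLKEM768},
        }
        conn, err := tls.Dial("tcp", "example.com:443", cfg)
        if err != nil {
            log.Fatalf("handshake failed (peer may lack PQ support): %v", err)
        }
        defer conn.Close()
        fmt.Println("negotiated a hybrid ML-KEM key exchange")
    }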
adrian_b: I agree with you that one must prepare for the transition to post-quantum signatures, so that when it becomes necessary the transition can be done immediately.

However, that does not mean that the switch should really be done as soon as possible, because it would add unnecessary overhead.

This could be done by distributing a set of post-quantum certificates while continuing to allow the use of the existing certificates. When necessary, the classic certificates could be revoked immediately.
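Issuing a parallel post-quantum certificate can already be prototyped (a minimal sketch, assuming OpenSSL 3.5 or later, which added the ML-DSA algorithms from FIPS 204; file names and subject are placeholders):

    # Generate an ML-DSA-65 private key
    openssl genpkey -algorithm ML-DSA-65 -out pq-server.key

    # Self-sign a certificate with it, to be distributed alongside the
    # existing classical certificate rather than replacing it
    openssl req -x509 -new -key pq-server.key -out pq-server.crt \
        -subj "/CN=example.com" -days 365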
layer8: > The only exception is where digital documents completely replace traditional paper documents with legal significance, like documents proving ownership of something, which would be digitally signed; forging those in the future could be useful to somebody, so a future-proof signing method would make sense there.

This very much exists. In particular, the cryptographic timestamps that are supposed to protect against future tampering themselves currently use RSA or EC.
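One can see this directly by inspecting an RFC 3161 timestamp token (a minimal sketch, assuming an OpenSSL build with the ts app; file names are placeholders):

    # Create a timestamp query for a document...
    openssl ts -query -data deed.pdf -sha256 -out deed.tsq

    # ...and, after submitting it to a timestamping authority, print the
    # response; the token's signing certificate will typically show an
    # RSA or ECDSA key today
    openssl ts -reply -in deed.tsr -text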
palata: What is the consequence for e.g. Yubikeys (or, say, the Android Keystore)? Do I understand correctly that those count as "signature algorithms" and are a little less at risk than "full TEEs", because there is no "store now, decrypt later" for authentication?

E.g. can I use my Yubikey with FIDO2 for SSH together with PQ encryption, such that I am safe from "store now, decrypt later" but can still use my Yubikey (or Android Keystore, for that matter)?
amluto: Your Yubikey itself is doomed.

If you are doing a post-quantum key exchange and only authenticating with the Yubikey, then you are safe from after-the-fact attacks. Well, as long as the PQ key exchange holds up, and I am personally not as optimistic about that as I’d like to be.
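That combination, a FIDO2 authenticator for authentication plus a hybrid PQ key exchange for confidentiality, can be tried today (a minimal sketch, assuming OpenSSH 8.2+ for ed25519-sk keys and 9.9+ for the ML-KEM hybrid; user and host are placeholders):

    # Generate an SSH key backed by the FIDO2 authenticator
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

    # Authenticate with it while requiring the PQ hybrid key exchange,
    # so the session is protected against store-now-decrypt-later even
    # though the signature itself is classical
    ssh -o KexAlgorithms=mlkem768x25519-sha256 \
        -i ~/.ssh/id_ed25519_sk user@host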
FiloSottile: How do you do revocation or software updates securely if your current signature algorithm is compromised?
OsrsNeedsf2P: Why do we "need to ship"? 1,000-qubit quantum computers are still decades away at this point.
OhMeadhbh: So... In 2013 I was working for Mozilla, adding TLS 1.1 and 1.2 support to Firefox. It turns out that some of the extensions common in 1.1 in some instances caused PDUs to grow beyond 16k (or maybe it was 32k, can't remember). This caused middleboxes to barf. Sure, they shouldn't barf, but they did. We discovered the problem (or rather, one of our users discovered the problem) by increasing the key size on server and client certs to push PDU sizes over the limit.

At the very least, you want to start using hybrid legacy/PQC algorithms so engineers at Cisco will know not to limit key sizes in PDUs to 128 bytes.
ekr____: A few points here: There is already very wide use of PQ algorithms in the Web context [0], which is the most problematic one because clients need to be able to connect to any site and there's no real coordination between sites and clients. So we're exercising the middleboxes already.

The incident you're thinking of doesn't sound familiar. None of the extensions in 1.1 were really that big, though of course certs can get that big if you work hard enough. Are you perhaps thinking instead of the 256-511 byte ClientHello issue addressed in [1]?

[0] https://blog.cloudflare.com/pq-2025/
[1] https://datatracker.ietf.org/doc/html/rfc7685
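A simple way to exercise the path yourself is to force the hybrid group in a test handshake (a minimal sketch, assuming OpenSSL 3.5 or later, which knows the X25519MLKEM768 TLS group; example.com is a placeholder):

    # Offer only the hybrid PQ group; the resulting larger ClientHello
    # exercises any middleboxes between you and the server
    openssl s_client -connect example.com:443 -groups X25519MLKEM768 -brief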
OhMeadhbh: Damn. It's like I insulted Vault.Also, I went over Filippo's post again and still can't see where it references the Gutmann / Neuhaus paper. Are we talking about the same post?
xvector: From the abstract:

> This paper presents implementations that match and, where possible, exceed current quantum factorisation records using a VIC-20 8-bit home computer from 1981, an abacus, and a dog.

From the link:

> Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise [1]. As Scott Aaronson said [2]:

> > Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”

[1]: https://bas.westerbaan.name/notes/2026/04/02/factoring.html
[2]: https://scottaaronson.blog/?p=9665#comment-2029013
ekr____: As a practical matter, revocation on the Web is handled mostly by centrally distributed revocation lists (CRLsets, CRLite, etc. [0]), so all you really need is:

(1) A PQ-secure way of getting the CRLs to the browser vendors.
(2) A PQ-secure update channel.

Neither of these requires broad-scale deployment.

However, the more serious problem is that if you have a setting where most servers do not have PQ certificates, then disabling the non-PQ certificates means that lots of servers can't do secure connections at all. This obviously causes a lot of breakage and, depending on the actual vulnerability of the non-PQ algorithms, might not be good for security either, especially if people fall back to insecure HTTP.

See: https://educatedguesswork.org/posts/pq-emergency/ and https://www.chromium.org/Home/chromium-security/post-quantum...

[0] The situation is worse for Apple.
tkhattra: From Filippo's post: "Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums."