matthewdgreen
This is not my area, but I wanted to mention this before speculation got out of control: some quick feedback from colleagues is that the analysis in this paper seems to assume Schnorr's claims from 2021 [0] without detailed supporting evidence that these claims are true for large parameters like the ones needed to factor RSA-2048. But these claims are viewed skeptically, and this paper doesn't provide much evidence to change that.
[0] https://eprint.iacr.org/2021/933
Strilanc
I want to contrast this paper with Shor's factoring paper [1].
One of the things that stands out to me about Shor's paper is how meticulous he is. He is considering the various ways the algorithm might fail, and proving it doesn't fail in that way. For example, the algorithm starts by picking a random seed and you can show that some choices of seed simply don't work. He proves a lower bound on how many have to work. Also, given a working seed, sometimes the quantum sampling process can correctly return a useless result. He bounds how often that can occur as well. He never says "I think this problem is rare so it's probably fine", instead he says "this problem is at least this rare therefore it is fine". Essentially the only real problem not addressed by the paper was that it required arbitrarily good qubits... so he went and invented quantum error correction [2].
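To make that seed argument concrete, here is a rough sketch (my own illustration, not from either paper) of the classical post-processing in Shor's algorithm: a random base a only "works" if its order r modulo N is even and a^(r/2) is not congruent to -1 mod N, which is exactly the case Shor bounds.

```python
import math
import random

def factor_from_order(N, a, r):
    """Classical post-processing in Shor's algorithm: given the order r of a
    mod N (the quantity the quantum subroutine estimates), try to extract a
    nontrivial factor of N. Returns None for the 'useless' cases Shor bounds."""
    if r % 2 != 0:
        return None                          # odd order: this seed doesn't work
    half = pow(a, r // 2, N)
    if half == N - 1:
        return None                          # a^(r/2) == -1 (mod N): useless result
    for g in (math.gcd(half - 1, N), math.gcd(half + 1, N)):
        if 1 < g < N:
            return g
    return None

def order(a, N):
    """Brute-force stand-in for the quantum order-finding step (tiny N only)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N = 21
a = random.randrange(2, N)
if math.gcd(a, N) == 1:
    print(a, factor_from_order(N, a, order(a, N)))   # None means a "bad seed" or useless sample
else:
    print(a, "already shares a factor with N:", math.gcd(a, N))
```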
The paper being discussed here [3] does not strike me as meticulous. It strikes me as sloppy. They are getting good numbers by hoping potential problems are not problems. Instead of addressing the biggest potential showstoppers, they have throwaway sentences like "It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA".
How many shots are needed for each sample point fed into the classical optimization algorithm? How many steps does the optimization algorithm need? How do these scale as the problem size is increased? How big are they for the largest classically simulable size (RSA128 with 37 qubits according to their table)? These are absolutely critical questions!... and the paper doesn't satisfyingly address them.
Is there somewhere where I can bet money that this doesn't amount to anything?
1: https://arxiv.org/abs/quant-ph/9508027
2: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.R2...
HWR_14
The difference is Shor is attempting to prove something. This article is by a security researcher who cares about staying ahead of threats. That is, a 10% chance that RSA-2048 was broken means he's screaming about changing to -4096 or another standard. Because he is trying to make security systems reliable.
Or, to put it a different way, most papers focus on being right. To many publishers, "being right" means that what the paper claims is true. In other cases, "being right" means that the action you take in response is correct. Try reading it not as a paper on "is RSA-2048 cracked" but on "is RSA-2048 still safe".
rezonant
It sounds like you are talking about Schneier (the blog author and well known security researcher) and the parent post is talking about the paper Schneier is blogging about. And comparing it to another paper by a different author.
I don't think there was a comparison between Schneier and Shor, or I missed it.
HWR_14
I am talking about Schneier. I may have misread the parent post
Strilanc
I agree that it makes sense for cryptographers to be jumpy around papers claiming improvements in quantum factoring, even if those papers are low quality and likely to be wrong. But that doesn't mean you stop calling the papers low quality and likely to be wrong.
I guess I'd also be a lot more sympathetic if the paper had a paragraph in the abstract, or at least the intro and conclusion, where they explicitly positioned the paper as a wild idea that could work but probably won't but is still worth considering because of the risks.
jackmott42
I think you are right that this paper may not amount to anything, but it should also be a big wake-up call that maybe we need to switch or enhance our public-key crypto starting now rather than later, in case similar ideas can work, or in case quantum computers improve a bit faster than we thought, etc.
rubberband
Let me know if you find such a betting venue... I'll take the same side.
I'm aware of both Microsoft and Google publishing papers claiming astounding quantum feats, then later retracting them (I have to assume there were similar instances with lesser known companies). I think your skepticism is valid.
Strilanc
For the Microsoft one I'm assuming you're referring to the retraction of the detection of Majoranas [1].
Do you have a reference for a Google quantum paper being retracted? I don't recall an instance of that (disclaimer: I am on the Google quantum team; my views do not represent them).
pclmulqdq
The Google "supremacy" paper wasn't retracted per se, it was just shown to be wrong. They claimed to do a thing on a quantum computer (specifically, simulating a particular quantum circuit) that would take 40,000 years on a classical supercomputer, assuming a particular method for doing the computation.
The real number was 2.5 days, and the computing breakthrough involved in the huge speedup was... using SSDs to store your intermediate state instead of recomputing it every iteration.
dekhn
Not a retraction, just that many of the claims about supercomputer performance were easily shown to be less than accurate, making any claims about supremacy less exciting.
EGreg
Just deploy a smart contract on, say, Ethereum MainNet.
One side publishes an RSA public key, and whoever can produce a signature from the corresponding private key can take the money.
The other side likewise locks up some money, but that money can be withdrawn via a smart contract method after a certain date, provided the first account still has money in it. You can have multiple dates for removing some or all of the money across the multiple bets against it. For example, "$200 that it's cracked by 2024".
The main problem with bets and contests is that the side which knows the private key can simply withdraw the money itself. That’s why you need the private key to be generated by all parties involved in a ceremony.
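A toy model of that escrow rule, just to make the mechanics concrete (plain Python rather than an actual contract; all names here are made up for illustration):

```python
class RsaBet:
    """Toy escrow: a challenge pot claimable only with the RSA private key,
    and a counter-stake the defender can reclaim at the deadline if the
    challenge pot is still untouched (i.e. RSA presumably unbroken)."""

    def __init__(self, challenge_pot, counter_stake, deadline):
        self.challenge_pot = challenge_pot
        self.counter_stake = counter_stake
        self.deadline = deadline

    def claim_challenge(self, signature_is_valid):
        # Whoever produces a valid signature under the published public key
        # (i.e. knows or has cracked the private key) drains the challenge pot.
        if signature_is_valid and self.challenge_pot > 0:
            amount, self.challenge_pot = self.challenge_pot, 0
            return amount
        return 0

    def reclaim_counter_stake(self, now):
        # After the deadline, if the challenge pot is untouched, the defending
        # side gets its stake back.
        if now >= self.deadline and self.challenge_pot > 0:
            amount, self.counter_stake = self.counter_stake, 0
            return amount
        return 0
```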
ofcrpls
I want you to plan my next birthday party.
miga
It is fair to say that in a security assessment you estimate a lower bound on the complexity of an attack.
Shor proved an upper bound.
conjectureproof
Lenny Baum, Lloyd Welch, and their colleagues at IDA were using the EM algorithm for code cracking well before they were able to prove anything about its convergence.
EM worked in practice, so they spent a long time trying to prove convergence. Modern proofs are simpler.
Could be the case that this method also works in practice. I haven't the faintest idea whether it will.
gronky_
You divide the number of physical qubits by 5 to get the number of fault tolerant error corrected qubits. [0]
If their algorithm works, they need a 1860 (372*5) qubit computer to break 2048 bit RSA.
IBM expects to get there by 2025. [1]
[0] https://en.wikipedia.org/wiki/Five-qubit_error_correcting_co...
Strilanc
The amount of error correction you need is more than what the five qubit code provides. A more typical estimate is that you'd need 1000 physical qubits per logical qubit.
For example, using the surface code, a back of the envelope estimate would be that you need a code distance of d = ln(number_of_operations). Each logical qubit will use 2d^2 physical qubits. So for a million operations you'd need around 400 physical qubits per logical qubit and for a trillion operations you'd need around 1500 physical qubits per logical qubit. So, way more than 5.
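In code, that back-of-envelope arithmetic (and nothing more rigorous than that) looks like:

```python
import math

def physical_per_logical(num_operations):
    """Back-of-envelope surface-code overhead: code distance d ~ ln(ops),
    about 2*d^2 physical qubits per logical qubit."""
    d = math.ceil(math.log(num_operations))
    return 2 * d * d

for ops in (10**6, 10**12):
    print(f"{ops:.0e} ops -> ~{physical_per_logical(ops)} physical qubits per logical qubit")
```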
(A major practical obstacle to using almost-anything-that-isn't-the-surface-code is that the surface code has forgiving connectivity and maximum-allowed-physical-noise requirements.)
XorNot
The other issue when you start talking about very large numbers of qubits is that they're not independent. Building more qubits influences your noise environment substantially.
Expectations of a Moore's law type improvement rate are going to be left wanting.
gronky_
So if the paper is correct then you need about a half a million qubits? Is this roughly a 40x improvement on the 20M qubits in 8 hours or is there more to it?
Strilanc
If the paper is correct then yes, it would be a huge improvement in the required space even accounting for the overhead of error correction.
Shor's algorithm requires performing a modular exponentiation under superposition. For an n bit modulus this requires 2n or 3n qubits of storage, plus let's say 50% overhead for routing and gates. You end up needing 5n to 10n logical qubits for an n bit number. So to factor a 2048 bit number you'd need on the order of ten thousand logical qubits. Improving that to a few hundred logical qubits would be a big improvement. Also, there's fewer operations so the code distance can be lower.
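As rough arithmetic under those assumptions (a rule of thumb, not a real resource estimate):

```python
def shor_logical_qubits(n_bits):
    """The 5n-10n rule of thumb above: 2n-3n qubits of storage for the modular
    exponentiation under superposition, plus ~50% routing/gate overhead."""
    return 5 * n_bits, 10 * n_bits

print(shor_logical_qubits(2048))   # (10240, 20480): order of ten thousand logical qubits
```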
...but don't forget that "if the paper is correct" bit.
reikonomusha
A 5-qubit code corrects against single qubit errors. You could say a factor of 5 is a lower bound. In more technical terms, this scheme fixes all weight-1 errors.
More realistically [1], you'd have a factor of around 1,600 for a distance-27 code.
andrewla
If this roadmap is accurate for the present, the types of qubits here are NISQ qubits, so not useful for any sort of generalized computation no matter how many they have.
Which is to say that Osprey has 433 qubits, so it should be capable of 86 fault-tolerant, error-corrected qubits, so they should be able to factor (not bothering with the math) AT LEAST ONE NUMBER using Shor's algorithm, and yet they cannot.
krastanov
If the paper is to be believed you are wrong in two different ways:
1) If the paper is right, they are claiming they need 300ish physical qubits that can sustain about 1000 gates before decohering. No need for scalable error correction.
2) Independently of the veracity of the paper, if you actually need logical error corrected (and fault tolerant) qubits, you need error correcting codes with much more severe overhead than the 5-qubit code. The 5-qubit code is a pedagogical example, not something that would actually work under realistic conditions. And even the 5-qubit code needs quite a few extra ancillary qubits for fault tolerance (which is more expensive than simple error correction).
greggarious
What practical things can be done once RSA is broken?
bawolff
Breaking RSA is a practical thing.
E.g. forge email (most DKIM keys are 1024-bit RSA), break SSH (depending on the key algorithm chosen), break PGP (depending on settings), MITM HTTPS connections, etc.
greggarious
So if you were going to explain it to someone who's less technical, maybe saying "remove HTTPS" would be an oversimplified way to explain it?
(I don't think my contacts are going to know what SSH or PGP is, if that helps.)
nunez
isn't most stuff elliptic curve now?
quotemstr
Decrypt decades of pre-PFS archived encrypted internet traffic.
retrocryptid
If only someone archived it in a data center in the desert.
meltyness
A note to cast further suspicion on the immediate risk severity:
The researchers indicate in the abstract that they use a computer built with superconducting qubits. On that point, superconducting qubits present barriers such as:
- limited coherence time due to common atmospheric muon events and the resulting phonons
- limited topological connectivity, which further increases the needed coherence time.
reikonomusha
Limited coherence time is not explained as simply as "atmospheric muon events". There are a variety of reasons, some environmental, others a result of imperfect fabrication, etc. that contribute to decoherence, gate error, etc.
meltyness
I know that, but I recall specific literature describing the specific effect mentioned [0], and it seems intuitive that the surface area and orientation currently in use for these qubits may potentiate the effect. I think the other effects you list are barriers for other kinds of qubits as well.
[0] https://ai.googleblog.com/2022/01/resolving-high-energy-impa...
gjsman-1000
I'm surprised nobody has written a thriller yet where a new mathematical algorithm is found (maybe something to do with primes?), all encryption suddenly collapses, and with that we go into a pseudo-apocalypse where nobody is sure whether anything is authentic anymore. Banks can't move cash, ID systems are useless, we've still got electricity but no functioning internet, hardware roots of trust are shattered...
barathr
It's called Impagliazzo's Worlds Paper:
https://scholar.google.com/scholar?cluster=14678868687868063...
A classic paper that explores what happens if various scenarios come to pass. Would be worth exploring some of the updated versions and fictionalizing them.
omoikane
"Summer Wars"[1] features breaking keys as a plot element, with a passing mention of Shor's Algorithm. The rest of the movie is mostly unrelated to math. Good movie though.
aburan28
Sneakers is the thriller movie you’re looking for
riffraff
And it's a great movie!
pyuser583
Or The Net.
BillSaysThis
They have; usually this is called a skeleton key. NCIS has done at least two separate episodes with it as the MacGuffin.
claytongulick
All that would need to happen for this exact scenario to occur would be for anyone, anywhere to find a case where P == NP. [1]
P != NP is a conjecture that has never been proven, so P == NP very well could turn out to be true in reality.
This is one of those things that keeps me awake, like Carrington events [2].
rrjjww
As someone who is only vaguely familiar with the P = NP problem, can someone explain to me whether proving P = NP automatically solves the numerous problems that could then be "quickly computed", or does it simply prove the existence of an algorithm for each problem?
To rephrase if this is not the case - what value does solving P = NP provide?
jcranmer
There are roughly four possibilities for P = NP:
* P != NP. In practical terms, nothing changes.
* Nonconstructive case. The resulting algorithm looks something like some primality-test algorithms (which I'll describe): essentially, if a number n is composite, then there is some a for which (X + a)^n != X^n + a in (Z/nZ)[X] (X is a polynomial variable here). If you test "enough" a's, then you can prove whether n is prime or composite (a small sketch of this identity appears after this list). A nonconstructive case would mean we have a proof that you only need to test poly(lg n) a's to confirm the truth, without necessarily telling you which a's you have to test. In this world, there is no practical change to these problems--the proof doesn't yield an effective algorithm to actually solve any NP-complete problem.
* Combinatorial algorithm for an NP-complete problem. The good example here is what has been done to prove L = SL. The result is "technically" in L, but the constant factors in the algorithm very quickly run into "more than the number of atoms in the universe" territory. The goal is to find a memory-less algorithm (one that can't use a visit stack) that can prove there is a path between two points in an undirected graph, and it turns out that you can transform the graph into another one that guarantees you will visit every node within a certain amount of time. The resulting construction replaces every node with more nodes than there are atoms in the universe, so it technically meets the requirements but is completely and totally impractical. Sometimes people handwave this possibility away by saying that once an algorithm is found, people will find better results, but this result hasn't been improved on in close to two decades.
* "Simple" algorithm for an NP-complete problem. This is the result that really, really changes things. Once you get a simple algorithm for one NP-complete problem, you can generally find analogous ways to get simple algorithms for other NP-complete problems. However, the research done to date does suggest that this is perhaps the least likely outcome: looking at the "hard" instances of SAT, it does seem that there is a certain complexity that is at best reducible via some sort of combinatorial nightmare construction rather than the kinds of decompositions that yield "simple" algorithms.
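For anyone who wants to see that primality identity concretely, here is a small sketch (my own illustration, checking the coefficients of (X + a)^n - (X^n + a) mod n directly; this is the naive exponential-time coefficient check, not the actual polynomial-time test):

```python
from math import comb, gcd

def identity_holds(n, a):
    """True iff (X + a)^n == X^n + a in (Z/nZ)[X], checked coefficient by
    coefficient: the constant term a^n must equal a mod n, and every middle
    binomial term C(n, k) * a^(n-k) must vanish mod n."""
    if pow(a, n, n) != a % n:
        return False
    return all((comb(n, k) * pow(a, n - k, n)) % n == 0 for k in range(1, n))

# n is prime iff the identity holds for a coprime to n; composites fail.
for n in (7, 13, 15, 21):
    print(n, all(identity_holds(n, a) for a in range(1, n) if gcd(a, n) == 1))
```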
btdmaster
To prove P = NP it's only required to prove that a polynomial-time algorithm for some NP-complete problem exists, not to find it.
Even if that algorithm exists and was found, it could be that such an algorithm is O(n^123456789), which would not break RSA in any practical sense, though it would be mathematically asymptotically faster than O(2^n).
erik
If P == NP, then it could be that a proof would show an algorithm must exist without providing such an algorithm. But that approach wouldn't be necessary; actually exhibiting an algorithm would of course also be a proof.
> To rephrase if this is not the case - what value does solving P = NP provide?
P vs NP is a question of enormous practical interest. But it's also a very interesting question of pure mathematics. A proof that P != NP, or a proof of P == NP that didn't provide an algorithm would still be a huge deal in the math and computer science world.
bodhiandphysics
Think about cryptography for a second… In cryptography you need a problem that is fast to decode if you know the key, but really slow if you don't… like you would have to search all the possibilities one by one. Such problems are (basically…) called NP. P is the set of problems that are fast to solve on computers. If P = NP, then any problem you could use for cryptography could be decoded fast.
goodgoblin
It's been a little while, but I think it's because NP-complete problems can be reduced to one another, so if you can solve one of them in P you can solve all of them in P.
gjsman-1000
Some have speculated, though, that "polynomial time" doesn't necessarily mean a short period of time. If P == NP, but the algorithm takes hundreds of years to run, we may survive.
janalsncm
The Carrington event page was fascinating to read. Telegraph operators held an entire conversation without power! It is fairly unsettling though. I wonder if there are contingency plans that include thoroughly shielded or non-electronic communications equipment.
filleokus
Not exactly that but: https://suricrasia.online/unfiction/basilisk/
timerol
It took me a surprisingly long time to realize that I was reading fiction. The example SHA sums even check out; it's just that they only start with 40 bits set to 0 (expect about 10^12 attempts to generate each one at random, about a second on an Antminer) instead of 88 zero bits (multiple weeks of the full Bitcoin network to generate each one at random).
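A rough sketch of where those numbers come from (assuming a proof-of-work-style criterion of k leading zero bits in the SHA-256 output, so roughly 2^k attempts per hash on average):

```python
import hashlib

def leading_zero_bits(data: bytes) -> int:
    """Count leading zero bits in SHA-256(data)."""
    digest = hashlib.sha256(data).digest()
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(256)
    return len(bits) - len(bits.lstrip("0"))

print(leading_zero_bits(b"example"))   # typically a small number for a random input

# Expected attempts to hit k leading zero bits by brute force is about 2^k:
for k in (40, 88):
    print(f"{k} zero bits -> ~{2.0**k:.1e} hashes on average")
```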
flippy_flops
Can a quantum computer fit in an answering machine? :) https://www.youtube.com/watch?v=F5bAa6gFvLs
somat
One of the neater bits of world-building/subplots in the book "A Fire Upon the Deep" is that their ship is a freighter with a cargo full of one-time-pad keys. That is, the encryption you would revert to in a world where factorization is easy.
itcrowd
Huge, if true. However, many such claims have surfaced before and turned out to be duds.
For now, I am taking it with a spoonful of salt, but with interest in any follow-ups and peer review.
gfaster
An update on the article reveals that the approach relies on an algorithm that breaks down with larger N for an unknown reason.
It seems like we're safe for now.
esjeon
Even if it worked, QC itself is not massively deployable right now. Agencies will be sucking their thumbs while everyone migrates to something stronger or even to quantum-proof algorithms. No drama.
barathr
Yeah, I think this is a sign that while, as the post notes, it's not clear that this is a 100% airtight paper/algorithm, there are major advancements happening in this space.
K0balt
I think history will show that the current state of the art quantum cypher breaking is well ahead of what is being discussed here.
shockeychap
> Honestly, most of the paper is over my head—both the lattice-reduction math and the quantum physics.
Yeah, that alone is impressive. Schneier led a group that wrote Twofish, which was one of the AES finalists before losing to Rijndael.
tptacek
It's not in fact that impressive. The math for all sorts of cryptography goes over Schneier's head (as he sort of infamously implied with elliptic curve, over a decade ago). That's normal! Cryptographers specialize. Having a really careful, fine-grained, up-to-date intuition for differential cryptanalysis is crucial for designing hashes and ciphers, but less so for a key exchange.
Not writing this to dunk on Schneier so much as to relate that cryptography is specialized, and that generally there aren't a lot of people that you'd expect to be ultra up on PQ key exchanges and modern block cryptography.
thesz
Here's an implementation of Schnorr's algorithm with an attempt to estimate the amount of work needed to factorize big numbers: https://github.com/lducas/SchnorrGate
It also contains some links to critiques of the Schnorr algorithm paper. It looks like either many more p_n-smooth integer pairs are needed, or the p_n-smooth integers need to be much bigger than estimated in Schnorr's original paper. Or both estimates are off.
As the paper Schneier discusses relies on the assumptions of the (classical) Schnorr algorithm, its calculations may be off as well.
jokoon
so just increase to 4096 or 8192 bits or beyond?
TacticalCoder
Why is this downvoted? Ain't it exponentially harder to break RSA using qubits each time you double the key length?
Until we switch to quantum resistant algorithms, we can keep doubling the key length for some time no?
8192 bits should still be acceptable speed wise (if we consider that 2048 bits is broken, then I'll take slower operations over broken keys any day of the week).
krastanov
No, if you have a scalable quantum computer (which no one has yet), then doubling the key size just requires doubling the number of logical qubits. That is for asymmetric encryption that uses "hidden subgroup problems", like RSA. We have newer asymmetric algorithms that are resistant to quantum attacks, they are just not well tested yet.
jeandejean
Judging by the current difficulties of building a quantum computer with a bigger number of qubits, I'd assume scaling linearly isn't trivial at all. So increasing the key size should definitely increase security: if not exponentially, at least enough to buy a lot of time?
adgjlsfhk1
Because it isn't. This paper (allegedly) needs a sublinear number of qubits (that's the whole point).
3np
What isn't what?
TacticalCoder
> A group of Chinese researchers have just published a paper claiming that they can—although they have not yet done so—break 2048-bit RSA.
If a quantum computer can break 2048-bit RSA, what about elliptic curves?
TillE
Elliptic curves are also vulnerable to Shor's algorithm. However, there are several newer post-quantum algorithms for asymmetric cryptography, such as NewHope (which Google experimented with in Chrome):
https://en.wikipedia.org/wiki/Post-quantum_cryptography
https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
vlovich123
I had the same question as the OP, and you seem to have misinterpreted it. Specifically, this article is about showing a meaningful reduction in the quantum bits required for Shor's algorithm by pairing it with lattice reduction (whatever that means). Does this approach of combining a classical lattice-reduction technique with quantum Shor's hold for elliptic curves? Are there any other problems besides cracking crypto algorithms that this technique could be applied to that might actually be useful, versus just being a more complicated attack vector?
molticrystal
Who was the guy and what was the flawed and retracted paper mentioned in the below quote from Schneier's article?
>In email, Roger Grimes told me: “Apparently what happened is another guy who had previously announced he was able to break traditional asymmetric encryption using classical computers…but reviewers found a flaw in his algorithm and that guy had to retract his paper.
psyfi
Schnorr's "This destroyes the RSA cryptosystem"?
pelasaco
I remember that some small countries running DPI on their uplinks were just storing all of their encrypted traffic so that, when this day comes, they could decrypt the traffic, see who was conspiring against the government back in the day, and then start the witch hunt. Maybe this day is coming. Maybe it's already here to some degree.
simne
In Russia, a few decades of lobbying produced an initiative to store all of the country's traffic for some amount of time for law enforcement agencies. As far as I know, this has since been reborn as storing just connection-initiation data (TCP SYNs as seen in tcpdump, GSM call originator/receiver numbers, datetime stamps and durations).
In Ukraine, I heard from the inside that around 2000 one law enforcement agency asked a big provider to gather all emails. The co-owner said, "OK, you will get it." Then he spent a few weeks gathering old HDDs from everywhere and sent the agency a small truck with a few hundred drives filled with data.
After that, for at least 10 years, nothing similar was asked.
pelasaco
There were/are countries in Africa with small uplinks where it is pretty easy to store everything as flows and pcaps, then extract encrypted attachments offline and run different tools and techniques to identify adversaries (who in that region are mostly seen as deadly enemies).
simne
Yes, and Africa is not alone.
A colleague said he considered building internet service in Uzbekistan and even visited the country. What he saw is that all the surrounding countries are very unfriendly or extremely conservative (Afghanistan), so he could not just buy an external internet channel.
The internet there is extremely slow, most of the time nearly impossible to use; only email works (but can be delayed by a few hours).
To send a tweet or a WhatsApp message, you start the app, push "send", and leave the phone/tablet alone, turned on; a few hours later you will see "message delivered".