
Terabyte priv/pub keys, "we are the leading pq lattice based implementation", broken implementations, attack scripts. I have the feeling a lot of submissions are the result of pressure on academia to publish...


While you're likely right, that particular entry was more a comedy submission by djb himself; no one is taking pqRSA seriously. The merging of the signature and encryption versions was itself hilarious, and a subtle metajoke about how big the competition is. There's plenty on the line here and governments know it; there's a bit of a lottery feeling associated with it all.

I do like their honesty about people going after easy targets. The real goal here is breaking some of the big contenders and hopefully more people have a crack.

SIKE and NTRU need some serious attention/money/cred associated with breaking them


The danger is that people, like me, will have literally no idea that this is a "comedy submission". I mean, I know that djb is well-respected in his field and so as an interested layperson I would tend to assume that anything that looks serious is serious. In fact, I did just that when I tried to read the pqRSA paper...


I don't know how much danger there ever was of taking an e-mail like this one seriously: https://groups.google.com/a/list.nist.gov/forum/#!topic/pqc-...

At the time, it read like pure satire to me.


Why is that a danger?


They might decide to use it, or push for its use in some product. Obviously it's an extreme example, but stuff like that does happen (not that I think cryptographers shouldn't have fun).


You don't think that anyone who would non-ironically implement this paper would do just as much damage with every reasonable cryptography paper?


In this case the pressure mostly comes from the submission deadline for the NIST competition, I think.


And otherwise many of these would be unactionable research fancies with insufficient review.

Are we going to have standard PQ algos in time for government, infrastructure, and industry to upgrade before disparate or widespread availability of quantum computation capabilities?

One estimate indicated that Bitcoin and ECDSA would be broken with Shor's by - optimistically - 2027.

How many n-year tech acquisition refresh cycles are there between then and now?

Will Grover's algorithm find a more efficient approach?


> One estimate indicated that Bitcoin and ECDSA would be broken with Shor's by - optimistically - 2027.

I think 2027 is beyond optimistic. We still haven't solved quantum storage (in fact, there's a strong argument that it might not be physically possible, since you'd likely need 4-dimensional media to store passively error-correcting qubits) or any number of other fairly fundamental issues before we can actually run CrackPrivateKey().

> Will Grover's algorithm find a more efficient approach?

Grover's algorithm is the optimal speedup for problems of its form ("brute force" searches), and it gives only a square-root speedup. So doubling key lengths restores security against Grover's algorithm.
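The key-doubling arithmetic can be sketched directly (a toy illustration; `grover_queries` is a made-up helper name, not from any library):

```python
import math

def grover_queries(key_bits: int) -> float:
    """Grover's algorithm needs on the order of sqrt(2^n) oracle queries
    to search an n-bit keyspace, versus ~2^n tries classically."""
    return math.sqrt(2 ** key_bits)

# A 128-bit key: ~2^128 classical tries, but only ~2^64 Grover queries.
assert grover_queries(128) == 2 ** 64

# Doubling the key to 256 bits pushes Grover back up to ~2^128 queries,
# i.e. the original classical security level is restored.
assert grover_queries(256) == 2 ** 128
```

This is why symmetric primitives are considered "post-quantum safe" with bigger parameters, while Shor-vulnerable ones are not.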

Shor's algorithm gives an exponential speedup, so it will always be the bigger threat for the problems it applies to.


Thank you for your perspective.

I should've been more specific: is it at all likely that Grover's will enable faster/feasible search of "quantum algorithm space" (?) or crypto_algo_classical_implementations-space? Or are we limited by number of qubits and QEC for the foreseeable future?

... Quantum Algorithm Zoo: http://math.nist.gov/quantum/zoo/


Attended this talk. I especially liked the fact that they try to make it process-independent.


First timer here also! Would like to meet up! Check my profile to get in touch.


Hey, on the off chance I didn't get the right one on Mastodon, here's a small mailing list for the 35CCC first-timers and friends I've set up. Join if you like!

https://evolvis.org/mailman/listinfo/ccc35-virgins-discuss


Messaged you on Mastodon, I think? Check if I got the right person :)


"Moreover, despite our manipulations, we do not intend to advocate consuming large quantities of sugar as an ideal strategy for improving self-control"


I recently got an analog paper agenda and deleted all my google calendar items. It's a weird but nice kind of relief...


Can anyone give me a simplified explanation of this paper and the consequences of it?


So, to start with, it's worth noting that this is a preprint. It hasn't been peer-reviewed yet, so it shouldn't be considered a final result.

The basic idea is that they've found an efficient quantum algorithm for solving an NP-hard problem (MAX-E3-SAT). This is a variant of the boolean satisfiability problem: given a boolean expression (in this case, in a certain normal form) with N variables, is there some way to set the variables such that the expression is true (or, in the MAX variant, such that as many clauses as possible are satisfied)?

As those familiar with complexity theory will know, if it's possible to solve one NP-hard problem efficiently, it's also possible to solve every NP problem efficiently via an efficient reduction. Therefore, we could use a quantum computer to solve arbitrary NP problems (well, we could if we had a large enough one).

"NP" is computer science shorthand for problems whose answers can be checked efficiently, but are not known to be discoverable efficiently (whether they can be is the famous P vs. NP question).
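The check-vs.-discover asymmetry can be made concrete with a toy classical sketch (illustrative names, not from any paper): verifying a candidate assignment for a small 3-SAT formula is linear work, while the naive search may try all 2^n assignments.

```python
from itertools import product

# A formula in conjunctive normal form: each clause is a tuple of literals,
# where literal i means "variable i is true" and -i means "variable i is false".
formula = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]

def check(assignment, clauses):
    """Verifying a proposed assignment is fast: linear in the formula size."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def search(clauses, n_vars):
    """Finding an assignment naively takes up to 2^n tries."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if check(assignment, clauses):
            return assignment
    return None

solution = search(formula, 3)
assert solution is not None and check(solution, formula)
```

The claimed result would collapse the search cost down toward the checking cost, which is exactly what makes it so surprising.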

Prior to this, only a few NP problems were efficiently solvable by quantum computers (factoring and things that reduce to factoring, due to Shor's algorithm). With this discovery, any NP problem shares this characteristic. Technically NP only applies to decision problems, but it's generally not that difficult to extend it to computations that produce additional information.

One consequence of this is that it would completely ruin public-key cryptography in the face of a quantum computer: public-key cryptography relies on hard instances of NP problems, and no such instances would be hard on a quantum computer anymore. This is relatively minor, as we currently use factorization (RSA) or elliptic curves for public-key encryption, and those were already breakable with Shor's algorithm.

Quantum computers should also be able to reverse hashes and find collisions easily: the NP algorithm for this is trivial. You may not be able to reverse them to the "correct" input, though. If someone were silly enough to use a perfectly good quantum computer to mine Bitcoin, it would be insanely efficient. This is relatively minor as well -- more important than the public key cryptography, but it's not going to cause all computer security to collapse.
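For comparison, here's what the classical version of hash reversal looks like (a toy sketch restricted to a tiny input space so it actually terminates): checking a candidate is one hash call, while finding a preimage means grinding through candidates.

```python
import hashlib

def find_preimage(target_hex: str, max_n: int = 1 << 20):
    """Classical brute force: hash candidate inputs until one matches.
    The search cost grows exponentially with the output size; checking
    any single candidate is a single hash call."""
    for n in range(max_n):
        candidate = str(n).encode()
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate
    return None

target = hashlib.sha256(b"12345").hexdigest()
assert find_preimage(target) == b"12345"
```

An efficient quantum NP solver would turn this kind of search into something tractable, which is why mining (a deliberately hard preimage-style search) would become trivial.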

Many useful real-world problems are in the NP space: traveling salesman, protein folding, things of that nature. A quantum computer could be used to solve these problems. This would be a really big deal, and overall a great boon to humanity.


Factoring is not NP-hard (fixed!); it's simpler than that (but not, to date, known to be in P).

It is in NP though (edited as per child comment)


It is NP (well, it would be if it were a decision problem). It is not (known to be) NP-complete (or, equivalently since it is NP, NP-hard).


Ah true, my mistake, it is not NP-hard but it is in NP

Wikipedia says it's UP https://en.wikipedia.org/wiki/UP_%28complexity%29 (which is contained in NP)


See, the weird way I always understood NP-hardness was through its seeming impossibility. Suppose all NP-hard problems are solvable. Assume this knowledge is distributed, a culture surrounds it, and eventually that knowledge is structured into the culture intrinsically (like intuitive knowledge, like knowing how to move your arm; the knowledge becomes, in effect, deterministic).

Now, if that occurs, then new layers of conceptual reasoning open up (however they may manifest: the problems of human creativity, of opening domains of knowledge, of forming new ideas or building things; anything humans will be able to try to do once all NP-hard problems can be both efficiently discovered and checked).

Assuming we can do new stuff now that we couldn't before, this opens the table to new variation, new problems, and new plausible solutions (but on a meta layer, where the problems of today are intrinsically quickly solvable).

To me, that is what NP-hard has always been. Even when you think you have solved it, you haven't, precisely because of the ways we construct and define problems. We either select that which can be known or that which cannot, and we translate this into a language. What cannot be known (that which has significant variance across the human domain, such as mind reading) cannot be predicted, even in a fairly formal logical or mathematical language; and prediction becomes more difficult the more compressed the language is with cultural information.

So even though we might say we've solved NP-hard problems, that allows us to define new problems which are going to be more difficult to solve (unless the universe just stops forever or approaches an information uniformity). All that new variation is the new NP-hard.

I know it's not a super mathy solution, but there is a problem with being able to look at a letter and knowing it means 'A' which may have very little meaning in one context, but in another it might represent a culturally standardized 'memory'. It retains that which is known. The more information you have to compress into symbols, the complexity of your systems explodes as those symbols can construct more and more permutations and meanings (like what a graph represents). You can't know the answer unless you know the answer, literally.

I still study all the math and formal logic and complexity theory in my free time though (because I do get the dimensional perspective; finding the shortest path is a mathematical problem at heart), but the above is my intuitive understanding of NP-hard: that which is infinite and capable of redefining itself in ways the human mind, independent of the collective, is incapable of imagining.


> Assuming we can do new stuff now that we couldn't before, this opens the table to new variation, new problems, and new plausible solutions.

Right, but that's circular reasoning. Closed systems exist: there are things that can be understood so fully that you can't think of any new questions to ask about them that you don't already understand. What if we really did solve everything?

> So even though we might say we've solved NP hard problems, that allows us to define new problems of which are going to be more difficult to solve (unless the universe just stops forever or approaches an information uniformity). All that new variation is the new NP hard.

NP is a specific, well-defined category of things. It isn't remotely the class of the hardest problems we can think of. If NP=P that doesn't mean we can solve every problem easily. But it does mean we can solve a lot of important problems easily.


> Closed systems exist, there are things that can be understood so fully that you can't think of any new questions to ask about them that you don't already understand.

The existence of closed systems does not imply all systems are closed. Also, you can think you understand things so well that you have to revisit things you think you know in order to find new questions.

> It isn't remotely the class of the hardest problems we can think of. If NP=P that doesn't mean we can solve every problem easily. But it does mean we can solve a lot of important problems easily.

That's fine for you, but there are obviously some communication and definition-of-terms problems going on, and I don't think that's entirely surprising considering how lost everyone tends to get in the language of it.

P=NP means you can get computers to write their own math proofs. Tell that to a mathematician who would rather compare his process and functionality to Picasso or Rembrandt painting, or Mozart composing, than to the self-serve pay station at your local grocer's. Maybe we can start birthing mechanical babies; I don't really know.


> The existence of closed systems does not imply all systems are closed

I never said they were. Just that we don't know one way or the other.

> P=NP means you can get computers to write their own math proofs.

Yes and no. It means the computer can fill in the technical details of a proposition you've already formalized. But mathematicians only ever sketched those parts in papers anyway.

> Maybe we can start birthing mechanical babies, I don't really know.

You're not making any sense. Try starting with more concrete things before waxing philosophical.


I've read from mathematicians who consider their computer to be a tool, and from others who see the computer as a partner. As a hobbyist who studies automated theorem provers (like Coq), there is so much elegance that goes into the computational proofs needed even to construct a program that allows a mathematician to do their work.

Can you honestly say that it doesn't take an extreme amount of effort to construct the perfect typing system that allows you even to begin to write a proof with a machine? People have to prove that theorem provers are correct with respect to what they define. This is no trivial task; it's an entirely different domain of knowledge.


Can you try making a concrete example for your reasoning?


I don't know if I can. What do you mean by concrete?

For me, as soon as I use language, it is symbolic. So even if I were to describe the shape of a rock to you, it is still not a concrete example, unless I give you the rock.

Maybe that sounds absurd, but I am being serious. Can you clarify?


Polynomial complexity doesn't necessarily mean efficient if the polynomial is of high degree.
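To put rough numbers on that (a back-of-the-envelope sketch, assuming a machine doing 10^9 operations per second):

```python
# An O(n^20) algorithm is "polynomial time", yet at n = 1000 it needs
# 1000^20 = 10^60 operations. At 10^9 operations/second that is
# around 3 * 10^43 years, which is hopeless in practice.
n = 1000
steps = n ** 20
seconds = steps / 1e9
years = seconds / (60 * 60 * 24 * 365)

assert steps == 10 ** 60
assert years > 1e40  # vastly longer than the age of the universe
```

In practice, though, natural polynomial-time algorithms tend to have small exponents, which is part of why P is still treated as the "efficient" class.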


That is true and worth remembering if you're a theoretical computer scientist, but we're trying to simplify here.


In essence, if this algorithm is correct, then for quantum computers something like P = NP holds: for every problem whose answer can be checked in polynomial time, you can also compute the answer in polynomial time.


Polynomial time on a quantum computer is called BQP (bounded-error quantum polynomial time), the same way that polynomial time on a non-deterministic computer is called NP.


Isn't it actually only showing that BQP contains NP, not P = NP?


The short version is: if this is correct (which is exceedingly unlikely), then either quantum mechanics is wrong, or whether P=NP has just become irrelevant. This is because, as Scott Aaronson tirelessly points out, an inability to build a quantum computer would also show quantum mechanics to somehow be in error. If instead we can build a quantum computer and this result is true, then the universe has suddenly become a vastly more interesting place (for one example, building this would also solve AI).

The result also has consequences for quantum computation specifically, in that it also takes care of the negative sign problem, which arises when simulating certain kinds of important quantum many body systems.

And last but certainly not least, all those press releases breathlessly proclaiming Quantum computers work by trying a gazillion possibilities at once (with no worries about the plausibility of actually reading out the correct answer with a non-negligible probability) weren't so far off after all.

All in all, either there is a mistake hidden somewhere in the preprint, or this is the greatest scientific achievement since evolution invented human-level intelligence.

See here for intuition on why NP-complete problems, if solvable without brute force at all, should also be efficiently solvable: http://windowsontheory.org/2013/05/06/reasons-to-care-in-hon...


This breaks a lot of cryptography. More than Shor's.


Just to add to the GP, this would break any cryptography algorithm that one can run on a classical computer.

Currently, quantum computers are believed to break only our most used algorithms, not all of them.


Maybe add some fields to manipulate the other parameters :-)


Regretting things in your life is awful. I learned a LOT from missing deadlines, messing up entire branches of code, and working on projects I didn't like at all! In the moment I may have regretted it, but looking back, it was always a valuable life lesson.


It's very true; you often learn the most when things break, or when you have to work with old, bad code that makes you realize why certain design patterns exist.


All I can say is, bad karma.


This is lame. The title should also mention that you need to buy the app.

