
1. The presented construction is pretty simple and falls in the category of computing the output for all possible inputs in parallel and then extracting the correct or best one.

2. This is exactly the type of construction that is often(?) believed not to work, i.e. even on a quantum computer you have to exploit the structure of the problem and cannot just throw quantum parallelism at it.

3. They already published a paper in May [1] in which they applied the same techniques to graph partitioning problems.

4. Which option is true? The algorithm is wrong, the belief that quantum computers are not of much help for NP-hard problems was wrong, or there is a way to translate this algorithm to classical computers? The last one would be awesome, but the first two seem much more likely.

5. There is one thing in the paper that struck me as a bit odd, namely that they just add a couple of dummy qubits to boost the probability of one measurement outcome. It is certainly possible that something like that works, but it runs counter to my intuition. It reminds me of zero-padding FFT inputs to get a result with higher frequency resolution, which of course is not real resolution.
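The zero-padding analogy is easy to check numerically: padding an FFT input with zeros yields more frequency bins, but they are pure interpolation of the same underlying spectrum. A minimal sketch with NumPy (signal and pad length are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)    # some 16-sample signal

X = np.fft.fft(x)              # 16-point spectrum
X_pad = np.fft.fft(x, 4 * 16)  # zero-pad to 64 points

# The padded spectrum just samples the same DTFT more finely: every
# 4th bin reproduces the original 16 bins exactly. More bins, but no
# new information and no genuine resolution.
print(np.allclose(X_pad[::4], X))   # True
```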

[1] http://arxiv.org/abs/1505.06284



Can you (or someone else) explain why that simple construction where all outputs are computed in parallel is not believed to work? Is it that quantum computers have bounded parallelism in some sense?


The "tricky bit" is not in branching out, it's in collapsing back down to determinism again.

When you do your measurement at the end of the process, if you have a superposition you'll simply sample one of the "parallel computations" at random -- you can't "pick the best one". Put simply, it's like a nondeterministic Turing machine that takes a single random branch instead of "all paths at once."
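The point above can be made concrete with a toy simulation, assuming eight inputs and (hypothetically) input 3 as the sole "best" branch: measuring a plain uniform superposition just samples a random branch, no better than classical guessing.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 8
# "Compute f(x) for all x in superposition": a uniform superposition
# over all 8 inputs, each branch holding one candidate answer.
amplitudes = np.full(n_inputs, 1 / np.sqrt(n_inputs))

# Measurement collapses to ONE branch, with probability |amplitude|^2.
probs = np.abs(amplitudes) ** 2
samples = rng.choice(n_inputs, size=10_000, p=probs)

# If only input 3 were the "best" one, we'd see it only ~1/8 of the
# time -- exactly the odds of classical random guessing.
print(np.mean(samples == 3))   # ~0.125
```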

Where quantum computers are meant to get their extra power is in what's called "destructive interference". The idea is that the computation branches don't carry classical probabilities but complex "amplitude" numbers whose squared magnitudes give probabilities. When your computation is laid out in the right way, the amplitudes of the "bad" paths cancel out, so when you take a measurement at the end all you're left choosing between are "good" paths.
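The smallest example of this cancellation is applying a Hadamard gate twice to a single qubit; a minimal NumPy sketch:

```python
import numpy as np

# Single-qubit Hadamard gate: state amplitudes are complex numbers
# whose squared magnitudes give measurement probabilities.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])      # |0>

# One Hadamard: equal superposition, each outcome has probability 1/2.
superpos = H @ ket0
print(np.abs(superpos) ** 2)     # [0.5 0.5]

# A second Hadamard: the two paths leading to |1> carry amplitudes
# +1/2 and -1/2 and cancel (destructive interference), while the
# paths to |0> add constructively. Measurement yields |0> for sure.
back = H @ superpos
print(np.abs(back) ** 2)         # [1. 0.]
```

With classical probabilities the second step could never undo the randomness of the first; it works only because amplitudes can be negative and cancel.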


It is possible to compute all outputs in parallel with a quantum computer, but what you'll end up with is an unintelligibly entangled state. What you really want is a single output, and there's no generic way of disentangling the entangled state in a useful way.

The reason why numbers can be factored faster using quantum computers is precisely that factoring has an algebraic structure (the periodicity of modular exponentiation) which can be used to achieve this disentangling.
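That structure is visible even in a purely classical sketch of Shor's reduction: once you know the multiplicative order r of some a mod N, the identity a^r - 1 = (a^{r/2} - 1)(a^{r/2} + 1) ≡ 0 (mod N) hands you factors via gcd. The function name below is hypothetical, and the brute-force order search stands in for the step the quantum computer actually accelerates.

```python
from math import gcd

def factor_via_order(N, a):
    """Classical skeleton of Shor's reduction: the quantum part only
    speeds up finding the order r; the rest is algebra."""
    # Find the order r of a modulo N by brute force (the expensive
    # step that quantum period finding replaces).
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:                  # need an even order
        return None
    y = pow(a, r // 2, N)      # a nontrivial square root of 1 mod N...
    if y == N - 1:             # ...unless it's the trivial one
        return None
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor_via_order(15, 7))   # (3, 5): the order of 7 mod 15 is 4
```

For problems without such exploitable structure, no comparable disentangling trick is known.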



