Student's physics homework picked up by Amazon quantum researchers (sydney.edu.au)
99 points by jakecopp on April 13, 2021 | 9 comments




Can anyone eli5 the breakthrough that this (seemingly bright!) student made? I read the paper earlier but a lot of it went over my head!


Errors experienced by quantum computers can be decomposed into two types: X-type errors (bit flips) and Z-type errors (phase flips). Classical computers only have bit flips.
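A minimal numpy sketch of the two error types (my illustration, not from the paper): an X error swaps a qubit's amplitudes, while a Z error flips the sign of the |1> amplitude.

```python
import numpy as np

# Pauli matrices: X is a bit flip, Z is a phase flip.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# A generic qubit state a|0> + b|1>, here with a=1/sqrt(5), b=2/sqrt(5).
state = np.array([1, 2]) / np.sqrt(5)

print(X @ state)  # amplitudes swapped (bit flip)
print(Z @ state)  # sign of the |1> amplitude flipped (phase flip)
```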

The surface code is a quantum error-correcting code built out of a checkerboard of interlocking parity checks, where the qubits live at the intersections, the black squares check one type of parity (Z type) for their four neighboring qubits, and the white squares check the other kind of parity (X type). You need a 2d checkerboard instead of a 1d line because there are constraints on how the parity checks can overlap: adjacent parity checks that disagree about the type of check need to touch at an even number of places, not an odd number.
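The even-overlap constraint is just the requirement that the checks commute as operators. A small numpy sketch (my own example layout, not from the paper): an X-type check and a Z-type check anticommute at each shared qubit, so an even overlap makes them commute and an odd overlap makes them anticommute.

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(paulis):
    # Tensor product of single-qubit operators over a register.
    return reduce(np.kron, paulis)

# On a 6-qubit line: an X check on qubits 0..3 and a Z check on qubits 2..5.
# They overlap on 2 qubits (even), so they commute.
x_check = op([X, X, X, X, I, I])
z_check = op([I, I, Z, Z, Z, Z])
print(np.allclose(x_check @ z_check, z_check @ x_check))  # True

# Shift the Z check so the overlap is 1 qubit (odd): now they anticommute.
z_bad = op([I, I, I, Z, Z, Z])
print(np.allclose(x_check @ z_bad, -z_bad @ x_check))  # True
```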

The convention that one square is X and the other is Z is arbitrary. You can swap the X and Z roles of a qubit as long as you do it consistently. So instead of this local situation, where the 2x2 blocks indicate a nearby parity check:

    xx ZZ
    xx ZZ
      q
    ZZ xx
    ZZ xx
You can just as well do this:

    xx ZZ
    xZ xZ
      q
    Zx Zx
    ZZ xx
If you swap the X and Z role of qubits in a checkerboardy sort of way, you end up with every parity check looking like

    xZ
    Zx
Which is sort of neat. Two different things became one thing. The paper shows that this arrangement has some other benefits. In particular, it does surprisingly well if one type of error dominates. There are proposals and concepts for hardware where one type of error dominates.
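To see that the uniform xZ/Zx checks still satisfy the overlap constraint, here is a numpy sketch on an assumed 2x3 patch of qubits (my own layout for illustration): two adjacent plaquettes share an edge of 2 qubits, and on that edge one check acts (Z, X) where the other acts (X, Z), so they anticommute at both shared sites and commute overall.

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(paulis):
    # Tensor product of single-qubit operators over a register.
    return reduce(np.kron, paulis)

# Qubits laid out as   0 1 2
#                      3 4 5
# Left xZ/Zx plaquette on {0,1,3,4}, right one on {1,2,4,5}; shared qubits 1, 4.
left  = op([X, Z, I, Z, X, I])
right = op([I, X, Z, I, Z, X])
print(np.allclose(left @ right, right @ left))  # True: the checks commute
```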

I will note that when you attempt to translate these parity check specifications into a quantum circuit, they have a tendency to end up compiling into the same thing regardless of whether you swapped the X and Z roles of a qubit in the parity check spec. So in a certain sense nothing has actually changed, and the improvement is an artifact of considering noise models that aren't aware of the step where you compile into a circuit.

In order for the idea to really land, hardware with a dominant error has to implement enough types of interactions between qubits that you don't need to use the Hadamard operation, which swaps the X and Z axes of a qubit. If you ever use that operation, your dominant Z-type errors have a simple route to be transformed into X-type errors, which removes all the benefit. The hardware needs to enable you to change how you compile the circuit. AFAIK, no one has yet demonstrated any two-qubit interaction while maintaining a dominant type of error.
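The axis-swapping behavior of the Hadamard is easy to verify directly (a sketch of the standard identity, not anything specific to the paper): conjugating by H turns Z into X and vice versa, which is exactly why a Hadamard in the compiled circuit lets a dominant Z error escape as an X error.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard; H is its own inverse

# Conjugation by H exchanges the X and Z axes of the qubit.
print(np.allclose(H @ Z @ H, X))  # True
print(np.allclose(H @ X @ H, Z))  # True
```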


“It’s a bit like playing battleships with a quantum opponent. Theoretically, they could place their pieces anywhere on the board. But after playing millions of games, we know that certain moves are more likely.”


Headlines like this always surprise me. We love the lone genius myth, despite it being, as the name suggests, a total myth.


No matter* what, quantum computing physicists are continuing to be haunted by Heisenberg.

*see what I did there?


Where’s the code? How was it changed that made it faster? What makes it elegant? What’s the point of this piece?


From the article: https://www.nature.com/articles/s41467-021-22274-1#Sec2

The point of the article is, in my estimation, the university's press office celebrating the achievement of its student, its faculty, and, by extension, the university.


The paper is linked in the article.





