
Can anyone eli5 the breakthrough that this (seemingly bright!) student made? I read the paper earlier but a lot of it went over my head!



Errors experienced by quantum computers can be decomposed into two types: X type (bit flips) and Z type (phase flips). Classical computers only have bit flips.
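
For concreteness, here's a tiny numpy sketch (my own illustration, not from the paper) of what those two error types do to a single-qubit state a|0> + b|1>:

    import numpy as np

    # Single-qubit Pauli errors: X swaps the |0> and |1> amplitudes (bit flip),
    # Z flips the sign of the |1> amplitude (phase flip).
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    a, b = 0.6, 0.8
    psi = np.array([a, b], dtype=complex)  # a|0> + b|1>

    print(X @ psi)  # [0.8, 0.6]   amplitudes swapped -> bit flip
    print(Z @ psi)  # [0.6, -0.8]  relative sign flipped -> phase flip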

The surface code is a quantum error correcting code built out of a checkerboard of interlocking parity checks, where the qubits live at the intersections, the black squares check one type of parity (Z type) for their four neighboring qubits, and the white squares check the other kind of parity (X type). You need a 2d checkerboard instead of a 1d line because there are constraints on how the parity checks can overlap. Adjacent parity checks that disagree about the type of check need to touch at an even number of places, not an odd number.
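
Here's a toy layout sketch (my own, with made-up names and an arbitrary 4x4 patch) that puts data qubits at the plaquette corners, colors the plaquettes checkerboard-style, and verifies that even-overlap rule:

    # Plaquette (i, j) checks the parity of the four data qubits at its corners.
    # Its checkerboard colour decides whether it's a Z-type or X-type check.
    SIZE = 4

    def plaquette_type(i, j):
        return 'Z' if (i + j) % 2 == 0 else 'X'

    def plaquette_qubits(i, j):
        return {(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)}

    # The overlap rule from above: neighbouring checks of *different* type
    # must share an even number of qubits (edge-neighbours share exactly 2).
    for i in range(SIZE):
        for j in range(SIZE):
            for di, dj in [(1, 0), (0, 1), (1, 1), (1, -1)]:
                ni, nj = i + di, j + dj
                if not (0 <= ni < SIZE and 0 <= nj < SIZE):
                    continue
                shared = plaquette_qubits(i, j) & plaquette_qubits(ni, nj)
                if plaquette_type(i, j) != plaquette_type(ni, nj):
                    assert len(shared) % 2 == 0

    print("adjacent different-type checks overlap on an even number of qubits")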

The convention that one square is X and the other is Z is arbitrary. You can swap the X and Z roles of a qubit as long as you do it consistently. So instead of this local situation, where the 2x2 blocks indicate a nearby parity check:

    xx ZZ
    xx ZZ
      q
    ZZ xx
    ZZ xx
You can just as well do this:

    xx ZZ
    xZ xZ
      q
    Zx Zx
    ZZ xx
If you swap the X and Z roles of qubits in a checkerboardy sort of way, you end up with every parity check looking like

    xZ
    Zx
Which is sort of neat. Two different things became one thing. The paper shows that this arrangement has some other benefits. In particular, it does surprisingly well if one type of error dominates. There are proposals and concepts for hardware where one type of error dominates.
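
Here's a toy script (again my own sketch, names hypothetical) that does the checkerboardy swap on a small patch and confirms that both original check types collapse into the single xZ / Zx pattern drawn above:

    # Swap the X/Z role of every data qubit on one sublattice of the corners,
    # then look at what each plaquette's check turns into.

    def swapped(pauli, r, c):
        # Qubits on the (r + c) even sublattice get their X and Z roles exchanged.
        if (r + c) % 2 == 0:
            return {'X': 'Z', 'Z': 'X'}[pauli]
        return pauli

    def check_pattern(i, j):
        original = 'Z' if (i + j) % 2 == 0 else 'X'  # the plaquette's old check type
        corners = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
        # Read the corners top-left, top-right, bottom-left, bottom-right.
        return ''.join(swapped(original, r, c) for r, c in corners)

    patterns = {check_pattern(i, j) for i in range(4) for j in range(4)}
    print(patterns)  # {'XZZX'}: every check now looks the same (xZ over Zx)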

I will note that when you attempt to translate these parity check specifications into a quantum circuit, they have a tendency to compile into the same thing regardless of whether you swapped the X and Z roles of a qubit in the parity check spec. So in a certain sense nothing has actually changed, and the improvement is an artifact of considering noise models that aren't aware of the step where you compile into a circuit.

In order for the idea to really land, hardware with a dominant error type has to implement enough kinds of interactions between qubits that you never need an operation called the Hadamard, which swaps the X and Z axes of a qubit. If you ever use that operation, your dominant Z type errors have a simple route to being transformed into X type errors, which removes all the benefit. The hardware needs to enable you to change how you compile the circuit. AFAIK, no one has yet demonstrated any two-qubit interaction while maintaining a dominant type of error.
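
As a quick sanity check of that Hadamard point (my own sketch, nothing from the paper), conjugating by a Hadamard really does exchange the two error types:

    import numpy as np

    # An error E that happens just before a Hadamard acts on the output state as
    # H E = (H E H) H, i.e. it's equivalent to the error H E H happening after
    # the gate. Since H Z H = X, a dominant Z error becomes a dominant X error.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    print(np.allclose(H @ Z @ H, X))  # True
    print(np.allclose(H @ X @ H, Z))  # True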



