Hacker News

> And watching the experiment progress forward through time, it remains impossible to say beyond 50/50 odds what the actual final state of the particle will be in. But, if you follow the experiment backwards, following the particle's timeline from its future to its past, Murch and co. found that they could up their odds to 90 percent correctness. The implication is that everything that happened to the particle/state after the strong measurement, influences the strong measurement itself ... in the past.

I'm not a quantum physicist, but I don't get this. Maybe it's just explained in a silly way, but this seems to imply correlation, not "causation" (i.e. the future influencing the present). Correlation isn't that surprising, obviously - if I flip a coin, a friend looks at it, then I look at it, I can predict what my friend saw (in the past) with 100% accuracy.
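The coin analogy is easy to make concrete (a trivial simulation, just to pin down what "predicting the past with 100% accuracy" means in the correlation-only case):

```python
import random

random.seed(0)

# One shared coin per trial: the friend looks first, I look later.
# My later observation "retrodicts" the friend's earlier one perfectly,
# because both observations are of the same record -- correlation,
# not backward causation.
flips = [random.choice("HT") for _ in range(1000)]
friend_saw = flips    # the friend's earlier observations
i_see = flips         # my later observations of the same coins
accuracy = sum(f == m for f, m in zip(friend_saw, i_see)) / len(flips)
```

Here `accuracy` is trivially 1.0, which is exactly the point: nothing about the future influenced the past.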

Obviously I'm missing something.




It's explained in a silly way because explaining it in a non-silly way is not so easy. Here's my best shot:

Measurement in QM is not an all-or-nothing process (and hence there is no such thing as "wave function collapse"). You can measure "just a little bit" by carefully controlling the entanglements of the system you are "measuring". These are called "weak measurements". When you make weak measurements, the probabilities of various outcomes can be non-intuitive because of how the quantum math shakes out, similar to violations of the Bell inequality, except that the entanglements in this case are across time instead of across space. It's cool experimental work, but there's nothing fundamentally new here. It was already well known that the quantum wave function is timeless. This experiment just confirms it.
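The "measure just a little bit" idea can be sketched with a classical toy model (this is not the actual experiment, and the pointer model and numbers here are invented for illustration): a weak readout only nudges your Bayesian belief about the qubit's z-outcome, while a strong readout effectively pins it down.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_measure(p1, strength):
    """Toy weak z-measurement of a qubit with P(+1 outcome) = p1.

    Pointer model: readout r ~ N(z, 1/strength), where z = +1 or -1
    is the outcome a strong measurement would have given. A small
    'strength' means a very noisy pointer, hence little information
    gained and little disturbance to the belief.
    """
    sigma = 1.0 / strength
    z = 1 if rng.random() < p1 else -1     # outcome for this run
    r = rng.normal(z, sigma)               # noisy pointer readout
    # Bayesian update on the readout: likelihoods under z = +1 vs z = -1
    like_plus = np.exp(-(r - 1) ** 2 / (2 * sigma**2))
    like_minus = np.exp(-(r + 1) ** 2 / (2 * sigma**2))
    p1_post = p1 * like_plus / (p1 * like_plus + (1 - p1) * like_minus)
    return r, p1_post

_, p_weak = weak_measure(0.5, strength=0.05)    # barely moves off 0.5
_, p_strong = weak_measure(0.5, strength=50.0)  # lands near 0 or near 1
```

The quantum case replaces the Bayesian update with the actual post-measurement state, but the trade-off is the same: weaker coupling, less information, less disturbance.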


It is worth emphasizing that weak measurements apply (and can only apply) to ensemble averages. They work by making weak measurements on many copies of the same system (or, equivalently, on many identically prepared systems) and then drawing inferences about "the system" from the aggregate. But it is not, in fact, "the system"; it is the average over many copies of the system.
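A quick numerical sketch of that point (pure classical statistics; the distribution and numbers are chosen arbitrarily for illustration): any single weak readout is dominated by pointer noise and says almost nothing about its copy, but the average over many identically prepared copies converges to the ensemble expectation value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of identically prepared qubits with <z> = 0.6, i.e. P(z=+1) = 0.8.
# Each copy yields one weak readout r ~ N(z, sigma) with sigma >> 1.
p_plus, sigma, n = 0.8, 10.0, 200_000
z = rng.choice([1, -1], size=n, p=[p_plus, 1 - p_plus])
readouts = rng.normal(z, sigma)

one_copy = readouts[0]            # tells you almost nothing about z[0]
ensemble_mean = readouts.mean()   # converges to <z> = 0.6
```

So the retrodicted quantity lives at the level of `ensemble_mean`, never at the level of `one_copy`, which is the distinction Bohr would have insisted on.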

Bohr would likely have made a big deal over this, because there is no actual measurement on any single system that corresponds to the retrodicted inference, and that matters a great deal in Bohr's quantum ontology.



