Hacker News
When Time Flows Backwards (vice.com)
70 points by DiabloD3 on Feb 14, 2015 | hide | past | favorite | 14 comments



> And watching the experiment progress forward through time, it remains impossible to say beyond 50/50 odds what the actual final state of the particle will be in. But, if you follow the experiment backwards, following the particle's timeline from its future to its past, Murch and co. found that they could up their odds to 90 percent correctness. The implication is that everything that happened to the particle/state after the strong measurement, influences the strong measurement itself ... in the past.

I'm not a quantum physicist, but I don't get this. Maybe it's just explained in a silly way, but this seems to imply correlation, not "causation" (i.e. the future influencing the present). Correlation isn't that surprising, obviously - if I flip a coin, a friend looks at it, then I look at it, I can predict what my friend saw (in the past) with 100% accuracy.
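The correlation point can be made concrete with a classical toy (purely illustrative; the real experiment uses quantum weak measurements, not hidden bits). A hidden value is fixed at the start, a noisy readout leaks partial information about it, and hindsight beats blind prediction. The noise scale here is an arbitrary choice that happens to land hindsight accuracy near the article's 90% figure.

```python
import random

random.seed(1)

# Classical toy of prediction vs. retrodiction (illustrative only).
# A hidden value z = +-1 is fixed at the start; a noisy "weak" readout
# r = z + noise leaks partial information about it.
NOISE = 0.78     # arbitrary; chosen so hindsight accuracy lands near 90%
trials = 100_000

forward_correct = 0    # guess z blind, before any readout: pure 50/50
hindsight_correct = 0  # guess z from the readout, after the fact
for _ in range(trials):
    z = random.choice([-1, 1])
    r = z + random.gauss(0, NOISE)
    forward_correct += (random.choice([-1, 1]) == z)
    hindsight_correct += ((1 if r > 0 else -1) == z)

print(forward_correct / trials)    # ~0.50
print(hindsight_correct / trials)  # ~0.90
```

The 50%-vs-90% gap here involves no backwards causation at all: the readout is simply correlated with the hidden value, and hindsight gets to use it.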

Obviously I'm missing something.


It's explained in a silly way because explaining it in a non-silly way is not so easy. Here's my best shot:

Measurement in QM is not an all-or-nothing process (and hence there is no such thing as "wave function collapse"). You can measure "just a little bit" by carefully controlling the entanglements of the system you are "measuring". These are called "weak measurements". When you make weak measurements, the probabilities of various outcomes can be non-intuitive because of how the quantum math shakes out, similar to the Bell inequality, except that the entanglements in this case are across time instead of across space. It's cool experimental work, but there's nothing fundamentally new here. It was already well known that the quantum wave function is timeless. This experiment just confirms it.


It is worth emphasizing that weak measurements apply only (and can only apply) to ensemble averages. They work by making weak measurements on many copies of the same system (or equivalently, many identically-prepared systems) and then drawing inferences about "the system" from this. But it is not, in fact, "the system" but rather the average over many copies of a system.
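A classical caricature of the ensemble point (not the quantum formalism, and all numbers here are arbitrary): each copy carries a value z = ±1, and a "weak" readout couples so gently that its noise dwarfs the signal, so no single readout says much about its own copy, yet the ensemble average still converges on the expectation value.

```python
import random
import statistics

random.seed(42)

# Each copy has z = +-1 with P(z = +1) = 0.8, so <z> = 2*0.8 - 1 = 0.6.
P_UP = 0.8
NOISE = 10.0        # weak coupling => readout noise >> signal
copies = 200_000    # many identically prepared copies

readouts = [(1 if random.random() < P_UP else -1) + random.gauss(0, NOISE)
            for _ in range(copies)]

# A single readout is nearly uninformative about its own copy's z...
print(round(readouts[0], 2))
# ...but averaging over the ensemble recovers <z> ~= 0.6.
print(round(statistics.fmean(readouts), 2))
```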

Bohr would likely have made a big deal over this, because there is no actual measurement on any single system that corresponds to the retrodicted inference, and that matters a great deal in Bohr's quantum ontology.


Much better article (and from the source):

http://news.wustl.edu/news/Pages/future-affects-past-quantum...


And the actual paper:

http://arxiv.org/abs/1409.0510


Future me probably understands this...


How does this compare to the classical electrodynamics interpretation of http://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorbe... ?

I mean, the former is not hard for me to believe as a model for classical electrodynamics (given the theory's successful consistency). Why should the quantum equivalent be hard to believe?


The interpretation this is based on is usually called http://en.wikipedia.org/wiki/Two-state_vector_formalism.


The 90-10 odds of "retrodiction", as opposed to the 50-50 odds of prediction, suggest it could in reality be 100-0 given better tools, with the initial 90-10 odds being the result of macroscopic "interference". Could this then mean quantum decoherence isn't so much the result of measurement as it is the result of time running backwards at the quantum scale? This would mean time runs forwards at the gravitational scale but backwards at the quantum, and the collapse of quantum states is where the two time directions smash into each other, so to speak.

Perhaps the existence of consciousness is also the result of time running backwards, somehow reaching up from the quantum to the macroscopic level. It would be just as good as the weak anthropic principle to explain why we humans seem to be the only intelligent life in the Universe. If the rise of consciousness was somehow planned and executed from the future, it would perhaps even require only one instance of consciousness evolving somewhere given how difficult it might be for those in the future to execute it.

One hint this is actually what's happening is the fact that the distribution of the planets in the solar system appears to be fine-tuned for inter-stellar space travel. There are only about 10 other solar objects with gravity comparable to Earth's and hence suitable for long-term civilization. The Moon has already been walked on, and humans will one day easily build large contained cities there using telepresent robots with a 1-second response time. Then they will be experienced enough to tackle Mars and Mercury with semi-intelligent robots to cater for the much longer response time. Mercury will be a good candidate to build a huge particle accelerator around its equator because its very slow rotation allows human workers to aid in construction, and its highly elliptical orbit around the Sun would give many gravitational variances for experiments. Jupiter's 4 large moons would provide humankind with experience of much longer travel times, a single large moon of Saturn eliminates any initial choice of target, then the huge gap skipping Uranus's lack of sizable moons to Neptune's Triton gives an even longer travel distance. Concurrent with all this, humans will spend probably thousands of years terraforming Venus, the ultimate in hostile environments. By then humankind will have trained up, apparently serendipitously, for inter-stellar travel using knowledge from terraforming Venus, travelling huge distances to Triton, and whatever's learnt from the Mercury particle accelerator.

Perhaps humans will build a silicon-based consciousness in each star system they colonize. Each planet-based consciousness in the galaxy could then communicate with one another to build a one-off galactic consciousness by structuring the gravitons flowing between the star systems, though its speed of thought would be far slower than that of humans. Ditto for each galaxy humans eventually reach in the observable Universe, and then the whole Universe. This Universal Consciousness based on gravity then "begins" to reach back in time using quantum retrodiction to ensure fine-tunings so one planet in one galaxy 14 billion yrs after the Big Bang evolves monkeys then humans who then mutate as they colonize the galaxy and build silicon-based AI and eventually a single Universe-wide gravity-based consciousness. If the Universe continues expanding and never comes to an end, then such a Universal gravity-based consciousness need never finish being built, it only needs to be continually being built. If a graviton-based consciousness can indeed reach back in time through retrodiction, not only would the Universe increase in entropy over time, but also increase in intelligence. Perhaps there's a law of physics saying that as entropy increases, so does intelligence.


Think about it this way: Everything you’re doing at this present moment has already happened. You’ve already reached your destination. All you’re doing right now is letting yourself (go through the) experience (of that moment).

In the next 30 years, this will be much better understood, with our (scientific) tools and "consciousness" evolving to a point where we can measure, prove, and present this as fact.

Go back to the middle ages and try not to fall for the common belief that the “The Black Death” is a punishment by God for your sins, simply because there were no tools to locate and fight “Yersinia pestis”.

Go back to when it was a common belief that the earth was flat, until someone ventured out across the (then) vast oceans and proved otherwise.

And don’t forget, it already happened. You’re just in for the ride.


A really fine piece of flash SF. Thanks!


I like this! crash Another!


How high are you right now?


Thiotimoline



