
I thought "coherence" was the model by which quantum systems spread entanglement to other systems; the larger the system the 1st system into contact with, the bigger the effect of coherence loss in the 1st system and the larger the "measurement"


Also see this piece by Steven Weinberg:

http://www.nybooks.com/articles/2017/01/19/trouble-with-quan...

"One common answer is that, in a measurement, the spin (or whatever else is measured) is put in an interaction with a macroscopic environment that jitters in an unpredictable way. For example, the environment might be the shower of photons in a beam of light that is used to observe the system, as unpredictable in practice as a shower of raindrops. Such an environment causes the superposition of different states in the wave function to break down, leading to an unpredictable result of the measurement. (This is called decoherence.) It is as if a noisy background somehow unpredictably left only one of the notes of a chord audible. But this begs the question. If the deterministic Schrödinger equation governs the changes through time not only of the spin but also of the measuring apparatus and the physicist using it, then the results of measurement should not in principle be unpredictable. So we still have to ask, how do probabilities get into quantum mechanics?"


Doesn't the probability basically follow from the uncertainty in your own eigenstate, i.e. that of the system performing the measurement? This contextuality is why deterministic interpretations of QM also entail probabilistic measurements.


It is generally agreed (except, perhaps, by the strongest champions of the decoherence program) that decoherence does not completely solve the measurement problem.

Some good references here: http://physics.stackexchange.com/questions/295527/decoherenc...

It helps explain the loss of interference, but it does not resolve the question of why and how we see one particular outcome.
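To make the "loss of interference but not one particular outcome" point concrete, here's a minimal numpy sketch (my own illustration, not taken from the linked answers): a qubit in superposition is coupled to a toy two-level "environment" by a perfectly unitary interaction, and tracing out the environment wipes out the off-diagonal interference terms while still leaving a 50/50 mixture rather than a single result.

    import numpy as np

    # System starts in the superposition (|0> + |1>)/sqrt(2); a two-level
    # "environment" starts in |0>.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    env0 = np.array([1.0, 0.0])
    psi = np.kron(plus, env0)              # joint state, pure

    # Model the system-environment coupling as a CNOT-like unitary:
    # the environment "records" the system's basis state.
    U = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    psi = U @ psi                          # (|00> + |11>)/sqrt(2): still pure, still unitary

    # Reduced density matrix of the system: trace out the environment.
    rho = np.outer(psi, psi.conj())
    rho_sys = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    print(rho_sys)
    # [[0.5 0. ]
    #  [0.  0.5]]
    # The off-diagonal terms are gone (no interference), but what remains is a
    # 50/50 mixture, not one definite outcome, and the global state is still a
    # pure superposition.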


> that decoherence does not completely solve the measurement problem

It's kind of funny how the problem keeps getting pushed to higher levels of "meta":

If you consider the experimenter and his system, measurements of (non-eigenstate) quantum systems appear indeterministic to him. However, the state of [experimenter + system] is governed by an entirely deterministic equation that follows a reversible, unitary path through time. Great! But the problem is that you then have another experimenter who measures that composite system, and the outcomes he sees likewise appear indeterministic. So now you consider the system of [experimenter 2 + [experimenter 1 + system]], and we've got infinite regress — a.k.a. the measurement problem.


>So now you consider the system of [experimenter 2 + [experimenter 1 + system]], and we've got infinite regress — a.k.a. the measurement problem.

At the risk of sending things off on a huge tangent, it's interesting to see physicists recognizing that an infinite regress is, at least sometimes, unsatisfactory (even though there is of course nothing incoherent per se about the concept of an infinite sequence). Physicists usually tend to give short shrift to metaphysical arguments that rule out certain states of affairs on the grounds that they would involve an infinite regress of a problematic kind. But the logic you're using to argue against decoherence as a solution to the measurement problem is very similar to e.g. Aristotle/Aquinas's argument that the causal hierarchy must have a terminus. I'm not saying that in an attempt to start an argument about God. It's just interesting to see similar reasoning used in such different domains. (And of course in neither domain is it entirely clear that the reasoning works.)


The point is that you can prove that a thing is self-consistent without proving it is true. And I think what you are calling an "infinite regress" is, in this case, self-consistency.

Proving self-consistency is a decent achievement, and is certainly not an error. But it is only weak evidence in favour of a position.


> And I think what are calling an "infinite regress" is, in this case, self-consistency.

I don't understand what you mean by this.


> quantum systems appear indeterministic to him. However, the state of [experimenter + system] is governed by an entirely deterministic equation ...

This sort of thing only makes sense in the context of many-worlds QM, and it is amusing how many professed non-many-worlders say such things.

We often describe quantum systems using an entirely deterministic (Schroedinger) equation. But we don't know in what sense that equation describes the physical state of the experimenter + system, or in what sense it is "just" a probability model.

If you choose to include the whole thing as physical reality, then you are left with all the terms in the equation -- and thus all of Everett's multiple worlds. Fine, that is a logically coherent position. But it's not the only one.


> and it is amusing how many professed non-many-worlders say such things.

I'm not saying I do or don't believe any of this (if anything, I'm interpretation-agnostic at the moment). I'm just pointing out that there is a contradiction in having one postulate demand unitary state evolution (the Schrödinger equation) for some ill-defined "system" while another postulate says that unitarity is broken at the system/environment boundary. While there have been plenty of attempts to work around this (e.g. https://arxiv.org/abs/quant-ph/0101012), I wouldn't say that anyone has formulated a consistent set of axioms that definitively resolves the issue.

> We often describe quantum systems using an entirely deterministic (Schroedinger) equation. But we don't know in what sense that equation describes the physical state of the experimenter + system, or in what sense it is "just" a probability model.

Agreed. It's certainly a useful model, but it leaves out all kinds of interesting phenomena that we observe in practice (namely, relativistic and radiative effects). Curiously though, if you turn to QFT for a better probabilistic model, Haag's theorem (https://en.wikipedia.org/wiki/Haag%27s_theorem) implies that a universal Hilbert space representation cannot describe both free and interacting fields (a problem that the non-relativistic Schrödinger equation doesn't have!).


I feel like if I claimed the Schrödinger equation was just a probability model, I'd be immediately lambasted because "the wave equation is reality", or else immediately branded a "local hidden variables proponent".


Let's say I toss a fair coin; before I look at it, the outcome is indeterministic to me: there's a 50/50 chance of heads or tails. Once I look at it, I gain 1 bit of information. To another experimenter, the 'me + coin' system is indeterministic: it's either me seeing heads or me seeing tails. Once I tell that experimenter the outcome, they gain 1 bit of information. To a different experimenter, the two of us with the coin is indeterministic... and so on.

Is this scenario fundamentally different from the quantum system? Is this scenario also a "problem"?
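For what it's worth, one way to make that comparison precise is with density matrices (a numpy sketch of my own, not a claim from anyone above): the unlooked-at coin is a diagonal 50/50 mixture, while a qubit in superposition carries off-diagonal coherences, and the two give different answers only when you ask a "rotated" question.

    import numpy as np

    # Classical coin after the toss, before anyone looks: a 50/50 mixture.
    rho_coin = np.array([[0.5, 0.0],
                         [0.0, 0.5]])

    # Qubit prepared in (|0> + |1>)/sqrt(2): a pure superposition.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    rho_qubit = np.outer(plus, plus)       # note the off-diagonal 0.5 terms

    # Identical 50/50 statistics if you only ever ask "heads or tails?",
    # i.e. measure in the {|0>, |1>} basis:
    print(np.diag(rho_coin), np.diag(rho_qubit))    # [0.5 0.5] [0.5 0.5]

    # Ask in the rotated {|+>, |->} basis and they come apart:
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard basis change
    print(np.diag(H @ rho_coin @ H))     # [0.5 0.5] -- no interference
    print(np.diag(H @ rho_qubit @ H))    # [1. 0.] -- the coherence shows up

So the two states are different in principle; whether that difference makes one scenario a conceptual "problem" and the other not is exactly the question being asked here.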


What you just described is a hidden variable model which cannot reproduce the behavior of entangled particles. For details look up Bell's theorem.


Oh I'm well aware of Bell's theorem; I'm a trained Physicist :)

Bell's theorem rules out theories of local hidden variables. It says nothing about non-local hidden variables, or even something more mundane like determinism (sometimes referred to as "superdeterminism").

In any case, I'm not sure Bell's theorem has much impact on my question: why is one of these things (transfer of information from a coin toss) not a problem, conceptually; whilst the other (entanglement of quantum systems) is a problem, conceptually?

(Personally, I don't find either particularly troubling; just curious to know what the philosophical distinction is, without appeals to "quantum weirdness")
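To put numbers on the Bell's theorem point above, here is a quick numpy sketch (my own illustration, using the standard CHSH setup rather than anything specific to this thread): every local-hidden-variable assignment is capped at a CHSH value of 2, while the singlet state predicts 2*sqrt(2).

    import numpy as np
    from itertools import product

    # CHSH combination: S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
    # Local hidden variables: the four outcomes A(a), A(a'), B(b), B(b')
    # are all fixed +/-1 values in advance. Enumerate every assignment:
    lhv_max = max(A0*B0 + A0*B1 + A1*B0 - A1*B1
                  for A0, A1, B0, B1 in product([1, -1], repeat=4))
    print(lhv_max)    # 2 -- the local-hidden-variable bound

    # Quantum prediction for the singlet state: E(a,b) = -cos(a - b).
    # Standard angles a=0, a'=pi/2, b=pi/4, b'=-pi/4:
    E = lambda x, y: -np.cos(x - y)
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
    print(abs(S))     # 2.828... = 2*sqrt(2), violating the bound of 2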


A classical analogue of entanglement is not problematic for causally connected processes, and in some cases it's a good model, as with human reasoning or neural networks. QM is a generalisation of Bayesian reasoning that can handle non-commuting variables.

Yet entanglement is observed non-locally, even backwards in time or between degrees of freedom that never co-exist. Nobody has been able to create a non-local model that doesn't require fine tuning and the idea goes against the spirit of special relativity.

Thus, I would agree with you, there is no "problem" with entanglement. Except non-locality.
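On the "non-commuting variables" point above, a tiny numpy illustration (my own example, not from the thread): classical random variables always commute, while the qubit observables X and Z do not, which is exactly why pinning one down makes the other maximally uncertain.

    import numpy as np

    # Two standard qubit observables (Pauli matrices):
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    # Classical (Bayesian) random variables always commute; these don't:
    print(X @ Z - Z @ X)    # [[ 0 -2] [ 2  0]] -- nonzero commutator

    # Consequence: prepare the Z eigenstate |0>, so the "Z" question is
    # certain, and the "X" question becomes a fair coin.
    state = np.array([1.0, 0.0])
    x_plus = np.array([1.0, 1.0]) / np.sqrt(2)
    print(abs(x_plus @ state) ** 2)    # 0.5 (up to float rounding)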


I've long thought that the measurement problem is a problem with the interface between consciousness and reality and that it's more of a psychology problem than a physics one.


ok, but I don't think that's quite what Xcelerate is saying here


> it does not resolve the question of why and how we see one particular outcome.

Agreed. Decoherence doesn't explain particular outcomes. It explains the scale & magnitude of mixing quantum states from different systems.

I thought "what kinds of physical interactions qualify as measurements" was referring to a different part of understanding QM. Decoherence doesn't explain which outcome, it does explains "part" of (or place constraints upon) the mechanism of measurement process.


The problem here is that very much of what we mean by "scale and magnitude" boils down to the frequency of particular outcomes when situations are repeated.

So decoherence only explains those things after you already have the Born rule (P ∝ |Ψ|^2). But that's the very thing we are trying to explain!
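To spell out what "already having the Born rule" means, a minimal sketch (numpy, with illustrative amplitudes of my own choosing): the rule is the step that turns amplitudes into probabilities, and reading the diagonal of a decohered density matrix as outcome frequencies silently uses it.

    import numpy as np

    # The Born rule as a postulate: amplitudes in, probabilities out.
    psi = np.array([np.sqrt(1/3), np.sqrt(2/3) * np.exp(0.7j)])   # arbitrary qubit state
    probs = np.abs(psi) ** 2
    print(probs)                     # [0.33333333 0.66666667] -- and they sum to 1

    # Decoherence arguments read the diagonal of the (decohered) density
    # matrix as outcome frequencies -- but that reading *is* the Born rule:
    rho = np.outer(psi, psi.conj())
    print(np.real(np.diag(rho)))     # same [1/3, 2/3]; the rule was assumed, not derived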


Maybe you are trying to explain the Born rule, but I was not claiming to explain that; I've already stated decoherence doesn't explain why we get certain outcomes instead of others.


It explains what we experience perfectly well. In the simple case of a 50/50 coin flip, there will be two equally real future versions of you. One will see heads, one tails.

You don't need to dive into quantum mechanics to understand this concept; instead, consider being cloned twice while you are asleep, with the original then destroyed.

Your current self knows what will happen; you'll fall asleep and wake up as either one or the other, becoming both but never being both.



