A good chunk of that is that people only got a delivery mechanism to work in the past few years. The "for a decade" framing is misleading, because a decade ago it was an interesting idea nobody had gotten to work in a useful way. By 2020 it was something with real promise and product development under way, which could be sped up massively with more money and acceptance that it was a gamble.
This feels like a post-hoc rationalization to me. Of course it was less ready, and then more ready. But exactly how much of a gamble was it?
The upside is immense even considering just existing diseases, but we are really bad at pricing things that improve the status quo vs. things that prevent bad outcomes.
The theory, assuming a delivery mechanism is found, is rock solid (a matter of "when", not "if"), so I wouldn't call it a gamble.
On the other hand, I don't know enough about the biology to speak to the difficulty and uncertainty around delivery mechanisms. Can we get more info on that? I suspect it wasn't too uncertain, though "oh, we change this base pair a bit and the immune system doesn't care" does seem like a relatively unplanned discovery.
Hitler was not good, much less great, for technology. Germany was the scientific center of the world before the Nazis took power, so their successes are mostly attributable to inertia. The Nazis destroyed this on their own, starting with the 1933 civil service law that purged Jewish academics (the Nuremberg Laws followed in 1935).
Entropy is a measure of the uncertainty that an observer has over the microstates (i.e. the exact state of every atom) given their knowledge of the macrostate (i.e. what they are capable of describing, like "the water is just above freezing"). So it's inherently a subjective concept. Entropy can be measured in bits, the same unit as information (physicists usually quote it in joules per kelvin, which differs only by a constant factor).
Anyway, entropy is measurable just like any other physical quantity. You need to specify your model of the microstates (often an "ideal gas": independent molecular point particles with their own positions and velocities, interacting only through perfectly elastic collisions), state what you currently know (things like the type of gas and its pressure/volume/temperature), and then there is a clear answer to what the entropy is.
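To make that concrete, here's a rough sketch of such a calculation for the simplest possible case, a monatomic ideal gas, using the Sackur-Tetrode equation (my own illustration; the helium-at-room-conditions numbers are just an example):

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23      # Boltzmann constant, J/K
h   = 6.62607015e-34    # Planck constant, J*s
N_A = 6.02214076e23     # Avogadro's number, 1/mol

def ideal_gas_entropy_bits(pressure, volume, temperature, molar_mass):
    """Sackur-Tetrode entropy of a monatomic ideal gas, converted to bits.

    pressure in Pa, volume in m^3, temperature in K, molar_mass in kg/mol.
    """
    N = pressure * volume / (k_B * temperature)   # number of particles
    m = molar_mass / N_A                          # mass per particle
    U = 1.5 * N * k_B * temperature               # internal energy of a monatomic gas
    # S = N k_B [ ln( (V/N) * (4 pi m U / (3 N h^2))^(3/2) ) + 5/2 ]
    S = N * k_B * (math.log((volume / N) * (4 * math.pi * m * U / (3 * N * h**2))**1.5) + 2.5)
    return S / (k_B * math.log(2))                # convert J/K to bits

# Illustrative example: one litre of helium at room temperature and atmospheric pressure
print(ideal_gas_entropy_bits(101325.0, 1e-3, 298.15, 4.0026e-3))  # roughly 5e23 bits
```

The point being: feed in the macrostate (pressure, volume, temperature, what gas it is) and out comes a perfectly definite number of bits.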
The true law underlying the second law of thermodynamics is conservation of information. In (classical) physics, this is typically cast as Liouville's theorem, which shows that the volume a set of states occupies in phase space remains constant as the system evolves. (In quantum mechanics, the analogue is only true for unitary transformations, which may or may not be everything, depending on your interpretation of QM.)
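A toy illustration of Liouville's theorem (my example, not a general proof): for a harmonic oscillator with m = ω = 1, the phase-space flow is a pure rotation, so the area of any patch of initial conditions is preserved exactly.

```python
import math

def evolve(q0, p0, t):
    """Exact phase-space flow of a harmonic oscillator (m = omega = 1)."""
    return (q0 * math.cos(t) + p0 * math.sin(t),
            -q0 * math.sin(t) + p0 * math.cos(t))

def polygon_area(points):
    """Shoelace formula for the area enclosed by a polygon of (q, p) points."""
    n = len(points)
    return 0.5 * abs(sum(points[i][0] * points[(i + 1) % n][1]
                         - points[(i + 1) % n][0] * points[i][1]
                         for i in range(n)))

# An arbitrary triangular patch of initial conditions in phase space
patch = [(1.0, 0.0), (1.2, 0.1), (1.1, 0.3)]
print(polygon_area(patch))                                   # area before evolution
print(polygon_area([evolve(q, p, 2.7) for q, p in patch]))   # same area afterwards
```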
Anyway, if you're curious about learning more, I highly recommend http://www.av8n.com/physics/thermo which is an amazing online (and free) book that clarifies the concepts of thermodynamics brilliantly.
>> Anyway, entropy is measurable just like any other physical quantity
Uh, I measure the mass of a baseball by putting it on a scale, and I get an objective number in kilograms. Similar for velocity, temperature, and volume.
So far as I know, entropy is pretty unique in that there isn't, and never will be, an instrument that gives the number of bits in a baseball.
>> So it's inherently a subjective concept.
Yes, this is my point. Almost nothing in physics is subjective.
Yeah, it is different, but once you know what you're after, you can measure it. It's a "type error" to ask for the number of bits of entropy in a baseball, but if you ask for the bits of entropy of a baseball given everything you know about it (mass, composition, temperature), then you can measure that. You could even design a special instrument that makes the relevant measurements of observable quantities and then calculates the entropy of a given object from them.
Temperature is actually defined from entropy: it's the change in energy per unit change in entropy. So it too is inherently subjective. One way to think about this is that to a simulator or god outside of our universe, who can see everything happening in the universe precisely, the temperature and entropy of everything is exactly zero (of course, they would still be able to predict what we would measure). To them, the level of a thermometer is simply a mechanical consequence of all the particles nudging it to that exact place (just as, if you saw someone pump the mercury up the tube by hand, you wouldn't conclude it had suddenly gotten much hotter).
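To be explicit about the definition being used above (standard thermodynamics; E is internal energy, S entropy, with volume V and particle number N held fixed):

```latex
\frac{1}{T} \;=\; \left(\frac{\partial S}{\partial E}\right)_{V,N}
\qquad\Longleftrightarrow\qquad
T \;=\; \left(\frac{\partial E}{\partial S}\right)_{V,N}
```

Since S depends on what the observer counts as the macrostate, T inherits that dependence.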
> So far as I know, entropy is pretty unique in that there isn't, and will never be an instrument that gives the number of bits in a baseball.
Nonsense. Black hole entropy is measurable in exactly the same way: put it on a scale, get the mass in kilograms, compute the Schwarzschild radius and horizon area from the mass, and voilà, you've got an entropy.
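For what it's worth, here's that chain of arithmetic spelled out as a sketch of the standard Bekenstein-Hawking formula (the solar-mass example values are mine):

```python
import math

# Constants (SI units)
G    = 6.67430e-11       # gravitational constant
c    = 2.99792458e8      # speed of light
hbar = 1.054571817e-34   # reduced Planck constant
k_B  = 1.380649e-23      # Boltzmann constant

def black_hole_entropy_bits(mass_kg):
    """Bekenstein-Hawking entropy S = k_B * A * c^3 / (4 * G * hbar), in bits."""
    r_s  = 2 * G * mass_kg / c**2          # Schwarzschild radius
    area = 4 * math.pi * r_s**2            # horizon area
    S    = k_B * area * c**3 / (4 * G * hbar)
    return S / (k_B * math.log(2))         # convert J/K to bits

# Illustrative example: a solar-mass black hole (~2e30 kg), about 1.5e77 bits
print(black_hole_entropy_bits(1.989e30))
```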
"The Many-Worlds Interpretation has it that each time we make a measurement, reality splits into several alternative versions, identical except for the measurement outcome."
That's not right, the Many-Worlds Interpretation is just that the wavefunction in the Schrödinger equation (or its generalizations) is real, and it rejects that there's a separate process that somehow collapses the wavefunction to the single "branch" we perceive ourselves to be in. At the metaphysical level, there's no splitting involved, and no ontological measurements either (much less a "we" to do the measuring).
In the book, Wallace quotes an exchange between Paul Davies and David Deutsch. Davies: "So the parallel universes are cheap on assumptions but expensive on universes?" Deutsch: "Exactly right. In physics we always try to make things cheap on assumptions." Wallace also re-quotes "I do not know how to refute an incredulous stare".
No offense intended, but the "lost contact with empirical reality" critique really lands for the above description of MWI. A mathematical construct (the wavefunction) is taken as more real than basic empirical experiences like "measurements" and "we."
One of the major aims of physics is to try to describe reality with mathematical constructs.
If you start with what can be directly observed or experienced, then that's phenomenology.
Those are both valid and interesting ways of trying to understand reality, and I believe they are ultimately fully compatible with each other. But I think you are trying to hold physics to phenomenology standards in a way that doesn't make sense. No one thinks that Newton's theory of gravity "lost contact with empirical reality" because it doesn't have measurements or people as ontological components.
Anyway, that's all beside the point I was trying to make, which was just that the description of the MWI in the article was plain wrong.
Not necessarily. When you live inside the wavefunction the outcomes of certain experiments very much look like a wavefunction collapse. MWI is a way to explain why that happens without having to have a special case where the universe has a completely different behaviour for just a moment in time.
You can formalise the mathematics of it using the concept of quantum decoherence.
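A minimal toy version of that formalisation (my example, not the parent's, and it assumes numpy): entangle a qubit with a two-state "environment", trace the environment out, and the off-diagonal (interference) terms of the qubit's reduced density matrix vanish, which is exactly the "looks like collapse" behaviour.

```python
import numpy as np

# System qubit in an equal superposition, environment starts in |0>
plus = np.array([1, 1]) / np.sqrt(2)
env0 = np.array([1, 0])

# Before interaction: product state, reduced density matrix has off-diagonal terms
psi = np.kron(plus, env0)
rho = np.outer(psi, psi.conj())

# Interaction that records the system state in the environment (a CNOT):
# |0>|e> -> |0>|e>,  |1>|e> -> |1>|e XOR 1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
psi_after = CNOT @ psi
rho_after = np.outer(psi_after, psi_after.conj())

def reduced_system(rho4):
    """Partial trace over the environment of a two-qubit density matrix."""
    return np.trace(rho4.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(reduced_system(rho))        # [[0.5, 0.5], [0.5, 0.5]] -- coherent superposition
print(reduced_system(rho_after))  # [[0.5, 0. ], [0. , 0.5]] -- interference terms gone
```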
MWI handwaves that problem away without really explaining it. "It's random but subject to the Born Rule" is a description of what's already observed, not an explanation with predictive power for new and distinct observations.
There are much more complex criticisms that use words like "ontic" and "epistemic", but that's the fundamental problem that MWI claims to solve but doesn't.
How does MWI imply that we should simultaneously experience multiple realities? If two different states of you experience two different realities, each state of you is only aware of one reality.
MWI largely reduces to something like the Copenhagen interpretation for large systems. It's just that MWI explains the transition between the quantum and classical regime, doesn't require any ad hoc rules about observers, and doesn't need Schrödinger's equation to be violated.
The claim wasn't that we should experience inaccessible worlds for MWI to be compatible with our experience of reality. That's obviously absurd.
The claim was that inaccessible worlds are inherently incompatible with our ability to empirically investigate or falsify them (via experience).
MWI has a metaphysical parsimony to it. But to believe it's physics is religious faith, not physical science. And that's fine, but it's still not science.
I don't think collapse is real either. I think it's a phenomenological byproduct of consciousness requiring particularity to model the world it perceives. You've already alluded to that being the case ('we can only experience our worldline').
If you knew the metaphysics behind that particularity well enough, you'd also know that it leaves no ground for and has no need of the existence of a physical material reality to begin with. As such, materialist physics has already made an epistemic leap which inevitably leads to mistaken intuitions about the most reasonable ways to interpret empirical phenomena.
> The claim was that inaccessible worlds are inherently incompatible with our ability to empirically investigate or falsify them
That's not what the claim was. The original claim was that MWI is "incompatible with our experience of reality." TheOtherHobbes then said that the incompatibility is that we experience only one reality, instead of many realities.
> MWI has a metaphysical parsimony to it. But to believe it's physics is religious faith not physical science- and that's fine, but it's still not science.
It's not religion to point out that QM explains the phenomenon of wavefunction collapse without any additional postulates (through decoherence). That's all that MWI says.
Isn't that just a claim about the sensitivity of the detector?
I'm sure folks were dubious about the theory of the electromagnetic spectrum given their bias to visible light, but as experiments improved and detectors with them it doesn't seem so farcical.
From a physics point of view this is a very strange thing to say. Why should 'we' or 'measurements' have special status? Because we have a soul or something? Now, that would be a theory where an abstract concept is taken to be more real than observable things. The measurement apparatus is also a physical thing that should be described by the physical theory in use, as would the humans be.

The very painful point about the Copenhagen interpretation is this distinction between 'normal' time evolution and the special procedure when a 'measurement' is carried out. Actually, there exist mathematical proofs that the normal time evolution, when applied in cases where information is transferred from microscopic states to macroscopic ones, leads to something that looks like a collapse of the wave function but is actually a split of it. As a physical theory this is much more attractive than giving a 'measurement' special status. The question of what this all means is a bit more mind-boggling, though, including multiple worlds and that kind of stuff.

Taking 'a mathematical construct as more real than basic empirical experiences' is basically the history of physics, and it has in the past been highly successful.
> Why are 'we' or 'measurements' things of special status?
Because we experience the world through our senses. Everything else is one mathematical model or another that we’ve created. And our models aren’t even consistent!
> very painful point about the Copenhagen interpretation is this distinction between 'normal' time evolution and the special procedure when a 'measurement' is carried out
This is not a special procedure. Measurement occurs whenever physical interactions take place. To measure a particle, we bounce another particle off of it and then try to detect the result. The measurement is the particle collision, not the detection. It’s like playing billiards in the dark. We don’t know where the balls are.
> Taking 'a mathematical construct as more real than basic empirical experiences' is basically the history of physics and it has in the past been highly successful.
Except for all of the times when it broke down. When one model was found to contradict our experiments and we had to replace it with another, which later turned out to be wrong as well. Perhaps the most embarrassing example of this, in human history, is all of our attempts to make geocentric models work [1].
The most well-known critique of science's institutional habit of inventing new models whenever old ones broke down is probably Kuhn's paradigms [2]. If you're interested, you're better off reading Kuhn than anything I have to write here. I think the best evidence for Kuhn's thesis is the abject disappointment we witness every time particle physicists fail to overturn the Standard Model. If that's not supremacy of measurement, then I don't know what is.
You say "The measurement is the particle collision, not the detection.", but this is not right.
When two particles bounce off each other, there is no collapse according to traditional Copenhagen; instead the wave function just evolves according to the Schrödinger equation (SE). Even worse, when that particle (let's say a photon) then travels to the measurement device to interact with the particles that make up that machine, the evolution is similarly governed by the SE. Somehow, though, at some point nature decides that a measurement has taken place and collapses the wave function. What dictates where that happens? Honestly, this way of thinking about it makes no sense to me. The MWI is, to my mind, the simplest explanation for all of this mayhem.
I do not believe they are arguing for a soul but rather just pragmatism. It's not that “measurement” has some special status as coming from a soul or something: indeed your unwillingness to see it as a normal thing that happens, such that you immediately jump to this question, indicates that you have already “drunk the Kool-Aid.”
This is more clear in Everett's original formulation. Everett didn't speak of “many worlds” but of a difference between relative and absolute truth. You measure a spin-½ particle along some axis, it is only “relatively” true that you saw what you saw, say ↑, and there is also a relative truth that you saw ↓. But because these relative truths are exhaustive there is also an absolute truth that you have the deluded belief that you saw either one or the other; that truth holds in both relative truths. The “lost contact” objection is precisely that in Everett's theory this belief is ultimately delusional (there is nothing like collapse to which they correspond, it is just formally incorrect to confuse your relative truths with the absolute truths) and we are led by the theory to delusions like this “with high probability” (scare quotes because Everett realized more so than his successors that eliminating collapse also untethers the theory from probability in a deep way). It is just accepted that these deeply practical things like “my existence as an observer” and “my tool’s measurements” are ultimately based on a sort of illusion which has no correspondence to the true reality; this is buried in a mathematical technicality in his thesis (he notes that he is not looking for an “isomorphism” between experience and the external world, but a “homomorphism”), but it is kind of the crux of the whole enterprise. “It’s fine if I predict things which are not observed, so long as I also predict the things that are observed and I predict that you will be very opinionated about not observing the things predicted but not observed,” if you will, is the homomorphic approach to QM that Everett advocates.
In that respect there is something very different from “the history of physics.” Physics does in some cases say that certain things are illusions. And those statements track very closely with MWI. For example a rainbow appears to be a thing out in the world, but modern physics is very happy to say that the phenomenon cannot be correctly located as an object inside the cloud in which it is perceived, because it turns out the cloud is “rainbowing” in different ways in many different directions and you are only getting part of the story. You see part of the light suggesting a reconstruction of a physical object, but if we look at all of the light we realize that the different reconstructions are not reconcilable because they all have to occur at a 41° angle from the rays of the Sun and real objects have varying angles.
You can see a lot of similar ground trod here. Both claims of illusion appeal to the act of trying to reconstruct the external world from observed experience. Both then appeal to looking at all observations taken together. But there is a difference in scope. In the rainbow case there is a counterfactual, a “what you would have seen,” that can be confirmed with a camera at some other location. We can do a parallax measurement to reconstruct that the rainbow is actually around as far away from you as the Earth is from the Sun or so. But in MWI these sorts of experiments which may be possible are ultimately forever infeasible, I need to have quantum control of every atom of my measurement apparatus if I want to make some similar observation, and even then any particular observation will be consistent with a classical ontology, it's just the pattern of many observations which will suggest non-local correlations which I can interpret as evidence for maybe everything being illusory. One rocks your boat gently, the other is sailing in a hurricane.
I think there is some misunderstanding here based on the wording used. OPs point is simply that there is no separate, explicit branching process. Instead, there is only the normal, continuous evolution of the wavefunction.
Our experience is recovered from this by positing that subjective/phenomenological experience is somehow tied to the individual components of the wavefunction. Since the individual components don't interact with each other, it gives the appearance of branching. This is compatible with our observations.
Furthermore it firmly places ourselves and our own experiences within the very same fabric of reality it's describing. This is a rather inevitable problem of a fundamental theory of reality, since we ourselves are part of it and our inner working (whatever that is) must be layered on top of that fundamental machinery.
So we get from "the wave function somehow collapses" to "the phenomenological experience is somehow tied to the individual components of the wave function", which are somehow connected to the measurements.
You're right regarding the first part, we don't have a clear explanation of how the wave function might be related to subjective experience, but that's mostly because we don't have a good account of subjective experience at all.
For the second part, there's nothing mysterious going on at all: measurement is simply physical interaction.
What are "the individual components of the wave function"? They are a mathematical construction. The wave function can be decomposed in many (and infinity of) ways. Why is one decomposition chosen and not another? Or are all the decompositions equally "(un)real"? The issue is not so simple and it's related to the "mysteriousness" of the first part.
How do we know that opponents of MWI aren't succumbing to antibrunoist prejudice?
> [Giordano Bruno] is known for his cosmological theories, which conceptually extended the then-novel Copernican model. He proposed that the stars were distant suns surrounded by their own planets, and he raised the possibility that these planets might foster life of their own, a philosophical position known as cosmic pluralism.
I need to point out that he was burned at the stake for advocating cosmic pluralism.
Modern cosmology by definition doesn’t make different predictions than saying “cosmological models mention things like stars and galaxies, and those models fit our observations, but those stars and galaxies aren’t technically real.”
> I need to point out that he was burned at the stake for advocating cosmic pluralism
I need to point out that he wasn't. His heresy trial had nothing to do with his cosmic beliefs. He was convicted for teaching religious beliefs, as a Catholic, that were contrary to Catholic dogma. The church punishment for this was excommunication but the secular punishment was execution, so he got burned.
> A mathematical construct (the wavefunction) is taken as more real than basic empirical experiences like "measurements" and "we."
In chemistry, atoms are also taken as more real than "measurements" and "we". Some chemists would probably insist that "we" are actually composed of atoms.
The mathematical construct was developed/discovered to explain observations. Normally when we do that we consider the mathematical explanation to be “real” in that it is describing reality.
I mean, it seems strange to reject implications of a successful model if those implications aren't observable. E.g., I believe that stuff exists outside our light cone, because it would be unparsimonious for there to be an extra rule saying that stuff I can't access doesn't exist.
The MWI is literally obtained by taking traditional Copenhagen QM and removing the postulate of measurement. In other words, by the most genuine way of measuring the complexity of a theory, it is demonstrably simpler than Copenhagen! We just don't like it because it doesn't jibe with our experience... it feels weird.
When you have two theories that make the same predictions, but one is strictly simpler than the other, Occam's razor tells us to prefer the simpler one.
In my mind, MWI or something akin to it, is the way to go, and is generally the way I conceptually think about QM.
I agree MWI is the minimal "version" (don't get hung up on that word) of quantum mechanics. I take it as a given and consider the way we experience/interpret the universe as an insight into how the brain works.
I don't think you get to claim simplicity for the theory that results in an exponential explosion of entities. Simplicity isn't just about descriptive simplicity.
That's not the right way to frame MWI, there's no explosion of entities at all, just a single wave function that evolves. The wave function starts out being a function of n variables, evolves according to a differential equation and remains a function of n variables.
It is strange given my physics training to read this discussion.
Physicists do not directly apply Occam's razor in most circumstances, and we certainly don't do bookkeeping on how many “entities” there are, and your comment illustrates precisely why: how you count is not a given.
Here is something that did happen in classical mechanics: we transitioned from F_i = m a_i to Lagrangians even though they have roughly similar explanatory power. Here is an argument that was not made: "Lagrangians are truer because you don't have to postulate three equations of conservation of momentum and one of conservation of energy, you just have one law of least action." Nobody even declared a confident end to the tyranny of Newton's third law, even though Lagrangians no longer need it.
Furthermore, nobody said that classical field theory was "better" per Occam's razor merely because you were no longer bound by the tyranny of the least action principle and could now consider essentially a world in which F_i = m a_i was not universally true, to be replaced with a philosophical interpretation by some bloke Neverett who declares the on-shell fields "typical" and derives the least-action principle as a statement that "if you find yourself in a typical universe then almost surely your retroactive reconstruction of events satisfies the least action principle."
No, the many-worlds interpretation is thriving precisely because it calls physicists' attention to the importance of decoherence calculations in the understanding of various physical phenomena. It gives you an idea of how to model measurements that are somehow partial, or being continuously performed. Occam doesn't enter into the discussion in the first place.
You're right that you can't just count the number of postulates naively because there is generally not a well defined way to do so. A great example is the one you give: three laws of motion vs one law of least action. However, if I told you that it was possible to reproduce all of mechanics with only the first two of Newton's laws, then surely you'd agree that there wouldn't be a need for the third law and in that sense the new system of postulates would be simpler.
In other words, because MWI is obtained by removing a postulate from the usual formulation of QM, I think it's fair to say it's simpler. If, instead, MWI had been obtained by formulating all of QM in some other distinct framework with no mention of wave functions, measurements, the Schrödinger equation, etc., and it had one fewer postulate, then yes, I would agree that you can't arbitrarily say it's simpler.
Put another way, suppose you have some linear dynamical differential equation in n variables that you solve somehow. Then take that solution and expand it in some set of basis functions (e.g. a Fourier series). You wouldn't throw your hands up in the air and say "wow, that's so complex, look at all those infinite terms in the solution!". The complexity isn't really there; it just appears to be there because you've chosen to expand your solution in a basis that makes it appear really complex. Similarly, in the MWI we see something that looks complex simply because we've chosen to expand the solution in a set of states that makes sense to us (state1 = particle at location 1, state2 = particle at location 2, ...).
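A toy version of that basis-dependence point (my own example, in the discrete setting with numpy): a state that is a single spike in one basis has the maximum number of nonzero coefficients in the Fourier basis, yet by Parseval nothing about the state itself has changed.

```python
import numpy as np

n = 64
# A maximally simple state in the "position" basis: everything at site 0
spike = np.zeros(n)
spike[0] = 1.0

# The very same state expanded in the discrete Fourier basis
coeffs = np.fft.fft(spike, norm="ortho")

print(np.count_nonzero(np.abs(spike) > 1e-12))   # 1 nonzero term in one basis
print(np.count_nonzero(np.abs(coeffs) > 1e-12))  # 64 nonzero terms in the other
# Parseval: the content is the same, only the description changed
print(np.sum(np.abs(spike)**2), np.sum(np.abs(coeffs)**2))  # both 1.0
```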
With the Fourier example, there is a constant amount of information in the system, and so the apparent complexity in the Fourier basis representation is an illusion.
Is that the case with MWI? Is there a constant amount of information at time t and t+1? Note that I see a fundamental equivalence between information and entropy (of the computational sort), and so an exponential growth of computation required to get from t to t+1 is an inescapable theoretical burden.
To put it a different way, MWI seems to reify possibility. But the state of possibility grows exponentially in time, and so the theoretical entities grow exponentially.
Yes, there is a constant amount of information in the system. In fact, that's part of the beauty of MWI in contrast with Copenhagen. In MWI, the state at any point in time can be used to reconstruct the state at any other time. However, because of the collapse, that's not the case for Copenhagen. In other words, measurement in Copenhagen actually destroys information.
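A concrete way to see that contrast (toy sketch, my own construction, assuming numpy): unitary evolution is invertible, so the earlier state is always recoverable, while a projective "collapse" maps distinct earlier states to the same later one.

```python
import numpy as np

# A unitary (a rotation of a qubit) and a projector onto |0>
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P0 = np.array([[1.0, 0.0],
               [0.0, 0.0]])

psi = np.array([0.6, 0.8])

# Unitary evolution: invertible, the earlier state is recoverable
later = U @ psi
print(U.conj().T @ later)          # recovers [0.6, 0.8]

# "Collapse" onto |0>: different earlier states give the same result
a = P0 @ np.array([0.6, 0.8]);  a = a / np.linalg.norm(a)
b = P0 @ np.array([0.8, -0.6]); b = b / np.linalg.norm(b)
print(a, b)                        # both are [1., 0.]: the difference is gone
```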
As far as computational complexity goes, the same happens in classical mechanics. Start with 10^23 particles far apart but moving toward each other. Then simulating the first second is simple, but once they get close together, it gets hard with the computational complexity growing as time progresses (or alternatively the error growing for fixed computational resources).
I still don't follow. There is a constant amount of information as input into the system, but (from my understanding) the "bookkeeping" costs grow exponentially with time. This is different than the classical case where the complexity is linear with respect to time. A quick google search says that simulating quantum mechanics is NP-hard, which backs up this take. This bookkeeping is an implicit theoretical posit of a QM formalism. We can think of different ways to cash out this bookkeeping as different flavors of MWI, but we shouldn't hide this cost behind the nice formalism.
Comparing MWI to collapse interpretations, collapse is better regarding this bookkeeping as collapse represents an upper limit to the amount of quantum bookkeeping required. MWI has an exponentially growing unbounded bookkeeping cost.
Yes, that's right but that has to do with entanglement in QM and is not specific to MWI. In classical mechanics, a system of n particles is specified by 3n different functions of time - the three coordinates for each of the n particles. The complexity in terms of e.g. memory then scales linearly with the number of particles.
In QM by contrast we have entanglement, which essentially means that we can't describe one particle separately from all the other particles (if we could, then QM would be just as "easy" to solve as classical mechanics). Instead of 3n functions of time, we instead have a single function of 3n variables (plus time). The complexity of these functions does not scale linearly with n (imagine e.g. a Fourier series in one variable vs one for two variables)
So, you're right that QM is an exponentially harder problem to solve compared to classical mechanics, but this is because of entanglement and has nothing to do with Copenhagen vs MWI.
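To put rough numbers on that scaling (my sketch, just counting stored coordinates per particle versus amplitudes per basis state):

```python
# Storage needed (in numbers kept in memory) for n two-level systems
def classical_size(n):
    return 3 * n            # e.g. three coordinates per particle

def quantum_state_size(n):
    return 2 ** n           # one complex amplitude per basis state

for n in (10, 20, 40, 80):
    print(n, classical_size(n), quantum_state_size(n))
# 80 "qubits" already need ~1.2e24 amplitudes: the exponential cost of entanglement
```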
We don't like it because it can make no actual predictions about the physical world if it doesn't include measurements. And when you include them it's no longer that simple.
You can make predictions if you assume that you are in a (weighted) randomly selected world.
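A minimal sketch of what that looks like operationally (my own toy example, assuming numpy; the amplitudes are arbitrary): the branch weights |amplitude|^2 are used as selection probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Amplitudes of three "branches" (chosen so the weights sum to 1)
amps = np.array([0.6, 0.0 + 0.64j, 0.48])
weights = np.abs(amps) ** 2
print(weights.sum())  # 1.0 -- sanity check

# Predicted statistics: sample which branch "you" find yourself in
samples = rng.choice(len(amps), size=100_000, p=weights)
print(np.bincount(samples) / len(samples))  # roughly [0.36, 0.41, 0.23]
```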
Well rather than a single world, I think that the perceived identity exists in multiple highly similar and interacting worlds seen as an entity. Just like we have a size in physical space we also have a non-zero "size" in probability "space".
Uh... didn't early proponents of the many worlds interpretation actually say that it did? Say, Hugh Everett? If it is reinterpreted to say otherwise, is it really the same QM model anymore?
Nope, black holes have entropy proportional to the surface area of their event horizon. So the more stuff they engulf, the more their entropy increases, and thus they satisfy the 2nd law of thermodynamics just like everything else.
Has this been observed, or is it more: here's some math that makes entropy even possible because the alternatives sound unlikely? I'm betting on the weird, things like the universe being generative on a macro scale and entropic in local timespace, wherein dark matter is merely newly created matter not in another universe, but in this universe, maybe some unknown interactions between the unstoppable force of expansion and the immovable object of a black hole's gravity. I'm probably blathering, I'm not a physicist.
The Bullet Cluster (https://en.wikipedia.org/wiki/Bullet_Cluster) is a big reason why: it's a collision of two clusters of galaxies. We can see where the gas and stars are, but gravitational lensing indicates the bulk of the mass is elsewhere.