The Quanta write-up is a bit more neutral on this announcement. There is a computational result that was not included in the theoretical value the experiment is benchmarked against. Once that is reviewed, this difference may yet fade back into oblivion.
To clarify, for those not familiar with this topic: this experiment is making measurements at such exquisite precision that even the calculations for the theoretical prediction are extremely non-trivial, requiring careful estimation of many, many pieces which are then combined. Which is to say that debugging the theoretical prediction is (almost) as hard as debugging the experiment. So I would expect the particle physics community to be extremely circumspect while the details get ironed out.
The Quanta article explains it quite nicely. To quote their example of what has happened in the past:
> “A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.”
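To give a flavour of what "combining many, many pieces" looks like, here is a toy sketch. This is nothing like the real g-2 pipeline, and the numbers below (in units of 1e-11) are rough values quoted from memory, purely for illustration: the prediction is a sum of independently estimated contributions, the quoted uncertainty is those pieces combined in quadrature, and flipping the sign of even a small piece moves the central value by several times the total uncertainty.

```python
# Toy sketch only: rough numbers (in units of 1e-11) for illustration,
# nothing like the actual g-2 calculation.
import math

# (central value, uncertainty) for a handful of contributions
contributions = [
    (116_584_719.0, 0.1),   # "QED-like" piece: huge but very precisely known
    (154.0,         1.0),   # "electroweak-like" piece
    (6_845.0,       40.0),  # "hadronic-like" piece: small but hard to estimate
    (92.0,          18.0),  # an even smaller, even murkier piece
]

total = sum(v for v, _ in contributions)
sigma = math.sqrt(sum(u ** 2 for _, u in contributions))
print(f"prediction: {total:.1f} +/- {sigma:.1f}")

# A rogue minus sign on the smallest piece shifts the total by 2 * 92,
# i.e. by roughly four times the quoted uncertainty:
print(f"with a rogue minus sign: {total - 2 * 92.0:.1f}")
```

Debugging the prediction then means auditing every one of those pieces (tens of thousands of them in the real calculation, per the quote above), which is why it is nearly as hard as debugging the experiment.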
If the theoretical prediction can't be calculated until the experiment is done that motivates the choices of what and what not to approximate, is it really a prediction?
> If the theoretical prediction can't be calculated until the experiment is done that motivates the choices of what and what not to approximate, is it really a prediction?
Let me make that more meta.
If a theory is unable to predict a particular key value, is it still a theory?
This is not a hypothetical question. The theory being tested here is the Standard Model. The Standard Model is, in principle, entirely symmetric with regard to a whole variety of things in which we don't actually see symmetry; for example, the relative masses of the electron and the proton.
But, you ask, how can it be that those things are different? Well, for the same reason that we find pencils lying on their side rather than perfectly balanced on the tip: the point of perfect symmetry is unstable, and there are fields setting the value of each asymmetry that we actually see. Each field is carried by a particle. Each particle's properties reflect the value of the field. And therefore the theory has a number of free parameters that can only be determined by experiment, not theory.
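To make the pencil picture slightly more concrete, the standard textbook toy version (a generic sketch, not tied to any particular Standard Model parameter) is a single field with the potential

$$ V(\phi) = -\mu^2 \phi^2 + \lambda \phi^4, \qquad \mu^2, \lambda > 0. $$

The symmetric point $\phi = 0$ is a local maximum, the pencil balanced on its tip, while the stable minima sit at $\phi = \pm \mu / \sqrt{2\lambda}$. The equations are perfectly symmetric under $\phi \to -\phi$; the value the field actually settles into is not, and that settled value is something you have to measure.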
In fact there are 19 such parameters. https://en.wikipedia.org/wiki/Standard_Model#Theoretical_asp... has a table with the complete list. And for a measurement as precise as this experiment requires, the uncertainty of the values of those parameters is highly relevant to the measurement itself.
That’s a good (and profound) question, not deserving of downvotes.
It turns out that the simplified, paradigmatic “scientific method” is a very bad caricature of what actually happens at the cutting edge, when we’re pushing the boundaries of what we understand (not just in theory, but also in experimental design). Even on the theoretical front, the principles might be well understood, but making predictions requires accurately modeling all the aspects that contribute to the actual experimental measurement (not just the simple, principled part). In that sense, the border between theory and experiment is very fuzzy, and the two inevitably end up influencing each other; this is fundamentally unavoidable.
Unfortunately, it would require more effort on my part to articulate this, and all I can spare right now is a drive-by comment. Steven Weinberg has some very insightful thoughts on the topic, both generally and specifically in the context of particle physics, in his book “Dreams of a final theory” (chapter 5).
Philosopher Larry Laudan had a tripartite view. He proposed, IIRC, convergent processes between better (and more complete) measurements, better (and more complete) models and theory, and better instrumentation. Thus one could perhaps also include a fourth term: improving technology.
Thanks for the pointer. That sounds vaguely like a view I've been toying with. I'll be interested to see if his version of it is more rigorous than mine.
That's what the Duhem-Quine thesis in the philosophy of science is. The thesis is that "it is impossible to test a hypothesis in isolation, because an empirical test of the hypothesis requires one or more auxiliary/background assumptions/hypotheses".
Not exactly. Analytic solutions to simple problems will produce as many predictions as you want from them, and you can test them in a year, two years, or a century from then. These highly approximated calculations, in contrast, will come out one way or the other, depending on how many of which terms you add (this is especially common in quantum chemistry) - and nobody will decide on the "right" way to choose terms until they have an experiment to compare it against. That means that they aren't predicting outcomes, they're rationalizing outcomes.
It's not good to cherry-pick paragraphs from the whole article.
> But as the Brookhaven team accrued 10 times more data, their measurement of the muon’s g-factor stayed the same while the error bars around the measurement shrank. The discrepancy with theory grew back to three sigma by the time of the experiment’s final report in 2006.
No, the essence of my point is that the number of sigmas is meaningless when you have a systematic error, in either the experiment or the theoretical estimate; all that the sigmas tell you is that the two are mismatched. If a mistake could happen once, a similar mistake could easily happen again, so we need to be extremely wary of taking the sigmas at face value. (E.g., the DAMA experiment reports dark matter detections with over 40-sigma significance, but the community doesn't take their validity too seriously.)
Any change in the theoretical estimates could in principle drastically change the number of sigmas of mismatch with experiment, in either direction (but as the scientific endeavor is human after all, typically each side helps debug the other and the two converge over time).
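To spell that out in rough back-of-the-envelope terms (a sketch of the usual convention, not anything specific to this analysis): the quoted number of sigmas is essentially

$$ N_\sigma = \frac{|a_\mathrm{exp} - a_\mathrm{th}|}{\sqrt{\sigma_\mathrm{exp}^2 + \sigma_\mathrm{th}^2}}, $$

where the denominator contains only the uncertainties that were recognized and quoted. An unrecognized systematic shifts the numerator (or is missing from the denominator) without any warning, so $N_\sigma$ can be huge, or swing dramatically when the theoretical central value moves, without anything new having been discovered.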
“Similar” is doing a lot of work there: what constitutes “similar” basically dictates whether error correction has any future-proofing benefit or none at all.
Are you asking whether systematic errors are "priced in"/"automatically represented", or whether they are hidden inside the sigma calculation?
Systematic errors can easily remain hidden. The faster-than-light neutrino result had 6-sigma confidence[0], but 4 other labs couldn't reproduce it. In the end it was attributed to fiber-optic timing errors.
So if you don't know you have a systematic error, you can very easily get great confidence in fundamentally flawed results.
No. As written in another comment, imagine trying to determine whether two brands of cake mixes have the same density by weighing them. If you always weigh one of the brands in a glass bowl, but the other one in a steel bowl, you'll get an enormous number of sigmas, but in reality you've only proven that steel is heavier than glass.
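A minimal simulation of that analogy (all numbers invented), just to show how a constant, unaccounted-for bias turns into an arbitrarily large "significance":

```python
# All numbers invented: two brands of cake mix with identical true weight,
# but brand A is always weighed in a glass bowl and brand B in a heavier
# steel bowl. The raw comparison comes out wildly "significant".
import math
import random
import statistics

random.seed(0)
MIX = 500.0                   # grams of mix, the same for both brands
GLASS, STEEL = 150.0, 180.0   # grams of bowl
NOISE = 1.0                   # random scatter of the scale, in grams

brand_a = [MIX + GLASS + random.gauss(0, NOISE) for _ in range(100)]
brand_b = [MIX + STEEL + random.gauss(0, NOISE) for _ in range(100)]

diff = statistics.mean(brand_b) - statistics.mean(brand_a)
err = math.sqrt(statistics.variance(brand_a) / len(brand_a)
                + statistics.variance(brand_b) / len(brand_b))
print(f"apparent difference: {diff:.1f} g at {diff / err:.0f} sigma")
```

Taking more weighings only shrinks the statistical error bars; it does nothing to the 30 g bowl offset, so the apparent significance just keeps growing.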
> It's not good to cherry-pick paragraphs from the whole article
Isn't that exactly what you just did?
There's nothing wrong with showing only small quotes, the problem would be cherry picking them in a way that leads people to draw incorrect conclusions about the whole.
> if the lattice result [new approach] is mathematically sound then there would have to be some as yet unknown correlated systematic error in many decades worth of experiments that have studied e+e- annihilation to hadrons
> alternatively, it could mean that the theoretical techniques that map the experimental data onto the g-2 prediction could be subtly wrong for currently unknown reasons, but I have not heard of anyone making this argument in the literature
In the Scientific American article also currently linked on the front page a scientist & professor* at an Italian university is quoted as saying something along the lines of “this is probably an error in the theoretical calculation”. Would this be what the professor was referring to?
Edit: I’m not entirely sure whether they’re a professor, but here’s the exact quote
> “My feeling is that there’s nothing new under the sun,” says Tommaso Dorigo, an experimental physicist at the University of Padua in Italy, who was also not involved with the new study. “I think that this is still more likely to be a theoretical miscalculation.... But it is certainly the most important thing that we have to look into presently.”
https://www.quantamagazine.org/muon-g-2-experiment-at-fermil...