Hacker News
Long-sought decay of Higgs boson observed at CERN (home.cern)
243 points by chmaynard on Aug 28, 2018 | 83 comments



Furthermore, both teams measured a rate for the decay that is consistent with the Standard Model prediction, within the current precision of the measurement.

And now everyone: Noooo, not again.

(Explanation: it's well-known that the Standard Model can't be completely correct but again and again physicists fail to find an experiment contradicting its predictions, see https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo... for example)


I've been told by a physicist friend of mine that one of the big problems is the amount of data produced by LHC sensor arrays. While the LHC is running, 600,000,000 particle collisions happen every second. Every collision produces particles that decay in complex ways into more particles, and so on.

I forgot the numbers, but they have several layers of filtering between the sensors and long-term storage. First there are FPGA-based real-time filters, next to the sensors, which throw away most of the data as "noise." Then there are local CPUs which throw away most of the remaining data, again classified as "noise" or uninteresting. Finally, what remains (30,000 TB/year) is stored long-term for later analysis by physicists.

All levels of filtering and analysis, from the FPGA to the physicist's algorithms, make use of the Standard Model itself and the rest of known physics to figure out what's "interesting" and what's "noise."

One big problem, then: how can we find new things if we are only looking for what we already know? Hence the need for machine learning and automatic pattern discovery.
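A toy sketch of that cascade, just to show the shape of it (illustrative only; the event format, thresholds and stage names are invented, this is not CERN's actual trigger code):

    import random

    def generate_events(n):
        # Stand-in for the detector front-end: each "event" is just a fake energy reading.
        for _ in range(n):
            yield {"energy": random.expovariate(1.0)}

    def hardware_trigger(events, threshold=3.0):
        # Stage 1 (FPGA-like): cheap real-time cut that discards most events as "noise".
        return (e for e in events if e["energy"] > threshold)

    def software_trigger(events, threshold=5.0):
        # Stage 2 (CPU-farm-like): tighter, more expensive selection on what survived stage 1.
        return (e for e in events if e["energy"] > threshold)

    kept = list(software_trigger(hardware_trigger(generate_events(1_000_000))))
    print(f"kept {len(kept)} of 1,000,000 events for long-term storage")

The point being: every stage encodes some notion of "interesting", and anything that doesn't match it is discarded before a physicist ever sees it.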


Well, the standard model can be correct. It is correct until some experiment proves otherwise.

It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM.


The standard model has no conception of gravity, so it is most certainly incomplete, at the very least.


> It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM.

https://en.wikipedia.org/wiki/Magic_number_(programming)#Unn...

> The term magic number or magic constant refers to the anti-pattern of using numbers directly in source code


Who is going to request changes when doing a code review on a pull request from God?


But which branch do you pull from? God has been forked so many times, it's hard to keep track. There are several distros available, so I guess you just get to pick the one that checks the most boxes for your needs at the time.


God doesn't use GitHub.

You have to run `git format-patch` and email the results to Him.


Yeah him and Torvalds alike, maybe it's an ego thing. Or a graybeard thing?


I honestly don't get the "No, not again". Could you elaborate? Thanks

edit: thanks for the explanation


There are many ways to extend the standard model and it would be nice to have additional constraints on the extensions (which would come from observing behavior not described by the standard model).


> There are many ways to extend the standard model and it would be nice to have additional constraints on the extensions (which would come from observing behavior not described by the standard model).

For any experts reading this: what are plausible-looking extensions of the standard model which can probably be challenged/"proved" by current generation accelerators (in particular the LHC) and would lead to interesting extensions of the standard model?

As far as I am aware, the LHC has, for example, found no sign of many variants of supersymmetry, which was a plausible candidate for this in the past:

> https://www.scientificamerican.com/article/supersymmetry-fai...

> https://www.quantamagazine.org/complications-in-physics-lend...


I'm no theorist, so this is perhaps my jaded view, but right now my impression is that the phase space of theories is too large for someone to have a single prediction. The situation seems more like the LHC experiments are looking for ANY potential discrepancies with the standard model and then theorists will happily write up some reason for why it has to be that way. I went to a conference a few years ago where the LHC experiments announced that previous hints of a discrepancy with the SM evaporated with more statistics, which resulted in a whole session of theorists having to cancel / completely redo their talks.


> The situation seems more like the LHC experiments are looking for ANY potential discrepancies with the standard model and then theorists will happily write up some reason for why it has to be that way.

Until recently they were mostly looking for specific deviations predicted by extensions. They only recently announced that they're now going for a broader search.


>theorists will happily write up some reason for why it has to be that way.

They'll write down a model that describes it, which is the theorists' job to begin with.


I imagine they can't improve upon the standard model if nothing they find ever contradicts what they already know. Or rather, what they can already conclude from the standard model.


Are there so many parts that could be wrong?


As well as ignoring gravity, the standard model also doesn't predict or explain any of those observations labelled "dark matter" and "dark energy".

We can add an extra parameter to general relativity (known as the "cosmological constant") to describe "dark energy" observations via a kind of 'anti gravity'; although we still don't fully understand what that means, or whether it's a correct description. It's also unclear whether this would have anything to do with quantum theories (like the standard model).

General relativity can explain "dark matter" observations by assuming there is more mass/matter than we can see (i.e. it's electrically neutral and doesn't interact with light). Since the standard model tries to describe all of the fundamental constituents of matter, and forces including electromagnetism (light), having nothing to say about such a seemingly large amount of stuff is a rather large discrepancy in the standard model.

AFAIK the standard model also says that neutrinos have zero mass; yet we've observed them changing flavor in flight ("neutrino oscillation", where each sort of neutrino can turn into the others). Such a change requires some amount of time to happen. Particles with zero mass always travel at the speed of light (like photons, and hypothetical gravitons) and hence don't experience any time passing (this is a consequence of special relativity). So particles with zero mass can't change in flight, so neutrinos can't have zero mass. I don't think we've measured their masses very accurately yet; we know they're very small, but they can't all be zero.
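To spell out the special-relativity step (a rough sketch, using the standard time-dilation formula):

    d\tau = dt \sqrt{1 - v^2/c^2} \to 0 \quad \text{as } v \to c

A particle moving at exactly c accumulates zero proper time between production and detection, so there is no "internal clock" against which a decay or oscillation could unfold; any nonzero mass keeps v < c and the proper time positive.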


A big one is that the SM straight up does not account for neutrino mass, which we know neutrinos must have due to the flavor oscillations observed from sources such as the Sun. [1]

[1] https://en.wikipedia.org/wiki/Solar_neutrino_problem

OOPS: I see a sibling comment also covers this. Oh well, I'll keep this one up since it's slightly more eye-catching.


Crucially: it really doesn't work with gravity or relativity.


The standard model incorporates special relativity, but you're right in that general relativity doesn't mesh with the standard model.


To avoid confusion: "relativity" here must be read as general relativity. The standard model is a relativistic quantum field theory and hence already takes special relativity into account.


You can even formulate the standard model on a curved background metric that arises from GR. Just don't try to make the metric itself dynamical.


If I have a conversation with you and you agree with everything I say, I learn nothing.

If we have a conversation and you reveal that my assumptions are wrong, I can go back, rethink everything, and come out with a stronger world view.

Basically, this experiment didn't tell scientists anything they didn't already know; it didn't prove them wrong, and being proven wrong is what might lead to newer, more interesting models that reveal more about the universe.


> I honestly don't get the "No, not again". Could you elaborate?

As the cosmologist Sean Carroll said in his podcast, particle physicists haven't really been surprised by an observation since the 1970s. Presumably many were hoping for a surprise.


> within the current precision of the measurement

The error bars are ±20%, so there's quite a lot of wiggle room for new physics.


Does this increase the likelihood that we need a bigger collider to finally reach new physics?


To guarantee new physics, you'd need a roughly 10^16 TeV collider, so about one-thousand-trillion times more powerful than the LHC, which runs at 14 TeV!

Context: at 10^16 TeV you start probing the Planck energy scale, where gravity becomes strong enough to influence particle interactions. In nature, these energies are reached only inside black holes and just after the big bang. The standard model does not describe gravity, so it has no predictive power at this scale, which is why the above phenomena are a complete mystery to us.

The standard model gets away with this at the LHC energy scale because gravity is so weak it can literally be neglected in the calculations.

It's of course possible we'd see some new physics before then, but it's not guaranteed; building a collider 10 times more powerful would be a bit of a stab in the dark.
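For reference, the rough arithmetic behind that number (using the textbook value of the Planck energy):

    E_{Planck} = \sqrt{\hbar c^5 / G} \approx 1.22 \times 10^{19} \text{ GeV} \approx 1.22 \times 10^{16} \text{ TeV},
    \qquad E_{Planck} / E_{LHC} \approx 1.22 \times 10^{16} / 14 \approx 10^{15}

i.e. about a thousand trillion times the LHC's collision energy.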


I wonder if it’s harder to get funding for that. With the LHC they at least expected the Higgs boson, and hoped for signs of supersymmetry, which seem to have failed to materialize. With only hand-waving and hopeful hypotheses left, and ever-increasing costs... :-/


The easy ask now is for a precision electron-positron collider to study the Higgs itself. Lepton/anti-lepton colliders are as clean as an experiment can be, in stark contrast to hadron colliders, which are as clean as a car accident.

With the improved understanding of the Higgs that we have now, an e+/e- machine can be custom-built to study it in great detail. The Higgs is the newest fundamental particle to physics, and the first/only known scalar. We should expect that it has more to tell us than simply its own existence.

Even when the LHC was being planned, it was the discovery machine, and precision experiments were to follow. In my opinion, only when those measurements are complete should we consider a huge leap toward higher energy, unless new accelerator technology emerges.


> The Higgs is the newest fundamental particle to physics, and the first/only known scalar.

A question to physicists: Is there (strong) evidence that other scalars might exist (if yes: at which energy scale)? If not: are there attempts at theories that predict why there is only one scalar?


I'm unaware of a particularly compelling argument for another scalar. Scalars are easier for theorists to use when trying out a new theory, so they abound in theoretical physics.

At some level, it is something of a surprise to have found that there is just one particle responsible for the Higgs mechanism, and it is indeed spin-0, as Nature hasn't given us one of those before. Prior to the discovery of the Higgs, perhaps the majority of physicists were betting on something Higgs-like, but not quite the pure vanilla Standard Model Higgs.

I would not bet any money on it, but if one were looking for another scalar, one might find it in whatever mechanism underlies dark energy. Many theoretical models for dark energy use scalars, again, because they are easier...


It means that another way in which the standard model could have been broken doesn't appear, at first glance, to be broken.

Improved colliders, and improved experiments generally, tend to increase the number of ways in which we can search for a departure from the plan.

What we may need, at least as much as new colliders, is the right insight to open a new way forward. The standard model is large and complicated -- there aren't that many people who really understand its complexity enough to find something analogous to Gell-Mann's eightfold way.


At roughly what scale would new colliders need to be to open new paths?

Are proposals estimating new efforts at hundreds of km? Thousands?


The problem is that it's very difficult to rule out most hypotheses, since they can be tweaked to avoid whatever a particular experiment said. Usually this means that colliders can't rule out a hypothesis; they can only put a limit, like "no effect is observed below energies of 14 TeV".

Rather than trying to rule out a hypothesis completely, it's easier to place limits outside of which a hypothesis is essentially useless. IIUC, if the LHC didn't spot the Higgs mechanism below 14 TeV then that wouldn't work as an explanation of other particles' mass. At that point there might have still been a Higgs mechanism at higher energies, but we wouldn't really care either way since the whole point of coming up with the Higgs mechanism was to explain the mass of other particles: if it can't do that, it becomes inconsequential. That's why the LHC was so important for the Higgs: we would either find it, and hence have a better explanation of particles' mass; or we would know that particle mass doesn't follow the Higgs mechanism.

At the moment we have a bunch of hypotheses which predict effects that larger colliders might see, but I don't think we have (feasible) limits which let us discard these hypotheses. Hence new colliders don't have a goal to aim at: it's just a case of going as big as possible to have the highest chance of seeing new effects. Yet we might see nothing, and that wouldn't actually tell us much, since the predictions could just be tweaked again.


An analogy I like is a person in a primitive culture planning an expedition to determine the curvature of their planet. How far do they have to go until it is detectable? Well, it depends on the curvature, which is what they're trying to find out in the first place.


I've been told it's physically impossible to make a bigger collider (any physicists around?), which is partly why the LHC was such a big deal: it's the end of a technological path. We can make different types of colliders though.

Edit: Nope I'm wrong.


Where'd you hear that? It's absolutely not true. The LHC itself has been upgraded several times during its life and there are several designs for next generation colliders (such as the Future Circular Collider).


There has only been one upgrade to the LHC by my count (splice consolidation in LS1).


There was a major upgrade in 2013-2015 which doubled the energy (from 3.5 TeV per beam to 6.5 TeV per beam), but the luminosity was increased in 2016 and 2017 as well.


> There was a major upgrade in 2013-2015 which doubled the energy (from 3.5 TeV per beam to 6.5 TeV per beam)

Yeah, that was the splice consolidation.

> but the luminosity was increased in 2016 and 2017 as well.

From what I know, that's through conditioning, repairs and improved techniques (BCMS, ATS, anti-levelling). The increase from 2015 to 2016 was because they didn't finish the intensity ramp in 2015 as the scrubbing campaign lasted so long. 2016 had the TIDVG dump problem as well as outgassing from the injection kickers. 2017 was marred by 16L2.


Sorry, I don't understand this point.

If a model of the universe was completely correct, wouldn't it have equivalent complexity to the universe and hence not be a model?


No.

This is false both due to the nature of mathematical modelling in physics, and the ad-hoc nature of the term "complexity" when comparing the universe and the science we use to understand it (or, perhaps, only model it).

To elaborate on the first point: no measurement can be perfect enough for there to be no uncertainty on it. It is therefore impossible to show that a model is completely correct, although it can (obviously) agree with experiment to within current experimental uncertainties - like the standard model does now, so far.


I think the answer to your question is that we're not modeling the universe itself, but the laws governing the dynamics of the universe. Hence we don't need to know all information about the universe.


So with the laws, how do we know when we're done?

For example, we had Newtonian Mechanics and that looked good for a couple of centuries, but then it turned out that it was too simple and we needed to add more laws.


We're so far away from being done it's not an applicable question just yet, but it may not be possible to know if there's something outside our ability to comprehend or perceive, even with sophisticated measuring tools.

We would have to settle for merely being able to explain every single phenomenon in the observable universe at every scale.


Look up a recent essay by the head of the Institute for Advanced Study in Princeton where, in my rough summary from rough memory, he explained that physics can have lots of models that all fit all the data equally well. So it can be tough to find the one correct model, the one book of physics, the one way the universe runs, etc.

Then see a recent essay at Quanta Magazine that explains that physics has long looked to some largely esthetic concerns especially about symmetry to pick and choose among alternate theoretical explanations -- again my rough summary from my rough memory.


I found an essay by the head of the IAS, published in Quanta Magazine. Could the two essays be one and the same?

https://www.quantamagazine.org/there-are-no-laws-of-physics-...


The article I read on beauty (e.g., symmetry) as a way to select candidate theories in physics was

Sabine Hossenfelder, Lost in Math: How Beauty Leads Physics Astray

at

https://www.basicbooks.com/titles/sabine-hossenfelder/lost-i...

and is an excerpt from her book of the same name.

I was led to that excerpt by page

https://www.quantamagazine.org/authors/sabine-hossenfelder/

from page

Sabine Hossenfelder, "The End of Theoretical Physics As We Know It", August 27, 2018.

at

https://www.quantamagazine.org/the-end-of-theoretical-physic...


I'd say you're done whenever you've been able to predict/explain all phenomena that you believe you should be able to predict/explain using your models? Not sure if that answers your question, but if not then I'm not sure I understand what aspect of this you're confused about.


Rephrasing other people's responses: I think it's sorta like the halting problem. If we could solve "how do we know if we've discovered all laws of the universe", then we would likely already need knowledge of all the laws, since it likely depends on everything else.


This makes me think of Plato's Cave.


> how do we know when we're done?

It's not currently (maybe ever?) possible to know when you're 'done' with an experimental science. With Newtonian mechanics, we didn't know we needed something better until we did.


No? Think of the game of life. Low model complexity, high universe complexity.
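For what it's worth, the entire "law of physics" of that toy universe fits in a few lines (a minimal sketch; the board is just a set of live-cell coordinates):

    from itertools import product

    def step(live):
        # One tick of Conway's Game of Life; `live` is a set of (x, y) cells.
        def neighbours(x, y):
            return {(x + dx, y + dy)
                    for dx, dy in product((-1, 0, 1), repeat=2)
                    if (dx, dy) != (0, 0)}
        candidates = live | {n for cell in live for n in neighbours(*cell)}
        return {c for c in candidates
                if len(neighbours(*c) & live) == 3
                or (c in live and len(neighbours(*c) & live) == 2)}

    # A glider: five cells whose long-run behaviour is far richer than the rule itself.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))

The rule is a couple of lines; the zoo of structures it produces is effectively endless.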


I think the term used is emergent complexity.

So when you go from particle physics to chemistry, new complexities emerge that can't be explained in the realm of particle physics alone (iirc).

Sadly, certain "sciences" still cling to the idea that they can simply aggregate the results from many of their "particles" and get a solution for larger systems.


>So when you go from particle physics to chemistry, new complexities emerge that can't be explained in the realm of particle physics alone (iirc).

Correct me if I'm misunderstanding, but I think you may be confusing general theory with practical, applicable models here? Yes, fundamental principles of interaction can combine at scale to create new large-scale effects, but that doesn't change the fact that they came out of fundamental principles, nor does it mean they can't be "explained" via those principles. There is no magic that pops into existence up the chain. The asymmetry of water molecules and the way their electron clouds are distributed create all sorts of fascinating effects in bulk water, but those effects still come directly out of physics, of course.

The issue in practice, however, is that the level of computing necessary to accurately model reality at scale from fundamentals matches or exceeds actually doing it in reality, and for us rapidly becomes absolutely, utterly infeasible for anything but the simplest systems. "New emergent complexities" absolutely "can be explained" from a correct lower-level set of principles, but that doesn't mean we can actually crunch the math at any scale we want. So we need higher-level bulk models too, at many levels all the way up, which are good enough to be effective approximations to a given level of accuracy in practical computing time. In bulk, the low-level fundamentals often average out due to random variance in sufficient quantity and become irrelevant to whatever we care about, so there is no need to do it that expensively (even if we could). But that isn't the same thing as the fundamentals being wrong somehow, or not being at the root of everything above.


Right. There are roughly 10^15 atoms in a speck of dust. A terabyte is 10^12 bytes. Our inability to simulate does not mean our models are wrong.
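A back-of-the-envelope version of that (assuming you store only one double-precision position vector per atom, ignoring momenta, spins, bonds, and so on):

    10^{15} \text{ atoms} \times 3 \times 8 \text{ bytes} = 2.4 \times 10^{16} \text{ bytes} \approx 24{,}000 \text{ TB}

and that is just a static snapshot of the dust speck, before simulating a single time step.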


The model isn't as complex as the universe in that sense, but filling out that model with the exact state of the entire universe would be. Just like how a chaotic function can be very simple, but give rise to incredibly complex outputs.


Only if the universe is not reducible in any way.

But why would a non-reducible universe be so predictable?


Does anyone here know of a good account (something I could read, preferably) suitable for someone without much knowledge of modern physics of how the Standard Model came to be constructed on the basis of experimental evidence?

Most descriptions of particle physics that I have encountered begin right away with an enumeration of the different types of particles, and the statement that some of them are composed of various combinations of quarks, but don't include (at least not without investing some hours of my time) any indication of how these things are observed, what set of data this model fits, what is the nature(if any) of a quark independent of the hadron in which it is a constituent, what are the laws governing quarks that cause these particles to arise, etc.

I don't feel I'm learning much of anything by just memorizing the names of all the members of the particle zoo. But it seems I must spend some hours doing this before I can gain any understanding of what particle physics means, or how particle physics is done?


The book "Fields of Color" is short, math free, and largely organized by the historical progression of discovery.

> any indication of how these things are observed

In the last 50 years or so, the bulk of the evidence has come from particle accelerators, but there have been meaningful results from other experiments as well. Sean Carroll organizes the current state of physics into two broad categories: intensity experiments, like the LHC, which are attempting to reach energy concentrations we haven't probed before, and sensitivity experiments, which observe natural but rarely produced or interacting particles, like neutrino detectors.

> what set of data this model fits

All the data. The standard model is the best model we've found to explain all experiments observed in the history of physics.

> what is the nature(if any) of a quark independent of the hadron in which it is a constituent

Quarks and gluons are bound together inside hadrons (such as protons and neutrons) by the strong force. This force is, as its name indicates, very strong, and unlike the other forces it does not weaken as bound quarks are pulled apart. The way it works out, the force is such that if you try to pull two bound quarks apart, the energy you add is sufficient to create new quarks. So lone quarks never appear; they're always bound into a composite of two or three, and if you try to pull them apart, you just end up making a second composite when they separate.

> But it seems I must spend some hours doing this before I can gain any understanding of what particle physics means, or how particle physics is done?

The blunt truth is that fully understanding the standard model requires a lot of non-trivial mathematics. I can't work with the math, but I've read through enough textbooks that I've got some intuition for the big picture now. This isn't a topic where you can swoop in, spend 15 minutes, and suddenly understand it all. It's not going to take just some hours; it'll take much, much more time than that.

Some topics cannot be simplified into a tidy summary that can be skimmed in a couple hours.


>> what set of data this model fits

> All the data. The standard model is the best model we've found to explain all experiments observed in the history of physics.

This isn't the full story. The Standard Model does not explain neutrino mass (which we know exists from neutrino oscillations) or dark matter & dark energy. These are very big open questions!


Yeah, poor wording on my part. That's why I said "best explanation", but I should have emphasized we do know the standard model is incomplete. There's just no simple path forward at this point.


Or gravity, which is an even bigger open question.


Or antigravity, gravitomagnetism. This would be much cheaper to explore. You just need fast-rotating supraconductors, ideally on a planetary scale.


Could you describe what precisely you mean by antigravity?


"supraconductors"?

Edit: apparently an older term for "superconductors"


Nope, just my phone autocompletion at work.

For antigravity just look up M. Tajmar's work on the Thirring-Lense effect: https://patents.google.com/patent/WO2007082324A1/en?inventor...

To understand gravity you don't need to build super-expensive devices to find the particle interpretation of this wave force. The Higgs makes no sense outside the standard model. Studying the wave interpretation as an attracting force is easier and also alignable with general relativity. Explaining a wide-reaching attractive force as a particle really makes no sense at all (outside QM), as Heisenberg also complained.


> The Standard Model does not explain neutrino mass

Not as originally formulated (because back then neutrinos were thought to be massless) but it's straightforward to extend that original version to include neutrino masses, by giving leptons their own equivalent to the quark sector's CKM matrix [1], the PMNS matrix [2]. In modern parlance, "Standard Model" means this updated version.

Granted, if you do just that, you make neutrinos Dirac spinors (just like the quarks) and there is no obvious reason for why they should be so much lighter than the other fermions. Giving them a Majorana mass induced by a much heavier (GUT-scale) equivalent to the Standard Model Higgs would provide a natural explanation for that mass hierarchy through the seesaw mechanism [3], and that would be true Beyond the Standard Model physics.
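To make the "natural explanation" concrete: in the type-I seesaw the light neutrino mass comes out as roughly m_D^2 / M_R, so (with illustrative orders of magnitude, not fitted values)

    m_\nu \approx \frac{m_D^2}{M_R} \sim \frac{(100 \text{ GeV})^2}{10^{15} \text{ GeV}} = 10^{-11} \text{ GeV} = 0.01 \text{ eV},

i.e. an electroweak-scale Dirac mass suppressed by a GUT-scale Majorana mass lands right in the ballpark of the observed neutrino mass splittings.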

[1] https://en.wikipedia.org/wiki/Cabibbo–Kobayashi–Maskawa_matr...

[2] https://en.wikipedia.org/wiki/Pontecorvo–Maki–Nakagawa–Sakat...

[3] https://en.wikipedia.org/wiki/Seesaw_mechanism


The introductory chapter of "Introduction to Elementary Particles" by David Griffiths. It doesn't require much knowledge of physics but tells the story of how the theory came to be over the 20th century (based on experimental evidence and the theoretical work happening in parallel).


Idk if I would recommend Griffiths; his texts are usually extremely rigorous. I can't speak for this one, but that's certainly true of his ED and QM books.


Yes I agree that he has a full semester+ of undergraduate/early graduate "course work" in his books... but the first chapter of his particle physics book has almost zero math -- it's just a great story.


His elementary particles book - I say this having skimmed but not bought it, so I may be wrong - looks like a good place to start, by virtue of containing both a full introduction to the phenomenology behind particle physics and an introduction (but only an introduction, compared to, say, Weinberg) to the mathematics behind it.


Try

https://arxiv.org/abs/hep-ph/0401010

(maybe with some Wikipedia lookups along the way) by one (arguably the greatest) of its masterminds.


https://theoreticalminimum.com/courses/particle-physics-2-st...

Not something you can read, but that's what you're looking for.


Hossenfelder’s book "Lost in Math" (one review here: http://www.math.columbia.edu/~woit/wordpress/?p=10314 ) covers a bit of why confirming the standard model is a bit disappointing. The rough idea is that the standard model is known to be limited (problems unifying with general relativity, and no ready explanation for dark matter), so one wants to see an explicit experimental-scale (non-galactic-scale) exception to help find a replacement theory.


Thanks! That review alone is worth a HN submission and I'm not even a hobbyist physicist by any stretch of the imagination.


Slightly off-topic, and maybe this is just the way physics-related press releases and layman articles in particular are written, but they often tend to have a part along the lines of "now that we have finally proven this, imagine what new research/flying cars this discovery will open up for us!"

The truth is there whether attempts are made to prove it or not. Also proving/disproving does not alter the fact. If you prove it to be true, great, but it was true already before. Now you just have the fact formalized on paper, so to say.

Now, couldn't one just make an educated assumption that some-particle-physics-problem has already been proven, and then see what new things are made possible with this assumption?

And then try some low hanging fruit enabled by this "virtually proven" assumption. Any success would indirectly prove the assumption, too, or at least give strong evidence in favor. Also, it might make interesting discoveries happen faster.


How do they use Machine Learning to analyze the data?


Machine learning is used to reconstruct individual particles in the detector and also to separate signal processes from background processes.

A general example... process A and process B can both have electrons in their final states (with other objects...). ML is used to separate A and B based on the kinematic properties of the electrons (in combination with those other objects). Also, further upstream, ML was used just to know that we had an electron to begin with!

A lot of BDTs are used, with deep learning under very active investigation. For example, when looking for the Higgs decaying to two bottom quarks, a slew of ML algorithms are used to identify so-called "b-jets" (jets which are identified as originating from a b-quark). In ATLAS we have low level taggers using deep neural networks (using Keras) in combination with higher level taggers using BDTs. Another example is the recent ttH observation, where XGBoost was used [1].

[1] https://atlas.cern/updates/physics-briefing/observation-tth-...
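A loose illustration of the signal-vs-background piece (not ATLAS code; the "kinematic features" below are synthetic stand-ins generated with scikit-learn, and it assumes you have xgboost and scikit-learn installed):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier

    # Synthetic stand-in for per-event kinematics (pT, eta, invariant masses, ...);
    # label 1 = "signal" process, label 0 = "background" process.
    X, y = make_classification(n_samples=50_000, n_features=12, n_informative=6,
                               weights=[0.95, 0.05], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                        random_state=0)

    # A gradient-boosted decision tree classifier, conceptually like the BDTs
    # used for b-tagging and signal/background separation.
    clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    clf.fit(X_train, y_train)

    # Events with a high score would populate the "signal region" of an analysis.
    scores = clf.predict_proba(X_test)[:, 1]
    print("ROC AUC:", roc_auc_score(y_test, scores))

The real analyses of course train on detailed detector simulation and validate in data control regions, but the basic shape (features in, per-event score out, then cut or fit on the score) is the same.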


CERN did a particle tracking competition with Kaggle which illustrates at least one use case: https://www.kaggle.com/c/trackml-particle-identification


They did another Kaggle contest four years ago: https://atlas.cern/updates/atlas-news/machine-learning-wins-...




