ssivark
To clarify, for those not familiar with this topic, this experiment is making measurements at such exquisite precision that even the calculations for the theoretical prediction are extremely non-trivial and require careful estimation of many many pieces which are then combined. Which is to say that debugging the theoretical prediction is (almost) as hard as debugging the experiment. So I would expect the particle physics community to be extremely circumspect while the details get ironed out.
The Quanta article explains it quite nicely. To quote their example of what has happened in the past:
> ”A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.”
whatshisface
If the theoretical prediction can't be calculated until the experiment is done that motivates the choices of what and what not to approximate, is it really a prediction?
btilly
> If the theoretical prediction can't be calculated until the experiment is done that motivates the choices of what and what not to approximate, is it really a prediction?
Let me make that more meta.
If a theory is unable to predict a particular key value, is it still a theory?
This is not a hypothetical question. The theory being tested here is the Standard Model. The Standard Model in principle is entirely symmetric with regards to a whole variety of things that we don't see symmetry in. For example the relative mass of the electron and the proton.
But, you ask, how can it be that those things are different? Well, for the same reason that we find pencils lying on their side rather than perfectly balanced around the point of symmetry on the tip. Namely that the point of perfect symmetry is unstable, and there are fields setting the value of each asymmetry that we actually see. Each field is carried by a particle. Each particle's properties reflect the value of the field. And therefore the theory has a number of free parameters that can only be determined by experiment, not theory.
In fact there are 19 such parameters. https://en.wikipedia.org/wiki/Standard_Model#Theoretical_asp... has a table with the complete list. And for a measurement as precise as this experiment requires, the uncertainty of the values of those parameters is highly relevant to the measurement itself.
ssivark
That’s a good (and profound) question, not deserving of downvotes.
It turns out that the simplified, paradigmatic “scientific method” is a very bad caricature of what actually happens on the cutting edge, when we’re pushing the boundaries of what we understand (not just theory, but also experimental design). Even on the theoretical front, the principles might be well understood, but making predictions requires accurately modeling all the aspects that contribute to the actual experimental measurement (not just the simple principled part). In that sense, the border between theory and experiment is very fuzzy, and the two inevitably end up influencing each other; that is fundamentally unavoidable.
Unfortunately, it would require more effort on my part to articulate this, and all I can spare right now is a drive-by comment. Steven Weinberg has some very insightful thoughts on the topic, both generally and specifically in the context of particle physics, in his book “Dreams of a final theory” (chapter 5).
If you don’t have access to the book, in a pinch, you could peruse some slides that I made for a discussion: https://speakerdeck.com/sivark/walking-through-weinbergs-dre...
6gvONxR4sf7o
Sometimes it's like unit tests, where you might get the test itself wrong at first, but that still helps you get closer and write better tests.
raincom
That's what the Duhem–Quine thesis in the philosophy of science is. The thesis is that "it is impossible to test a hypothesis in isolation, because an empirical test of the hypothesis requires one or more auxiliary/background assumptions/hypotheses".
platz
It's not good to cherry-pick paragraphs from the whole article.
> But as the Brookhaven team accrued 10 times more data, their measurement of the muon’s g-factor stayed the same while the error bars around the measurement shrank. The discrepancy with theory grew back to three sigma by the time of the experiment’s final report in 2006.
ssivark
No, the essence of my point is that the number of sigmas is meaningless when you have a systematic error — in either the experiment or the theoretical estimate — all that the sigmas tell you is that the two are mismatched. If a mistake could happen once, a similar mistake could easily happen again, so we need to be extremely wary of taking the sigmas at face value. (E.g.: the DAMA experiment reports dark matter detections with over 40 sigma significance, but the community doesn’t take their validity too seriously.)
Any change in the theoretical estimates could in principle drastically change the number of sigmas mismatch with experiment in either direction (but as the scientific endeavor is human after all, typically each helps debug the other and the two converge over time).
eloff
> it's not good to cherry-pick paragraphs from the whole article
Isn't that exactly what you just did?
There's nothing wrong with showing only small quotes, the problem would be cherry picking them in a way that leads people to draw incorrect conclusions about the whole.
jessriedel
That new alternative approach is considered substantially less reliable by most experts.
https://mobile.twitter.com/dangaristo/status/137982536595107...
From Gordan Krnjaic at Fermilab:
> if the lattice result [new approach] is mathematically sound then there would have to be some as yet unknown correlated systematic error in many decades worth of experiments that have studied e+e- annihilation to hadrons
> alternatively, it could mean that the theoretical techniques that map the experimental data onto the g-2 prediction could be subtly wrong for currently unknown reasons, but I have not heard of anyone making this argument in the literature
https://mobile.twitter.com/GordanKrnjaic/status/137984412453...
elliekelly
In the Scientific American article also currently linked on the front page a scientist & professor* at an Italian university is quoted as saying something along the lines of “this is probably an error in the theoretical calculation”. Would this be what the professor was referring to?
Edit: I’m not entirely sure whether they’re a professor, but here’s the exact quote
> “My feeling is that there’s nothing new under the sun,” says Tommaso Dorigo, an experimental physicist at the University of Padua in Italy, who was also not involved with the new study. “I think that this is still more likely to be a theoretical miscalculation.... But it is certainly the most important thing that we have to look into presently.”
beezle
On the BMW collaboration with the lattice qcd computational estimate -
This is a pre-print https://arxiv.org/abs/2002.12347
This is the link to the Nature publication: https://www.nature.com/articles/s41586-021-03418-1
atty
As someone who has worked in fields that use lattice calculations (on the experimental side), the new calculation is interesting, but I would not say it’s particularly convincing yet. Lattice calculations are VERY difficult, and are not always stable. I am not questioning whether they did their work well or not, just pointing out that in high energy physics and high energy nuclear physics, many times our experimental results are significantly better constrained and also undergo significantly more testing via reproduction of results by other experiments than our theory counterparts’ work. Is it possible that all of our previous experiments have had some sort of correlated systematic error in them? Unlikely, but yes. Is it more likely that this lattice calculation may be underestimating its errors? Much more likely. Another interesting option is that one of the theoretical calculations was actually done slightly wrong. My first guess would be the lattice result, since it’s newer, but both procedures are complicated, so it could be either.
glofish
I am not sure I follow the logic. The new computation aligns with the experiment.
Why is it more likely to be wrong than the calculation that shows the theory deviating from the experiment?
atty
The old calculation relies on older experimental results that have been verified by multiple experiments - so if the older value is wrong, it means either the calculation was done wrong (possible), or the experiments all have had a significant correlated systematic error that has never been caught (also possible). However, I’d say both of those things are relatively unlikely, when compared to the probability of some small error in a new paper that was just released that uses a new method that involves lattice calculations. This is all a balance of probabilities argument, but from my experience in the field, I’d say it’s more likely that any errors in calculation or missed systematics would be in the new paper.
However, I’m an experimentalist who has worked close to a lot of this stuff, not an actual theorist, so I’d love to get a theorist's interpretation as well.
evanb
I'm a lattice QCD practitioner. What I'll say is that the BMW collaboration isn't named that by coincidence---they're a resource-rich, extremely knowledgeable, cutting-edge group that is the envy of many others.
They're also cut-throat competitive, which is very divisive. Grad students and postdocs are forced to sign NDAs to work on the hot stuff. That's insane.
What's worse, from my point of view (as an actual LQCD practitioner) is: they're not very open about the actual details of their computation. It's tricky, because they treat their code as their 'secret sauce'. (Most of the community co-develops at least the base-level libraries; BMW goes it alone.)
OK, so they don't want to share their source code; that's fine. But they ALSO don't want to share any of their gauge configurations (read: monte carlo samples) because they're expensive to produce and can be reused for other calculations. So it'd be frustrating to share your own resource-intensive products and have someone else scoop you with them. I disagree with that, but I get it at least.
My biggest problem, and the one that I do not understand, is their reluctance to share the individual measurements they've made on each Monte Carlo sample. Then, at least, a motivated critic could develop their own statistical analysis (even if they can't develop their whole from-scratch computation).
Because of the structure and workflow of a LQCD calculation it's very difficult to blind. So, the only thing I know to do is to say "here are all the inputs, at the bit-exact level, to our analysis, here are our analysis scripts, here's the result we get; see if you agree."
This is the approach my collaborators and I took when we published a 1% determination of the nucleon axial coupling g_A [Nature 558, 91-94 (2018)]: we put the raw correlation functions as well as scripts on github https://github.com/callat-qcd/project_gA and said "look, here's literally exactly what we do; if you run this you will get the numbers in the paper." It's not great because our analysis code isn't the cleanest thing in the world (we're more interested in results than in nice software engineering). But at least the raw data is right there, we tell you what each data set is, and you're free to analyze it.
BMW does nothing of the sort. They (meaning those with power to dictate how the collaboration operates) seem to not want to adopt principles of nothing-up-my-sleeve really-honestly-truly open science. So their results need to be treated with care. That said, they themselves are extremely rigorous, top-notch scientists. They want you to trust them. Not that you shouldn't. Trust---but verify. That's currently not possible. I bet they're vindicated. But I can't check for myself.
podiki
As a particle physicist (no longer working in the field, sadly), this is one of the more exciting results in a long time. Muon g-2 has been there, in some form or another, for debate and model building for many years (taken somewhat seriously for 15+?), waiting for better statistics and confirmation. At over 4 sigma this is much more compelling than it has ever been, and the best potential sign of new (non-Standard Model) physics.
I'm not current on what models people like to use to explain this result, but it has been factored in (or ignored if you didn't trust it) in particle physics model building and phenomenology for years. This result makes it much more serious and something I imagine all new physics models (say, for dark matter, other collider predictions, or tensions in data) will be using.
Whether or not anything interesting is predicted, theoretically, from this remains to be seen. I don't know off hand if it signals anything in particular, as the big ideas, like supersymmetry, are a bit removed from current collider experiments and aren't necessarily tied to g-2 if I remember correctly.
manspacetar
re "what models people like to explain": There was some good discussion of lepton universality violation at the end of the announcement talk.
tl;dr - electrons and muons are leptons, but what if they don't interact with photons the same way? (i.e., the rules of physics aren't universal to all leptons)
nyc640
There was a nice explanation of the finding in comic format from APS & PhD Comics: https://physics.aps.org/articles/v14/47
jhoutromundo
Let me say that this is the best thing I have ever seen in science: people using art to explain extremely complex findings that might change the future. I laughed a bit at 'I don't know you anymore'.
When I was younger, I remember reading cyberpunk comics quite a lot. They explained a vision of the future that is improbable, but in many ways they got stuff right. Imagine aligning this with real-world science. Imagine hearing from a superhero how his powers came to him. Imagine having a scientist's name in the movie credits.
It doesn't need to make everything scientifically accurate, but explaining the fundamentals can engage more people to enter science.
Yesterday I was watching a new movie from Netflix called 'Hacker'. The movie is awful, but it starts by showing how Stuxnet worked, and that is pretty awesome. This is cool because I know the fundamentals of Stuxnet.
If they break the 4th wall and show something that could happen for real, it could bring more emotion to the movie.
gct
I used to read the Cartoon Guide to... books as a kid: https://www.amazon.com/Cartoon-Guide-Physics/dp/0062731009. They were great.
astrange
Cartoon History of the Universe is probably the best "nonfiction" comic ever made. (it's not inaccurate but it's kind of psychedelic and retells more than one religious founding text as if it actually happened)
colechristensen
Today No Starch Press has a series, The Manga Guide to...,
which are pretty great.
aasasd
> They explain a vision of the future that is improbable
We're currently heading into cyberpunk in basically every aspect except for the anarchy. More like totalitarian cyberpunk. It remains to be seen whether tech gives us the means for a semblance of anarchy, but I'm not getting my hopes up.
SyzygistSix
Economix, a comic book explanation of basic economics, is the only book on economics I have ever read.
It seemed biased but still covered the basics well, I thought, not that I'm a good judge.
kazinator
The mystery here is why the comic image that is inlined into the page loads so slowly, but if you click on it while it is loading, you get a pop-up which shows the whole darn thing almost instantly, at what looks like the same resolution, even as the inline one is still loading.
Spooky quantum effect, there!
loup-vaillant
NoScript lets you peek at a parallel universe in which the image loads pretty much instantly.
I didn't feel the need to click anything.
megablast
The creation of new particles, is that bremsstrahlung?? I’m trying to find more info on it.
eigenhombre
Bremsstrahlung is not the creation of virtual particles, though it does involve a virtual photon. It is rather the radiation of (real) photons by electrons when they suddenly "decelerate" (i.e. collide with other charged particles). In fact the name "bremsstrahlung" means "braking radiation," if memory serves.
codethief
> when they suddenly "decelerate" (i.e. collide with other charged particles)
I think it'd be more accurate to say "interact" instead of "collide" – the electron could still be far away from the charged particle. More generally, bremsstrahlung also occurs when an electron's velocity vector (not necessarily its modulus) changes, i.e. when the electron changes direction, like in a synchrotron.
> In fact the name "bremsstrahlung" means "braking radiation," if memory serves.
That's correct :)
NL807
Basically, the change of momentum for the electron sheds some of the energy used to accelerate it.
manspacetar
it is also important to note that, due to experimental constraints and the nature of quantum mechanics, different possible processes interfere with each other.
e.g.: (a+b)^2 = a^2 + b^2 + 2ab
That 2ab is an interference term, so a different process can get mixed in (quantum mechanically speaking). And we may not experimentally be able to disentangle it.
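The algebra above carries over directly to complex quantum amplitudes; a minimal sketch with hypothetical amplitude values (illustration only, not tied to any real process):

```python
import cmath

# Hypothetical complex amplitudes for two indistinguishable processes
a = 0.6 * cmath.exp(1j * 0.3)
b = 0.5 * cmath.exp(1j * 1.2)

p_coherent = abs(a + b) ** 2               # what nature gives you: |a+b|^2
p_incoherent = abs(a) ** 2 + abs(b) ** 2   # a^2 + b^2, as if distinguishable
cross_term = 2 * (a * b.conjugate()).real  # the "2ab" interference term

# |a+b|^2 = |a|^2 + |b|^2 + 2*Re(a*conj(b)): the cross term is exactly
# the difference between the coherent and incoherent sums
assert abs(p_coherent - (p_incoherent + cross_term)) < 1e-12
```

Because only the coherent sum is observable, there is no measurement that separates `p_incoherent` from `cross_term` — which is the disentangling problem the comment describes.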
gigama
Also concisely covered in Fermilab's Youtube channel: https://www.youtube.com/watch?v=ZjnK5exNhZ0
Fiahil
why did they move the magnet from Brookhaven to Chicago?
nyc640
From what I understand, the magnet is extremely specialized and it would cost millions more to manufacture a second one rather than ship the existing one. As to why Fermilab: scientists had exhausted the capabilities of the particle accelerator at Brookhaven, and Fermilab already possessed the equipment to generate more intense muon beams.
gm2
All are correct! Also making a new magnet would take at least 3-5 more years.
devb
The NYT sort of explained that repeating the experiment at Brookhaven would have cost a lot of money but wouldn't have resulted in an increase in accuracy that was worth that amount of money. Presumably other equipment exists at Fermilab that made the move cost-effective compared to other options.
BlueTemplar
Oh, so it's a bit like electron screening, but with virtual particles ? Fine structurally neat !
danellis
What's the symbol that looks like a b fell over?
monocasa
Lowercase Sigma
nyc640
Just to expand a bit, the sigma symbol is a standard symbol used to indicate the standard deviation of a measurement, and standard deviation is roughly a measure of how much variation there is within a data set (and consequently how confident you can be in your measurement). So when they say that the theoretical result is now 4.2 sigma (units of standard deviation) away from the experimental result instead of 2.7 sigma, that is because the new experiment provided more precise data that scientists could use to lower the perceived variance.
Assuming that there were no experimental errors, you can use the measure of standard deviation to express roughly what % chance a measurement is due to a statistical anomaly vs. a real indication that something is wrong.
To put some numbers to this, a measurement 1 sigma from the prediction would mean that there is roughly a 84% chance that the measurement represented a deviation from the prediction and a 16% chance that it was just a statistical anomaly. Similarly:
> 2 sigma = 97.7%/2.3% chance of deviation/anomaly
> 3 sigma = 99.9%/0.1% chance of deviation/anomaly
> 4.2 sigma = 99.9987%/0.0013% chance of deviation/anomaly
Which is why this is potentially big news: there is a very small chance that the disagreement between measurement and prediction is due to a statistical anomaly, and a higher chance that there is some fundamental physics going on that we don't understand and thus cannot predict.
edit: Again, this assumes both that there were no errors made in the experiment (it inspires confidence that they were able to reproduce this result twice in different settings) and that there were no mistakes made in the prediction itself, which, as another commenter mentions elsewhere, is a nontrivial task in and of itself.
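The sigma-to-percentage figures above follow directly from the normal distribution's CDF; a minimal sketch of the conversion, using the one-sided convention of this comment:

```python
import math

def one_sided_prob(sigma):
    """One-sided normal probability that a deviation this large is 'real'
    under the convention above; the complement is the 'anomaly' chance."""
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(sigma / math.sqrt(2.0)))

for s in (1.0, 2.0, 3.0, 4.2):
    p = one_sided_prob(s)
    print(f"{s} sigma -> {100 * p:.4f}% deviation / {100 * (1 - p):.4f}% anomaly")
```

Running this reproduces the 84%/16%, 97.7%/2.3%, 99.9%/0.1%, and 99.9987%/0.0013% splits quoted above.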
lgrebe
This sounds like the hypothesized „subtle matter“ proposed by Dr. Klaus Volkamer [1]?
- still looking for a better link than the Book… I’ll update this later
dan-robertson
But if muons are inanimate, why would they be affected by this hypothesised “subtle matter” which makes up the soul of living things?
lgrebe
Here is a paper [1] from 1994 where the results of weighing thermodynamically closed reactions are "interpreted to reveal the existence of a heretofore unknown kind of non-bradyonic, cold dark matter with two different forms of interaction with normal matter"
[1] http://klaus-volkamer.de/wp-content/uploads/2014/11/1994-Vol...
kjs3
Maybe the muons are hitting the angels at a good fraction of the speed of light and the difference is the angel-splat. Maybe FERMI can contract Dr. Klaus to come up with an experiment to measure the angel-goo and true the difference right up. Thanks for the link to an 'authoritative source'. :-)
lgrebe
Absolutely. There's this paper from 1999 [1], "Experimental Evidence of a New Type of Quantized Matter with Quanta as Integer Multiples of the Planck Mass", about how the weight of a closed system with a chemical reaction changes, violating the conservation of mass.
[1] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1....
fsloth
"Weightable soul". Sounds like a con man who wants only the most foolish of marks, to make his job as easy as possible, and hence begins his script with "I am about to hoax you... but I have something very important to tell you" - and those that remain after that are proven suckers and can be taken on any sort of ride.
lgrebe
I totally agree. It'd be great to have a peer review of his papers [1][2] and either confirm something interesting or just shut him up.
Seems like all he was initially doing in the '80s was digging into the 2 out of 10 experiments from Landolt that failed to confirm the conservation of mass.
[1] http://klaus-volkamer.de/wp-content/uploads/2014/11/1994-Vol...
[2] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1....
gct
lol
atty
Alexey Petrov, quoted in the article, subbed in to teach one day in my quantum mechanics class :) It was the first day we were being introduced to the theory of scattering, and I will never forget his intro. He asked the class, “what is scattering?”, waited a moment, and then threw a whiteboard marker against the wall, and answered his own question: “that’s scattering”. Lots of times, physics classes can be so heavy on math that it’s hard to even remember that you’re trying to describe the real world sometimes, and moments like that were always very memorable to me, because it helped remind me I wasn’t just solving equations for the hell of it :)
ISL
My favorite example of this was during a lecture on waveguides, when Michael Schick picked up the section of cylindrical metal pipe he was using to motivate the cylindrical-waveguide problem at hand, looked at the class through the pipe, and said, "clearly, it admits higher-order modes."
That little episode brought great joy to this experimentalist's heart.
lifeisstillgood
I have a theory about how well educated the mass of humans are, could be, and should be.
Bear with me.
Roughly 2000 years ago, the number of people who could do arithmetic and writing was < 1% of the population. By 200 years ago it was maybe, what, 10%?
Now it is 95% of the world population, and 99.9% of 'Western' world.
Let's say that Alexey Petrov is about as highly educated and trained as any human so far. (A physics PhD represents pretty much 25 years of full-time, full-on education.) But most of us stop earlier, say 20 years, and many have less full-on education, perhaps not doing an hour a day of revision or whatever.
But imagine we could build the computing resources, the smaller class sizes, the gamification, whatever, that meant that each child was pushed as far as they could get (maybe some kind of Mastery learning approach ) - not as far as they can get if the teacher is dealing with 30 other unruly kids, but actually as far as their brain will take them.
Will Alexey be that much further ahead when we do this? Is Alexey as far ahead as any human can be? Or can we go further - how much further? And if every kid leaving university is as well trained as an astronaut, is capable of calculus and vector multiplication, will that make a difference in the world today?
JohnBooty
You can't really manufacture geniuses, right?
I'm "smart" relative to the general population, but you could have thrown all the education in the world at me and I'd never have become Alexey Petrov.
I have a hunch that the Alexey Petrovs -- the upper 0.001% or whatever -- of the world do tend to get recognized and/or carve out their own space.
I think the ones who'd benefit from your plan would be... well, folks like me. I mean, I did fine I guess, but surely there are millions as smart as me and smarter than me who fell through the cracks in one way or another.
I suspect fairly quickly we'd run into some interesting limits.
For example, how many particle physicists can the world actually support? There are already more aspiring particle physicists than jobs or academic positions. Throwing more candidates at these positions would raise the bar for acceptance, but it's not like we'd actually get... hordes more practicing particle physicists than we have now. We'd also have to invest in more LHC-style experimental opportunities, more doctorate programs, and so on.
Obviously, you can replace "particle physicist" with any other cutting-edge big-brain vocation. How many top-tier semiconductor engineers can the world support? I mean, there are only so many cutting-edge semiconductor fabs, and the availability of top-tier semiconductor engineers is not the limiting factor preventing us from making more.
There are also cultural issues. A lot of people just don't trust the whole "establishment" for science and learning these days. Anti-intellectualism is a thing. You can't throw education at that problem when education itself is seen as the problem.
diegoperini
> ...will that make a difference in the world today?
It will make a huge difference, and no difference at all. It will probably help us solve all of our current problems. And then it will also introduce a whole new brand of problems which will be sources of crises that generation will deal with. What you read on news will change, but the human emotional response to those news will be very similar to today's.
ryan93
Most people demonstrate pretty clearly that they don’t have the aptitude for serious physics. A substantial number of people can’t get past freshman classes, and that’s true even for the top few percent of high school students.
gdubya
That doesn't necessarily mean that the content is the problem. 200 years ago you could probably say the same thing about "basic algebra" instead of "serious physics".
plebianRube
I agree wholeheartedly. We would live in an exceptional world. The obstacle preventing this is greed and exploitation of people who are born into low income situations. Rising out is the exception, not the rule. Affording many years of education is simply not an option for some. I wish it were, but this is another issue.
centimeter
The evidence is quite clear that going to college doesn’t actually improve life outcomes very much at all. We mistakenly thought it did for a while, but what was actually happening is the people who were going to college were smart and very likely to succeed anyway.
dieortin
Everyone being as trained as an astronaut would definitely make a difference, if only because they would appreciate the importance of science, technology, innovation... And not believe stupid conspiracy theories about vaccines.
schoen
Not all trained astronauts follow scientific consensus about everything.
https://en.wikipedia.org/wiki/Edgar_Mitchell#Post-NASA_caree...
kache_
An old professor of mine loved the "Throw something at the blackboard" technique. Great way to get the class potheads to wake up
forgotmysn
how many potheads did you have in your quantum mechanics class?
xzel
Hmm, probably about a third of my graduate-level QED class and considerably fewer in my undergraduate QM, but you'd be surprised at the crossover between potheads and high-level physics.
jefft255
Is this trying to imply that it would be surprising for a pothead to take a quantum mechanics class? Cause, having hung out with plenty of physicists, that wouldn’t surprise me too much... :P
kache_
It was an algorithms class. But I'm 100% certain there was at least one ;)
mhh__
The joke I have heard is that physics students are either shut-ins or party animals; either way, they're microdosing something or other...
dplavery92
Personally I had grown out of that habit a semester or two before undergrad QM (though "Modern Physics" and "Experimental Physics" were another story...) but there were still some hangers-on. Maybe 1-3 in a class of 20-25? Neither the norm nor unheard of. From that point on, the statistics were probably about the same in grad school.
dang
That article is https://www.bbc.com/news/56643677.
(The comment was posted to https://news.ycombinator.com/item?id=26726981 before we merged the threads.)
surfsvammel
I have the opposite experience. Physics classes were always the most interactive and practical. But then again, I only ever studied up to undergrad-level physics.
dylan604
would have been an even more impressive example with a dusty chalkboard eraser, to be able to see the scattering
snissn
that's super cool! i've always been able to connect the work in physics class to some physical system except for when i studied quantum mechanical density matrices. still have no idea what those are about :)
geniium
I love that kind of practical example.
gus_massa
Only 4.2 sigmas. ;)
That is really a lot. It's less than the official arbitrary threshold of 5 sigmas to proclaim a discovery, but it's a lot.
In the past, experiments with 2 or 3 sigmas were later classified as flukes, but AFAIK no experiment with 4 sigmas has "disappeared" later.
sgt101
Oh sweet summer physicist, what do you know of reality? Reality is for the markets, lovely mathey person, when a one-in-a-million chance comes every month, and investment portfolios lie scattered over the floor like corpses on a battlefield. Reality is for when your mortgage and the kids' school fees are riding on it, and quantitative strategies are born and die with the fads of last summer's interns' pet projects.
In some domains 7 sigma events come and go - statistics is not something to be used to determine possibility in the absence of theory. If you go shopping you will buy a dress, just because it's a pretty one doesn't mean that it was made for you.
comboy
Neutrinos faster than light had 6 sigma.
It just shows probabilistic significance. Confirmation by independent research teams helps eliminate calculation and execution errors.
gizmo686
As I recall, FTL neutrinos were the result of experimental error, not chance, and so are outside the scope of what sigmas screen for.
theptip
It's in scope for the context of this thread, though: your GP claimed that 4 sigmas means “it’ll probably pan out as being real”, and your parent provided a 6-sigma counterexample.
lamontcg
That's the point.
At the time it was a very significant result, just like this one.
Turned out someone hadn't plugged a piece of equipment in right and it was very precisely measuring that flaw in the experiment.
You can't look at any 8 sigma result and just state that it must necessarily be true. Your theory may be flawed or you may not understand your experiment and you just have highly precise data as to how you've messed something else up.
mhh__
It's probably worth saying that even "chance" is still a little misleading, in the sense that the quantification of that chance is still done by the physicists and can therefore be biased.
8note
Isn't the existence of experimental error also something you can model as a probability?
thepangolino
This is the second separate experiment giving a similar value.
Robotbeat
That does help a lot!
Of course, this is still not good enough. But the nice thing about things that are real is they eventually stand up to increasing levels of self-doubt and 3rd party verification... it’s an extraordinary result (because, of course, the Standard Model seems to be sufficient for just about everything else... so any verified deviation is extraordinary), and so funding shouldn’t be a problem.
A decent heuristic: Real effects are those that get bigger the more careful your experiment is (and the more times it is replicated by careful outsiders), not smaller.
XorNot
The use of a secret frequency source not known to the experimenters is also a very good way to deal with potential bias.
davrosthedalek
"Separate" for slightly small values of separate. It's the same measurement approach, and using many components from the first experiment, so there could be correlated errors. But they made many fundamental improvements to the experiment, so it's great to see that the effect hasn't gone away.
selectodude
Ideally they have all their fiber optic cables screwed on tight at Fermilab.
wrnr
Live from Fermilab: https://www.youtube.com/watch?v=81PfYnpuOPA
mjevans
I read the release written by the lab.
https://news.fnal.gov/2021/04/first-results-from-fermilabs-m...
j4yav
There is a nice video explanation from PBS at https://youtu.be/O4Ko7NW2yQo
seventytwo
PBS, man. Just steadily and reliably educating everyone for years now. Good shit.
dimator
SpaceTime (that channel) in general is of impeccable quality and production value. Definitely worth subscribing.
Crash0v3rid3
Worth the patreon contribution also.
aaomidi
Every time I see news like this, it just reminds me of The Three-Body Problem and the truly unique Sophons in it.
glofish
Amusingly, and fittingly for our times, in the same issue of the exact same journal (Nature) another paper has been published indicating that the much-hyped discrepancy might be due to the theory having been applied inaccurately in the past. When computed with the new method, the experimental and theoretical values align far more closely.
So now all that matters is what kind of article you want to write: a sensationalist one to get eyeballs, or a realistic one that is far less exciting. Thus the exact same discovery can be presented via two radically different headlines:
BBC goes with "Muons: 'Strong' evidence found for a new force of nature" https://www.bbc.com/news/56643677
> "Now, physicists say they have found possible signs of a fifth fundamental force of nature"
ScienceDaily says: "The muon's magnetic moment fits just fine" https://www.sciencedaily.com/releases/2021/04/210407114159.h...
> "A new estimate of the strength of the sub-atomic particle's magnetic field aligns with the standard model of particle physics."
There you have it, the mainstream media is not credible even when they attempt to write about a physics experiment ...
gameswithgo
What were the times when journalism was better?
devb
This is an incredibly complicated and abstract subject, yet you have somehow managed to boil it down into a sweeping generalization about the basics of media and reporting. Masterfully done.
mkaic
I highly recommend the YouTube channel PBS Space Time's coverage of this, it's informative, well organized, and accessible even to someone like me who doesn't have any background in physics.
The Quanta write-up is a bit more neutral on this announcement. There is a computational result that was not included in the theoretical value the measurement was benchmarked against. Once reviewed, this difference may yet go back to oblivion.
https://www.quantamagazine.org/muon-g-2-experiment-at-fermil...