Hacker News

9 days ago by beezle

The Quanta write-up is a bit more neutral on this announcement. There is a computational result that was not included in the theoretical value the measurement was benchmarked against. Once reviewed, this difference may yet fade back into oblivion.

https://www.quantamagazine.org/muon-g-2-experiment-at-fermil...

9 days ago by ssivark

To clarify, for those not familiar with this topic, this experiment is making measurements at such exquisite precision that even the calculations for the theoretical prediction are extremely non-trivial and require careful estimation of many many pieces which are then combined. Which is to say that debugging the theoretical prediction is (almost) as hard as debugging the experiment. So I would expect the particle physics community to be extremely circumspect while the details get ironed out.

The Quanta article explains it quite nicely. To quote their example of what has happened in the past:

> ”A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.”

9 days ago by whatshisface

If the theoretical prediction can't be calculated until the experiment that motivates the choices of what and what not to approximate has been done, is it really a prediction?

9 days ago by btilly

> If the theoretical prediction can't be calculated until the experiment that motivates the choices of what and what not to approximate has been done, is it really a prediction?

Let me make that more meta.

If a theory is unable to predict a particular key value, is it still a theory?

This is not a hypothetical question. The theory being tested here is the Standard Model. The Standard Model in principle is entirely symmetric with regard to a whole variety of things that we don't see symmetry in. For example, the relative masses of the electron and the proton.

But, you ask, how can it be that those things are different? Well, for the same reason that we find pencils lying on their side rather than perfectly balanced around the point of symmetry on the tip. Namely that the point of perfect symmetry is unstable, and there are fields setting the value of each asymmetry that we actually see. Each field is carried by a particle. Each particle's properties reflect the value of the field. And therefore the theory has a number of free parameters that can only be determined by experiment, not theory.

In fact there are 19 such parameters. https://en.wikipedia.org/wiki/Standard_Model#Theoretical_asp... has a table with the complete list. And for a measurement as precise as this experiment requires, the uncertainty of the values of those parameters is highly relevant to the measurement itself.

9 days ago by ssivark

That’s a good (and profound) question, not deserving of downvotes.

It turns out that the simplified paradigmatic “scientific method” is a very bad caricature of what actually happens on the cutting edge when we’re pushing the boundaries of what we understand (not just theory, but also experimental design). Even on the theoretical front, the principles might be well understood, but making predictions requires accurately modeling all the aspects that contribute to the actual experimental measurement (and not just the simple principled part). In that sense, the border between theory and experiment is very fuzzy, and the two inevitably end up influencing each other; that is fundamentally unavoidable.

Unfortunately, it would require more effort on my part to articulate this, and all I can spare right now is a drive-by comment. Steven Weinberg has some very insightful thoughts on the topic, both generally and specifically in the context of particle physics, in his book “Dreams of a final theory” (chapter 5).

If you don’t have access to the book, in a pinch, you could peruse some slides that I made for a discussion: https://speakerdeck.com/sivark/walking-through-weinbergs-dre...

9 days ago by 6gvONxR4sf7o

Sometimes it's like unit tests, where you might get the test itself wrong at first, but that still helps you get closer and write better tests.

9 days ago by raincom

That's what the Duhem-Quine thesis in the philosophy of science is about. The thesis is that "it is impossible to test a hypothesis in isolation, because an empirical test of the hypothesis requires one or more auxiliary/background assumptions/hypotheses".

9 days ago by platz

It's not good to cherry-pick paragraphs from the whole article.

> But as the Brookhaven team accrued 10 times more data, their measurement of the muon’s g-factor stayed the same while the error bars around the measurement shrank. The discrepancy with theory grew back to three sigma by the time of the experiment’s final report in 2006.

9 days ago by ssivark

No, the essence of my point is that the number of sigmas is meaningless when you have a systematic error, in either the experiment or the theoretical estimate; all that the sigmas tell you is that the two are mismatched. If a mistake could happen once, a similar mistake could easily happen again, so we need to be extremely wary of taking the sigmas at face value. (E.g., the DAMA experiment reports dark matter detections with over 40 sigma significance, but the community doesn’t take their validity too seriously.)

Any change in the theoretical estimates could in principle drastically change the number of sigmas mismatch with experiment in either direction (but as the scientific endeavor is human after all, typically each helps debug the other and the two converge over time).
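
To make the systematics point concrete, here is a minimal sketch in Python (all numbers made up) of how an unmodeled bias inflates the apparent significance even though nobody did the statistics wrong:

    # hypothetical values, purely for illustration
    theory_value       = 1.000
    measured_value     = 1.005    # true value plus an unnoticed systematic offset
    quoted_uncertainty = 0.001    # statistical error plus *known* systematics only

    n_sigma = abs(measured_value - theory_value) / quoted_uncertainty
    print(f"apparent discrepancy: {n_sigma:.1f} sigma")  # 5.0 sigma, driven entirely by the bias

Collect ten times more data and quoted_uncertainty shrinks while the bias stays put, so the apparent discrepancy only grows.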

9 days ago by eloff

> It's not good to cherry-pick paragraphs from the whole article

Isn't that exactly what you just did?

There's nothing wrong with showing only small quotes, the problem would be cherry picking them in a way that leads people to draw incorrect conclusions about the whole.

9 days ago by jessriedel

That new alternative approach is considered substantially less reliable by most experts.

https://mobile.twitter.com/dangaristo/status/137982536595107...

From Gordan Krnjaic at Fermilab:

> if the lattice result [new approach] is mathematically sound then there would have to be some as yet unknown correlated systematic error in many decades worth of experiments that have studied e+e- annihilation to hadrons

> alternatively, it could mean that the theoretical techniques that map the experimental data onto the g-2 prediction could be subtly wrong for currently unknown reasons, but I have not heard of anyone making this argument in the literature

https://mobile.twitter.com/GordanKrnjaic/status/137984412453...

9 days ago by elliekelly

In the Scientific American article also currently linked on the front page a scientist & professor* at an Italian university is quoted as saying something along the lines of “this is probably an error in the theoretical calculation”. Would this be what the professor was referring to?

Edit: I’m not entirely sure whether they’re a professor, but here’s the exact quote

> “My feeling is that there’s nothing new under the sun,” says Tommaso Dorigo, an experimental physicist at the University of Padua in Italy, who was also not involved with the new study. “I think that this is still more likely to be a theoretical miscalculation.... But it is certainly the most important thing that we have to look into presently.”

9 days ago by beezle

On the BMW collaboration's lattice QCD computational estimate:

This is a pre-print: https://arxiv.org/abs/2002.12347

This is the link to the Nature publication: https://www.nature.com/articles/s41586-021-03418-1

9 days ago by atty

As someone who has worked in fields that use lattice calculations (on the experimental side), the new calculation is interesting, but I would not say it’s particularly convincing yet. Lattice calculations are VERY difficult, and are not always stable. I am not questioning whether they did their work well or not, just pointing out that in high energy physics and high energy nuclear physics, many times our experimental results are significantly better constrained and also undergo significantly more testing via reproduction of results by other experiments than our theory counterparts’ work. Is it possible that all of our previous experiments have had some sort of correlated systematic error in them? Unlikely, but yes. Is it more likely that this lattice calculation may be underestimating its errors? Much more likely. Another interesting option is that one of the theoretical calculations was actually done slightly wrong. My first guess would be the lattice result, since it’s newer, but both procedures are complicated, so it could be either.

9 days ago by glofish

I am not sure I follow the logic. The new computation aligns with the experiment.

Why is it more likely to be wrong than the calculation that shows the theory deviating from the experiment?

9 days ago by atty

The old calculation relies on older experimental results that have been verified by multiple experiments - so if the older value is wrong, it means either the calculation was done wrong (possible), or the experiments all have had a significant correlated systematic error that has never been caught (also possible). However, I’d say both of those things are relatively unlikely, when compared to the probability of some small error in a new paper that was just released that uses a new method that involves lattice calculations. This is all a balance of probabilities argument, but from my experience in the field, I’d say it’s more likely that any errors in calculation or missed systematics would be in the new paper.

However, I’m an experimentalist who has worked close to a lot of this stuff, not an actual theorist, so I’d love to get a theorist’s interpretation as well.

9 days ago by evanb

I'm a lattice QCD practitioner. What I'll say is that the BMW collaboration isn't named that by coincidence---they're a resource-rich, extremely knowledgeable, cutting-edge group that is the envy of many others.

They're also cut-throat competitive, which is very divisive. Grad students and postdocs are forced to sign NDAs to work on the hot stuff. That's insane.

What's worse, from my point of view (as an actual LQCD practitioner) is: they're not very open about the actual details of their computation. It's tricky, because they treat their code as their 'secret sauce'. (Most of the community co-develops at least the base-level libraries; BMW goes it alone.)

OK, so they don't want to share their source code; that's fine. But they ALSO don't want to share any of their gauge configurations (read: monte carlo samples) because they're expensive to produce and can be reused for other calculations. So it'd be frustrating to share your own resource-intensive products and have someone else scoop you with them. I disagree with that, but I get it at least.

My biggest problem, and the one that I do not understand, is their reluctance to share the individual measurements they've made on each Monte Carlo sample. Then, at least, a motivated critic could develop their own statistical analysis (even if they can't redo the whole computation from scratch).

Because of the structure and workflow of a LQCD calculation it's very difficult to blind. So, the only thing I know to do is to say "here are all the inputs, at the bit-exact level, to our analysis, here are our analysis scripts, here's the result we get; see if you agree."

This is the approach my collaborators and I took when we published a 1% determination of the nucleon axial coupling g_A [Nature 558, 91-94 (2018)]: we put the raw correlation functions as well as scripts on github https://github.com/callat-qcd/project_gA and said "look, here's literally exactly what we do; if you run this you will get the numbers in the paper." It's not great because our analysis code isn't the cleanest thing in the world (we're more interested in results than in nice software engineering). But at least the raw data is right there, we tell you what each data set is, and you're free to analyze it.

BMW does nothing of the sort. They (meaning those with power to dictate how the collaboration operates) seem to not want to adopt principles of nothing-up-my-sleeve really-honestly-truly open science. So their results need to be treated with care. That said, they themselves are extremely rigorous, top-notch scientists. They want you to trust them. Not that you shouldn't. Trust---but verify. That's currently not possible. I bet they're vindicated. But I can't check for myself.

9 days ago by podiki

As a particle physicist (no longer working in the field, sadly), I find this one of the more exciting results in a long time. Muon g-2 has been there, in some form or another, for debate and model building for many years (taken somewhat seriously for 15+ years?), waiting for better statistics and confirmation. At over 4 sigma this is much more compelling than it has ever been, and the best potential sign of new (non-Standard Model) physics.

I'm not current on what models people like to use to explain this result, but it has been factored in (or ignored, if you didn't trust it) in particle physics model building and phenomenology for years. This result makes it much more serious and something I imagine all new physics models (say for dark matter or other collider predictions or tensions in data) will be using.

Whether or not anything interesting is predicted, theoretically, from this remains to be seen. I don't know off hand if it signals anything in particular, as the big ideas, like supersymmetry, are a bit removed from current collider experiments and aren't necessarily tied to g-2 if I remember correctly.

9 days ago by manspacetar

re "what models people like to explain": There was some good discussion of lepton universality violation at the end of the announcement talk.

tl;dr - electrons and muons are leptons, but what if they don't interact with photons the same way? (i.e., the rules of physics aren't universal to all leptons)

9 days ago by nyc640

There was a nice explanation of the finding in comic format from APS & PhD Comics: https://physics.aps.org/articles/v14/47

9 days ago by jhoutromundo

Let me say that this is the best thing I have ever seen in science: people using art to explain extremely complex findings that might change the future a bit. I laughed a bit at 'I don't know you anymore'.

When I was younger, I remember reading cyberpunk comics quite a lot. They present a vision of the future that is improbable, but in many ways they get stuff right. Imagine aligning this with real-world science. Imagine hearing from a superhero how his powers came to him. Imagine having a scientist's name in the movie credits.

It doesn't need to make everything scientifically accurate, but explaining the fundamentals can draw more people into science.

Yesterday I was watching a new movie from Netflix called 'Hacker'. The movie is awful, but it starts by showing how Stuxnet is supposed to work, and that is pretty awesome. This is cool because I know the fundamentals of Stuxnet.

If they break the 4th wall and show something that could happen for real, it could bring more emotion to the movie.

9 days ago by gct

I used to read the Cartoon Guide to... books as a kid: https://www.amazon.com/Cartoon-Guide-Physics/dp/0062731009. They were great.

9 days ago by astrange

Cartoon History of the Universe is probably the best "nonfiction" comic ever made. (it's not inaccurate but it's kind of psychedelic and retells more than one religious founding text as if it actually happened)

9 days ago by colechristensen

Today No Starch Press has a series of Manga Guide to... books, which are pretty great.

https://nostarch.com/catalog/manga

9 days ago by aasasd

> They explain a vision of the future that is improbable

We're currently heading into cyberpunk in basically every aspect except for the anarchy. More like totalitarian cyberpunk. It remains to be seen whether tech gives us the means for a semblance of anarchy, but I'm not getting my hopes up.

9 days ago by SyzygistSix

Economix, a comic book explanation of basic economics, is the only book on economics I have ever read.

It seemed biased but still covered the basics well, I thought, not that I'm a good judge.

9 days ago by emikulic

Which cyberpunk comics? Give us some recommendations please. :)

9 days ago by slim

not op but I recommend the Nikopol trilogy by Enki Bilal

9 days ago by kazinator

The mystery here is why that comic image that is inlined into the page loads so slowly, but if you click on it while it is loading, you get a pop-up which shows the whole darn thing almost instantly, at what looks like the same resolution, even as the inline one is still loading.

Spooky quantum effect, there!

9 days ago by loup-vaillant

NoScript lets you peek at a parallel universe in which the image loads pretty much instantly.

I didn't feel the need to click anything.

9 days ago by megablast

The creation of new particles, is that bremsstrahlung?? I’m trying to find more info on it.

9 days ago by eigenhombre

Bremsstrahlung is not the creation of virtual particles, though it does involve a virtual photon. It is rather the radiation of (real) photons by electrons when they suddenly "decelerate" (i.e. collide with other charged particles). In fact the name "bremsstrahlung" means "braking radiation," if memory serves.

9 days ago by codethief

> when they suddenly "decelerate" (i.e. collide with other charged particles)

I think it'd be more accurate to say "interact" instead of "collide" – the electron could still be far away from the charged particle. More generally, bremsstrahlung also occurs when an electron's velocity vector (not necessarily its modulus) changes, i.e. when the electron changes direction, like in a synchrotron.

> In fact the name "bremsstrahlung" means "braking radiation," if memory serves.

That's correct :)

9 days ago by NL807

Basically, the change of momentum for the electron sheds some of the energy used to accelerate it.

9 days ago by manspacetar

It is also important to note that, due to experimental constraints and the nature of quantum mechanics, different possible processes interfere with each other.

eg: (a+b)^2 = a^2 + b^2 + 2ab

That 2ab is an interference term, so a different process can get mixed in (quantum mechanically speaking), and we may not be able to disentangle it experimentally.
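
As a minimal sketch (with made-up complex amplitudes a and b, just for illustration), here is that cross term in Python:

    # probabilities come from squaring the *sum* of complex amplitudes,
    # so a cross (interference) term appears
    a = 0.6 + 0.3j   # amplitude for process A
    b = 0.2 - 0.5j   # amplitude for process B

    p_separate = abs(a)**2 + abs(b)**2   # what you'd get if the processes didn't interfere
    p_total    = abs(a + b)**2           # what is actually observed
    cross_term = p_total - p_separate    # the "2ab" piece: 2 * (a * b.conjugate()).real

    print(p_total, p_separate, cross_term)   # roughly 0.68, 0.74, -0.06

Experimentally you only ever see p_total, which is why the contributing processes can't simply be pulled apart.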

9 days ago by gigama

Also concisely covered in Fermilab's Youtube channel: https://www.youtube.com/watch?v=ZjnK5exNhZ0

9 days ago by atty

Alexey Petrov, quoted in the article, subbed in to teach one day in my quantum mechanics class :) It was the first day we were being introduced to the theory of scattering, and I will never forget his intro. He asked the class, “what is scattering?”, waited a moment, threw a whiteboard marker against the wall, and answered his own question: “that’s scattering”. Lots of times, physics classes can be so heavy on math that it’s hard to even remember that you’re trying to describe the real world, and moments like that were always very memorable to me, because they helped remind me I wasn’t just solving equations for the hell of it :)

9 days ago by ISL

My favorite example of this was during a lecture on waveguides, when Michael Schick picked up the section of cylindrical metal pipe he was using to motivate the cylindrical-waveguide problem at hand, looked at the class through the pipe, and said, "clearly, it admits higher-order modes."

That little episode brought great joy to this experimentalist's heart.

9 days ago by lifeisstillgood

I have a theory about how well educated the mass of humans is, could be, and should be.

Bear with me.

Roughly 2000 years ago, the number of people who could do arithmetic and writing was < 1% of the population. By 200 years ago it was maybe, what, 10%?

Now it is 95% of the world population, and 99.9% in the 'Western' world.

Let's say that Alexey Petrov is about as highly educated and trained as any human so far. (A physics PhD represents pretty much 25 years of full-time, full-on education.) But most of us stop earlier, say at 20 years, and many have less full-on education, perhaps not doing an hour a day of revision or whatever.

But imagine we could build the computing resources, the smaller class sizes, the gamification, whatever, that meant each child was pushed as far as they could get (maybe some kind of mastery learning approach) - not as far as they can get while the teacher is dealing with 30 other unruly kids, but actually as far as their brain will take them.

Will Alexey be that much further ahead when we do this? Is Alexey as far ahead as any human can be? Or can we go further - how much further? And if every kid leaving university is as well trained as an astronaut, capable of calculus and vector multiplication, will that make a difference in the world today?

9 days ago by JohnBooty

You can't really manufacture geniuses, right?

I'm "smart" relative to the general population, but you could have thrown all the education in the world at me and I'd never have become Alexey Petrov.

I have a hunch that the Alexey Petrovs of the world -- the upper 0.001% or whatever -- do tend to get recognized and/or carve out their own space.

I think the ones who'd benefit from your plan would be... well, folks like me. I mean, I did fine I guess, but surely there are millions as smart as me and smarter than me who fell through the cracks in one way or another.

I suspect fairly quickly we'd run into some interesting limits.

For example, how many particle physicists can the world actually support? There are already more aspiring particle physicists than jobs or academic positions. Throwing more candidates at these positions would raise the bar for acceptance, but it's not like we'd actually get... hordes of additional practicing particle physicists beyond what we have now. We'd also have to invest in more LHC-style experimental opportunities, more doctorate programs, and so on.

Obviously, you can replace "particle physicist" with any other cutting-edge, big-brain vocation. How many top-tier semiconductor engineers can the world support? I mean, there are only so many cutting-edge semiconductor fabs, and the availability of top-tier semiconductor engineers is not the limiting factor preventing us from making more.

There are also cultural issues. A lot of people just don't trust the whole "establishment" for science and learning these days. Anti-intellectualism is a thing. You can't throw education at that problem when education itself is seen as the problem.

9 days ago by diegoperini

> ...will that make a difference in the world today?

It will make a huge difference, and no difference at all. It will probably help us solve all of our current problems. And then it will also introduce a whole new brand of problems which will be sources of crises that generation will deal with. What you read on news will change, but the human emotional response to those news will be very similar to today's.

9 days ago by ryan93

Most people demonstrate pretty clearly that they don’t have the aptitude for serious physics. A substantial number of people can’t get past freshman classes, and that’s true even for the top few percent of high school students.

9 days ago by gdubya

That doesn't necessarily mean that the content is the problem. 200 years ago you could probably say the same thing about "basic algebra" instead of "serious physics".

9 days ago by plebianRube

I agree wholeheartedly. We would live in an exceptional world. The obstacle preventing this is greed and exploitation of people who are born into low income situations. Rising out is the exception, not the rule. Affording many years of education is simply not an option for some. I wish it were, but this is another issue.

9 days ago by centimeter

The evidence is quite clear that going to college doesn’t actually improve life outcomes very much at all. We mistakenly thought it did for a while, but what was actually happening is the people who were going to college were smart and very likely to succeed anyway.

9 days ago by kache_

An old professor of mine loved the "Throw something at the blackboard" technique. Great way to get the class potheads to wake up

9 days ago by forgotmysn

how many potheads did you have in your quantum mechanics class?

9 days ago by xzel

Hmm, probably about a third of my graduate-level QED class and considerably fewer in my undergraduate QM, but you'd be surprised at the crossover between potheads and high-level physics.

9 days ago by jefft255

Is this trying to imply that it would be surprising for a pothead to take a quantum mechanics class? Cause, having hung out with plenty of physicists, that wouldn’t surprise me too much... :P

9 days ago by kache_

It was an algorithms class. But I'm 100% certain there was at least one ;)

9 days ago by mhh__

The joke I have heard is that Physics students are either shut-ins or party animals, either way they're both microdosing something or other...

9 days ago by dang

That article is https://www.bbc.com/news/56643677.

(The comment was posted to https://news.ycombinator.com/item?id=26726981 before we merged the threads.)

10 days ago by gus_massa

Only 4.2 sigmas. ;)

That is really a lot. It's less than the official arbitrary threshold of 5 sigmas to proclaim a discovery, but it's a lot.

In the past, experiments with 2 or 3 sigmas were later classified as flukes, but AFAIK no experiment with 4 sigmas has "disappeared" later.
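
For a rough sense of scale, here is a minimal sketch in Python (assuming a plain Gaussian error model, which is itself an idealization) of the one-sided tail probabilities those sigma counts correspond to:

    from scipy.stats import norm

    for n_sigma in (2, 3, 4.2, 5):
        p = norm.sf(n_sigma)   # one-sided tail probability of an n-sigma excess
        print(f"{n_sigma} sigma -> p ~ {p:.1e} (about 1 in {1 / p:,.0f})")

Under that model, 4.2 sigma is roughly a 1-in-75,000 fluke, versus roughly 1 in 3.5 million at the 5 sigma discovery threshold.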

9 days ago by sgt101

Oh sweet summer physicist, what do you know of reality? Reality is for the markets, lovely mathy person, when a one-in-a-million chance comes every month, and investment portfolios lie scattered over the floor like corpses on a battlefield. Reality is for when your mortgage and the kid's school fees are riding on it, and quantitative strategies are born and die with the fads of last summer's interns' pet projects.

In some domains 7-sigma events come and go - statistics is not something to be used to determine possibility in the absence of theory. If you go shopping you will buy a dress; just because it's a pretty one doesn't mean that it was made for you.

10 days ago by comboy

Neutrinos faster than light had 6 sigma.

It just shows probabilistic significance. Confirmation by independent research teams helps eliminate calculation and execution errors.

9 days ago by gizmo686

As I recall, FTL neutrinos were the result of experimental error, not chance, and so are outside the scope of what sigmas screen for.

9 days ago by theptip

In scope for the context of this thread, though; your GP claimed that 4 sigmas means “it’ll probably pan out as being real”, and your parent provided a 6-sigma counterexample.

9 days ago by lamontcg

That's the point.

At the time it was a very significant result, just like this one.

Turned out someone hadn't plugged a piece of equipment in right and it was very precisely measuring that flaw in the experiment.

You can't look at any 8 sigma result and just state that it must necessarily be true. Your theory may be flawed or you may not understand your experiment and you just have highly precise data as to how you've messed something else up.

9 days ago by mhh__

It's probably worth saying that even "chance" is still a little misleading, in the sense that the quantification of that chance is still done by the physicists and therefore can be biased.

9 days ago by 8note

Isn't the existence of experimental error also something you can model as a probability?

9 days ago by thepangolino

This is the second separate experiment giving a similar value.

9 days ago by Robotbeat

That does help a lot!

Of course, this is still not good enough. But the nice thing about things that are real is they eventually stand up to increasing levels of self-doubt and 3rd party verification... it’s an extraordinary result (because, of course, the Standard Model seems to be sufficient for just about everything else... so any verified deviation is extraordinary), and so funding shouldn’t be a problem.

A decent heuristic: Real effects are those that get bigger the more careful your experiment is (and the more times it is replicated by careful outsiders), not smaller.

9 days ago by XorNot

The use of a secret frequency source not known to the experimenters is also a very good way to deal with potential bias.

9 days ago by davrosthedalek

"Separate" for slightly small values of separate. It's the same measurement approach, and using many components from the first experiment, so there could be correlated errors. But they made many fundamental improvements to the experiment, so it's great to see that the effect hasn't gone away.

9 days ago by selectodude

Ideally they have all their fiber optic cables screwed on tight at Fermilab.
