Particle Physics Follow Up
In the comments to this post, Rafe and Daniel asked me to tell them the punch line of Lightness of Being. I’ll do my best. Spoilers ahead.
The book presents an elegant simultaneous solution to three questions:
1. How can the strong force possibly get more powerful with distance?
2. Why can’t we break protons into their component quarks?
3. Where the heck does a proton’s mass really come from?
In fact, Wilczek won a Nobel Prize for the solution. First, you have to understand three basic facts of physics:
1. Quantum mechanics says that short-lived particles and their anti-particles are constantly popping in and out of existence.
2. Protons are made up of 2 up quarks and 1 down quark. These three quarks have different primary color charges (RGB) so that together, they are color neutral (white).
3. Quantum mechanics says it takes a tremendous amount of energy to constrain the location of a particle (AKA the uncertainty principle).
With electric charge, the cloud of virtual particles from (1) screens the EM force. Say you have a proton out in space. A particle-antiparticle pair appears; no net energy or charge is created. However, the negative particle moves a little bit toward the proton and the positive particle moves a little bit away before the pair annihilates. This absorbs a tiny bit of the EM force.
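To make the screening idea concrete, here is a back-of-the-envelope sketch (my numbers and notation, not the book’s) using the standard one-loop formula for how the effective electromagnetic coupling changes with the energy scale you probe. The virtual pairs partially hide the charge, so the coupling you measure grows as you look at shorter distances (higher energies); only the electron loop is included here, so the numbers are illustrative.

```python
import math

# One-loop QED running coupling: virtual e+e- pairs screen a charge, so the
# effective coupling grows as you probe shorter distances (higher energy Q).
# Electron loop only; purely illustrative numbers.
ALPHA_0 = 1 / 137.036      # fine-structure constant measured at long distance
M_E = 0.000511             # electron mass in GeV

def alpha_qed(q_gev):
    """Effective EM coupling at energy scale q_gev (GeV), one electron loop."""
    return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi)) * math.log(q_gev**2 / M_E**2))

for q in (0.001, 1.0, 91.0):   # ~long distance, 1 GeV, Z-boson scale
    print(f"Q = {q:7.3f} GeV  ->  1/alpha ~ {1 / alpha_qed(q):.1f}")
```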
Wilczek and company had an idea. What if color charge works differently? What if it is anti-screened by (1)? The ephemeral particles move so that the color charge gets relayed outward. Think of it as reinforcing a wave rather than canceling it. When they worked through the color field equations, they found only a very few solutions where anti-screening was possible. This is good because it means the theory wasn’t arbitrary.
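The “very few solutions” point shows up in the standard one-loop result (my summary, not the book’s notation): for SU(3) color with n_f quark flavors, the sign of the coefficient b0 = 11 − (2/3)·n_f decides whether anti-screening wins, and roughly speaking only non-abelian gauge theories with a limited number of flavors get a positive b0.

```python
# One-loop QCD beta-function coefficient for SU(3) with n_f quark flavors:
#   b0 = 11 - (2/3) * n_f
# Gluon self-interaction contributes the +11 (anti-screening); each quark
# flavor contributes -2/3 (ordinary screening). Anti-screening wins only
# while b0 > 0. Illustrative sketch, not the book's notation.
def b0(n_flavors: int) -> float:
    return 11 - (2.0 / 3.0) * n_flavors

for nf in (3, 6, 16, 17):
    trend = ("coupling grows with distance (anti-screening wins)"
             if b0(nf) > 0 else "screening wins, no asymptotic freedom")
    print(f"n_f = {nf:2d}:  b0 = {b0(nf):6.2f}  ->  {trend}")
```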
So that explains question (1): how the strong force can get stronger with distance. But if you integrate an ever-increasing force over distance out to the edge of the universe, you get a metric butt-load of energy bound up in a single quark.
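To get a feel for how big that energy is, here is a rough order-of-magnitude sketch (my illustrative numbers, not from the book), assuming the confining force stores roughly 1 GeV of field energy per femtometer of separation:

```python
# Rough orders of magnitude (illustrative, not from the book): if the confining
# force stores roughly 1 GeV of field energy per femtometer of separation,
# the cost of pulling a lone quark away grows without bound.
SIGMA_GEV_PER_FM = 1.0     # "string tension", ~1 GeV per fm
PROTON_MASS_GEV = 0.938

for r_fm in (1.0, 10.0, 1e15):   # 1 fm, 10 fm, ~1 meter expressed in fm
    energy_gev = SIGMA_GEV_PER_FM * r_fm
    print(f"separation ~ {r_fm:.0e} fm  ->  ~{energy_gev:.0e} GeV "
          f"(~{energy_gev / PROTON_MASS_GEV:.0e} proton masses of energy)")
```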
Here’s where (2) comes in. If you combine quarks with R, G, and B color charges, the color charges cancel, leaving no net force. Problem solved. And it answers question (2): why we can’t break protons into their component quarks. The energy required is just too high.
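A toy way to see the cancellation (my illustration, not the book’s formalism) is to represent the three color charges as unit vectors 120 degrees apart; their sum is zero, which is the sense in which R + G + B is “white”:

```python
import math

# Toy picture of color neutrality: represent the R, G, B color charges as unit
# vectors 120 degrees apart. Their vector sum vanishes, which is the sense in
# which R + G + B is "white". My illustration, not the book's formalism.
angles = {"R": math.pi / 2,
          "G": math.pi / 2 + 2 * math.pi / 3,
          "B": math.pi / 2 + 4 * math.pi / 3}
vectors = {c: (math.cos(a), math.sin(a)) for c, a in angles.items()}

net = (sum(v[0] for v in vectors.values()),
       sum(v[1] for v in vectors.values()))
# Prints numbers of order 1e-16, i.e. zero up to floating-point rounding.
print(f"net color charge of R+G+B: ({net[0]:.2e}, {net[1]:.2e})")
```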
But what about our question (3), the mass of the proton? Enter the uncertainty principle. The three quarks in a proton can’t all sit exactly on top of each other; that also requires too much energy. So at some point, the energy cost of the strong force increasing with distance and the energy cost of constraining the quarks’ quantum location balance out. Plug that energy into m = E/c^2. That accounts for about 95% of the mass of the proton. Most of the rest comes from the quarks’ own masses.
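Here is a crude version of that bookkeeping (my numbers, not the book’s): confine each quark to roughly the proton radius, estimate its momentum from the uncertainty principle, and convert via m = E/c^2. The estimate only lands in the right ballpark (the precise ~95% figure requires the full field energy), but it shows that confinement energy dominates and the bare quark masses are almost negligible.

```python
# Crude bookkeeping (illustrative numbers, not the book's): confine each quark
# to roughly the proton radius, estimate its momentum from the uncertainty
# principle, treat it as ultra-relativistic (E ~ p*c), and compare with the
# proton mass and the tiny bare quark masses.
HBAR_C = 197.3                            # MeV * fm
PROTON_RADIUS_FM = 0.85                   # rough proton charge radius
PROTON_MASS_MEV = 938.3
BARE_QUARK_MASSES_MEV = (2.2, 2.2, 4.7)   # up, up, down

p_per_quark = HBAR_C / PROTON_RADIUS_FM   # MeV/c, from delta_x * delta_p ~ hbar
confinement_energy = 3 * p_per_quark      # MeV: three quarks, E ~ p*c each

print(f"confinement/kinetic energy: ~{confinement_energy:.0f} MeV "
      f"({confinement_energy / PROTON_MASS_MEV:.0%} of the proton mass)")
print(f"bare quark masses:          ~{sum(BARE_QUARK_MASSES_MEV):.0f} MeV "
      f"({sum(BARE_QUARK_MASSES_MEV) / PROTON_MASS_MEV:.0%} of the proton mass)")
```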
Cool, huh?
I sort of get it. But what strikes me is that if certain “universal rock solid truths” aren’t as universal as proofs like the above assume, then of course the conclusions can come unglued to an arbitrary degree. E.g., if E = mc^2 doesn’t hold at all scales, all times, and all places, then 95% might look like 60% (pick a number).
And one thing that seems true is that “universal laws” are just approximations, damn good ones, but bound to not hold under some circumstances. Now, I know some people (maybe even Kevin) will be apoplectic that I could make such a claim, but I have polled some really smart people (physicists/mathematicians) who agree.
Therefore, what I’m left with is a sense that physics will proceed better when we focus less on interpreting the meanings of equations and more on challenging their universality and looking for nuance that way.
Okay, shoot me.
rafefurst
January 17, 2009 at 2:32 am
I think you are overly enamored of the fact that models can be wrong. In fact, it’s ironic because you rightly indict people who think models can be right. It seems to me that you have fallen into the same cognitive trap of categorizing models as “correct” or “incorrect”. It’s always a matter of precision, confidence, and applicability.
The precision and confidence levels for the standard model of particle physics (and relativity for that matter) are both incredibly high over an incredibly wide range of energies, distances, and space.
Is there some purpose you have in mind for which they are insufficient? I agree that anyone who preaches an equation as truth should be ridiculed. But I also think anyone who refuses to use the best current model for its intended purpose because it might be “wrong” should be ridiculed as well.
Wilczek won the Nobel because his equations made novel _predictions_ about what would happen in a variety of experiments, not because the equations themselves were nice. It’s hard for me to see how you can argue with that.
kevindick
January 17, 2009 at 4:40 am
That’s fair. I didn’t mean to imply that all models are equally wrong.
My point is that almost all models ever created have been shown to be over-simplifications at some point, and it would be foolish to think that just because the standard model hasn’t yet been shown to be one, it never will be.
Additionally, when models are eventually qualified to bring them back into alignment with data, we call that a scientific breakthrough.
Thus, when we are stuck and looking for a breakthrough (as we have been for many years now on grand unification), wouldn’t it make sense to revisit some of the assumptions that haven’t been challenged in a while? Why would the standard model be sacrosanct just because it works really really well? Isn’t it possible that it breaks down under certain conditions? And if it does, couldn’t that exploration yield a breakthrough on unification?
Relatedly, I question the assumption that there ever will be a “final theory”. It could just be turtles all the way down, fractally speaking.
rafefurst
January 19, 2009 at 4:33 am
I’m not saying you’re wrong about models. I’m saying that you’re attacking a straw man.
What Wilczek is writing about is what he proposed BACK IN THE 70s. It was precisely the sort of breakthrough you describe: it brought the then-standard model back into agreement with data and made novel predictions that turned out to be right. He’s just trying to explain this to today’s lay audience.
Moreover, I think if you polled working physicists, 90% would say there’s at least a 90% chance that there will be very substantial revisions to the Standard Model (circa 2008) within the next 50 years.
I’m trying to get you to update your assessment of whether real scientists fall into the model=truth fallacy. Most don’t.
kevindick
January 19, 2009 at 5:01 am