Wednesday, December 06, 2017

The cosmological constant is not the worst prediction ever. It’s not even a prediction.

Think fake news and echo chambers are a problem only in political discourse? Think again. You find many examples of myths and falsehoods on popular science pages. Most of them surround the hype of the day, but some of them have been repeated so often they now appear in papers, seminar slides, and textbooks. And many scientists, I have noticed with alarm, actually believe them.

I can’t say much about fields outside my specialty, but it’s obvious this happens in physics. The claim that the bullet cluster rules out modified gravity, for example, is a particularly pervasive myth. Another one is that inflation solves the flatness problem, or that there is a flatness problem to begin with.

I recently found another myth to add to my list: the assertion that the cosmological constant is “the worst prediction in the history of physics.” From RealClearScience I learned the other day that this catchy but wrong statement has even made it into textbooks.

Before I go and make my case, please ask yourself: If the cosmological constant was such a bad prediction, then what theory was ruled out by it? Nothing comes to mind? That’s because there never was such a prediction.

The myth has it that if you calculate the cosmological constant using the standard model of particle physics, the contribution from vacuum fluctuations comes out 120 orders of magnitude larger than the observed value. But this is wrong on at least five levels:

1. The standard model of particle physics doesn’t predict the cosmological constant, never did, and never will.

The cosmological constant is a free parameter in Einstein’s theory of general relativity. This means its value must be fixed by measurement. You can calculate a contribution to this constant from the standard model vacuum fluctuations. But you cannot measure this contribution by itself. So the result of the standard model calculation doesn’t matter because it doesn’t correspond to an observable. Regardless of what it is, there is always a value for the parameter in general relativity that will make the result fit with measurement.
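To see what this means in formulas, here is a schematic sketch (signs and factors depend on conventions). The constant appears in Einstein's field equations as

\[ R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda_{\rm bare}\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}\,, \]

and a constant vacuum energy density \(\rho_{\rm vac}\) enters \(T_{\mu\nu}\) exactly like a cosmological constant, so that only the combination

\[ \Lambda_{\rm obs} = \Lambda_{\rm bare} + \frac{8\pi G}{c^4}\,\rho_{\rm vac} \]

is observable. Whatever the standard-model calculation returns for \(\rho_{\rm vac}\), there is a value of the free parameter \(\Lambda_{\rm bare}\) that reproduces the measured \(\Lambda_{\rm obs}\).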

(And if you still believe in naturalness arguments, buy my book.)

2. The calculation in the standard model cannot be trusted.

Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies. If that is so, then any calculation of the contribution to the cosmological constant using the standard model is wrong anyway. If there are further particles, so heavy that we haven't yet seen them, these will affect the result. And we don't know whether there are such particles.

3. It’s idiotic to quote ratios of energy densities.

The 120 orders of magnitude refer to a ratio of energy densities. But not only is the cosmological constant usually not quoted as an energy density (but rather as the square of an energy, which is the same as an inverse length squared), in no other situation do particle physicists quote energy densities. We usually speak about energies, in which case the ratio goes down to 30 orders of magnitude.
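For concreteness, here is the back-of-the-envelope conversion (in natural units, where an energy density has dimension of energy to the fourth power):

\[ \frac{\rho_{\rm SM}}{\rho_{\rm obs}} \sim 10^{120} \quad\Longrightarrow\quad \frac{E_{\rm SM}}{E_{\rm obs}} \sim \left(10^{120}\right)^{1/4} = 10^{30}\,. \]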

4. The 120 orders of magnitude are wrong to begin with.

The actual result from the standard model scales with the fourth power of the masses of particles, times an energy-dependent logarithm. At least that’s the best calculation I know of. You find the result in equation (515) in this (awesomely thorough) paper. If you put in the numbers, out comes a value that scales with the masses of the heaviest known particles (not with the Planck mass, as you may have been told). That’s currently 13 orders of magnitude larger than the measured value, or 52 orders larger in energy density.
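Schematically (and hedging on the precise numerical factors, which depend on the renormalization scheme), the one-loop contribution of a particle of mass \(m\) to the vacuum energy density has the form

\[ \rho_{\rm vac} \;\sim\; \pm\,\frac{m^4}{64\pi^2}\,\ln\!\left(\frac{m^2}{\mu^2}\right), \]

where \(\mu\) is the renormalization scale and the sign depends on whether the particle is a boson or a fermion. Because of the fourth power, the heaviest known particles dominate the known contributions, not the Planck mass.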

5. No one in their right mind ever quantifies the goodness of a prediction by taking ratios.

There’s a reason physicists usually talk about uncertainty, statistical significance, and standard deviations. That’s because these are known to be useful to quantify the match of a theory with data. If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero.
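Spelled out, the comparison would look roughly like this (a sketch, not a rigorous error analysis):

\[ \text{significance} \;=\; \frac{\left|\rho_{\rm calc} - \rho_{\rm obs}\right|}{\sigma_{\rm theory}}\,, \]

and since the theoretical uncertainty \(\sigma_{\rm theory}\) is at least as large as \(\rho_{\rm calc}\) itself, because unknown heavier particles could shift the result by an arbitrary amount (point 2), the mismatch is not statistically significant.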

In summary: No prediction, no problem.

Why does it matter? Because this wrong narrative has prompted physicists to aim at the wrong target.

The real problem with the cosmological constant is not the average value of the standard model contribution but – as Niayesh Afshordi elucidated better than I ever managed to – that the vacuum fluctuations, well, fluctuate. It’s these fluctuations that you should worry about. Because these you cannot get rid of by subtracting a constant.
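To put the distinction in symbols (a schematic sketch, not Afshordi's full argument): the mean vacuum energy density can be absorbed into the free constant of general relativity, but its variance cannot, because the variance is not a constant you are free to shift:

\[ \langle\rho\rangle_{\rm vac} + \frac{c^4\Lambda_{\rm bare}}{8\pi G} = \rho_{\rm obs} \quad\text{(adjustable)}, \qquad \langle\rho^2\rangle_{\rm vac} - \langle\rho\rangle_{\rm vac}^2 \quad\text{(not adjustable)}. \]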

But of course I know the actual reason you came here is that you want to know what is “the worst prediction in the history of physics” if not the cosmological constant...

I’m not much of a historian, so don’t take my word for it, but I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.

In this case, you can calculate the likelihood for observing a universe like our own. But the larger and the less noisy the observed universe, the less likely it is to originate from a fluctuation. Hence, the probability that you have a fairly ordered memory of the past and a sense of a reasonably functioning reality would be exceedingly tiny in such a case. So tiny, I’m not interested enough to even put in the numbers. (Maybe ask Sean Carroll.)
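If you want the schematic form of the estimate anyway (hedged; the precise setup depends on which ensemble of fluctuations you assume), the probability of a thermal fluctuation into a macrostate whose entropy lies \(\Delta S\) below equilibrium scales as

\[ P \;\propto\; \exp\!\left(-\frac{\Delta S}{k_{\rm B}}\right), \]

and for anything as large and as ordered as the observable universe, \(\Delta S/k_{\rm B}\) is an astronomically large number, which makes \(P\) absurdly small.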

I certainly wish I’d never have to see the cosmological constant myth again. I’m not yet deluded enough to believe it will go away, but at least I now have this blogpost to refer to when I encounter it the next time.

102 comments:

Matthew Rapaport said...

Thank you Dr. H. Always an interesting topic, well expressed and informative. Do not quite understand the last part but I think you are saying our universe is very unlikely to have emerged from a vacuum fluctuation... Indeed! Again thanks for an interesting post

Alexander McLin said...

A quite enjoyable blog post. My next question is: what is a "fluctuation" as used in physics? The common definition as given by Google's dictionary is "an irregular rising and falling in number or amount; a variation." Is that the same meaning physicists have in mind? In addition, what is a "vacuum fluctuation" or a "quantum fluctuation"? If you'd be so kind as to clarify or refer me to a not too technically demanding reference.

I often hear those terms used in laymen and serious discussions but nobody has really defined what they mean. Are they even well-defined concepts?

Sabine Hossenfelder said...

Alexander,

A fluctuation is a temporary deviation from the average.

Don Lincoln said...

Sabine

(a) I have ordered your book. I will read it when it arrives.

(b) If I didn't know, I would have said that this post was written by a grumpy old guy.

(c) These are my thoughts on your five points.

(1) You say that the standard model (SM) can only contribute to the cosmological constant (CC), which is true. But if you assume the entire CC comes from the SM, how do these differ? Yes, they are different things, but they are similar things, and if one contributes to the other, then this doesn't sound like a silly statement.

(2) Yes, of course it is silly to trust the SM at all energy scales. But everyone (well everyone who knows science) knows that.

(3) I guess I don't see that it is idiotic to take ratios of densities. I mean, it's not obvious that ratios of densities are the best thing, but at least it normalizes over volumes, which is a plus. And, even if you're right in your assertion, isn't 30 orders of magnitude still a bit of a problem?

(4) Ditto 13 or 52.

(5) I don't see why the ratio thing is a problem. Yes, obviously uncertainties are crucial. And if the uncertainties are such that the uncertainty spans 120 orders of magnitude, then the whole thing is silly. But the ratio tells you that there is a missing idea, calculation, or concept, somewhere in the connection between the two, correct?

There are tons of predictions that eventually fail. The one you mention here will likely be one. The metastability of the universe due to the mass of the Higgs & top is another. The taming of the quadratic divergences of the Higgs mass is a third. But your posts seem to boil down to "we don't know anything for sure, so we can't say anything." And of course we are ignorant. That's why people like you and me still have jobs.

I do look forward to receiving your book, so I can better understand your point. I'm trying to see something deeper and more insightful than the obvious.

naivetheorist said...

"I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.". the problem with deciding which prediction is the worst, is that you should only consider predictions made by reasonable theories. there are lots of bad predictions made by theories that are not reasonable. also, one could say that the 'prediction' that Schrodinger's cat is half dead-half alive is bad but is this a prediction rather than an interpretation? i am tempted to add the prediction that accepting Christ as your savior results in your getting into heaven is a bad prediction because, like string theory and the multiverse, it is untestable and must be taken as an act of faith. LOL

akidbelle said...

Hi Sabine,

I thought Lambda = 2 PI/3 R_U^2.

Best,
J.

Sabine Hossenfelder said...

Don,

I sound like a grumpy old guy? Must be making progress with my writing!

1) Well, don't assume it. Why would you?

2) Then better not forget mentioning it.

3) What's the point of normalizing something if you take a ratio anyway?

4) Who is grumpy here?

5) As I tried to get at later, the actual problem is the size of the fluctuations. I really think it's important to be clear on what the problem is. If you zoom in on the wrong problem, you'll be looking for the wrong "missing concept."

(Note that I didn't say there is no problem here.)

Uncle Al said...

"its value must be fixed by measurement" Accepted theories’ testable predictions are empirically sterile: proton decay, neutrinoless double beta-decay, colossal dark matter detection attempts. A 90-day experiment in existing bench top apparatus falsifies accepted theory. But, the Emperor cannot be naked!

https://www.forbes.com/sites/startswithabang/2017/12/05/how-neutrinos-could-solve-the-three-greatest-open-questions-in-physics/
https://i.pinimg.com/736x/45/5c/84/455c847510f593d06efef3e83a0aac06.jpg
...Ars gratia artis

The answer is where it should not be. Look. The Emperor is naked.

Theophanes Raptis said...

Ah, what to do! "Demiurge" was always known to be a mostly cunning entity (to clear out all those fluctuating theological anxieties.)

Louis Tagliaferro said...


Sabine said… “Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies.”

Sabine,

I enjoyed the post. I wanted to ask you to consider discussing the quoted issue in a future post. It's something I was unaware of and would like to read your thoughts on.

Don Lincoln said...

Hi Sabine,

The difference is that I >>actually am<< a grumpy old guy, not a smart, charming, and lovely young theorist. And if it ever came down to a grump contest, I'd win because I have you beat by years...even decades. Practice and all that....

To your points.

1.) Nobody is assuming. There may be other mechanisms. Clearly there are other mechanisms. But those only exacerbate the situation. Or...and I know this is antithetical to your current thinking...there is a large unknown component with an opposite sign that nearly-perfectly cancels a very large number. Yes, you don't like naturalness-based arguments, but it seems to be a compelling argument that the thinking isn't completely foolish. Not religiously perfect, of course...but not completely foolish.

2.) I always mention that bit. Ditto with the vacuum stability and the quadratic divergences and a myriad of other tensions.

3.) I think the normalization is just a clarity thing. But, you're right, it isn't necessary. And it dispenses with arguments on how to define the volume.

4.) I never claimed to be anything but. On the other hand, you rejected the 120 orders of magnitude and substituted 10 and 50, give or take. At some level, once you get above 2 - 3 orders of magnitude, it's all the same, crisis-wise. They all say "ummmmmm...you forgot something."

5. I would like to hear more about this point. The rest of the points you have made all seem pretty obvious to me and much ado about nothing. But I don't understand this point in enough detail to say much. However, I think it is likely that I could learn something about a discussion on this point.

And, as far as your closing point, I didn't think you made such a claim. I think my point is that the way you have written has conveyed a point that isn't what you intended.

The way I read what you wrote is more along the lines of arguing about various orders of magnitude and somehow showing that since people can't even agree on the numbers, maybe there is nothing to this. In contrast, I would flip this around and say that no matter how you decide to cast the problem, they all show that something is wrong. Accordingly, we should embrace the wrongness and instead focus on why to pick one way of looking at it so as to make it more likely to advance our knowledge.

While we might disagree on some points, I think we agree on the big ones. This is mostly (for me) a discussion about distilling your concerns in a way that makes it clear that there is a problem and doesn't confuse the reader into thinking that physicists are just so clueless as to not even be sure if there is a problem at all.

Or something like that.

Kevin Van Horn said...

"If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero."

I see you saved your big gun for last. Given this, do you even need to mention the other points? (BTW, I hope you expand on this point in some future blog post.)

Reminds me of this talk, "Dissolving the Fermi Paradox":

http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf

yankyl said...

This is probably an odd point to get nitpicky about but you said that

"Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies"

I'm curious about the converse: are there people who think that the standard model suffices up to arbitrarily high energies?

Jack said...

Sabine,
I often enjoy your crotchety posts, but this one is too much. You are doing a dis-service to science if you imply that the cosmological constant is not a problem worthy of scientific consideration.

I can make a rigorous prediction of the contribution to the cosmological constant linear in the up quark mass which is valid to within 20%. (For details read my papers or ask me.) The result is 43 orders of magnitude bigger than the final answer. Yes, we all know that there are other contributions. But if this does not sound like it identifies a problem worthy of further study, I do not know what else to say.

Look, all of these issues are really just motivations for our work. You have chosen to devote a large fraction of your scientific career to quantum gravity phenomenology. This choice is despite the fact that the only reliable predictions in quantum gravity – via the effective field theory – give results 50-100 orders of magnitude below observations. But you are willing to bet your life-work on some highly speculative ideas which might possibly overcome this barrier. Great. I respect your choice and I would love it if you are correct in this bet.

Others look at the great disparity between the magnitude of known contributions to the cosmological constant and the measured value and view that as a motivation for a problem worthy of study. For the life of me, I cannot see why this does not sound like a good problem to you. It looks like a great issue to study as an indicator of new physics.

So I think that, as a writer about science who connects to many audiences, you do a dis-service to the way that we look for new physics when you say that this is not a problem. It is a significant puzzle and may be important. Scientists who take it seriously and look for new physics associated with it are trying to do good science. Hopefully in the end we will find out whether their choices or your choices are fruitful.

Best,
John Donoghue
(Yes, Google says Jack as the identity, but it is John from UMass.)

Sabine Hossenfelder said...

John,

The problem is not the prediction of the average value that's supposedly wrong by 120 orders of magnitude because there is no such prediction. It's an ill-posed problem and if you think otherwise, then please explain. I wrote at length elsewhere why naturalness arguments are not good arguments to make if you don't have a probability distribution. You end up trying to solve what's not a problem and waste a lot of time.

I am doing a "dis-service" to science by pointing out that this argument is flawed and many people are wasting their time? Now that's an interesting claim.

"The result is 43 orders of magnitude bigger than the final answer. Yes, we all know that there are other contributions. But..."

Anything that comes after the "but" is irrelevant. "We" all know there are other contributions, hence the result of the calculation isn't meaningful. End of story. Why talk about it?

The reason for writing this post is that "we" doesn't include most of the people who have been told this myth.

Best,

B.

Sabine Hossenfelder said...

Kevin,

I explained that in point 2. If there are any particles heavier than the ones we have seen so far, these will make the largest contribution. You can't neglect them. Hence the uncertainty is larger than the average value.

Sabine Hossenfelder said...

Don,

1) "it seems to be a compelling argument that the thinking isn't completely foolish. Not religiously perfect, of course...but not completely foolish."

You can only solve a problem with mathematical methods if you have a mathematically well-posed problem to begin with. And this is not such a problem. I really think theoreticians should try somewhat harder. I know you are speaking about your gut feelings here, but go and try to make it formally exact. Just go and try. You'll figure out that it's not possible. If you look at the math, there is no problem.

2) Good for you. You are great. You are wonderful. You are excused from everything.

4) One can get very philosophical about this. You know, there are a lot of numbers floating around in theories that you can clump together to get fairly large factors. Here's an example: (2*Pi)^10 is about 10^8. Why an exponent of 10? Well, because there's ten dimensions in case you haven't heard. More seriously, who says how large is too large?

In any case, are you saying that 120 orders of magnitude is as bad as 13 but 13 is much worse than 3? I can't follow the logic if there's any.

5) Not sure what you are referring to here. To the fifth point in my list? This merely refers to the uncertainties mentioned in point 2) with the addition that if you'd accurately state them, it would be clear there isn't any wrong prediction because the calculation allows for pretty much any result.

In case that refers to the problem with the fluctuations, well, the thing is that while you can get rid of the average value by using the free constant that GR contains anyway, the fluctuations around the average will still be of the same order of magnitude (ie, huge) and wobble your spacetime. Now, since gravity is a very weak force, this isn't as bad as it sounds at first, but it's not entirely without consequences. (Niayesh in his paper worked out what goes wrong.)

"This is mostly (for me) a discussion about distilling your concerns in a way that makes it clear that there is a problem and doesn't confuse the reader into thinking that physicists are just so clueless as to not even be sure if there is a problem at all."

Well, I explicitly wrote "The real problem with the cosmological constant is..." etc. It's hard for me to see how this could be mistaken for "no problem at all."

Best,

B.

Sabine Hossenfelder said...

Louis, yankyl,

I'll discuss that in a future post.

Paul Hayes said...

The QFT calculations of vacuum "fluctuation" contributions to the CC seem potentially dubious to me for another reason. And apparently it's not just psi-epistemicists who might suspect that uncertainty is being mistaken for fluctuation there.

dlb said...

Regarding (1) I'd be more nuanced. Yes, according to (2) and (5) we can't calculate the contribution of vacuum fluctuations. But if we could, and if it ended up being very different from the observed value, there are two possible conclusions:
* The cosmological constant is an integration constant, and therefore can be anything. It almost cancels all other contributions, the end.
* This is a prediction that the cosmological constant is in fact not an integration constant, and some extension of GR might explain it.

To be honest, I'd have sympathy for people exploring the second option. I'm curious too.

Yves said...

"(...)
This is a prediction that the cosmological constant is in fact not an integration constant, and some extension of GR might explain it.

To be honest, I'd have sympathy for people exploring the second option. I'm curious too"

Or this one (https://arxiv.org/abs/1303.4444v3, Buchert et al.)? It doesn't pretend to "extend" GR, but takes into account the fact that a homogeneous-isotropic-at-every-scale space-time might be an over-simplified model of our universe, and predicts within this framework the emergence of an effective cosmological "constant".

@Sabine: this is a little off-topic here. But if you have already posted something about this approach (which I know is subject to debate among specialists of GR), I would be interested in reading your opinion.

Cobi said...

It's funny that you start your own post from 2016 with
"The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude."
http://backreaction.blogspot.de/2016/02/much-ado-around-nothing-cosmological.html
and here you say
"(...) And many scientists, I have noticed with alarm, actually believe (this kind of statements)."

To me this statement always appeared to be given with a wink.
Yet if you do a calculation that naively should contribute to the effective cosmological constant (although it could of course be cancelled by another coupling), it is somewhat surprising that it turns out to be almost exactly cancelled.
This is like observing two particles that have almost the same mass.
Just imagine their masses agreed to one part in 10^8.
This would be a curious observation that calls for an explanation.
Of course we don't know the distribution of particle masses over all possible universes, and it could just be by chance.
But this would not be your default assumption, I hope.

"If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero."
Can you provide a source for this calculation? I am genuinely interested.

Don Lincoln said...

That was a bit snarky. I may have to walk back from my claim of being able to "out grumpy old guy" you by virtue of my extra years.

I do not desire to be excluded from anything. All I meant is that I always say that there are substantive limitations to the calculations, most obviously that the Standard Model cannot apply to the unification scale (if, indeed, the unification scale exists).

I look forward to reading your book. Perhaps there I'll find out if I think your points are substantive and thoughtful or just a bit on the pedantic and banal side. I'm bummed that I have to wait until June.

For whatever it's worth, I am also displeased when I read science popularizations (usually by theorists) that put forward the results of a prediction as if it were gospel and without mentioning the (well known to experts) limitations. To do that is a disservice to one's readers.

Sabine Hossenfelder said...

Cobi,

It didn't really register in my brain until recently how much harm the oversimplified stories have begun to do in the community. It's one thing to write this in a blogpost, knowing that it kinda captures the essence, another thing to put it in a textbook or teach it to students leaving them to think that's the full story.

Sabine Hossenfelder said...

Don,

I think I am somewhat scared because I have now talked to quite a number of students and more senior people not in the foundations (cond mat, mostly) and they tend to think that the cosmological constant is an actually wrong prediction. (As opposed to: there is something here that we don't really understand.)

Don Lincoln said...

I have not encountered this. But, I live very deeply embedded in the particle world. It's both the good and bad side of being a researcher at a national lab.

On the other hand, I have no difficulty imagining that condensed matter types don't understand this well. But I forgive them, as my mastery of condensed matter is rather tenuous as well. My deep grasp of your particular discipline is also not what I might like it to be.

Sabine Hossenfelder said...

Don,

I don't blame condensed matter physicists for this at all. I blame myself for not realizing earlier that such myths are hard to contain and eventually will affect the community, even though it's patently obvious.

See, the people who understand the issue don't see anything wrong with making somewhat sloppy or inaccurate statements, because they think everyone knows what they mean anyway, right? It's just some kind of little joke about the worst prediction ever, haha. Then they start printing it in textbooks and in papers and students read the textbooks and the papers and you end up with a fraction of the community believing it actually is a mathematical problem. Same thing happened with the flatness problem and with naturalness (and, more recently, the bullet cluster). These stories have been repeated so often, a lot of physicists actually believe them. And whether or not they themselves believe it, they use it as a motivation for their research.

So, well. I don't consider myself pedantic. You can't write at the frequency that I do and be pedantic. Words happen. But I think every once in a while it's good to remind people of what the real story was, so I'm here to remind them.

Ari Okkonen said...

I have understood that the "dark energy" concept is brought to the cosmology to explain observed acceleration of distant galaxies in order to avoid talking about "forbidden" cosmological constant, which is equivalent as far as I understand.

Physics Comment said...

Dear Sabine,

I believe that you are missing some crucial points about naturalness and why it is an important criterion. I will not go into all of them, but there is always one very simple way to shut down such attacks on naturalness, which I describe below.

There is a very simple way to understand naturalness, say in the context of the hierarchy problem for example. Try to write down a physics theory at 10^16 GeV which gives the Higgs mass to within an order of magnitude, or even two, three, four orders of magnitude. I guarantee that you simply can not write down this theory; which numerical values do you take for the parameters? Say you choose some parameter value M1=1.52324...5657, but then you remember that you forgot to account for one (of 10,000) 3-loop diagrams. This would completely change M1. The only way to get the Higgs mass from such a theory, in the absence of some principle or symmetry which makes it natural, is to put the Higgs mass in by hand in some way. But then your theory is completely useless - its only requirement is that it gives us the Higgs mass and it has failed to do so. Exactly the same argument can be made for the cosmological constant.
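In the standard notation (a schematic sketch of how this worry is usually quantified, not a full calculation), the physical Higgs mass parameter receives quantum corrections of the order of the heavy scale,

\[ m_h^2 \;=\; m_{h,\rm bare}^2 + \delta m_h^2\,, \qquad \delta m_h^2 \;\sim\; \frac{g^2}{16\pi^2}\,\Lambda_{\rm UV}^2 \;\gg\; (125\ {\rm GeV})^2\,, \]

where \(g\) stands for a generic coupling and \(\Lambda_{\rm UV}\) for the scale of the underlying theory. With \(\Lambda_{\rm UV}\sim 10^{16}\) GeV the bare value has to be tuned against the corrections to dozens of decimal places, which is the point about the 3-loop diagrams above.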

Naturalness is not some abstract hand-wavy contentless requirement as you seem to be suggesting, but a concrete in-your-face problem that you can not avoid if you ever try to actually calculate anything. And after all, calculating things is what physics is supposed to do.

Enrico said...

Cosmological constant is not a prediction. I agree. Why did Einstein say his theory predicted an expanding universe after Hubble discovered it? He added the cosmological constant to make a static universe model, and later said it was his biggest blunder. The cosmological constant is a free parameter. It doesn’t predict anything. I don’t know what he meant. Matter must be moving away from each other to counteract gravity from pulling them all together in one big lump. Was that his “expanding universe?”

I say the worst prediction in the history of physics is Einstein’s “prediction” of an expanding universe.

Unknown said...

Leon Lederman made the claim of "worst prediction" in his popular The God Particle.

I'm slightly confused by your points 1) and 2). I thought the 120 orders claim was being used as a justification for thinking there was something beyond the Standard Model.

Sabine Hossenfelder said...

Ari,

Various types of dark energy fields try to address various types of supposed problems. Need I add that there's no evidence any of these fields exist? See why I am trying to say maybe it's a good idea to figure out what the problem is in the first place?

Sabine Hossenfelder said...

Unknown,

Yes, right. It's used as a justification. That doesn't make it correct.

Sabine Hossenfelder said...

Physics Comment,

You seem to live under the impression that the Higgs mass is actually calculable and not put in by hand. That is not so.

Tim Maudlin said...

Alexander McLin & Sabine:

What particle physicists—and people using the quantum theory—mean by a "fluctuation", and particularly a "vacuum fluctuation"— is a great question. Probably they don't mean what they seem to mean, namely that there is this physical state or thing, the quantum vacuum, that is fluctuating, i.e. changing its value. Why not? Because the standard view is that the wave function of a quantum system is complete. That means that every real physical feature of a system is reflected somehow in its wave function, so two systems with the same wave function are physically identical. If the wave function is complete, then the only way for anything in a quantum system to fluctuate or change is for the wave function of the system to change. But the quantum vacuum state does not change. It is what it is at all times. Since the quantum state is not changing nothing in the system is changing or fluctuating.

Now one way out of this conclusion is to deny that the wave function is complete. If you do, then it is at least possible for something in the system to be fluctuating even though the wave function is not. Any theory according to which the wave function is not complete is called a "hidden variables" theory. (Ignore the word "hidden": that is another linguistic error that physicists have made.) But if you ask a standard issue physicist whether she or he accepts a hidden variables account of quantum theory, she or he will deny it up and down. (BTW, the whole point of the famous Einstein, Podolsky and Rosen "paradox" was to argue that the wave function is not complete.) So any physicist who renounces hidden variables simply cannot consistently mean by a "fluctuation" or a "vacuum fluctuation" that there is anything actually fluctuating or changing.

So what in the world does it mean? Well, as we all know, the predictions of quantum theory are typically just statistical or probabilistic: they say that *if you perform a certain experimental intervention on a system* then there are various possible outcomes that will occur with different probabilities. Of course, that says exactly nothing about what is going on if you don't perform the experiment. And if you take these various possible outcomes and square them (so the positive and negative outcomes don't cancel out) and weight them by their probability, then you get a number that is called a "fluctuation". It is a sort of average squared deviation from zero when you do the experiment on the vacuum state. And on any state, the "fluctuation" of a quantity is the average squared deviation from the average result. If the average tells you approximately where you can expect the result to lie, the fluctuation tells you how much noise—how much deviation from that expected value—the data will have.
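In symbols (just restating the standard textbook definition described here), the "fluctuation" of an observable \(A\) in a state \(\psi\) is the variance

\[ (\Delta A)^2 \;=\; \langle\psi|A^2|\psi\rangle - \langle\psi|A|\psi\rangle^2\,, \]

which for a field operator in the vacuum state, where the expectation value vanishes, reduces to the expectation value of the squared operator.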

In sum, *if you do a certain kind of experiment*, the fluctuation tells you how much variation in the outcomes to expect. But on the "standard" understanding of quantum theory, this does not and cannot reflect any actual fluctuations in the system when it is not measured. The noise is the product of the measurement itself.

Probably quite a few physicists will deny what I have just written, and also deny that they think the wave function is incomplete. But they are just confused: such a position is self-contradictory.

Physics Comment said...

Hello,

Are you claiming that it is not possible to predict the Higgs mass, at least to within order of magnitude estimates, from a given natural theory? I'm sorry but this is just wrong. Take the MSSM at 10^16 GeV and set all soft masses to 1.0 TeV - this predicts a Higgs mass m \sim 1 TeV to some order of magnitude accuracy. Loop corrections do not change this prediction significantly - and that is precisely the point. Take an unnatural theory at 10^16 GeV and try to do the same. Forget about why the underlying parameters take certain values - statistics, distributions, etc. - pick them as you like - you simply can not even calculate what those values need to be to get even close to a prediction for the Higgs mass. Any prediction you get would completely change if you didn't include the 10,456th 3-loop diagram in your analysis.

You can compare this to a technically natural parameter, such as the Yukawa couplings. There are many theories of why the Yukawa couplings are hierarchical. For example, Froggatt-Nielsen models. We can propose theories in the UV, calculate the resulting Yukawa couplings, and test the predictions against experiments. You simply can not do this for parameters like the Higgs mass unless you address the problem of naturalness (for example by a symmetry).

David Schroeder said...

While I haven't read all the posts, as it's early AM, the clarity of Don Lincoln's writing and thinking, especially the 2:21 PM, Dec. 6 post, really impressed me. It reminded me of the way our 19th century president, sharing the same surname, addressed and handled contemporary issues in the political sphere of his day.

Sabine Hossenfelder said...

Physics,

I am saying that currently the Higgs mass is a free parameter and no one can calculate it. Whether there is any underlying theory that predicts it, nobody knows.

As to the rest of your comment, you seem to be saying that a theory in which you don't know how to calculate the Higgs mass is a theory you think shouldn't exist. Are you seriously trying to tell me that this is a scientific argument?

Jarek Stolarz said...

Higgs mass in SM is 125GeV.
There is a problem with SM theoretical prediction and LHC results.

Sabine Hossenfelder said...

The standard model does not predict the Higgs mass. The Higgs mass (as all the other particle masses) is a parameter determined by measurement.

Physics Comment said...

Whether there is an underlying theory that is actually part of nature and predicts the Higgs mass, indeed nobody knows. But whether a given theory, be it in nature or purely a construction of our mind (as most theoretical physics naturally is), can predict the Higgs mass we certainly do know - I just gave you an example of one (the MSSM with SUSY breaking at 1 TeV).

A theory whose aim is to explain the Higgs mass in which you can not calculate the Higgs mass (even to within multiple orders of magnitude) is not a scientific theory at all. It contains no more information (regarding the Higgs mass) than the sentence: 'The Higgs mass is 125 GeV'. Of course nature is not obliged to us to be calculable or understood, but if we are to make scientific progress in understanding the Higgs mass we inevitably have to construct a theoretical framework that will in one way or another address the naturalness problem. This is why it is such a central problem scientifically and practically.

The philosophical type arguments usually adopted in the literature about the likelihood of a given choice of parameters in the UV, I believe, are correct. But certainly one can debate them - as you do. But my point is that if you want to avoid such debates you can focus on the fact that there is no way to make practical scientific progress in understanding the Higgs mass (or the cosmological constant) without addressing the issue of naturalness in some way. Even anthropic arguments are addressing naturalness in some way (whether you like them or not). But not worrying about naturalness at all is the same as giving up on scientific progress - and that is fine if you are happy to sit and philosophise about it - but if you are actually a working scientist who has not given up on understanding the universe this is not an option.

Sabine Hossenfelder said...

Physics,

That you believe a theory more fundamental than the standard model must allow you not only to calculate the mass of the Higgs, but must allow you to do so a) easily and b) using currently known techniques for quantum field theories is just that, a belief.

"Of course nature is not obliged to us to be calculable..."

You can stop right there. This is sufficient to demonstrate there is no scientific argument to support your conviction.

Don Lincoln said...

Thanks David...

Sabine...

Reading Physics Comment's comments, I think you might have been hasty. Reading the general tenor of his or her comments there was an implied missing word. It would appear that they meant "easily calculable."

BTW...you have >>WAY<< better educated commenters than I get on my posts on my own internet contributions. But, then again, you write for a much more academically-gentrified audience.

Sabine Hossenfelder said...

Don,

Well, in case that was what he or she meant, it was clearly not what they wrote. I am not in the business of reading other people's thoughts.

Even so, yes, one can rephrase "sensitivity to UV parameters" as "sucks to make any IR calculation," but that doesn't mean anything is wrong with a theory that has such properties. It also sucks to have commenters who hide behind anonymity to talk down to me.

Don Lincoln said...

Perhaps I am obtuse. I didn't read it as talking down. I read it as disagreeing.

I disagree with you on some of your core points too. But that doesn't mean I disrespect you.

Regarding anonymity, well those are the breaks. You voluntarily write a blog, post it on the internet and solicit comments. If anonymity and criticism bother you, I question your choice to do what you do. These are relatively tame and informed. You should see the comments I receive to some of my efforts. I never knew some of the things that have been claimed about my mother...

Sabine Hossenfelder said...

Don,

Well, if someone comes here and proclaims "you are missing some crucial points about naturalness" and then goes on to tell me some standard folklore, I consider that talking down. How about "what do you think about..." Or "have you considered the argument..."? Or maybe just, to put it bluntly, "I chose to believe in naturalness because..."

I have no problem with criticism per se, but it is exceedingly tiresome to have to repeat the same things over and over again because the "critics" don't take the time to think about what I am saying.

None of which is to say that I have a problem with your comments. I get the impression you're not seeing the problems I see. That's unfortunate, but I think it's my fault.

I am generally not in favor of anonymity in public debate unless a safety concern calls for it. I consider it pure cowardice. Whoever is this commenter clearly has a background in particle physics, yet they don't have the guts to sign their naturalness obsession with their name. (It's a different issue with peer review. I am not in favor of anonymity in peer review either, but the way academia works right now I think it's the lesser evil.)

Uncle Al said...

Rigorous disciplines triumph at will. Structure and predictions are mathematical. The world's organic bulk is synthetic chemistry whose quality is %-yield. Theory fails.

Managerial fine-grain control and mathematical certainties exclude serendipity (insubordination). Embrace rule enforcement, produced units, and minimized risk (real costs/modeled gains). Fifty years of sublime theory replaced tangibles with advertising. Unending parameterizations rationalize mirror-symmetry failures. Pure geometric chirality tests are ridiculous versus empirically sterile accepted theory excluding them.

0.1 nm³ chiral emergence volume enantiomorphic Eötvös (U/Washington) or microwave rotational temperature (Melanie Schnell) contrasts are quickly executed in existing apparatus. Test mirror-asymmetric vacuum toward hadrons. Accepted theory is already ridiculous versus observation. %-Yield look.

Lee McCulloch said...

Enjoyed reading Sabine versus Don (and Physics-person). I am reminded of Bohr's parable on how to embrace Complementarity:

In an isolated village there was a small Jewish community. A famous rabbi once came to a neighbouring city to speak and, as the people of the village were eager to learn what the great teacher would say, they sent a young man to listen. When he returned he said,

'The rabbi spoke three times.
The first talk was brilliant - clear and simple. I understood every word.
The second was even better - deep and subtle. I didn't understand much, but the rabbi understood all of it.
The third was by far the finest - a great and unforgettable experience. I understood nothing, and the rabbi himself didn't understand much either'.

How can the consensus of the less informed ("the Wise crowd" indeed) ever fairly assess the competing theories of those blessed with clearer insight?

Don Lincoln said...

Well, we each have a level that we will tolerate and a level we don't. I have teenagers, so I have remarkably thick skin.

I agree with you on some of your points, but the points I agree with you on, I think are blindingly self-evident. Then there are the points I don't understand that you are making. I vacillate between thinking they are of the first category, or disagreeing with you. And, occasionally, I am simply perplexed.

I >>DO<< believe that any prediction needs an uncertainty. Ditto measurement. And, to a much lesser degree, the probability distribution function you mention. Since that particular goal is likely out of reach, I put it aside as being unachievable for the moment and therefore not a reasonable thing to ask for.

Maybe it's just that I come at it from an experimental point of view. I view all theories as suspect to a greater or lesser degree. Nearly all predictions these days come from perturbative expansions, thus they are inherently flawed, although we have some hope that we can assign uncertainties to these problems.

Then there is the problem that we are pretty damned sure that there are new phenomena not currently in our theory. I'm essentially certain on that. (Yes, it is a faith statement, but it's not a faith statement that requires boldness.) Accordingly, I realize that all calculations have this huge limitation as we push into realms not constrained by experiment. That doesn't much bother me. I'm used to being confused and cautious about extrapolations. I presume you are as well.

But, that said, I see some value in the guidance that the aesthetics of such things as naturalness bring to the conversation. (And, I should be cautious as the term naturalness can have a breadth of meanings depending on context and I am not being particularly careful here.) No matter the answer, there is a problem inherent in the significant disagreement between the cosmological constant (if, indeed, it is a constant) and calculations from QFTs about latent energy. Or in the quadratic divergences of the Higgs, etc.

You (properly) pointed out that the 120 orders of magnitude number is arbitrary and depends on how you cast the problem, but your other options also had large discrepancies. These point to the fact that there is an unexplained problem and the naturalness aesthetic suggests paths forward. Obviously, those suggested paths might be wrong. The answer could be something not-yet-imagined. In fact, it probably is.

So I am not offended to the degree that you are. Maybe I just don't understand or maybe I'm just not as philosophically...oh...I don't know...rigid, for the lack of a better word. But, then again, I haven't read your book. I reserve the right to change my mind when I receive it.

Lee McCulloch said...

I note the early response to the blog defining a fluctuation as a temporary deviation from the average. I was wondering whether the fact that such a definition requires a probability measure renders it problematic in the same sense in which you argue against naturalness.

David Schroeder said...

What a pity that our Washington politicians nixed the Superconducting Supercollider in Waxahachie, Texas, as Uncle Al mentioned in the linked thread on Niayesh Afshordi's analysis of the Cosmological Constant issue. If it were ever to be revived, advances in technology might even allow boosting of its originally planned energy of 40 TeV to even higher values. This is one area of government spending that I don't mind paying taxes for.

Sabine Hossenfelder said...

Don,

My point is this. Arguments from beauty - and that includes naturalness - have not historically been successful. In the cases where they have been successful (think: SR, Dirac equation, GR), it turns out the underlying problem was actually a problem not of ugliness but of actual mathematical inconsistency. I conclude that we are more likely to make progress if we make an effort to formulate problems that are mathematically well-posed.

I think the problem with the fluctuations in the CC is a well-posed problem (also an understudied one). The problem with the absolute value is not. I have a hard time seeing the logic: when you renormalize, eg, bare masses, you subtract infinities from infinities and just define the residual to comply with measurement, and you don't consider that a problem. But when you do the same for the CC, the argument that the terms you subtract from each other are not individually measurable suddenly, for some reason, doesn't carry weight.

Yeah, sure, there's always a chance that when you follow your sense of aesthetics you'll hit on a good idea. But it doesn't seem to be terribly successful and it certainly doesn't help if theorists erroneously believe the problem they work on is any more than an aesthetic itch.

Let me not forget to add that in my book it counts as deception to let members of the public think these studies are based on any more than aesthetic misgivings. If particle physicists had been honest, they'd have said "We believe that the LHC will see new particles besides the Higgs because that would fit so well with our ideas of what a beautiful theory should look like."

Sabine Hossenfelder said...

(Read "in my book" as "in my opinion". Seems a figure of speech I should stop using.)

Uncle Al said...

@Don Lincoln Arguments cannot exceed their postulates. Noether's theorems plus evident universal intrinsic symmetries evolve physical theory. General relativity lacks conservation laws. Intrinsic symmetries are inapplicable (arXiv:1710.01791, Ed Witten).

Physics fundamentally denies a left-handed universe. Emergent symmetry geometric chirality transforms beautiful equations into horrors. Parameterize after beauty derives a beast.

Baryogenesis’ Sakharov conditions are a left-handed universe toward hadrons. Test for a vacuum trace left foot with divergence of fit of calculated maximally divergent paired shoes. Crystallographic enantiomorphs measurably violate the Equivalence Principle, on a bench top. But that is impossible! - as is a triangle whose interior angles sum to 540°, yet there we live.

Wes Hansen said...

“Another real-world manifestation of implicit memory is known as the “illusion-of-truth effect:” you are more likely to believe that a statement is true if you have heard it before – whether or not it is actually true. […] The illusion-of-truth effect highlights the potential danger for people who are repeatedly exposed to the same religious edicts or political slogans.”

http://www.eagleman.com/incognito

David Eagleman is a neuroscientist; he runs the neurolaw lab at Baylor College of Medicine and has written a number of really good books!

Don Lincoln said...

I do think that renormalization is an ugly business and represents some dodgy thinking.

Domenico Barillari, PhD, C-Nucleonics company said...

Hello Sabine

Been a quiet follower of your blog for ages and finally motivated to talk... thank goodness an expert, on a well-respected blog, decided to finally bring this forward. Cannot tell you how appalled I was as a graduate student to see this kind of claim about some estimate of lambda creeping in - even into textbooks - which shall remain nameless because I liked them otherwise, and I would reveal my age! When Sean Carroll these days, whom by the way I also like very much as a public expositor, uses this item as if it could ever be a serious talking point with audiences, I turn green once again. I think my best QFT teachers had the wisdom, looking back now over decades (when dinosaurs were still roaming!), to keep a safe distance between any discussion of the nice, visible Casimir effect, which was just then appearing in actual precision lab measurements, and a wild grab at Einstein's lambda that only a desperately optimistic reading of a "real vacuum scale" of QFT as we know it now might very naively permit!
signed - a finally content surfer DKB

Enrico said...

Don
All theories are wrong, some are useful. The problem is naturalness, aesthetics, symmetry. They are not physical theories but mathematical tools to aid calculation. They are useful to experimental physicists but to call them a theory is just silly. Theoretical physicists without any new theory but plenty of mathematical tricks to calculate experimental data are like theologians without a god.

Phillip Helbig said...

Sorry for the late comment---I was at the Texas Symposium on Relativistic Astrophysics in South Africa---but it is never too late to remind people to read the wonderful paper by Bianchi and Rovelli, which does a good job of pointing out what a non-problem the cosmological-constant problem is. In general, physicists would probably be better off by listening more to Carlo. :-)

What bothers me most is that those who think that it really is a prediction and that it is bad hardly ever assume that there is anything wrong with the basis of the prediction, but rather that it points to something wrong with GR, some new physics, etc.

My impression is that this is often touted as a problem in the (semi-)popular literature but, although too many still believe it, among people who actually work in cosmology there is a growing consensus that this is no real problem. Something similar is happening with respect to the flatness problem, but progress there has been somewhat slower.

Phillip Helbig said...

It's funny that you start your own post from 2016 with
"The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude."
http://backreaction.blogspot.de/2016/02/much-ado-around-nothing-cosmological.html


Later on in the post referenced above, Sabine writes:

"Regarding the 120, it's a standard argument and it's in its core correct"

If you read the whole thing in context, though, this is more of a teaser used as a jumping-off point for other things, but to be fair this might not be completely clear to all.

Phillip Helbig said...

"I have understood that the "dark energy" concept is brought to the cosmology to explain observed acceleration of distant galaxies in order to avoid talking about "forbidden" cosmological constant, which is equivalent as far as I understand."

No. Observations indicate there is accelerated expansion. The idea of the cosmological constant has been around for literally more than 100 years. There is no a priori reason that it should be zero, and many arguments for the fact that it should not be zero. Fit the parameters to the observations. One gets a value for the cosmological constant which explains them, and one also gets the same value for the cosmological constant which is required from other observations. That is why the current standard model of cosmology is called the standard model.

This should be the end of the story, at least until some observations indicate that something is missing. But none do.

Some people don't like the cosmological constant because they don't know what it is. Interestingly, none of these people claim not to like gravity because we don't know why the gravitational constant is non-zero. So various other explanations, such as "quintessence", are invented. "Dark energy" (a really stupid name; as Sean Carroll pointed out, essentially everything has energy and many things are dark; sadly, his much better "smooth tension" never caught on) is a generic term for these other ideas (perhaps including the cosmological constant as well); in general, the value of this additional term, unlike the cosmological constant, can vary with time, can be coupled to ordinary matter and/or dark matter, etc.

But there is no observational evidence that the cosmological constant is not sufficient to explain all observations. While it might be interesting to investigate other ideas, there is no hope of finding observational evidence for them as long as the cosmological constant fits the data.

Somewhat related to this is the question as to which side of the Einstein equation dark energy belongs. Is it a property of space, essentially an integration constant, as in Einstein's original idea? Or is it some substance with an unusual equation of state (pressure equal in magnitude but opposite in sign to the density in the appropriate units) which has the same effect (an idea first suggested, as far as I know, by Schrödinger)?
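In the usual notation, that equation of state is (a standard textbook relation, stated for completeness)

\[ w \;\equiv\; \frac{p}{\rho c^2} \;=\; -1 \qquad\Longleftrightarrow\qquad p = -\rho c^2\,, \]

which is what makes such a substance indistinguishable, at the level of the Friedmann equations, from a constant \(\Lambda\).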

Phillip Helbig said...

Cosmological constant is not a prediction. I agree. Why did Einstein say his theory predicted an expanding universe after Hubble discovered it? He added the cosmological constant to make a static universe model, and later said it was his biggest blunder. The cosmological constant is a free parameter. It doesn’t predict anything. I don’t know what he meant. Matter must be moving away from each other to counteract gravity from pulling them all together in one big lump. Was that his “expanding universe?”

I say the worst prediction in the history of physics is Einstein’s “prediction” of an expanding universe.


This is wrong on many levels.

Einstein originally introduced the cosmological constant with a specific value because he wrongly thought---as did many, if not most, at the time---that the universe is static on large scales. After the expansion was discovered, he thought it better to drop the term. In that sense, GR would have predicted expansion (or contraction) because there are no static models without a positive cosmological constant. However, before the discovery of expansion, others had presented models based on GR with a cosmological constant (with a generic value) which do expand. Whether Einstein really said that it was his biggest blunder is not clear, and even if he said it, it is not clear what he meant by it. At the time, observations were not very good, so Einstein later favoured models without the cosmological constant, especially the Einstein-de Sitter model since it is mathematically simple, but also one with a large matter density since it is spatially closed.

Sabine is talking about the value as predicted from particle physics. This has essentially nothing to do with the cosmological constant as Einstein used it.

Yes, matter must be moving apart or contracting unless there is a special value of the cosmological constant. Yes, that is the expanding universe. It is only accelerated expansion which needs a cosmological constant, though.

Enrico said...

"GR would have predicted expansion (or contraction) because there are no static models without a positive cosmological constant."

GR would have predicted static or expansion or contraction or accelerating expansion or decelerating expansion depending on the value of the cosmological constant - positive, negative or zero. That's why it's a free parameter.

"Yes, matter must be moving apart or contracting unless there is a special value of the cosmological constant. Yes, that is the expanding universe."

Moving matter is not the expanding universe. Newtonian mechanics predicts moving matter without spacetime expansion. Moving matter can be red or blue shifted. Expanding universe is just red shift.

CapitalistImperialistPig said...

Tim M. - I think you are confusing yourself with your talk of the 'completeness' of the wave function, even though you clearly realize that the wave function does not completely determine the result of a measurement of the system. If you want to say that the wave function is complete, you have to realize that that doesn't mean it tells you everything you might want to know about a quantum system's future; instead, it means that it tells you all you CAN know about the system's future.

The reality of quantum fluctuations, though, as demonstrated in measurements and calculations, has been known at least since the 1940's.

Phillip Helbig said...

"GR would have predicted static or expansion or contraction or accelerating expansion or decelerating expansion depending on the value of the cosmological constant - positive, negative or zero. That's why it's a free parameter."

Yes, but the value for stability must be infinitely fine-tuned. This was pointed out by Eddington. Technically, in the language of dynamical systems, it is an unstable fixed point. Perturbing the solution slightly leads to expansion or contraction. (Of course, in a perfect Einstein universe, there is no way one can perturb it.)

The Einstein-de Sitter model is an unstable fixed point in exactly the same sense. This fact was used against the static Einstein model, but in favour of the Einstein-de Sitter universe ("the universe must be exactly Einstein-de Sitter because if not it would evolve away from it"---an argument often made about our real universe, though of course it is only approximately described by any Friedmann model).

Yes, your description of the expanding universe is technically more correct, but my simpler version was to put the original poster on the right track, rather than confuse him more. :-) By the way, though I am in the "expanding-space camp", there are colleagues who don't think that this is a valid way of looking at things. Of course, at the end of the day, what matters is whether one calculates correctly.
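
(A minimal sketch of the Eddington instability mentioned above, added for illustration and assuming pressureless matter with $c = 1$: a static solution requires $\dot a = \ddot a = 0$, which by the Friedmann equations forces

$$ \Lambda = 4\pi G \rho, \qquad k = +1, \qquad a^2 = \frac{1}{\Lambda}. $$

A small increase of $a$ dilutes $\rho \propto a^{-3}$ while $\Lambda$ stays fixed, so $\ddot a > 0$ and the model runs away into expansion; a small decrease does the opposite. Hence the unstable fixed point.)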

Phillip Helbig said...

"GR would have predicted static or expansion or contraction or accelerating expansion or decelerating expansion depending on the value of the cosmological constant - positive, negative or zero. That's why it's a free parameter."

Just to be clear (I am not implying that this is what you meant), it is not the case that there is a simple relationship between "static or expansion or contraction or accelerating expansion or decelerating expansion" and "positive, negative or zero". If the cosmological constant is negative, then a) there is always deceleration and b) after initial expansion (in all cases, I am assuming expansion now; the equations are completely time-symmetric) there is contraction. If the cosmological constant is zero, there is always deceleration (unless there is no ordinary matter (nor radiation) as well) and the ultimate fate depends on the value of the density parameter or (equivalently, but only in this case) the spatial curvature. If the cosmological constant is positive, then there is always deceleration at first after the big bang (unless there is no ordinary matter (nor radiation) as well). If it is low enough, there will be no acceleration and the universe will collapse. The limiting case is that it reaches the Einstein static model after an infinite time. If it is larger, the deceleration changes to acceleration and the universe expands forever. (There are also the non-big-bang cases that the universe was in the static state infinitely long ago then accelerates forever, and models which contracted in the past to some minimum (possibly zero, in the case that there is no matter) size before expanding forever.)
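
(The case analysis above can be read off from the acceleration equation, shown here as an illustration for pressureless matter:

$$ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,\rho_m + \frac{\Lambda}{3}, \qquad \rho_m \propto a^{-3}. $$

At small $a$ the matter term dominates, so every big-bang model decelerates at first; at large $a$ the matter term dies away and the sign of $\ddot a$ tracks the sign of $\Lambda$, so a negative $\Lambda$ always decelerates and eventually recollapses, $\Lambda = 0$ leaves the fate to the density and curvature, and a sufficiently large positive $\Lambda$ turns the early deceleration into acceleration.)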



Tim Maudlin said...

CaptialistIP-

It is not, of course, "my" notion of completeness, but the terminology introduced in the EPR paper. It is a perfectly clear notion for purposes of foundational analysis, and should not be confused with what you seem to have in mind, which has to do with either predictability-in-principle or predictability-in-practice.

In a complete theory, as EPR defined it, every real physical feature or aspect of a system at a time is reflected somehow in its theoretical description. In this case, we are considering the theoretical description to be its wave function. If a theory is incomplete, then the theory is missing something, and needs to be completed by postulating a more refined theoretical description, one that captures the missing physical features. Every presently existing physical theory is of course incomplete in this sense, since we have no Theory of Everything. But also a theory could be incomplete even in a restricted domain. For example, a theory of electromagnetism that uses only the electric and magnetic fields in its electro-magnetic description of a system is incomplete, as the Aharonov-Bohm effect demonstrates.

What has this to do with predictability, i.e. whether the theory can be used to predict the outcomes of measurements, or in general the future evolution of the system? Nothing direct.

A complete theory can fail to yield predictability because the fundamental dynamical laws are stochastic rather than deterministic. In that case, two physically identical systems at a time in physically identical circumstances can evolve differently. That is the usual picture of quantum theory, as codified by von Neumann, with a fundamentally indeterministic and hence unpredictable collapse of the wave function, or, in short, "God plays dice". The standard picture, as Einstein saw, was that QM is complete but indeterministic, with the indeterministic bit somehow associated with "measurement". This leads to the "measurement problem" in one guise (i.e. what physically distinguishes these special indeterministic “measurement” interactions from plain vanilla everyday deterministic interactions described by a Hamiltonian?). The GRW collapse theory has this dynamics, but with no measurement problem because the collapses are not particularly associated with “measurements”. That's why Bell was so impressed with it.

What EPR showed was the very thing that had bugged Einstein all along, namely that the standard Copenhagen approach not only had to posit indeterminism (God plays dice), but also non-locality ("spooky action-at-a-distance"), since in their experimental situation the collapse not only changes the physical state of the local particle but the distant particle as well. Their whole argument is that "the quantum-mechanical description is *not* a complete description" because that would entail non-locality. It is a bit buried in the paper because they did not think anyone would actually endorse non-locality.

So a stochastic dynamics can produce a theory that gives only probabilistic predictions but is still complete, in Einstein's sense of “complete”. What the EPR argument shows is that it also must be non-local. To preserve locality and get rid of spooky-action-at-a-distance, you have to have a deterministic theory, at least in the EPR setting. This is where Bell takes up the question: is it possible to preserve locality even granting the incompleteness of the wave function, which is the only hope you have.

(Con't)

Tim Maudlin said...

A deterministic theory, on the other hand, must give non-probabilistic predictions from a complete initial description of a system. Two systems that end up differently (e.g. giving different outcomes of a “measurement”) must start out differently. Any practical inability that we have to predict the outcome must be explained by our inability to determine the exact initial state. This is illustrated by Bohmian mechanics or pilot-wave theory. There, the wave function (of the universe) evolves deterministically, linearly, unitarily, and predictably all the time, but we can't predict what the system will do from just that information because that information is incomplete. According to this theory there is another physical fact about the system (in fact a manifest rather than a "hidden" one)—namely the actual particle positions—that is needed to fill out the theoretical description. This is by far the cleanest, most highly developed, and most precise understanding of quantum theory, but most physicists reject it. In this theory, if a system such as an atom is in its lowest energy state, and the wave function is stationary, then the particles are not "fluctuating" or "jumping around": they too are static.
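
(A one-line illustration of the last point, not in the original comment, using the standard pilot-wave guidance equation: writing $\psi = R\, e^{iS/\hbar}$, the particle velocity is

$$ \mathbf{v} = \frac{\nabla S}{m} = \frac{\hbar}{m}\,\mathrm{Im}\,\frac{\nabla \psi}{\psi}, $$

and for a real stationary ground state such as hydrogen's $\psi_{100} \propto e^{-r/a_0}$ the phase gradient vanishes, so the electron simply sits still.)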

The attempt to keep the wave function as a complete description but get rid of the collapse leads to Many Worlds, as Schrödinger pointed out (and rejected as absurd) in his "cat" paper, his response to EPR.

In no case—collapse, Bohm, or Many Worlds—is anything in the vacuum state fluctuating. That was my point. The talk of "fluctuations" is not about any physical process going on in the vacuum state, but about the character of the interaction of the vacuum state with a very non-vacuum physical apparatus we call a "measuring device". But as long as no such physical system is around, e.g. in interstellar space, nothing is fluctuating.

Now I don't know if this clears up your confusions, because I'm not sure what they are. You said that I was confused without specifying anything wrong in my post. What is wrong in yours is equating the question of the completeness of a theory with the sorts of predictions it makes, on the one hand, and with the sorts of predictions we can make with it (which depends on how much of the physical state we can actually know) on the other. So if you can explain why you think I am "confusing myself", or point out an error in this post or the last, that might help us sort this out.

Gabriel said...

@Tim M. When you say that "the talk of fluctuations is not about any physical process going on in the vacuum state", does that imply that, in your view, the solution to the cosmological constant problem proposed by Unruh is not correct? In his recent paper he literally takes the "vacuum fluctuations" as a form of fluctuating energy that gravitates.

CapitalistImperialistPig said...

Tim M. - Perhaps instead of saying that you were confusing yourself, I should have just said that I don't understand your point. In any case, I think it is misleading to attribute the indeterminacy to the measurement process. An electron orbiting a proton in a hydrogen atom experiences the vacuum fluctuations of the electromagnetic field, and they produce the Lamb shift. Many other examples exist that show the reality of vacuum fluctuations.

The wave function is a useful idealization, but in practice physicists deal with Lagrangians and what can be calculated in approximations.

Perhaps you would like to put QM into the Procrustean bed of your deterministic philosophy, but it's not a good fit.

CapitalistImperialistPig said...

@Tim M. - You should think about what the wave function says about the vacuum field, and what it says is that at small scales that field changes very rapidly in space and time. If that doesn't sound like a vacuum fluctuation to you, then I give up.

Paul Hayes said...

CapitalistImperialistPig,

Tim is right about vacuum fluctuations but wrong about EPR and 'the' CI. And pilot waves may be "by far the cleanest and most highly developed and precise" psi-ontic 'understanding' of quantum mechanics but it's still spooky and it's still psi-ontic.

Tim Maudlin said...

Gabriel- Since I have not read Unruh's recent paper I cannot comment. The point I am making is a simple logical point. The vacuum state itself is stationary: it does not change with time. If it is complete, which most physicists would insist on, then it just immediately follows that nothing is fluctuating. That's a simple, one-step argument. If Unruh really requires something to fluctuate, then either he has adopted a very idiosyncratic interpretation of quantum theory, which I can't even describe, or he is out of luck.

Sabine Hossenfelder said...

Tim, Gabriel,

Unruh indeed talks about fluctuations around the average - the same fluctuations that I am referring to (and that Niayesh is referring to). I think you have run into a problem of vocabulary here. The vacuum expectation value is often attributed to "fluctuations" but that's just a fancy way of referring to certain types of sums. There isn't really anything fluctuating here - the result is a constant. But this constant is only an average value around which there should be actual fluctuations.

Note that this constant does *not* appear in usual QFT (which has been referred to above): not only can you renormalize it away, but even if you don't, it has no effect if space can't curve in response to it.
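
(To make the vocabulary point concrete, a standard free-scalar-field example, added for illustration:

$$ \langle 0|\phi(x)|0\rangle = 0, \qquad \langle 0|\phi(x)^2|0\rangle = \int \frac{d^3k}{(2\pi)^3}\, \frac{1}{2\omega_k}, \qquad \omega_k = \sqrt{k^2 + m^2}. $$

Both are time-independent; the second, divergent constant is the quantity that the phrase "vacuum fluctuations" usually labels. It is a variance, not something that changes in time.)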

Sabine Hossenfelder said...

PS: I wrote about Unruh's paper here. I've been thinking about this paper a lot after writing the post and haven't changed my mind.

Tim Maudlin said...

Sabine,

Now you have introduced yet another concept into the discussion: the expectation value. Of course that is "constant", but as an average you get that for free. To note that the expectation value for a given state does not fluctuate is to note a triviality.

The question is really very simple: take a system in the vacuum state over some period of time. Is there any physical magnitude in that system that is fluctuating, i.e. changing, over that period of time? If you think that the wave function is complete, then the answer trivially must be "no". So if you think the answer is "yes", as you appear to, then you must think that the wave function is not complete, i.e. you endorse (consciously or unconsciously) a "hidden variables" theory.

Since most physicists deny that they hold a hidden variables theory, they are perforce required to say that there is nothing actually fluctuating or changing in the vacuum state. This is just simple logic.

If you want to insist that something is really fluctuating, as you seem to, then please answer these diagnostic questions:

1) Do you regard the wave function of a system as a complete physical description of it, that is, as not leaving out any physical property that the system has?

2) Does the wave function of the quantum vacuum change with time?

Just "yes" or "no" for each will do. If you think that either questions is somehow vague or ambiguous, point that out and I sharpen it up.

Sabine Hossenfelder said...

Tim,

The vacuum energy (that this post is about) is time-independent. It is its average value. Yes, that's trivial. Please excuse my futile attempt at being polite.

I think of quantum mechanics (and qft) as an effective theory in which degrees of freedom (call them hidden variables) have been neglected. So, no, I do not regard the (usual) wave function as the complete description. I add usual because it may well be that whatever is the more fundamental theory also has a wave-function.

As to your question 2): the vacuum state isn't uniquely defined, so it's impossible to answer.

Tim Maudlin said...

Sabine,

This is getting very interesting, then. Of course, we all know that present theory is not the final theory, and so is not, in that sense, complete, but we can still get at an important part of how you are thinking here. So granting that there are more degrees of physical freedom than are represented in present theory (so this is, I guess, part of the "unknown sector" of physics we haven't figured out yet), you seem very confident that in what we call the vacuum state (or a vacuum state, if that is degenerate) these unknown degrees of freedom are actually fluctuating. Do you have any idea of what drives these fluctuations? Is the underlying dynamics linear? Unitary? Do you think of the probabilistic nature of quantum predictions as due to these fluctuations? Or is all that just a guess without any real grounds? Do you think it is actually impossible that these unknown physical degrees of freedom are stationary too?

The EPR paper called into question the completeness of the quantum-mechanical description, and was roundly rejected by Bohr et al. Do you therefore think of your view as more on Einstein's side of that debate? Just curious about how you are thinking about all this.

Sabine Hossenfelder said...

Tim,

I don't really see the relation of your comments to the argument of my post. There are certain calculations that you can do in qfts and these give certain results and all I am saying is that these don't predict the cosmological constant, never have, and never will.

I think the dynamics underlying qfts is neither linear nor unitary. Yes, I think the probabilistic nature of quantum mechanics is due to the unresolved degrees of freedom. I wouldn't call them fluctuations, I am not sure how that helps. I don't know what you mean by 'stationary' in that context, so can't answer that question.

It's mostly a guess. I never looked into the Bohr-Einstein debate so I can't tell to what extent I'd agree with whose argument about what.

Enrico said...

Tim
In real EPR experiments, physicists claim they disproved Einstein. Entanglement of distant particles can be explained by a theory that is probabilistic and non-local or a theory that is deterministic and local. What is ruled out is probabilistic and local. How did they disprove Einstein when he was arguing for a deterministic and local theory? They say they have proven non-locality but that is only true if they also assume the theory is probabilistic, and quantum mechanics is probabilistic. But that is circular reasoning. Non-locality is true if QM is true. But the position of Einstein et al was QM and non-locality are false or incomplete.

Tim Maudlin said...

Sabine,

So here is one source of misunderstanding: I have not even been responding to your post, just to McLin's question. There is a lot of confusion about what "quantum fluctuation" means, so I thought it was a nice question quite apart from the Cosmological Constant.

Enrico-
Once you take account of Bell as well as EPR, then no local deterministic theory can work. You are stuck with non-locality. Could be indeterministic non-locality (GRW) or deterministic non-locality (Bohm), but there must be non-locality.

Enrico said...

Tim
Experiments testing Bell's inequality are inconclusive. They don't rule out locality, as pointed out by Franson et al. (see link). Are there more up-to-date experimental results?
http://math.ucr.edu/home/baez/physics/Quantum/bells_inequality.html

Tim Maudlin said...

Enrico

The link you have is to the early '90s! Of course there are updated results, and all the experimental loopholes have been closed. Try this:

https://phys.org/news/2017-07-probability-quantum-world-local-realism.html

The chance of a local explanation is less than 1 in a billion.

Enrico said...

Tim
Sorry but I think this report is hyped. Below is the most important sentence in the report.
“This means that the quantum world violates either locality (that distant objects cannot influence each other in less than a certain amount of time) or realism (that objects exist whether or not someone measures them), or possibly both.”

This means they have not falsified Einstein et al., because EPR were arguing for locality AND realism. The experiments cannot determine whether locality or realism or both were violated. By the way, it is not a choice between either “hidden variables” or entanglement. Hidden variables are a proposed explanation for what is perceived as “entanglement” in QM.

Brian Dolan said...

Sabine,

As usual I agree with almost everything you say in your post. I do not believe that the Standard Model of particle physics has anything to say about the Cosmological Constant. I wasn't aware of Martin's paper (arxiv:1205.3365), and I haven't read it in detail, but I share the unease with equation (515) in that I have no idea what the parameter mu in that equation means physically.

But I would like to add my ha'pence on the Standard Model as an effective field theory. Sure it's an effective field theory: from an observational point of view we know we have to add something to get neutrino masses (not difficult, there are a number of feasible models on the market, but it is not yet clear which is the best one), we know we have to add dark matter (completely unknown) and, of course, gravity. But I maintain that it is a very unusual effective field theory. The Standard Model has 19 parameters (I'm including Theta-QCD here but not the cosmological constant). Of these 19 parameters one is relevant (the Higgs mass) and 18 are marginal (not exactly marginal of course, there are logarithmic corrections, but still marginal in the usual terminology). This is unprecedented in the history of physics. In my opinion this is probably significant and Nature is giving us some hint here. We've had effective field theories in particle physics before: the Fermi 4-point weak interaction was an effective field theory, it involved an irrelevant operator, and everyone knew it had to break down --- there had to be new physics at 100 GeV. The Standard Model has no irrelevant operators (in old-fashioned terminology it was called renormalisable). Because all operators are either relevant or marginal, there is no internal evidence of any need for new physics over a very large range of energies, at least up to 10^9 GeV, as determined by the calculations on the instability of the electroweak vacuum (though this is very sensitive to the top quark mass https://arxiv.org/pdf/1704.02821.pdf).
So there is no internal evidence in the Standard Model for any need of new physics from current energies (10^3 GeV) up to 10^9 GeV, six orders of magnitude!
This is unprecedented in physics. I am not a condensed matter theorist, but realistic models in condensed matter that have no irrelevant operators are unusual, and marginal operators are certainly not generic.
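
(For readers unfamiliar with the jargon, the standard power counting behind this, added for illustration: in four dimensions an operator of mass dimension $d$ comes with a coupling of mass dimension $4 - d$, so

$$ m^2 |H|^2 \;(d=2,\ \text{relevant}), \qquad g\,\bar\psi\gamma^\mu\psi A_\mu,\ \ y\,\bar\psi\psi H \;(d=4,\ \text{marginal}), \qquad G_F\,(\bar\psi\psi)(\bar\psi\psi) \;(d=6,\ \text{irrelevant}), $$

and the dimensionful Fermi coupling $G_F \approx 1.2 \times 10^{-5}\ \mathrm{GeV}^{-2}$ is what announced new physics near $G_F^{-1/2}$, a few hundred GeV.)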

To end my rant, I do not believe that the Standard Model of particle physics is "just another effective field theory" (I am aware that this is a very unorthodox point of view!). It is a very non-generic effective field theory with no irrelevant operators, 18 marginal operators and 1 relevant operator. This is unprecedented in the history of physics and I feel that Nature is probably giving us a hint here that whatever underlies the Standard Model must be something very unusual.

Brian

Sabine Hossenfelder said...

Enrico,

Your logic is fishy.

Tim Maudlin said...

Enrico

You asked about whether we can say with very high confidence from experiments that Bell's inequality is in fact violated (for measurements done at spacelike separation). The answer is "yes". Period. The so-called experimental loopholes have been closed.

I didn't even read the article, because I was looking for some report about the best experiments. The stuff about "realism" is patent nonsense, as you can see, although admittedly widespread nonsense. Do objects actually exist even when no one is looking at them? Of course they do. No physical theory could possibly suggest or imply that they don't. Adding "realism" as a presupposition of the experiment is literally even sillier than reporting every experimental result like this: "The experiment provides strong evidence that the Higgs boson exists or that the experimenters were all hallucinating or are pathological liars". If the physical world does not exist when no one is looking, then physics itself just is not possible.

As silly as it is, there really are physicists who talk like this. They say ("argue" would be too strong) that we should interpret the result of these experiments as indicating a violation of realism rather than locality. Or, as I like to put it, "Well, nothing actually exists, but thank God it's local".

David Halliday said...

Enrico:

Your "logic" is failing you.

If one finds that either A must not be true, or that B must not be true, or that both must not be true, then, a fortiori, "A AND B" must not be true.

What one has is that (NOT A) OR (NOT B) is TRUE (where the OR is not an exclusive or but an inclusive or, so both A and B could be false).

By De Morgan's law, this is equivalent to saying that (A AND B) is NOT TRUE.

It's simple logic.
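
(In symbols, for illustration, this is just De Morgan's law: $(\lnot A) \lor (\lnot B) \;\Longleftrightarrow\; \lnot(A \land B)$.)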

David

David Halliday said...

Tim:

The most general form of Bell's Theorem, as I recall, is that Locality "(that distant objects cannot influence each other in less than a certain amount of time)" and Contrafactual Definiteness, together, lead to Bell's inequality (which is violated by empirical evidence, as you know).

Contrafactual Definiteness is similar to "realism (that objects exist whether or not someone measures them)", but is more narrow and refined, and speaks more specifically to ("hidden") variables or degrees of freedom that have definite values, even if you don't even try to measure such.

This is not about "reality" being "really real", or not, even when one doesn't "look".

For instance, non-relativistic Bohmian mechanics has Contrafactual Definiteness (with the "actual" "particle"), but violates Locality.

Other interpretations of Quantum Mechanics (QM) take the opposite tradeoff, maintaining Locality (even at the relativistic level), but "give up" Contrafactual Definiteness (even though they do not "give up" "realism" in a sense that reflects our actual experience).

In fact, even the integrals that are used within QM need not be on "surfaces" of "simultaneity", as they are usually formulated, but can involve effective "velocities" that are far smaller (at least down to arbitrarily close to the speed of light), as I showed within my Ph.D. Dissertation. (Unfortunately, I still have yet to obtain an electronic version I can share. I've got to get on that!)

(I think it can be formulated actually on light-like surfaces, and, maybe, even surfaces involving only effective "speeds" less than that of light, but I haven't proven that, Mathematically, yet.)

Anyway, maybe this will "shed some light".

David

Tim Maudlin said...

David:

This is a common misconception. Neither Bell nor EPR presupposes counterfactual definiteness. Nor do they presuppose determinism. Rather, as Bell insists, determinism is *derived* from locality and the EPR perfect correlations. And once you have determinism, you get counterfactual definiteness for free. So it is not a presupposition that can be denied in order to save locality. Further, the CHSH inequality does not use the perfect EPR correlations, and so never even implies counterfactual definiteness.
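
As a numerical illustration (mine, not part of the exchange): the standard quantum prediction for the spin singlet, E(a,b) = -cos(a-b), pushes the CHSH combination to 2*sqrt(2), while any local account is bounded by 2. A minimal Python check:

import math

def E(a, b):
    # standard quantum-mechanical correlation for the spin singlet
    return -math.cos(a - b)

# measurement angles (radians) that maximize the violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2); any local model obeys |S| <= 2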

Standard QM is clearly non-local in virtue of the collapses. That was what bothered Einstein about standard QM all along: the spooky action-at-a-distance. That's the whole point of the EPR paper. What Bell showed is that you can't escape the non-locality and still get the right predictions.

Tim

Amos said...

"Standard QM is clearly non-local in virtue of the collapses."

To avoid confusion, it's good to be clear about the meanings of "standard QM" and "locality". It would be better (in my opinion) to talk about quantum field theory and separability. It goes without saying that non-relativistic quantum mechanics is non-relativistic, meaning it is not Lorentz invariant. Also, the word "locality" is (or should be) reserved for the proposition that no energy or information propagates faster than light, which amounts to Lorentz invariance. So, standard (non-relativistic) quantum mechanics obviously does not satisfy locality (Lorentz invariance), which is why we talk instead about quantum field theory, which IS Lorentz invariant and hence satisfies locality according to our definition (which is the right definition!).

With that said, quantum entanglement does not entail non-locality, it entails non-separability, where "separable" is a more subtle concept that refers to a degree of (statistical) independence between the results of measurements in one region and the (free?) choice of measurements in another (space-like separated) region. If quantum coherence is maintained, phenomena are not separable in this sense. Notice that Einstein didn't complain that entanglement implies no real existence for the separate entities until measured, he complained that it implies no real independent existence, i.e., that space-like separated physical systems are not separable. He was right about that, but his reasons for regarding this as unacceptable were sketchy.

Eusa said...

If we assume that reality is both left-handed and right-handed for every one and both negative and positive charged for every one but only inverse makes difference, then entanglement and Bell logic need no nonlocality. It's all due to spatial environment that preserves inversions.

Tim Maudlin said...

Amos:

"Also, the word 'locality' is (or should be) reserved for the proposition that no energy or information propagates faster than light, which amounts to Lorentz invariance."

This is a bold assertion, which is also not correct. Lorentz invariant descriptions of tachyons have been around forever, and tachyons certainly would transmit both energy and information superluminally. So your proposed "definition" of "locality" is certainly not acceptable: it is self-contradictory.

In any case, to understand what "locality" means in the context of Bell's theorem one obviously has to look at the condition Bell used in his proof, and discussed in his works. You may want to use the term in some other way, but that is just the wrong way if you are discussing Bell.

Further, it is not clear what you mean by "information": does that mean signals or Shannon information? It is well known that violations of Bell's inequality need not allow for signaling. Indeed, even stronger violations of locality can be non-signaling (see: Popescu-Rohrlich boxes). And if you mean no signaling faster than light, then the condition is too weak to capture what Bell (or Einstein) had in mind. When Einstein correctly accused quantum theory of "spooky action at a distance" he was certainly not claiming you could superluminally signal using it!

Bell has many clear discussions of what he means by locality, and in Bell's sense QM, and just as well QFT, is clearly non-local. And if you think that QFT is Lorentz invariant, think again. In particular, figure out how you plan to solve the measurement problem. Only then can one have a real discussion.

Paul Hayes said...

Amos,

Quite right but that's the trouble with arguing about "nonlocality": it is just nonseparability, but because nonseparability is a distinctly non-classical feature, those who wish to spook themselves and others with it can always 'call that dog by a bad name'. It's a regrettable practice and should be resisted - along with the simply false claims about QM.

Tim Maudlin said...

Paul Hayes,

PBR proves that you must have a psi-ontic theory or else violate basic statistical independence assumptions that underlie all experimental method. That is not a false claim of any sort: it is a mathematical theorem.

Amos said...

"To understand what 'locality' means in the context of Bell's theorem one obviously has to look at the condition Bell used in his proof, and discussed in his works."

Yes, and what Bell’s inequality rests on is actually not what most people call “locality” (no faster-than-light-signaling) but rather separability. Bell himself was muddled about this. He said things like “The reason I want to go back to the idea of an aether here is because in these EPR experiments there is the suggestion that behind the scenes something is going faster than light”. But of course he recognized that this implies that “things can go backward in time”, so it’s a big problem, unless the “things” convey no energy or information (signaling), in which case at most we are talking about separability, not locality.

"In Bell's sense QM, and just as well QFT, is clearly non-local."

If the non-relativistic Schrodinger equation were true, locality would be violated, as seen by the fact that it is not Lorentz invariant. (The possibility of tachyons is neither here nor there… literally.) QFT, on the other hand, is manifestly Lorentz invariant (see below), so it satisfies locality in the sense of no faster-than-light signaling. As noted above, “Bell’s sense” of locality was muddled, but his inequality involves what I’m calling separability, not locality.

"If you think that QFT is Lorentz invariant, think again. In particular, figure out how you plan to solve the measurement problem."

I think QFT is manifestly Lorentz invariant, but I suppose it’s conceivable that the resolution of the measurement problem might result in the overthrow of QFT and replacement with a theory that is not Lorentz invariant. However, QFT already entails quantum entanglement, and yet it is Lorentz invariant, hence the need to distinguish between the distinct concepts of locality and separability.

Tim Maudlin said...

Amos,

Lorentz invariant tachyons, even as a theoretical or mathematical possibility, are a flat counter-example to your claim. Your suggestion about the meaning of "local" is provably incorrect. You can't simply wave a counter-example away.

Again you are not distinguishing signaling (which requires a level of controllability and observability) from information transfer. If you regard the wave function as complete, then QFT, just like QM, is manifestly non-local in the sense of superluminal information transfer, as Einstein saw. The EPR argument works as well for QFT as it does for QM.

Locality "in the sense of no faster-than-light signaling" just is not the sense that either Einstein or Bell had in mind. Which I already pointed out.

If you have no resolution to the measurement problem then you have no theory, certainly not one that can predict the results reported by Aspect, Zeilinger, etc.

Lorentz invariance is neither necessary nor sufficient for any sort of locality. If you want all the details of various properties (no superluminal energy transfer, no superluminal signaling, no superluminal information transfer, with detailed calculations of how much transfer of Shannon information is required to violate the inequalities) it is all in my "Quantum Non-locality and Relativity", with explicitly constructed examples.

Eusa said...

According to all measurements, locality (the speed of causality being c) has never been broken.

It's bullshit speaking about nonlocal phenomena but we must research out the spatiality as conservator of inversions.

Sabine Hossenfelder said...

Amos, Tim,

I side with Tim in that Lorentz-invariance is neither necessary nor sufficient for locality. It's not necessary because there are clearly examples which are local and not Lorentz-invariant. A set of entirely disconnected points will do - it doesn't get more local than that. It's not sufficient because you can write down Lorentz-invariant theories that are not local.

As to the example with tachyons, however, it is not at all clear they can indeed be used to transfer information; see e.g. here.

Amos said...

"You are not distinguishing signaling from information transfer. … QFT is manifestly non-local in the sense of superluminal information transfer …"

I’ve distinguished between signaling versus non-signaling, and called one locality and the other separability. By the way, the word “transfer” when applied to non-signaling correlations is problematic, because two space-like separated events may exhibit (over multiple trials) some correlation, but each precedes the other in some frame, so the idea of “transfer”, which tends to imply direction, is misleading. We need to distinguish between signaling (which has a clear sense of directional “transfer” of information) and spacelike-separated correlations (which don’t have a clear Lorentz-invariant sense of directional transfer). This is the distinction that I am designating by the words locality and separability.

"The EPR argument works as well for QFT as it does for QM."

Sure, but we don’t even need the EPR argument to show that QM violates not only separability but also locality (using the words in my sense), because the non-relativistic Schrodinger equation is not Lorentz invariant, so it would imply a preferred frame and superluminal signaling. Einstein insisted that we should not violate special relativity (the content of which is Lorentz invariance, along with a few other tacit but generally conceded assumptions, such as that tachyons cannot permit superluminal signaling), but non-relativistic quantum mechanics explicitly violates special relativity. QFT fixes this problem, although Einstein never had much interest in learning about QFT (and in his day it was sort of a mess anyway), but as you said the EPR argument still applies. But EPR does not imply superluminal signaling nor even directional transfer of information, it implies spacelike separated correlations, which is what I’m calling non-separability, to distinguish it from superluminal signaling and directional transfer of information.

"Lorentz invariant tachyons, even as a theoretical or mathematical possibility, are a flat counter-example to your claim. Your suggestion about the meaning of 'local' is provably incorrect. You can't simply wave a counter-example away."

I’m not trying to wave anything away, I’m saying that “tachyons” that permit superluminal signaling would violate special relativity, and tachyons that do not permit superluminal signaling are not directional transfers in any Lorentz invariant sense, and hence (at most) are just another word for non-separability. If you want to conceive of spacelike-separated correlations as being somehow facilitated by “tachyons”, you’re free to do so, but that would not constitute superluminal signaling nor even directional transfer of information (without violating special relativity). The point is to clearly distinguish between directional transfers of information (signaling) versus non-directional spacelike-separated correlations.

"Lorentz invariance is neither necessary nor sufficient for any sort of locality."

Taking “locality” to mean no superluminal (directional, in the Lorentz invariant sense) transfer of energy or information (meaning signaling), I think Lorentz invariance is sufficient for locality, though not of course for separability.

"It's not sufficient because you can write down Lorentz-invariant theories that are not local."

Do you mean “not local” in the sense that they permit superluminal signaling, i.e., Lorentz invariant directional transfer of energy or information? Is this referring to achronal cylindrical universes, or some such thing? Or are you using the word "local" with some other meaning?

Tim Maudlin said...

Amos

Communication here is simply going to be impossible if you insist on using words in an idiosyncratic way. You certainly cannot make any contact at all with Einstein's and Bell's concerns.

You write:
"I’ve distinguished between signaling versus non-signaling, and called one locality and the other separability."

Yes, and that is creating nothing but confusion. The distinction between theories that allow superluminal signaling and theories that don't already has a name: the signaling/non-signaling distinction. That simply has nothing to do with locality or separability. The fact that this leads to a complete mess is illustrated by the following sentence:

"But EPR does not imply superluminal signaling nor even directional transfer of information, it implies spacelike separated correlations, which is what I’m calling non-separability, to distinguish it from superluminal signaling and directional transfer of information."

No, the EPR correlations don't require non-separability in any sense at all. The EPR correlations can be perfectly accounted for using a completely local and separable theory. This is the sort of Bertlmann's socks explanation that Bell contrasts with the violation of his inequality. The EPR correlations, of course, do not violate the Bell inequality. So the fact that by your definition EPR requires a failure of separability shows only the incorrectness of your definition, nothing else.

The example of a completely Lorentz invariant theory with superluminal signaling is in my book. It is easy to construct: you allow signaling from the transmitter to a locus at constant Spacelike Invariant Interval from the emitter. Since the Invariant Interval is, well, Invariant, the theory is Lorentz invariant but allows superluminal signaling. It is a flat disproof of your "definition".
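
(Sketching that construction as I read it, in the $(+,-,-,-)$ convention: the locus of events $x$ with

$$ (x - x_0)_\mu (x - x_0)^\mu = -a^2, \qquad a > 0, $$

is spacelike separated from the emission event $x_0$, yet it is picked out purely by the invariant interval, so a rule saying "the signal arrives on this locus" is Lorentz invariant while still being superluminal.)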

john schultz said...

We, as a society, seem staggeringly drawn to short-term profits instead of long-term “profits.” (Quotes because, in the case of science, the profits are a deeper understanding of Nature.) That is an absurd solution to anything.

In this case, the problem is that the emphasis of science has morphed from understanding to “measurables” like number of citations. Monetizing it will only distort the focus even further.