Monday, May 18, 2015

Book Review: “String Theory and the Scientific Method” by Richard Dawid

String Theory and the Scientific Method
By Richard Dawid
Cambridge University Press (2013)

“String Theory and the Scientific Method” is a very interesting and timely book by a philosopher trying to make sense of trends in contemporary theoretical physics. Dawid has collected arguments that physicists have raised to demonstrate the promise of their theories, arguments which, however, are not supported by the scientific method as it is currently understood. He focuses on string theory, but some of his observations are more general than this.


There is, for example, the reliance on mathematical consistency as a guide, even though this is clearly not an experimental assessment. A theory that isn’t mathematically consistent isn’t considered fundamentally valid, even if the inconsistency occurs only in a regime where we do not yet have observations. I have to admit it wouldn’t even have occurred to me to call this a “non-empirical assessment,” because our use of mathematics is clearly based on the observation that it works very well to describe nature.

The three arguments that Dawid has collected which are commonly raised by string theorists to support their belief that string theory is a promising theory of everything are:
  1. Meta-inductive inference: The trust in a theory is higher if its development is based on extending existing successful research programs.
  2. No-alternatives argument: The more time passes in which we fail to find a theory as successful as string theory in combining quantum field theory with general relativity, the more likely it is that the one theory we have found is unique and correct.
  3. Argument of unexpected explanatory coherence: A finding is perceived as more important if it wasn’t expected.
Dawid then argues, basically, that since a lot of physicists are de facto not relying on the scientific method any more, maybe philosophers should face reality and come up with a better explanation, one that alters the scientific method so that according to the new method the above arguments count as scientific.

In the introduction Dawid writes explicitly that he only studies the philosophical aspects of the development and not the sociological ones. My main problem with the book is that I don’t think one can separate these two aspects clearly. Look at the arguments that he raises: The No Alternatives Argument and the Unexpected Explanatory Coherence are explicitly sociological. They are based 1.) on the observation that there exists a large research area which attracts much funding and many young people and 2.) on the observation that physicists trust their colleagues’ conclusions more if it wasn’t the conclusion they were looking for. How can you analyze the relevance of these arguments without taking into account sociological (and economic) considerations?

The other problem with Dawid’s argument is that he confuses the Scientific Method with the rest of the scientific process that happens in the communities. Science basically operates as a self-organized adaptive system, in the same class of systems as natural selection. For such systems to be able to self-optimize something – in the case of science the use of theories for the description of nature – they must have a mechanism of variation and a mechanism for assessment of the variation, followed by a feedback. In the case of natural selection the variation is genetic mixing and mutation, the assessment is whether the result survives, and the feedback is another round of reproduction. In science the variation is a new theory and the assessment is whether it agrees with experimental test. The feedback is the revision or trashcanning of the theory. This assessment of whether a theory describes observation is the defining part of science – you can’t change this assessment without changing what science does, because it determines what we optimize for.

The assessments that Dawid, correctly, observes are a pre-selection that is meant to ensure we spend time only on those theories (gene combinations) that are promising. To make a crude analogy, we clearly do some pre-selection in our choice of partners that determines which genetic combinations are ever put to test. These might be good choices or they might be bad choices, and as long as their success hasn’t also been put to test, we have to be very careful about relying on them. It’s the same with the assessments that Dawid observes. Absent experimental test, we don’t know if using these arguments does us any good. In fact I would argue that if one takes into account sociological dynamics, one presently has a lot of reasons not to trust researchers to be objective and unbiased, which sheds much doubt on the use of these arguments.

Be that as it may, Dawid’s book has been very useful for me to clarify my thoughts about exactly what is going on in the community. I think his observations are largely correct, just that he draws the wrong conclusion. We clearly don’t need to update the scientific method, we need to apply it better, and we need to apply it in particular to better understand the process of knowledge discovery.

I might never again agree with David Gross on anything, but I do agree with his “pre-publication praise” on the cover. The book is highly recommended reading for both physicists and philosophers.

I wasn’t able to summarize the arguments in the book without drawing a lot of sketches, so I made a 15-minute slideshow with my summary and comments on the book. If you have the patience, enjoy :)

Wednesday, May 13, 2015

Information transfer without energy exchange

While I was writing up my recent paper on classical information exchange, a very interesting new paper appeared on quantum information exchange
    Information transmission without energy exchange
    Robert H. Jonsson, Eduardo Martin-Martinez, Achim Kempf
    Phys. Rev. Lett. 114, 110505 (2015)
    arXiv:1405.3988 [quant-ph]
I was visiting Achim’s group two weeks ago and we talked about this for a bit.

In their paper the authors study the communication channels in lower-dimensional spaces by use of thought experiments. If you do thought experiments, you need thought detectors. Named “Unruh-DeWitt detectors” after their inventors, such detectors are the simplest systems you can think of that detect something. It’s a system with two energy levels that couples linearly to the field you want to detect. A positive measurement results in an excitation of the detector’s state, and that’s pretty much it. No loose cables, no helium leaks, no microwave ovens.



Equipped with such a thought detector, you can then introduce Bob and Alice and teach them to exchange information by means of a quantum field, in the simplest case a massless scalar field. What they can do depends on the way the field is correlated with itself at distant points. In a flat space-time with three spatial dimensions, the field only correlates with itself on the lightcone. But in lower dimensions this isn’t so.
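To make the difference between the dimensions concrete, here is a little numerical sketch of my own (not from the paper), using nothing but the textbook retarded Green’s functions of a massless scalar field with c=1: in 3+1 dimensions the Green’s function is δ(t−r)/(4πr), supported only on the lightcone, while in 2+1 dimensions it is θ(t−r)/(2π√(t²−r²)), which is non-zero everywhere inside the lightcone – exactly the type of correlation that Alice and Bob get to exploit.

    # Sketch (mine, not the paper's): retarded Green's functions of a massless
    # scalar field, showing why timelike-separated detectors have something to
    # couple to in 2+1 dimensions but not in 3+1 flat space-time (c = 1).
    import numpy as np

    def green_2p1(t, r):
        """2+1 dimensions: support INSIDE the lightcone."""
        out = np.zeros_like(t, dtype=float)
        inside = t > r
        out[inside] = 1.0 / (2.0 * np.pi * np.sqrt(t[inside]**2 - r[inside]**2))
        return out

    # In 3+1 dimensions the retarded Green's function is delta(t - r)/(4 pi r):
    # it is non-zero only ON the lightcone, so a detector at fixed distance r
    # gets no support at times t > r (the strong Huygens principle).

    r = 1.0                       # spatial separation between Alice and Bob
    t = np.linspace(0.0, 3.0, 7)  # times at which Bob's detector could respond
    for ti, gi in zip(t, green_2p1(t, np.full_like(t, r))):
        on_cone = "yes" if np.isclose(ti, r) else "no"
        print(f"t = {ti:4.2f}   G_2+1 = {gi:7.4f}   on the lightcone (3+1 case): {on_cone}")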

The authors then demonstrate just exactly how Alice can use the correlations to send information to Bob in two spatial dimensions, or 2+1 dimensional space-time as the physicists like to say. They further show that Alice can transmit a signal without it drowning in quantum noise. Alice transmits information not by sending a quantum of the field, but by coupling and decoupling her detector to the field’s vacuum state. The correlations in the field then imply that whether her detector is coupled or not affects how the field excites Bob’s detector.

Now this information exchange between Bob and Alice is always slower than the speed of light so you might wonder why that is interesting. It is interesting because Alice doesn’t send any energy! While the switching of the detectors requires some work, this is a local energy requirement which doesn’t travel with the information.

Okay you might say then, fine, but we don’t live in 2+1 dimensional space-time. That’s right, but we don’t live in three plus one dimensional flat space-time either: We live in a curved space-time. This isn’t further discussed in the paper, but the correlations allowing for this information exchange without energy can also exist in some curved backgrounds. The interesting question is then of course in which backgrounds this happens, and what it means for sending information into black holes. Do we really need to use quanta of energy for this or is there a way to avoid this? And if it can be avoided, what does it mean for the information being stored in black holes?

I am sure we will hear more about this in the future...

Wednesday, May 06, 2015

Testing modified gravity with black hole shadows

Black hole shadow in the movie “Interstellar.” Image credit: Double Negative artists/DNGR/TM © Warner Bros. Entertainment Inc./Creative Commons (CC BY-NC-ND 3.0) license.

On my visit to Perimeter Institute last week, I talked to John Moffat, whose recent book “Cracking the Particle Code of the Universe” I much enjoyed reading. Talking to John is always insightful. He knows the ins and outs of both particle physics and cosmology, has an opinion on everything, and gives you a complete historical account along with it. I have learned a lot from John, especially to put today’s community squabbles into a larger perspective.

John has dedicated much of his research to alternatives to the Standard Model and the cosmological Concordance Model. You might mistake him for being radical or having a chronic desire to be controversial, but I assure you neither is the case. The interesting thing about his models is that they are, on the very contrary, deeply conservative. He’s fighting the standard with the standard weapons. Much of his work goes largely ignored by the community for no particular reason other than that the question of what counts as an elegant model is arguably subjective. John is presently maybe best known for being one of the few defenders of modified gravity as an alternative to dark matter made of particles.

His modified gravity (MOG) that he has been working on since 2005 is a covariant version of the more widely known MOdified Newtonian Dynamics (or MOND for short). It differs from Bekenstein’s Tensor-Vector-Scalar (TeVeS) model in the field composition; it also adds a vector field to general relativity but then there are additional scalar fields and potentials for the fields. John and his collaborators claim they can fit all the evidence for dark matter with that model, including rotation curves, the acoustic peaks in the cosmic microwave background and the bullet cluster.

I can understand that nobody really liked MOND, which didn’t really fit together with general relativity and was based on little more than the peculiar observation that galaxy rotation curves seem to deviate from the Newtonian prediction at a certain acceleration rather than at a certain radius. And TeVeS eventually necessitated the introduction of other types of dark matter, which made it somewhat pointless. I like dark matter because it’s a simple solution, and also because I don’t really see any good reason why all matter should couple to photons. I do have some sympathy for modifying general relativity though, even if having tried and failed to do it consistently has made me wary of the many pitfalls. As far as MOG is concerned, I don’t see a priori why it’s worse to add a vector field and some scalar fields than to add a bunch of other fields for which we have no direct evidence and then give them names like WIMPs or axions.

Quite possibly the main reason MOG isn’t getting all that much attention is that it’s arguably unexciting because, if correct, it just means that none of the currently running dark matter experiments will detect anything. What you really want is a prediction for something that can be seen rather than a prediction that nothing can be seen.

That’s why I find John’s recent paper about MOG very interesting, because he points out an observable consequence of his model that could soon be tested:
Modified Gravity Black Holes and their Observable Shadows
J. W. Moffat
European Physical Journal C (2015) 75:130
arXiv:1502.01677 [gr-qc]
In this paper, he has studied how black holes in this modification of gravity differ from those of ordinary general relativity, and in particular calculated the size of the black hole shadow. As you might have learned from the movie “Interstellar,” black holes appear like dark disks surrounded by rings that are basically extreme lensing effects. The size of the disk in MOG depends on a parameter in the model that can be determined from fitting the galaxy rotation curves. Using this parameter, it turns out the black hole shadow should appear larger by a factor of about ten in MOG as compared to general relativity.
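To get a sense of the numbers, here is a quick back-of-the-envelope sketch of my own (not from the paper): the Schwarzschild shadow has a radius of 3√3 GM/c², and I simply scale it by the factor of ten quoted above to mimic the MOG case, using Sagittarius A* – one of the Event Horizon Telescope’s prime targets – as the example.

    # Back-of-the-envelope sketch (mine, not the paper's): angular diameter of a
    # black hole shadow. The Schwarzschild shadow radius is 3*sqrt(3)*G*M/c^2;
    # the factor of ~10 for MOG is simply taken from the statement quoted above.
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units
    pc = 3.086e16                                # meters per parsec

    def shadow_diameter_muas(mass_kg, distance_m, enhancement=1.0):
        r_shadow = 3.0 * 3.0**0.5 * G * mass_kg / c**2 * enhancement
        return 2.0 * r_shadow / distance_m * 206265.0 * 1.0e6   # micro-arcseconds

    M_sgrA = 4.0e6 * M_sun      # Sagittarius A*: about 4 million solar masses
    D_sgrA = 8.0e3 * pc         # at a distance of roughly 8 kiloparsec

    print(f"GR shadow diameter:         {shadow_diameter_muas(M_sgrA, D_sgrA):.0f} micro-arcseconds")
    print(f"MOG (x10, as quoted above): {shadow_diameter_muas(M_sgrA, D_sgrA, 10.0):.0f} micro-arcseconds")

Fifty-ish micro-arcseconds is about the angular scale the Event Horizon Telescope is designed to resolve, which is why a factor of ten would be hard to miss.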

So far nobody has seen a black hole shadow other than in the movies, but the Event Horizon Telescope will soon be looking for exactly that. It isn’t so much a telescope as a collaboration of many telescopes all over the globe, which allows for very long baseline interferometry with unprecedented precision. In principle they should be able to see the shadow.

What I don’t know though is whether the precision of both the radius of the shadow and the mass will be sufficient to make a distinction between normal and modified general relativity in such an experiment. I am also not really sure that the black hole solution in the paper is really the most general solution one can obtain in this type of model, or, if not, whether there is some way to backpedal to another solution if the data doesn’t fulfill hopes. And then the paper contains the somewhat ominous remark that the used value for the deviation parameter might not be applicable for the black holes the Event Horizon Telescope has set its eyes on. So there are some good reasons to be skeptical of this, and as the scientists always say, “more work is needed.” Be that as it may, if the Event Horizon Telescope does see a shadow larger than expected, then this would clearly be a very strong case for modified gravity.

Tuesday, April 28, 2015

What should everybody know about the foundations of physics?

How do we best communicate research on the foundations of physics? That was the topic of a panel discussion at the conference I attended last week. It was organized by Brendan Foster from FQXi, and on the panel were, besides me, Matt Leifer and Dagomir Kaszlikowski, winner of last year’s FQXi video contest. And, yes, Matt was wearing his anti-quantum zealot shirt :)

It turned out that Matt and I had quite similar thoughts on the purpose of public outreach. I started by pointing out that we most often have two different aims, inspiration and education, which sometimes conflict with each other. To this Matt added a third aim, “activation,” by which he meant that we sometimes want people to react to our outreach message, such as maybe signing up for a newsletter, attending a lecture, or donating to a specific cause. Dagomir explained that making movies with sexy women is the way to better communicate physics.

As I laid out in an earlier blogpost, the dual goals of inspiration and education create a tension that seems inevitable. The presently most common way to inspire the masses is to entirely avoid technical terms and cut back on accuracy for the sake of catchy messages – and heaven forbid showing equations. Since the readers are never exposed to any technical terms or equations, they are doomed to forever remain in the shallow waters. This leads to an unfortunate gap in the available outreach literature, where on the one hand we have the seashore with big headlines and very little detail, and in the far distance we have the island of education with what are basically summaries of the technical literature and already well above most people’s head. There isn’t much in the middle, and most readers never learn to swim.

This inspiration-education-gap is sometimes so large that it creates an illusion of knowledge among those only reading the inspirational literature. Add to this that many physicists who engage in outreach go to great lengths trying to convince the audience that it’s all really not so difficult, and you create a pool of people who are now terribly inspired to do research without having the necessary education. Many of them will just end up being frustrated with the popular science literature that doesn’t help them to gain any deeper knowledge. A small fraction of these become convinced that all the years it takes to get a PhD are really unnecessary and that reading popular science summaries prepares them well for doing research on their own. These are the people who then go on to send me their new theory of quantum mechanics that solves the black hole paradox or something like that.

The tension leading to this gap is one we have inherited from print media, which only allows a fixed level of technical detail, often chosen to be a low level so as to maximize the possible audience. But now that personalization and customization is all en vogue it would be possible to bridge this gap online. It would take effort, of course, but I think it would be worth it. To me bridging this gap between inspiration and education is clearly one of the goals we should be working towards, to help people who are interested to learn more gradually and build their knowledge. Right now some bloggers are trying to fill the gap, but the filling is spotty and not coordinated. We could do much better than that.

The other question that came up repeatedly during the panel discussion was whether we really need more inspiration. Several people, including Matt Leifer and Alexei Grinbaum, thought that physics has recently been very successful at reaching the masses, and yes, the Brian Cox effect and the Big Bang Theory were named in this context. I think they are right to some extent – a lot has changed in the last decades. Though we could always do better of course. Alexei said that we should try to make the term “entanglement” as commonly used as relativity. Is that a goal we should strive for?

When it comes to inspiration, I am not sure at all that it is achievable or even particularly useful that everybody should know what a bipartite state is or what exactly is the problem with renormalizing quantum gravity. As I also said in the panel discussion, we are all first and foremost interested in what benefits us personally. One can’t eat quantum gravity and it doesn’t cure cancer, and that’s where most people’s interest ends. I don’t blame them. While I think that everybody needs a solid basic education in math and physics, and the present education leaves me wanting, I don’t think everybody needs to know what is going on at the research frontier in any detail.

What I really want most people to know about the foundations of physics is not so much exactly what research is being conducted, but what are the foundational questions to begin with and why is this research relevant at all. I have the impression that much of the presently existing outreach effort doesn’t do this. Instead of giving people the big picture and the vision – and then a hand if they want to know more – public outreach is often focused on promoting very specific research agendas. The reason for this is mostly structural, because much of public outreach is driven by institutes or individuals who are of course pushing their own research. Very little public outreach is actually done for the purpose of primarily benefitting the public. Instead, it is typically done to increase visibility or to please a sponsor.

The other reason though is that many scientists don’t speak about their vision, or maybe don’t think about the big picture themselves all that much. Even I honestly don’t understand the point of much of the research in quantum foundations, so if you needed any indication that public outreach in quantum foundations isn’t working all that well, please take me as a case study. For all I can tell there seem to be a lot of people in this field who spend time reformulating a theory that works perfectly fine, and then make really sure to convince everybody their reformulation does exactly the same as quantum mechanics has always done.

Why, oh why, are they so insistent on finding a theory that is both realist and local, when it would be so dramatically more interesting to find a theory that allows for non-local information transfer and is still compatible with all data we have so far? But maybe that’s just me. In any case, I wish that more people had the courage to share their speculation about what this research might lead to, in a hundred or a thousand years. Will we have come to understand that non-locality is real and possible to exploit for communication? Will we be able to create custom-designed atomic nuclei?

As was pointed out by Matt and Brendan several times, it is unfortunate that there aren’t many scientific studies dedicated to finding out what public outreach practices are actually efficient, and efficient for what. Do the movies with sexy women actually get across any information? Does the inspiration they provide actually prompt people to change their attitude towards science? Do we succeed at all in raising awareness that research on the foundations of physics is necessary for sustainable progress? Or do we, despite our best intentions, just drive people into the arms of quantum quacks because we hand them empty words but not enough detail to tell the science from the pseudoscience?

I enjoyed this panel discussion because most often the exchange about public outreach that I have with my colleagues comes down to them declaring that public outreach just takes time and money away from research. In the end of course these basic questions remain: Who does it, and who pays for it?

In summary, I think what we need is more effort to bridge the gap between inspiration and education, and I want to see more vision and less promotion in public outreach.

Friday, April 24, 2015

The problem with Poincaré-invariant networks

Following up on the discussions on these two previous blogposts, I've put my argument why Poincaré-invariant networks in Minkowski space cannot exist into writing, or rather drawing. The notes are on the arxiv today.

The brief summary goes like this: We start in 1+1 dimensions. Suppose there is a Poincaré-invariant network in Minkowski space that is not locally infinitely dense. Then its nodes must be locally finite and, on the average, distributed in a Poincaré-invariant way. We take this distribution of points and divide it up into space-time tiles of equal volume. Due to homogeneity, each of these volumes must contain the same number of nodes on the average. Now we pick one tile (marked in grey).


In that tile we pick one node and ask where one of its neighbors is. For the network to be Poincaré-invariant, the distribution of the links to the neighbor must be Lorentz-invariant around the position of the node we have picked. Thus the probability for the distribution of the neighboring node must be uniform on each hyperbola at equal proper distance from the node. Since the hyperbolae are infinitely long, the neighbor is with probability one at infinite spatial distance and arbitrarily close to the lightcone. The same is the case for all neighbors.
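If you prefer numbers over drawings, here is a small Monte Carlo illustration of my own (not part of the notes): put the neighbor at fixed proper distance s from the node, so that Lorentz invariance makes the link's rapidity uniformly distributed. A uniform distribution on the full, infinitely long hyperbola doesn't exist, so I cut the rapidity off at ±L and watch what happens to the typical spatial distance s·cosh(η) as the cutoff is removed:

    # Illustration (my own sketch): the neighbor sits at fixed proper distance s
    # from the node. Lorentz invariance of the link distribution means the link's
    # rapidity eta is uniform. There is no normalizable uniform distribution on
    # the infinite hyperbola, so cut off |eta| < L and then remove the cutoff.
    import numpy as np

    rng = np.random.default_rng(1)
    s = 1.0                                   # proper distance to the neighbor

    for L in [1, 5, 10, 20, 40]:
        eta = rng.uniform(-L, L, size=100_000)
        spatial_distance = s * np.cosh(eta)   # spatial distance to that neighbor
        print(f"rapidity cutoff L = {L:2d}: median spatial distance ~ {np.median(spatial_distance):.2e}")

    # The median grows like exp(L/2): as the cutoff is removed, the neighbor ends
    # up arbitrarily far away and arbitrarily close to the lightcone.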

Since the same happens for each other node, and there are infinitely many nodes in the lightlike past and future of the center tile, there are infinitely many links passing through the center tile, and due to homogeneity also through every other tile. Consequently the resulting network is a) highly awkward and b) locally infinitely dense.


This argument carries over to 3+1 dimensions; for details see the paper. This implies there aren't any finite Poincaré-invariant triangulations, since their edges would have to form a network which we've just seen doesn't exist.

What does this mean? It means that whenever you are working with an approach to quantum gravity based on space-time networks or triangulations, then you have to explain how you want to recover local Lorentz-invariance. Just saying "random distribution" doesn't make the problem go away. The universe isn't Poincaré-invariant, so introducing a preferred frame is not in and of itself unreasonable or unproblematic. The problem is to get rid of it on short distances, and to make sure it doesn't conflict with existing constraints on Lorentz-invariance violations.

I want to thank all those who commented on my earlier blogposts which prompted me to write up my thoughts.

Tuesday, April 21, 2015

Away Note

I will be travelling the next weeks, so blogging might be spotty and comment moderation slow. I'll first be in Washington DC speaking at a conference on New Directions in the Foundations of Physics (somewhat ironically after I just decided I've had enough of the foundations of physics). And then I'll be at PI and the University of Waterloo (giving the same talk, with more equations and less philosophy). And, yes, I've packed the camera and I'm trigger happy ;)

Friday, April 17, 2015

A wonderful 100th anniversary gift for Einstein

This year, Einstein’s theory of General Relativity celebrates its 100th anniversary. 2015 is also the “Year of Light,” and fittingly so, because the first and most famous confirmation of General Relativity was the deflection of light by the Sun.

As light carries energy and is thus subject to gravitational attraction, a ray of light passing by a massive body should be slightly bent towards it. This is so both in Newton’s theory of gravity and in Einstein’s, but Einstein’s deflection is larger than Newton’s by a factor of two. Because of this effect, the positions of stars seem to slightly shift as they stand close by the Sun, but the shift is absolutely tiny: The deflection of light from a star close to the rim of the Sun is just about a thousandth of the Sun's diameter, and the deflection drops rapidly the farther away the star’s position is from the rim.
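For the record, here are the numbers behind those statements – a quick sketch of mine, using the standard small-angle formulas 2GM/(c²R) for Newton and 4GM/(c²R) for Einstein at the solar rim:

    # The textbook numbers: deflection of a light ray grazing the solar rim,
    # Newton vs. Einstein, compared to the Sun's angular diameter.
    G, c = 6.674e-11, 2.998e8
    M_sun, R_sun = 1.989e30, 6.957e8       # kg, m (solar radius = impact parameter)
    AU = 1.496e11                          # m
    rad_to_arcsec = 206265.0

    newton   = 2 * G * M_sun / (c**2 * R_sun) * rad_to_arcsec
    einstein = 4 * G * M_sun / (c**2 * R_sun) * rad_to_arcsec
    sun_diameter = 2 * R_sun / AU * rad_to_arcsec

    print(f"Newtonian deflection at the rim: {newton:.2f} arcsec")
    print(f"Einstein's prediction:           {einstein:.2f} arcsec")
    print(f"Sun's angular diameter:          {sun_diameter:.0f} arcsec (so the shift is ~1/1000 of it)")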

In the year 1915 one couldn’t observe stars in such close vicinity of the Sun because if the Sun does one thing, it’s shining really brightly, which is generally bad if you want to observe something small and comparatively dim next to it. The German astronomer Johann Georg von Soldner had calculated the deflection in Newton’s theory already in 1801. His paper wasn’t published until 1804, and then with a very defensive final paragraph that explained:
“Uebrigens glaube ich nicht nöthig zu haben, mich zu entschuldigen, daß ich gegenwärtige Abhandlung bekannt mache; da doch das Resultat dahin geht, daß alle Perturbationen unmerklich sind. Denn es muß uns fast eben so viel daran gelegen seyn, zu wissen, was nach der Theorie vorhanden ist, aber auf die Praxis keinen merklichen Einfluß hat; als uns dasjenige interessirt, was in Rücksicht auf Praxis wirklichen Einfluß hat. Unsere Einsichten werden durch beyde gleichviel erweitert.”

[“Incidentally I do not think it should be necessary for me to apologize that I publish this article even though the result indicates that the deviation is unobservably small. We must pay as much attention to knowing what theoretically exists but has no influence in practice, as we are interested in that what really affects practice. Our insights are equally increased by both.” - translation SH]
A century passed and physicists now had somewhat more confidence in their technology, but still they had to patiently wait for a total eclipse of the Sun during which they were hoping to observe the predicted deflection of light.

Finally, in 1919, British astronomer and relativity aficionado Arthur Stanley Eddington organized two expeditions to observe a solar eclipse with a zone of totality roughly along the equator. He himself travelled to Principe, an island in the Atlantic ocean, while a second team observed the event from Sobral in Brazil. The results of these observations were publicly announced in November 1919 at a meeting in London that made Einstein a scientific star overnight: The measured deflection of light did fit the Einstein value, while it was much less compatible with the Newtonian bending.

As history has it, Eddington’s original data actually wasn’t good enough to make that claim with certainty. His measurements had huge error bars due to bad weather and he also might have cherry-picked his data because he liked Einstein’s theory a little too much. Shame on him. Be that as it may, dozens of subsequent measurements proved his premature announcement correct. Einstein was right, Newton was wrong.

By the 1990s, one didn’t have to wait for solar eclipses any more. Data from radio sources, such as distant quasars, measured by very long baseline interferometry (VLBI) could now be analyzed for the effect of light deflection. In VLBI, one measures the time delay by which wavefronts from radio sources arrive at distant detectors that might be distributed all over the globe. The long baseline together with a very exact timing of the signal’s arrival allows one to then pinpoint very precisely where the object is located – or seems to be located. In 1991, Robertson, Carter & Dillinger confirmed to high accuracy the light deflection predicted by General Relativity by analyzing data from VLBI accumulated over 10 years.

But crunching data is one thing, seeing it is another thing, and so I wanted to share with you today a plot I came across coincidentally, in a paper from February by two researchers located in Australia.

They have analyzed the VLBI data from some selected radio sources over a period of 10 years. In the image below, you can see how the apparent position of the blazar (1606+106) moves around over the course of the year. Each dot is one measurement point; the “real” position is in the middle of the circle that can be inferred at the point marked zero on the axes.

Figure 2 from arXiv:1502.07395

How is that for an effect that was two centuries ago thought to be unobservable?

Publons

The "publon" is "the elementary quantum of scientific research which justifies publication" and it's also a website that might be interesting for you if you're an active researcher. Publons helps you collect records of your peer review activities. On this website, you can set up an account and then add your reviews to your profile page.

You can decide whether you want to actually add the text of your reviews, or not, and to which level you want your reviews to be public. By default, only the journal for which you reviewed and the month during which the review was completed will be shown. So you need not be paranoid that people will know all the expletives you typed in reply to that idiot last year!

You don't even have to add the text of your review at all, you just have to provide a manuscript number. Your review activity is then checked against the records of the publisher, or so is my understanding.

Since I'm always interested in new community services, I set up an account there some months ago. It goes really quickly and is totally painless. You can then enter your review activities on the website or - super conveniently - you just forward the "Thank You" note from the publisher to some email address. The record then automatically appears on your profile within a day or two. I forwarded a bunch of "Thank You" emails from the last months, and now my profile page looks as follows:



The folks behind the website almost all have a background in academia and probably know it's pointless trying to make money from researchers. One expects of course that at some point they will try to monetize their site, but at least so far I have received zero spam, upgrade offers, or the dreaded newsletters that nobody wants to read.

In short, the site is doing exactly what it promises to do. I find the profile page really useful and will probably forward my other "Thank You" notes (to the extent that I can dig them up), and then put the link to that page in my CV and on my homepage.

Sunday, April 12, 2015

Photonic Booms: How images can move faster than light, and what they can tell us.

If you sweep a laser pointer across the moon, will the spot move faster than the speed of light? Every physics major encounters this question at some point, and the answer is yes, it will. If you sweep the laser pointer in an arc, the velocity of the spot increases with the distance to the surface you point at. Standing on Earth, you only have to rotate the laser through a full arc within a few seconds, and the spot will move faster than the speed of light on the moon!
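In case you want to check that claim, here is the one-line estimate (my numbers, treating the moon for the moment as a flat screen at the Earth-moon distance of about 384,000 km):

    # Quick check: sweep the laser through a full circle in a few seconds and the
    # spot outruns light (treating the Moon as a flat screen at distance d).
    import math

    c = 2.998e8         # speed of light in m/s
    d_moon = 3.844e8    # mean Earth-Moon distance in m
    t_sweep = 4.0       # seconds for one full turn of the laser pointer

    omega = 2 * math.pi / t_sweep     # angular speed of the pointer in rad/s
    spot_speed = omega * d_moon       # transverse speed of the spot at distance d

    print(f"spot speed: {spot_speed:.2e} m/s = {spot_speed / c:.1f} times the speed of light")
    print(f"slowest full turn that still beats light: {2 * math.pi * d_moon / c:.1f} seconds")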



This simplified explanation would be all there is to say were the moon a disk, but the moon isn’t a disk and this makes the situation more interesting. The speed of the spot also increases the more parallel the surface you aim at is relative to the beam’s direction. And so the spot’s speed increases without bound as it reaches the edge of the visible part of the moon.

That’s the theory. In practice of course your average laser pointer isn’t strong enough to still be visible on the moon.

This faster-than-light motion is not in conflict with special relativity because the continuous movement of the spot is an illusion. What actually moves are the photons in the laser beam, and they always move at the same speed of light. But different photons illuminate different parts of the surface in a pattern synchronized by the photons’ collective origin, which appears like a continuous movement that can happen at arbitrary speed. It isn’t possible in this way to exchange information faster than the speed of light because information can only be sent from the source to the surface, not between the illuminated parts on the surface.

That is as far as the movement of the spot on the surface is concerned. Trick question: If you sweep a laser pointer across the moon, what will you see? Note the subtle difference – now you have to take into account the travel time of the signal.

Let us assume for the sake of simplicity that you and the moon are not moving relative to each other, and you sweep from left to right. Let us also assume that the moon reflects diffusely into all directions, so you will see the spot regardless of where you are. This isn’t quite right but good enough for our purposes.

Now, if you were to measure the speed of the spot on the surface of the moon, it would initially appear on the left moving faster than the speed of light, then slow down as it approaches the place on the moon’s surface that is orthogonal to the beam, then speed up again. But that’s not what you would see on Earth. That’s because the very left and very right edges are also farther away, and so the light takes longer to reach us. You would instead see a pair of spots appear close by the left edge and then separate, one of them disappearing at the left edge, the other moving across the moon to disappear on the other edge. The point where the spot pair seems to appear is the position where the velocity of the spot on the surface drops from above the speed of light to below.
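Since this is easy to get confused about, I cooked up a little toy model (my own sketch, not the calculation in Nemiroff’s paper): the moon as a circle of radius R at distance D, the laser swept from left to right at a constant rate, and for every emission time I compute where the beam hits the near side and when that reflection arrives back on Earth. Sorting by arrival time shows exactly what is described above: the first spot you get to see is not at the limb, and everything emitted before that moment is seen afterwards, as a second spot running back towards the left edge.

    # Toy model (mine, not Nemiroff's calculation): Moon = circle of radius R at
    # distance D, laser swept left to right at constant angular rate omega.
    import numpy as np

    c, D, R = 2.998e8, 3.844e8, 1.737e6   # m/s, Earth-Moon distance, Moon radius
    omega = 2.0                           # sweep rate in rad/s, fast enough to beat c

    phi_max = np.arcsin(R / D)            # beam angle at which it grazes the limb
    t = np.linspace(0.0, 2 * phi_max / omega, 200_000)   # emission times
    phi = -phi_max + omega * t            # beam angle, sweeping left to right

    # distance from Earth to where the beam hits the near side of the Moon
    s = D * np.cos(phi) - np.sqrt(np.maximum(R**2 - (D * np.sin(phi))**2, 0.0))
    x = s * np.sin(phi)                   # transverse position of the spot
    t_obs = t + 2 * s / c                 # when that reflection is seen on Earth

    i0 = np.argmin(t_obs)                 # the first spot we actually get to see
    print(f"spot first seen at x = {x[i0] / 1e3:+.0f} km (the limb is at {-R / 1e3:.0f} km)")
    print(f"it was emitted {t[i0] * 1e3:.1f} ms into the sweep, not at the start")
    # Everything emitted before t[i0] arrives later than t_obs[i0]: for a while
    # the observer sees TWO spots, one drifting back to the limb, one moving right.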


This pair creation of spots happens for the same reason you hear a sonic boom when a plane passes by faster than the speed of sound. That’s because the signal (the sound or the light) is slower than what is causing the signal (the plane or the laser hitting the surface of the moon). The spot pair creation is thus the signal of a “photonic boom,” a catchy phrase coined by Robert Nemiroff, Professor of astrophysics at Michigan Technological University, and one of the two people behind the Astronomy Picture of the Day that clogs our facebook feeds every morning.

The most surprising thing about this spot pair creation is that nobody ever thought through this until December 2014, when Nemiroff put out a paper in which he laid out the math of the photonic booms. The above considerations for a perfectly spherical surface can be put in more general terms, taking into account also relative motion between the source and the reflecting surface. The upshot is that the spot pair creation events carry information about the structure of the surface that they are reflected on.

But why, you might wonder, should anyone care about spots on the Moon? To begin with, if you were to measure the structure of any object, say an asteroid, by aiming at it with laser beams and recording the reflections, then you would have to take into account this effect. Maybe more interestingly, these spot pair creations probably occur in astrophysical situations. Nemiroff in his paper for example mentions the binary pulsar 3U 0900-40, whose x-rays may be scattering off the surface of its companion, a signal that one will misinterpret without knowing about photonic booms.

The above considerations don’t only apply to illuminated spots but also to shadows. Shadows can be cast for example by opaque clouds on reflecting nebulae, resulting in changes of brightness that may appear to move faster than the speed of light. There are many nebulae that show changes in brightness thought to be due to such effects, like for example Hubble’s Variable Nebula (HVN: NGC 2261). Again, one cannot properly analyze these situations without taking into account the spot pair creation effect.

In his January paper, Nemiroff hints at an upcoming paper “in preparation” with a colleague, so I think we will hear more about the photonic booms in the near future.

In 2015, Special Relativity is 110 years old, but it still holds surprises for us.

This post first appeared on Starts with A Bang with the title "Photonic Booms".

Tuesday, April 07, 2015

No, the black hole information loss problem has not been solved. Even if PRL thinks so.

This morning I got several requests for comments on this paper which apparently was published in PRL
    Radiation from a collapsing object is manifestly unitary
    Anshul Saini, Dejan Stojkovic
    Phys.Rev.Lett. 114 (2015) 11, 111301
    arXiv:1503.01487 [gr-qc]
The authors claim they find “that the process of gravitational collapse and subsequent evaporation is manifestly unitary as seen by an asymptotic observer.”

What do they do to arrive at this groundbreaking result that solves the black hole information loss problem in 4 PRL-approved pages? The authors calculate the particle production due to the time-dependent background of the gravitational field of a collapsing mass-shell. Using the mass-shell is a standard approximation. It is strictly speaking unnecessary, but it vastly simplifies the calculation and is often used. They use the functional Schrödinger formalism (see eg section II of this paper for a brief summary), which is somewhat unusual, but its use shouldn’t make a difference for the outcome. They find the time evolution of the particle production is unitary.

In the picture they use, they do not explicitly use Bogoliubov transformations, but I am sure one could reformulate their time-evolution in terms of the normally used Bogoliubov-coefficients, since both pictures have to be unitarily equivalent. There is an oddity in their calculation which is that in their field expansion they don’t seem to have anti-particles, or else I am misreading their notation, but this might not matter much as long as one keeps track of all branch cuts.

Due to the unusual picture that they use one unfortunately cannot directly compare their intermediate results with the standard calculation. In the most commonly used Schrödinger picture, the operators are time-independent. In the picture used in the paper, part of the time-dependence is pushed into the operators. Therefore I don’t know how to interpret these quantities, and in the paper there’s no explanation on what observables they might correspond to. I haven’t actually checked the steps of the calculation, but it all looks quite plausible as far as method and functional dependence are concerned.

What’s new about this? Nothing really. The process of particle production in time-dependent background fields is unitary. The particles produced in the collapse process do form a pure state. They have to because it’s a Hamiltonian evolution. The reason for the black hole information loss is not that the particle production isn’t unitary – Bogoliubov transformations are by construction unitary – but that the outside observer in the end doesn’t get to see the full state. He only sees the part of the particles which manage to escape. The trouble is that these particles are entangled with the particles that are behind the horizon and eventually hit the singularity.

It is this eventual destruction of half of the state at the singularity that ultimately leads to a loss of information. That’s why remnants or baby-universes in a sense solve the information loss problem simply by preventing the destruction at the singularity, since the singularity is assumed to not be there. For many people this is a somewhat unsatisfactory solution because the outside observer still doesn’t have access to the information. However, since the whole state still exists in a remnant scenario, the time evolution remains unitary and no inconsistency with quantum mechanics ever arises. The new paper is not a remnant scenario; I am telling you this to explain that what causes the non-unitarity is not the particle production itself, but that the produced particles are entangled across the horizon, and part of them later becomes inaccessible, thereby leaving the outside observer with a mixed state (read: “information loss”).
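If you want to see the general point in a few lines of code, here is a small numerical illustration of my own (not the calculation in the paper): pair creation across a horizon produces, mode by mode, a two-mode squeezed vacuum. The full state is pure, but tracing out the partner modes behind the horizon leaves the outside observer with a mixed, thermal density matrix.

    # Illustration (mine, not the paper's calculation): a two-mode squeezed vacuum
    #   |psi> = sqrt(1 - L^2) * sum_n L^n |n>_out |n>_in ,   L = tanh(r),
    # is globally pure, but the reduced state of the "out" modes alone is mixed.
    import numpy as np

    def von_neumann_entropy(rho):
        p = np.linalg.eigvalsh(rho)
        p = p[p > 1e-15]
        return float(-(p * np.log(p)).sum())

    r, nmax = 1.0, 40                 # squeezing parameter, Fock-space cutoff
    L = np.tanh(r)
    psi = np.zeros((nmax, nmax))      # coefficients c_{mn} of |m>_out |n>_in
    for n in range(nmax):
        psi[n, n] = np.sqrt(1 - L**2) * L**n

    v = psi.ravel()                   # the full two-mode state vector
    purity_full = float(v @ v)**2     # Tr(rho^2) for the pure state |psi><psi|
    rho_out = psi @ psi.T             # trace over the modes behind the horizon
    purity_out = float(np.trace(rho_out @ rho_out))

    print(f"purity of the full state:    {purity_full:.4f}")
    print(f"purity of the outside state: {purity_out:.4f}")
    print(f"entanglement entropy:        {von_neumann_entropy(rho_out):.3f}")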

The authors in the paper never trace out the part behind the horizon, so it’s not surprising they get a pure state. They just haven’t done the whole calculation. They write (p. 3) “Original Hawking radiation density matrix contains only the diagonal elements while the cross-terms are absent.” The original matrix of the (full!) Hawking radiation contains off-diagonal terms, it’s a fully entangled state. It becomes a diagonal, mixed, matrix only after throwing out the particles behind the horizon. One cannot directly compare the two matrices though, because in the paper they use a different basis than one normally does.

So, in summary, they redid a textbook calculation by a different method and claimed they got a different result. That should be a warning sign. This is a 30+ years old problem, thousands of papers have been written about it. What are the odds that all these calculations have simply been wrong? Another warning sign is that they never explain just why they manage to solve the problem. They try to explain that their calculation has something in common with other calculations (about entanglement in the outgoing radiation only) but I cannot see any connection, and they don’t explain it either.

The funny thing about the paper is that I think the calculation, to the extent that they do it, is actually correct. But then the authors omit the last step, which means they do not, as stated in the quote above, calculate what the asymptotic observer sees. The conclusion that this solves the black hole information problem is then a classic non sequitur.

Tuesday, March 31, 2015

New type of gravitational wave detector proposed

The existence of gravitational waves is one of the central predictions of Einstein’s theory of general relativity. Gravitational waves have never been directly observed, though the indirect evidence is so good that Marc Kamionkowski, a theorist at Johns Hopkins University in Baltimore, Maryland, recently said
“We are so confident that gravitational waves exist that we don’t actually need to see one.”
But most scientists for good reasons prefer evidence over confidence, and so the hunt for a direct detection of gravitational waves has been going on for decades. Existing gravitational wave detectors search for the periodic stretching in distances caused by the gravitational wave’s distortion of space and time itself. For this one has to very precisely measure and compare distances in different directions. Such tiny relative distortions can be detected very precisely by an interferometer.

In the interferometer, a laser beam is sent into each of the directions that are being compared. The signal is reflected and then brought to interfere with itself. This interference pattern is very sensitive to the tiniest changes in distance; the longer the arms of the interferometer, the better. One can increase the sensitivity by reflecting the laser light back and forth several times.

The most ambitious gravitational wave interferometer in planning is the eLISA space observatory, which would ping laser signals back and forth between one mother space-station and two “daughter” stations. These stations would be separated by distances of about one million kilometers. The interferometer would be sensitive to gravitational waves in the mHz to Hz range, a range in which one expects signals from binary systems, probably one of the most reliable sources of gravitational waves. eLISA might or might not be launched in 2028.

In a recent paper, two physicists from Tel Aviv University, Israel, have now proposed a new method to measure gravitational waves. They propose not to look for periodic distortions of space, but for periodic distortions of time instead. If Einstein taught us one thing, it’s that space and time belong together, and so all gravitational waves distort both together. The idea then is to measure the local passing of time in different locations with atomic clocks to very high precision, and then compare it.

If you recall, time passes differently depending on the position in a gravitational field. Close to a massive body, time goes by more slowly than farther away from it. And so, when a gravitational wave passes by, the tick rate of clocks, or of atoms respectively, depends on the location in the field of the wave.



The fine print is that to reach an interesting regime in which gravitational waves are likely to be found, similar to that of eLISA, the atomic clocks have to be brought to separations far exceeding the diameter of the Earth, more like the distance from the Earth to the Sun. So the researchers propose that we could leave behind a trail of atomic clocks on our path around the Sun. The clocks then would form a network of local time-keepers from which the presence of gravitational waves could be read off; the more clocks, the better the precision of the measurement.
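To get a feeling for why this is ambitious, here are some orders of magnitude (my own rough estimates, not numbers from the paper): a gravitational wave of strain h changes the relative tick rate of two clocks separated by about a wavelength at a level of order h, and the wavelengths in an eLISA-like band are of the order of the Earth–Sun distance.

    # Orders of magnitude (mine, not the paper's): wavelengths in the mHz-Hz band
    # and the clock precision a strain h would roughly require.
    c = 2.998e8          # m/s
    AU = 1.496e11        # m

    for f in [1e-3, 1e-2, 1e-1]:          # frequency band similar to eLISA, in Hz
        wavelength = c / f
        print(f"f = {f:.0e} Hz  ->  wavelength = {wavelength:.1e} m = {wavelength / AU:.2f} AU")

    h_typical = 1e-20    # a rough strain one might hope for in this band
    clock_now = 1e-18    # fractional frequency precision of today's best optical clocks
    print(f"required fractional precision ~ {h_typical:.0e}, best clocks today ~ {clock_now:.0e}")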

Figure from arXiv:1501.00996


It is a very ambitious proposal of course, but I love the vision!

Wednesday, March 25, 2015

No, the LHC will not make contact with parallel universes

Evidence for rainbow gravity by butterfly
production at the LHC.

The most recent news about quantum gravity phenomenology going through the press is that the LHC upon restart at higher energies will make contact with parallel universes, excuse me, with PARALLEL UNIVERSES. The Telegraph even wants you to believe that this would disprove the Big Bang, and tomorrow maybe it will cause global warming, cure Alzheimer's and lead to the production of butterflies at the LHC, who knows. This story is so obviously nonsense that I thought it would be unnecessary to comment on it, but I have underestimated the willingness of news outlets to promote shallow science, and also the willingness of authors to feed that fire.

This story is based on the paper:
    Absence of Black Holes at LHC due to Gravity's Rainbow
    Ahmed Farag Ali, Mir Faizal, Mohammed M. Khalil
    arXiv:1410.4765 [hep-th]
    Phys.Lett. B743 (2015) 295
which just got published in PLB. Let me tell you right away that this paper would not have passed my desk. I'd have returned it as major revisions necessary.

Here is a summary of what they have done. In models with large additional dimensions, the Planck scale, where effects of quantum gravity become important, can be lowered to energies accessible at colliders. This is an old story that was big 15 years ago or so, and I wrote my PhD thesis on this. In the new paper they use a modification of general relativity that is called "rainbow gravity" and revisit the story in this framework.

In rainbow gravity the metric is energy-dependent, which it normally is not. This energy-dependence is a non-standard modification that is not confirmed by any evidence. It is neither a theory nor a model, it is just an idea that, despite more than a decade of work, never developed into a proper model. Rainbow gravity has not been shown to be compatible with the standard model. There is no known quantization of this approach and one cannot describe interactions in this framework at all. Moreover, it is known to lead to non-localities which are ruled out already. As far as I am concerned, no papers should get published on the topic until these issues have been resolved.

Rainbow gravity enjoys some popularity because it leads to Planck scale effects that can affect the propagation of particles, which could potentially be observable. Alas, no such effects have been found. No such effects have been found if the Planck scale is the normal one! The absolutely last thing you want to do at this point is argue that rainbow gravity should be combined with large extra dimensions, because then its effects would get stronger and probably be ruled out already. At the very least you would have to revisit all existing constraints on modified dispersion relations and reaction thresholds and so on. This isn't even mentioned in the paper.

That isn't all there is to say though. In their paper, the authors also unashamedly claim that such a modification has been predicted by Loop Quantum Gravity, and that it is a natural incorporation of effects found in string theory. Both of these statements are manifestly wrong. Modifications like this have been motivated by, but never been derived from, Loop Quantum Gravity. And String Theory gives rise to some kind of minimal length, yes, but certainly not to rainbow gravity; in fact, the expression of the minimal length relation in string theory is known to be incompatible with the one the authors use. The claim that the model they use has some kind of derivation or even a semi-plausible motivation from other theories is just marketing. If I had been a referee of this paper, I would have requested that all these wrong claims be scrapped.

In the rest of the paper, the authors then reconsider the emission rate of black holes in extra dimensions with the energy-dependent metric.

They erroneously state that the temperature diverges when the mass goes to zero and that it comes to a "catastrophic evaporation". This has been known to be wrong for 20 years. This supposed catastrophic evaporation is due to an incorrect thermodynamical treatment, see for example section 3.1 of this paper. You do not need quantum gravitational effects to avoid this, you just have to get the thermodynamics right. Another reason to not publish the paper. To be fair though, this point is pretty irrelevant for the rest of the authors' calculation.

They then argue that rainbow gravity leads to black hole remnants because the temperature of the black hole decreases towards the Planck scale. This isn't so surprising and is something that happens generically in models with modifications at the Planck scale, because they can bring down the final emission rate so that it converges and eventually stops.

The authors then further claim that the modification from rainbow gravity affects the cross-section for black hole production, which is probably correct, or at least not wrong. They then take constraints on the lowered Planck scale from existing searches for gravitons (i.e. missing energy) that should also be produced in this case. They use the constraints obtained from the graviton limits to say that with these limits, black hole production should not yet have been seen, but might appear in the upcoming LHC runs. They should not of course have used the constraints from a paper that were obtained in a scenario without the rainbow gravity modification, because the production of gravitons would likewise be modified.

Having said all that, the conclusion that they come to that rainbow gravity may lead to black hole remnants and make it more difficult to produce black holes is probably right, but it is nothing new. The reason is that these types of models lead to a generalized uncertainty principle, and all these calculations have been done before in this context. As the authors nicely point out, I wrote a paper already in 2004 saying that black hole production at the LHC should be suppressed if one takes into account that the Planck length acts as a minimal length.

Yes, in my youth I worked on black hole production at the LHC. I gracefully got out of this when it became obvious there wouldn't be black holes at the LHC, some time in 2005. And my paper, I should add, doesn't work with rainbow gravity but with a Lorentz-invariant high-energy deformation that only becomes relevant in the collision region and thus does not affect the propagation of free particles. In other words, in contrast to the model that the authors use, my model is not already ruled out by astrophysical constraints. The relevant aspects of the argument however are quite similar, thus the similar conclusions: If you take into account Planck length effects, it becomes more difficult to squeeze matter together to form a black hole because the additional space-time distortion acts against your efforts. This means you need to invest more energy than you thought to get particles close enough to collapse and form a horizon.

What does any of this have to do with parallel universes? Nothing, really, except that one of the authors, Mir Faizal, told some journalist there is a connection. In the phys.org piece one can read:
""Normally, when people think of the multiverse, they think of the many-worlds interpretation of quantum mechanics, where every possibility is actualized," Faizal told Phys.org. "This cannot be tested and so it is philosophy and not science. This is not what we mean by parallel universes. What we mean is real universes in extra dimensions. As gravity can flow out of our universe into the extra dimensions, such a model can be tested by the detection of mini black holes at the LHC. We have calculated the energy at which we expect to detect these mini black holes in gravity's rainbow [a new theory]. If we do detect mini black holes at this energy, then we will know that both gravity's rainbow and extra dimensions are correct."
To begin with, rainbow gravity is neither new nor a theory, but that addition seems to be the journalist's fault. As far as the parallel universes are concerned, to get these in extra dimensions you would need to have additional branes next to our own, and there is nothing like this in the paper. What this has to do with the multiverse I don't know, that's an entirely different story. Maybe this quote was taken out of context.

Why does the media hype this nonsense? Three reasons I can think of. First, the next LHC startup is near and they're looking for a hook to get the story across. Black holes and parallel universes sound good, regardless of whether this has anything to do with reality. Second, the paper shamelessly overstates the relevance of the investigation, makes claims that are manifestly wrong, and fails to point out the miserable state that the framework they use is in. Third, the authors willingly feed the hype in the press.

Did the topic of rainbow gravity and the author's name, Mir Faizal, sound familiar? That's because I wrote about both only a month ago, when the press was hyping another nonsense story about black holes in rainbow gravity with the same author. In that previous paper they claimed that black holes in rainbow gravity don't have a horizon, and nothing was mentioned about them forming remnants. I don't see how both of these supposed consequences of rainbow gravity are even compatible with each other. If anything this just reinforces my impression that this isn't physics, it's just fanciful interpretation of algebraic manipulations that have no relation to reality whatsoever.

In summary: The authors work in a framework that combines rainbow gravity with a lowered Planck scale, which is already ruled out. They derive bounds on black hole production using an existing data analysis that does not apply in the framework they use. The main conclusion that Planck length effects should suppress black hole production at the LHC is correct, but this has been known for at least 10 years. None of this has anything to do with parallel universes.

Monday, March 23, 2015

No, you cannot test quantum gravity with X-ray superradiance

I am always looking for new ways to repeat myself, so I cannot possibly leave out this opportunity to point out yet another possibility to not test quantum gravity. Chris Lee from Arstechnica informed the world last week that “Deflecting X-rays due to gravity may provide view on quantum gravity”, which is a summary of the paper

The idea is to shine light on a crystal at frequencies high enough to excite nuclear resonances. This excitation is delocalized, and the energy is basically absorbed and reemitted systematically, which leads to a propagation of the light-induced excitation through the crystal. How this propagation proceeds depends on the oscillations of the nuclei, which in turn depend on the local proper time. If you place the crystal in a gravitational field, the proper time will depend on the strength of the field. As a consequence, the propagation of the excitation through the crystal depends on the gradient of the gravitational field. The authors argue that in principle this influence of gravity on the passage of time in the crystal should be measurable.
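 
For scale (my numbers, not the paper's): the relevant quantity is the ordinary weak-field time dilation, which for two points separated by a height Δh in the Earth's gravitational field amounts to a fractional shift in clock rates of

\[
\frac{\Delta \nu}{\nu} \;\approx\; \frac{g\,\Delta h}{c^2} \;\approx\; 1.1\times 10^{-16}\ \text{per meter},
\]

which gives you an idea of the precision such a crystal experiment has to reach.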

They then look at a related but slightly different effect in which the crystal rotates and the time-dilatation resulting from the (non-inertial!) motion gives rise to a similar effect, though much larger in magnitude.

The authors do not claim that this experiment would be more sensitive than already existing ones. I assume that if it were so, they’d have pointed this out. Instead, they write that the main advantage is that this new method allows one to test both special and general relativistic effects in tabletop experiments.

It’s a neat paper. What does it have to do with quantum gravity? Well, nothing. Indeed the whole paper doesn’t say anything about quantum gravity. Quantum gravity, I remind you, is the quantization of the gravitational interaction, which plays no role for this whatsoever. Chris Lee in his Arstechnica piece explains
“Experiments like these may even be sensitive enough to see the influence of quantum mechanics on space and time.”
Which is just plainly wrong. The influence of quantum mechanics on space-time is far too weak to be measurable in this experiment, or in any other known laboratory experiment. If you figure out how to do this on a tabletop, book your trip to Stockholm right away. Though I recommend you show me the paper before you waste your money.

Here is what Chris Lee had to say to the question of what he thinks it has to do with quantum gravity:
Deviations from general relativity aren’t the same as quantum gravity. And besides this, for all I can tell the authors haven’t claimed that they can test a new parameter regime that hasn’t been tested before. The reference to quantum gravity is an obvious attempt to sex up the piece and has no scientific credibility whatsoever.

Summary: Just because it’s something with quantum and something with gravity doesn’t mean it’s quantum gravity.

Wednesday, March 18, 2015

No foam: New constraint on space-time fluctuations rules out stochastic Planck scale dispersion

The most abused word in science writing is “space-time foam.” You’d think it is a technical term, but it isn’t – space-time foam is just a catch-all phrase for some sort of quantum gravitational effect that alters space-time at the Planck scale. The technical term, if there is any, would be “Planck scale effect”. And please note that I didn’t say quantum gravitational effects “at short distances” because that is an observer-dependent statement and wouldn’t be compatible with Special Relativity. It is generally believed that space-time is affected at high curvature, which is an observer-independent statement and doesn’t a priori have anything to do with distances whatsoever.

Having said that, you can of course hypothesize Planck scale effects that do not respect Special Relativity and then go out to find constraints, because maybe quantum gravity does indeed violate Special Relativity? There is a whole paper industry behind this because violations of Special Relativity tend to result in large and measurable consequences, in contrast to other quantum gravity effects, which are tiny. A lot of experiments have been conducted already looking for deviations from Special Relativity. And one after the other they have come back confirming Special Relativity, and General Relativity by extension. Or, as the press has it: “Einstein was right.”

Since there are so many tests already, it has become increasingly hard to still believe in Planck scale effects that violate Special Relativity. But players gonna play and haters gonna hate, and so some clever physicists have come up with models that supposedly lead to Einstein-defeating Planck scale effects which could be potentially observable, be indicators for quantum gravity, and are still compatible with existing observations. A hallmark of these deviations from Special Relativity is that the propagation of light through space-time becomes dependent on the wavelength of the light, an effect which is called “vacuum dispersion”.

There are two different ways this vacuum dispersion of light can work. One is that light of shorter wavelength travels faster than that of longer wavelength, or the other way round. This is a systematic dispersion. The other way is that the dispersion is stochastic, so that the light sometimes travels faster, sometimes slower, but on the average it still moves with the good, old, boring speed of light.
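 
Schematically, and ignoring the factors that come from the cosmological expansion, both cases are usually parameterized at first order by a scale E_QG (commonly assumed to be near the Planck energy):

\[
v(E) \;\simeq\; c\left(1 \pm \frac{E}{E_{\rm QG}}\right), \qquad
\Delta t \;\simeq\; \frac{\Delta E}{E_{\rm QG}}\,\frac{D}{c},
\]

where D is the distance traveled. In the systematic case Δt is a fixed offset between photons of different energy; in the stochastic case it averages to zero but leaves a spread of roughly this size in the arrival times.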

The first of these cases, the systematic one, has been constrained to high precision already, and no Planck scale effects have been seen. This has been discussed for a decade or so, and I think (hope!) that by now it’s pretty much off the table. You can always of course come up with some fancy reason for why you didn’t see anything, but this is arguably unsexy. The second case, the stochastic dispersion, is harder to constrain because on the average you do get back Special Relativity.

I already mentioned in September last year that Jonathan Granot gave a talk at the 2014 conference on “Experimental Search for Quantum Gravity” where he told us he and collaborators had been working on constraining the stochastic case. I tentatively inquired if they saw any deviations from no effect and got a head shake, but was told to keep my mouth shut until the paper was out. To make a long story short, the paper has appeared now, and they don’t see any evidence for Planck scale effects whatsoever:
A Planck-scale limit on spacetime fuzziness and stochastic Lorentz invariance violation
Vlasios Vasileiou, Jonathan Granot, Tsvi Piran, Giovanni Amelino-Camelia
What they did for this analysis is to take a particularly pretty gamma ray burst, GRB090510. The photons from gamma ray bursts like this travel over very long distances (some Gpc), during which the deviations from the expected travel time add up. The gamma ray spectrum can also extend to quite high energies (about 30 GeV for this one), which is helpful because the dispersion effect is supposed to become stronger with energy.

What the authors do then is basically to compare the lower energy part of the spectrum with the higher energy part and see if they have a noticeable difference in the dispersion, which would tend to wash out structures. The answer is, no, there’s no difference. This in turn can be used to constrain the scale at which effects can set in, and they get a constraint a little higher than the Planck scale (1.6 times) at high confidence (99%).
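 
To get a feeling for the numbers, here is a back-of-the-envelope estimate in a few lines of Python. The distance and the factors are illustrative stand-ins (the actual analysis properly integrates over redshift and does a full statistical treatment), but it shows why a burst with sharp sub-second structure at GeV energies can push the bound up to the Planck scale:

```python
# Back-of-the-envelope estimate of the dispersion-induced time spread,
# Delta_t ~ (E / E_QG) * D / c, ignoring the proper redshift integration.
# All numbers are illustrative, not those of the actual analysis.

E_photon_GeV = 30.0                  # roughly the highest photon energy in the burst
E_planck_GeV = 1.22e19               # Planck energy
E_QG_GeV     = 1.6 * E_planck_GeV    # roughly the scale the paper constrains
D_m          = 6.0e25                # a couple of Gpc in meters (illustrative)
c_m_per_s    = 3.0e8                 # speed of light

delta_t = (E_photon_GeV / E_QG_GeV) * D_m / c_m_per_s
print(f"expected time spread: {delta_t:.2f} seconds")
# prints about 0.3 seconds -- comparable to the burst's observed time
# structure, which is why seeing no smearing at all is constraining.
```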

It’s a neat paper, well done, and I hope this will put the case to rest.

Am I surprised by the finding? No. Not only because I have known the result since September, but also because the underlying models that give rise to such effects are theoretically unsatisfactory, at least as far as I am concerned. This is particularly apparent for the systematic case. In the systematic case the models are either long ruled out already because they break Lorentz-invariance, or they result in non-local effects, which are also ruled out already. Or, if you want to avoid both, they are simply theoretically inconsistent. I showed this in a paper some years ago. I also mentioned in that paper that the argument I presented does not apply to the stochastic case. However, I added this because I simply wasn’t in the mood to spend more time on this than I already had. I am pretty sure you could use the argument I made to kill the stochastic case on similar reasoning. So that’s why I’m not surprised. It is of course always good to have experimental confirmation.

While I am at it, let me clear out a common confusion with these types of tests. The models that are being constrained here do not rely on space-time discreteness, or “graininess” as the media likes to put it. It might be that some discrete models give rise to the effects considered here, but I don’t know of any. There are discrete models of space-time of course (Causal Sets, Causal Dynamical Triangulation, LQG, and some other emergent things), but there is no indication that any of these leads to an effect like the stochastic energy-dependent dispersion. If you want to constrain space-time discreteness, you should look for defects in the discrete structure instead.

And because my writer friends always complain the fault isn’t theirs but that of the physicists who express themselves sloppily, I agree, at least in this case. If you look at the paper, it’s full of foam, and it totally makes the reader believe that the foam is a technically well-defined thing. It’s not. Every time you read the word “space-time foam”, make that “Planck scale effect” and suddenly you’ll realize that all it means is a particular parameterization of deviations from Special Relativity that, depending on taste, is more or less well-motivated. Or, as my prof liked to say, a parameterization of ignorance.

In summary: No foam. I’m not surprised. I hope we can now forget deformations of Special Relativity.

Wednesday, March 11, 2015

What physics says about the vacuum: A visit to the seashore.

[Image Source: www.wall321.com]
Imagine you are at the seashore, watching the waves. Somewhere in the distance you see a sailboat — wait, don’t fall asleep yet. The waves and I want to tell you a story about nothing.

Before quantum mechanics, “vacuum” meant the absence of particles, and that was it. But with the advent of quantum mechanics, the vacuum became much more interesting. The sea we’re watching is much like this quantum vacuum. The boats on the sea’s surface are what physicists call “real” particles; they are the things you put in colliders and shoot at each other. But there are also waves on the surface of the sea. The waves are like “virtual” particles; they are fluctuations around sea level that come out of the sea and fade back into it.

Virtual particles have to obey more rules than sea waves though. Because electric charge must be conserved, virtual particles can only be created together with their anti-particles that carry the opposite charge. Energy too must be conserved, but due to Heisenberg’s uncertainty principle, we are allowed to temporarily borrow some energy from the vacuum, as long as we give it back quickly enough. This means that the virtual particle pairs can only exist for a short time, and the more energy they carry, the shorter the duration of their existence.
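 
In formulas, the heuristic (and it really is only a heuristic; energy-time is not a strict operator uncertainty relation) reads

\[
\Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2} \qquad\Longrightarrow\qquad \Delta t \;\sim\; \frac{\hbar}{\Delta E}.
\]

A virtual electron-positron pair, for example, costs ΔE ≈ 2 m_e c² ≈ 1 MeV and therefore exists for only about 10^-21 seconds.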

You cannot directly measure virtual particles in a detector, but their presence has indirect observable consequences that have been tested to great accuracy. Atomic nuclei, for example, carry around them a cloud of virtual particles, and this cloud shifts the energy levels of electrons orbiting around the nucleus.

So we know, not just theoretically but experimentally, that the vacuum is not empty. It’s full with virtual particles that constantly bubble in and out of existence.

Visualization of a quantum field theory calculation showing virtual particles in the quantum vacuum.
Image Credits: Derek Leinweber


Let us go back to the seashore; I quite liked it there. We measure elevation relative to the average sea level, which we call elevation zero. But this number is just a convention. All we really ever measure are differences between heights, so the absolute number does not matter. For the quantum vacuum, physicists similarly normalize the total energy and momentum to zero because all we ever measure are energies relative to it. Do not attempt to think of the vacuum’s energy and momentum as if it was that of a particle; it is not. In contrast to the energy-momentum of particles, that of the vacuum is invariant under a change of reference frame, as Einstein’s theory of Special Relativity requires. The vacuum looks the same for the guy in the train and for the one on the station.

But what if we take into account gravity, you ask? Well, there is the rub. According to General Relativity, all forms of energy have a gravitational pull. More energy, more pull. With gravity, we are no longer free to just define the sea level as zero. It’s like we had suddenly discovered that the Earth is round and there is an absolute zero of elevation, which is at the center of the Earth.

In the best manner of a physicist, I have left out a small detail, which is that the calculated energy of the quantum vacuum is actually infinite. Yeah, I know, doesn’t sound good. If you don’t care what the total vacuum energy is anyway, this doesn’t matter. But if you take into account gravity, the vacuum energy becomes measurable, and therefore it does matter.

The vacuum energy one obtains from quantum field theory is of the same form as Einstein’s Cosmological Constant because this is the only form which (in an uncurved space-time) does not depend on the observer. We measured the Cosmological Constant to have a small, positive, nonzero value which is responsible for the accelerated expansion of the universe. But why it has just this value, and why not infinity (or at least something huge), nobody knows. This “Cosmological Constant Problem” is one of the big open problems in theoretical physics today and its origin lies in our lacking understanding of the quantum vacuum.
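 
For the technically minded (in units with c = 1, and sweeping sign conventions under the rug): the statement that the vacuum energy has the form of a cosmological constant just means that its stress-energy is proportional to the metric, which acts in Einstein's field equations like

\[
\langle T_{\mu\nu}\rangle_{\rm vac} \;=\; -\rho_{\rm vac}\, g_{\mu\nu}
\qquad\Longleftrightarrow\qquad
\Lambda_{\rm eff} \;=\; 8\pi G\, \rho_{\rm vac}.
\]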

But this isn’t the only mystery surrounding the sea of virtual particles. Quantum theory tells you how particles belong together with fields. The quantum vacuum by definition doesn’t have real particles in it, and normally this means that the field it belongs to also vanishes. For these fields, the average sea level is at zero, regardless of whether or not there are boats on the water. But for some fields the real particles are more like stones. They’ll not stay on the surface; they will sink and make the sea level rise. We say the field “has a non-zero vacuum expectation value.”

On the seashore, you now have to wade through the water, which will slow you down. This is what the Higgs field does: It drags down particles and thereby effectively gives them mass. If you dive and kick the stones that sank to the bottom hard enough, you can sometimes make one jump out of the surface. This is essentially what the LHC does, just call the stones “Higgs bosons.” I’m really getting into this seashore thing ;)
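 
Stripped of the seashore imagery, and just schematically: the masses of the Standard Model fermions are proportional to the Higgs field's vacuum expectation value v ≈ 246 GeV,

\[
m_f \;=\; \frac{y_f\, v}{\sqrt{2}},
\]

where y_f is the respective fermion's coupling to the Higgs field. Set v to zero and the mass is gone.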

Next, let us imagine we could shove the Earth closer to the Sun. Oceans would evaporate and you could walk again without having to drag through the water. You’d also be dead, sorry about this, but what about the vacuum? Amazingly, you can do the same. Physicists say the “vacuum melts” rather than evaporates, but it’s very similar: If you pump enough energy into the vacuum, the level sinks to zero and all particles are massless again.

You may complain now that if you pump energy into the vacuum, it’s no longer vacuum. True. But the point is that you change the previously non-zero vacuum expectation value. To the best of our knowledge, it was zero in the very early universe, and theoretical physicists would love to have a glimpse at this state of matter. For this however they’d have to achieve a temperature of 10^15 Kelvin! Even the core of the sun “only” makes it to 10^7 Kelvin.
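 
That number is just the electroweak scale, about 100 GeV, converted into a temperature:

\[
T \;\sim\; \frac{100\ \text{GeV}}{k_B} \;\approx\; \frac{10^{11}\ \text{eV}}{8.6\times 10^{-5}\ \text{eV/K}} \;\approx\; 10^{15}\ \text{K}.
\]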

One way to get to such high temperature, if only in a very small region of space, is with strong electromagnetic fields.

In a recent paper, Hegelich, Mourou, and Rafelski estimated that, with the presently most advanced technology, high-intensity lasers could get close to the necessary temperature. This is still far off reality, but it will probably one day become possible!

Back to the sea: Fluids can exist in a “superheated” state. In such a state, the medium is liquid even though its temperature is above the boiling point. Superheated liquids are “metastable,” which means that if you give them any opportunity they will very suddenly evaporate into the preferred, stable gaseous state. This can happen if you boil water in the microwave, so always be very careful taking it out.

The vacuum that we live in might be a metastable state: a “false vacuum.” In this case it will evaporate at some point, and in this process release an enormous amount of energy. Nobody really knows whether this will indeed happen. But even if it does happen, the best present estimates place this event in the distant future, when life is no longer possible anyway because stars have run out of power. Particle physicist Joseph Lykken estimated something like a googol years; that’s about 10^90 times the present age of the universe.
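 
If you want to check that figure: a googol is 10^100, and the universe is presently about 1.4 × 10^10 years old, so

\[
\frac{10^{100}\ \text{yr}}{1.4\times 10^{10}\ \text{yr}} \;\approx\; 10^{90}.
\]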

According to some theories, our universe came into existence from another metastable vacuum state, and the energy that was released in this process eventually gave rise to all we see around us now. Some physicists, notably Lawrence Krauss, refer to this as creating a universe from “nothing.”


If you take away all particles, you get the quantum vacuum, but you still have space-time. If we had a quantum theory for space-time as well, you could take away space-time too, at least operationally. This might be the best description of a physical “nothing” that we can ever reach, but it still would not be an absolute nothing because even this state is still a mathematical “something”.

Now what exactly it means for mathematics to “exist” I better leave to philosophers. All I have to say about this is, well, nothing.


If you want to know more about the philosophy behind nothing, you might like Jim Holt’s book “Why does the world exist”, which I reviewed here

This post previously appeared at Starts With a Bang under the title “Everything you ever wanted to know about nothing”.

Wednesday, March 04, 2015

Can we prove the quantization of gravity with the Casimir effect? Probably not.

Quantum gravity phenomenology has hit the news again. This time the headline is that we can supposedly use the gravitational Casimir effect to demonstrate the existence of gravitons, and thereby the quantization of the gravitational field. You can read this on New Scientist or Spektrum (in German), and tomorrow you’ll read it in a dozen other news outlets, all of which will ignore what I am about to tell you now, namely that (surprise) the experiment is most likely not going to detect any quantum gravitational effect.

The relevant paper is on the arxiv
I’m here for you. I went and read the paper. Then it turned out that the argument is based on another paper by Minter et al, which has a whopping 60 pages. Don’t despair, I’m here for you. I went and read that too. It’s only fun if it hurts, right? Luckily my attempted martyrdom wasn’t put to too much of a test because I recalled after the first 3 pages that I had read the Minter et al paper before. So what is this all about?

The Casimir effect is a force that is normally computed for quantum electrodynamics, where it acts between conducting, uncharged plates. The resulting force is a consequence of the boundary conditions on the plates. The relevant property of the setup in quantum electrodynamics is that the plates are conducting, which is what causes the boundary condition. Then, the quantum vacuum outside the plates is different from the vacuum between the plates, resulting in a net force. You can also do this calculation for other geometries with boundary conditions; it isn’t specific to the plates, this is just the simplest case.
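 
For comparison, the textbook electrodynamic result for two parallel, perfectly conducting plates at separation d is an attractive pressure of

\[
\frac{F}{A} \;=\; -\,\frac{\pi^2 \hbar c}{240\, d^4},
\]

which comes out to roughly a millipascal at d = 1 μm. Keep this in mind as the benchmark; the gravitational version for normal materials, as we will see below, is ridiculously much smaller.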

The Casimir effect exists for all quantized fields, in principle, if you have suitable boundary conditions. It does also exist for gravity, if you perturbatively quantize it, and this has been calculated in the context of many cosmological models. Since compactified dimensions are also boundary conditions, the Casimir effect can be relevant for all extra-dimensional scenarios, where it tends to destabilize configurations.

In the new paper now, the author, James Quach, calculates the gravitational Casimir effect with a boundary condition where the fields do not abruptly jump, but are smooth, and he also takes into account a frequency-dependence in the reaction of the boundary to the vacuum fluctuations. The paper is very clearly written, and while I haven’t checked the calculation in detail it looks good to me. I also think it is a genuinely new result.

To estimate the force of the resulting Casimir effect one then needs to know how the boundary reacts to the quantum fluctuations in the vacuum. For this, the author looks at two different cases, for which he uses other people’s previous findings. First, he uses an estimate for how normal materials scatter gravitational waves. Then he uses an estimate, which goes back to the aforementioned 60-page paper, of how superconducting films supposedly scatter gravitational waves, due to what they dubbed the “Heisenberg-Coulomb effect” (more about that in a moment). The relevant point to notice here is that in both cases the reaction of the material is that to a classical gravitational wave, whereas in the new paper the author looks at a quantum fluctuation.

Quach estimates that for normal materials the gravitational Casimir effect is ridiculously tiny and unobservable. Then he uses the claim in the Minter et al paper that superconducting materials have a hugely enhanced reaction to gravitational waves. He estimates the Casimir effect in this case and finds that it can be measurable.

The paper by Quach is very careful and doesn’t overstate this result. He very clearly spells out that this doesn’t so much test quantum gravity, but that it tests the Minter et al claim, the accuracy of which has previously been questioned. Quach writes explicitly:
“The origins of the arguments employed by Minter et al. are heuristical in nature, some of which we believe require a much more formal approach to be convincing. This is echoed in a review article […] Nevertheless, the work by Minter et al. do yield results which can be used to falsify their theory. The [Heisenberg-Coulomb] effect should enhance the Casimir pressure between superconducting plates. Here we quantify the size of this effect.”
Take away #1: The proposed experiment does not a priori test quantum gravity, it tests the controversial Heisenberg-Coulomb effect.

So what’s the Heisenberg-Coulomb effect? In their paper, Minter et al explain that in a superconducting material, Cooper pairs aren’t localizable and thus don’t move like point particles. This means in particular that they don’t move on geodesics. That by itself wouldn’t be so interesting, but their argument is that this is the case only for the negatively charged Cooper pairs, while the positively charged ions of the atomic lattice move pretty much on geodesics. So if a gravitational wave comes in, so their argument goes, the positive and negative charges react differently. This causes a polarization, which leads to a restoring force.

You probably don’t feel like reading the 60 pages Minter thing, but have at least a look at the abstract. It explicitly uses the semi-classical approximation. This means the gravitational field is unquantized. This is essential, because they talk about stuff moving in a background spacetime. Quach in his paper uses the frequency-dependence from the Minter paper not for the semi-classical approximation, but for the response of each mode in the quantum vacuum. The semi-classical approximation in Quach’s case is flat space by assumption.

Take away #2: The new paper uses a frequency response derived for a classical gravitational wave and uses it for the quantized modes of the vacuum.

These two things could be related in some way, but I don’t see how it’s obvious that they are identical. The problem is that to use the Minter result you’d have to argue somehow that the whole material responds to the same mode at once. This is so if you have a gravitational wave that deforms the background, but I don’t see how it’s justified to still do this for quantum fluctuations. Note, I’m not saying this is wrong. I’m just saying I don’t see why it’s right. (Asked the author about it, no reply yet. I’ll keep you posted.)

We haven’t yet come to the most controversial part of the Minter argument though. That the superconducting material reacts with polarization and a restoring force seems plausible to me. But to get the desired boundary condition, Minter et al argue that the superconducting material reflects the incident gravitational wave. The argument seems to be basically that since the gravitational wave can’t pull apart the negative from the positive charges, it can’t pass through the medium at all. And since the reaction of the medium is electromagnetic in origin, it is hugely enhanced compared to the reaction of normal media.

I can’t follow this argument because I don’t see where the backreaction from the material on the gravitational wave is supposed to come from. The only way the superconducting material can affect the background is through the gravitational coupling, i.e. through the movement of its mass. And this coupling is tiny. What I think would happen is simply that the superconducting film becomes polarized, and then, when the force becomes too strong to allow further separation by the gravitational wave, it essentially moves as one, so there is no further polarization. Minter et al do not, in their paper, calculate the backreaction of the material on the background. This isn’t so surprising because backreaction in gravity is one of the thorniest math problems you can encounter in physics. As an aside, notice that the paper is 6 years old but unpublished. And so:

Take away #3: It’s questionable that the effect which the newly proposed experiment looks for exists at all.

My summary then is the following: The new paper is interesting and it’s a novel calculation. I think it totally deserves publication in PRL and I have little doubt that the result (Eqs 15-18) is correct. I am not sure that using the frequency response to classical waves is good also for quantum fluctuations. And even if you buy this, the experiment doesn’t test for quantization of the gravitational field directly, but rather it tests for a very controversial behavior of superconducting materials. This controversial behavior has been argued to exist for classical gravitational waves though, not for quantized ones. Besides this, it’s a heuristic argument in which the most essential feature – the supposed reflection of gravitational waves – has not been calculated.

For these reasons, I very strongly doubt that the proposed experiment that looks for a gravitational contribution to the Casimir effect would find anything.

Saturday, February 28, 2015

Are pop star scientists bad for science?

[Image Source: Asia Tech Hub]

In January, Lawrence Krauss wrote a very nice feature article for the Bulletin of the Atomic Scientists, titled “Scientists as celebrities: Bad for science or good for society?” In his essay, he reflects on the rise to popularity of Einstein, Sagan, Feynman, Hawking, and deGrasse Tyson.

Krauss, not so surprisingly, concludes that scientific achievement is neither necessary nor sufficient for popularity, and that society benefits from scientists’ voices in public debate. He does not however address the other part of the question that his essay’s title raises: Is scientific celebrity bad for science?

I have to admit that people who idolize public figures just weird me out. It isn’t only that I am generally suspicious of groups of any kind and avoid crowds like the plague, but that there is something creepy about fans trying to outfan each other by insisting their stars are infallible. It’s one thing to follow the lives of popular figures, be happy for them and worry about them. It’s another thing to elevate their quotes to unearthly wisdom and preach their opinion like supernatural law.

Years ago, I unknowingly found myself in a group of Feynman fans who were just comparing notes about the subject of their adoration. In my attempt to join the discussion I happily informed them that I didn’t like Feynman’s books, didn’t like, in fact, his whole writing style. The resulting outrage over my blasphemy literally had me back out of the room.

Sorry, have I insulted your hero?

An even more illustrative case is that of Michael Hale making a rather innocent joke about a photo of Neil deGrasse Tyson on twitter, and in reply getting shot down with insults. You can find some (very explicit) examples in the writeup of his story “How I Became Thousands of Nerds' Worst Enemy by Tweeting a Photo.” After blowing up on twitter, his photo ended up on the facebook page “I Fucking Love Science.” The best thing about the ensuing facebook thread is the frustration of several people who apparently weren’t able to turn off notifications of new comments. The post has been shared more than 50,000 times, and Michael Hale now roasts in nerd hell somewhere between Darth Vader and Sauron.

Does this seem like scientists’ celebrity is beneficial to balanced argumentation? Is fandom ever supportive of rational discourse?

I partly suspect that Krauss, like many people his age and social status, doesn’t fully realize the side-effects that social media attention brings, the trolls in the blogosphere’s endless comment sections and the anonymous insults in the dark corners of forum threads. I agree with Krauss that it’s good that scientists voice their opinions in public. I’m not sure that celebrity is a good way to encourage people to think on their own. Neither, for that matter, are facebook pages with expletives in the title.

Be that as it may, pop star scientists serve, as Steve Fuller put it bluntly, as “marketing”:
“The upshot is that science needs to devote an increased amount of its own resources to what might be called pro-marketing.”
Agreed. And for that reason, I am in favor of scientific celebrity, even though I doubt that idolization can ever bring insight. But let us turn now to the question what ill effects celebrity can have on science.

Many of those who become scientists report getting their inspiration from popular science books, shows, or movies. Celebrities clearly play a big role in this pull. One may worry that the resulting interest in science is then very focused on a few areas that are the popular topics of the day. However, I don’t see this worry having much to do with reality. What seems to happen instead is that young people, once their interest is sparked, explore the details by themselves and find a niche that they fit in. So I think that science benefits from popular science and its voices by inspiring young people to go into science.

The remaining worry that I can see is that scientific pop stars affect the interests of those already active in science. My colleagues always outright dismiss the possibility that their scientific opinion is affected by anything or anybody. It’s a nice demonstration of what psychologists call the “bias blind spot”. It is well documented that humans pay more attention to information that they receive repeatedly and in particular if it comes from trusted sources. This was once a good way to extract relevant information in a group of 300 fighting for survival. But in the age of instant connectivity and information overflow, it means that our interests are easy to play.

If you don’t know what I mean, imagine that deGrasse Tyson had just explained he read my recent paper and thinks it’s totally awesome. What would happen? Well, first of all, all my colleagues would instantly hate me and proclaim that my paper is nonsense without even having read it. Then however, a substantial amount of them would go and actually read it. Some of them would attempt to find flaws in it, and some would go and write follow-up papers. Why? Because the papal utterance would get repeated all over the place, they’d take it to lunch, they’d discuss it with their colleagues, they’d ask others for opinion. And the more they discuss it, the more it becomes interesting. That’s how the human brain works. In the end, I’d have what the vast majority of papers never gets: attention.

That’s a worry you can have about scientific celebrity, but to be honest it’s a rather contrived worry. That’s because pop star scientists rarely if ever comment on research that isn’t already very well established. So the bottom line is that while it could be bad for science, I don’t think scientific celebrity actually is bad for science, or at least I can’t see how.

The above-mentioned problem of skewing scientific opinions by selectively drawing attention to some works, though, is a real problem with the popular science media, which doesn’t shy away from commenting on research that is still far from being established. The better outlets, in an attempt to prove their credibility, stick preferably to papers by those who are already well known and decorate their articles with quotes from other well-known people. The result is a rich-get-richer trend. On the very opposite side, there’s a lot of trash media that seem to randomly hype nonsense papers in the hope of catching readers with fat headlines. This preferably benefits scientists who shamelessly oversell their results. The vast majority of serious, high-quality research, in pretty much any area, goes largely unnoticed by the public. That, in my eyes, is a real problem which is bad for science.

My best advice if you want to know what physicists really talk about is to follow the physics societies and their blogs and journals. I find they are reliable and trustworthy information sources, and usually very balanced because they’re financed by membership fees, not click rates. Your first reaction will almost certainly be that their news is boring and that progress seems incremental. I hate to spell it out, but that’s how science really is.