Tuesday, May 27, 2014

Book Review: “The Cosmic Cocktail” by Katherine Freese

The Cosmic Cocktail: Three Parts Dark Matter
Katherine Freese
Princeton University Press (May 4, 2014)

Katherine Freese’s “Cosmic Cocktail” lays out the current evidence for dark matter and dark energy, and the status of the relevant experiments. The book excels in the chapter about indirect and direct detection of WIMPs, the class of particles that presently constitutes the best motivated and most popular dark matter candidate. “The Cosmic Cocktail” is Freese’s first popular science book.

Freese is a specialist in the area of astroparticle physics, and she explains the experimental status of WIMP detection clearly, not leaving out the subtleties of the data interpretation. She integrates her own contributions to the field where appropriate; the balance between her own work and that of others is well struck throughout the book.

The book also covers dark energy, and while this part is informative and covers the basics, it is nowhere near as detailed as the part about dark matter detection. Along the way to the very recent developments, “The Cosmic Cocktail” introduces the reader to the concepts necessary to understand the physics and relevance of the matter composition of the universe. In the first chapters, Freese explains the time evolution of the universe, structure formation, the evolution of stars, and the essentials of particle physics necessary to understand matter in the early universe. She adds some historical facts, but the scientific history of the field is not the main theme of the book.

Freese follows the advice to first say what you want to tell them, then tell them, then tell them what you just told them. She regularly reminds the reader of what was explained in earlier chapters, and repeats explanations frequently throughout the book. While this makes it easy to follow the explanations, the alert reader might find the presumed inattention somewhat annoying. The unit of electron volts, for example, is explained at least four times. Several sentences are repeated almost verbatim in various places, for example “eventually galaxies formed… these galaxies then merged to make clusters and superclusters…” (p. 31) and “…eventually this merger lead to the formation of galaxies and clusters of galaxies...” (p. 51), or “Because neutrons are slightly heavier than protons, protons are the more stable of the objects...” (p. 70) and “neutrons are a tiny bit heavier than protons… Because protons are lighter, they are the more stable of the two particles.” (p. 76), or “Inflation is a period of exponential expansion just after the Big Bang” and “inflationary cosmology… an early accelerating period of the history of the Universe” (p. 202), and so on.

The topics covered in the book are timely, but they do not all contribute to the theme of the book, the “cosmic cocktail”. Freese narrates for example the relevance and discovery of the Higgs and the construction details of the four LHC detectors, but mentions the inflaton in only one sentence, while inflation itself is explained in two sentences (plus two sentences in an endnote). She covers the OPERA anomaly of faster-than-light neutrinos (yes, including the joke about the neutrino entering a bar) and in this context mentions that faster-than-light travel implies violations of causality, which will confuse readers not familiar with Special Relativity. On the other hand, she does not even name the Tully-Fisher relation, and dedicates only half a sentence to baryon acoustic oscillations.

The book contains some factual errors (3 kilometers are not 5 miles (p. 92), the radius of the Sun is not 10,000 kilometers (p. 95), Hawking radiation is not caused by quantum fluctuations of space-time (p. 98), the HESS experiment is not in Europe (p. 170), the possible vacua in the string theory landscape do not all have a different cosmological constant (p. 201)). Several explanations are expressed in unfortunate phrases, eg: “[T]he mass of all galaxies, including our own Milky Way, must be made of dark matter.” (p. 20) All its mass? “Imagine drawing a circle around the [gravitational lens]; the light could pass through any point on that circle.” (p. 22). Circle in which plane?

The metaphors and analogies Freese uses are common in the popular science literature: The universe is an expanding balloon or a raisin bread, the Higgs field is “a crowded room of dancing people” or some kind of molasses (p. 116). Some explanations are vague, such as “The multiverse perspective is strengthened by theories of inflationary cosmology” (which?), others are misleading, eg, the reader may be left with the idea that Casimir energy causes cosmic acceleration (p. 196) or that “Only with a flat geometry can the universe grow old enough to create the conditions for life to exist.” (p. 44). One has to be very careful (and check the endnote) to extract that she means the spatial geometry has to be almost flat. Redshift at the black hole horizon is often illustrated with somebody sending light signals while falling through the horizon. Freese instead uses sound waves, which adds confusion because sound needs a medium to travel.

These are minor shortcomings, but they do limit the target group that will benefit from the book. The reader who brings no background knowledge in cosmology and particle physics will, I am afraid, inevitably stumble in various places.

Freese’s writing style is very individual and breaks with the smooth – some may find too smooth – style that has come to dominate the popular science literature. It takes some getting used to her occasionally quite abrupt changes of narrative direction in the first chapters, but the later chapters are more fluently written. Freese interweaves anecdotes from her personal life with the scientific explanations. Some anecdotes document academic life, others seem to serve no particular purpose other than breaking up the text. The book comes with a light dose of humor that shows mostly in the figures, which contain a skull to illustrate the ‘Death of MACHO’s’, a penguin, and a blurry photo of a potted plant.

The topic of dark energy and dark matter has of course been covered in many books, among them Dan Hooper’s “Dark Cosmos” (Smithsonian Books, 2006) and Evalyn Gates’ “Einstein’s Telescope” (WW Norton, 2009). These two books are by now somewhat out of date because the field has developed so quickly, making Freese’s book a relevant update. Both Gates’ and Hooper’s books are more easily accessible and have a smoother narrative than “The Cosmic Cocktail”. Freese demands more of the reader but also gets across more scientific facts.

I counted more than a dozen instances of the word “exciting” throughout the book. I agree that these are indeed exciting times for cosmology and astroparticle physics. Freese’s book is a valuable, non-technical and yet up-to-date review, especially on the topic of dark matter detection.

[Disclaimer: Free review copy. Page numbers in the final version might slightly differ.]

Wednesday, May 21, 2014

What is direct evidence and does the BICEP2 measurement prove that gravity must be quantized?

Fast track to wisdom: Direct evidence is relative and no, BICEP doesn’t prove that gravity must be quantized.

In the media storm following the BICEP announcement that they had measured the polarization of the cosmic microwave background due to gravitational waves, Chao-Lin Kuo, member of the BICEP team, was widely quoted as saying:
“This is the first direct image of gravitational waves across the primordial sky.”

Of late, it has been debated whether BICEP has measured signals from the early universe at all, or whether their signal is mostly produced by matter in our own galaxy that hasn’t been properly accounted for. This isn’t my area of research and I don’t know the details of their data analysis. Let me just say that this kind of discussion is perfectly normal to have when data are young. Whether or not they actually have seen what they claimed, it is worthwhile to sort out exactly what it would mean if the BICEP claims are correct, and that is the purpose of this post.

The BICEP2 results have variously been reported as the first direct evidence of cosmic inflation, direct proof of the theory of inflation, indirect evidence for the existence of gravitational waves, the first indirect detection of the gravitational wave background [emphasis theirs], the most direct evidence of Albert Einstein’s last major unconfirmed prediction, and evidence for the first detection of gravitational waves in the initial moments of the universe.

Confused already?

What is a direct measurement?

A direct measurement of a quantity X is one in which your detector measures the quantity X itself.

One can now have a philosophical discussion about whether or not human senses should count as the actual detector. Then all measurements with external devices are indirect because they are inferred from secondary measurements, for example reading off a display. However, as far as physicists are concerned the reading of the detector by a human is irrelevant, so if you want to have this discussion, you can have it without me.

An indirect measurement is one in which your detector measures a quantity Y and you use a relation between X and Y to obtain X.

A Geiger counter counts highly energetic particles about as directly as it gets, but once you start thinking about it, you’ll note that we rarely measure anything directly. A common household thermometer for example does not actually measure temperature, it measures volume. A GPS device does not actually measure position, it measures the delay between signals received from different satellites and infers the position from that. Your microphone doesn’t actually measure decibels, it measures voltage. And so on.
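To make the distinction concrete, here is a minimal sketch in Python of the thermometer case: the detector responds to a volume, and the temperature is inferred from a calibration relation. The linear calibration and all numbers are made up for illustration, they are not taken from any real instrument.

    # Indirect measurement sketch: the detector reads a volume,
    # the temperature is inferred from an assumed calibration relation.
    # All numbers are made up for illustration.

    V0 = 1.000       # liquid volume at the reference temperature T0 (arbitrary units)
    T0 = 20.0        # reference temperature in degrees Celsius
    beta = 0.00021   # assumed volumetric expansion coefficient per degree Celsius

    def temperature_from_volume(V):
        """Infer the temperature T from the directly measured volume V,
        assuming the linear relation V = V0 * (1 + beta * (T - T0))."""
        return T0 + (V / V0 - 1.0) / beta

    measured_volume = 1.0021                           # what the detector actually delivers
    print(temperature_from_volume(measured_volume))    # about 30 degrees Celsius

The GPS case follows the same pattern, just with a more complicated relation between the measured signal delays and the inferred position.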

One problem in distinguishing between direct and indirect measurements is that it’s not always so clear what is or isn’t part of the detector. Is the water in the Kamiokande tank part of the detector, or is the measurement only made in the photodetectors surrounding the water? And is the Antarctic ice part of the IceCube detector?

The other problem is that in many cases scientists do not talk about quantities, they talk about concepts, ideas, hypotheses, or models. And that’s where things become murky.

What is direct evidence?

There is no clear definition for this.

You might want to extend the definition of a direct measurement to direct evidence, but this most often does not work. If you are talking about direct evidence for a particle, you can ask for the particle to hit the detector for it to be direct evidence. (Again, I am leaving aside that most detectors will amplify and process the signal before it is read out by a human because commonly the detector and data analysis are discussed separately.)

However, if you are measuring something like a symmetry violation or a decay time, then your measurement would always be indirect. What is commonly known as “direct” CP violation for example would then also be an indirect measurement since the CP violation is inferred from decay products.

In practice, whether some evidence is called direct or indirect is a relative statement about the number of assumptions that you had to use to extract the evidence. Evidence is indirect if you can think of a more direct way to make the measurement. There is some ambiguity in this which comes from the question whether the ‘more direct measurement’ must be possible in practice or in principle, but this is a problem that only people in quantum gravity and quantum foundations spend sleepless nights over...

BICEP2 is direct evidence for what?

BICEP2 has directly measured the polarization of CMB photons. Making certain assumptions about the evolution of the universe (and after subtracting the galactic foreground) this is indirect evidence for the presence of gravitational waves in the early universe, also called the relic gravitational wave background.

Direct measurement of gravitational waves is believed to be possible with gravitational wave detectors that basically measure how space-time periodically contracts and expands. The decrease of the orbital period in binary pulsar systems is also indirect evidence for gravitational waves, which according to Einstein’s theory of General Relativity should carry away energy from the system. This evidence gave rise to a Nobel Prize in 1993.
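For the interferometric detectors, the quantity one is after is the dimensionless strain, that is the relative change in the length of a detector arm as the wave passes:

$$ h \sim \frac{\Delta L}{L} . $$

Measuring this length change is about as direct as a gravitational wave measurement can get.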

Evidence for inflation comes from the presence of the gravitational wave background in the (allegedly) observed range. How can this evidence for inflation plausibly be called “direct” if it is inferred from a measurement of gravitational waves that was already indirect? That’s because we do not presently know of any evidence for inflation that would be more direct than this. Maybe one day somebody will devise a way to measure the inflaton directly in a detector, but I’m not even sure a thought experiment can do that. Until then, I think it is fair to call this direct evidence.

One should not mistake evidence for proof. We will never prove any model correct. We only collect support for it. Evidence – theoretical or experimental – is such support.

Now what about BICEP and quantum gravity?

Let us be clear that most people working on quantum gravity mean the UV-completion of the theory when they use the word ‘quantum gravity’. The BICEP2 data has the potential to rule out some models derived from these UV-completions, for example variants of string cosmology or loop quantum cosmology, and many researchers are presently very active in deriving the constraints. However, the more immediate question raised by the BICEP2 data is about the perturbative quantization of gravity, that is, the question whether the CMB polarization is evidence not only for classical gravitational waves, but for gravitons, the quanta of the gravitational field.

Since the evidence for gravitational waves was indirect already, the evidence for gravitons would also be indirect, though this brings up the above mentioned caveat about whether a direct detection must not only be theoretically possible, but actually be practically feasible. Direct detection of gravitons is widely believed to be not feasible.

There have been claims by Krauss and Wilczek (which we discussed earlier here), and a 2012 paper by Ashoorioon, Dev, and Mazumdar, arguing that, yes, the gravitational wave background is evidence for the quantization of gravity. The arguments in a nutshell say that quantum fluctuations of space-time are the only way the observed fluctuations could have been large enough to produce the measured spectrum.

The problem with the existing arguments is that they do not carefully track the assumptions that go into them. They do for example assume that the coupling between gravity and matter fields is the usual coupling. That is plausible of course, but these are couplings at energy densities higher than we have ever tested. They also assume, rather trivially, that space-time exists to begin with. If one has a scenario in which space-time comes into being by some type of geometric phase transition, as is being suggested in some approaches to quantum gravity, one might have an entirely different mechanism for producing fluctuations. Many emergent and induced gravity approaches to quantum gravity tend not to have gravitons, which raises the question of whether these approaches could be ruled out with the BICEP data. Alas, I am not aware of any prediction for the gravitational wave background coming from these approaches, so clearly there is a knowledge gap here.

What we would need to make the case that gravity must have been perturbatively quantized in the early universe is a cosmic version of Bell’s theorem: an argument that demonstrates that no classical version of gravity would have been able to produce the observations. The power of Bell’s inequality is not in proving quantum mechanics right - this is not possible. The power of Bell’s inequality (or measuring violations thereof, respectively) is in showing that a local classical, ie “old fashioned”, theory cannot account for the observations and something has to give. The present arguments about the CMB polarization are not (yet) that stringent.
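For comparison, in the quantum mechanical case the statement is sharp. In the CHSH form of Bell’s inequality, the correlations E between pairs of measurement settings a, a′ and b, b′ obey, in any local classical theory,

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b') , \qquad |S| \leq 2 , $$

while quantum mechanics allows values of |S| up to 2√2. A measured violation thus rules out the whole class of local classical theories at once, regardless of their details. A cosmic version would have to provide an analogous bound that any classical mechanism for producing the primordial fluctuations must respect.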

This means that the BICEP2 result is strong support for the quantization of gravity, but it does not presently rule out the option that gravity is entirely classical. Though, as we discussed earlier, this option is hard to make sense of theoretically, it is infuriatingly difficult to get rid of experimentally.

Summary

The BICEP2 data, if it holds up to scrutiny, is indirect evidence for the relic gravitational wave background. It is not the first indirect evidence for gravitational waves, but the first indirect evidence for this gravitational wave background that was created in the early universe. I think it is fair to say that it is direct evidence for inflation, but the terminology is somewhat ambiguous. It is indirect evidence for the perturbative quantization of gravity, but cannot presently rule out the option that gravity was never quantized at all.

Sunday, May 18, 2014

10 Things I wish I had known 20 years ago – Science Edition

The blogosphere is full of advice for your younger self. Leaving aside lottery numbers and such, the older selves know this haircut was a really bad idea, you’ll eternally regret cheating on the nice guy, and you will never be that young again. This made me wonder which scientific knowledge I wish I had had already as a teenager. Leaving aside the scientific equivalent of sending lottery numbers back in time and recommending, say, that I have a close look at those type Ia supernovae, here’s my top 10:

  1. The fundamental theorems of welfare economics and Arrow’s impossibility theorem.

    I was absolutely uninterested in economics and sociology as a teenager. After reading some books on microeconomics, welfare economics, and social choice theory, the world made dramatically more sense to me. That’s how the hamster wheel works, and that’s the root of most of the quarrels in politics. Now my problem is that I don’t understand why most people don’t understand this...

  2. Exoplanets!

    Are much more common than anybody expected when I was a teenager. This has really changed the way I perceive our place in the universe, and I guess that this topic gets so much coverage in the media because this is the case for many people.

  3. Medicine is not a science.

    It was only after I read about the ‘recent’ field of ‘evidence based medicine’ that I realized I had falsely assumed medical practice is rooted in scientific evidence. Truth is, for the most part it’s not. Medicine isn’t a science, it’s a handcraft, and this is only slowly changing. You are well advised to check the literature for yourself.

  4. Most drugs are not tested on women.

    Pharma companies often don’t test drugs on women because changing hormone levels make it more difficult to find statistically significant effects. The result is that little is known about how the female body reacts differently to drugs than the male body. In many cases the recommended doses of certain medicines tend to be way too high for me, and had I known this earlier I would have trusted my body, not the label.

  5. Capsaicin isn’t water soluble.

    The stuff that makes Chili spicy doesn’t wash off with water, it takes alcohol or fat to get it off your tongue. Yes, this did make my life much better...

  6. Genetics.

    I wish I had known back then what I know today about genetic predispositions, eg that introversion, pain tolerance, response to training, and body odor have genetic factors, and I wish I had had a chance to have my DNA sampled 20 years ago.

    The default assumption that I, and I think most people, bring is that other people’s experiences are similar to our own. It never occurred to me, for example, that the other kids weren’t overdramatizing, they were really hurting more. Just by looking at my daughters I would bet that Lara got my pain tolerance while Gloria didn’t, and I can tell that Lara doesn’t mean to hurt Gloria, she just doesn’t believe it hurts as much as Gloria screams. And after reading Cain’s book that covers the correlation between introversion and a neurological trait called ‘high sensitivity’ I could finally stop wondering what is wrong with me.

  7. You probably have no free will, but it’s no reason to worry.

    Took me two decades to wrap my mind around this. Tough one.

  8. Most people talk to themselves.

    Psychologists call it the ‘internal monologue’. How was I supposed to know that pretty much everybody does that?

  9. Adaptive Systems.

    Adaptive systems are basically a generalization of the process of mutation and natural selection. This was really helpful for understanding much of the changes in institutions and organizations, and all the talk about incentives. It also reveals that many of our problems stem from our inability to adapt. This is basically what gave rise to my FQXi essay this year.

  10. That guy really smells good.

    It was long believed that humans do not detect pheromones because the respective nerve is missing. However, MRI imaging settled the dispute in the 90s. The nerve, now called 'Cranial Nerve Zero', does exist. But note that while both the olfactory nerve and the zero nerve end in the nostrils, the olfactory nerve does not detect pheromones, and the two nerves wire to different areas of the brain. Exactly what influence pheromones have on humans is still an active subject of study.

Tuesday, May 13, 2014

Nordita's Science Writers Workshop on Quantum Theory: Apply Now

Yes! We will have another science writers workshop at Nordita, after last year's workshop was as enjoyable as it was interesting, and we received numerous encouragements to continue with our efforts in science communication. So George and I, we have teamed up again and decided that after astrophysics and cosmology, this time we will focus on all things quantum. We got financial support from FQXi and the Fetzer Franklin Fund, and are well along with the organization.

I am particularly excited that Chad Orzel, he who teaches physics to his dog and preaches physics at Uncertain Principles, will join us and give a lecture. Which is not to say that the rest of the lecturers are any less interesting! We have everything covered, from atom interferometry and quantum computing, through tests of the foundations of quantum mechanics, to topological insulators and the gauge-gravity duality - and more.

You can find the full list on the workshop website and the purpose of this post is to let you know that you can now apply to join our meeting. The number of participants is strictly limited due to space restrictions and fire regulations, but we do have some spaces left. If you are a science writer who covers physics, and quantum stuff crosses your way every other day, then this workshop is for you. Just fill in the webform and tell us a few words about who you are and what you do and we will get back to you.

Monday, May 12, 2014

A Thousand Words

Have you noticed that paragraphs have gotten shorter?

We are reading more and more text displayed on screens, in landscape rather than portrait, or on tiny handheld devices. This hasn’t only affected the layout and typesetting, it has altered the way we write.

Short paragraphs and lists are now often used to break up blocks of text, and so are images. There is hardly any writing on the internet not decorated with an image. Besides reasons of layout, there is the image grab of sharing apps, which insists you need to have a picture. If none is provided, the grab often turns out to be some advertisement or a commenter’s avatar. Adding a default image avoids this.

A picture, as they say, is worth a thousand words, but these thousand words are specifics that are often uncalled for, in the best case distracting, in the worst case misleading. Think of “scientist” or “academia”. What image to pick that will not propagate a stereotype or single out a discipline? You may want to use a female scientist just to avoid criticism, but then isn’t your image misleading? And to make sure everybody understands she’s a s-c-i-e-n-t-i-s-t, even though she’s got lipstick on, you need a visual identification marker, a lab coat maybe, or a microscope, or at least a blackboard with equations. And now you’ve got a Latino woman in a lab coat looking into a microscope when all you meant was “scientist”.

FQXi launched a video contest “Show Me the Physics!” and in the accompanying visualization you’ll find me representing “scientist”, thought bubble included (0:22). I’m very flattered that I’ve been promoted to a stereotype killer. Do you feel aptly represented? (Really, do not take pictures of yourself within 5 minutes of waking up. You never know, they might end up being your most popular ones.)

But if a picture adds a thousand words worth of detail, then a word calls upon a thousand pictures. The word is a generalization and abstraction that encompasses whole classes.

When my two year old daughter had spaghetti for the first time, she excitedly proclaimed “Hair!” Humans are by nature good at classification, generalization and abstraction, and this expresses itself in our language. That’s why we understand metaphors and analogies, and that’s where much of our humor roots.


This generalization is why we are so good at recognizing patterns, devising theories and, yes, at building stereotypes. Show me an image that captures all the richness, all the associations, all the analogies and connotations that come with the words “life” or “hope” or “yesterday”.

What are we doing then by drowning readers in unwanted and often unnecessary information? Sometimes I wonder whether the well-intended image doesn’t work against the writer’s intent of making the text more accessible.

I love music, almost all kinds, but if at all possible I avoid music videos. I actually don’t want to know what the band looks like and I don’t want to know their interpretation of the lyrics. I want to make up my own story. Images are powerful. They stick. This video ruined David Guetta’s Titanium for me.

This made me wonder whether this fear of the abstract, the word all by itself, isn’t the same fear that leads science writers to shy away from equations. If a word calls upon a thousand images, an equation calls upon a thousand words. Think of exponential growth, or the wave equation, or the second law of thermodynamics. Did you just think of stirring milk into your coffee? Verbal explanations add details that are as uncalled for and can be as misleading as adding an image to illustrate a word. An analogy, a metaphor or a witty example does not convey what makes these equations so relevant: their broad applicability and their ability to describe very diverse phenomena.
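In their standard textbook form, the three examples read

$$ \frac{dN}{dt} = \lambda N , \qquad \frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u , \qquad \Delta S \geq 0 , $$

and none of these expressions knows anything about bacteria, guitar strings, or coffee. That is exactly the point.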

Remember those word problems from 8th grade? The verbal description is supposed to make the math more accessible, but finding the equation is the real challenge. Science isn’t so much about solving equations. It’s about finding the equations to begin with. It’s about finding the underlying laws amidst all the clutter, the laws that are worth a thousand words.

Sometimes I wonder whether I wouldn’t rather be an abstract “scientist” for you, instead of a married middle-European mother of two, and I wonder what the thousand words are that my profile image speaks to you. And I fear that, by adding all these visual details, we are limiting the reader’s ability to extract and appreciate abstract ideas, that by adding all these verbal details to science writing, we are ultimately limiting the reader’s ability to appreciate science - in all its abstract glory. Hear my words...

Monday, May 05, 2014

Consciousness and Physics from Scratch

Brain in a squeezed state [Source]
Max Tegmark’s claim that we are all mathematical structures taught me an important lesson: Do not take photos of yourself within 5 minutes of waking up. Since he has influenced my thinking so thoroughly, his newest paper on the physical basis of consciousness was mandatory reading.

Titled “Consciousness as a state of matter”, the paper weighs in at 30 pages in 10 pt font. The argument has some gaps that are filled with conjectures, but it is an interesting attempt to quantify and formalize the slippery notion of consciousness. I’ll not claim I understood it all, but my summary below should convey the general idea.

The title of Tegmark’s paper is somewhat misleading because except for the rather vague introduction, the idea that consciousness is a “state of matter” is not rigorously pursued. In fact the original title “Space, consciousness and the quantum factorization problem” would have been much more informative if less catchy. I recommend that before you upload your LaTeX file to the arXiv you remove all comments, including discarded title options.

Tegmark’s paper actually tackles two different problems. One is the question what properties a conscious system has and how to formalize them. The other is the question of how to identify macroscopic and mostly classical objects from a fundamental Hamiltonian and wavefunction that presumably describes everything. At least that is my reading of what Tegmark calls the “physics-from-scratch problem”, though this left me wondering where the rest of the mathematical universe has gone. Maybe I should have taken the blue pill.

So let us look at the question of consciousness first.
    1. Consciousness
Tegmark builds on defining qualities for consciousness suggested by Giulio Tononi (never heard of him), according to which a conscious system needs to a) be able to store large amounts of information, and b) have the information “integrated into a unified whole”. I’ll add my comments later, let me just say that though I don’t think these are very useful criteria, at least they are criteria. He later adds three more criteria: c) dynamics (time-dependence), d) independence (dynamics is dominated by ‘forces from within’) and e) utility (it records mainly information that is useful for it). The latter inches quite close to adaptive systems.

Tegmark then goes on to express the two criteria of information and integration in mathematical form and tries to derive conclusions about the conscious system from this. His approach is to assume that the system is fundamentally a quantum system described by a Hamiltonian and a density matrix, and to perform various operations on the Hamiltonian that are supposed to bring it into a form where it is an ‘integrated whole’. For this, he essentially looks for a minimum of shared information between two subsystems under arbitrary unitary transformations. These subsystems are not local in any way, they are generic divisions of the Hilbert space.
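Schematically, and in my reading of the paper, the quantity in question is the quantum mutual information of a bipartition of the Hilbert space into subsystems A and B,

$$ I(A\!:\!B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}) , \qquad S(\rho) = -\mathrm{Tr}\,\rho \ln \rho , $$

where ρ_A and ρ_B are the reduced density matrices of the two subsystems, and this mutual information is then minimized over unitary transformations ρ_{AB} → U ρ_{AB} U^† of the full state. The precise definitions in the paper carry more technical detail, but this is the gist of the integration measure.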

He finds that arbitrary unitary transformations can dramatically lower the integrated information in a quantum system, basically by reducing entanglement between any two subsystems. Tegmark uses a particular conjecture about the eigenvalues of the density matrix to make this point, and while the details may depend on this conjecture, I don’t think this will be news for the folks in quantum information. It is basically the idea that Verlinde and Verlinde used in their solution to the firewall paradox, the same idea that I later used in my paper, that unitary operations can ‘disentangle’ subsystems. Tegmark then concludes that we have an “integration paradox […] No matter how large a quantum system we create, its state can never contain more than about a quarter of a bit of integrated information.”

A quarter of a bit is not much and if you can still follow my elaboration it’s probably not enough to explain your brain’s workings, so the criterion of integration does not seem particularly useful. Tegmark thus goes on to amend it by taking into account dynamics, ie the requirement to process information.

Comments: I don’t find it very plausible to require that the degree of integration a system possesses must be found by minimizing over all unitary transformations. Tegmark only acts with these transformations on the density matrix, so I am not sure whether the transformation is supposed to be an actual operation or whether it also should act on the Hamiltonian. In the latter case doing the transformation wouldn’t make a difference to observables, so why look for the minimum? Tegmark unfortunately doesn’t discuss observables at all. In the former case, if the unitary transformation is an actual change to the system, then I think one should consider these different systems, and again I don’t see why one should look for the minimum.

In any case, let us go on to the next point then, taking into account the dynamics. For this Tegmark now aims at finding a basis in the Hilbert space that minimizes the interaction terms in the Hamiltonian, thus maximizing what he calls separability. This leads to the second topic of the paper.

    2. Physics from Scratch
Tegmark interprets the “physics-from-scratch problem” as the question how to identify subsystems of the whole Hilbert space that can be separated as well as possible. These subsystems I believe are eventually supposed to give rise to the neatly separated (and almost classical) objects we experience, not to mention our own brains. He thus sets out to find a basis in which the interaction Hamiltonian between subspaces is minimized.
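Schematically, for a bipartition of the Hilbert space into subsystems A and B one writes

$$ H = H_A \otimes \mathbf{1}_B + \mathbf{1}_A \otimes H_B + H_{\mathrm{int}} , $$

and then looks for the factorization, or equivalently the choice of basis, that makes H_int as small as possible in a suitable norm. Again, this is my schematic reading of the procedure; the definitions in the paper come with more technical baggage.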

After another conjecture, this time about the energy eigenvalues of the Hamiltonian, he however finds that the minimal interaction Hamiltonian will always commute with the Hamiltonian of the subsystem, so there isn’t only little energy exchange but actually none, which then creates another paradox: “If we decompose our universe into maximally independent objects, then all change grinds to a halt.” This, he finds, does not describe reality, and he concludes: “We have tried to understand the emergence of our observed semiclassical world, with its hierarchy of moving objects, by decomposing the world into maximally independent parts, but our attempts have failed dismally, producing merely a timeless world reminiscent of heat death.”

Then he goes on to weaken these requirements.

Comments: Recall that in Tegmark’s reading the physics-from-scratch problem includes the emergence of space and time. If that is so, I know neither what time nor what energy is supposed to mean and I have no clue how to interpret the equations. That there are unitary transformations which lead to a seemingly “timeless” picture is clear because one can shuffle the time-evolution from the wave-function into the operators. That of course does not affect observables, which brings me back to my earlier remark that it doesn’t seem very useful to try to quantify operators when no attention is paid to their expectation values.

Before reading Tegmark’s paper, I would have envisioned the physics-from-scratch procedure as follows. First you need to identify space and time from your Hamiltonian. Space and time are roughly the degrees of freedom that make the rest look as local as possible. Once you have that, you should be able to write down the Hamiltonian in a series of local, or almost local, operators of various dimensions. You need to define a vacuum state, then you can start building your Fock space. The rest is basically effective field theory. That, needless to say, is all “in principle”, not that anybody could do this in practice.
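In equations, the end point of this ‘in principle’ procedure would be a Hamiltonian written as a sum, or integral, over operators that are local in the identified space,

$$ H \simeq \int d^3x \, \sum_i c_i \, \mathcal{O}_i(x) , $$

with the operators ordered by their dimension and the couplings c_i suppressed accordingly, which is just the usual logic of effective field theory. Needless to say, this sketch is mine, not Tegmark’s, and nobody knows how to actually carry it out.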

Just why the world we observe contains large things that are almost classical is probably not a question we can answer by looking at the properties of Hilbert-space decompositions in general, but it depends on the specific Hamiltonian. If we didn’t have confinement and if we didn’t have gravity our universe might just be a quantum soup.

After reading Tegmark’s paper, I am even more convinced that locality is a key requirement for the physics-from-scratch problem. Tegmark has some comments on this towards the end of the paper, but believes this requirement to be in conflict with the idea that space-time is emergent. I don’t think so, I think locality is what identifies space-time. Given that the objects that Tegmark wants to identify in the physics-from-scratch procedure are in practice very localized, I’d have expected this to be paid more attention to.
    3. Summary
Having said that I don’t think Tegmark’s is a promising approach to the physics-from-scratch problem, let me come back to the topic of consciousness and the main premise that consciousness has to fulfill the above listed five criteria.

To begin with, these criteria are, I think, in the best case necessary but not sufficient conditions that you may want to look for in some system.

The problem is that “consciousness” is not in and by itself a thing, and it isn’t a state of something either. Consciousness is a noun that is shorthand for a verb much like, for example, the word “leadership”. Leadership isn’t a thing and it isn’t a property, it’s a relation. It’s somebody leading somebody. Consciousness too isn’t a thing, it’s a relation. It’s A being consciously aware of B. (Depending on whether you also want self-awareness B can be identical to A.) We call A conscious if we have evidence it is aware of many B’s. Just how many B’s you want is pretty arbitrary, I think it’s a sliding scale (just think about anesthesia or sleepwalking) and there is no sharp line where something becomes conscious.

Having said that, while I think Tegmark’s paper has some flaws, it is interesting and it provides a mathematical basis for further investigation. With some refinements of the criteria he has applied this can become a very fruitful approach to the physical basis of consciousness. All over the world neuroscientists are presently trying to build and program artificial brains. I am sure this mathematical approach with the possibility of quantification will one day become highly relevant to the study of artificial intelligence. It is a very courageous paper that pushes the boundaries of our knowledge and I hope that it will be influential. I really want to understand consciousness better, and for me the only proper way of understanding is by way of maths.

So what did I learn from this paper? I learned that you should not read papers about the physical basis of consciousness within five minutes of waking up. You might spend the rest of the day staring at your hand, in amazement of the fact that you have a hand, two of them even, and are able to stare, not to mention being able to think about staring. If you’ve stopped staring at your hand, let me know what you think about Tegmark’s idea.