My wife and I have been listening to Max Tegmark’s book “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality” as an audiobook during our trips to and from work lately.

When he hit his chapter explaining quantum mechanics and his "Level 3 multiverse," I found that I profoundly disagree with this guy. It's clear that he's a grade-A cosmologist, but I think he skirts dangerously close to being a quantum crank when it comes to multi-universe theory. I've been arguing with his take for the last couple of driving sessions, and I will do my best to summarize from memory the specific issues I've taken with it. Since this is a physicist making these claims, it's important that I be accurate about my disagreement. In fact, I'll start with just one and see whether I feel like going further from there…

The first place where I disagree is where he shows a physicist's case of Dunning-Kruger regarding fields in which he is not an expert. Physicists are very smart people, but they have a nasty habit of overestimating their competence in neighboring sciences… particularly biology. I am in a unique position in that I've been doubly educated: I have a solid background in biochemistry and molecular cell biology in addition to my background in quantum mechanics, and I can speak at a fair level on both.

Professor Tegmark uses an anecdote (I have to be careful here; anecdotes invite imprecision) to illustrate how he feels quantum mechanics connects to macroscopic events in organisms. There are many versions, but essentially he says this: when he is biking, the quantum mechanical behavior of an atom crossing through a gated ion channel in his brain affects whether or not he sees an oncoming car, which then may or may not hit him. By quantum mechanics, whether or not he gets hit by the car should be a superposition of states, depending on whether or not the atom passes through the membrane of a neuron and enables him to have the thought to save himself. He ultimately elaborates this by asserting that "collapse-free" quantum mechanics says there is one universe where he saved himself and one universe where he didn't… and he uses this as a thought experiment to justify what he calls a "Level 3" multiverse of parallel realities that differ by which way a quantum mechanical wave function collapse went.

I feel his anecdote is a massive oversimplification that more or less throws the baby out with the bath water. The quantum event in question, "whether or not a calcium ion in his brain passes through a calcium gate," is connected to the macroscopic biological phenomenon of "whether he decides to bike through traffic," or alternatively "whether or not he turns his eyes in the appropriate direction," or alternatively "whether or not he sees a car coming when he starts to bike."

You may notice this as a variant of the Schrodinger "cat in a box" thought experiment. In this experiment, a cat is locked in a perfectly closed box with a sample of radioactive material and a Geiger counter that will dump acid onto the cat if it detects a decay; as long as the box is closed, the cat remains in some superposition of states, conventionally labeled "alive" and "dead," connected with whether or not the isotope emitted a radioactive decay. I've made my feelings about this thought experiment known here before.

The fundamental difficulty comes down to what the superposition of states means when you start connecting an object with a very simple spectrum of states, like an atom, to an object with a very complex spectrum of states, like a whole cat. You could suppose that the cat and the radioactive emission become entangled, but I feel there's some question whether you could ever actually know they were entangled, simply because you can't discretely figure out what the superposition should mean: "alive" and "dead" for the cat are not a binary, on-off difference the way "emitted or not" is for the radioactive atom. There are a huge number of states the cat might occupy that are very similar to one another in energy, and the spectrum spanning "alive" to "dead" is so complicated that it might as well just be a thermal universe. Whether the entanglement actually happened or not, classical thermodynamics and statistical mechanics should be enough to tell you, in classically "accurate enough" terms, what you will find when you open the box. If you wait one half-life of a bulk radioactive sample, when you open the box you'll find a cat that is burned by acid to some degree or another. At some point, quantum mechanics does give rise to classical reality, but where?

The “but where” is always where these arguments hit their wall.

In the anecdote Tegmark uses, as I've written above, "whether a calcium ion crossed through a channel or not" is the quantum mechanical phenomenon connected to "whether an oncoming car hit me or not while I was biking."

The problem that I have with this particular argument is that it loses scale. This is where quantum flapdoodle comes from. Does the scale make sense? Is all the cogitation associated with seeing a car and operating a bike on the same scale as where you can actually see quantum mechanical phenomena? No, it isn’t.

First, all the information coming to your brain from your eyes telling you that the car is present originates from many, many cells in your retina, involving billions of interactions with light. The muscles that move your eyes and your head to see the car are instructed by thousands of nerves firing simultaneously, and those nerves fire based on gradients of calcium and other ions… molar-scale quantities of atoms! A nerve doesn't fire or not based on the collapse of possibilities for a single calcium ion. It fires based on thermodynamic quantities of ions flowing through many gated ion channels all at once.

The net effect of one particular atom experiencing quantum mechanical ambivalence is swamped under statistically large quantities of atoms picking all of the choices they can pick from the whole range of possibilities available to them, giving rise to the bulk phenomenon of the neuron firing. Let's put it this way: for the nerve to fire or not based on the quantum mechanical superposition of calcium ions would demand that the nerve visit that single thermodynamic state where all the ions fail to flow through all the open ion gates in the membrane of the cell all at once… and there are statistically few states where this has happened compared to the statistically many states where some ions or many ions have passed through the gated pore (this is what underpins the chemical potential that drives the functioning of the cell). If you bothered to learn any stat mech at all, you would know that such a state is so rare that it would probably not be visited even once in the entire age of the universe. Voltage gradients in nerve cells are established and maintained through copious application of chemical energy, which is ultimately constructed from quantum mechanics but is expressed in bulk by plain old classical thermodynamics.

And this is merely the question of whether a single nerve "fired or not." Add to it the fact that your capacity for "thought" doesn't depend so heavily on a single nerve that losing that one nerve stops you from thinking –if a single nerve in your retina failed to fire, all the sister nerves around it would still deliver an image of the car speeding toward you to your brain.
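To put rough numbers on that, here is a minimal back-of-envelope sketch. The channel count, ion count, and crossing probability below are illustrative guesses, not measured physiology; the point is only how the statistics scale.

```python
import math

# Hypothetical, illustrative numbers -- not measured values.
n_channels = 1e4          # open Ca2+ channels on one neuron during a firing event (assumed)
ions_per_channel = 1e3    # ions passing per channel per event (assumed)
N = n_channels * ions_per_channel   # ~1e7 ion passages per event

p = 0.9                   # assumed probability that a given ion actually crosses

# Expected number of crossings and its statistical spread (binomial -> normal)
mean = N * p
sigma = math.sqrt(N * p * (1 - p))
print(f"expected crossings: {mean:.3g} +/- {sigma:.3g}")
print(f"one ion shifts the total by {1/sigma:.2e} standard deviations")

# Probability that *every* ion fails to cross (the 'nerve never fires' thermodynamic state)
log10_p_all_fail = N * math.log10(1 - p)
print(f"P(all {N:.0e} ions fail) ~ 10^{log10_p_all_fail:.3g}")
```

Whatever the exact numbers, the conclusion survives: one ion's quantum ambivalence is a rounding error of a rounding error on the quantity that actually determines whether the nerve fires.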

Do atoms like a single calcium ion subsist in quantum mechanical ambivalence when left to their own devices? Yes, they do. But when you put together a large collection of these atoms simultaneously, it is physically improbable that every single atom will make the same choice all at once. At some point you get bulk thermodynamic behavior, and the decisions that your brain makes are based on bulk thermodynamic behaviors, not isolated quantum mechanical events.

Pretending that a person made a cognitive choice based on the quantum mechanical outcome of a single atom is an absurd reduction, and it is profoundly disingenuous to start talking about entire parallel universes where you swerved right on your bike instead of left based on that single calcium ion (regardless of how liberally you wave around the butterfly effect). The nature of physiology in a human being, at all levels, is about biasing fundamentally random behavior into directed, ordered action, so focusing on one potential speck of randomness doesn't mean that the aggregate should fail to behave as it always does. All the air in the room where you're standing right now could suddenly pop into the far corner, leaving you to suffocate (there is one such state in the statistical ensemble), but that doesn't mean that it will… closer to home, you might win a $500 million Powerball jackpot, but that doesn't mean you will!

I honestly do not know what I think about the multiverse or about parallel universes; I would say I'm agnostic on the subject. But if all parallel-universe theory is based on the kind of breathtaking Dunning-Kruger that Professor Tegmark exhibits when talking about the connection between quantum mechanics and the behavior of biological systems, the only stance I'm motivated to take is that we don't know nearly enough to be speculating. If Tegmark is supporting multiverse theory based on such thinking, he hasn't thought about the subject deeply enough. Scale matters here, and neglecting the scale means you're neglecting the math! Is he neglecting the math elsewhere in his other huge, generalizing statements? At the scale of individual atoms, I can see how these ideas are seductive, but stretching them to statistical systems is just wrong when it leads you to claim that quantum mechanical effects show up at macroscopic biological levels where people do not actually observe them. It's like Tegmark is trying to give Deepak Chopra ammunition!

Ok, just one gripe there. I figure I probably have room for another.

In another series of statements in his discussion of quantum mechanics, I think Tegmark probably knows better, but by adopting the framing he has, he risks misinforming his audience. After a short discussion of the origins of quantum mechanics, he introduces the Schrodinger equation as the be-all and end-all of the field (despite speaking briefly of the Lagrangian path-integral formalism elsewhere). One of the main theses of his book is that "the universe is mathematical" and therefore the whole of reality unfolds deterministically according to the predictions of equations like Schrodinger's. If you could write the wave function of the whole universe, he says, Schrodinger's equation would govern how all of it works.

This is wrong.

And I find this misses most of the point of what physics is and what it actually does. Math is valuable to physics, but one must always be careful that the math not break free of its observational justification. Most of what physics is about is making measurements of the world around us and fitting those measurements to mathematical models, the small-t "theories" provided to us by the Einsteins and the Sheldon Coopers… if the fit is close enough, the regularity of a given equation will sometimes predict further observations that have not yet been made. Good theoretical equations have good provenance in that they predict observations that are later made; the opposite can be said for bad theory, and the field of physics is littered with a thick layer of mathematical theories that failed to account for observation in one way or another. The process of physics is a big selection algorithm: smart theorists write every possible theory they can come up with, experimentalists take those theories and see whether the data fit them, and if a theory accommodates observation it is promoted to a capital-T Theory and explored to see where its limits lie. Small-t "theories," on the other hand, are discarded when they fail to accommodate observation, at which point they are replaced by a wave of new attempts that try to accomplish what the failure didn't. As a result, new theories fit over old theories and push back predictive limits as time goes on.

For the specific example of Schrodinger's equation, the mathematical model it offers fits over the Bohr model by incorporating de Broglie's matter wave. Bohr's model itself fit over a previous model, and the previous models fit over still earlier ideas going back to the ancient Greeks. Each later iteration extends the accuracy of the model, and whether the development settles there depends on whether or not the new model has validated predictive power –this is literally survival of the fittest applied to mathematical models. Schrodinger's equation itself has a limit where its predictive power fails: it cannot handle relativity except as a perturbation… meaning that it can't exactly predict outcomes that occur at high speeds. The deficiencies of the Schrodinger equation are addressed by the Klein-Gordon equation and by the Dirac equation, and the deficiencies of those are in turn addressed by the path-integral formalisms of Quantum Field Theory. If you knew the state equation for the whole universe, Schrodinger's equation would not accurately predict how time unfolds, because it fails to work under certain physically relevant conditions. The modern Quantum Field Theories fail at gravity, meaning that even with modern quantum mechanics there is no assured way of predicting the evolution of the "state equation of the universe" even if you knew it. There are a host of follow-on theories, String Theory, Loop Quantum Gravity and so on and so forth, that vie to be The Theory That Fills The Holes but, given history, will probably only extend our understanding without fully answering all the remaining questions. That String Theory has not made a single prediction that we can actually observe right now should be lost on no one –there is a grave risk that it never will. We cannot at the moment pretend that the Schrodinger equation perfectly satisfies what we actually know about the universe from other sources.
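To make the non-relativistic limitation concrete, here is the standard textbook comparison in free-particle form (nothing here is specific to Tegmark's book): the Schrodinger equation is first order in time but second order in space, so it cannot treat time and space on the equal footing relativity demands, while the Klein-Gordon and Dirac equations do.

```latex
% Free-particle Schrodinger equation: first order in t, second order in x,
% so space and time enter asymmetrically -- incompatible with Lorentz invariance.
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi

% Klein-Gordon equation: second order in both, Lorentz invariant (spin 0).
\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2}\right)\phi = 0

% Dirac equation: first order in both, Lorentz invariant (spin 1/2).
\left(i\hbar\,\gamma^\mu \partial_\mu - mc\right)\psi = 0
```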

It would be most accurate to say that reality seems to be quantum mechanical at its foundation, but that we have yet to derive the true, "fully correct" quantum theory. Tegmark makes a big fuss about the fact that "wave function collapse" doesn't fit within the premise of Schrodinger's equation, and argues that the equation could still hold as good quantum mechanics if a "level three multiverse" is real. But the opposite is also worth saying: we've known Schrodinger's equation is incomplete since the 1930s, so "collapse" may simply be another place where it's incomplete, for reasons we don't yet understand. A multiverse does not necessarily follow from this. Maybe pilot wave theory is the correct quantum mechanics, for all I know.

It might be possible to masturbate over the incredible mathematical regularity of physics in the universe, but beware the fact that it wasn't particularly mathematical or regular until we picked out those theories that fit the universe's behavior very closely. Those theories have predictive power because that is the nature of the selection criteria we used to find them; if they lacked that power, they would be discarded and replaced until a theory emerged meeting the selection criteria. To be clear, mathematical models can be written to describe anything you want, including the color of your bong haze, but on their own they have only the power of their self-consistency. If the universe does something to deviate from what the math says it should, the math is simply wrong, not the universe. The moment you find neutrino mass, God help your massless-neutrino Standard Model!

Wonderful how the math works… until it doesn’t.

Edit 12-19-17:

We're still listening to this book during our car trips, and I wanted to point out that Tegmark uses an argument very similar to my argument above to explain why the human brain can't be a quantum computer. He approaches the matter from a slightly different angle: a coherent superposition of all the ions on either side of the cell membrane is impossible to maintain for more than a very, very short period of time, because something outside of the superposition will quickly bump against some component of it, and since so many ions are involved, the frequency of things bumping on the system from the outside and "making a measurement" becomes high. I do like what he says here, because it starts to show the scale that is relevant to the argument.

On the other hand, it still fails to necessitate a multiverse. The simple fact is that human choice is decoupled from the scale of quantum coherence.

Edit 1-10-18:

As I'm trying desperately to recover from the stress of thesis writing, I thought I would add a small set of thoughts on this subject in an effort to defocus and defrag a little. My wife and I have continued to listen to this book, and I think I have another fairly major objection to Tegmark's views.

Tegmark lives in a version of quantum mechanics that fetishizes the notion of wave function collapse; he views himself as going against the grain by offering an alternative in which collapse does not have to happen.

For a bit of context, "collapse" is a feature of the Copenhagen interpretation of quantum mechanics. In this way of looking at the subject, the wave function remains in superposition until something is done to determine what state the wave function is in… at that point, the wave function ceases to be coherent and drops into some allowed eigenstate, after which it remains in that eigenstate. This is a big, dominant part of how quantum mechanics is taught, but I would suggest that it misses some of the subtlety of what actually happens by trying to interpret, perhaps wrongly, what the wave function is.

The fact of the matter is that you can never observe a wave function. When you actually look at what you have, you only ever find eigenstates. But there is an added subtlety to this. If you make an observation, you find an object somewhere, doing something. That you found the object is indisputable, and you can be pretty certain of what you know about it at the time slice of the observation. Unfortunately, you only know exactly what you found; from this –directly– you actually have no idea what the wave function was, or even really what the eigenstates are. A location is an eigenvalue of the position operator, as quantum mechanics operates, but from finding a particle "here" you really don't know what spectrum of locations it was potentially capable of occupying. To learn that, you have to set up the situation a second time, put time in motion, and see that this time you find the particle ending up "there," then tabulate the results together. This is repeated a number of times until you get "here," "there" and "everywhere." Binning each trial together, you start to learn a distribution of how the possibilities could have played out. From this distribution, you can finally write a wave function, which tells the probability of making some observation across the continuum of the space you're looking at… the wave function says that you have "this chance of finding the object 'here' or 'there'."

The wave function, however you try to pack it, is fundamentally dependent on the numerical weight of a statistically significant number of observations. From one observation, you can never know anything about the wave function.
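A minimal sketch of that binning process (the Gaussian position distribution below is just an assumed stand-in for some |psi|²; it is not meant to model any particular experiment):

```python
import numpy as np

# Pretend nature prepares the same state over and over; the experimenter does not
# know the underlying distribution |psi(x)|^2 (assumed here to be a unit Gaussian).
rng = np.random.default_rng(seed=0)

def measure_position(n_trials):
    """Each trial returns one position eigenvalue and nothing else."""
    return rng.normal(loc=0.0, scale=1.0, size=n_trials)

print("single measurement:", measure_position(1))   # one number; says nothing about |psi|^2

# Only by binning many repeated trials does the distribution -- and with it the
# modulus of the wave function -- become knowable.
samples = measure_position(100_000)
counts, edges = np.histogram(samples, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("estimated |psi(x)|^2 near x = 0:", counts[np.argmin(np.abs(centers))].round(3))
```

One run prints a lone number; only the hundred-thousand-run histogram starts to look like the distribution the wave function encodes.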

The same thing holds true for coherence. If you make one observation, you find what you found that one time; you know nothing about the spectrum of possibilities. For that one hit, the particle could have been in coherence, or it could have been collapsed to an eigenstate. You don’t know. You have to build up a battery of observations, which gives you the ability to say “there’s a xx% chance this observation and that observation were correlated, meaning that coherence was maintained to yy degree.”

This comes back to Feynman's old double-slit anecdote. For one BB passing through the system and striking the screen, you only know that it did, not anything about how it did. The wave function written for the circumstances of the double slit provides a forecast of what the possible outcomes of the experiment could be. If you start measuring which slit a BB went through, the system becomes fundamentally different based upon how the observation is made; different things are knowable, and the wave function will forecast different statistical outcomes. But you cannot know this unless you make many observations and see the difference. If you measure the location of one BB at the slit and the location of one BB at the screen, that's all you know.

In this way, the wave function is a bulk phenomenon, a beast of statistical weight. It can tell you what observations you might find… if you know the setup of the system. An interference pattern at the screen tells you that the history was muddy and that there are multiple possible histories that could explain an observation at the screen. This doesn't mean that a BB went through both slits, merely that you don't know which history brought it to the place where it is. "Collapse" can only be known after two situations have been so thoroughly examined that the chances of the different outcomes are well understood. In a way, it is as if the phenomenon of collapse is written into the outcome of the system by the setup of the experiment, and the types of observations that are possible are ordained before the experiment is carried out. The wave function, then, really is basically just a forecast of possible outcomes based on what is known about a system: sampling for the BB at the slit or not, different information is available about the system, creating different possible outcomes and requiring the wave function to make a different forecast that incorporates what is known. The wave function never actually exists at all except to tell you the envelope of what you can know at any given time, based upon how the system differs from one instance to the next.
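Here is a toy simulation of that idea (the geometry and numbers are invented for illustration, and the "which-slit measured" case is crudely modeled by simply dropping the interference term): a single hit is uninformative, and the forecast itself changes depending on what is known about the system.

```python
import numpy as np

# Toy far-field double-slit model -- all parameters are illustrative, not a real apparatus.
wavelength = 500e-9                      # m
slit_sep   = 50e-6                       # m, center-to-center slit separation
L          = 1.0                         # m, slit-to-screen distance
x = np.linspace(-0.02, 0.02, 2001)       # positions on the screen (m)

envelope = np.exp(-(x / 0.01) ** 2)      # crude stand-in for the single-slit envelope
phase    = np.pi * slit_sep * x / (wavelength * L)

I_no_which_slit = envelope * np.cos(phase) ** 2   # no which-slit info: fringes
I_which_slit    = envelope * 0.5                  # which-slit measured: interference term dropped

def sample_hits(intensity, n, rng=np.random.default_rng(1)):
    """Draw n BB impact positions from a normalized screen intensity profile."""
    p = intensity / intensity.sum()
    return rng.choice(x, size=n, p=p)

print("one BB landed at x =", sample_hits(I_no_which_slit, 1)[0])   # tells you nothing

hits = sample_hits(I_no_which_slit, 100_000)
counts, _ = np.histogram(hits, bins=100)
print("fringe contrast after 1e5 hits:",
      round((counts.max() - counts.min()) / (counts.max() + counts.min()), 2))
```

Swap `I_no_which_slit` for `I_which_slit` and the accumulated histogram loses its fringes: same screen, different forecast, and only a statistically heavy pile of hits can tell you which situation you were in.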

This view directly contradicts the notion in Tegmark's book that individual quantum mechanical observations at "collapse" allow two universes to be created based upon whether the wave function went one way or another. On a statistical weight of one, it cannot be known whether the observed outcome was drawn from a collection of different possibilities or not. The possible histories or futures are unknown on a data point of one; that one is what it is, and you can't know that there were other choices without a large campaign of measurements to establish what other choices could have happened. Even then, what that gives you is the ability to say "there's a sixty percent chance this observation matches this eigenstate and a forty percent chance it matches that one," which is fundamentally not the same as the decisiveness that would be required for the collapse of one data point to claim "we're definitely in the universe where it went through the right slit."

I guess I would say this: Tegmark’s level 3 multiverse is strongly contradicted by the Uncertainty Principle. Quantum mechanics is structurally based on indecisiveness, while Tegmark’s multiverse is based on a clockwork decisiveness. Tegmark is saying that the history of every particle is always known.

This is part of the issue with quantum computers: a quantum computer must run its processing experiment repeatedly in order to establish knowledge about coherence in the system. On a sampling of one, the wave function simply does not exist.

Tegmark does this a lot. He routinely puts the cart before the horse, saying that math implies the universe rather than that math describes the universe (Tegmark: Math, therefore Universe. Me: Universe, therefore Math). The universe is not math; math is simply so flexible that you can pick out descriptions that accurately tell what's going on in the universe (until they don't). For all his cherry-picking of the "mathematical regularity of the universe," Tegmark quite completely turns a blind eye to where the math fails to work: most problems in quantum mechanics are not exactly solvable, and most quantum advancement leans heavily on perturbation theory… that is, on approximations and infinite expansions that are cranked through computers to churn out compact numbers that are close to what we see. In this, the math that "works" is so overloaded with bells and whistles to make it approach the actual observational curve that one can only ever say the math is adopting the form of the universe, not that the universe arises from the math.
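For readers who haven't seen it, this is the kind of machinery meant by "perturbation": the standard Rayleigh-Schrodinger expansion, in which a Hamiltonian H = H₀ + λV that can't be solved exactly is approximated as an infinite series built on an exactly solvable H₀ and then truncated for numerical evaluation.

```latex
% Rayleigh-Schrodinger perturbation series for the energy of state n,
% with H = H_0 + \lambda V and H_0 exactly solvable. In practice the series
% is truncated and the terms are ground out numerically.
E_n = E_n^{(0)}
    + \lambda\,\langle n^{(0)} | V | n^{(0)} \rangle
    + \lambda^2 \sum_{m \neq n}
        \frac{\bigl|\langle m^{(0)} | V | n^{(0)} \rangle\bigr|^2}
             {E_n^{(0)} - E_m^{(0)}}
    + \cdots
```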

Edit 1-17-18:

Still listening to this book. We listened through a section where Tegmark admits that he's putting the cart before the horse by putting math ahead of reality. He simply refers to it as a "stronger assertion," which I think is code for "the part where I know everyone will disagree with me."

Tegmark slipped gently out of reality again when he started into a weird observer-observation duality argument about how time "flows" for a self-aware being. You know he's lost it when his description fails to use the word "entropy" even once. Tegmark is under the impression that the quantum mechanical choice of every distinct ion in your brain is somehow significant to the functioning of thought. This shows an unbelievable lack of understanding of biology, where mass structures and mass action form behavior. The fact of the matter is that biological thought (the awareness of a thinking being) is not predictable from the quantum mechanical behavior of its discrete underpinning parts. In reality, quantum mechanics supplies the bulk steady state from which a mass effect like biological self-awareness is formed. Because of the difference in scale between the biological level and the quantum mechanical level, biology depends only on the prevailing quantum mechanical average… fluctuations away from that average, the weirdness of quantum mechanics, are almost entirely swamped out by simple statistical weight. A series of quantum mechanical arguments designed to connect the macroscale of thought to the quantum scale is fundamentally broken if it doesn't take this into account.

Consider this: the engine of your gas-fueled car depends on quantum mechanical behavior. Molecules of gasoline are mixed with molecules of oxygen in the cylinder and are triggered by a pulse of heat to undergo a chemical reaction in which the atoms of the fuel and the oxygen reconfigure the quantum mechanical states of their electrons to organize into molecules of CO2 and water (and some CO). After the reorganization, the collected atoms in these new molecules are at a different average state of quantum mechanical excitation than they were before –you could say that they end up further from their quantum mechanical zero point for their final structure than they were prior to the reorganization. In "human baggage" we call this differential "heat," or "release of heat." The quantum mechanics describes everything about how the reorganization proceeds, right down to the direction a CO2 molecule wants to speed off in after it has been formed.

What the quantum mechanics does not directly tell you is that 10^23 of these reactions happen, and that for all the different directions the CO2 molecules are moving after they are formed, the average distribution of their expansion is all that is needed to drive the piston… that this molecule speeds right or that one speeds left is immaterial: if it didn't, another would, and if that one didn't, still another would, and so on and so forth until you achieve the bulk behavior of an expanding gas that can push the piston. The statistics are what matter here. That the gasoline is 87 octane versus 91 octane –two quantum mechanically different approaches to the same thing– does not change that both drive the piston… you could use ethanol or kerosene or RP-1 to perform the same action, and the specifics of the quantum mechanics result in an almost indistinguishable outcome in which an expanding gas pushes back the piston to produce torque on the crankshaft and drive the wheels around.

The quantum mechanics is watered down to a simple average, where the quantum mechanical differences between one firing of the piston and the next are indistinguishable. To be sure, every firing of the piston is not quantum mechanically exactly the same as the one before it; in reality, the piston moves despite these differences. There is literally an unthinkably huge ensemble of quantum mechanical states that result in the piston moving, and you cannot distinguish any of them from any other. There is no choice but to group them all together by what they hold in common and to treat them as if they are the same thing, even though at the basement layer of reality they aren't. Without what Tegmark refers to as "human baggage," there would be no way to connect the quantum level to the one we can actually observe in this case. Whether this particular molecule of fuel reacted or not based on fluctuations of the quantum mechanics is pretty much immaterial.
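A minimal numerical sketch of that "watering down" (the event count and per-event spread below are made-up, order-of-magnitude placeholders): with an enormous number of independent reaction events per stroke, the stroke-to-stroke fluctuation in total energy released is immeasurably small, even though no two strokes are microscopically identical.

```python
import math

# Illustrative, order-of-magnitude numbers only -- not real engine or chemistry data.
N       = 1e21    # independent reaction events in one combustion stroke (assumed)
mean_E  = 1.0     # mean energy released per event (arbitrary units)
sigma_E = 0.3     # assumed per-event spread from quantum/thermal variation

# For independent events, means add linearly while fluctuations add in quadrature,
# so the relative fluctuation of the total shrinks like 1/sqrt(N).
total_mean  = N * mean_E
total_sigma = math.sqrt(N) * sigma_E

print(f"relative stroke-to-stroke fluctuation ~ {total_sigma / total_mean:.1e}")
# ~1e-11: every firing of the piston is, for all macroscopic purposes, the same.
```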

The brain is no different. If you consider "thought" to be a quantum mechanical action, the specific differences between one thought and the next are themselves huge ensembles of different quantum mechanical configurations… even the same thought twice is not the same quantum mechanical configuration twice. The "units" of thought are in this way decoupled from the fundamental level, since two versions of the "same thing" are so statistically removed from their quantum mechanical foundation as to be completely unpredictable from it.

This is a big part of the problem with Tegmark's approach; he basically says "quantum underlies everything, therefore everything should be predictable from quantum." This is a fool's errand. The machinery of thought in a biological person is simply at a scale where the quantum mechanics has salted out into Tegmark's "human baggage"… named conceptual entities, like neuroanatomy, free energy and entropy, that are not mathematically irreducible. He gets to ignore the actual mechanisms of "thought" and "self-awareness" in order to focus on things he's more interested in, like what he calls the foundational structure of the universe. Unfortunately, he's trying to connect levels of reality that are not naturally associated… thought and awareness are by no means associated with fundamental reality –time passage as experienced by a human being, for instance, has much more in common with entropy and statistical mechanics than it does with anything else, and Tegmark totally ignores it in favor of a rather ridiculous version of the observer paradox.

One thing that continues to bother me about this book is something Tegmark says late in it. The man is clearly very skilled and very capable at what he does, but he dedicates the last part of his book to all the things he will not publish on for fear of destroying his career. He feels the ideas deserve to be out there (and, as an arrogant theorist, he feels that even the dross in his theories is gold), but by publishing a book about them, he gets to circumvent peer review and scientific discussion and bring these ideas straight to an audience that may not be able to sort the parts of what he says that are crap from those few trinkets which are good. I don't mean that he should be muzzled –he has freedom of speech– but if his objective is to favor the dissemination of scientific education, he should be a model of what he professes. If Tegmark truly believes these ideas are useful, he should damned well be publishing them directly in the scientific literature so that they can be subjected to real peer review. Like anyone else, he should have to face his hubris, the first piece of which is his incredible weakness at stat mech and biology.
