Disagreeing with “Our Mathematical Universe”

My wife and I have been listening to Max Tegmark’s book “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality” as an audiobook during our trips to and from work lately.

When we hit the chapter explaining quantum mechanics and his “Level 3 multiverse,” I found that I profoundly disagree with this guy. It’s clear that he’s a grade A cosmologist, but I think he skirts dangerously close to being a quantum crank when it comes to multi-universe theory. I’ve been disagreeing with his take for the last couple of driving sessions, and I will do my best to summarize from memory the specific issues I’ve taken. Since this is a physicist making these claims, it’s important that I be accurate about my disagreement. In fact, I’ll start with just one and see whether I feel like going further from there…

The first place where I disagree is where he seems to show a physicist’s case of Dunning-Kruger regarding other fields in which he is not an expert. Physicists are very smart people, but they have a nasty habit of overestimating their competence in neighboring sciences… particularly biology. I am in the unusual position of having been doubly educated: I have a solid background in biochemistry and molecular cell biology in addition to my background in quantum mechanics. I can speak at a fair level on both.

Professor Tegmark uses an anecdote (one must be careful here; anecdotes trade mathematical precision for vividness) to illustrate how he feels quantum mechanics connects to events at a macroscopic level in organisms. There are many versions, but essentially he says this: when he is biking, the quantum mechanical behavior of an atom crossing through a gated ion channel in his brain affects whether or not he sees an oncoming car, which then may or may not hit him. By quantum mechanics, whether he gets hit by the car or not should be a superposition of states, depending on whether or not the atom passes through the membrane of a neuron and enables him to have the thought to save himself. He ultimately elaborates this by asserting that “collapse free” quantum mechanics says that there is one universe where he saved himself and one universe where he didn’t… and he uses this as a thought experiment to justify what he calls a “level 3” multiverse: parallel realities that are coherent with each other but differ by the direction that a quantum mechanical wave function collapse took.

I feel his anecdote is a massive oversimplification that more or less throws the baby out with the bathwater. The quantum event in question is “whether or not a calcium ion in his brain passes through a calcium channel,” connected to the macroscopic biological phenomenon of “whether he decides to bike through traffic,” or alternatively “whether or not he turns his eye in the appropriate direction,” or alternatively “whether or not he sees a car coming when he starts to bike.”

You may notice this as a variant of the Schrödinger “cat in a box” thought experiment. In this experiment, a cat is locked in a perfectly closed box with a sample of radioactive material and a Geiger counter that will dump acid onto the cat if it detects a decay; as long as the box is closed, the cat will remain in some superposition of states, conventionally considered “alive” or “dead,” as connected with whether or not the isotope emitted a radioactive decay. I’ve made my feelings about this thought experiment known before here.

The fundamental difficulty comes down to what the superposition of states means when you start connecting an object with a very simple spectrum of states, like an atom, to an object with a very complex spectrum of states, like a whole cat. You could suppose that the cat and the radioactive emission become entangled, but I feel there’s some question whether you could ever actually know whether or not they were entangled, simply because you can’t discretely figure out what the superposition should mean: “alive” and “dead” for the cat are not a binary on-off difference from one another the way “emitted or not” is for the radioactive atom. There are a huge number of states the cat might occupy that are very similar to one another in energy, and the spectrum spanning “alive” to “dead” is so complicated that it might as well just be a thermal universe.

Whether the entanglement actually happened or not, classical thermodynamics and statistical mechanics should be enough to tell you, in classically “accurate enough” terms, what you find when you open the box. If you wait one half-life of a bulk radioactive sample, when you open the box, you’ll find a cat that is burned by acid to some degree or another. At some point, quantum mechanics does give rise to classical reality, but where?

The “but where” is always where these arguments hit their wall.

In the anecdote Tegmark uses, as I’ve written above, the “whether a calcium ion crossed through a channel or not” is the quantum mechanical phenomenon connected to “whether an oncoming car hit me or not while I was biking.”

The problem that I have with this particular argument is that it loses scale. This is where quantum flapdoodle comes from. Does the scale make sense? Is all the cogitation associated with seeing a car and operating a bike on the same scale as where you can actually see quantum mechanical phenomena? No, it isn’t.

First, all the information coming to your brain from your eyes telling you that the car is present originates from many, many cells in your retina, involving billions of interactions with light. The muscles that move your eyes and your head to see the car are instructed by thousands of nerves firing simultaneously, and these nerves fire from gradients of calcium and other ions… molar scale quantities of atoms! A nerve doesn’t fire or not based on the collapse of possibilities for a single calcium ion. It fires based on thermodynamic quantities of ions flowing through many gated ion channels all at once. The net effect of one particular atom experiencing quantum mechanical ambivalence is swamped under statistically large quantities of atoms picking all of the choices they can pick from the whole range of possibilities available to them, giving rise to the bulk phenomenon of the neuron firing.

Let’s put it this way: for the nerve to fire or not based on quantum mechanical superposition of calcium ions would demand that the nerve visit that single thermodynamic state where all the ions fail to flow through all the open ion gates in the membrane of the cell all at once… and there are statistically few states where this has happened compared to the statistically many states where some ions or many ions have chosen to pass through the gated pore (this is what underpins the chemical potential that drives the functioning of the cell). If you bother to learn any stat mech at all, you find that this state is such a rare one that it would probably not be visited even once in the entire age of the universe. Voltage gradients in nerve cells are established and maintained through copious application of chemical energy, which is truthfully constructed from quantum mechanics but mainly expressed in bulk by plain old classical thermodynamics. And this is merely the state of whether a single nerve “fired or not”; add to it the fact that your capacity for “thought” doesn’t hinge on any single nerve: if a single nerve in your retina failed to fire, all the sister nerves around it would still deliver an image of the car speeding toward you to your brain.
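
Just to put a hedged number on how rare that “all the gates fail at once” state is, here’s a back-of-envelope sketch. The channel count and per-channel odds are invented placeholders, not measured neuroscience values; real firing events involve far more ions, which only makes the point stronger:

```python
import math

# Assumed, order-of-magnitude placeholders (not measured values):
N = 10_000   # number of gated channels open during one firing event
p = 0.99     # chance that any single open channel conducts ions in that window

# Probability that every channel simultaneously fails to conduct:
log10_prob = N * math.log10(1.0 - p)
print(f"log10(P[all {N} channels fail at once]) = {log10_prob:.0f}")
# prints -20000: a 1-in-10^20000 state, never visited even once in the
# ~4e17 seconds the universe has existed
```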

Do atoms like a single calcium ion subsist in quantum mechanical ambivalence when left to their own devices? Yes, they do. But when you put together a large collection of these atoms simultaneously, it is physically improbable that every single atom will make the same choice all at once. At some point you get bulk thermodynamic behavior, and the decisions that your brain makes are based on bulk thermodynamic behaviors, not isolated quantum mechanical events.

Pretending that a person made a cognitive choice based on the quantum mechanical outcome of a single atom is a reductio ad absurdum, and it is profoundly disingenuous to start talking about entire parallel universes where you swerved right on your bike instead of left based on that single calcium ion (regardless of how liberally you wave around the butterfly effect). The nature of physiology in a human being at all levels is about biasing fundamentally random behavior into directed, ordered action, so focusing on one potential speck of randomness doesn’t mean that the aggregate should fail to behave as it always does. All the air in the room where you’re standing right now could suddenly pop into the far corner leaving you to suffocate (there is one such state in the statistical ensemble), but that doesn’t mean that it will… closer to home, you might win a $500 million Power Ball Jackpot, but that doesn’t mean you will!

I honestly do not know what I think about the multiverse or about parallel universes. I would say I’m agnostic on the subject. But, if all parallel universe theory is based on such breathtaking Dunning-Kruger as Professor Tegmark exhibits when talking about the connection between quantum mechanics and the actualization of biological systems, the only stance I’m motivated to take is that we don’t know nearly enough to be speculating. If Tegmark is supporting multiverse theory based on such thinking, he hasn’t thought about the subject deeply enough. Scale matters here, and neglecting the scale means you’re neglecting the math! Is he neglecting the math elsewhere in his other huge, generalizing statements? For the scale of individual atoms, I can see how these ideas are seductive, but stretching them to statistical systems is just wrong when you start claiming to see the effects of quantum mechanics at macroscopic biological levels where people actually do not. It’s like Tegmark is trying to give Deepak Chopra ammunition!

Ok, just one gripe there. I figure I probably have room for another.

In another series of statements that Tegmark makes in his discussion of quantum mechanics, I think he probably knows better, but by adopting the framing he has, he risks misinforming the audience. After a short discussion of the origins of quantum mechanics, he introduces the Schrödinger equation as the end-all, be-all of the field (despite speaking briefly of the Lagrangian path integral formalism elsewhere). One of the main theses of his book is that “the universe is mathematical” and therefore the whole of reality is deterministic based on the predictions of equations like Schrödinger’s equation. If you can write the wave equation of the whole universe, he says, Schrödinger’s equation governs how all of it works.

This is wrong.

And, I find this to miss most of the point of what physics is and what it actually does. Math is valuable to physics, but one must always be careful that the math not break free of its observational justification. Most of what physics is about is making measurements of the world around us and fitting those measurements to mathematical models, the small-t “theories” provided to us by the Einsteins and the Sheldon Coopers… if the fit is close enough, the regularity of a given equation will sometimes make predictions about further observations that have not yet been made. Good theoretical equations have good provenance in that they predict observations that are later made, but the opposite can be said for bad theory, and the field of physics is littered with a thick layer of mathematical theories which failed, in one way or another, to account for the observations. The process of physics is a big selection algorithm: smart theorists write every possible theory they can come up with, and experimentalists take those theories and see if the data fit. If a theory does accommodate observation, it is promoted to a big-T Theory and is explored to see where its limits lie. Small-t “theories,” on the other hand, are discarded if they don’t accommodate observation, at which point they are replaced by a wave of new attempts that try to accomplish what the failure didn’t. As a result, new theories fit over old theories and push back predictive limits as time goes on.

For the specific example of Schrödinger’s equation, the mathematical model that it offers fits over the Bohr model by incorporating de Broglie’s matter wave. Bohr’s model itself fit over a previous model, and the previous models fit over still earlier ideas going back to the ancient Greeks. Each later iteration extends the accuracy of the model, and whether the development settles depends on whether or not a new model has validated predictive power –this is literally survival of the fittest applied to mathematical models. Schrödinger’s equation itself has a limit where its predictive power fails: it cannot handle Relativity except as a perturbation… meaning that it can’t exactly predict outcomes that occur at high speeds. The deficiencies of the Schrödinger equation are addressed by the Klein-Gordon equation and by the Dirac equation, and the deficiencies of those in turn are addressed by the path integral formalisms of Quantum Field Theory. If you knew the state equation for the whole universe, Schrödinger’s equation would not accurately predict how time unfolds, because it fails to work under certain physically relevant conditions. The modern Quantum Field Theories fail at gravity, meaning that even with the modern quantum, there is no assured way of predicting the evolution of the “state equation of the universe” even if you knew it. There are a host of follow-on theories, String Theory, Loop Quantum Gravity and so on and so forth, that vie for being The Theory That Fills The Holes but, given history, will probably only extend our understanding without fully answering all the remaining questions. That String Theory has not made a single prediction that we can actually observe right now should be lost on no one –there is a grave risk that it never will. We cannot at the moment pretend that the Schrödinger equation perfectly satisfies what we actually know about the universe from other sources.

It would be most accurate to say that reality seems to be quantum mechanical at its foundation, but that we have yet to derive the true “fully correct” quantum theory. Tegmark makes a big fuss about how “wave function collapse” doesn’t fit within the premise of Schrödinger’s equation, and argues that the equation could still hold as good quantum mechanics if a “level three multiverse” is real. The opposite is also true: we’ve known Schrödinger’s equation is incomplete since the 1930s, so “collapse” may simply be another place where it’s incomplete, for reasons we don’t yet know. A multiverse does not necessarily follow from this. Maybe pilot wave theory is the correct quantum, for all I know.

It might be possible to masturbate over the incredible mathematical regularity of physics in the universe, but beware of the fact that it wasn’t particularly mathematical or regular until we picked out those theories that fit the universe’s behavior very closely. Those theories have predictive power because that is the nature of the selection criteria we used to find them; if they lacked that power, they would be discarded and replaced until a theory emerged meeting the selection criteria. To be clear, mathematical models can be written to describe anything you want, including the color of your bong haze, but they only have power because of their self-consistency. If the universe does something to deviate from what the math says it should, the math is simply wrong, not the universe. The moment you find neutrino mass, God help your massless-neutrino Standard Model!

Wonderful how the math works… until it doesn’t.

Edit 12-19-17:

We’re still listening to this book during our car trips, and I wanted to point out that Tegmark uses an argument very similar to my argument above to suggest why the human brain can’t be a quantum computer. He approaches the matter from a slightly different angle. He says instead that a coherent superposition of all the ions on either side of the cell membrane is impossible to maintain for more than a very, very short period of time: eventually something outside of the superposition would bump against some component of it, and since so many ions are involved, the frequency of things bumping on the system from the outside and “making a measurement” becomes high. I do like what he says here because it starts to show the scale that is relevant to the argument.

On the other hand, it still fails to necessitate a multiverse. The simple fact is that human choice is decoupled from the scale of quantum coherence.

Edit 1-10-18:

As I’m trying desperately to recover from stress in the process of thesis writing, I thought I would add a small set of thoughts on this subject in an effort to defocus and defrag a little. My wife and I have continued to listen to this book, and I think I have another fairly major objection to Tegmark’s views.

Tegmark lives in a version of quantum mechanics that fetishizes the notion of wave function collapse, and he views himself as going against the grain by offering an alternative where collapse does not have to happen.

For a bit of context, “collapse” is a side effect of the Copenhagen interpretation of quantum mechanics. In this way of looking at the subject, the wave function will remain in superposition until something is done to determine what state the wave function is in… at this point, the wave function will cease to be coherent and will drop into some allowed eigenstate, after which it will remain in that eigenstate. This is a big, dominant part of quantum mechanics, but I would suggest that it misses some of the subtlety of what actually happens in quantum mechanics by trying to interpret, perhaps wrongly, what the wave function is.

Fact of the matter is that you can never observe a wave function. When you actually look at what you have, you only ever find eigenstates. But there is an added subtlety to this. If you make an observation, you find an object somewhere, doing something. That you found the object is indisputable, and you can be pretty certain what you know about it at the time slice of the observation. Unfortunately, you only know exactly what you found; from this –directly– you actually have no idea either what the wave function was or even really what the eigenstates are. A definite location corresponds to an eigenstate of the position operator, as quantum mechanics operates, but from finding a particle “here” you don’t actually know the spectrum of locations it was potentially capable of occupying. In order to learn this, the experiment which is performed is to set up the situation a second time, put time in motion, and see that you find the new particle ending up “there,” then to tabulate the results together. This is repeated a number of times until you get “here,” “there” and “everywhere.” Binning the trials together, you start to learn a distribution of how the possibilities could have played out. From this distribution, you can suddenly write a wave function, which tells the probability of making some observation across the continuum of the space you’re looking at… the wave function says that you have “this chance of finding the object ‘here’ or ‘there’.”

The wave function, however you try to pack it, is fundamentally dependent on the numerical weight of a statistically significant number of observations. From one observation, you can never know anything about the wave function.
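
A toy simulation makes this concrete. This is a minimal sketch under an assumed state, not any particular experiment: I take the “true” prepared state to be a harmonic-oscillator-like ground state whose position statistics are Gaussian, so each measurement is one draw from |psi|². One draw is just a number; only the accumulated histogram recovers the distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

one_shot = rng.normal(0.0, 1.0)          # a single measurement: one eigenvalue
print("single measurement:", one_shot)    # tells you nothing about psi

samples = rng.normal(0.0, 1.0, 200_000)   # many identically prepared trials
density, edges = np.histogram(samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# the |psi|^2 being reconstructed (unit-variance Gaussian, by assumption)
psi_sq = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
print("max |histogram - |psi|^2|:", np.abs(density - psi_sq).max())  # small; shrinks with trials
```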

The same thing holds true for coherence. If you make one observation, you find what you found that one time; you know nothing about the spectrum of possibilities. For that one hit, the particle could have been in coherence, or it could have been collapsed to an eigenstate. You don’t know. You have to build up a battery of observations, which gives you the ability to say “there’s a xx% chance this observation and that observation were correlated, meaning that coherence was maintained to yy degree.”

This comes back to Feynman’s old double slit thought experiment. For one BB passing through the system and striking the screen, you only know that it did, and not anything about how it did. The wave function written for the circumstances of the double slit provides a forecast of what the possible outcomes of the experiment could be. If you start measuring which slit a BB went through, the system becomes fundamentally different based upon how the observation is made and different things are knowable, so the wave function will forecast different statistical outcomes. But you cannot know this unless you make many observations in order to see the difference. If you measure the location of one BB at the slit and the location of one BB at the screen, that’s all you know.
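
The two forecasts, slits watched versus unwatched, only separate statistically. Here’s a hedged toy model of that (idealized far-field geometry; the wavenumber, slit spacing and screen distance are invented for illustration, not Feynman’s numbers):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)   # position along the screen
k, d, L = 20.0, 5.0, 50.0            # assumed wavenumber, slit spacing, screen distance

r1 = np.sqrt(L**2 + (x - d / 2)**2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)  # path length from slit 2

unwatched = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2))**2         # fringes
watched = np.abs(np.exp(1j * k * r1))**2 + np.abs(np.exp(1j * k * r2))**2  # no fringes
print("watched-slit forecast is smooth:", watched.min().round(2), "to", watched.max().round(2))

rng = np.random.default_rng(1)
p = unwatched / unwatched.sum()
print("one hit lands at x =", rng.choice(x, p=p))  # one BB: can't tell which forecast held

hits = rng.choice(x, p=p, size=50_000)             # many BBs: the fringes emerge
counts = np.histogram(hits, bins=40, range=(-10, 10))[0]
print("brightest vs darkest screen bin:", counts.max(), counts.min())
```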

In this way, the wave function is a bulk phenomenon, a beast of statistical weight. It can tell you the observations that you might find… if you know the set-up of the system. An interference pattern at the screen tells you that the history was muddy and that there are multiple possible histories that could explain an observation at the screen. This doesn’t mean that a BB went through both slits, merely that you don’t know what history brought it to the place where it is. “Collapse” can only be known after two situations have been so thoroughly examined that the chances for the different outcomes are well understood. In a way, it is as if the phenomenon of collapse is written into the outcome of the system by the set-up of the experiment, and the types of observations that are possible are ordained before the experiment is carried out. In that way, the wave function really is basically just a forecast of possible outcomes based on what is known about a system… sampling for the BB at the slit or not, different information is present about the system, creating different possible outcomes and requiring the wave function to make a different forecast. The wave function is something that never actually exists at all except to tell you the envelope of what you can know at any given time, based upon how the system differs from one instance to the next.

This view directly contradicts the notion in Tegmark’s book that individual quantum mechanical observations at “collapse” allow for two universes to be created based upon whether the wave function went one way or another. On a statistical weight of one, it cannot be known whether the observed outcome was drawn from a collection of different possibilities or not. The possible histories or futures are unknown on a data point of one; that one is what it is, and it can’t be known that there were other choices without a large campaign of repeated measurements to learn what other choices could have happened. What that campaign gives you is the ability to say “there’s a sixty percent chance this observation matches this eigenstate and a forty percent chance it matches that one”… which is fundamentally not the same as the decisiveness required for a collapse of one data point to claim “we’re definitely in the universe where it went through the right slit.”

I guess I would say this: Tegmark’s level 3 multiverse is strongly contradicted by the Uncertainty Principle. Quantum mechanics is structurally based on indecisiveness, while Tegmark’s multiverse is based on a clockwork decisiveness. Tegmark is saying that the history of every particle is always known.

This is part of the issue with quantum computers: the quantum computer must run its processing experiment repeatedly, multiple times, in order to establish knowledge about coherence in the system. On a sampling of one, the wave function simply does not exist.

Tegmark does this a lot. He routinely puts the cart ahead of the horse, saying that math implies the universe rather than that math describes the universe (Tegmark: math, therefore universe. Me: universe, therefore math). The universe is not math; math is simply so flexible that you can pick out descriptions that accurately tell what’s going on in the universe (until they don’t). For all his cherry-picking of the “mathematical regularity of the universe,” Tegmark quite completely turns a blind eye to where math fails to work: most problems in quantum mechanics are not exactly solvable, and most quantum advancement leans strongly on perturbation theory… that is, approximations and truncated infinite expansions that are cranked through computers to churn out compact numbers that are close to what we see. In this, the math that ‘works’ is so overloaded with bells and whistles to make it approach the actual observational curve that one can only ever say that the math is adopting the form of the universe, not that the universe arises from the math.

Edit 1-17-18:

Still listening to this book. We listened through a section where Tegmark admits that he’s putting the cart ahead of the horse by putting math ahead of reality. He simply refers to it as a “stronger assertion” which I think is code for “where I know everyone will disagree with me.”

Tegmark slipped gently out of reality again when he started into a weird observer-observation duality argument about how time “flows” for a self-aware being. You know he’s lost it when his description fails to even once use the word “entropy.” Tegmark is under the impression that the quantum mechanical choice of every distinct ion in your brain is somehow significant to the functioning of thought. This shows an unbelievable lack of understanding of biology, where mass structures and mass action form behavior. Fact of the matter is that biological thought (the awareness of a thinking being) is not predictable from the quantum mechanical behavior of its discrete underpinning parts. In reality, quantum mechanics supplies the bulk steady state from which a mass effect like biological self-awareness is formed. Because of the difference in scale between the biological level and the quantum mechanical level, biology depends only on the prevailing quantum mechanical average… fluctuations away from that average, the weirdness of quantum, are almost entirely swamped out by simple statistical weight. A series of quantum mechanical arguments designed to connect the macroscale of thought to the quantum scale is fundamentally broken without taking this into account.

Consider this: the engine of your gas fueled car is dependent on a quantum mechanical behavior. Molecules of gasoline are mixed with molecules of oxygen in the cylinder and are triggered by a pulse of heat to undergo a chemical reaction where the atoms of the gas and oxygen reconfigure the quantum mechanical states of their electrons in order to organize into molecules of CO2 and CO. After the reorganization, the collected atoms in these new molecules of CO2 and CO are at a different average state of quantum mechanical excitation than they were prior to the reconfiguration –you could say that they end up further from their quantum mechanical zero point for their final structure as compared to prior to the reorganization. In ‘human baggage’ we call this differential “heat” or “release of heat.” The quantum mechanics describes everything about how the reorganization would proceed, right down to the direction a CO2 molecule wants to speed off in after it has been formed.

What the quantum mechanics does not directly tell you is that 10^23 of these reactions happen, and for all the different directions that CO2 molecules are moving after they are formed, the average distribution of their expansion is all that is needed to drive the piston… that this molecule speeds right or that one speeds left is immaterial: if it didn’t, another would, and if that one didn’t, still another would, and so on and so forth until you achieve a bulk behavior of expansion in the CO2 atmosphere that can push the piston. The statistics are important here. That the gasoline is 87 octane versus 91 octane, two quantum mechanically different approaches to the same thing, does not change that both drive the piston… you could use ethanol or kerosene or RP-1 to perform the same action, and the specifics of the quantum mechanics result in an almost indistinguishable state where an expanding gas pushes back the piston to produce torque on the crankshaft to drive the wheels around.

The quantum mechanics are watered down to a simple average where the quantum mechanical differences between one firing of the piston and the next are indistinguishable. But, to be sure, every firing of the piston is not quantum mechanically exactly the same as the one before it. In reality, the piston moves despite these differences. There is literally an unthinkably huge ensemble of quantum mechanical states that result in the piston moving, and you cannot distinguish any of them from any other. There is literally no choice but to group them all together by what they hold in common and to treat them as if they are the same thing, even though at the basement layer of reality, they aren’t. Without what Tegmark refers to as “human baggage” there would be no way to connect the quantum level to the one we can actually observe in this case. Whether this particular molecule of fuel reacted or not based on fluctuations of the quantum mechanics is pretty much immaterial.
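
The averaging argument is easy to sketch numerically. All numbers below are invented toy values (a thermal velocity spread pulled out of the air, nothing like real combustion chemistry): each product molecule leaves its reaction with a random velocity along the piston axis, and the piston only feels the ensemble average, whose fluctuation shrinks like one over the square root of the particle count:

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (10, 1_000, 1_000_000):
    vz = rng.normal(0.0, 400.0, n)       # assumed thermal spread along the piston axis, m/s
    pressure_proxy = np.mean(vz**2)      # momentum flux ~ the push on the piston
    rel_fluct = np.std(vz**2) / (pressure_proxy * np.sqrt(n))
    print(f"n = {n:>9,}: relative fluctuation in the push ~ {rel_fluct:.2e}")
# extrapolate to n ~ 1e23 and the relative fluctuation is ~1e-11:
# no single molecule's "choice" is detectable in the stroke of the piston
```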

The brain is no different. If you consider “thought” to be a quantum mechanical action, the specific differences between one thought and the next are themselves huge ensembles of different quantum mechanical configurations… even the same thought twice is not the same quantum mechanical configuration twice. The “units” of thought are in this way decoupled from the fundamental level, since two versions of the “same thing” are actually so statistically removed from their quantum mechanical foundation as to be completely unpredictable from it.

This is a big part of the problem with Tegmark’s approach; he basically says “quantum underlies everything, therefore everything should be predictable from quantum.” This is a fool’s errand. The machineries of thought in a biological person are simply at a scale where the quantum mechanics has salted out into Tegmark’s “human baggage”… named conceptual entities, like neuroanatomy, free energy and entropy, that cannot practically be reduced to the underlying math. He gets to ignore the actual mechanisms of “thought” and “self-awareness” in order to focus on things he’s more interested in, like what he calls the foundation structure of the universe. Unfortunately, he’s trying to attach levels of reality that are not naturally associated… thought and awareness are by no means associated with fundamental reality –time passage as experienced by a human being, for instance, has much more in common with entropy and statistical mechanics than it does with anything else, and Tegmark totally ignored it in favor of a rather ridiculous version of the observer paradox.

One thing that continues to bother me about this book is something that Tegmark says late in it. The man is clearly very skilled and very capable at what he does, but he dedicates the last part of his book to all the things he will not publish on for fear of destroying his career. He feels the ideas deserve to be out (and, as an arrogant theorist, he feels that even the dross in his theories is gold), but by publishing a book about them, he gets to circumvent peer review and scientific discussion and bring these ideas straight to an audience that may not be able to sort which parts of what he says are crap from those few trinkets which are good. I don’t mean that he should be muzzled, he has the freedom of speech, but if his objective is to favor dissemination of scientific education, he should be a model of what he professes. If Tegmark truly believes these ideas are useful, he should damned well be publishing them directly into the scientific literature so that they can be subjected to real peer review. Like all people, he should have to face his hubris, the first instance of which is his incredible weakness at stat mech and biology.


Flat Earth “Research”

You no doubt heard about this fellow in the last week with the steampunk rocket with “Flat Earth Research” written on the side. In my opinion, he was pretty clearly trolling the media; there’s not much likelihood of resolving any issues about the shape of the Earth if the peak altitude of your rocket is only a fraction of the altitude of a commercial airline jet. He said a number of antiscience things and dismissed the mathematical formulae of aeronautics and fluid mechanics as “not science,” as if physics were anything other than physics. The guy claimed he was using the flight as a test bed for a bigger rocket and wanted to create a media circus to announce his run for a seat in the California legislature. Not bad for a limo driver, I’ll give him that.

Further in the background, I think it’s clear he was just after a publicity stunt; his do-it-yourself rocket cost a great deal of money, and his conversion to flat eartherism obviously helped to pay the bill. It really did make me wonder what exactly flat earthers think “research” is given that they were apparently willing to pony up a ton of money for this rocket, which won’t go high enough to resolve anything an airline ticket won’t resolve better.

My general feelings about flat earth nonsense are well recorded here and here.

A part of why I decided to write anything about this is that the guy wants to run for office in California. This should be concerning to everyone: someone who is trusted to make decisions for a whole community had better be doing so based on a sound understanding of reality. Higher positions currently filled in the Federal government notwithstanding, a disconnect seems to be forming in our self-governance which is allowing people to unhinge their decision-making processes from what is actually known about the world. I think that’s profoundly dangerous.

In my opinion also, this is not to heap blame on those who actually hold office now, but on everybody who elected to put them there. Our government is both by the people and for the people: anybody in power is at some level representative of the electorate, possessing all the same potentially fatal flaws. If you want to bitch about the government, the place to start is society itself.

Now, Flat Eartherism is one of those pastimes that is truly incredibly past its time. There are two reasons it subsists: the first is people trolling other people for kicks online, while the second is that some people are so distrusting and conspiracy-minded that they’re willing to believe just about anything if it feeds into their biases. There are some people who truly believe it. A part of why people are able to believe the conspiracy theories is that the visual evidence of the Earth’s roundness comes through sources that they define as questionable because of their connection to ostensibly corrupt power –NASA, for all its earnest effort to keep space science accessible to the common man, has not been perfect. Further, not just anybody can go to a place where the roundness of the Earth is unambiguously visible, given exactly how hard it is to get to very high altitudes over Earth in the first place. For all of SpaceX’s success, space flight still isn’t a commodity that everyone can sample. Travel into space is held under lock and key by the few and powerful.

Having known and worked a bit around scientists associated with space flight projects, I understand the mindset of these scientists, and it offends me very deeply to see their trustworthiness questioned when I know that many of them value honesty very highly. Part of why the conspiracy garbage circulates at all is because our society is so big that “these people” never meet “those people” and the two sides have little chance of bumping into one another. It’s easy to malign people who are faceless, and it’s really easy to accuse someone of lying if they aren’t present to defend themselves. That doesn’t mean that either is deserved. This comes back to my old argument about the constitutionally defended right to spout lies in the form of “Freedom of Speech” being a very dangerous social norm.

Now, that said, another of the primary reasons I decided to write this post is that I saw a YouTube video of Eddie Bravo facing down two scientists and more or less humiliating them over their inability to defend “round eartherism.”

You may or may not know of him, but Eddie Bravo is a modern hero to the teenage boy; he’s another of these podcaster/micro-celebrity types who is widely accessible with a few keystrokes in an environment with basically zero editorial content control. He’s a visible face of the UFC (Ultimate Fighting Championship) movement along with Joe Rogan. He’s attained wide acclaim for being a “Gracie Killer,” which is a big thing if you know anything about the UFC… the Gracies being the renowned Brazilian Jiu-Jitsu family who dominated the grappling world early in the UFC and brought the art of Jiu-Jitsu in its Brazilian form to the whole world. From this little history, you can easily guess why Bravo is a teenage boy hero: he’s a brash, cocky bad ass. He’s a world class Jiu-Jitsu fighter, hands down. Unfortunately, as with many celebrities, his Jiu-Jitsu street cred affords him the opportunity to open his mouth about whatever he feels like. Turns out he’s a bit of a crank magnet too, including being a flat earther.

To begin with, I don’t believe Mr. Bravo –or any other crank, for that matter– is stupid. I’ve long since seen that great intelligence can exist in people who, for one reason or another, don’t know better or choose not to “believe” in something. If he weren’t talented at some level, he wouldn’t be a hard enough worker to develop the acclaim he has attained. But he conflates being able to shout over whoever he feels like with being able to beat them, which absolutely isn’t true in an intellectual debate.

In the YouTube clip I saw, Mr. Bravo confronts two scientists in a room full of people friendly to him. The first scientist is brought to the forefront, where he introduces himself as an “Earth Scientist”… much to the rolling eyes and derision of the audience. Eddie Bravo then demands that he give the one bit of evidence which proves that the “Earth is round.” Put on the spot, this poor fellow then makes the mistake of trying to tell Mr. Bravo that science is a group of people who specialize in many different disciplines, across many different lines of research, and fails to provide Mr. Bravo with a direct answer to his question. It’s true that science is distributed, but by not answering the question, he gives the appearance of not having the answer, and Eddie Bravo was completely aware that the man had said nothing to the point!

When the second scientist comes forward, Eddie Bravo demands (in a poorly worded demand at that, in my opinion) that since most people hold the disappearance of a ship’s mast over the horizon as the “proof” that the world is round, “why is it that people are able to take pictures of ships after they’re supposedly over the horizon?” This second scientist really did step up, I think: he tried to explain that light doesn’t necessarily travel in straight lines (which is true) and that the atmosphere can work like a fiber optic to bring images around the curve of the earth. Mr. Bravo derided this explanation, basically saying “Oh, please, that’s garbage, everybody knows you can’t see around corners.” And, at a superficial level, this will be regarded as a true response, despite the fact that the numbers always fall out the bottom of the strainer in a rhetorical confrontation. The second scientist ended up sounding like he was talking over everybody’s head with his too-intricate explanation, and Eddie Bravo was able to use that to make him out as “other,” winning the popular argument at that point. Combine these incidents with a lot of shouting over the other guy, and Eddie Bravo came off well… the video is listed as a “debate,” never mind that it was anything but.

If you are a science educator, I would recommend watching that video. Scientist #1 comes off as stupid and scientist #2 comes off as pompous.

You’ll love me for saying this, but that was all preface to the purpose of this blog post. Most modern flat earthers are YouTube trolls; they castrate their opposition by insisting that the evidence of the Earth’s roundness comes from sources that are intrinsically tainted and questionable. And the truth is that many people who believe the Earth is round really only understand this fact based on a line of evidence that people like Eddie Bravo will not accept. How do you straighten out a guy who will not accept the satellite images?

Well, how is it that we know the earth is round? We knew it before there were satellites, computer graphics and Photoshop. With global travel and an information society, these knowable, observable things are easier than ever to check. Flat earthers prove they are incompetent researchers every time they open their mouths and say “Well, have you researched it? I did, and the earth is flat!”

Now, suppose I were a flat earth researcher: how would I go about the science of establishing the shape of the earth using a series of modern, readily available, cheap tools?

Hypothesis: The Earth is flat! It’s the stable, unmoving center of the universe and the sun and sky move over it.

[Figure 1: flat earth model]

One thing that we can immediately see about this model is a simple thing: when the sun is in the sky, every point on the plane can see it at the same time, since there is nothing to obstruct the line of sight anywhere. Before fast travel and telecommunication, nobody could really tell whether or not this was the case: it was enough to suppose that everybody on Earth wakes up from the night at the same time and goes about their day. For this flat earth model, seen from the side, the phenomenon of sunrise (a phenomenon as old as the beginning of the Earth, by the way) would look like this:

[Figure 2: simple sunrise model]

We have all seen this: the sun starts below the edge of the Eastern Horizon and pops up above it. For a majority of people on Earth, this is what the sun seems to do in the morning.

There are a number of simple tests of this model, but the simplest question to ask is this: Does everybody on Earth see the sun appear at the same time? Everybody is standing on that flat plane: when the sun comes up from below the horizon, does everybody on Earth see it at once?

[Figure 3: simple sunrise model at sunrise]

Notice, this is a requirement: if the Earth is flat, people all across the plane of the Earth will be able to see something big coming over the edge of that plane almost simultaneously, depending on nearby impediments, like mountains for instance.

So, here’s the experiment! If you live in California, grab your smart phone, buy an airplane ticket and fly to New York. The government has no control at all over where you fly in the continental US of A and they really won’t care if you take this trip. New York, New York is actually a kind of fun place to visit, so I recommend going and maybe catching a Broadway show while you’re there. When you get to New York, find someplace along the waterline where you can look east over the ocean and go there in the morning before sunrise. After the sun rises, wait 30 minutes and then place a phone call back to one of your buddies in California and ask him if the sun is up.

This experiment can be repeated with any two east-west separated locations on Earth, though the time delay will depend on the separation, so a half hour may or may not be long enough for the sun to rise in both places. Any real flat earth “researcher” should be running this experiment.

For the set-up written above, the sun comes up in New York about three hours before it actually comes up in California! A California view of the sun is blocked below the horizon of the Earth for three hours after it has become visible in New York.
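
As a sanity check on that number, longitude alone predicts the delay. A quick sketch, with the city longitudes recalled from memory and the sun sweeping 15 degrees of longitude per hour:

```python
lon_ny = -74.0     # degrees, New York (from memory)
lon_la = -118.2    # degrees, Los Angeles (from memory)

delay = (lon_ny - lon_la) / 15.0   # the sun covers 15 degrees of longitude per hour
print(f"sunrise delay from New York to LA: about {delay:.1f} hours")  # ~2.9
```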

Now, you might argue, New York is on the east side of the US and is much closer to where the sun comes up on our hypothetical plane, so maybe the Rocky Mountains are obstructing some view of the sun in LA.

[Figure 4: mountain occlusion]

And suppose that this blocking effect lasts three hours.

So, here’s the new experiment. Drive your car from LA to NY and watch the odometer; you can even get a mechanic you trust to assure you that the government hasn’t fiddled with it. You now know the approximate distance from LA to NY by the odometer read-out. Next, you buy a barometer and use the pressure change of the air to measure how high the Rocky Mountains are… or, you could just use a surveying scope to measure the angular height of the mountains and your car to check distances, then work a bit of trig to estimate the height of the mountains.

[Figure 5: measuring mountain height]

The Rockies are well understood to be just a bit taller than 14,000 ft.
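
The surveying-scope version of that measurement is one right triangle: height = distance × tan(angular height). A sketch with invented sample readings (any real odometer distance and scope angle of your own works the same way):

```python
import math

distance_mi = 60.0   # assumed odometer distance from the scope to the peak
angle_deg = 2.5      # assumed angular height of the peak read off the scope

height_ft = distance_mi * 5280.0 * math.tan(math.radians(angle_deg))
print(f"estimated peak height: {height_ft:,.0f} ft")   # ~13,800 ft for these readings
```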

With these distances available, you do the following experiment with surveying scopes. When the sun appears above the horizon in LA, your friend measures the angle above ground level where it is visible (surveying scopes have bubble levels for leveling the scope). You measure the angle above the horizon at the same time using a survey scope of your own in New York. Remember, you’ve got smartphones, you can talk to each other and coordinate these measurements.

For the flat earth, the position of the sun in the sky should obey the following simple triangular model:

[Figure 6: flat earth trig model]

This technique is as old as the hills and is called “triangulation.” Notice, I’ve used three measurements made with cheap modern equipment: angle at LA, angle at NY and the distance from LA to NY (approximate from the odometer). What I have in hand from this is the ability to determine the approximate altitude of the sun using a bit of high school level trig. Use law of sines and it’s easy to forecast the altitude of the sun from these measurements:

[Figure 7: height of the sun]

I won’t do the derivation just this once; you plug in the distance and the angles and, voilà, out comes the height of the sun over the flat earth. (I’m not being snide here: flat earthers don’t even seem to try to use trig.)

What we know so far is that the sun comes up three hours earlier in New York than in LA, while we would expect that the sun should be visible everywhere on the flat earth at the same time as it comes over the horizon. Maybe the Rockies are blocking LA from seeing the sun for three hours. This would give rise to the following situation:

[Figure 9: mountain triangle]

You end up with similar triangles: the triangle from LA to the Rocky Mountains and the triangle from LA to the sun. Knowing the height of the mountains and the distance from LA to the mountains, you get the angle that the sun must be at when it first appears in LA: the angle from LA to the top of the mountains must be the same as the angle from LA to the sun at that moment. We would expect this angle to be very small, since the Rockies are really not that high, so finding it nearly zero to within the noise of the instrument would be no surprise.

Now, LA to New York is about 2,800 miles, and the distance from LA to Denver is 1,020 miles. The mountains are 14,000 feet tall. Three hours into the morning, from New York, the sun will appear to be at an angle of ~45 degrees over the horizon, since the sun climbs the sky at about 15 degrees per hour (neglecting latitude effects… leave that for later). If you start plugging these figures into the equations, the altitude of the sun must be 7.3 miles up in the sky, or 38,500 ft.
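
For anyone who wants to check the arithmetic, here is that triangulation as a sketch, using the standard two-observer formula h = d / (cot θ_LA − cot θ_NY) for an object beyond both observers on a flat plane, with the distances as quoted above:

```python
import math

d_la_ny = 2800.0               # miles, LA to New York
d_la_rocky = 1020.0            # miles, LA to Denver / the Rockies
h_rocky = 14000.0 / 5280.0     # the Rockies' height, converted to miles

theta_la = math.atan(h_rocky / d_la_rocky)   # sun just clearing the mountains: ~0.15 deg
theta_ny = math.radians(45.0)                # sun three hours into the New York morning

h_sun = d_la_ny / (1.0 / math.tan(theta_la) - 1.0 / math.tan(theta_ny))
print(f"flat-earth sun altitude: {h_sun:.1f} miles = {h_sun * 5280:,.0f} ft")
# -> about 7.3 miles, roughly 38,500 ft
```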

Huh.

You can fly at 40,000 ft in an airliner. Easy hypothesis to test. If the sun is only 7.3 miles up and visible at 45 degrees inclination in New York, you could go fly around it with an airplane.

Has anybody ever done that?

A good scientist would keep looking at the sun through the whole day and might notice that the angular difference between the sun’s inclination observed in the spotting scope at New York and in the one at LA does not change. Both inclinations increase at the same rate. There is always something like a 45 degree difference in inclination between these two places (again, neglecting latitude effects; this argument will appear a tiny bit janky since New York and Los Angeles are not at the same latitude, but the effect should be very close to what I described).
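
An idealized check of that constant offset, with both observers treated as if they sat on the equator at equinox (so the sun’s altitude is just 15 degrees per hour of local morning); real latitudes bend these numbers a little:

```python
dlon = 44.2   # degrees of longitude between New York and LA (assumed from memory)

for t in range(0, 7):                 # hours after New York sunrise
    alt_ny = 15.0 * t                 # sun climbs ~15 degrees per hour
    alt_la = 15.0 * t - dlon          # LA lags by the longitude difference
    note = "(below horizon)" if alt_la < 0 else ""
    print(f"t = {t} h: NY {alt_ny:5.1f} deg, LA {alt_la:6.1f} deg {note}")
# the NY-LA difference stays pinned at 44.2 degrees all morning
```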

For this flat earth model to be true, the sun would need to radically and aphysically change altitude from one part of the day to the next in order for the reported angles to be real. We know with pretty good accuracy that the sun does not just pop out of the Atlantic ocean several dozen miles off the coast every morning when it rises over the United States, whatever the flat earthers want to tell you. And, this is pretty much observable without any NASA satellites. Grab yourself a boat and go see! The other possibility is that the sun is much further away than 7 miles and that the physical obstruction between LA and New York is much larger than just the height the Rocky Mountains over sea level –and also maybe that the angles on the levels of the spotting scopes somehow don’t agree with each other.

For this alone, the vanilla flat earth model must be discarded. You cannot validate any of the predictions in the model above: LA and New York do not see the sunrise at the same time and the sun clearly is not only 7 miles high in New York. To give them some credit, most modern flat earthers, including Eddie Bravo, do not subscribe directly to this model.

As a point of fact, I would mention that every flat earth model struggles with the observable phenomena of time zones and jet lag. If any flat earther ever asks you what convinced you of a round Earth, just say “time zones” in order to forestall him or her and not look like you’re avoiding the question. Generally speaking, time zones exist because the curve of the Earth (something that flat earthers claim shouldn’t exist) obstructs the sun from lighting every point on the surface of the Earth at the same time.

So then, now that we’ve made basically two tests of a flat earther hypothesis and seen that it fails rather dramatically in the face of simple modern do-it-yourself measurements, what model do these people actually believe in?

[Figure: flat earthers explain their theory on Australian television]

Most modern flat earthers believe in some version of the model above (one of the major purveyors of this is Eric Dubay. I won’t link his site because I won’t give him traffic.) In this model, you can think about the Earth as a big disc centered on an axle that passes through the north pole. The sun, the moon and the night sky spin around this axle over the Earth (or maybe the Earth spins like a record beneath the sky). The southern tips of South America, Africa and Australia are placed at extreme distances from one another and Antarctica is expanded into an ice wall that surrounds the whole disc. The model here is actually not a new one and originated some time in the 1800s.

For the image depicted here, I would point out once again that if the sun is an emissive sphere, projecting light in all directions, the model above gives a clear line of sight for every location on Earth to see the sun at all times. For this reason, the flat earthers usually insist that the sun is more like a flashlight or a street lamp which projects light in a preferred direction, so that light from it can’t be seen at locations other than where the light is being projected (never mind that this prospect immediately begins to suffer when trying to generate the appropriate phases of the moon).

To generate this model, the flat earthers have actually cherry-picked a few rather interesting observations about the sky. You can find a YouTube video where Eddie Bravo tries to articulate these observations to Joe Rogan. Central among them is that the North Star, Polaris, seems not to move in the night sky and that all the stars, and even the sun, seem to pivot around this point. In particular, during the season of white nights above the arctic circle, the sun seems to travel around the horizon without really setting (never mind that during the winter months, the sun disappears below the horizon for weeks on end… again with that pesky horizon thing; on the flat earth, the sun is not allowed to drop below the horizon and still be visible elsewhere on the same longitude, since that intrinsically implies that the Earth’s surface must curve to accomplish said feat).

[Figure: the sun’s path during the season of white nights at the arctic circle]

Taken from Scijinks.gov, this image demonstrates the real observation of what the sun does during the season of white nights as viewed at the arctic circle. The flat earth model amplifies this into the depiction given above.

If this is our hypothetical model, we could say that the sun is suspended over the flat Earth so that it sits on a ring at the radius of the equator in its revolution around the pole.

[Figure 10: disc model]

This image shows you right away the first thing to test. Even seen from a distance of 3/4 of the disc’s diameter away, the sun can never appear in the sky at a lower angle of inclination than its altitude over the surface allows. In other words, it can never go down below the horizon or come up over it.

[Figure 11: minimum angle of inclination]

Here, theta is the minimum angle of inclination that the sun will visit in the sky. I’ve heard flat earthers quote ~3,000 miles for the height of the sun, and the maximum viewing distance across the disc would be (3/4)×24,000 miles = 18,000 miles, which gives a minimum inclination angle of about 9 degrees over the horizon. And that’s as seen from the maximum possible distance across the width of the disc, where the flat earthers claim the sunlight can’t be seen. As a result, the sun will always have to *appear* in the sky at some inclination greater than 9 degrees –just suddenly start making light– at the time when the sun supposedly rises.
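
That 9-degree figure is one line of arithmetic, taking the flat earthers’ own numbers at face value:

```python
import math

h = 3000.0     # miles: the sun's altitude, as quoted by flat earthers
d = 18000.0    # miles: maximum viewing distance across the disc

print(f"minimum possible inclination: {math.degrees(math.atan(h / d)):.1f} degrees")  # ~9.5
```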

The truth of that is directly observable: do you ever see the sun just appear in the sky when day breaks? I certainly haven’t.

This failure to ever reach the horizon, mixed with the requirement for time zones, is enough to kill the flat earth model above: it can’t produce the observations available from the world around us that can be obtained with just the tiniest bit of leg work! The model can’t handle sunrises (period). There’s a reason that the round earth was postulated more than 2,500 years ago; it’s based on a series of clever but damn easy measurements. And I reiterate, those measurements are easier to make with modern technology.

It is inevitable that this logic won’t satisfy someone. The altitude number for the sun, 3,000 miles, was cribbed from flat earth chatter. Suppose that this number is actually different and that they don’t actually know what it is (surprise, surprise, I don’t think I’ve ever seen evidence of any one of them doing something other than making YouTube videos or staring through big cameras trying to see ships disappear over the horizon and not understanding why they don’t. Time to get to work, guys, you need to measure the altitude of the sun over the flat earth or you’ll all just keep looking like a bunch of dumbasses staring at tea leaves!)

Now, then, in some attempt to justify this model, a measurement needs to be made of the altitude of the sun (again). You can do it basically in the same way you did it before; you mark out a base length along the surface of the Earth and station two guys with surveying scopes at either end: you count “1,2,3” over the smartphone and then both of you report the angle you measure for the inclination of the sun. In this case, I recommend that one guy be stationed south of the equator and the other guy stationed north, both off the equator by the same distance along a longitude line. The measurement should be made on either the Vernal or Autumnal equinox and it should be made at noon during the day when the sun is at its highest point in the sky. This should make calculations easier by producing an isosceles triangle. How do you know you’re on the same longitude line? The sun should rise at the same time for both of you on the equinox. And, I specify equinox because I would rather not get into effects caused by the Earth’s axial tilt, like the significance of the tropics of Cancer and Capricorn (you want to know about those, go learn about them yourself).

[Figure 12: height of the sun, version 2]

From this measurement, how do you get the height of the sun? You use the following piece of very easy trig:

For the isosceles arrangement, with a baseline d between the spotters and both measuring the same inclination angle theta, the height works out to:

h = \frac{d}{2}\tan\theta

And, note, this trig will not work unless both angles measured are the same… but you can orchestrate that with a couple of spotters, an accurate clock and a couple of surveying scopes.
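The arithmetic is simple enough to script. Here's a minimal Python sketch of the flat-earth inference; the baselines and angles plugged in below are made-up illustrations, not real survey data:

```python
import math

def flat_earth_sun_height(baseline_miles, inclination_deg):
    # Two spotters straddle the equator, separated by baseline_miles,
    # each measuring the same inclination angle at noon on the equinox.
    # On a flat earth, the implied height is half the baseline times tan(theta).
    return (baseline_miles / 2.0) * math.tan(math.radians(inclination_deg))

print(flat_earth_sun_height(2000, 71.6))  # ~3,000 miles: consistent with the flat-earth claim
print(flat_earth_sun_height(200, 89.0))   # ~5,700 miles: a short baseline implies a much higher sun
```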

If you do this very close to the equator, where d is small, you will find that the sun sits at some crazily high altitude. You may not be able to pin it down precisely because of the sizeable angular width of the sun, but it will be very high… in the millions of miles. This by itself pushes the minimum allowed angular height of the sun up, not down, because it's larger than the value taken for the calculation above. To handle the horizon problem, where the sun can only appear to be higher than about 9 degrees in the sky and never cross the horizon, the height of the sun must be lower than 3,000 miles, not higher. (Humans in antiquity couldn't coordinate a measurement like this and used a different set of triangles to try to estimate the height of the sun.)

If you are a good scientist, you will repeat this measurement a number of times with different base distances between the spotters. If the Earth is flat, every base length you choose between the spotters should produce the same height for the sun (this is an example of the scientific concept of Replication).

Here’s what you will actually find:

[Figure 14: three measurements]

At a latitude close to the equator, during the first measurement, the sun will appear to be very far away, at a really high altitude. With the second measurement, at mid latitudes on either side of the equator, the sun will appear to be at a significantly lower altitude. During the final measurement, at distant latitudes, as far north and south as you can get, the sun will appear to actually sit down on the face of the Earth. If you coordinate this experiment with six people on a group chat all at once, this is what they will all see simultaneously. Could I coordinate the measurement locations so that the sun appears to be 3,000 miles high? Sure, but who in the hell would ever take that as honest? Flat earthers blame scientists for being dishonest… what if the flat earthers are the ones being dishonest? Does it not count for them somehow?

Since the sun suddenly appears to be speeding toward the Earth, does this mean it's about to crash down onto the experimenters stationed at the equator? No. It just means that your model is completely wrong: it hasn't produced a self-consistent measurement. A mature scientist would consider the flat earth a dead hypothesis at this point.

Why does the round earth manage to succeed at explaining this series of observations? For one thing, the round earth doesn’t assume that the spotting scopes are stationed at the same angular level.

[Figure 15: round earth contrast]

The leveling bubble on the spotting scope can only register the local level, and the angle that you end up measuring is the one between the local horizon and the sight line. On the equinox (very important), the sun will only appear to be directly overhead at noon on the equator.

If you’re still unconvinced that the flat earth is a dead hypothesis which doesn’t live up to testing and continue to focus on strange mirages seen over the surface of the ocean on warm days as evidence that the round earth can’t be right, consider the following observations.

Flat earthers use Polaris as the pivot around which the sky spins. Why is it that Polaris is not visible in the sky from latitudes south of the equator? Why is it that the Southern Cross constellation is not visible from the northern hemisphere? Eddie Bravo, as a Gracie hunter, surely must have visited Brazil: did he ever go outside and look for the North Star during a visit? Failing that, did he look for the Southern Cross from Las Vegas?

Flat earthers use the observation that the stars in the sky rotate counterclockwise around Polaris as evidence that the sky is rotating around the disc of the Earth. Have they ever gone and observed at night, from the tip of Argentina in South America, that the sky seems to rotate clockwise around some axis to the south? How can the sky rotate both clockwise and counterclockwise at the same time? In the flat earth model, it can't, but in reality, it does! As an extension, why in the hell does the sun come straight up from the east and set straight in the west on the equinox at the equator? Seen from the North Pole on the same equinox day, the sun rolls around the horizon at the level of the ground and never quite rises. Use your smartphone and take the trip to see! Send a friend to Panama while you go to Juneau, Alaska, and talk on the smartphone to confirm that it happens this way in both places at once.

Don’t take my word for it, go and make the observations yourself!

How is this all possible?

I’ll tell you why.

It’s because flat earthers never test the models they put forward with the tools that are at their flipping fingertips. “Flat Earth ‘Research'” my ass.

Do I need NASA satellite pictures or rocket launches to know that the Earth is round? Pardon my French, but fucking hell, no! Give me the combination of time zones with the fact that the sun actually pops up over the horizon when it rises, and your ass is grass. Flat earth models can't explain these observations simultaneously; they can only do one or the other.

Edit 11-28-17

Yeah, I have a tiny bit more to say.

If all of what I've said still does not convince you, you're likely hopeless. But here's a comparison between what the sun does in the sky over the disc-shaped flat earth and what it actually does.

Here’s how the sun travels across the sky on the disc-shaped earth:

[Figure 16: flat earth sun track]

Here’s what the sun really does depending on latitude:

[Figure 17: real sun track, by latitude]

This particular set of sun behaviors is actually visible year round, but the latitude where the sun travels from east, straight over the apex, to west shifts north and south depending on the season when you look. At equinox, the observation is symmetric about the equator, but it shifts north and south of there as the months move on, producing the same general pattern above. In winter, the axial tilt of the Earth prevents the sun from ever rising over the North Pole, while the same is true at the South Pole during the northern hemisphere's summer. Flat earthers seem never to make any observations about what the sun does in the sky south of the equator. Do they not go to Australia or South America to take a look?

As an extra, I have made the mistake of rooting through Eric Dubay's "200 proofs" gallop. I once even thought about writing a blog post about the experience, but decided it was too exhausting. For one thing, quantity does not assure quality. Many of the 200 proofs are taken from accounts of 19th century navigation errors, and one must wonder whether such accounts hold up in the 21st century world.

Further, some of the proofs are simple, flat-out lies: among them is an exhaustive observation of the lack of airline flight routes in the southern hemisphere, twisting route information to show that flights must pass through the northern hemisphere to reach destinations as far separated as the tip of South America and the tip of South Africa. This simply ignores the fact that flight routes exist between these destinations that never touch the northern hemisphere. Are there more flight routes in the northern hemisphere than in the southern? Yes: most of the human population lives at or north of the equator, so most of the places anybody would want to go are in the northern hemisphere. If you doubt that such a flight route exists, go to the southern hemisphere, take an airline flight from Argentina to South Africa, and use a stopwatch during the flight to see if it's a fraction of the length Dubay would claim –commercial airline jets have a known flight profile that would be impossible to hide; the rate at which they cross distance is well characterized. Did Dubay do this experiment? Nope.

What should stun a person about Dubay is not merely that he makes wrong claims, it's that he repeats the same wrong claims 60 times in a row to an audience that not only fawns over them, but fails to point out the giant logical gaps detailed above. How hard is it to see that you not only need to cope with time zones, but with sunrises too?

Pointing out a tiny detail, like how mirages work on the surface of the ocean, does not somehow validate a model that can't handle the big-ticket items, like time zones and sunrises. It only shows that you don't understand how the small details work. I can sort of understand that people are losing touch with the world around them as they grow more and more entrenched in the online world, but if you fail to understand that the online world does not dictate the physics of the real world, you are in big trouble.

(Edit 3-26-18:)

The steam rocket dude finally shot himself 1,800 feet into the air. Oh yeah, and "flat earth and stuff." Tell me again how his little stunt was supposed to test anything. His interest was in launching himself in a steam-powered rocket; it had nothing to do with finding out the roundness (or lack thereof) of the Earth.

If you vote for him for Governor, you deserve what you get.

For anybody actually interested in a test that did something, check this out. For the record, there are aberrations in the lenses here which do affect exactly what you see along the edges of the image, but ask yourself how the rocket can appear straight while the background appears curved. Further, if you doubt it, that test is something that can be done by someone with the limo driver's means.

Powerball Probabilities

If you’ve read anything else in this blog, you’ll know I write frequently about my playing around with Quantum Mechanics. As a digression away from a natural system that is all about probabilities, an interesting little toy problem I decided to tackle is figuring out how the “win” probabilities are determined in the lottery game Powerball.

Powerball is actually quite intriguing to me. They have a website here which details, by level, all the winners across the whole country who have won a Powerball prize in any given drawing. You may have looked at this chart at some point while trying to figure out if your ticket won something useful. A part of what intrigues me about this chart is that it tells you, for a given drawing, exactly how much money was spent on Powerball and how many people bought tickets. How does it tell you this? Because probability is an incredibly reliable gauge of behavior at big sample sizes. And Powerball quite willingly lays all the numbers out for you to do their bookkeeping for them, by telling you exactly how many people won… particularly at the high-probability-to-win levels, which push into the regime of Gaussian statistics. For big samples, like millions of people buying Powerball tickets, the relative errors on the counts become insignificant: a count N carries an absolute error of about sqrt(N), and therefore a relative error of 1/sqrt(N). And the probabilities reveal what those average values are.

The game is doubly intriguing to me because of the psychological component that drives it. As the pot grows, people's willingness to play grows, even though the probabilities never change. The game leaps into the national consciousness every time the pot becomes big, and people play more aggressively, as if they had a greater chance of winning said money. It is true that somebody ultimately walks away with the big pot, but what's the likelihood that that somebody is you?

But, as a starter, what are the probabilities that you win anything when you buy a ticket? To understand this, it helps to know how the game is set up.

As everybody knows, powerball is one of these games where they draw a bunch of little balls printed with numbers out of a machine with a spinning basket and you, as the player, simply match the numbers on your ticket to the numbers on the balls. If your ticket matches all the numbers, you win big! And, as an incentive to make people feel like they’re getting something out of playing, the powerball company awards various combinations of matching numbers and adds in multipliers which increase the size of the award if you do get any sort of match. You might only match a number or two, but they reward you a couple bucks for your effort. If you really want, you can pick the numbers yourself, but most people simply grab random numbers spat out of a computer… not like I’m telling you anything you don’t already know at this point.

One of the interesting qualities of the game is that the probabilities of prizes are very easy to adjust. The whole apparatus stays the same; they just add or subtract balls from the basket. In powerball, as currently run, there are two baskets: the first basket contains 69 balls while the second contains 26. Five balls are drawn from the first basket while only one, the Powerball, is drawn from the second. There is actually an entire record available of how the game has been run in the past, how many balls were in either the first or second baskets and when balls were added or subtracted from each. As the game has crossed state lines and the number of players has grown, the number of balls has also steadily swelled. I think the choice in numbering has been pretty careful to make the smallest prize attainably easy to get while pushing the chances for the grand prize to grow enticingly larger and larger. Prizes are mainly regulated by the presence of the Powerball: if your ticket manages to match the Powerball and nothing else, you win a small prize, no matter what. Prizes get bigger as a larger number of the other five balls are matched on your ticket.

The probabilities at a low level work almost exactly as you would expect: if there are 26 balls in the powerball basket, at any given drawing you have 1 chance in 26 of matching the powerball, and therefore 1 chance in 26 of winning some prize as determined by the presence of the powerball. There are also prizes for matching three or more balls drawn from the main basket, which pushes the probability of winning anything slightly higher than 1 in 26.

For the number savvy, this begins to reveal the economics of powerball: an assured win by these means requires you to spend, on average, $52. That's 26 tickets at $2 each, among which you are likely to have one that matches the powerball. Note, the prize for matching that number is $4. $52 spent to net only $4 is a big overall loss: you're down $48. But this 26-ticket buy-in is actually hiding the fact that you have a small chance of matching some sequence of other numbers and obtaining a bigger prize… and it would certainly not be an economic loss if you matched the powerball and then the 5 other balls, yielding a profit in the hundreds of millions of dollars (and this is usually what people tell themselves as they spend $2 for each number).

The probability to win the powerball-only prize, that is, to match just the powerball number, is actually somewhat worse than 1 in 26. The probability is attenuated by the requirement that you hit no matches on any of the other five numbers drawn.

Finding the actual probability goes as follows: (1/26)*(64/69)*(63/68)*(62/67)*(61/66)*(60/65). If you multiply that out and invert it, you get 1 hit in 38.32 tries. The first factor is, of course, the chance of hitting the powerball, while the other five are the chances of hitting numbers that aren't picked… most of these probabilities are naturally quite close to 1, so you are likely to hit them, but they are the probabilities that count toward hitting the powerball only.
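As a quick sanity check, here's that product in a couple lines of Python (my own snippet, not from the original post):

```python
from functools import reduce

# powerball hit, then five misses among the main-basket balls
factors = [1/26, 64/69, 63/68, 62/67, 61/66, 60/65]
p = reduce(lambda a, b: a * b, factors)
print(1 / p)  # ~38.32 tries per powerball-only win
```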

This number may not be that interesting to you, but lots of people play the game, and that means the count of people hitting just the powerball is close to Gaussian. This is useful to a physicist because it reveals something about the structure of the Powerball-playing audience in any given week: that site I gave tells you how many people won with only the powerball, meaning that by multiplying that number by 38.32, you know how many tickets were purchased prior to the drawing in question. For example, as of the August 12 2017 drawing, 1,176,672 numbers won the powerball-only prize, meaning that very nearly 38.32*1,176,672 numbers were purchased: ~45,090,071 numbers, give or take about 42,000 from the counting error on the number of winners (notice that the error here is well below 0.1%).
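Spelled out: the winner count is a counted number, so it carries a statistical error of about its own square root, which the factor of 38.32 then scales along with the estimate itself:

\hat{N} = 38.32 \times 1{,}176{,}672 \approx 45{,}090{,}071, \qquad \sigma_{\hat{N}} \approx 38.32\sqrt{1{,}176{,}672} \approx 4.2\times10^{4}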

How many people are playing? If people mostly purchase maybe two or three numbers, around 15-20 million people played. Of course, I'm not accounting for the slavering masses who went whole hog and dropped $20 on numbers; if everybody did this, only 4.5 million people played… truly, I can't know people's purchasing habits for certain, but I can with certainty say that at most a couple tens of millions of people played.

The number there reveals quite clearly the economics of the game for the period between the 8/12 drawing and the one a couple days prior: $90 million was spent on tickets! This is really quite easy arithmetic, since it's all a factor of $2 over the number of ticket numbers sold. If you look at the total prize pay-out, also on that page I provided, $19.4 million was won. This means that the Powerball company kept ~$70 million made over about three days, of which some got dumped into the grand prize and some went to whatever overhead they keep (I hear at least some of that extra is supposed to go into public works, and maybe some also ends up in the Godfather's pocket). Lucrative business.

If you look at the prize payouts for the game, most of the lower-level prizes pay between $4 and $7. You can't get a prize that exceeds $100 until you match at least 4 balls. Note, here, that the probability of matching 4 balls (counting the powerball as one of them, so three main balls plus the powerball) is about 1 in 14,494. This means that to assure yourself a prize of $100, you have to spend ~$29,000. You might argue that in 14,494 tickets you'll also win a bunch of smaller prizes ($4 prizes at odds of 1 in 38 and 1 in 91, $7 prizes at 1 in 700 and 1 in 580) and maybe break even. Here's the calculation for how much you'll likely make on that buy-in: $4*(14,494*(1/38 + 1/91)) + $7*(14,494*(1/700 + 1/580))… I've rounded the probabilities a bit… = $2,482.65. For ~$29,000 spent to assure a single $100 win, you're assured of at most about $2,500 in lesser winnings, for a total loss of roughly $26,400. Notice, $4 back on $52 spent is about 8%, while $2,600 back on $29,000 spent is about 9%… the payoff doesn't meaningfully improve at attainable levels! Granted, there's a chance at a couple hundred million, but the probability of the bigger prize is still pretty well against you.
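Carried out in a few lines (again my own sketch, using the rounded odds quoted above):

```python
tickets = 14_494                    # one expected $100 win in this many tickets
cost = 2 * tickets                  # ~$29,000 at $2 per number
lesser = 4 * tickets * (1/38 + 1/91) + 7 * tickets * (1/700 + 1/580)
print(round(lesser))                # ~$2,483 in expected lesser prizes
print(cost - 100 - round(lesser))   # net loss: ~$26,400
```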

Suppose you are a big spender and you managed to rake up $29,000 in cash to dump into tickets: how likely is it that you win just the $1 million prize? That's five matched balls excluding the powerball. The probability is 1 in 11,688,053. By pushing the numbers, your odds of this prize have become 14,500/11,688,053, or about 1 chance in 800. Your odds are substantially improved, but 1 in 800 is still not a wonderful bet, despite the fact that you assured yourself a fourth-tier prize of $100! The grand prize is a much harder bet still, with odds running at about 1 in 20,000, despite the amount you just dropped on it. Do you just happen to have $30,000 burning a hole in your pocket? Lucky you! Lots of people live on that salary for a year.

Most of this is simple arithmetic, and I've been bandying about probabilities gleaned from the Powerball website. If you're as curious about it as I am, you might be wondering exactly how all those probabilities were calculated. I gave an example above of the mechanical calculation of the lowest-level probability, but I also went and figured out a pair of formulae that calculate any of the powerball prize probabilities. It reminded me a bit of stat mech…

Probability of matching exactly Y of the five main numbers while missing the powerball:

P_{\text{no PB}}(Y) = \frac{K-1}{K}\cdot\frac{X!}{Y!\,Z!}\cdot\prod_{n=0}^{Y-1}\frac{X-n}{N-n}\cdot\prod_{m=0}^{Z-1}\frac{M-m}{N-Y-m}

Probability of matching exactly Y main numbers and the powerball too:

P_{\text{PB}}(Y) = \frac{1}{K}\cdot\frac{X!}{Y!\,Z!}\cdot\prod_{n=0}^{Y-1}\frac{X-n}{N-n}\cdot\prod_{m=0}^{Z-1}\frac{M-m}{N-Y-m}

Average number of tries needed for one hit:

\text{tries per hit} = \frac{1}{P}

I've annotated the parts to make them a little clearer. The final relation just shows the number of tries needed, on average, to hit one success, given a probability calculated with either of the other two equations. The first equation differs from the second in that it refers to probabilities where you match numbers without managing to match the powerball, while the second is the complement, where you match numbers having also hit the powerball. Between these two equations, you can calculate all the probabilities for the powerball prizes.

Since probabilities were always hard for me, I'll try to explain the parts of these equations. If you're not familiar with the factorial operation, that's what is denoted by the exclamation point "!": a product string counting up from one to the number of the factorial… for example, 5! means 1x2x3x4x5, and the special case 0! should be read as 1. The leading fraction in each equation is the probability of either hitting or missing the powerball, where K = 26 is the number of balls in the powerball basket. The factor X!/(Y!Z!) is the multiplicity; it tells you how many ways you can draw a certain number of matches (Y) filling a number of open slots (X) while drawing a number of mismatches (Z) in the process, where X = Y + Z. In powerball you draw five balls, so X = 5, Y is the number of matches (anywhere from 0 to 5), and Z is the number of misses. Multiplicity shows up in stat mech and is intimately related to entropy. N = 69 is the number of possible choices in the main basket, and M = N − X = 64 is the number of main-basket balls that don't appear on your ticket. The last two pieces are the probabilities for the given number of hits (Y) and misses (Z), where I've applied the product operator to spiffy up the notation. The product operator is another iterand, much like the summation operator: you repeatedly multiply successive values, much like a factorial, but where each factor is produced from a set form over a given range. Here the small indices n and m start at zero and run up to the number on top of the operator (Y − 1 or Z − 1, inclusive). In the extreme cases of all hits or all misses, the relevant empty product (the Miss or Hit product, respectively) must be set equal to one so that it doesn't count.
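These two formulas are equivalent to the standard "hits and misses" counting with binomial coefficients, which makes them easy to check numerically. Here's a minimal Python sketch of my own (not from the original post) doing exactly that:

```python
from math import comb

N, X, K = 69, 5, 26  # main-basket balls, main balls drawn, powerball-basket balls

def prize_probability(hits, with_powerball):
    # Chance of matching exactly `hits` of the five main numbers:
    # ways to pick the hits times ways to pick the misses, over all possible draws.
    p_main = comb(X, hits) * comb(N - X, X - hits) / comb(N, X)
    # ...times the chance of hitting (1/K) or missing ((K-1)/K) the powerball.
    p_ball = 1 / K if with_powerball else (K - 1) / K
    return p_main * p_ball

print(1 / prize_probability(0, True))   # powerball only: ~38.32 tries per hit
print(1 / prize_probability(3, True))   # $100 tier: ~14,494
print(1 / prize_probability(5, False))  # $1M tier: ~11,688,053
print(1 / prize_probability(5, True))   # grand prize: ~292,201,338
```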

This is one of those rare situations where the American public does a probability experiment with the values all well recorded where it’s possible to see the outcomes. How hard is it to win the grand prize? Well, the odds are one in 292 million. Consider that the population of the United States is 323 million. That means that if everybody in the United States bought one powerball number, about one person would win.

Only one.

Thanks to the power of the media, everybody has the opportunity to know that somebody won. Or not. That this person exists, nobody wants to doubt, but consider that the odds of winning are so scant that not only will you not win, but you pretty likely will never meet anyone who did. Sort of surreal… everything is above board, you would think, but the event is so rare that you have no personal assurance it ever actually happens. You can suppose that maybe it does because people do win those dinky $4 prizes, but maybe that's just a red herring and nobody really actually wins! Those winner testimonials could be from actors!

Yeah, I'm not much of a conspiracy theorist, but it is true that a founding tenet of the idea of a 'limit' in math is that 99.99999% is effectively 100%. Going to the limit where the discrepancy is so small as to be infinitesimal is what calculus is all about. It is fair to say that winning very nearly never happens! Everybody wants to be the one who beats the odds, which is why Powerball tickets are sold, but the extraordinarily vast majority will never win anything useful… I say "useful" because winning $4 or $7 is always a net loss. You have to win one of the top three prizes for it to be anywhere near worth anything, which you likely never will.

One final fairly interesting feature of the probability is that you can make rough predictions about how frequently the grand prize is won based on how frequently the first prize is won. First prize is matching all five of the balls but not the powerball. This happens about once per 12 million numbers, which is about 25 times more likely than all 5 plus the powerball (a match-five ticket has a 1 in 26 chance of also carrying the right powerball). In the report on winnings, a typical frequency is about 2 to 3 first prize winners per drawing. About 1 time in 26, a person with all five manages to get the powerball too; so, with two drawings per week and about 2.5 first prize winners per drawing, that's five match-five winners per week… which implies that the grand prize should be won about once every five to six weeks –every month and a half or so. The average here will have a very large standard deviation because the number of winners is small, meaning that the error is an appreciable portion of the measurement, which is why there is a great deal of variation in the period between grand prize wins. The incidence is Poissonian and stochastic, which allows some prizes to get quite big compared to others and causes their values to disperse across a fairly broad range. Uncertainty tends to dominate, making the game a bit more exciting.
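As a back-of-the-envelope check of that period, under the assumptions just stated:

\text{period} \approx \frac{26\ \text{match-five wins per jackpot}}{2\ \text{drawings/week} \times 2.5\ \text{winners/drawing}} = 5.2\ \text{weeks}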

While the grand prize is small, the number of people winning the first prize in a given week is also small (maybe none or one), but this number grows with the size of the grand prize (to maybe 5 or 6, or as high as 9). When the prize grows large enough to catch the public consciousness, the likelihood that somebody will win goes up simply because more people are playing, and this can be witnessed in the fluctuating frequency of wins of the lower-level prizes. It breathes around a pulse of maybe 200 million dollars, lubbing at 40 million (maybe 0 to 1 person winning the first prize) and dubbing at 250 million (with 5 people or more winning the first prize).

Quite a story is told if you’re boring and as easily amused as me.

In my opinion, if you do feel inclined to play the game, be aware that when I say you probably won't win, I mean that the numbers are so strongly against you that you do not appreciably improve your odds by throwing down $100 or even $1,000. The little $4 wins do happen, but they never pay: $1,000 spent will likely not get you more than $100 in total winnings. It might as well be a voluntary tax. Cherish the dream your $2 buys, but do not stake your well-being on it. There's nothing wrong with dreaming as long as you understand where to wake up.

(edit 8-24-17)

There was a grand prize winner last night (Wednesday 8-23-17). The outcomes are almost completely as should be expected: the winner is in Massachusetts… the majority of the country's population is located in states on either the east or west coast, so this is unsurprising. There were 40 match-five winners, so you would anticipate at least one to be a grand prize winner, which is exactly what happened (1 in 26 difference between 5 with the powerball and 5 without). There were about 5.9 million powerball-only winners, so 38.32*5.9 million is about 226 million total powerball numbers sold in the run-up to last night's drawing… with grand prize odds of 1 in 292 million, this is approaching parity. This means that more than $452 million was spent since Saturday on powerball lottery numbers (the calculation excludes the extra dollar spent on multipliers). About five times as many ticket numbers were sold for this drawing as when I made my original analysis a week ago. With that many tickets sold, there was almost assuredly going to be a winner last night. This is not to say there shouldn't have been a winner before this –probability is a fickle mistress– but the numbers were such that it was unlikely, though not impossible, for the prize to keep growing. The last time the powerball was won was on 6-10-17, about two months and thirteen days earlier… you can tell that this is an unusually large jackpot because this period is longer than the usual period between wins (I had generously estimated six weeks based on a guess of 2 match-five winners per drawing, but I think even that might be a bit too high).

There was only one grand prize winning number out of 226 million tickets sold (not counting all the drawings that failed to yield a grand prize winner prior to this.) Think on that for a moment.

Revoke Shaquille’s Doctorate in Education… he doesn’t deserve it.

We are in a world where truth doesn’t matter.

Read this and weep. These men are apparently the authorities of truth in our world.

Everywhere you look, truth itself is under assault. It doesn't really matter what you believe; it really doesn't matter what you want the truth to say. Truth is not beholden to human whims. We can't ultimately change it by manipulating it with cellphone apps. We can't reinterpret it, even if we want to. One of these days, in however great an importance we hold ourselves, the truth will catch up. And we will deserve what happens to us after that point in time.

“It’s true. The Earth is flat. The Earth is flat. Yes, it is. Listen, there are three ways to manipulate the mind — what you read, what you see and what you hear. In school, first thing they teach us is, ‘Oh, Columbus discovered America,’ but when he got there, there were some fair-skinned people with the long hair smoking on the peace pipes. So, what does that tell you? Columbus didn’t discover America. So, listen, I drive from coast to coast, and this s*** is flat to me. I’m just saying. I drive from Florida to California all the time, and it’s flat to me. I do not go up and down at a 360-degree angle, and all that stuff about gravity, have you looked outside Atlanta lately and seen all these buildings? You mean to tell me that China is under us? China is under us? It’s not. The world is flat.”

This spoken by a man with a public platform and a Doctorate in Education. This is the paragon of teachers!

{Edit 3-20-17: since I'm thinking better about this now, I will rebut his meaningless points.

First, arguments about whether or not Columbus discovered America are a non sequitur as to whether or not the Earth is round.

Second, driving coast to coast can tell you very little about the overall roundness of the Earth, especially if you aren't paying attention to the things that can. The curvature of the Earth is extremely small: the surface drops away from a flat sight line by only about 8 inches over the first mile, with the drop growing quadratically with distance. On the scale of feet, the deviation is in thousandths of an inch, so you can't measure the world to be anything but flat at the dimensions a human being can meaningfully experience while standing directly on the surface. Can you see the drop –on the order of a thousand feet spread across a fifty-mile view, a sight-line angle of well under half a degree– looking off a skyscraper in the middle of Atlanta, or distinguish the roughly seventh-of-a-degree difference in the direction of 'up' between two skyscrapers separated by ten miles? You can't resolve tens of feet with your eyes at a distance of miles. That said, you actually can see Pikes Peak emerge over the horizon as you come out of Kansas into Colorado, but I suppose you would explain that away with some sort of giant conspiracy-theory elevator device. To actually see the curvature to a meaningful degree with your eyes, you need to be at an altitude of hundreds of thousands of feet above the surface… which you could actually do as somebody with ridiculous wealth.

Third, how would you know that China is not 'under'? How would you know where China isn't when you can't see that distance along a flat surface no matter which direction you look? Can you explain the phase factor your day picks up –the one that causes your damn jet lag– every time your wealthy, ignorant ass travels to places like China? By your logic, you should be able to use your colossal wealth to travel to where the globe of the sun pops out of the plane of the Earth in the east every morning. Hasn't it once occurred to you that if you're truly right, you should test your hypothesis before making an assertion that can so easily be shown to be wrong?}

You made a mint of money on the backs of a lot of people who made it possible for you to be internationally known, all because of the truths they determined for you! You do not respect them, you do not understand the depth of their efforts, and you do not know how hard they worked. You do not deserve the soapbox they built for you.

For everyone who values the truth, take a moment to share a little about it. Read other things in my blog to see what else I have to say. I have very little I can say right this second; I’m aghast and I feel the need to cry. My hard work is rendered essentially meaningless by morons like Shaquille O’Neal… men of no particular intellect or real skill dictating what reality ‘actually is’ while having no particular capacity to judge it for themselves.

From a time before cellphone apps and computer graphics manipulation, I leave you with one of the greatest pinnacles of truth ever to be achieved by the human species:

[Image: the Earth seen over the lunar horizon, from NASA's Lunar Reconnaissance Orbiter]

Like it or not, that’s Earth.

If you care to, I ask you to go and hug the scientist or engineer in your life. Tell them that you care about what they do and that you value their hard work. The flame of enlightenment kindled in our world is precious and at dire risk of guttering out.

Edit:

An open letter to the Shaq:

Dear Shaquille O’Neal,

I'm incredibly dismayed by your use of your public persona to endorse an intellectually bankrupt idea like flat earth conspiracy theories, particularly in light of your doctorate in education. If you are truly educated and value truth, you should know that holding this stance devalues the hard work of generations of physicists and engineers and jeopardizes the standing of actual scientific truth in the public arena. The purpose of an educator is to educate, not to misinform… the difference lies in whether you spread the truth or not.

There is so much evidence of the round earth available in the world around us without appeal to digital media –the cycle of the seasons; the scheduled passages of the moon and the planets; observations of Coriolis forces in weather patterns and simple ballistics; the capacity to jump in an airplane heading west and continue heading west until you get back to where you started; the passage of satellites and spacecraft visible from the surface of the Earth over our heads; the very existence of GPS on your goddamn smartphone; the common shapes of objects like the moon and planets visible through telescopes in the night sky– that appeals to flat earth conspiracies show a breathtaking lack of capacity to understand how the world fits together. That this comes from a figure who is ostensibly a force of truth –an educator– is deeply hurtful to those of us who developed that truth… modern scientists and engineers.

Since you are so profoundly wealthy, you among all people are singularly in a position to prove to yourself the roundness of our world. I bet you 50 million dollars –which I don't even have and would spend my entire life trying to repay– that you can rent an airliner with an honest pilot of your choice, fly west along a route also of your choice, and come back to the airport you originally departed from without any significant eastward travel. Heck, you can do the same exercise heading north or south if you want. And if that experiment isn't enough, use your celebrity to talk to Elon Musk: I hear he's selling tickets now to rich people for flights around the moon. I bet he would build you a specially sized two-person-converted-to-one berth in his Dragon capsule to give you a ride high enough to take a look for yourself at the shape of the world, if your eyes are the only thing you'll believe. If you lose, you pay a 49-million-dollar endowment to the University of Colorado Department of Physics for the support of physics education –and a million to me for the heartache you caused by making a mockery of my education and profession with your ill-gotten public soapbox and mindlessly open mouth. Moreover, if you lose, you relinquish your doctorate and make a public apology for standing for exactly the opposite of what that degree means.

Sincerely,

Foolish Physicist
of Poetry in Physics

Edit 4-5-17:

So, Shaq walked back his comments.

O’Neal: “The first part of the theory is, I’m joking, you idiots. That’s the first part of the theory. The second part is, I said jokingly that when I’m in my bus and I drive from Florida to California, which I do every summer, it seems to be flat. When I’m in my plane, and we’re getting ready to land, and I open up the window, and I’m looking at all the land that we’re flying over, it seems to be flat.”

“This world we live in, people take things too seriously, but I’m going to give the people answers to my test,” he said. “Knowing that I’m a funny guy, if something seems controversial or boom, boom, boom, you’ve got to have my funny points on, right? So now, once you have my funny points on, that should eradicate and get rid of all your negative thoughts, right? That’s what you should do when you hear a Shaquille O’Neal statement, OK? You should know that he has funny points right over here, and what did he say? Boom, boom, boom, add the funny points. You either laugh or you don’t laugh, but don’t take me seriously. When I want you to take me seriously, you will know by the tone of my voice that I’m being serious.”

“No, I don’t think that,” O’Neal told Harbinger of a flat Earth. “It was a joke, OK? So know that when Shaquille O’Neal says something, 80 percent of the time I’m being humorous, and it is a joke. And 20 percent of the time, I’m being serious, but when I’m being serious, you’ll know. You want to see me, seriously? See me and Charles Barkley going back and forth on TNT. That’s when I’m mad and when I’m serious. Other than that, you’re not going to get that out of me, so I was just joking people. The Earth is not round, it’s flat. I mean, the Earth is not flat, it’s round.”

One thing that should be added to these statements is this: there are people who are actively spreading misinformation about the state of the world, for instance that the earth is flat. The internet, YouTube, blogs, you name it, have given these people a soapbox that they would not otherwise have. Given that there is a blatant antiscientific streak in the United States which attacks accepted, settled science as a big cover-up designed to destroy the rights of the everyday man, it is the duty of scientists and educators to take the truth seriously. In a world where the theory of evolution, climatology and vaccine science are all actively politicized, we have to stand up for the truth.

Where real scientists are busy studying and doing their work, the antiscientific activists are solely about spreading their belief… they don't study, they don't question; they spend their time actively lobbying the government, appealing to legislators, and running for and getting onto school boards, where they have an opportunity to pick which books are presented to school districts –all the places where they can actively undercut what students are told about the truth of the world. They aren't spending their energy studying; they are spending it tinkering with the social mechanisms that provide our society with its next generation of scientists. As such, their efforts are directed at undercutting the mechanisms that preserve the truth rather than at evaluating the truth… as scientists do. These people can do huge damage to us all. Every screwball coming out of a diploma-mill "Quantum University" with a useless, unaccredited 'PhD', who goes off to promote woo-bong herbalist healthcare as an alternative to science-based medicine, does damage to us all by undercutting what it means to get healthcare and by putting crankery and quackery, in all seriousness, at the same level as scientific truth when there should be no comparison.

If everybody understood that there is no 'alternative' to the truth, joking about what is true would mean something totally different to me. But we live in a world where 'alternative facts' are a real thing and where everyone with a soapbox can say whatever they wish without fear of reprisal. Lying is a protected right! But someone has to stand up for truth. That someone should be scientists and educators. That should include an education doctorate like the Shaq. If he were an NBA numbskull without the doctorate, I would care less: Kyrie Irving is a joke. But he isn't; he's got a doctorate, and he has a responsibility to uphold what that degree means! Humor in irony can only work if it is clear that one is being ironic instead of serious… and that is never completely clear in this world.

Nuclear Toxins

A physicist from Lawrence Livermore Labs has been restoring old nuclear bomb detonation footage. This seems to me an incredibly valuable task, because all of the original footage was shot on film, which is currently decaying and falling apart. There have been no open-air nuclear bomb detonations on planet Earth since 1980, which is good… except that people are in the process of forgetting exactly how bad a nuclear weapon is. The effort of saving this footage makes it possible for people to know something about this world-changing technology that hadn't previously been declassified. Nukes are sort of mythical to a body like me, who wasn't even born until about the time that testing went underground: to everybody younger than me, I suspect, nukes are an old-people thing, a less important weapon than computers. That Lawrence Livermore Labs has posted this footage to YouTube is an amazing public service, I think.

As I was reading an article on Gizmodo about this piece of news, I happened to wander into the comment threads to see what the echo chamber had to say about it all. I should know better. Admittedly, I didn't post any comments castigating anyone, but there was a particular comment that got me thinking… and calculating.

Here is the comment:

Nuclear explosions produce radioactive substances that are rare in nature — like carbon-14, a radioactive form of the carbon atom that forms the chemical basis of all life on earth.

Once released into the atmosphere, carbon-14 enters the food chain and gets bound up in the cells of most living things. There’s still enough floating around for researchers to detect in the DNA of humans born in 2016. If you’re reading this, it’s inside you.

This is fear mongering. If you've never seen fear mongering before, this is what it looks like. The comment is intended to deliberately inspire fear, not just of nuclear weapons, but of the prospect of radionuclides present in the environment. The last sentence is pure body terror. Dear godz, the radionuclides, they're inside me and there's no way to clean them out! I thought for a time about responding to this comment. I decided not to, because there is enough truth here that anyone should probably stop and think about it.

For anyone curious, the wikipedia article on the subject has some nice details and seems thorough.

It is true that C-14 is fairly rare in nature: the natural abundance is about 1 part per trillion of carbon. It is also true that the atmospheric test detonations of nuclear bombs created a spike in the C-14 present in the environment. And, while C-14 is rare, it is not technically unnatural, since it is continuously formed by cosmic rays impinging on the upper atmosphere. For the astute reader, cosmic-ray-produced C-14 forms the basis of radiocarbon dating: C-14 is present at a particular known proportion in living things right up until you die and stop taking it up from the environment –a scientist can then determine when living matter died based on the radioactive decay curve of C-14.

Since it's not unnatural, the real question here is whether the spike of radionuclides created by nuclear testing significantly increases the health hazard posed by C-14 above and beyond what it would normally be. You have it in your body anyway; is there a greater hazard due to the extra amount released? This puzzle is somewhat intriguing to me because I worked for a time with radionuclides, and the amount of protective equipment and the safety measures required are kind of chilling. The risk is a non-trivial one.

But, what is the real risk? Does having a detectable amount of radionuclide in your body that can be ascribed to atomic air tests constitute an increased health threat?

To begin with, what is the health threat? For the particular case of C-14, one of a handful of radionuclides that can be incorporated into your normal body structures, the health threat comes from the radioactivity of the atom. C-14 is a beta emitter: one of the neutrons in the atom's nucleus converts into a proton by giving off an electron and a neutrino, turning the carbon into nitrogen. The neutrino basically doesn't interact with anything, but the emitted electron can carry energies of up to 156 keV (about 2.4x10^-14 Joules). This does damage to the human body by two routes: by direct collision of the emitted electron with the molecules of the body, or –if the C-14 was already part of your body– by a structurally important carbon atom converting into a nitrogen atom during the decay. Obviously, if a carbon atom suddenly turns into nitrogen, that's conducive to unplanned organic chemistry, since nitrogen can't maintain the same number of valence interactions as carbon without taking on a charge. So energy deposition by particle collision, or spontaneous chemistry, is the potential source of the health threat.

In normal terms, the carbon-to-nitrogen chemistry route for damage is not accounted for in radiation health effects, simply because of how radiation is usually encountered: you need a lot of radiation to have a health effect, and it usually comes from an exogenous source –a radiation source outside the body rather than incorporated into it, like endogenous C-14– much like the UV radiation which causes a sunburn. Health effects due to radiation exposure are measured by a dose unit called the 'rem.' A rem expresses an amount of radiation energy deposited into body mass: 1 rem is equal to 1.0x10^-5 Joules of radiation energy deposited into 1 gram of body mass. Here is a table giving the general scale of rem doses that cause health effects. People who work around radiation as part of their job are limited to a full-body yearly dose of 5 rem, while the general public is limited to 0.1 rem per year. Everybody is expected to receive an environmental radiation dose of about 0.3 rem per year, and there's an allowance of 0.05 rem per year for medical X-rays. It's noteworthy that not all radiation doses are created equal and that the target body tissue matters; this is manifest in the different doses allowed for the eyes (15 rem) and the extremities, like the skin (50 rem). A sunburn is like a dose of 100 to 600 rem to the skin.

What part of an organism must the damage affect in order to cause a health problem? Really, only one part is truly significant, and that's your DNA. Easy to guess. Pretty much everything else is replaceable, to the extent that even a single cell dying from critical damage is totally expendable in the context of an organism built of trillions of cells. The problem of C-14 located directly in your DNA is numerically a rather minor one: DNA only accounts for about 3% of the dry mass of your cells, meaning that only about 3% of the C-14 incorporated into your body is directly incorporated into your DNA, so most of the damage to your DNA is due to C-14 not directly incorporated in that molecule. This is not to say that chemistry doesn't cause the damage, merely that most of the chemical damage is probably due to energy deposition in molecules around the DNA, which then react with the DNA, say by generation of superoxides or similar pathways. This may surprise you, but DNA damage isn't always an all-or-nothing proposition either: to an extent, the cell has machinery which is able to repair damaged DNA… the bacterium Deinococcus radiodurans repairs its DNA so efficiently that it is able to subsist indefinitely inside a nuclear reactor. Humans have some repair mechanisms as well.

Human cells have roughly two levels of response to radiation damage. For minor damage, the cell repairs its DNA. If the DNA damage is too great to fix, a mechanism triggers in the cell to cause it to commit suicide. You can see the effect of this in a sunburn: critically radiation-damaged skin cells commit suicide en masse in the substratum of your skin, ultimately sacrificing the structural integrity of your skin and causing the external layer to slough off. This is why your skin peels after a sunburn. If the damage is somewhere in between, matters are a little murkier… your immune system has ways of tracking down damaged cells and destroying them, but those screwed-up cells sometimes slip through the cracks to cause serious disease. Inevitably cancer. Effects like these emerge at ~20 rem full-body doses. People love to worry about superpowers and three-arm, three-eye type heritable mutations due to radiation exposure, but congenital mutations are a less frequent outcome simply because your gonads are such a small proportion of your body; you're more likely to have other things screwed up first.

One important thing to notice in all of this is that to start having serious health effects clearly ascribable to radiation damage, you must absorb a dose of greater than about 5 rem.

Now, what kind of a radiation dose do you acquire on a yearly basis from body-incorporated C-14 and how much did that dose change in people due to atmospheric nuclear testing?

I did my calculations for a 70 kg person (which is 154 lbs). I also converted rem into a more easily used physical quantity, Joules per gram (1 rem = 1x10^-5 J/g, see above). One rem of yearly exposure for a 70 kg person works out to an absorbed dose of 0.7 J/year; an exposure of 5 rem is then 3.5 J/year, while 20 rem is 14 J/year. Beta electrons from C-14 hit with at most 2.4x10^-14 J per strike (~156 keV) and about 0.8x10^-14 J per hit on average (~50 keV).
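Written out for the public limit, as an example of the conversion:

0.1\ \text{rem/year} \times 1\times10^{-5}\ \tfrac{\text{J}}{\text{g}\cdot\text{rem}} \times 70{,}000\ \text{g} = 0.07\ \text{J/year}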

In the following part of the calculation, I use radioactive decay and the half-life to determine the rate of energy transfer to the human body, on the assumption that all the beta-electron energy emitted is absorbed by the body. Radioactive decay is purely probabilistic: the likelihood of seeing an emitted electron is proportional to the size of the radioactive atom population. The differential equation is a simple one and looks like this:

\frac{dN}{dt} = -kN

This just means that the decay rate (and therefore the electron production rate) is proportional to the size of the decaying population, where k is a rate constant that can be determined from the half-life. The decay differential equation is solved by the following function:

N(t) = N_0\, e^{-kt}

This is just a simple exponential decay, which takes an initial population of objects and reduces it over time. You can solve for the decay constant by plugging the half-life in for the time and asserting that half the original quantity remains at that point. The exponential rearranges to give the decay constant:

\frac{1}{2} = e^{-k\tau} \quad\Longrightarrow\quad k = \frac{\ln 2}{\tau}

Here, tau is the half-life in seconds (I could have worked in years, but I'm pretty thoroughly trained to stick with SI units), and I've already substituted 1/2 for the population change. With k from the half-life, I just need the population of radiation emitters present in the body to get the rate given in the first equation above… I simply multiply k by N.

To do this calculation: the half-life of C-14 is known to be 5,730 years, which I converted into seconds (ick; if I only care about years, next time I'll just calculate in years). This gives a decay constant of 3.836x10^-12 per second. In order to get the decay rate, I also need the population of C-14 emitters present in the human body. C-14 has a natural prevalence of about 1 part per trillion, and a 70 kg human body contains about 16 kg of carbon (after a little Google searching), which gives 1.6x10^-8 g of C-14. With C-14's molar mass of 14 g/mole and Avogadro's number, that's about 6.88x10^14 C-14 atoms in a 154 lb person. This population, together with the rate constant, gives the decay rate from the first equation above: 2.639x10^3 decays per second. The energy per absorbed beta electron times the decay rate gives the rate of energy deposited into the body, on the assumption that all the beta-decay energy is absorbed by the target: 2.639x10^3 decays/sec * 2.4x10^-14 Joules/decay = 6.33x10^-11 J/s. Over the course of an entire year, that works out to about 0.002 Joules/year.
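For anyone who wants to check the arithmetic, here's the whole chain in a few lines of Python (my own script; the inputs are the same figures assumed in the text):

```python
import math

half_life_s = 5730 * 3.156e7       # C-14 half-life: 5,730 years in seconds
k = math.log(2) / half_life_s      # decay constant: ~3.84e-12 per second

c14_grams = 16_000 * 1e-12         # 16 kg of carbon at 1 part per trillion C-14
n_atoms = c14_grams / 14 * 6.022e23   # ~6.88e14 C-14 atoms in a 70 kg person

rate = k * n_atoms                 # ~2.6e3 decays per second
e_beta_max = 2.4e-14               # J per beta electron (maximum energy case)
dose_j_per_year = rate * e_beta_max * 3.156e7
print(dose_j_per_year)             # ~0.002 J/year, versus the 0.07 J/year public limit
```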

This gets me to a place where I can start making comparisons. The exposure limit for any old member of the general public to 'artificial' radiation is 0.1 rem, or 0.07 J/year. The maximum… maximum… contribution due to endogenous C-14 is 35 times smaller than the allowed public exposure limit (using the mean beta energy, it's more like 100 times smaller). On average, endogenous C-14 delivers about 1/100th of the permitted artificial radiation dose.

But I've actually fudged here. Note that I said above that humans normally get a yearly environmental radiation dose of about 0.3 rem (0.21 J/year)… meaning that endogenous C-14 provides only about 1/300th of your natural dose. The radiation sources you encounter on a daily basis deliver an exposure roughly 300 times stronger than the C-14 directly incorporated into the structure of your body. And keep in mind that this is all way below the 5 rem where health effects due to radiation exposure begin to emerge.

How does C-14 produced by atmospheric nuclear testing figure into all of this?

The wikipedia article I cited above has a nice histogram of the changes in environmental C-14 levels due to atmospheric nuclear testing. At the time of such testing, C-14 prevalence in the environment spiked by about 2-fold and has decayed over the intervening years to less than 1.1-fold. For your C-14 exposure specifically, this changes the contribution from about 1/300th of your natural dose to about 1/150th –roughly half a percent of your total dose– which then tapers to less than a tenth of a percent above natural prevalence in under fifty years. Detectable? Yes. Significant? No. Responsible for health effects? Not above the noise!

This is not to say that a nuclear war wouldn't be bad. It would be very bad. But don't exaggerate environmental toxins. We have radionuclides present in our bodies no matter what, and the ones put there by 1950s nuclear testing were only a negligible addition, even at the time –what's 100% next to 100.5%? A big nuclear war might be much worse than this, but this is basically a forgettable amount of radiation.

For anybody who is worried about environmental radiation, I draw your attention back to a really simple fact:

[Image: stock photo of a badly sunburned woman]

The woman depicted in the picture above has received a 100 to 600 rem dose of very (very, very) soft "X-rays" by deliberately sitting out in front of a nuclear furnace. You can even see the nuclear shadow on her back left by her scant clothing. Do you think I'm kidding? UV light is lower energy than X-rays, but not by that much –about 3 eV versus maybe 500 eV– and it is ionizing radiation, absorbed directly by skin DNA to produce real radiation damage, which your body treats indistinguishably from damage due to the particle radiation of radionuclides or X-rays or gamma rays. The dose which produced this effect is something like two to twelve times higher than the federally permitted dose that radiation workers are allowed to receive in their skin over the course of an entire year… and she did it to herself deliberately in a matter of hours!

Here's a hint: don't worry about the bogeyman under the bed when what you just happily did to yourself over the weekend among friends is much, much worse.

What is a qubit?

I was trolling around in the comments of a news article on Yahoo the other day. What I saw there has stuck with me and I've decided I should write about it. The article in question, which may have come from an outfit other than Yahoo itself, was about the recent decision by IBM to direct a division of people toward the task of learning how to program a quantum computer.

Using the word 'quantum' in the title of a news article is a surefire way to incite click-bait. People flock in awe to quantum-ness even if they don't understand what the hell they're reading. This article was a prime example. All the article really said was that IBM has decided quantum computers are now a promising enough technology that they're going to devote themselves to figuring out how to compute with them. The article spent a lot of time kind of masturbating over how marvelous quantum computers will be, but it didn't actually say anything new. Another tech company deciding to pretend to be in quantum computing by figuring out how to program an imaginary computer is not an advance in our technology… digital quantum computers are generally agreed to be at least a few years off yet, and they've been a few years off for a while now. There's no guarantee that the technology will suddenly emerge into the mainstream. (I'm neglecting the D-Wave machine because it is generally agreed among experts that D-Wave hasn't even managed to prove that their qubits remain coherent through a calculation, as a useful quantum computer requires, let alone that they achieved anything at all by scaling it up.)

The title of this article was a prime example of media quantum click-bait. It boldly declared that "IBM is planning to build a quantum computer millions of times faster than a normal computer." That title was based on an extrapolation in the midst of the article where a quantum computer containing a mere 1,000 qubits suddenly becomes the fastest computing machine imaginable. We're used to computers that contain gigabytes of RAM, which amounts to several billion on-off switches on a chip, so a mere 1,000 qubits seems like a tiny number. This should be weighed against the concern in the physics community that an array of even 100 entangled qubits may exceed what's physically achievable, and it neglects that the difficulty of dealing with entangled systems increases exponentially with the number of qubits to be entangled. Scaling up normal bits doesn't bump into the same difficulty. I don't know whether it's physically possible or not, but I am aware that IBM's declaration isn't a major breakthrough so much as splashing around a bit of tech gism to keep the stockholders happy. All the article really said was that IBM has happily decided to hop on the quantum train because that seems to be the thing to do right now.

I really should understand that trolling around in the comments on such articles is a lost cause. There are so many misconceptions about quantum mechanics running around in popular culture that there’s almost no hope of finding the truth in such threads.

All this background gets us to what I was hoping to talk about. One big misconception that seemed common among commenters on this article is that two identical things in two places actually constitute one thing magically present in two places. This may stem from a conflation of what a wave function is with what a qubit is, and from a misunderstanding of the information that can be encoded in a qubit.

In a normal computer, pretty much every calculation is built around representing numbers in binary: a digital switch has two positions, and we say that one position is 0 and the other is 1. An array of two such switches can produce four distinct states: written in binary, the on-off settings are 00, 01, 10 and 11. You could easily map those four settings to mean 1, 2, 3 and 4.
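In code, that register and mapping amount to one loop (a trivial sketch):

# The four settings of a two-bit register, mapped onto 1..4.
for value, bits in enumerate(['00', '01', '10', '11'], start=1):
    print(bits, '->', value)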

Suppose we switch now to talk about a quantum computer where the array is not bits anymore, but qubits. A very common qubit to talk about is the spin of an atom or an electron. Such a particle can be in two spin states: spin-up and spin-down. We could easily map spin-up to 1, calling it 'on,' and spin-down to 0, or 'off.' For two qubits, we then get the same 00, 01, 10 and 11 states we had before, where we know which state each qubit is in, but we can also invoke entanglement. Entanglement is a situation where we create a wave function that contains multiple distinct particles at once, such that the states those particles occupy are interdependent, based upon what we can't know about the system as a whole. The two particles remain separate objects, but both are present in the one wave function. For two spin-up/spin-down type particles, this gives access to the so-called singlet and triplet states in addition to the normal binary states that the usual digital register can explore.

The quantum mechanics works like this. For a spin-up/spin-down system, the usual way to look at it is in increments of spin angular momentum: spin-up is 1/2 unit of angular momentum pointed up, while spin-down is -1/2 unit of angular momentum, pointed the opposite direction because of the negative sign. For the entangled system of two such particles, you can get three different values of total angular momentum projection: 1, 0 and -1. Spin 1 has both spins pointing up, but not 'observed,' meaning that it is completely degenerate with the 11 state of the digital register since it can't fall into anything but 11 when the wave function collapses. Spin -1 is the same way: both spins are down, meaning they have 100% probability of dropping into 00. The spin 0 state, on the other hand, is kind of screwy, and this is where the extra information-encoding space of quantum computing emerges. The 0 state could be the symmetric combination of spin-up with spin-down or the anti-symmetric combination of the same thing. These are distinct states, meaning that the size of your register just expanded from (00, 01, 10, 11) to (00, 01, 10, 11, plus anti-symmetric 10-01 and symmetric 10+01). So, the two-qubit register can encode 6 possible values instead of just 4. I'm still trying to decide whether the spin 1 and -1 states could be considered different from 11 and 00, but I don't think they can, since they lack the indeterminacy present in the two spin 0 states. I'm also somewhat uncertain whether you get two extra states for a register capacity of 6, or just one for a capacity of 5, since I don't know what the field has to say about the practicality of determining the phase constant between the two mixed spin-up/spin-down eigenstates; measuring that phase is the only way to tell the symmetric and anti-symmetric combinations apart.
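For concreteness, here's a minimal numpy sketch of those symmetric and anti-symmetric combinations. The labeling of spin-up as 1 and spin-down as 0 follows the mapping above; everything else is standard linear algebra, not anything specific to a real device:

import numpy as np

# Single-spin basis states: spin-up plays the role of '1', spin-down of '0'.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Two-spin product states via the Kronecker (tensor) product.
ud = np.kron(up, down)    # the '10' state
du = np.kron(down, up)    # the '01' state

# The two spin-0-projection combinations discussed above.
sym = (ud + du) / np.sqrt(2)       # symmetric: 10+01
antisym = (ud - du) / np.sqrt(2)   # anti-symmetric: 10-01

print(np.dot(sym, antisym))   # 0.0 -> genuinely distinct (orthogonal) states
print(np.dot(sym, sym))       # 1.0 -> properly normalized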

As I was writing here, I realized that I also made a mistake of my own in interpreting the qubit in my comment last night. At the very unentangled minimum, an array of two qubits contains the same number of states as an array of two normal bits. If I consider only the states available to entangled qubits, without the phase constant between 10+01 and 10-01, that gives only three states, or at most four with the phase constant. I wrote my comment without including the four purely unentangled cases, giving fewer total states accessible to the device, or at most the same number.

Now, the thing that makes this incredibly special is that the number of extra states available to a register of qubits grows exponentially with the number of qubits in the register. This means that a register of 10 qubits can encode many more numbers than a register of 10 bits! Further, this means fewer qubits can be used to make much bigger calculations, which ultimately translates to a much faster computer if the speed of turning over the register is comparable to that of a more conventional computer (which is actually somewhat doubtful, since a quantum computer may need to repeat a calculation many times in order to build up quantum statistics).
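The scaling itself is easy to state in a sketch; the point is just that the dimension of the state space, not the number of switches, is what grows as 2^n:

# n classical bits have 2**n settings but hold only one at a time;
# n qubits span a 2**n-dimensional space of superpositions of those settings.
for n in (2, 10, 100):
    print(f"{n} bits: {2**n} settings | {n} qubits: 2**{n}-dimensional state space")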

One of the big things that is limiting the size of quantum computers at this point is maintaining coherence. Maintaining coherence is very difficult and proving that the computer maintains all the entanglements that you create 100% of the time is exceptionally non-trivial. This comes back to the old cat-in-the-box difficulty of truly isolating the quantum system from the rest of the universe. And, it becomes more non-trivial the more qubits you include. I saw a seminar recently where the presenting professor was expressing optimism about creating a register of 100 Josephson junction type qubits, but was forced to admit that he didn’t know for sure whether it would work because of the difficulties that emerge in trying to maintain coherence across a register of that size.

I personally think it likely that we’ll have real digital quantum computers in the relatively near future, but I think the jury is still out as to exactly how powerful they’ll be when compared to conventional computers. There are simply too many variables yet which could influence the power and speed of a quantum computer in meaningful ways.

Coming back to my outrage at reading the comments in that thread, I'm still at 'dear god.' Quantum computers do not work by teleportation: they have no way of magically putting a single object in multiple places. The structure of a wave function is defined simply by what you consider to be a collection of objects that are simultaneously isolated from the rest of the universe at a given time. A wave function quite easily spans many objects at once, since it is merely a statistical description of the disposition of that system as seen from the outside, and nothing more. It is not exactly a 'thing' in and of itself, inasmuch as collections of indescribably simple objects tend to behave in absolutely consistent ways among themselves. Where it becomes wave-like and weird is that there are definable limits to how precisely we can understand what's going on at this basic level, and our inability to directly 'interact' with that level more or less assures that we can't ever know everything about it or how it behaves. Quantum mechanics follows from there. It really is all about what's knowable; building a situation where certain things are selectively knowable is what it means to build a quantum computer.

That’s admittedly pretty weird if you stop and think about it, but not crazy or magical in that wide-eyed new agey smack-babbling way.

Calculating Molarity part 2: Vaccine structure

I've continued to think about this post at Respectful Insolence. You may already have read my previous post on the subject. I had a short conversation with Orac by email about that post; he asked me what I thought about the alterations he made after considering my objections. One thing I answered, which I thought he might add, has stuck with me, and I think it's worthy of a post of its own. What do you know, two posts in one week! This one may not be tremendously long, but it's important and it bolsters the thesis of that post at Respectful Insolence. His post is about showing that the contamination is minimal; this is true, but I would modify it by saying that you have to know what you're looking at before you claim it's a problem.

My previous writing here was directed at my fellow skeptics and could be used by antivaccine advocates to attack people whose efforts I normally support. I would rather my efforts be focused on the greater good: namely, supporting vaccines. I don't write often about my specific research expertise, but I'm mainly a soft matter researcher and I have a great deal of experience with colloids, nanoparticles and liquid crystals. The paper they're talking about is my cup of tea! More than that, I've spent time at the university electron microscopy lab using SEM and elemental analysis in the form of EDS, shooting electron beams at precipitates obtained from colloidal suspensions.

I feel that the strategy of showing that vaccine contaminants are extraordinarily minor, and not nearly as large as the antivaccine efforts claim, is a good effort, but it might be the wrong strategy for tackling this science, particularly when the math gets screwed up. Part of my reason for feeling this way is that the argument hinges on the existence, or not, of particulate objects in the preparations that the antivaxxers are examining. The paper that Orac (and, in a quotation, Skeptical Raptor) is looking at focuses on the spurious occurrence of a small particle content revealed in vaccine samples under SEM examination. The antivaxxers are counting and reporting particles found by SEM, for which they report highly variable counts: very few in some samples, many in others. They also report instances where EDS shows unexpected metal content, like gold and others. Here, Orac notes that the particles are typically so few that they should be considered negligible, and that's fair… the question is, what is the nature of these particles? And should we take the antivaxxer EDS results seriously? It seems poor form to criticize my fellow skeptics and not turn my attention to the subject they are analyzing; to allay my own conscience, I have to open my mouth! I therefore spent a bit of time looking at the paper they were analyzing, "New Quality-Control Investigations on Vaccines: Micro- and Nanocontamination." I won't link to it directly because I have no respect for it.

I’ll deal with the EDS first.

[Image: schematic of the EDS process]

This picture is from https://s32.postimg.org/yryuggo1x/EDSschematic.gif

EDS is another spectroscopy technique, sometimes described as electron-induced X-ray fluorescence. You shoot an electron beam (or X-ray) at a sample with the deliberate intent of knocking a deep orbital electron out of an atom. An electron from a higher shell then drops down into the vacant orbital and emits an X-ray at the transition energy between the two orbitals, and the spectrometer detects the emitted X-rays. Because elements differ in their transition energies, thanks to the differing depths of their shells, you can identify an element by the X-ray frequencies it emits. A precondition for seeing this X-ray spectrum is that the impinging electron beam must have enough energy to knock a deep shell electron up into the continuum, ionizing the atom, and that energy can be considerable. There is also a confounder: many elements have EDS peaks at fairly similar energies, meaning that it can be hard to distinguish them.
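To get a rough feel for where those peaks sit, Moseley's law estimates the K-alpha energies as E ≈ 10.2 eV × (Z-1)^2. This is only a sketch (real work uses tabulated lines like the JEOL chart below, and heavy elements like gold are usually called from their L and M lines, since their K lines sit at tens of keV), but it shows how crowded the low-keV region gets:

# Rough K-alpha energies from Moseley's law: E ~ 10.2 eV * (Z - 1)^2.
elements = {'C': 6, 'Al': 13, 'Si': 14, 'P': 15, 'Ti': 22}

for symbol, Z in elements.items():
    E_keV = 10.2e-3 * (Z - 1)**2
    print(symbol, round(E_keV, 2), 'keV')

# Al (~1.5 keV) and Si (~1.7 keV) sit close together, and the P K-alpha line
# (~2.0 keV) lands near tabulated Au M and Zr L lines (~2.1 keV): exactly the
# kind of single-peak ambiguity discussed below.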

Here is a periodic table containing EDS peaks from JEOL:

[Image: periodic table of EDS emission energies, from JEOL]

Now, when you perform SEM, you spread your sample onto a conductive substrate and observe it under a fair vacuum. To generate an SEM image, the electron beam is rastered point-by-point across an area of the sample and an off-angle detector collects the electron scatter. You're literally trying to puff electrons up into the space over the sample by bombarding the surface. The substrate is usually conductive in order to replenish the ejected electrons. The direction the ejection puff travels depends on the topography of the surface, and the off-angle positioning of the detector means that surfaces facing the detector give bright puffs while surfaces facing away do not. This gives SEM images their dimensionality. Many SEM samples are sputtered with a layer of gold to improve contrast by introducing an electron-dense material, but a system intended for EDS would be directed at naked samples. With SEM, you always have to remember that the electron beam is intrinsically erosive and damaging. The beam doesn't just bounce off the surface; it penetrates into the sample to a depth I've heard called the interaction volume. The interaction volume is regulated by the accelerating voltage of the electron beam: higher accelerating voltages mean deeper interaction volumes. Crisp SEM images that show clear surface features are usually obtained at low accelerating voltages, which limit the interaction volume to the surface features of the sample. SEM images obtained at higher accelerating voltages take on a sort of translucent cast because the beam penetrates into the sample and interacts with an interior volume.

The combination of EDS with SEM is a little tricky. In SEM, EDS gains its excitation from the system's imaging electron beam. What makes this tricky is that samples like the protein antigens in a vaccine are predominantly carbon and have low electron density, making them low contrast: you hit the sample at low accelerating voltages to see surface features. To do EDS, you must hit the sample with electrons at energies sufficient to eject deep orbital electrons. The required energy depends on the depth of the atom's potential and on which electron is ejected, but atoms like gold have much deeper orbitals than atoms like carbon, meaning larger energies are needed to excite the deeper gold transitions. Energies favorable to SEM imaging can be very low compared to the energies needed to reach the EDS ejection thresholds. When you switch from imaging to EDS, you must be aware that you're gaining a deeper penetration depth from the larger interaction volume of the beam. If your sample is thin and has low electron density, like carbonaceous biological molecules, you can easily be shooting through the sample and hitting the substrate, whatever that might be.
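To put numbers on that penetration depth, the Kanaya-Okayama formula is a standard back-of-envelope estimate of the interaction volume's depth. The carbon-like A, Z and density values below are my own rough stand-ins for a protein film, so treat the output as order-of-magnitude only:

# Kanaya-Okayama electron range: R[um] ~ 0.0276 * A * E^1.67 / (Z^0.89 * rho),
# with A in g/mol, E in keV, and rho in g/cm^3.
def ko_range_um(E_keV, A=12.0, Z=6, rho=1.3):
    return 0.0276 * A * E_keV**1.67 / (Z**0.89 * rho)

for E in (1, 5, 30):
    print(E, 'keV:', round(ko_range_um(E), 2), 'um')

# ~0.05 um at 1 keV versus ~15 um at 30 keV: a 30 kV beam goes far past any
# thin biological film and deep into whatever sits underneath it.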

This can be a serious confounder because you don't necessarily know where your signal is coming from. In the article commented on by Orac, the authors mention that they're using an aluminum stub as an SEM mount, but they also talk about detecting aluminum hydroxide and aluminum phosphate. The EDS signal is sensitive only to the aluminum atoms: you can't know whether it comes from the mount or the sample! And how do they know the phosphate signal isn't from phosphate buffered saline, a common medical buffer that shows up in vaccine preparations? From EDS and SEM alone, you can't know that the material you're looking at is aluminum phosphate.

As I mentioned, you also have to contend with the close spacing of EDS peaks: if you look at that periodic table linked above, there's a lot of overlap. To call gold for certain, you really need to hit a couple of its EDS peaks to make sure you aren't misreading the signal (every peak you get has a Gaussian width, meaning a broad signal can cover a number of candidate peaks). And, at least in the figure presented by Orac, they're making their calls based on single-peak identifications. This is in addition to the other potential confounders Orac brought up: exogenous grit and the possibility that they're reusing their SEM stub for other experiments. How can they be certain they aren't getting spurious signals?

For EDS, I would be careful about making calls without some means of independent analysis… like knowing what materials are supposed to be present, and possibly hiring out elemental analysis of the sample. Will the gold or zirconium appear in the second analysis? Remember, science depends on being able to reproduce a result… if a signal was spurious, a good tell is not being able to make it dance the second time around! Reporting everything doesn't mean that you know what you're looking at. When I was doing EDS more routinely, I had a devil of a time hitting titanium over silicon and gold signals… I knew titanium was present because I put it there, but I had trouble hitting it or ascribing it to specific particles in the SEM image. The EDS would not routinely let me reproduce an observation before the sample simply exploded under the high energy electrons I was pounding into it.

Referring directly to the crank paper myself, I note that they make some extremely complicated mineral calls in their tables from the EDS data. Again, EDS is sensitive only to the elements present: you can't know whether an aluminum signal is aluminum phosphate, aluminum hydroxide, or aluminum from the SEM stub. To call a mineral, you need precise elemental ratios, X-ray diffraction, or maybe Raman analysis of the mineral's crystal lattice.

From their SEM imagery, it looks to me like they're using a very strong voltage, which is confirmed in their methods section: they claim to be using voltages between 10 kV and 30 kV. These are very high voltages. For good surface resolution of a proteinaceous sample, I restricted myself to around 1 kV to 5 kV, sometimes below 1 kV, and found that I was cutting holes through the specimen at much higher voltages than that. Let me quote a piece of their methods for sample mounting:

A drop of about 20 microliter of vaccine is released from
the syringe on a 25-mm-diameter cellulose filter (Millipore,
USA), inside a flow cabinet. The filter is then deposited on an
Aluminum stub covered with an adhesive carbon disc.

They put a cellulose filter from Millipore into this SEM. I would have dried the sample directly onto a clean silicon substrate. Here are the appropriate specimen mounts from Ted Pella. Note that the specimen mounts are not cellulose. Cellulose filters are used for a completely different purpose than SEM specimen mounts, and, really importantly, you can't efficiently clean a cellulose filter before putting your sample onto it. Since these filters are actually designed to collect dust and grit as part of their function, it's genuinely difficult to get crap off of them. Without a control showing that their filters are clean of dust, there's no way to be certain that this article isn't actually a long survey of the dust and foreign crap impregnating cellulose filters, since the SEM acceleration voltages are unquestionably high enough to cut through a thin, low contrast biological layer on top.

I won’t say more about the EDS.

I also wanted to address the particulate discussion a bit more directly.

First off, from the paper directly, there is no real effort at reproduction or control. The source of the particles they report could be the carbon adhesive, the cellulose membrane or the vaccine sample. Having thought about it, I personally would bet on the cellulose: you don't use those filters this way! They claim to be making preparations in a flow hood to keep dust out, but that doesn't mean the dust isn't already on the components being brought into the hood.

I stand by my original criticism of Orac's post that these particles can't be meaningfully quantified by molarity: those shown in the paper are all clearly micron-scale objects, meaning that each has a relatively large mass in and of itself and constitutes a significant quantity of material. A better concentration unit for describing them would be mg/mL. I repeat: we don't know the source of these objects for certain because the experiment was performed without true replication! If the vaccines were the source, the authors should have been able to filter a vaccine specimen through a 0.22 um or 0.1 um filter and show that this drastically reduces the contamination, because many of their micrographs are of objects that could not have passed through such a filter… but they did no such comparison experiment.

As I've been thinking about it, there are a couple of different particle sources that could be observed under these conditions. The first is dust, as already detailed. The second is the vaccine components themselves, from a non-contamination perspective. Orac used a quote by Skeptical Raptor, who was rebutting the idea of aluminum hydroxide being a major contaminant by again mistaking particles for molecules. I won't get into his difficulty calculating concentration, since it was similar to Orac's, but he spoke about aluminum hydroxide as a chemical present at a tiny fraction of a nanogram in a vaccine, and therefore at much less than environmental exposure to aluminum. I know I probably annoyed Orac with my thoughts about this as I was thinking out loud, but aluminum hydroxide is not any sort of contaminant in the Cervarix vaccine friend Raptor was talking about: it's the adjuvant! Here's a product insert for a Cervarix vaccine.

[Document: Cervarix product insert]

In this vaccine, I found that there is approximately 500 ug of aluminum hydroxide adjuvant added per 0.5 mL dose. If you look in the aluminum hydroxide MSDS, there is no LD50 for this compound, no carcinogen warnings and no other special health precautions from chronic exposure (it irritates your eyes on contact, but what doesn't?). It got a 1 as a chemical hazard. Antivaxxers are crazy about being anti-aluminum based on decades-old information that has since been rebutted, but for all intents and purposes this material is pretty harmless. One special thing about it is that it's very insoluble unless you drop an acid or a strong base on it (Ksp = 3×10^-34), meaning it should be no surprise if it shows up as a particulate in a vaccine at neutral physiological pH! I haven't spent a huge amount of time looking at vaccine design, but the main point of the adjuvant is to cause the antigen to be retained at the site of injection for a prolonged time so that the body is exposed to it for a longer period. The adjuvant adheres to the vaccine antigen and, by being an insoluble particle, lodges in your tissues upon injection and stays there, holding the antigen with it. I found immunology papers on PubMed calling this the establishment of an 'immune depot' for stimulating immune cells. Over a prolonged period, the tiny but nonzero solubility set by the Ksp allows the compound to gradually dissolve and release the antigen out of the injection site, but aluminum hydroxide will never reach a very high concentration in the body as a whole: that's what the Ksp says, that the soluble phase of the salt components can be no greater than about 2.4 nM, well below the exposure limits recommended in the MSDS of between 30 nM and 100 nM (by my calculation).
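The Ksp arithmetic is simple enough to show. This sketch ignores the aluminum hydroxo complexes that matter at real physiological pH and just treats the ideal dissolution Al(OH)3 -> Al3+ + 3OH-, so it lands in the same ballpark as, though not exactly on, the 2.4 nM figure above:

# Ideal solubility of Al(OH)3 from Ksp = [Al3+][OH-]^3.
# Dissolving s mol/L gives [Al3+] = s and [OH-] = 3s, so Ksp = 27 * s^4.
Ksp = 3e-34
s = (Ksp / 27) ** 0.25
print(s * 1e9, 'nM')   # ~2 nM: a vanishingly small dissolved concentration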

But, if you look at vaccine adjuvant under SEM, it will be a colloidal particle with a core of aluminum in the EDS! You can even see examples of this in the target paper itself: the SEM in figure 1 looks like a colloid fractal (they call it 'crystals,' but it looks like a precipitate deposition fractal), and the colloids are probably aluminum hydroxide particles caked with antigen protein (again, EDS can't distinguish between aluminum hydroxide mixed with PBS and aluminum phosphate, contrary to what the caption says). These colloids are INTENDED TO BE THERE by the manufacturer of the vaccine. This is a structure designed into the vaccine to help prolong the immune response.

I've been debating the source of the singleton particles that the authors take many SEM pictures of in the remainder of their work. They are mostly not regular enough to be designed nanoparticles or precipitate colloids, and they often look like dust (Orac mentions as much). I remain skeptical of the sample preparation practices outlined in the paper: adding the cellulose membrane to the sample is asking for trouble. You use substrates in SEM to avoid contamination and to provide surfaces that are easily cleaned prior to use. The cellulose polymer and the vaccine antigens are all low contrast… at 30 kV accelerating voltage, the SEM could actually be interacting down into the volume of the filter (as I mentioned above). If this isn't dust sitting on the filter prior to dropping the vaccine onto it, it might be dust that fell randomly into the cellulose during manufacturing and was trapped there as the membrane polymerized. The filter's function wouldn't suffer from this sort of contamination because the polymer immobilizes it. It's another possibility, but the paper tests almost no hypotheses for purposes of error checking, so we'll never know.

Overall, I found that paper incompetent. There's no reason to take it seriously. I hope this blog post helps balance my previous one, which criticized science advocates for misusing the science.