Powerball Probabilities

If you’ve read anything else in this blog, you’ll know I write frequently about my playing around with quantum mechanics. As a digression from a natural system that is all about probabilities, an interesting little toy problem I decided to tackle is figuring out how the “win” probabilities are determined in the lottery game Powerball.

Powerball is actually quite intriguing to me. They have a website here which details, by prize level, all the winners across the whole country in any given drawing. You may have looked at this chart at some point while trying to figure out if your ticket won something useful. Part of what intrigues me about this chart is that it tells you, for a given drawing, roughly how much money was spent on Powerball and how many people bought tickets. How does it tell you this? Because probability is an incredibly reliable gauge of behavior at big sample sizes. And Powerball quite willingly lays all the numbers out for you to do its bookkeeping for it by telling you exactly how many people won… particularly at the high-probability-to-win levels, which push into the regime of Gaussian statistics. For big samples, like millions of people buying Powerball tickets, the fluctuations in the counts go only as sqrt(N), so the relative errors on average values become insignificant. And the probabilities reveal what those average values are.

The game is doubly intriguing to me because of the psychological component that drives it. As the pot grows, people’s willingness to play grows, even though the probabilities never change. The game leaps into the national consciousness every time the pot gets big, and people play more aggressively as if they had a greater chance of winning that money. It is true that somebody ultimately walks away with the big pot, but what’s the likelihood that that somebody is you?

But, as a starter, what are the probabilities that you win anything when you buy a ticket? To understand this, it helps to know how the game is set up.

As everybody knows, Powerball is one of those games where they draw a bunch of little balls printed with numbers out of a machine with a spinning basket, and you, as the player, simply match the numbers on your ticket to the numbers on the balls. If your ticket matches all the numbers, you win big! And, as an incentive to make people feel like they’re getting something out of playing, the Powerball company awards prizes for various combinations of matching numbers and adds in multipliers which increase the size of the award if you do get a match. You might only match a number or two, but they award you a couple of bucks for your effort. If you really want, you can pick the numbers yourself, but most people simply grab random numbers spat out of a computer… not that I’m telling you anything you don’t already know at this point.

One of the interesting qualities of the game is that the probabilities of prizes are very easy to adjust. The whole apparatus stays the same; they just add or subtract balls from the baskets. In Powerball, as currently run, there are two baskets: the first basket contains 69 balls while the second contains 26. Five balls are drawn from the first basket while only one, the powerball, is drawn from the second. There is actually an entire record available of how the game has been run in the past: how many balls were in the first or second basket and when balls were added or subtracted from each. As the game has crossed state lines and the number of players has grown, the number of balls has also steadily swelled. I think the choice of numbering has been made pretty carefully to keep the smallest prize attainably easy to get while letting the grand prize grow enticingly larger and larger (by making it correspondingly harder to win). Prizes are mainly regulated by the presence of the powerball: if your ticket manages to match the powerball and nothing else, you win a small prize, no matter what. Prizes get bigger as a larger number of the other five balls are matched on your ticket.

The probabilities at a low level work almost exactly as you would expect: with 26 balls in the powerball basket, at any given drawing you have 1 chance in 26 of matching the powerball. This means you have roughly 1 chance in 26 of winning some prize determined by the presence of the powerball. There are also prizes for matching three or more of the balls drawn from the main basket without the powerball, which pushes the overall chance of winning anything to slightly better than 1 in 26.

For the number savvy, this begins to reveal the economics of Powerball: an assured win by these means requires you to spend, on average, $52. That’s 26 tickets, at $2 each, among which you are likely to have one that matches the powerball. Note that the prize for matching that number is $4. Spending $52 to net only $4 is a big overall loss. But this 26-ticket buy-in is actually hiding the fact that you have a small chance of matching some sequence of other numbers and obtaining a bigger prize… and it would certainly not be an economic loss if you matched the powerball and then the five other balls, yielding a profit in the hundreds of millions of dollars (and this is usually what people tell themselves as they spend $2 for each number).

The probability to win the matched powerball prize only, that is to match just the powerball number, is actually somewhat worse than 1 in 26. The probability is attenuated by the requirement that you hit no matches on any other of the five possible numbers drawn.

Finding the actual probability goes as follows: (1/26)*(64/69)*(63/68)*(62/67)*(61/66)*(60/65). If you multiply that out and invert it, you get 1 hit in 38.32 tries. The first factor is, of course, the chance of hitting the powerball, while the other five are the chances that each of the five main balls drawn misses your numbers… those factors are naturally quite close to 1, so you are likely to “hit” them, but they all count toward the probability of hitting the powerball only.
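If you want to check that arithmetic yourself, here is a minimal Python sketch; the 1/26 factor and the five miss factors are exactly the ones written above, and the script itself is just my own bookkeeping, not anything official:

```python
from math import prod

# Match the powerball (1 in 26) and miss all five main balls drawn from 69.
p_powerball_only = (1 / 26) * prod((64 - n) / (69 - n) for n in range(5))

print(f"probability: {p_powerball_only:.6f}")          # ~0.026093
print(f"one hit in {1 / p_powerball_only:.2f} tries")  # ~38.32
```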

This number may not be that interesting to you, but lots of people play the game, and that means the count of people hitting just the powerball is essentially Gaussian distributed. This is useful to a physicist because it reveals something about the Powerball-playing audience in any given week: the site I mentioned tells you how many people won with only the powerball, so multiplying that number by 38.32 tells you how many tickets were purchased prior to the drawing in question. For example, as of the August 12, 2017 drawing, 1,176,672 numbers won the powerball-only prize, meaning that very nearly 38.32 × 1,176,672 numbers were purchased: about 45,090,071 numbers, with a statistical uncertainty of only a few tens of thousands (an error well below 1%).
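Here is the same inference as a few lines of Python, using the winner count quoted above; treating the winner count as a binomial draw is my own assumption about how to propagate the statistics:

```python
winners = 1_176_672      # powerball-only winners, 8/12/2017 drawing
p = 1 / 38.32            # chance that any given ticket wins that prize

tickets = winners / p                         # ~45 million numbers sold
sigma = (tickets * p * (1 - p)) ** 0.5 / p    # statistical uncertainty

print(f"estimated tickets sold: {tickets:,.0f} +/- {sigma:,.0f}")
print(f"relative error: {sigma / tickets:.4%}")   # well under 0.1%
```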

How many people are playing? If people mostly purchase maybe two or three numbers, around 15-20 million people played. Of course, I’m not accounting for the slavering masses who went whole hog and dropped $20 on numbers; if everybody did that, about 4.5 million people played… truly, I can’t know people’s purchasing habits for certain, but I can say with confidence that no more than a few tens of millions of people played.

The number there reveals quite clearly the economics of the game for the period between the 8/12 drawing and the one a couple of days prior: roughly $90 million was spent on tickets! This is really quite easy arithmetic: at $2 a number, the spend is just twice the number of ticket numbers sold. If you look at the total prize payout, also on that page I provided, $19.4 million was won. This means that the Powerball company kept ~$70 million made over about three days, of which some got dumped into the grand prize and some went to whatever overhead they keep (I hear at least some of that extra is supposed to go into public works, and maybe some also ends up in the Godfather’s pocket). Lucrative business.

If you look at the prize payouts for the game, most of the lower-level prizes pay between $4 and $7. You can’t get a prize that exceeds $100 until you match at least four balls. Note, here, that the probability of matching four balls including the powerball (three main balls plus the powerball) is about 1 in 14,494. This means that to assure yourself a $100 prize, you have to spend ~$29,000. You might argue that among those 14,494 tickets you’ll also win a bunch of smaller prizes ($4 prizes at odds of 1 in 38 and 1 in 91, and $7 prizes at 1 in 700 and 1 in 580) and maybe break even. Here’s the calculation for how much you’d likely make back on that buy-in: $4×(14,494×(1/38 + 1/91)) + $7×(14,494×(1/700 + 1/580))… I’ve rounded the probabilities a bit… ≈ $2,483. So, for roughly $29,000 spent to assure a single $100 win, you collect only about $2,600 in prizes altogether, for a net loss of about $26,400. Notice that $4 back on $52 spent is roughly 8%, while $2,600 back on $29,000 is roughly 9%… the payoff barely improves at attainable levels! Granted, there’s a chance at a couple hundred million, but the probability of the bigger prize is still pretty well against you.
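A quick sanity check of that expected-value arithmetic, using the prize amounts and the rounded odds quoted above (a sketch, not an exhaustive accounting of every prize tier):

```python
tickets = 14_494               # enough for about one expected $100 win
cost = 2 * tickets             # ~$29,000 at $2 per number

# prize amount -> rounded odds of the smaller prizes, as quoted above
lesser = {4: (1/38, 1/91), 7: (1/700, 1/580)}
expected_lesser = sum(amount * tickets * sum(odds)
                      for amount, odds in lesser.items())

print(f"spent:                ${cost:,}")
print(f"expected lesser wins: ${expected_lesser:,.2f}")   # ~$2,483
print(f"net loss:             ${cost - 100 - expected_lesser:,.2f}")
```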

Suppose you are a big spender and you managed to rake up $29,000 in cash to dump into tickets, how likely is it that you will win just the $1 million prize? That’s five matched balls excluding the powerball. The probability is 1 in 11,688,053. By pushing the numbers, your odds of this prize have become 14,500/11,688,053, or about 1 chance in 800. Your odds are substantially improved here, but 1 in 800 is still not a wonderful bet despite the fact that you assured yourself a fourth tier prize of $100! The grand prize is still a much harder bet with odds running at about 1 in 20,000, despite the amount you just dropped on it. Do you just happen to have $30,000 burning a hole in your pocket? Lucky you! Lots of people live on that salary for a year.
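And the odds for the big prizes after that $29,000 bulk purchase, again just restating the numbers above:

```python
tickets = 14_500                 # the ~$29,000 block of numbers

odds_first_prize = 11_688_053    # 1 in ~11.7 million: five balls, no powerball
odds_grand_prize = 292_000_000   # roughly 1 in 292 million for the jackpot

print(f"$1 million prize: about 1 in {odds_first_prize / tickets:,.0f}")  # ~1 in 806
print(f"grand prize:      about 1 in {odds_grand_prize / tickets:,.0f}")  # ~1 in 20,000
```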

Most of this is simple arithmetic and I’ve been bandying about probabilities gleaned from the Powerball website. If you’re as curious about it as me, you might be wondering exactly how all those probabilities were calculated. I gave an example above of the mechanical calculation of the lowest level probability, but I also went and figured out a pair of formulae that calculate any of the powerball prize probabilities. It reminded me a bit of stat mech…

Probability without the powerball:

\[
P_{\text{no PB}}(Y) \;=\; \frac{K-1}{K}\;\cdot\;\frac{X!}{Y!\,Z!}\;\cdot\;\prod_{m=0}^{Y-1}\frac{X-m}{N-m}\;\cdot\;\prod_{n=0}^{Z-1}\frac{M-n}{N-Y-n}
\]

Probability with the powerball:

\[
P_{\text{PB}}(Y) \;=\; \frac{1}{K}\;\cdot\;\frac{X!}{Y!\,Z!}\;\cdot\;\prod_{m=0}^{Y-1}\frac{X-m}{N-m}\;\cdot\;\prod_{n=0}^{Z-1}\frac{M-n}{N-Y-n}
\]

Number of tries for one hit:

\[
\text{tries per hit} \;=\; \frac{1}{P(Y)}
\]

The final relation just shows how many tries you need, on average, in order to hit one success, given a probability calculated with the other two equations. The first equation covers the cases where you match some number of the main balls without managing to match the powerball, while the second is the complement, where you match numbers having also hit the powerball. Between these two equations, you can calculate all the probabilities for the Powerball prizes. Since probabilities were always hard for me, I’ll try to explain the parts of these equations.

If you’re not familiar with the factorial operation, that’s what is denoted by the exclamation point “!”: it is the product of the whole numbers counting up from one to the number in question… for example, 5! means 1x2x3x4x5. The special case 0! should be read as 1.

The first factor is the probability of either hitting or missing the powerball, where K = 26 is the number of balls in the powerball basket. The factorial factor X!/(Y! Z!) is the multiplicity; it tells you how many different ways you can distribute a certain number of matches (Y) among a number of open slots (X) while drawing a number of mismatches (Z) in the process, where X = Y + Z. In Powerball, five main balls are drawn, so X = 5 and Y is the number of matches (anywhere from 0 to 5), while Z is the number of misses. Multiplicity shows up in stat mech and is intimately related to entropy. The other constants refer to the main basket: N = 69 is the number of possible choices in the main basket and M = N − X = 64 is the number of those choices that do not appear on your ticket.

The last two factors give the probabilities for the given number of hits (Y) and the given number of misses (Z), and I have applied the product operator to spiffy up the notation. The product operator, written with a capital Pi, is an iterator much like the summation operator: you repeatedly multiply successive terms, much like a factorial, except that each term is produced from a given formula over a given range of the index. Here, the indices m and n start at zero and run up to Y − 1 or Z − 1. In the extreme cases of either all hits or all misses, the relevant empty product (the misses or the hits, respectively) must be set equal to one so that it doesn’t count.
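Here is a minimal Python sketch of those two formulas, with the constants as defined above (K = 26, N = 69, X = 5, M = 64); the function name is my own, just for illustration, and the printed odds can be checked against the official Powerball prize table:

```python
from math import comb, prod

K, N, X = 26, 69, 5   # powerball basket, main basket, main balls drawn
M = N - X             # main-basket numbers that are not on your ticket

def prize_probability(Y, with_powerball):
    """Probability of matching exactly Y of the five main balls,
    with or without also matching the powerball."""
    Z = X - Y                                             # number of misses
    powerball_factor = 1 / K if with_powerball else (K - 1) / K
    multiplicity = comb(X, Y)                             # X! / (Y! Z!)
    hits = prod((X - m) / (N - m) for m in range(Y))      # empty product = 1
    misses = prod((M - n) / (N - Y - n) for n in range(Z))
    return powerball_factor * multiplicity * hits * misses

for Y in range(6):
    for with_pb in (True, False):
        p = prize_probability(Y, with_pb)
        tag = "with powerball" if with_pb else "no powerball "
        print(f"match {Y} {tag}: 1 in {1 / p:,.2f}")
```

The Y = 0, with-powerball case reproduces the 1 in 38.32 from above, and Y = 5 with the powerball lands on the roughly 1-in-292-million grand prize odds.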

This is one of those rare situations where the American public performs a probability experiment with the values all well recorded, so that it’s possible to see the outcomes. How hard is it to win the grand prize? Well, the odds are one in 292 million. Consider that the population of the United States is 323 million. That means that if everybody in the United States bought one Powerball number, about one person would win.

Only one.

Thanks to the power of the media, everybody has the opportunity to know that somebody won. Or not. That this person exists, nobody wants to doubt, but consider that the odds of winning are so scant that you not only won’t win, but you pretty likely will never meet anyone who did. Sort of surreal… everything is above board, you would think, but the event is so rare that you have no personal assurance that it ever actually happens. You can suppose that maybe it does happen because people do win those dinky $4 prizes, but maybe that’s just a red herring and nobody ever really wins! Those winner testimonials could be from actors!

Yeah, I’m not much of a conspiracy theorist, but it is true that a founding tenet of the idea of a ‘limit’ in math is that 99.99999% is effectively 100%. Going to the limit where the discrepancy is so small as to be infinitesimal is what calculus is all about. It is fair to say that winning very nearly never happens! Everybody wants to be the one who beats the odds, which is why Powerball tickets are sold, but the extraordinarily vast majority never will win anything useful… I say “useful” because winning $4 or $7 is always a net loss. You have to win one of the top three prizes for it to be anywhere near worth anything, which you likely never will.

One final fairly interesting feature of the probability is that you can make some rough predictions about how frequently the grand prize is won based on how frequently the first prize is won. First prize is matching all five of the main balls, but not the powerball. This happens about once per 12 million numbers, which is about 25 times more likely than matching all five plus the powerball. In the report on winnings, a typical frequency is about 2 to 3 such winners per drawing. About 1 time in 26, a person who matches all five manages to get the powerball too, so, with two drawings per week and about 2.5 first-prize winners per drawing, that’s five winners per week… which implies that the grand prize should be won roughly once every five to six weeks, or every month and a half or so. The average here has a very large standard deviation because the number of winners is so small that the fluctuation is an appreciable fraction of the measurement, which is why there is a great deal of variation in the period between grand prize wins. The incidence is much more Poissonian and stochastic, which allows some prizes to get quite big compared to others and causes their values to disperse across a fairly broad range. Uncertainty tends to dominate, making the game a bit more exciting.
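Here’s that back-of-the-envelope frequency estimate in Python, using the same rough inputs quoted above (2 drawings per week, about 2.5 first-prize winners per drawing):

```python
drawings_per_week = 2
first_prize_winners_per_drawing = 2.5   # typical match-5 (no powerball) count

# Of the tickets good enough to match all five main balls, roughly 1 in 26
# also matches the powerball, so the expected jackpot winners per week are:
jackpots_per_week = drawings_per_week * first_prize_winners_per_drawing / 26

print(f"expected jackpot wins per week: {jackpots_per_week:.2f}")
print(f"about one jackpot every {1 / jackpots_per_week:.1f} weeks")  # ~5.2
```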

While the grand prize is small, the number of people winning the first prize in a given week is small (maybe none or one), but this number grows with the size of the grand prize (to maybe 5 or 6, or as high as 9). When the prize grows large enough to catch the public consciousness, the likelihood that somebody will win goes up simply because more people are playing, and this can be witnessed in the fluctuating frequency of the wins of lower-level prizes. The jackpot breathes around a pulse of maybe 200 million dollars, lubbing at 40 million (maybe 0 to 1 person winning the first prize) and dubbing at 250 million (with 5 people or more winning the first prize).

Quite a story is told if you’re boring and as easily amused as me.

In my opinion, if you do feel inclined to play the game, be aware that when I say you probably won’t win, I mean that the numbers are so strongly against you that you do not appreciably improve your odds by throwing down $100 or even $1,000. The little $4 wins do happen, but they never pay for themselves, and $1,000 spent will likely not get you more than about $100 in total winnings. It might as well be a voluntary tax. Cherish the dream your $2 buys, but do not stake your well-being on it. There’s nothing wrong with dreaming as long as you understand where to wake up.

(edit 8-24-17)

There was a grand prize winner last night (Wednesday 8-23-17). The outcomes are almost completely as should be expected: the winner is in Massachusetts… the majority of the country’s population is located in states on either the east or west coast, so this is unsurprising. There were 40 match-5 winners, so you would anticipate at least one ticket at that level to also be a grand prize winner, which is exactly what happened (the 1-in-26 difference between five with the powerball and five without). There were about 5.9 million powerball-only winners, so 38.32 × 5.9 million is roughly 226 million total Powerball numbers sold in the run-up to last night’s drawing… with grand prize odds of 1 in 292 million, that’s approaching parity. It also means that more than $452 million was spent since Saturday on Powerball lottery numbers (the calculation excludes the extra dollar spent on multipliers). About five times as many ticket numbers were sold for this drawing as when I made my original analysis a week ago. With that many tickets sold, there was almost assuredly going to be a winner last night. This is not to say there shouldn’t have been a winner before this –probability is a fickle mistress– but the numbers were such that it was unlikely, though not impossible, for the prize to keep growing. The last time the Powerball was won was on 6-10-17, about two months and thirteen days ago… you can tell that this is an unusually large jackpot because this period is longer than the usual period between wins (I had generously estimated about six weeks based on a guess of 2 to 3 match-5 winners per drawing, but that estimate may itself be a bit high).

There was only one grand prize winning number out of 226 million tickets sold (not counting all the drawings that failed to yield a grand prize winner prior to this.) Think on that for a moment.


Revoke Shaquille’s Doctorate in Education… he doesn’t deserve it.

We are in a world where truth doesn’t matter.

Read this and weep. These men are apparently the authorities of truth in our world.

Everywhere you look, truth itself is under assault. It doesn’t really matter whether you believe it, and it really doesn’t matter what you want it to say. Truth is not beholden to human whims. We can’t ultimately change it by manipulating it with cellphone apps. We can’t reinterpret it even if we wanted to. One of these days, however highly we hold ourselves in importance, the truth will catch up. And we will deserve what happens to us after that point in time.

“It’s true. The Earth is flat. The Earth is flat. Yes, it is. Listen, there are three ways to manipulate the mind — what you read, what you see and what you hear. In school, first thing they teach us is, ‘Oh, Columbus discovered America,’ but when he got there, there were some fair-skinned people with the long hair smoking on the peace pipes. So, what does that tell you? Columbus didn’t discover America. So, listen, I drive from coast to coast, and this s*** is flat to me. I’m just saying. I drive from Florida to California all the time, and it’s flat to me. I do not go up and down at a 360-degree angle, and all that stuff about gravity, have you looked outside Atlanta lately and seen all these buildings? You mean to tell me that China is under us? China is under us? It’s not. The world is flat.”

This spoken by a man with a public platform and a Doctorate in Education. This is the paragon of teachers!

{Edit: 3-20-17 since I’m thinking better about this now, I will rebut his meaningless points.

First, arguments about whether or not Columbus discovered America are a non-sequitur as to whether or not the Earth is round.

Second, driving coast to coast can tell you very little about the overall roundness of the Earth, especially if you aren’t paying attention to the things that actually reveal it. The curvature of the earth is extremely small: the surface drops away only about 8 inches over a mile. On the scale of feet, that works out to thousandths of an inch, so you simply can’t measure the departure from flatness at the scales a human being directly experiences standing on the surface. Can you really perceive the curvature by eye looking out fifty miles from a skyscraper in the middle of Atlanta, or distinguish the tiny difference in the direction of ‘up’ between two skyscrapers separated by ten miles? You can’t resolve tens of feet with your eyes at a distance of miles. That said, you actually can see Pikes Peak emerge over the horizon as you come out of Kansas into Colorado, but I suppose you would explain that away by some sort of giant conspiracy-theory elevator device. To actually start to see the curvature to a meaningful degree with your own eyes, you need to be at an altitude of hundreds of thousands of feet above the surface… which you could actually afford to do as somebody with ridiculous wealth.

Third, how would you know that China is not ‘under?’ How would you know where China isn’t when you wouldn’t be able to see that distance along a flat surface no matter which direction you look? Can you explain the phase factor that you pick up to your day that causes your damn jet lag every time your wealthy, ignorant ass travels to places like China? By your logic, you should be able to use your colossal wealth to travel to where the globe of the sun pops out of the plane of the Earth in the east every morning. Hasn’t it once occurred to you that if you’re truly right, you should test a hypothesis first before making an assertion that can be easily shown to be wrong?}

You made a mint of money on the backs of a lot of people who made it possible for you to be internationally known, all because of the truth that they determined for you! You do not respect them, you do not understand the depth of their efforts, you do not know how hard they worked. You do not deserve the soapbox they built for you.

For everyone who values the truth, take a moment to share a little about it. Read other things in my blog to see what else I have to say. I have very little I can say right this second; I’m aghast and I feel the need to cry. My hard work is rendered essentially meaningless by morons like Shaquille O’Neal… men of no particular intellect or real skill dictating what reality ‘actually is’ while having no particular capacity to judge it for themselves.

From a time before cellphone apps and computer graphics manipulation, I leave you with one of the greatest pinnacles of truth ever to be achieved by the human species:

[Image: the Lunar Reconnaissance Orbiter “Earthrise” photograph of the Earth seen over the Moon]

Like it or not, that’s Earth.

If you care to, I ask you to go and hug the scientist or engineer in your life. Tell them that you care about what they do and that you value their hard work. The flame of enlightenment kindled in our world is precious and at dire risk of guttering out.

Edit:

An open letter to the Shaq:

Dear Shaquille O’Neal,

I’m incredibly dismayed by your use of your public persona to endorse an intellectually bankrupt idea like flat earth conspiracy theories, particularly in light of your doctorate in education. If you are truly educated and value truth, you should know that holding this stance devalues the hard work of generations of physicists and engineers and jeopardizes the standing of actual scientific truth in the public arena. The purpose of an educator is to educate, not to misinform… the difference lies in whether or not you spread the truth.

There is so much evidence of the round earth available in the world around us without any appeal to digital media: the cycle of the seasons, the scheduled passages of the moon and the planets, observations of Coriolis forces in weather patterns and simple ballistics, the ability to board an airplane heading west and keep heading west until you arrive back where you started, the passage of satellites and spacecraft visible overhead from the surface of the Earth, the very existence of GPS on your goddamn smart phone, and the common shapes of objects like the moon and planets visible through telescopes in the night sky. Appeals to flat earth conspiracies in the face of all this show a breathtaking lack of capacity to understand how the world fits together. That such an appeal comes from a figure who is ostensibly a force of truth (an educator) is deeply hurtful to those of us who developed that truth: modern scientists and engineers.

Since you are so profoundly wealthy, you among all people are singularly in a position to prove to yourself the roundness of our world. I bet you 50 million dollars that I don’t even have and will spend my entire life trying to repay, that you can rent an airliner with an honest pilot of your choice and fly west along a route also of your choice, and come back to the airport you originally departed from without any significant eastward travel. Heck, you can do the same exercise heading north or south if you want. And, if that experiment isn’t enough, use your celebrity to talk to Elon Musk: I hear he’s selling tickets now to rich people for flights around the moon. I bet he would build you a specially-sized two-person-converted-to-one berth in his Dragon capsule to give you a ride high enough to take a look for yourself at the shape of the world, if your eyes are the only thing you’ll believe. If you lose, you pay a 49 million dollar endowment to the University of Colorado Department of Physics for the support of Physics Education –and a million to me for the heartache you caused making a mockery of my education and profession by use of your ill-gotten public soapbox and mindlessly open mouth. Moreover, if you lose, you relinquish your Doctorate and make a public apology for standing for exactly the opposite of what that degree means.

Sincerely,

Foolish Physicist
of Poetry in Physics

Edit 4-5-17:

So, Shaq walked back his comments.

O’Neal: “The first part of the theory is, I’m joking, you idiots. That’s the first part of the theory. The second part is, I said jokingly that when I’m in my bus and I drive from Florida to California, which I do every summer, it seems to be flat. When I’m in my plane, and we’re getting ready to land, and I open up the window, and I’m looking at all the land that we’re flying over, it seems to be flat.”

“This world we live in, people take things too seriously, but I’m going to give the people answers to my test,” he said. “Knowing that I’m a funny guy, if something seems controversial or boom, boom, boom, you’ve got to have my funny points on, right? So now, once you have my funny points on, that should eradicate and get rid of all your negative thoughts, right? That’s what you should do when you hear a Shaquille O’Neal statement, OK? You should know that he has funny points right over here, and what did he say? Boom, boom, boom, add the funny points. You either laugh or you don’t laugh, but don’t take me seriously. When I want you to take me seriously, you will know by the tone of my voice that I’m being serious.”

“No, I don’t think that,” O’Neal told Harbinger of a flat Earth. “It was a joke, OK? So know that when Shaquille O’Neal says something, 80 percent of the time I’m being humorous, and it is a joke. And 20 percent of the time, I’m being serious, but when I’m being serious, you’ll know. You want to see me, seriously? See me and Charles Barkley going back and forth on TNT. That’s when I’m mad and when I’m serious. Other than that, you’re not going to get that out of me, so I was just joking people. The Earth is not round, it’s flat. I mean, the Earth is not flat, it’s round.”

One thing that should be added to these statements is this: there are people who are actively spreading misinformation about the state of the world, for instance that the earth is flat. The internet, Youtube, blogs, you name it, have given these people a soapbox that they would not otherwise have. Given that there is a blatant antiscientific streak in the United States which attacks accepted, settled science as a big cover-up designed to destroy the rights of the everyday man, it is the duty of scientists and educators to take the truth seriously. In a world where the theory of evolution, climatology and vaccine science are all actively politicized, we have to stand up for the truth.

Where real scientists are busy studying and doing our work, the antiscientific activists are solely about spreading their belief… they don’t study, they don’t question, they spend their time actively lobbying the government, appealing to legislators, and running for and getting onto school boards where they have an opportunity to pick which books are presented to school districts, and inserting themselves into various other places where they can actively undercut what students are told about the truth of the world. They don’t spend their energy studying; they spend it tinkering with the social mechanisms which provide our society with its next generation of scientists. As such, their efforts are directed more at undercutting the mechanisms that preserve the truth than at evaluating the truth… as scientists do. These people can do huge damage to us all. Every screwball coming out of a diploma-mill “Quantum University” with a useless, unaccredited ‘PhD,’ who goes off to promote woo-bong herbalist healthcare as an alternative to science-based medicine, does damage to us all by undercutting what it means to get healthcare and by putting crankery and quackery, in all seriousness, at the same level as scientific truth when there should be no comparison.

If everybody understood that there is no ‘alternative’ to the truth, joking about what is true would mean something totally different to me. But we live in a world where ‘alternative facts’ are a real thing and where everyone with a soapbox can say whatever they wish without fear of reprisal. Lying is a protected right! But someone has to stand up for truth. That someone should be scientists and educators. That should include an ‘education doctorate’ like the Shaq. If he were an NBA numbskull without the doctorate, I would care less: Kyrie Irving is a joke. But he isn’t; he’s got a doctorate, and he has a responsibility to uphold what that degree means! The only reason ironic humor can work is if it’s clear that one is being ironic rather than serious… and that is never completely clear in this world.

Nuclear Toxins

A physicist from Lawrence Livermore National Laboratory has been restoring old nuclear bomb detonation footage. This seems to me to be an incredibly valuable task because all of the original footage was shot on film, which is currently decaying and falling apart. There have been no open-air nuclear bomb detonations by the US or USSR since the early 1960s, and none anywhere since 1980, which is good… except that people are in the process of forgetting exactly how bad a nuclear weapon is. The effort of saving this footage makes newly declassified views of this world-changing technology available to people who could never have seen them before. Nukes are sort of mythical to a body like me who wasn’t even born until about the time that testing went underground: to everybody younger than me, I suspect that nukes are an old-people thing, a less important weapon than computers. That Lawrence Livermore has posted this footage to Youtube is an amazing public service, I think.

As I was reading an article on Gizmodo about this piece of news, I happened to wander into the comment threads to see what the echo chamber had to say about all this. I should know better. Admittedly, I actually didn’t post any comments castigating anyone, but there was a particular comment that got me thinking… and calculating.

Here is the comment:

Nuclear explosions produce radioactive substances that are rare in nature — like carbon-14, a radioactive form of the carbon atom that forms the chemical basis of all life on earth.

Once released into the atmosphere, carbon-14 enters the food chain and gets bound up in the cells of most living things. There’s still enough floating around for researchers to detect in the DNA of humans born in 2016. If you’re reading this, it’s inside you.

This is fear mongering. If you’ve never seen fear mongering before, this is what it looks like. The comment is intended to deliberately inspire fear not just in nuclear weapons, but in the prospect of radionuclides present in the environment. The last sentence is pure body terror. Dear godz, the radionuclides, they’re inside me and there’s no way to clean them out! I thought for a time about responding to this comment. I decided not to because there is enough truth here that anyone should probably stop and think about it.

For anyone curious, the wikipedia article on the subject has some nice details and seems thorough.

It is true that C-14 is fairly rare in nature: its natural abundance is about 1 part per trillion of carbon. It is also true that the atmospheric test detonations of nuclear bombs created a spike in the C-14 present in the environment. And, while C-14 is rare, it is not technically unnatural, since it is continuously formed by cosmic rays impinging on the upper atmosphere. For the astute reader, the C-14 produced by cosmic rays forms the basis of radiocarbon dating: C-14 is present at a particular known, roughly constant proportion in living things right up until you die and stop taking it up from the environment –a scientist can then determine the date when living matter died from the radioactive decay curve of C-14.
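As a toy illustration of the dating idea (my own sketch, using the 5,730-year half-life of C-14 that comes up again below):

```python
from math import log

HALF_LIFE_YEARS = 5730                      # half-life of C-14

def radiocarbon_age(fraction_remaining):
    """Years since death, given the fraction of the living-tissue
    C-14 level still present in a sample."""
    decay_constant = log(2) / HALF_LIFE_YEARS
    return -log(fraction_remaining) / decay_constant

print(radiocarbon_age(0.5))    # one half-life:  5730 years
print(radiocarbon_age(0.25))   # two half-lives: 11460 years
```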

Since it’s not unnatural, the real question here is whether the spike of radionuclides created by nuclear testing significantly increases the health hazard posed by C-14 above and beyond what it would normally be. You have it in your body anyway; is there a greater hazard due to the extra amount released? This puzzle is actually somewhat intriguing to me because I worked for a time with radionuclides, and it is kind of chilling how much protective equipment you need to use and how many safety measures are required. The risk is a non-trivial one.

But, what is the real risk? Does having a detectable amount of radionuclide in your body that can be ascribed to atomic air tests constitute an increased health threat?

To begin with, what is the health threat? For the particular case of C-14, one of a handful of radionuclides that can be incorporated into your normal body structures, the health threat would obviously come from the radioactivity of the atom. In this particular case, C-14 is a beta emitter. This means that C-14 radiates electrons; specifically, one of the neutrons in the atom’s nucleus converts into a proton by giving off an electron and a neutrino, turning the carbon into nitrogen. The neutrino basically doesn’t interact with anything, but the radiated electron can carry energies up to 156 keV (about 2.5×10^-14 joules). This does damage to the human body by two routes: either by direct collision of the radiated electron with the body, or, if the C-14 was already part of your body, by a structurally important carbon atom converting into a nitrogen atom during the decay. Obviously, if a carbon atom suddenly turns into nitrogen, some unintended chemistry follows, since nitrogen can’t maintain the same number of valence interactions as carbon without taking on a charge. So, energy deposition by particle collision, or spontaneous chemistry, is the potential source of the health threat.

In normal terms, the carbon-to-nitrogen chemistry route for damage is not accounted for in radiation health effects simply because of how radiation is usually encountered: you need a lot of radiation in order to have a health effect, and it usually comes from an exogenous source, that is, a radiation source outside the body rather than one incorporated with it, like endogenous C-14. This would be radiation much like the UV radiation which causes a sunburn. Health effects due to radiation exposure are measured on a scale set by a dose unit called a ‘rem.’ A rem expresses an amount of radiation energy deposited into body mass, where 1 rem is equal to 1.0×10^-5 joules of radiation energy deposited into 1 gram of body mass. Standard tables give the general scale of rem doses that cause health effects. People who work around radiation as part of their job are limited to a full-body yearly dose of 5 rem, while the general public is limited to 0.1 rem per year. Everybody is expected to receive an environmental radiation dose of about 0.3 rem per year, and there’s an allowance of 0.05 rem per year for medical X-rays. It’s noteworthy that not all radiation doses are created equal and that the target body tissue matters; this is manifest in the different doses allowed to the eyes (15 rem) or the extremities, like the skin (50 rem). A sunburn would be like a dose of 100 to 600 rem to the skin.

What part of an organism must the damage affect in order to cause a health problem? Really, only one part is truly significant, and that’s your DNA. Easy to guess. Pretty much everything else is replaceable, to the extent that even a single cell dying from critical damage is totally expendable in the context of an organism built of trillions of cells. The problem of C-14 being located in your DNA directly is numerically a rather minor one: DNA only accounts for about 3% of the dry mass of your cells, meaning that only about 3% of the C-14 incorporated into your body is directly incorporated into your DNA, so that most of the damage to your DNA is due to C-14 not directly incorporated in that molecule. This is not to say that chemistry doesn’t cause the damage, merely that most of the chemical damage is probably due to energy deposition in molecules around the DNA which then react with the DNA, say by generation of superoxides or similar paths. This may surprise you, but DNA damage isn’t always a complete all-or-nothing proposition either: to an extent, the cell has machinery which is able to repair damaged DNA… the bacterium Deinococcus radiodurans is able to repair its DNA so efficiently that it can subsist indefinitely inside a nuclear reactor. Humans have some repair mechanisms as well.

Human cells handle radiation damage with roughly two levels of response. For minor damage, the cell repairs its DNA. If the DNA damage is too great to fix, a mechanism triggers in the cell that causes it to commit suicide. You can see the effect of this in a sunburn: critically radiation-damaged skin cells commit suicide en masse in the substratum of your skin, ultimately sacrificing the structural integrity of your skin and causing the external layer to slough off. This is why your skin peels after a sunburn. If the damage is somewhere in between, matters are a little murkier… your immune system has a way of tracking down damaged cells and destroying them, but those screwed-up cells sometimes slip through the cracks to cause serious disease. Inevitably cancer. Effects like these emerge for ~20 rem full-body doses. People love to worry about superpowers and three-arm, three-eye type heritable mutations due to radiation exposure, but congenital mutations are a less frequent outcome simply because your gonads are such a small proportion of your body; you’re more likely to have other things screwed up first.

One important trick in all of this to notice is that to start having serious health effects that can be clearly ascribed to radiation damage, you must absorb a dose of greater than about 5 rem.

Now, what kind of a radiation dose do you acquire on a yearly basis from body-incorporated C-14 and how much did that dose change in people due to atmospheric nuclear testing?

I did my calculations on the supposition of a 70 kg person (which is 154 lbs). I also converted rem into a more easily used physical quantity, joules per gram (1 rem = 1×10^-5 J/g, see above). One rem of exposure for a 70 kg person works out to 0.7 J of absorbed energy, so the 5 rem yearly limit corresponds to 3.5 J/year and 20 rem to 14 J/year. Beta electrons from C-14 deposit at most about 2.4×10^-14 J per hit (~150 keV) and about 0.8×10^-14 J per hit on average (~50 keV).
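Here is that unit bookkeeping as a couple of lines of Python, just restating the conversions above:

```python
REM_IN_J_PER_G = 1e-5        # 1 rem deposits 1e-5 J per gram of tissue
BODY_MASS_G = 70_000         # 70 kg person

def rem_to_joules(rem):
    return rem * REM_IN_J_PER_G * BODY_MASS_G

print(f"{rem_to_joules(1.0):.2f} J")   # 0.70 J -> 1 rem over the whole body
print(f"{rem_to_joules(5.0):.2f} J")   # 3.50 J -> occupational yearly limit
print(f"{rem_to_joules(0.1):.2f} J")   # 0.07 J -> general-public yearly limit
```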

In the following part of the calculation, I use radioactive decay and the half-life in order to determine the rate of energy transfer to the human body, on the assumption that all the beta-electron energy emitted is absorbed by the body. Radioactive decay is a purely probabilistic process in which the rate of emitted electrons is proportional to the size of the radioactive atom population. The differential equation is a simple one and looks like this:

\[
\frac{dN}{dt} = -k\,N
\]

This just means that the rate of decay (and therefore electron production rate) is proportional to the size of the decaying population where the k variable is a rate constant that can be determined from the half-life. The decay differential equation is solved by the following function:

\[
N(t) = N_0\, e^{-k t}
\]

This is just a simple exponential decay which takes an initial population of some number of objects and reduces it over time. You can solve for the decay constant by plugging the half-life into the time and simply asserting that you have 1/2 of your original quantity of objects at that time. The above exponential rearranges to find the decay constant:

\[
k = -\frac{\ln(1/2)}{\tau} = \frac{\ln 2}{\tau}
\]

Here, Tau is the half-life in seconds (I could have used my time as years, but I’m pretty thoroughly trained to stick with SI units) and I’ve already substituted 1/2 for the population change. With k from half-life, I just need the population of radiation emitters present in the body in order to know the rate given in the first equation above… where I would simply multiply k by N.

To do this calculation, I took the half-life of C-14 to be 5,730 years, which I then converted into seconds (ick; if I only care about years, next time I’ll just calculate in years). This gives a decay constant of 3.836×10^-12 per second. In order to get the decay rate, I also need the population of C-14 emitters present in the human body. C-14 has a natural prevalence of about 1 part per trillion of carbon, and a 70 kg human body contains about 16 kg of carbon (after a little google searching), which gives 1.6×10^-8 g of C-14. With C-14’s molar mass of 14 g/mole and Avogadro’s number, this comes to about 6.88×10^14 C-14 atoms in a 154 lb person. This population together with the rate constant gives the decay rate from the first equation above: 2.639×10^3 decays per second. The energy per absorbed beta electron times the decay rate gives the rate of energy deposited into the body, on the assumption that all the beta-decay energy is absorbed by the target: 2.639×10^3 decays/sec × 2.4×10^-14 joules/decay = 6.33×10^-11 J/s. Over the course of an entire year, that works out to about 0.002 joules/year.
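The whole chain of arithmetic fits in a short Python sketch; the inputs (1 ppt abundance, 16 kg of carbon, the 5,730-year half-life, and the ~150 keV worst-case beta energy) are the ones quoted above, and the result lands on the same ~0.002 J/year:

```python
from math import log

SECONDS_PER_YEAR = 3.156e7
AVOGADRO = 6.022e23

half_life_s = 5730 * SECONDS_PER_YEAR       # C-14 half-life in seconds
k = log(2) / half_life_s                    # decay constant, ~3.8e-12 per s

carbon_g = 16_000                           # carbon in a 70 kg person
c14_g = carbon_g * 1e-12                    # 1 part per trillion is C-14
n_c14 = c14_g / 14 * AVOGADRO               # ~6.9e14 C-14 atoms

decays_per_s = k * n_c14                    # ~2.6e3 decays per second
energy_per_decay = 2.4e-14                  # J, worst-case ~150 keV beta

dose_J_per_year = decays_per_s * energy_per_decay * SECONDS_PER_YEAR
print(f"{dose_J_per_year:.4f} J/year")      # ~0.002, vs the 0.07 J/year
                                            # allowed for the general public
```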

This gets me to a place where I can start making comparisons. The exposure limit for a member of the general public to ‘artificial’ radiation is 0.1 rem, or 0.07 J/year. The maximum… maximum… contribution due to endogenous C-14 is 35 times smaller than the allowed public exposure limit (using the mean beta energy, it’s more like 100 times smaller). On average, endogenous C-14 delivers about 1/100th of the permitted artificial radiation dose.

But, I’ve actually fudged here. Note that I said above that humans normally get a yearly environmental radiation dose of about 0.3 rem (0.21 J/year)… meaning that endogenous C-14 only provides about 1/300th of your natural dose. Other radiation sources that you encounter on a daily basis provide radiation exposure that is 300 times stronger than C-14 directly incorporated into the structure of your body. And, keep in mind that this is way lower than the 5 rem where health effects due to radiation exposure begin to emerge.

How does C-14 produced by atmospheric nuclear testing figure into all of this?

The wikipedia article I cited above has a nice histogram of the detected changes in environmental C-14 levels due to atmospheric nuclear testing. At the time of such testing, C-14 prevalence in the environment spiked by about a factor of two, and it has decayed over the intervening years to less than 1.1 times the natural level. The effect on your C-14 exposure specifically is to change it, at the peak, from roughly 1/300th of your natural dose to roughly 1/150th, an increase of about a third of a percent of your total background dose, and that excess has tapered to a small fraction of a percent in the fifty-odd years since. Detectable, yes. Significant? No. Responsible for health effects… not above the noise!

This is not to say that a nuclear war wouldn’t be bad. It would be very bad. But, don’t exaggerate environmental toxins. We have radionuclides present in our bodies no matter what and the ones put there by 1950s nuclear testing are only a negligible part, even at the time –what’s 100% next to 100.5%? A big nuclear war might be much worse than this, but this is basically a forgettable amount of radiation.

For anybody who is worried about environmental radiation, I draw your attention back to a really simple fact:

[Stock photo: a woman with a severe sunburn, tan lines visible on her back]

The woman depicted in the picture above has received a 100 to 600 rem dose of very (very, very) soft “X-rays” by deliberately sitting out in front of a nuclear furnace. You can even see the nuclear shadow on her back left by her scant clothing. Do you think I’m kidding? UV light is lower energy than X-rays, but not by that much (about 3 eV versus maybe 500 eV); it is absorbed directly by skin DNA and produces real radiation damage, which your body treats indistinguishably from damage caused by the particle radiation of radionuclides or by X-rays or gamma rays. The dose which produced this effect is something like two to twelve times higher than the dose radiation workers are federally permitted to receive in their skin over the course of an entire year… and she did it to herself deliberately in a matter of hours!

Here’s a hint, don’t worry about the boogieman under the bed when what you just happily did to yourself over the weekend among friends is much much worse.

What is a qubit?

I was trolling around in the comments of a news article presented on Yahoo the other day. What I saw there has sort of stuck with me and I’ve decided I should write about it. The article in question, which may have been by an outfit other than Yahoo itself, was about the recent decision by IBM to direct a division of people toward the task of learning how to program a quantum computer.

Using the word ‘quantum’ in the title of a news article is a surefire way to generate click-bait. People flock in awe to quantum-ness even if they don’t understand what the hell they’re reading. This article was a prime example. All the article really talked about was that IBM has decided that quantum computers are now a promising enough technology that it’s going to start devoting itself to the task of figuring out how to compute with them. Note, the article spent a lot of time kind of masturbating over how marvelous quantum computers will be, but it really didn’t say anything new. Another tech company deciding to pretend to be in quantum computing by figuring out how to program an imaginary computer is not an advance in our technology… digital quantum computers are generally agreed to be at least a few years off yet, and they’ve been a few years off for a while now. There’s no guarantee that the technology will suddenly emerge into the mainstream –and I’m neglecting the D-Wave machine here because it is generally agreed among experts that D-Wave hasn’t even managed to prove that its qubits remain coherent through a calculation (a requirement for it to actually be a useful quantum computer), let alone that anything was gained by scaling it up.

The title of this article was a prime example of media quantum click-bait. The title boldly declared that “IBM is planning to build a quantum computer millions of times faster than a normal computer.” Now, that title was based on an extrapolation in the midst of the article where a quantum computer containing a mere 1,000 qubits suddenly becomes the fastest computing machine imaginable. We’re very used to computers that contain gigabytes of RAM now, which amounts to several billion on-off switches on a chip, so a mere 1,000 qubits seems like a really tiny number. This should be weighed against the general concern in the physics community that an array of even 100 entangled qubits may exceed what’s physically possible… and it neglects that the difficulty of dealing with entangled systems increases exponentially with the number of qubits to be entangled. Scaling up normal bits doesn’t bump into the same difficulty. I don’t know whether it’s physically possible or not, but I am aware that IBM’s declaration isn’t a major break-through so much as splashing around a bit of tech gism to keep the stockholders happy. All the article really said was that IBM has happily decided to hop on the quantum train because that seems to be the thing to do right now.

I really should understand that trolling around in the comments on such articles is a lost cause. There are so many misconceptions about quantum mechanics running around in popular culture that there’s almost no hope of finding the truth in such threads.

All this background gets us to what I was hoping to talk about. One big misconception that seemed to be somewhat common among commenters on this article is that two identical things in two places actually constitute only one thing magically in two places. This may stem from a conflation of what a wave function is versus what a qubit is and it may also be a big misunderstanding of the information that can be encoded in a qubit.

In a normal computer we all know that pretty much every calculation is built around representing numbers using binary. As everybody knows, a digital computer switch has two positions: we say that one position is 0 and the other is 1. An array of two digital on-off switches then can produce four distinct states: in binary, to represent the on-off settings of these states, we have 00, 01, 10 and 11. You could easily map those four settings to mean 1, 2, 3 and 4.

Suppose we switch now to talk about a quantum computer where the array is not bits anymore, but qubits. A very common qubit to talk about is the spin of an atom or an electron. This atom can be in two spin states: spin-up and spin-down. We could easily map the state spin-up to be 1, and call it ‘on,’ while spin-down is 0, or ‘off.’ For two qubits, we then get the same states 00, 01, 10 and 11 that we had before, where we know what states the bits are in, but we can also turn around and invoke entanglement. Entanglement is a situation where we create a wave function containing multiple distinct particles at once, such that the states of those particles are interdependent through what we can’t know about the system as a whole. Note that the two particles remain separate objects; they are simply both present in the one wave function. For two spin-up/spin-down type particles, this gives access to the so-called singlet and triplet states in addition to the normal binary states that the usual digital register can explore.

The quantum mechanics works like this. For the system of spin-up and spin-down, the usual way to look at this is in increments of spin angular momentum: spin-up is a 1/2 unit of angular momentum pointed up, while spin-down is a -1/2 unit of angular momentum, pointed the opposite direction because of the negative sign. For the entangled system of two such particles, you can get three different values of total spin: 1, 0 and -1. Spin 1 has both spins pointing up, but not ‘observed,’ meaning that it is completely degenerate with the 11 state of the digital register since it can’t fall into anything but 11 when the wave function collapses. Spin -1 is the same way: both spins are down, meaning that they have 100% probability of dropping into 00. The spin 0 state, on the other hand, is kind of screwy, and this is where the extra information-encoding space of quantum computing emerges. The 0 state could be the symmetric combination of spin-up with spin-down or the antisymmetric combination of the same thing. Now, these are distinct states, meaning that the size of your register just expanded from (00, 01, 10 and 11) to (00, 01, 10, 11, plus the antisymmetric 10-01 and the symmetric 10+01). So, the two-qubit register can encode 6 possible values instead of just 4. I’m still trying to decide if the spin 1 and -1 states could be considered different from 11 and 00, but I don’t think they can, since they lack the indeterminacy present in the different spin 0 states. I’m also somewhat uncertain whether you have two extra states to give a capacity in the register of 6 or just 5, because I’m not certain what the field has to say about the practicality of determining the phase constant between the two mixed spin-up/spin-down eigenstates, which is the only way to tell the symmetric combination from the antisymmetric one.
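For anyone who likes to see the states written out, here is a small NumPy sketch of the vectors I’m describing; this is just pencil-and-paper linear algebra in code form, with my own choice of basis ordering, not a simulation of a quantum computer:

```python
import numpy as np

# single-qubit basis states (my convention: spin-down is "0", spin-up is "1")
down = np.array([1.0, 0.0])   # |0>
up   = np.array([0.0, 1.0])   # |1>

# two-qubit computational basis, built with the tensor (Kronecker) product
k00 = np.kron(down, down)
k01 = np.kron(down, up)
k10 = np.kron(up, down)
k11 = np.kron(up, up)

# the spin-0 combinations discussed above
triplet0 = (k10 + k01) / np.sqrt(2)   # symmetric combination, 10+01
singlet  = (k10 - k01) / np.sqrt(2)   # antisymmetric combination, 10-01

for name, state in [("symmetric 10+01", triplet0), ("antisymmetric 10-01", singlet)]:
    print(name, state, "norm =", np.linalg.norm(state))
```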

As I was writing here, I realized also that I made a mistake myself in the interpretation of the qubit as I was writing my comment last night. At the very unentangled minimum, an array of two qubits contains the same number of states as an array of two normal bits. If I consider only the states possible by entangled qubits, without considering the phasing constant between 10+01 and 10-01, this gives only three states, or at most four states with the phase constant. I wrote my comment without including the four purely unentangled cases, giving fewer total states accessible to the device, or at most the same number.

Now, the thing that makes this incredibly special is that the number of extra states available to a register of qubits grows exponentially with the number of qubits present in the register. This means that a register of 10 qubits can encode many more numbers than a register of ten bits! Further, this means that fewer qubits can be used to make much bigger calculations, which ultimately translates to a much faster computer, provided the speed of turning over the register is comparable to that of a more conventional computer –which is actually somewhat doubtful, since a quantum computer would need to repeat calculations potentially many times in order to build up quantum statistics.

One of the big things that is limiting the size of quantum computers at this point is maintaining coherence. Maintaining coherence is very difficult and proving that the computer maintains all the entanglements that you create 100% of the time is exceptionally non-trivial. This comes back to the old cat-in-the-box difficulty of truly isolating the quantum system from the rest of the universe. And, it becomes more non-trivial the more qubits you include. I saw a seminar recently where the presenting professor was expressing optimism about creating a register of 100 Josephson junction type qubits, but was forced to admit that he didn’t know for sure whether it would work because of the difficulties that emerge in trying to maintain coherence across a register of that size.

I personally think it likely that we’ll have real digital quantum computers in the relatively near future, but I think the jury is still out as to exactly how powerful they’ll be when compared to conventional computers. There are simply too many variables yet which could influence the power and speed of a quantum computer in meaningful ways.

Coming back to my outrage at reading comments in that thread, I’m still at ‘dear god.’ Quantum computers do not work by teleportation: they do not have any way of magically putting a single object in multiple places. The structure of a wave function is defined simply by what you consider to be a collection of objects that are simultaneously isolated from the rest of the universe at a given time. A wave function quite easily spans many objects all at once since it is merely a statistical description of the disposition of that system as seen from the outside, and nothing more. It is not exactly a ‘thing’ in and of itself insomuch as collections of indescribably simple objects tend to behave in absolutely consistent ways among themselves. Where it becomes wave-like and weird is that we have definable limits to how precisely we can understand what’s going on at this basic level and that our inability to directly ‘interact’ with that level more or less assures that we can’t ever know everything about that level or how it behaves. Quantum mechanics follows from there. It really is all about what’s knowable; building a situation where certain things are selectively knowable is what it means to build a quantum computer.

That’s admittedly pretty weird if you stop and think about it, but not crazy or magical in that wide-eyed new agey smack-babbling way.

Calculating Molarity part 2: Vaccine structure

I’ve continued to think about this post at Respectful Insolence. You may already have read my previous post on this subject. I had a short conversation with Orac by email about the previous post; he had asked me what I thought about the alterations he made after thinking about my objections. One thing I answered that I thought he might add has sort of stuck with me and I think is worthy of a post of its own. What do you know, two posts in one week! This one may not be tremendously long, but it’s important and it bolsters the thesis of that post on Respectful Insolence. That post is about showing that the contamination is minimal; this is true, but I would modify it by saying that you have to know what you’re looking at before you claim it’s a problem.

My previous writing here has been directed at my fellow skeptics and could be used by antivaccine advocates to attack people whose efforts I normally support. I would rather my efforts be focused at the greater good: namely to support vaccines. I don’t write often about my specific research expertise, but I’m mainly a soft matter researcher and I have a great deal of experience with colloids, nanoparticles and liquid crystals. This paper they’re talking about is my cup of tea! More than that, I’ve spent time at the university electron microscopy lab using SEM and elemental analysis in the form of EDS, shooting electron beams at precipitates obtained from colloidal suspensions.

I feel that the strategy of showing that vaccine contaminants are extraordinarily minor, and not nearly as large as the antivaccine efforts try to claim, is a good effort, but it might also be the wrong strategy for tackling this science, particularly when the math gets screwed up. Part of my reason for feeling this way is that the argument actually hinges on the existence, or not, of particulate objects in the preparations that the antivaxxers are examining. The paper that Orac (and, in a quotation, Skeptical Raptor) is looking at focuses on the spurious occurrence of a small particle content revealed in vaccine samples under SEM examination. The antivaxxers are counting and reporting particles found in SEM, for which they report widely scattered values: very few in some samples, many in others. They also report instances where EDS shows unexpected metal content, like gold and others. Here, Orac notes that the particles are typically so few that they should be considered negligible, and that’s fair… the question is, what is the nature of these particles? And should we take the antivaxxer EDS results seriously? It seems poor form for me to criticize my fellow skeptics and not turn my attention to the subject they are analyzing; to allay my own conscience, I have to open my mouth! I therefore spent a bit of time of my own looking at the paper they were analyzing, “New Quality-Control Investigations on Vaccines: Micro- and Nanocontamination.” I won’t link to it directly because I have no respect for it.

I’ll deal with the EDS first.

edsschematic

This picture is from https://s32.postimg.org/yryuggo1x/EDSschematic.gif

EDS (energy-dispersive X-ray spectroscopy) is another spectroscopy technique, sometimes loosely called electron fluorescence: it is X-ray fluorescence excited by an electron beam. You shoot an electron beam (or X-ray) at a sample with the deliberate intent of knocking a deep orbital electron out of the atom. A higher-energy shell electron will then drop down into the vacant orbital and emit an X-ray at the transition energy between the two orbitals. The spectrometer then detects the emitted X-rays. Because atoms have differing transition energies set by the depth of their shells, you can identify the element based on the X-ray frequencies emitted. A precondition for seeing this X-ray spectrum is that your impinging electron beam must be at sufficiently high energy to knock a deep-shell electron up into the continuum, ionizing the atom, and that energy might actually be considerable. There is also a confounder in that a lot of atoms have EDS peaks at fairly similar energies, meaning that it can sometimes be hard to distinguish them.
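As a rough illustration of why those X-ray energies act as elemental fingerprints, and why the beam energy matters, here is a small Python sketch using Moseley’s law for the K-alpha line, E ≈ 10.2 eV × (Z − 1)^2. The element list and the approximation are mine; real EDS work on heavy elements like gold usually leans on the lower-energy L and M lines instead.

```python
# A rough sketch (my own, not from the paper) of why EDS line energies act as
# elemental fingerprints, using Moseley's law for the K-alpha transition:
# E ~ 10.2 eV * (Z - 1)^2. Heavy elements such as gold are usually identified
# by their lower-energy L and M lines in practice, but the trend is the point:
# the beam must carry more energy than the relevant ionization edge.
elements = {"C": 6, "O": 8, "Al": 13, "P": 15, "Ti": 22, "Au": 79}

for symbol, Z in elements.items():
    e_kalpha_keV = 10.2e-3 * (Z - 1) ** 2
    print(f"{symbol:2s} (Z={Z:2d}): K-alpha roughly {e_kalpha_keV:6.2f} keV")
# Carbon sits near 0.26 keV and aluminum near 1.5 keV, while a gold K line is
# tens of keV, far beyond a typical SEM beam, hence the reliance on L/M lines.
```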

Here is a periodic table containing EDS peaks from Jeol:

energy-20table-20for-20eds-20analysis-1

Now, when you perform SEM, you spread your sample onto a conductive substrate and observe it in a fair vacuum. To generate an SEM image, the electron beam is rastered point by point across an area of the sample and an off-angle detector detects the electron scatter. You’re literally trying to puff electrons up into the space over the sample by bombarding the surface. The substrate is usually conductive in order to replenish ejected electrons. The direction the ejected puff travels depends on the topography of the surface, and the off-angle positioning of the detector means that some surfaces face the detector and give bright puffs while surfaces facing away do not. This gives the dimensionality to SEM images. Many SEM samples are sputter-coated with a layer of gold to improve contrast by introducing an electron-dense material, but samples intended for EDS would actually be left naked. With SEM, you always have to remember that the electron beam is intrinsically erosive and damaging. The beam doesn’t just bounce off the surface; it penetrates into the sample to a depth I’ve heard called the interaction volume. The interaction volume is regulated by the accelerating voltage of the electron beam: higher accelerating voltages mean deeper interaction volumes. Crisp SEM images that show clear surface features are usually obtained with low accelerating voltages, which limit the interaction volume to only the surface features of the sample. SEM images obtained at higher accelerating voltages take on a sort of translucent cast because the beam penetrates into the sample and interacts with an interior volume.
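To put numbers on how strongly the accelerating voltage controls the interaction volume, here is a short sketch using the Kanaya–Okayama range formula. The material parameters below (roughly carbon-like, protein-ish density) are my own stand-ins, so treat the output as order-of-magnitude only.

```python
# A sketch of how accelerating voltage sets the interaction volume, using the
# Kanaya-Okayama range formula R [um] = 0.0276 * A * E^1.67 / (Z^0.889 * rho),
# with E in keV, A in g/mol, Z atomic number and rho in g/cm^3. The material
# values below are my rough stand-ins for a carbon-rich, protein-like sample,
# so treat the output as order-of-magnitude only.
def kanaya_okayama_range_um(E_keV, Z=6, A=12.0, rho=1.3):
    return 0.0276 * A * E_keV ** 1.67 / (Z ** 0.889 * rho)

for kv in (1, 5, 10, 30):
    print(f"{kv:2d} kV beam: penetration ~ {kanaya_okayama_range_um(kv):6.2f} um")
# Roughly 0.05 um at 1 kV versus ~15 um at 30 kV: a 30 kV beam punches straight
# through a thin biological film and excites whatever lies underneath it.
```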

The combination of EDS with SEM is a little tricky. In SEM, EDS gains its excitation from the imaging electron beam of the system. Now, what makes this tricky is that samples like protein antigens in a vaccine are predominantly carbon and have low electron density, making them low contrast. You hit the sample at low accelerating voltages to see surface features. If you try to do EDS, you must hit the sample with electrons at energies sufficient to eject deep orbital electrons: exactly how much depends on the depth of that atom’s potential and on which electron is ejected, but atoms like gold have deeper orbitals than atoms like carbon, meaning larger energies are needed to resolve the deeper gold orbital transitions. Energies favorable to SEM imaging are often very low compared to the energies needed to reach the EDS ejection thresholds. When you switch from imaging to EDS, you must be aware that you’re gaining a deeper penetration depth from the larger interaction volume of the beam. If your sample is thin and has low electron density, like carbonaceous biological molecules, you can easily be shooting through the sample and hitting the substrate, whatever that might be.

This can be a serious confounder because you don’t necessarily know where your signal is coming from. In the article commented on by Orac, the authors mention that they’re using an aluminum stub as an SEM mount, but they also talk about aluminum hydroxide and aluminum phosphate. The EDS aluminum signal is sensitive only to the aluminum atoms: you can’t know if the signal is coming from the mount or the sample! How do they know that the phosphate signal isn’t from phosphate buffered saline? That’s a common medical buffer that shows up in vaccine preparation. You can’t know if the material you’re looking at is aluminum phosphate from EDS or SEM.

As I mentioned, you also have to contend with the close spacing of EDS peaks: if you look at that periodic table linked above, there’s a lot of overlap. To call gold for certain, you really need to hit a couple of its EDS peaks to make sure you aren’t misreading the signal (all the peaks you get will have a Gaussian width, meaning that you might have a broad signal that covers a number of peaks). And, at least in the figure presented by Orac, they’re making their calls based on single-peak identifications. This is in addition to the other potential confounders Orac brought up: exogenous grit and the possibility that they’re reusing their SEM stub for other experiments. How can they be certain they aren’t getting spurious signals?

For EDS, I would be careful about making calls without having some means of independent analysis… like knowing what materials are supposed to be present and possibly hiring out elemental analysis of the sample. Will the gold or zirconium appear in the second analysis? Remember, science depends on being able to reproduce a result… if a signal was spurious, the telltale sign is not being able to make it dance the second time around! Reporting everything doesn’t always mean that you know what you’re looking at. When I was doing EDS more routinely, I had a devil of a time hitting titanium over silicon and gold signals… I knew titanium was present because I put it there, but I had trouble hitting it or ascribing it to specific particles in the SEM image. The EDS would not routinely allow me to reproduce an observation before the sample simply exploded while I was pounding high-energy electrons into it.

Referring directly to the crank paper myself, I note that they make some extremely complicated mineral calls in their tables from the EDS data. Again, be aware that EDS is only sensitive to atoms specifically: you can’t know if aluminum signals are aluminum phosphate or aluminum hydroxide or aluminum from the SEM stub. To identify mineral crystals, you need precise ratios of the contents, or X-ray diffraction, or maybe Raman analysis of the mineral’s crystal lattice.

From their SEM imagery, it looks to me like they’re using a very strong voltage, which is confirmed in their methods section. They claim to be using voltages between 10 kV and 30 kV. These are very high voltages. For good surface resolution of a proteinaceous sample, I restricted myself to around 1 kV to 5 kV, and sometimes below 1 kV, and found that I was cutting holes through the specimen at anything much higher than that. Let me actually quote a piece of their methods for sample mounting:

A drop of about 20 microliter of vaccine is released from
the syringe on a 25-mm-diameter cellulose filter (Millipore,
USA), inside a flow cabinet. The filter is then deposited on an
Aluminum stub covered with an adhesive carbon disc.

They put a cellulose filter from Millipore into this SEM. I would have dried directly onto a clean silicon substrate. Here are the appropriate specimen mounts from Ted Pella. Note that the specimen mounts are not cellulose. Cellulose filters are used for a completely different purpose from normal SEM specimen mounts and, really importantly, you can’t efficiently clean a cellulose filter before putting your sample onto it. And, since these filters are actually designed to easily collect dust and grit as a part of their function, it is actually kind of difficult to get crap off of them. Without a control showing that their filters are clean of dust, there’s no way to be certain that this article isn’t actually a long survey examining the dust and foreign crap that can be found impregnating cellulose filters since the SEM acceleration voltages are unquestionably high enough to be cutting through a thin, low contrast biological layer on the top.

I won’t say more about the EDS.

So, I wanted also to address the particulate discussion a bit more directly too.

First off, from the paper directly, there is no real effort at reproduction or control. The source of the particles mentioned could be the carbon adhesive, the cellulose membrane or the vaccine sample. Having thought about it, I personally would bet on that cellulose: you don’t use them this way! They claim to be making preparations in a flow hood to keep dust out, but that doesn’t mean the dust isn’t already on any of the components being brought into the hood.

I stand by my original criticism of Orac’s post that these particles can’t be effectively quantified by molarity: those shown in the paper are all clearly micron scale objects, meaning that they have relatively large mass in and of themselves and constitute significant quantities of material. A better concentration unit for describing them would be mg/mL. I repeat that we don’t know the source of these objects for certain because the experiment is performed without true replication! If the vaccines are the source, the authors should have been able to perform a simple filtration of a vaccine specimen by a 0.22 um or 0.1 um filter and show that this drastically reduces contamination because many of their micrographs are of objects that should not have passed through such a filter… but they did no comparable experiment.

As I’ve been thinking about it, there are a couple of different potential particle sources that could be observed under these conditions. The first is dust, as already detailed. The second possible source is the vaccine components themselves, which are not contaminants at all. Orac used a quote by Skeptical Raptor, who was rebutting the idea of Aluminum hydroxide being a strong contaminant by again mistaking particles for molecules. I won’t get into his difficulty calculating concentration since it was similar to what happened to Orac, but he was speaking about Aluminum hydroxide being a chemical that is a tiny fraction of a nanogram in a vaccine and therefore much less than environmental exposure to aluminum. I know I probably annoyed Orac with my thoughts about this as I was thinking out loud, but Aluminum hydroxide is not any sort of contaminant in the Cervarix vaccine friend Raptor was talking about: it’s the Adjuvant! Here’s a product insert for a Cervarix vaccine.

cervarix-pi-pil

In this vaccine, I found that there is approximately 500 ug of Aluminum hydroxide adjuvant added per 0.5 mL vaccine dose. If you look in the Aluminum hydroxide MSDS, there is no LD50 for this compound, no carcinogen warnings and no other special health precautions for chronic exposure (it irritates your eyes on contact, but what doesn’t?). It got a 1 as a chemical hazard. Antivaxxers are crazy about being anti-aluminum based upon decades-old information that has since been rebutted, but for all intents and purposes, this material is pretty harmless. One special thing about it is that it’s actually very insoluble unless you drop an acid or a strong base on it, meaning that it should be no surprise if it’s a particulate in a vaccine at neutral physiological pH (Ksp = 3×10^-34)! In vaccine design (and I haven’t spent a huge amount of time looking), the main point of the adjuvant is to cause the antigen to be retained at the site of injection for a prolonged time so that the body can be exposed to it for a longer period. The adjuvant binds the vaccine antigen and, by being an insoluble particle, it lodges in your tissues upon injection and stays there, holding the antigen with it. I found immunology papers on PubMed calling this the establishment of an ‘immune depot’ for stimulating immune cells. Over a prolonged period, the slight solubility implied by the Ksp will allow this compound to gradually dissolve and release the antigen out of the injection site, but Aluminum hydroxide will never have a very high concentration in the body as a whole: that’s what the Ksp says, that the soluble phase of the salt components can be no greater than about 2.4 nM (by my calculation), which is well below the exposure limits recommended in the MSDS of between 30 nM and 100 nM.
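For the curious, here is the back-of-the-envelope arithmetic behind that nanomolar figure (my own arithmetic lands around 1.8 nM, the same ballpark); the second estimate, fixing the hydroxide at physiological pH, is my own addition and only pushes the dissolved aluminum lower.

```python
# Back-of-the-envelope arithmetic for the solubility quoted above, assuming
# Al(OH)3 dissolving as Al3+ + 3 OH- with Ksp = [Al3+][OH-]^3 = 3e-34.
Ksp = 3e-34

# 1) Naive dissolution into pure solvent: [Al3+] = s and [OH-] = 3s,
#    so Ksp = 27 s^4. This is the low-nanomolar figure in the text.
s = (Ksp / 27) ** 0.25
print(f"naive solubility: {s:.2e} M (~{s * 1e9:.1f} nM)")

# 2) My own extra check: fix [OH-] at physiological pH 7.4 instead.
#    The hydroxide already present pushes dissolved aluminum far lower.
oh = 10 ** -(14 - 7.4)
al = Ksp / oh ** 3
print(f"at pH 7.4: dissolved Al3+ is only {al:.1e} M")
```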

But, if you look at vaccine adjuvant under SEM, it will be a colloidal particle with a core that reads as aluminum in the EDS! You can even see examples of this in the target paper itself: the SEM in figure 1 looks like a colloid fractal (they call it ‘crystals’, but it looks like a precipitate deposition fractal), and the colloids are probably aluminum hydroxide particles caked with antigen protein (again, EDS can’t distinguish between aluminum hydroxide mixed with PBS and aluminum phosphate, contrary to what the caption says). And, these colloids are INTENDED TO BE THERE by the manufacturer of the vaccine. Note, this is a structure designed into the vaccine to help prolong the immune response.

I’ve been debating the source of the singleton particles that the authors of this paper take many SEM pictures of in the remainder of their work. They are mostly not regular enough to be designed nanoparticles or precipitate colloids and they often look like dust (Orac mentions as much). I’ve been skeptical of the sample preparation practices outlined in the paper: I think adding the cellulose membrane to the sample is asking for trouble. You use substrates in SEM to avoid contaminant issues and to provide surfaces that are easily cleaned prior to use. The cellulose polymer and vaccine antigens are all low contrast… at 30 kV accelerating voltage, the SEM could actually be interacting down into the volume of the filter (as I mentioned above). If this isn’t dust sitting on the filter prior to dropping the vaccine onto it, it might also be dust dropped randomly into the cellulose monomer during the manufacturing process and trapped there while the membrane polymerized. The filter won’t care about most of this sort of contamination because the polymer will immobilize it. That’s another possibility, but the paper tests almost no hypotheses for purposes of error checking, so we’ll never know.

Overall, I found that paper incompetent. There’s no reason to take it seriously. I hope that my writing this blog post will help balance the previous post which attacked science advocates for misusing the science.

Calculating Molarity (mole/L)

As a preface to this post, I want to make doubly clear my stance on vaccines. There is no good scientific evidence to support the notion that vaccination is in any way an unsafe practice or that it is responsible for any manner of health problem above and beyond the diseases that vaccines protect against. Vaccination is the single most powerful health intervention created in the last 150 years of medicine. There is, in my opinion, some potential for this post to be used to damage the credibility of a person whom I believe to be a necessary positive force in the healthcare scene, and I want to make it clear that this was not the intention of my writing here. Orac is a tireless advocate for science and for clear, skeptical thought in general, and I respect him quite deeply for the time he puts in and for the static he puts up with.

That said, I believe that science advocacy is a double-edged sword: if you don’t get it right, it can come back to bite you.

I love Respectful Insolence, but I’ve got to ding Orac for failing to calculate molarity correctly. He is profoundly educated, but I think he’s a surgeon and not a physicist. We all have our weak points! (Thank heaven above I’m not ever in the operating room with the knife!)

In this post, which he may now have edited for correctness (and it seems he has), he makes the following statement:

More importantly, look at the numbers of precipitates found per sample. It ranges from two to 1,821.

O.M.G.! 1,821 particles! Holy crap! That’s horrible! The antivaxers are right that vaccines are hopelessly contaminated!

No. They. Are. Not.

Look at it this way. This is what was found in 20 μl (that’s microliters) of liquid. That’s 0.00002 liters. That means, in a theoretical liter of the vaccine, the most that one would find is 91,050,000 (9.105 x 10^7) particles! Holy hell! That’s a lot. We should be scared, shouldn’t we? well, no. Let’s go back to our homeopathy knowledge and look at Avogadro’s number. One mole of particles = 6.023 x 10^23. So divide 91,050,000 by Avogadro’s number, and you’ll get the molarity of a solution of 91,050,000 particle in a liter, as a 1 M solution would contain 6.023 x 10^23 particles. So what’s the concentration:

1.512 x 10^-16 M. That’s 0.15 femtomolar (fM) (or 150 attomolar), an incredibly low concentration. And that’s the highest amount the investigators found.

Anybody see the mistake? Let’s start here: Avogadro’s number is a scaling constant for a linear relationship and it has a unit! The units on this number are atoms(or molecules) per mole. It converts a number of atoms or molecules into a number of moles.

‘Moles’ is a convenient person-sized number that is standardized around ‘molecular weight,’ which is a weight unit that arbitrarily says that a single carbon atom has a weight of ’12’ and results in atomic hydrogen having a weight of ‘1.’ That’s atomic mass units (or AMU), which is usually very convenient for calculating relative weights of molecules by adding up all the AMU of their atomic constituents. To use molarity, we usually need a molecular weight in the form of Daltons, or grams/mole. Grams per mole says that it takes this many grams in mass of a substance for that substance to contain a single mole’s worth of molecules (or atoms) where it is then implicit that the number of molecules or atoms is Avogadro’s number.

‘Mole’ is extremely special. It refers to a collection of objects that are atomically identical! If you have a mole of a kind of protein, it means that you have 6.02 x 10^23 copies of that identical object. If you compare two proteins, the same molar number of each at different molecular weights is a different overall mass. Consider Insulin (5808 g/mole) compared to the 70S Ribosome (2,500,000 g/mole)… one mole of Insulin would weigh 5.8 kg while one mole of 70S Ribosome would weigh 2.5 metric tons!!! If they have roughly the density of water (proteins are actually a bit denser), what would be the volume of 1 mole of 70S Ribosome compared to 1 mole of Insulin? It would be 430 times greater for the Ribosome: roughly 2,500 L for the 70S Ribosome while Insulin is about 6 L!

Notice something here: a mole of objects with a big molecular weight occupies a bigger volume than a mole of objects with a smaller molecular weight… even though the two are at the same molarity. Molarity as a number depends strongly on the molecular weight of the substance in question in order to mean anything at all. For the Ribosome, the same molar concentration as for Insulin means a solution containing a much larger amount of solute.
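Here is a small sketch of that insulin-versus-ribosome comparison; the ~1 g/mL density is an assumption chosen to match the round numbers above (real proteins are closer to 1.35 g/mL, which shrinks both volumes but leaves the ~430x ratio alone).

```python
# A small sketch of the insulin-versus-ribosome comparison above. One mole of
# anything is ~6.022e23 copies, so the mass of a mole is just the molecular
# weight in grams; the volume then follows from an assumed density. I use
# ~1 g/mL here to match the round numbers in the text (real proteins are
# closer to 1.35 g/mL, which shrinks both volumes but not the ~430x ratio).
DENSITY_G_PER_ML = 1.0  # assumed density

def mass_and_volume_of_one_mole(mw_g_per_mol):
    mass_g = mw_g_per_mol                        # grams per mole, by definition
    volume_L = mass_g / DENSITY_G_PER_ML / 1000  # mL -> L
    return mass_g, volume_L

for name, mw in [("insulin", 5808.0), ("70S ribosome", 2.5e6)]:
    mass_g, vol_L = mass_and_volume_of_one_mole(mw)
    print(f"1 mole of {name:12s}: {mass_g / 1000:7.1f} kg, ~{vol_L:6.0f} L")
# ~5.8 kg and ~6 L for insulin versus ~2,500 kg and ~2,500 L for the ribosome.
```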

In the post in question on Respectful Insolence, Orac is talking about a paper which observes particulate matter derived from vaccine specimens in an SEM. It is clear from the authorship and publication of the paper that the intent is to find fault in vaccines based upon the contents of materials examined by this probing… from what little I know about the paper, it does not seem to be producing any information that is truly that informative. But, you can’t fault a paper on a point that may not actually be as flawed as an initial interpretation would imply. The paper reports the number of particles observed per 20 uL of a solvent. They find as many as 1,821 particles per 20 uL. We are not told for certain what these particles are composed of, except that the investigators aren’t sure, shot an overpowered EDS beam at everything, and reported even the spurious results. Orac scales up this number to 1 L to get 9.1 x 10^7 particles and then divides by Avogadro’s number to find what proportion this is of one mole of these particles, never mind that we don’t know how big the particles are in terms of molecular weight or how dense in volume per mass. He declares it to be about 0.15 femtomolar and runs on with how tiny the concentration is. As I initially wrote this, I focused on the gleeful way in which Orac does his deconstruction, in large part because it really isn’t a valid thing to laugh at when the deconstruction is not properly done.

Here is how someone of my background approaches the same series of observations. I can see from the micrograph in the blog post that the scale bar is something like 2 mm (2,000 microns)… the objects in question are maybe tens to hundreds of microns in size. Let’s make a physicist supposition here and think about it: pulling this out of my ass, I’ll claim these are 1,821 approximately spherical, identical particles of sodium chloride, each 40 microns in diameter. That gives a volume of 4/3*Pi*20^3 um^3, or about 3.4 x 10^-14 m^3 per particle and 6.1 x 10^-11 m^3 for the whole collection of particles. Now, density is usually given in terms of g/cm^3 or g/mL… there are 100 cm per meter and you must convert three times to cube it, so 6.1 x 10^-11 m^3 x 100^3 = 6.1 x 10^-5 cm^3. A cubic centimeter is a mL, so that’s about 0.06 uL of solid salt hiding in the 20 uL sample. The density of sodium chloride is 2.16 g/mL, so that volume amounts to about 0.13 mg of salt dissolved in 20 uL. The molecular weight of sodium chloride is 58.44 g/mole, or 58.44 mg/mmole, which gives about 2.3 x 10^-3 mmole. And 2.3 x 10^-3 mmole in 0.02 mL is about 0.11 mmole/mL.

That’s 0.11 mole/L……. 0.11 M!!!! And if those particles are closer to 150 microns across, which that scale bar easily allows, the same arithmetic lands near 6 M.
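Here is the same estimate as a small Python function so you can fiddle with the assumptions yourself; the particle count, composition and diameters are the same invented numbers as above, nothing more.

```python
# The same estimate as above, wrapped up so the assumptions are explicit: a
# known count of identical solid NaCl spheres dissolved into a 20 uL sample.
# The particle count, composition and diameters are the invented numbers from
# the text; change them to see how strongly the answer depends on them.
import math

def molarity_from_particles(n_particles, diameter_um, sample_uL=20.0,
                            density_g_per_mL=2.16, mw_g_per_mol=58.44):
    r_cm = diameter_um / 2 * 1e-4                           # um -> cm
    solid_mL = n_particles * (4 / 3) * math.pi * r_cm ** 3  # cm^3 == mL
    mass_g = solid_mL * density_g_per_mL
    moles = mass_g / mw_g_per_mol
    return moles / (sample_uL * 1e-6)                       # mol per liter

for d_um in (40, 100, 150):
    print(f"{d_um:3d} um particles -> {molarity_from_particles(1821, d_um):5.2f} M")
# ~0.11 M at 40 um, ~1.8 M at 100 um, ~6 M at 150 um... molar, not femtomolar.
```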

Let’s pause for a second. Is that femtomolar?

Orac missed the science here! I initially wrote that he should be apologizing for it, but I’ve revised this so that my respect for his work is more apparent. The volume of these particles and their composition is everything. A single particle with a molecular weight in the gigadaltons or teradaltons range is suddenly a very substantial mass in low particle number. If these particles are as I specified and composed of simple salt, they are at a molarity that is abruptly appreciable. If we make these into tiny balls of Ricin, that’s unquestionably a fatally toxic quantity!

As with all things, the dose makes the poison and there’s no Ricin in evidence, but this argument Orac has made about concentration is, in this particular case, catastrophically wrong. A femtomolar concentration of a big particle that can be dissolved could be a large dose!

I forgive him and I love his blog, but let this be a lesson… you don’t just divide by Avogadro’s number in order to get meaningful concentrations!

A Physicist Responds to “The Three Body Problem” part 2

To start with, this post will be almost pure spoiler. I’m assuming, if you got through part 1, that you’ve read Cixin Liu’s book.

I’ve gotten partway through the second book in the trilogy myself, meaning that I’ve had some additional time to think about the contents of this post, but that I don’t know the ultimate outcome of the series.

This post is addressing a central conclusion of the first book, a major piece of science fiction that I didn’t address in the previous post because it is so intrinsic to the plot. This is about the idea of the Sophon induced ‘science lock-down.’ An alien race is going to invade the planet Earth in 400 years and this race is concerned that Human technology will advance in that time to be more powerful than the alien race’s own technology, so the aliens have played a trick to prevent humans from performing fundamental scientific research in order to prevent human technology from developing.

The key to this is the idea of the “Sophon.” As mentioned in the previous post, the word ‘proton’ was chosen over the name of an actual fundamental particle in order to facilitate a wordplay in Chinese… particularly the Chinese word that got translated into English as “Sophon.” This word was chosen as a modification of the word “Sophont.” As any science fiction aficionado can tell you, this word means “intelligent creature.” A Sophon is intended to be an intelligent proton, a robot the size and mass of a subatomic particle. These Sophons are capable to some extent of changing their size and shape and can communicate back to the aliens instantaneously. Sophons can also travel, as subatomic particles, at very nearly the speed of light.

You can see right from that paragraph the first place where the Sophon (and therefore the idea of science lock-down) is broken. Sophons communicate with the aliens instantaneously by means of quantum entanglement. If you’ve read anything else I’ve written, you know how I feel about the cliche of the ‘Ansible.’ Entanglement can’t be used to pass information: quantum mechanics doesn’t allow for this, no matter how you misinterpret it. Entanglement means correlation, not necessarily communication. This quantum mechanical effect is an interesting and very real phenomenon, but to understand what it actually means, you need to understand more about the rest of what quantum mechanics is… the story of ‘Three Body Problem’ never goes there. I won’t go there either except to suggest learning about the Bell Inequality.

The reason that Sophons are capable of producing science lock-down is that they can falsify data coming out of particle accelerators. Sophons can fly through the sensors in particle detectors and trigger them falsely, creating intelligently designed noise. On the surface, this is a horrible prospect, making it impossible for humans to probe the deep structure of matter and therefore attain the understanding necessary to build Sophons ourselves. Do not pass go: no ‘correct’ results means no good science!

Obviously, this looks really bad. Very interesting science fiction idea. On the other hand, it also demands a bit of discussion, both about how particle accelerators work and on how science works.

Particle accelerators are the wrecking ball of the scientific enterprise. They generate data almost entirely by accelerating charged particles up to substantial fractions of the speed of light and slamming them into each other and into stationary targets. Particle physicists are all about impact cross sections and statistical probabilities of outcomes. The gold standard of a discovery in particle physics is a 5-sigma observation. ‘Sigma’ is, of course, the standard deviation, the statistical yardstick by which scientists use the Gaussian distribution to judge probability of occurrence: it’s the Bell Curve. The average is the peak of this curve, while one standard deviation is one sigma to either the left or right of the average. Particle physics is set up around a simple statistical weight tabulation which can be couched as a question: “How likely is it that my observation is false/true?” If an event observed in the accelerator is spurious (that is, if the event is noise), the statistical machinery of particle physics places it close to the peak of the Bell Curve, that is, at the average, which is to say that the event observed is ‘not different’ from noise. A 5-sigma event is one that has been so well observed statistically that the difference from noise is five standard deviations from the peak of the Bell Curve out into the tail (about 99.99997% of the curve’s area lies below that point, leaving a one-sided tail of roughly 3 in 10 million). This is essentially like saying that a conclusion is far better than 99.9999% certain to be NOT false.
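If you want to see where those percentages come from, here is a two-line check of the Gaussian tail areas (using scipy; the sigma values chosen are just for illustration):

```python
# A quick check of what "5 sigma" means in probability terms (scipy's norm.sf
# gives the one-sided Gaussian tail area; the sigma values are just examples).
from scipy.stats import norm

for n_sigma in (1, 2, 3, 5):
    tail = norm.sf(n_sigma)  # chance that pure noise fluctuates this far up
    print(f"{n_sigma} sigma: tail = {tail:.2e}, area below = {1 - tail:.7%}")
# At 5 sigma the tail is ~2.9e-7, i.e. about a 1-in-3.5-million chance that
# noise alone produced the excess.
```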

Do you know how big a particle accelerator data set is? They include billions of events. Particle accelerators run for months to years on end, collecting data automatically 24 hours a day. And, the whole enterprise is based on the assumption that every observation independently might be a false outcome. Statistical weight determines the correctness of an observation. Physical theory exists to model both the trends and noise of an experiment.

As I said above, the purpose of the Sophons is to produce false results within the sensors of an accelerator’s detector apparatus. The main detection devices in modern systems are calorimeters and photomultipliers. Calorimeters simply detect heat deposition within the sensor volume while photomultipliers give a small current when they are perturbed by a passing electric charge. Usually, detector assemblies contain layers of sensors wrapped around the collision target, where photomultipliers form multiple inner layers and calorimeters reside around the outside of the whole assembly. There are usually also magnetic fields applied through the detector so that charged particles will tend to follow curving paths as they pass outward through the different layers away from the collision site. There are other detector technologies and refinements of these ideas, but this gives a basic taste.

Here is the ATLAS detector at the Large Hadron Collider:

atlasdet

Using this layered design, photomultipliers can resolve the path of outward-flying particles, determining their charges based upon their path curvature through the magnetic fields established by the solenoids, and then the calorimeters determine how much energy was in each particle when that particle heats the calorimeter upon crashing into it. Certain particle types penetrate shields differently, necessitating layers of calorimeters with different structural characteristics in order to resolve different particle types. Computers correlate detection traces between the layers and tabulate which heat depositions relate to which flight paths. Particle physicists can then do simple arithmetic to count up all the heats and all the charges on all the particles detected for one collision event and deduce which subatomic particles appeared during a particular collision. Momentum and energy/mass get conserved relativistically while charge is directly conserved, and you simply add up what went in in order to account for what comes out during a collision.
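As a cartoon of that bookkeeping, here is a toy event in Python: the particle content and numbers are invented for illustration (a 10 GeV electron-positron collision making a muon pair, masses neglected), but the additions are exactly the ones described above.

```python
# A cartoon of the bookkeeping described above: sum the charges and the
# relativistic four-momenta (E, px, py, pz) of everything seen leaving one
# collision and compare against what went in. The event is invented for
# illustration (a 10 GeV electron-positron collision giving a muon pair),
# masses are neglected and natural units (c = 1, energies in GeV) are used.
import numpy as np

incoming = [  # (charge, four-momentum [E, px, py, pz])
    (-1, np.array([5.0, 0.0, 0.0, +5.0])),   # electron
    (+1, np.array([5.0, 0.0, 0.0, -5.0])),   # positron
]
outgoing = [
    (-1, np.array([5.0, +3.0, +4.0, 0.0])),  # mu-
    (+1, np.array([5.0, -3.0, -4.0, 0.0])),  # mu+
]

def totals(particles):
    charge = sum(q for q, _ in particles)
    p4 = sum(p for _, p in particles)
    return charge, p4

q_in, p_in = totals(incoming)
q_out, p_out = totals(outgoing)
print("charge in / out:", q_in, q_out)          # both zero: charge balances
print("four-momentum in :", p_in)
print("four-momentum out:", p_out)              # matches, within detector error
invariant_mass = np.sqrt(p_out[0] ** 2 - np.sum(p_out[1:] ** 2))
print("invariant mass of the event:", invariant_mass, "GeV")  # ~10 GeV
```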

In order to falsify data within such a detector, the smart subatomic particle, the Sophon, would need to fly back and forth through the detector layers, switching its charge polarity between passes and somehow dumping heat into calorimeters without being destroyed or lost in some way. How the Sophons get their kinetic energy is somewhat opaque in the story and I spent some time abortively rereading the TBP trying to figure this out, but it can be assumed that they possess a self-contained power supply which enables them to either recharge themselves from their surroundings, or simply dip into a long term battery reserve whenever they need it. They are clearly able to accelerate to highly relativistic velocities in a self-contained manner since they flew across the void from the alien homeworld to Earth, and then slowed down without external assistance at Earth. You could presume that they are able to write completely fake collision events into the detector, pretending to travel wrong velocities and masquerading as false charges and masses.

Now, like I said, this is terrible! The experiments can’t always give reliable results. Never mind that the real experiments must always be filtered for the fact that false results exist in the data set anyway.

In the paragraph above, I said “can’t always give reliable results” because the real data set of collision events still exists behind the fake data set. The Sophon flying back and forth can’t prevent real particle collisions from occurring and also interacting with the detector. The particle physicists would actually know right away that something isn’t right with the systematic structure of the experiment because they know how many particles are in their particle beams and also know the cross-sections of interaction, meaning that they start the experiment knowing statistically how many collision events to expect in a unit of time: Sophon interference with the experiment would only add to the expected number. What you get is two overlapping data sets, one that is false and one that’s true. If the false data is much different from the true data, you inevitably bin them as distinct results because they would create a bimodal distribution in your data set… some measurements add up to five-sigma toward one result while a distinct set will ultimately add up as five-sigma toward something distinctly different. Then, you just let the theorists work out what’s what.

In the story, the scientists just throw up their hands and declare ‘sophon barrier’ saying that science ‘can’t advance’ because it can’t discern correctness.

This prospect has really kind of sat in the back of my mind, nagging me. I’m not completely certain that the author understands the overall scientific mindset or philosophy. Science starts out assuming that all results might be false! Having a falsehood layered on top of other potential falsehoods is really not that deterring to me, particularly since the scientists know the Sophon interference is present by the end of the story. Science as a process is intrinsically concerned with error checking and finding systematic interference, even intelligent fabrication of data within the scientific community –you think the Sophons are bad: somebody simply altering the data set as they see fit, completely independent of the experiment, is worse. And, we deal with this in reality! At least with the Sophons, a real data set must sit behind the mixture of false events. If the data set is merely bimodal or multimodal with statistics backing up each conclusion, you design experiments to address each… at some point, consistency of a result must ultimately dominate. Sorting out this noise would take time, but it would be unable to stop progress overall, especially since the scientists know the noise is present!

Now, giving false data is actually somewhat different from prohibiting data collection. This facet of the story is somewhat unclear to me (my memory fails). You can imagine that the Aliens, realizing that the humans know about the tampering, would rather than leaving humans with a data set that contains some good data simply have their Sophons swamp the detectors. In this scenario, the Sophons fly back and forth within the detector giving so many false events that they prevent the detector from triggering on and resolving real events. They could simply white us out!

While this would indeed be a bad thing, it would have a sort of perverse effect on a real scientist. Consider: you know how fast your instrument triggers and you know the latency required for it to recover… this gives you a measure of how quickly and at what frequency the Sophon must act! You can just imagine the particle beam physicist salivating at the prospect of his Nobel prize in the nascent field of Sophon physics. Imagine the flood of grant proposals around the subject of baiting a Sophon into a particle beam line with the performance of basic science, only to turn the particle beam against the Sophon in order to smash it apart and see how it works!

Really, if you were a high energy physicist and you knew unequivocally that a smart particle was flying around inside your instrument, how could you not be trying to figure out a way to probe it? It’s like getting Maxwell’s demon handed to you on a shiny platter!

A realistic outcome here is actually not the prohibition of science. It would be an arm-wrestling match with the Aliens: at the very best, leaving us with a partial data set that we can ultimately advance with, or giving us the chance to probe the Sophons directly.

The prospect of probing the Sophons directly contains the danger that it would be hard to distinguish engineered results from real ones, but every demonstration by the Sophons of some other confusing behavior is in fact data itself. The author made a huge argument in “Three Body Problem” that Sophons are typically point-like and would probably subscribe to the notion that they can’t be probed since they would essentially have no collision cross-section; I would resist this idea because it either violates or misunderstands quantum mechanics, which I detailed a bit in the previous post. The author might even suggest that Sophons can’t be probed because they can dodge collisions with other particles in the collider, but I would doubt that simply because of the Sophon’s inability to know things about other particles, due to simple quantum mechanics and the effect of relativity altering the rates of information flow: the decision would need to be made very quickly and it would have a built-in imprecision from Uncertainty! Moreover, the more time the Sophons spend performing confusing behavior in order to foil their own direct examination, the less time they can spend faking data in the experiments directed at basic research. As you may be aware, machines like the LHC are actually devoted to many lines of research simultaneously and physicists are remarkably adept at piggybacking one experiment on top of another in order to conserve resources and obtain additional bang for the same buck.

One final aspect of the “science lock-down” which I take some umbrage with is the notion that only particle accelerators are responsible for fundamental research. They aren’t. There is a huge branch of physics and chemistry probing quantum mechanics based on spectroscopy. Lasers are unequivocally quantum mechanical devices and much probing into basic quantum mechanics is performed by some variation on the theme of lasing. The Nobel prize winning discovery of the Bose-Einstein condensed matter phase did not occur in a super-collider; it occurred on an optical bench. Most super-precise clock mechanisms used by the human race at this point are optical devices and have absolutely nothing to do with particle accelerators: optical gratings and optical metrology are driving the expansion of precision measurement! The leaps which are in the process of producing quantum computers (one device the author specifically prohibits in book 2 under the science lockdown!) are not being made at particle accelerators at all: they are being made in optical lattice traps on lab benches and in photo-etched masks used to produce nano-scale solid state resonators. We are currently in the process of building analog quantum computers for the purposes of simulating quantum chromodynamic systems using optical and nano-resonator devices… and this development has nothing to do with particle accelerators, except as a means of reproducing results! The author made the argument that humans couldn’t build massive super-collider accelerators, Synchrotrons and Linacs, fast enough to match the production capacity that the Aliens have for making the Sophons needed to foil these instruments, but the author never even touched on the rapidly expanding field of plasma wake field acceleration, which uses lasers to accelerate particles to relativistic speeds in bench-top apparatuses for a fraction of the price of a super-collider.

The bleeding edge of physics is very multi-pronged; the Higgs boson discovery carried out in a synchrotron may someday be reproduced by a bench-top plasma wake field accelerator for a tiny fraction of the price. Can ‘locking down’ big particle accelerators like the LHC prohibit the extensive physical exploration that is occurring due to a mostly unrelated black swan technological development like lasers? I really don’t think it can. Tying one arm behind your back leaves you with the other arm. It’s true that the mothballing of the superconducting super-collider in the United States prevented humans from definitively discovering the Higgs boson for more than a decade, but that isn’t to say that there aren’t other avenues to the same discovery.

Do I think that science lockdown is possible by the means suggested by the author? Not really. And, especially not for devices like quantum computers, which is one critical development that the author suggests is prohibited by sophon interference in the second book.

Don’t get me wrong, this is a good piece of science fiction and it’s a wonderful thought experiment, but like many thought experiments, it’s arguable.

edit 2-16-17

I saw a physics colloquium yesterday delivered by a Nobel prize winner. His lab is currently working on a molecular spectroscopy experiment directed at measuring the electric dipole moment of the electron. A precision measurement of this value ties directly to the existence (or not) of supersymmetric particle theory… which is one candidate expansion of the Standard Model of particle physics. This experiment is not being done in a super collider, but on an optics bench for a fraction of the price. Experiments like this one completely invalidate the thesis of Three Body Problem: that by locking down colliders there is no other way for particle physics to advance. There are other ways that are comparatively cheap and require fewer resources and less manpower. Physics would find a way.