Revoke Shaquille’s Doctorate in Education… he doesn’t deserve it.

We are in a world where truth doesn’t matter.

Read this and weep. These men are apparently the authorities of truth in our world.

Everywhere you look, truth itself is under assault. It doesn’t matter whether you believe it, and it doesn’t matter what you want it to say. Truth is not beholden to human whims. We can’t ultimately change it by manipulating it with cellphone apps. We can’t reinterpret it even if we wanted to. One of these days, however highly we hold ourselves, the truth will catch up. And we will deserve what happens to us after that point in time.

“It’s true. The Earth is flat. The Earth is flat. Yes, it is. Listen, there are three ways to manipulate the mind — what you read, what you see and what you hear. In school, first thing they teach us is, ‘Oh, Columbus discovered America,’ but when he got there, there were some fair-skinned people with the long hair smoking on the peace pipes. So, what does that tell you? Columbus didn’t discover America. So, listen, I drive from coast to coast, and this s*** is flat to me. I’m just saying. I drive from Florida to California all the time, and it’s flat to me. I do not go up and down at a 360-degree angle, and all that stuff about gravity, have you looked outside Atlanta lately and seen all these buildings? You mean to tell me that China is under us? China is under us? It’s not. The world is flat.”

This was spoken by a man with a public platform and a Doctorate in Education. This is the paragon of teachers!

{Edit 3-20-17: since I’m thinking better about this now, I will rebut his meaningless points.

First, arguments about whether or not Columbus discovered America are a non sequitur as to whether or not the Earth is round.

Second, driving coast to coast can tell you very little about the overall roundness of the Earth, especially if you aren’t paying attention to the observations that could. The curvature of the Earth is extremely subtle: the surface drops away from a flat tangent line by only about 8 inches over the first mile (and the drop grows quadratically with distance). This means that on the scale of feet, the deviation is thousandths of an inch, so you can’t measure the difference from flat at the dimensions a human being can meaningfully experience while standing directly on the surface. Can you see the curvature looking out over fifty miles from a skyscraper in the middle of Atlanta, or distinguish the tiny difference in the direction of ‘up’ between two skyscrapers separated by ten miles? You can’t resolve tens of feet with your eyes at a distance of miles. That said, you actually can see Pikes Peak emerge over the horizon as you come out of Kansas into Colorado, but I suppose you would explain that away by some sort of giant conspiracy theory elevator device. To actually start to see the curvature directly with your eyes to a meaningful degree, you need to be at an altitude of hundreds of thousands of feet above the surface… which you could actually do as somebody with ridiculous wealth.
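For the curious, here is a quick back-of-the-envelope sketch in Python of just how subtle that curvature is, assuming a spherical Earth of radius roughly 3959 miles; the numbers are approximate and only meant to illustrate the scale:

```python
import math

R_MILES = 3959.0            # assumed mean radius of the Earth, in miles
INCHES_PER_MILE = 63360.0

def drop_inches(d_miles):
    """Drop of a spherical surface below a flat tangent line after d miles.
    For d much smaller than R this is approximately d^2 / (2R)."""
    return (d_miles ** 2) / (2.0 * R_MILES) * INCHES_PER_MILE

def up_tilt_degrees(separation_miles):
    """Angle between the local 'up' directions at two points on the surface."""
    return math.degrees(separation_miles / R_MILES)

for d in (1, 2, 10):
    print(f"drop after {d:2d} mile(s): {drop_inches(d):7.1f} inches")
print(f"tilt of 'up' between points 10 miles apart: {up_tilt_degrees(10):.3f} degrees")
```

Eight inches after the first mile and a tenth of a degree of tilt over ten miles… nothing your unaided eyes can call out from the window of a tour bus.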

Third, how would you know that China is not ‘under’ us? How would you know where China isn’t when you can’t see that distance along a flat surface no matter which direction you look? Can you explain the shift your day picks up that causes your damn jet lag every time your wealthy, ignorant ass travels to a place like China? By your logic, you should be able to use your colossal wealth to travel to where the globe of the sun pops out of the plane of the Earth in the east every morning. Has it ever occurred to you that if you’re truly right, you should test your hypothesis before making an assertion that can easily be shown to be wrong?}

You made a mint of money on the backs of a lot of people who made it possible for you to be internationally known, all because of the truth that they determined for you! You do not respect them, you do not understand the depth of their efforts, you do not know how hard they worked. You do not deserve the soapbox they built for you.

For everyone who values the truth, take a moment to share a little about it. Read other things in my blog to see what else I have to say. I have very little I can say right this second; I’m aghast and I feel the need to cry. My hard work is rendered essentially meaningless by morons like Shaquille O’Neal… men of no particular intellect or real skill dictating what reality ‘actually is’ while having no particular capacity to judge it for themselves.

From a time before cellphone apps and computer graphics manipulation, I leave you with one of the greatest pinnacles of truth ever to be achieved by the human species:

[Image: the Earth seen from the Moon –NASA’s LRO ‘Earthrise’ frame]

Like it or not, that’s Earth.

If you care to, I ask you to go and hug the scientist or engineer in your life. Tell them that you care about what they do and that you value their hard work. The flame of enlightenment kindled in our world is precious and at dire risk of guttering out.

Edit:

An open letter to the Shaq:

Dear Shaquille O’Neal,

I’m incredibly dismayed by your use of your public persona to endorse an intellectually bankrupt idea like flat earth conspiracy theories, particularly in light of your Doctorate in Education. If you are truly educated, and value truth, you should know that holding this stance devalues the hard work of generations of physicists and engineers and jeopardizes the standing of actual scientific truth in the public arena. The purpose of an educator is to educate, not to misinform… the difference lies in whether or not you spread the truth.

There is so much evidence of the round Earth available in the world around us without any appeal to digital media –the cycle of the seasons, the scheduled passages of the moon and the planets, observations of Coriolis forces in weather patterns and simple ballistics, the capacity to board an airplane heading west and continue heading west until you get back to where you started, the passage of satellites and spacecraft visible from the surface of the Earth over our heads, the very existence of GPS on your goddamn smart phone, and the common shapes of objects like the moon and planets visible through telescopes in the night sky– that appeals to flat earth conspiracies show a breathtaking lack of capacity to understand how the world fits together. That it comes from a figure who is ostensibly a force of truth –an educator– is deeply hurtful to those of us who developed that truth… modern scientists and engineers.

Since you are so profoundly wealthy, you among all people are singularly in a position to prove to yourself the roundness of our world. I bet you 50 million dollars that I don’t even have and will spend my entire life trying to repay, that you can rent an airliner with an honest pilot of your choice and fly west along a route also of your choice, and come back to the airport you originally departed from without any significant eastward travel. Heck, you can do the same exercise heading north or south if you want. And, if that experiment isn’t enough, use your celebrity to talk to Elon Musk: I hear he’s selling tickets now to rich people for flights around the moon. I bet he would build you a specially-sized two-person-converted-to-one berth in his Dragon capsule to give you a ride high enough to take a look for yourself at the shape of the world, if your eyes are the only thing you’ll believe. If you lose, you pay a 49 million dollar endowment to the University of Colorado Department of Physics for the support of Physics Education –and a million to me for the heartache you caused making a mockery of my education and profession by use of your ill-gotten public soapbox and mindlessly open mouth. Moreover, if you lose, you relinquish your Doctorate and make a public apology for standing for exactly the opposite of what that degree means.

Sincerely,

Foolish Physicist
of Poetry in Physics

Edit 4-5-17:

So, Shaq walked back his comments.

O’Neal: “The first part of the theory is, I’m joking, you idiots. That’s the first part of the theory. The second part is, I said jokingly that when I’m in my bus and I drive from Florida to California, which I do every summer, it seems to be flat. When I’m in my plane, and we’re getting ready to land, and I open up the window, and I’m looking at all the land that we’re flying over, it seems to be flat.”

“This world we live in, people take things too seriously, but I’m going to give the people answers to my test,” he said. “Knowing that I’m a funny guy, if something seems controversial or boom, boom, boom, you’ve got to have my funny points on, right? So now, once you have my funny points on, that should eradicate and get rid of all your negative thoughts, right? That’s what you should do when you hear a Shaquille O’Neal statement, OK? You should know that he has funny points right over here, and what did he say? Boom, boom, boom, add the funny points. You either laugh or you don’t laugh, but don’t take me seriously. When I want you to take me seriously, you will know by the tone of my voice that I’m being serious.”

“No, I don’t think that,” O’Neal told Harbinger of a flat Earth. “It was a joke, OK? So know that when Shaquille O’Neal says something, 80 percent of the time I’m being humorous, and it is a joke. And 20 percent of the time, I’m being serious, but when I’m being serious, you’ll know. You want to see me, seriously? See me and Charles Barkley going back and forth on TNT. That’s when I’m mad and when I’m serious. Other than that, you’re not going to get that out of me, so I was just joking people. The Earth is not round, it’s flat. I mean, the Earth is not flat, it’s round.”

One thing that should be added to these statements is this: there are people who are actively spreading misinformation about the state of the world, for instance that the earth is flat. The internet, Youtube, blogs, you name it, have given these people a soapbox that they would not otherwise have. Given that there is a blatant antiscientific thread in the United States which attacks accepted, settled science as a big cover-up designed to destroy the rights of the everyday man, it is the duty of scientists and educators to take the truth seriously. In a world where the theory of evolution, climatology and vaccine science are all actively politicized, we have to stand up for the truth.

While real scientists are busy studying and doing our work, the antiscientific activists are solely about spreading their beliefs… they don’t study, they don’t question; they spend their time actively lobbying the government, appealing to legislators, running for and getting onto school boards where they have an opportunity to pick which books are presented to school districts, and finding other places where they can actively undercut what students are told about the truth of the world. They aren’t spending their energy studying; they are spending it solely on tinkering with the social mechanisms which provide our society with the next generation of scientists. As such, their efforts are directed more at undercutting the mechanisms that preserve the truth than at evaluating the truth… as scientists do. These people can do huge damage to us all. Every screwball coming out of a diploma mill “Quantum University” with a useless, unaccredited ‘PhD’… who goes off to promote woo-bong herbalist healthcare as an alternative to science-based medicine, does damage to us all by undercutting what it means to get healthcare and by putting crankery and quackery in all seriousness at the same level as scientific truth when there should be no comparison.

If everybody understood that there is no ‘alternative’ to the truth, joking about what is true would mean something totally different to me. But, we live in a world where ‘alternative facts’ are a real thing and where everyone with a soapbox can say whatever they wish without fear of reprisal. Lying is a protected right! But someone has to stand up for truth. That someone should be scientists and educators. That should include an ‘education doctorate’ like the Shaq. If he were an NBA numbskull without the doctorate, I would care less: Kyrie Irving is a joke. But he isn’t; he’s got a doctorate and he has a responsibility to uphold what that degree means! The only reason ironic humor can work is if it is clear that one is being ironic rather than serious… and that is never completely clear in this world.

Nuclear Toxins

A physicist from Lawrence Livermore Labs has been restoring old nuclear bomb detonation footage. This seems to me to be an incredibly valuable task because all of the original footage was shot on film, which is currently in the process of decaying and falling apart. There have been no open-air nuclear bomb detonations by the major nuclear powers since the early 1960s (and none by anyone since 1980), which is good… except that people are in the process of forgetting exactly how bad a nuclear weapon is. The effort of saving this footage makes it possible for people to see something of this world-changing technology that wasn’t previously available to the public. Nukes are sort of mythical to a body like me who wasn’t even born until about the time that testing went underground: to everybody younger than me, I suspect that nukes are an old-people thing, a less important weapon than computers. That Lawrence Livermore Labs has posted this footage to Youtube is an amazing public service, I think.

As I was reading an article on Gizmodo about this piece of news, I happened to wander into the comment threads to see what the echo chamber had to say about all this. I should know better. Admittedly, I actually didn’t post any comments castigating anyone, but there was a particular comment that got me thinking… and calculating.

Here is the comment:

Nuclear explosions produce radioactive substances that are rare in nature — like carbon-14, a radioactive form of the carbon atom that forms the chemical basis of all life on earth.

Once released into the atmosphere, carbon-14 enters the food chain and gets bound up in the cells of most living things. There’s still enough floating around for researchers to detect in the DNA of humans born in 2016. If you’re reading this, it’s inside you.

This is fear mongering. If you’ve never seen fear mongering before, this is what it looks like. The comment is intended to deliberately inspire fear not just in nuclear weapons, but in the prospect of radionuclides present in the environment. The last sentence is pure body terror. Dear godz, the radionuclides, they’re inside me and there’s no way to clean them out! I thought for a time about responding to this comment. I decided not to because there is enough truth here that anyone should probably stop and think about it.

For anyone curious, the wikipedia article on the subject has some nice details and seems thorough.

It is true that C-14 is fairly rare in nature. The natural abundance is 1 part per trillion of carbon. It is also true that the atmospheric test detonations of nuclear bombs created a spike in the C-14 present in the environment. And, while it’s true that C-14 is rare, it is actually not technically unnatural since it is formed by cosmic rays impinging on the upper atmosphere. For the astute reader, C-14 produced by cosmic rays forms the basis of radiocarbon dating: C-14 is present at a particular known, constant proportion in living things right up until you die and stop taking it up from the environment –a scientist can then determine the date when living matter died based on the radioactive decay curve for C-14.
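As an aside, here is a minimal sketch in Python of that dating arithmetic, assuming the textbook half-life of 5730 years; the measured fraction in the example is purely hypothetical:

```python
import math

HALF_LIFE_YEARS = 5730.0                          # C-14 half-life
DECAY_CONSTANT = math.log(2) / HALF_LIFE_YEARS    # k, per year

def age_from_c14_fraction(fraction_remaining):
    """Age of a dead sample from its C-14 level relative to living tissue.
    From N(t) = N0 * exp(-k*t), the age is t = ln(N0/N) / k."""
    return math.log(1.0 / fraction_remaining) / DECAY_CONSTANT

# Hypothetical measurement: a sample retains 25% of the living-tissue C-14 level.
print(f"{age_from_c14_fraction(0.25):.0f} years")  # two half-lives, about 11,460 years
```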

Since it’s not unnatural, the real question here is whether the spike of radionuclides created by nuclear testing significantly increases the health hazard posed by excess C-14 above and beyond what it would normally be. You have it in your body anyway; is there greater hazard due to the extra amount released? This puzzle is actually a somewhat intriguing one to me because I worked for a time with radionuclides, and it is kind of chilling how much protective equipment you need to use and how many safety measures are required. The risk is a non-trivial one.

But, what is the real risk? Does having a detectable amount of radionuclide in your body that can be ascribed to atomic air tests constitute an increased health threat?

To begin with, what is the health threat? For the particular case of C-14, one of a handful of radionuclides that can be incorporated into your normal body structures, the health threat would obviously come from the radioactivity of the atom. In this particular case, C-14 is a beta-emitter. This means that C-14 radiates electrons; specifically, one of the neutrons in the atom’s nucleus converts into a proton by giving off an electron and a neutrino, resulting in the carbon turning into nitrogen. The neutrino basically doesn’t interact with anything, but the radiated electron can carry energies of up to 156 keV (about 2.5×10^-14 Joules). This will do damage to the human body by two routes: either by direct collision of the radiated electron with the body, or by a structurally important carbon atom converting into a nitrogen atom during the decay process if the C-14 was part of your body already. Obviously, if a carbon atom suddenly turns into nitrogen, unwanted organic chemistry follows, since nitrogen can’t maintain the same number of valence interactions as carbon without taking on a charge. So, the potential health threat is either energy deposition by particle collision or spontaneous chemistry.

In normal terms, the carbon-nitrogen chemistry routes for damage are not accounted for in radiation damage health effects simply because of how radiation is usually encountered: you need a lot of radiation in order to have a health effect, and this is usually from an exogenous source, that is, a radiation source that is outside the body rather than incorporated with it, like endogenous C-14. This would be radiation much like the UV radiation which causes a sunburn. Health effects due to radiation exposure are measured on a scale by a dose unit called a ‘rem.’ A rem expresses an amount of radiation energy deposited into body mass, where 1 rem is equal to 1.0×10^-5 Joules of radiation energy deposited into 1 gram of body mass. Here is a table giving the general scale of rem doses which cause health effects. People who work around radiation as part of their job are limited to a full-body yearly dose of 5 rem, while the general public is limited to 0.1 rem per year. Everybody is expected to have an environmental radiation dose exposure of about 0.3 rem per year, and there’s an allowance of 0.05 rem per year for medical x-rays. It’s noteworthy that not all radiation doses are created equal and that the target body tissue matters; this is manifest in different radiation doses being allowed for the eyes (15 rem) or the extremities, like the skin (50 rem). A sunburn would be like a dose of 100 to 600 rem to the skin.
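To keep the units straight for what follows, here is a small Python sketch converting the rem figures above into total absorbed energy for a 70 kg person, using the definition that 1 rem deposits 1×10^-5 J per gram of tissue:

```python
BODY_MASS_G = 70_000.0           # a 70 kg person, in grams
J_PER_GRAM_PER_REM = 1.0e-5      # 1 rem = 1e-5 J deposited per gram of tissue

def rem_to_joules(dose_rem, mass_g=BODY_MASS_G):
    """Total energy absorbed by a body receiving a given whole-body dose in rem."""
    return dose_rem * J_PER_GRAM_PER_REM * mass_g

for label, dose_rem in [("public limit (artificial)", 0.1),
                        ("typical environmental dose", 0.3),
                        ("radiation worker limit", 5.0),
                        ("acute effects begin", 20.0)]:
    print(f"{label:28s} {dose_rem:5.1f} rem  ->  {rem_to_joules(dose_rem):5.2f} J")
```

These are the 0.07, 0.21, 3.5 and 14 Joule figures that appear in the comparisons below.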

What part of an organism must the damage affect in order to cause a health problem? Really, only one part is truly significant, and that’s your DNA. Easy to guess. Pretty much everything else is replaceable, to the extent that even a single cell dying from critical damage is totally expendable in the context of an organism built of trillions of cells. The problem of C-14 being located in your DNA directly is numerically a rather minor one: DNA actually only accounts for about 3% of the dry mass of your cells, meaning that only about 3% of the C-14 incorporated into your body is directly incorporated into your DNA, so that most of the damage to your DNA is due to C-14 not directly incorporated in that molecule. This is not to say that chemistry doesn’t cause the damage, merely that most of the chemical damage is probably due to energy deposition in molecules around the DNA which then react with the DNA, say by generation of superoxides or similar paths. This may surprise you, but DNA damage isn’t always a complete all-or-nothing proposition either: to an extent, the cell has machinery which is able to repair damaged DNA… the bacterium Deinococcus radiodurans is able to repair its DNA so efficiently that it’s able to subsist indefinitely inside a nuclear reactor. Humans have some repair mechanisms as well.

Cells in humans have roughly two levels of response to radiation damage. For minor damage, the cell repairs its DNA. If the DNA damage is too great to fix, a mechanism triggers in the cell to cause it to commit suicide. You can see the effect of this in a sunburn: critically radiation-damaged skin cells commit suicide en masse in the substratum of your skin, ultimately sacrificing the structural integrity of your skin and causing the external layer to slough off. This is why your skin peels after a sunburn. If the damage is somewhere in between, matters are a little murkier… your immune system has a way of tracking down damaged cells and destroying them, but those screwed-up cells sometimes slip through the cracks to cause serious disease. Inevitably cancer. Effects like these emerge for ~20 rem full-body doses. People love to worry about superpowers and three-arm, three-eye type heritable mutations due to radiation exposure, but congenital mutations are a less frequent outcome simply because your gonads are such a small proportion of your body; you’re more likely to have other things screwed up first.

One important trick in all of this to notice is that to start having serious health effects that can be clearly ascribed to radiation damage, you must absorb a dose of greater than about 5 rem.

Now, what kind of a radiation dose do you acquire on a yearly basis from body-incorporated C-14 and how much did that dose change in people due to atmospheric nuclear testing?

I did my calculations on the supposition of a 70 kg person (which is 154 lbs). I also adjusted rem into a more easily used physical quantity of Joules/gram (1 rem = 1×10^-5 J/g, see above). One rem of exposure for a 70 kg person works out to an absorbed dose of 0.7 J over a year. An exposure sufficient to hit 5 rem is 3.5 J/year, while 20 rem is 14 J/year. Beta-electrons from C-14 deposit at most 2.4×10^-14 J per strike (~150 keV), with about 0.8×10^-14 J per strike on average (~50 keV).

In the following part of the calculation, I use radioactive decay and half-life in order to determine the rate of energy transfer to the human body on the assumption that all the beta-electron energy emitted is absorbed by the body. Radioactive decay is a purely probabilistic process where the likelihood of seeing an emitted electron is proportional to the size of the radioactive atom population. The differential equation is a simple one and looks like this:

dN/dt = –k·N

This just means that the rate of decay (and therefore electron production rate) is proportional to the size of the decaying population where the k variable is a rate constant that can be determined from the half-life. The decay differential equation is solved by the following function:

N(t) = N0·e^(–k·t)

This is just a simple exponential decay which takes an initial population of some number of objects and reduces it over time. You can solve for the decay constant by plugging the half-life into the time and simply asserting that you have 1/2 of your original quantity of objects at that time. The above exponential rearranges to find the decay constant:

1/2 = e^(–k·τ)   →   k = ln(2)/τ

Here, τ (tau) is the half-life in seconds (I could have used my time in years, but I’m pretty thoroughly trained to stick with SI units) and I’ve already substituted 1/2 for the population change. With k from the half-life, I just need the population of radiation emitters present in the body in order to know the rate given in the first equation above… where I simply multiply k by N.

To do this calculation, the half-life of C-14 is known to be 5730 years, which I then converted into seconds (ick; if I only care about years, next time I’ll calculate only in years). This gives a decay constant of 3.836×10^-12 per second. In order to get the decay rate, I also need the population of C-14 emitters present in the human body. We know that C-14 has a natural prevalence of 1 per trillion and, after a little google searching, that a 70 kg human body contains about 16 kg of carbon, which gives me 1.6×10^-8 g of C-14. With C-14’s mass of 14 g/mole and Avogadro’s number, this gives about 6.88×10^14 C-14 atoms present in a 154 lb person. This population together with the rate constant gives me the decay rate by the first equation above, which is 2.639×10^3 decays per second. Energy per beta-electron absorbed times the decay rate gives the rate of energy deposited into the body per second, on the assumption that all beta-decay energy is absorbed by the target: 2.639×10^3 decays/sec × 2.4×10^-14 Joules/decay = 6.33×10^-11 J/s. Over the course of an entire year, the amount of energy works out to about 0.002 Joules/year.
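For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same calculation; the 70 kg body mass, 16 kg carbon content, 1-per-trillion abundance and maximum beta energy are the assumptions stated above:

```python
import math

SECONDS_PER_YEAR = 3.156e7
HALF_LIFE_S = 5730 * SECONDS_PER_YEAR       # C-14 half-life, in seconds
AVOGADRO = 6.022e23

k = math.log(2) / HALF_LIFE_S               # decay constant, per second (~3.8e-12)

carbon_mass_g = 16_000.0                    # ~16 kg of carbon in a 70 kg person
c14_mass_g = carbon_mass_g * 1e-12          # natural abundance: 1 part per trillion
n_c14 = (c14_mass_g / 14.0) * AVOGADRO      # number of C-14 atoms (~6.9e14)

decay_rate = k * n_c14                      # decays per second (~2.6e3)
E_MAX_PER_DECAY = 2.4e-14                   # J per beta electron, worst case (~150 keV)

energy_per_year = decay_rate * E_MAX_PER_DECAY * SECONDS_PER_YEAR
print(f"decay rate: {decay_rate:.2e} per second")
print(f"energy absorbed per year (worst case): {energy_per_year:.4f} J")  # ~0.002 J
```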

This gets me to a place where I can start making comparisons. The exposure limit for any old member of the general public to ‘artificial’ radiation is 0.1 rem, or 0.07 J/year. The maximum… maximum… contribution due to endogenous C-14 is 35 times smaller than the allowed public exposure limit (for the mean beta energy, it’s more like 100 times smaller). On average, endogenous C-14 gives about 1/100th of the permitted artificial radiation dose.

But, I’ve actually fudged here. Note that I said above that humans normally get a yearly environmental radiation dose of about 0.3 rem (0.21 J/year)… meaning that endogenous C-14 provides only about 1/300th of your natural dose (using the mean beta energy). Other radiation sources that you encounter on a daily basis provide radiation exposure roughly 300 times stronger than the C-14 directly incorporated into the structure of your body. And, keep in mind that this is way lower than the 5 rem where health effects due to radiation exposure begin to emerge.

How does C-14 produced by atmospheric nuclear testing figure into all of this?

The wikipedia article I cited above has a nice histogram of detected changes in environmental C-14 levels due to atmospheric nuclear testing. At the time of such testing, C-14 prevalence in the environment spiked by about 2-fold and has decayed over the intervening years to less than 1.1-fold. For C-14 exposure specifically, that changes it from roughly 1/300th of your natural dose to 1/150th, or about 0.5%, which then tapers to less than a tenth of a percent above natural prevalence in less than fifty years. Detectable, yes. Significant? No. Responsible for health effects? Not above the noise!

This is not to say that a nuclear war wouldn’t be bad. It would be very bad. But, don’t exaggerate environmental toxins. We have radionuclides present in our bodies no matter what and the ones put there by 1950s nuclear testing are only a negligible part, even at the time –what’s 100% next to 100.5%? A big nuclear war might be much worse than this, but this is basically a forgettable amount of radiation.

For anybody who is worried about environmental radiation, I draw your attention back to a really simple fact:

[Image: stock photograph of a badly sunburned woman]

The woman depicted in the picture above has received a 100 to 600 rem dose of very (very very) soft X-rays by deliberately sitting out in front of a nuclear furnace. You can even see the nuclear shadow on her back left by her scant clothing. Do you think I’m kidding? UV light, which is lower energy than x-rays, but not by all that much –about 3 eV versus maybe 500 eV– is absorbed directly by skin DNA to produce real radiation damage, which your body treats indistinguishably from damage by particle radiation from radionuclides or by X-rays or gamma-rays. The dose which produced this effect is something like two to twelve times higher than the federally permitted dose that radiation workers are allowed to receive in their skin over the course of an entire year… and she did it to herself deliberately in a matter of hours!

Here’s a hint, don’t worry about the boogieman under the bed when what you just happily did to yourself over the weekend among friends is much much worse.

What is a qubit?

I was trolling around in the comments of a news article presented on Yahoo the other day. What I saw there has sort of stuck with me and I’ve decided I should write about it. The article in question, which may have been by an outfit other than Yahoo itself, was about the recent decision by IBM to direct a division of people toward the task of learning how to program a quantum computer.

Using the word ‘quantum’ in the title of a news article is a surefire way to incite click-bait. People flock in awe to quantum-ness even if they don’t understand what the hell they’re reading. This article was a prime example. All the article really talked about was that IBM has decided that quantum computers are now a promising enough technology that they’re going to start devoting themselves to the task of figuring out how to compute with them. Note, the article spent a lot of time kind of masturbating over how marvelous quantum computers will be, but it really didn’t say anything new. Another tech company deciding to pretend to be in quantum computing by figuring out how to program an imaginary computer is not an advance in our technology… digital quantum computers are generally agreed to be at least a few years off yet, and they’ve been a few years off for a while now. There’s no guarantee that the technology will suddenly emerge into the mainstream –and I’m neglecting the D-Wave quantum computer because it is generally agreed among experts that D-Wave hasn’t even managed to prove that their qubits remain coherent through a calculation to actually be a useful quantum computer, let alone that they achieved anything at all by scaling it up.

The title of this article was a prime example of media quantum click-bait. The title boldly declared that “IBM is planning to build a quantum computer millions of times faster than a normal computer.” Now, that title was based on an extrapolation in the midst of the article where a quantum computer containing a mere 1,000 qubits suddenly becomes the fastest computing machine imaginable. We’re very used to computers that contain gigabytes of RAM now, which is actually several billion on-off switches on the chip, so a mere 1,000 qubits seems like a really tiny number. This should be tempered by the general concern in the physics community that an array of even 100 entangled qubits may exceed what’s physically possible… and it neglects that the difficulty of dealing with entangled systems increases exponentially with the number of qubits to be entangled. Scaling up normal bits doesn’t bump into the same difficulty. I don’t know if it’s physically possible or not, but I am aware that IBM’s declaration isn’t a major breakthrough so much as splashing around a bit of tech gism to keep the stockholders happy. All the article really said was that IBM has happily decided to hop on the quantum train because that seems to be the thing to do right now.

I really should understand that trolling around in the comments on such articles is a lost cause. There are so many misconceptions about quantum mechanics running around in popular culture that there’s almost no hope of finding the truth in such threads.

All this background gets us to what I was hoping to talk about. One big misconception that seemed to be somewhat common among commenters on this article is that two identical things in two places actually constitute only one thing magically in two places. This may stem from a conflation of what a wave function is versus what a qubit is and it may also be a big misunderstanding of the information that can be encoded in a qubit.

In a normal computer we all know that pretty much every calculation is built around representing numbers using binary. As everybody knows, a digital computer switch has two positions: we say that one position is 0 and the other is 1. An array of two digital on-off switches then can produce four distinct states: in binary, to represent the on-off settings of these states, we have 00, 01, 10 and 11. You could easily map those four settings to mean 1, 2, 3 and 4.

Suppose we switch now to talk about a quantum computer where the array is not bits anymore, but qubits. A very common qubit to talk about is the spin of an atom or an electron. This atom can be in two spin states: spin-up and spin-down. We could easily map the state spin-up to be 1, or ‘on,’ while spin-down is 0, or ‘off.’ For two qubits, we then get the states 00, 01, 10 and 11 that we had before, where we know which state each qubit is in, but we can also turn around and invoke entanglement. Entanglement is a situation where we create a wave function that contains multiple distinct particles at the same time, such that the states those particles are in are interdependent on one another based upon what we can’t know about the system as a whole. Note, these two particles are separate objects, but they are both present in the wave function as separate objects. For two spin-up/spin-down type particles, this gives access to the so-called singlet and triplet states in addition to the normal binary states that the usual digital register can explore.

The quantum mechanics works like this. For the system of spin-up and spin-down, the usual way to look at this is in increments of spinning angular momentum: spin-up is a 1/2 unit of angular momentum pointed up while spin-down is -1/2 unit of angular momentum, pointed the opposite direction because of the negative sign. For the entangled system of two such particles, you can get three different values of entangled angular momentum: 1, 0 and -1. Spin 1 has both spins pointing up, but not ‘observed,’ meaning that it is completely degenerate with the 11 state of the digital register since it can’t fall into anything but 11 when the wave function collapses. Spin -1 is the same way: both spins are down, meaning that they have 100% probability of dropping into 00. The spin 0 state, on the other hand, is kind of screwy, and this is where the extra information-encoding space of quantum computing emerges. The 0 states could be the symmetric combination of spin-up with spin-down or the anti-symmetric combination of the same thing. Now, these are distinct states, meaning that the size of your register just expanded from (00, 01, 10 and 11) to (00, 01, 10, 11 plus anti-symmetric 10-01 and symmetric 10+01). So, the two qubit register can encode 6 possible values instead of just 4. I’m still trying to decide if the spin 1 and -1 states could be considered different from 11 and 00, but I don’t think they can since they lack the indeterminacy present in the different spin 0 states. I’m also somewhat uncertain whether you have two extra states to give a capacity in the register of 6 or just 5, since I’m not certain what the field has to say about the practicality of determining the phase constant between the two mixed spin-up/spin-down eigenstates, since this is the only way to determine the difference between the symmetric and anti-symmetric combinations of spin.
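To make the bookkeeping concrete, here is a small numpy sketch (nothing resembling a real quantum computer, just the linear algebra on paper) that writes the two-spin states discussed above as vectors and prints the probability of each measurement outcome:

```python
import numpy as np

# Single-spin basis states
up = np.array([1.0, 0.0])      # spin-up, '1'
down = np.array([0.0, 1.0])    # spin-down, '0'

def two_spin(a, b):
    """Joint state of two spins as a 4-component vector (basis order: 11, 10, 01, 00)."""
    return np.kron(a, b)

states = {
    "11 (both up)":          two_spin(up, up),
    "00 (both down)":        two_spin(down, down),
    "symmetric 10+01":       (two_spin(up, down) + two_spin(down, up)) / np.sqrt(2),
    "anti-symmetric 10-01":  (two_spin(up, down) - two_spin(down, up)) / np.sqrt(2),
}

labels = ["11", "10", "01", "00"]
for name, psi in states.items():
    probs = np.abs(psi) ** 2   # Born rule: probability of each measurement outcome
    print(f"{name:22s}", {l: round(float(p), 2) for l, p in zip(labels, probs)})
```

Both mixed combinations collapse to 10 or 01 with equal probability; the difference between them lives entirely in the relative sign (the phase), which is exactly the part that is hard to get at by measurement.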

As I was writing here, I realized also that I made a mistake myself in the interpretation of the qubit as I was writing my comment last night. At the very unentangled minimum, an array of two qubits contains the same number of states as an array of two normal bits. If I consider only the states possible by entangled qubits, without considering the phasing constant between 10+01 and 10-01, this gives only three states, or at most four states with the phase constant. I wrote my comment without including the four purely unentangled cases, giving fewer total states accessible to the device, or at most the same number.

Now, the thing that makes this incredibly special is that the number of extra states available to a register of qubits grows exponentially with the number of qubits present in the register. This means that a register of 10 qubits can encode many more numbers than a register of ten bits! Further, this means that fewer bits can be used to make much bigger calculations, which ultimately translates to a much faster computer if the speed of turning over the register is comparable to that of a more conventional computer –which is actually somewhat doubtful since a quantum computer would need to repeat calculations potentially many times in order to build up quantum statistics.

One of the big things that is limiting the size of quantum computers at this point is maintaining coherence. Maintaining coherence is very difficult and proving that the computer maintains all the entanglements that you create 100% of the time is exceptionally non-trivial. This comes back to the old cat-in-the-box difficulty of truly isolating the quantum system from the rest of the universe. And, it becomes more non-trivial the more qubits you include. I saw a seminar recently where the presenting professor was expressing optimism about creating a register of 100 Josephson junction type qubits, but was forced to admit that he didn’t know for sure whether it would work because of the difficulties that emerge in trying to maintain coherence across a register of that size.

I personally think it likely that we’ll have real digital quantum computers in the relatively near future, but I think the jury is still out as to exactly how powerful they’ll be when compared to conventional computers. There are simply too many variables yet which could influence the power and speed of a quantum computer in meaningful ways.

Coming back to my outrage at reading comments in that thread, I’m still at ‘dear god.’ Quantum computers do not work by teleportation: they do not have any way of magically putting a single object in multiple places. The structure of a wave function is defined simply by what you consider to be a collection of objects that are simultaneously isolated from the rest of the universe at a given time. A wave function quite easily spans many objects all at once since it is merely a statistical description of the disposition of that system as seen from the outside, and nothing more. It is not exactly a ‘thing’ in and of itself so much as a description of how collections of indescribably simple objects behave in absolutely consistent ways among themselves. Where it becomes wave-like and weird is that there are definable limits to how precisely we can understand what’s going on at this basic level, and our inability to directly ‘interact’ with that level more or less assures that we can’t ever know everything about that level or how it behaves. Quantum mechanics follows from there. It really is all about what’s knowable; building a situation where certain things are selectively knowable is what it means to build a quantum computer.

That’s admittedly pretty weird if you stop and think about it, but not crazy or magical in that wide-eyed new agey smack-babbling way.

Calculating Molarity part 2: Vaccine structure

I’ve continued to think about this post at Respectful Insolence. You may already have read my previous post on this subject. I had a short conversation with Orac by email about the previous post; he asked me what I thought about the alterations he made after thinking about my objections. One thing I answered that I thought he might add has sort of stuck with me, and I think it is worthy of a post of its own. What do you know, two posts in one week! This one may not be tremendously long, but it’s important and it bolsters the thesis written in that post on Respectful Insolence. His argument is about minimizing the contamination; this is true, but I would actually modify it by saying that you have to know what you’re looking at before you claim it’s a problem.

My previous writing here has been directed at my fellow skeptics and could be used by antivaccine advocates to attack people whose efforts I normally support. I would rather my efforts be focused at the greater good: namely to support vaccines. I don’t write often about my specific research expertise, but I’m mainly a soft matter researcher and I have a great deal of experience with colloids, nanoparticles and liquid crystals. This paper they’re talking about is my cup of tea! More than that, I’ve spent time at the university electron microscopy lab using SEM and elemental analysis in the form of EDS, shooting electron beams at precipitates obtained from colloidal suspensions.

I feel that the strategy of showing that vaccine contaminants are extraordinarily minor and not nearly as large as the antivaccine efforts try to claim is a good effort, but it might also be the wrong strategy for tackling this science, particularly when the math gets screwed up. Part of my reason for feeling this way is that the argument actually hinges on the existence, or not, of particulate objects in the preparations that the antivaxxers are examining. The paper that Orac (and, in a quotation, Skeptical Raptor) are looking at focuses on the spurious occurrence of a small particle content revealed in vaccine samples under SEM examination. The antivaxxers are counting and reporting particles found in SEM, for which they report highly variable counts: very few in some samples, many in others. They are also reporting instances where EDS shows unexpected metal content, like gold and others. Here, Orac notes that the particles are typically so few that they should be considered negligible, and that’s fair… the question is, what is the nature of these particles? And, should we take the antivaxxer EDS results seriously? It seems poor form for me to criticize my fellow skeptics and not turn my attention to the subject they are analyzing –to allay my own conscience, I have to open my mouth! I therefore spent a bit of time of my own looking at the paper they were analyzing, “New Quality-Control Investigations on Vaccines: Micro- and Nanocontamination.” I won’t link to it directly because I have no respect for it.

I’ll deal with the EDS first.

[Image: schematic of the EDS process]

This picture is from https://s32.postimg.org/yryuggo1x/EDSschematic.gif

EDS (energy-dispersive X-ray spectroscopy) is another spectroscopy technique, sometimes described as electron-induced X-ray fluorescence. You shoot an electron beam (or X-ray) at a sample with the deliberate intent of knocking a deep orbital electron out of the atom. A higher energy shell electron will then drop down into the vacant orbital and emit an X-ray at the transition energy between the two orbitals. The spectrometer then detects the emitted X-rays. Because atoms have differing transition energies due to the depth of their shells, you can identify the element based on the X-ray frequencies emitted. A precondition for seeing this X-ray spectrum is that your impinging electron beam must be at sufficiently high energy to knock a deep shell electron up into the continuum, ionizing the atom, and that energy might actually be considerable. There is also a confounder in that a lot of atoms have EDS peaks at fairly similar energies, meaning that it can be hard sometimes to distinguish them.

Here is a periodic table containing EDS peaks from Jeol:

[Image: periodic table of EDS emission energies (JEOL)]

Now, when you perform SEM, you spread your sample onto a conductive substrate and observe it in a fair vacuum. To generate an SEM image, the electron beam is rastered point by point across an area of the sample while an off-angle detector collects the scattered electrons. You’re literally trying to puff electrons up into the space over the sample by bombarding the surface. The substrate is usually conductive in order to replenish ejected electrons. The direction the ejection puff travels depends on the topography of the surface, and the off-angle positioning of the detector means that some surfaces face the detector and give bright puffs while surfaces facing away do not. This gives the dimensionality to SEM images. Many SEM samples are sputtered with a layer of gold to improve contrast by introducing a material that is electron dense, but a system with the intent to use EDS would actually be directed at naked samples. With SEM, you always have to remember that the electron beam is intrinsically erosive and damaging. The beam doesn’t just bounce off the surface; it penetrates into the sample to a depth that I’ve heard called the interaction volume. The interaction volume is regulated by the accelerating voltage of the electron beam: higher accelerating voltages mean deeper interaction volumes. Crisp SEM images that show clear surface features are usually obtained with low accelerating voltages, which limit the interaction volume to only the surface features of the sample. SEM images obtained at higher accelerating voltages take on a sort of translucent cast because the beam penetrates into the sample and interacts with an interior volume.

The combination of EDS with SEM is a little tricky. In SEM, EDS gains its excitation from the imaging electron beam of the system. Now, what makes this tricky is that samples like protein antigens in a vaccine are predominantly carbon and have low electron density, making them low contrast. You hit the sample at low accelerating voltages to see surface features. If you try to do EDS, you must hit the sample with electrons at energies sufficient to eject deep orbital electrons: it depends on the depth of that atom’s potential and on which electron is ejected, but atoms like gold can have deeper orbitals than atoms like carbon, meaning larger energies are needed to resolve deeper gold atom orbital transitions. Energies favorable to SEM imaging are sometimes very low compared to the energies needed to hit the EDS ejection energies. When you switch to EDS from imaging, you must be aware that you’re gaining a deeper penetration depth from the larger interaction volume of the beam. If your sample is thin and has low electron density, like carbonaceous biological molecules, you can easily be shooting through the sample and hitting the substrate, whatever that might be.
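To put rough numbers on that penetration depth, here is a small Python sketch using the Kanaya–Okayama range approximation; the material parameters below assume a generic low-density, carbon-rich sample and are purely illustrative:

```python
def kanaya_okayama_range_um(E_keV, A=12.0, Z=6, rho=1.2):
    """Approximate electron penetration depth in microns.
    Kanaya-Okayama: R = 0.0276 * A * E^1.67 / (Z^0.89 * rho),
    with A in g/mol, E in keV and rho in g/cm^3.
    Defaults assume a generic carbon-rich (organic) target."""
    return 0.0276 * A * (E_keV ** 1.67) / ((Z ** 0.89) * rho)

for E_keV in (1, 5, 10, 30):
    print(f"{E_keV:2d} kV beam: ~{kanaya_okayama_range_um(E_keV):6.2f} um interaction depth")
```

By this estimate a 1 kV beam interacts within tens of nanometers of the surface, while a 30 kV beam can reach more than ten microns into a low-density sample, which is exactly why a thin biological film on a substrate may not be what you are actually probing.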

This can be a serious confounder because you don’t necessarily know where your signal is coming from. In the article commented on by Orac, the authors mention that they’re using an aluminum stub as an SEM mount, but they also talk about aluminum hydroxide and aluminum phosphate. The EDS aluminum signal is sensitive only to the aluminum atoms: you can’t know if the signal is coming from the mount or the sample! How do they know that the phosphate signal isn’t from phosphate buffered saline? That’s a common medical buffer that shows up in vaccine preparation. You can’t know if the material you’re looking at is aluminum phosphate from EDS or SEM.

As I mentioned, you also have to contend with close spacing of EDS peaks: if you look at that periodic table linked above, there’s a lot of overlap. To know gold for certain, you really need to hit a couple of its EDS peaks to make certain you aren’t misreading the signal (all the peaks you get will have a Gaussian width, meaning that you might have a broad signal that covers a number of peaks). And, at least in the figure presented by Orac, they’re making their calls based on single-peak identifications. This is in addition to the other potential confounders Orac brought up: exogenous grit and the possibility that they’re reusing their SEM stub for other experiments. How can they be certain they aren’t getting spurious signals?
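As an illustration of how crowded that energy axis is, here is a small Python sketch that lists which characteristic lines could plausibly explain a single measured peak. The line energies are approximate textbook values and the 130 eV detector resolution is an assumption:

```python
# Approximate characteristic X-ray line energies, in keV (rounded textbook values)
LINES_KEV = {
    "Al Ka": 1.49,
    "Si Ka": 1.74,
    "P Ka":  2.01,
    "Zr La": 2.04,
    "Au Ma": 2.12,
    "S Ka":  2.31,
}

RESOLUTION_KEV = 0.13   # assumed detector resolution (~130 eV)

def candidates(measured_keV):
    """All lines close enough to a measured peak to be a plausible assignment."""
    return [name for name, E in LINES_KEV.items()
            if abs(E - measured_keV) <= RESOLUTION_KEV]

print(candidates(2.05))   # a lone peak near 2.05 keV: P, Zr and Au all qualify
```

A single peak near 2 keV is consistent with phosphorus, zirconium or a gold M-line, which is precisely why a one-peak ‘gold’ call should not be trusted.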

For EDS, I would be careful about making calls without having some means of independent analysis… like knowing what materials are supposed to be present, and possibly hiring out elemental analysis of the sample. Will the gold or zirconium appear in the second analysis? Remember, science depends on being able to reproduce a result… if the signal was always spurious, a good tell is not being able to make it dance the second time around! Reporting everything doesn’t always mean that you know what you’re looking at. When I was doing EDS more routinely, I had a devil of a time hitting Titanium over Silicon and Gold signals… I knew titanium was present because I put it there, but I had trouble hitting it or ascribing it to specific particles in the SEM image. The EDS would not routinely allow me to reproduce an observation before the sample simply exploded while I was pounding high-energy electrons into it.

Referring directly to the crank paper myself, I note that they make some extremely complicated mineral calls in their tables from the EDS data. Again, be aware that EDS is only sensitive to atoms specifically: you can’t know whether an aluminum signal is aluminum phosphate, aluminum hydroxide or aluminum from the SEM stub. To know mineral crystals, you need precision ratios of the contents, or X-ray diffraction, or maybe Raman analysis of the mineral’s crystal lattice.

From their SEM imagery, it looks to me like they’re using a very strong voltage, which is confirmed in their methods section. They claim to be using voltages between 10 kV and 30 kV. These are very high voltages. For good surface resolution of a proteinaceous sample, I restricted myself to around 1 kV to 5 kV, sometimes below 1 kV, and found that I was cutting holes through the specimen at much higher voltages than that. Let me actually quote a piece of their methods for sample mounting:

A drop of about 20 microliter of vaccine is released from
the syringe on a 25-mm-diameter cellulose filter (Millipore,
USA), inside a flow cabinet. The filter is then deposited on an
Aluminum stub covered with an adhesive carbon disc.

They put a cellulose filter from Millipore into this SEM. I would have dried the sample directly onto a clean silicon substrate. Here are the appropriate specimen mounts from Ted Pella. Note that the specimen mounts are not cellulose. Cellulose filters are used for a completely different purpose from normal SEM specimen mounts and, really importantly, you can’t efficiently clean a cellulose filter before putting your sample onto it. And, since these filters are actually designed to easily collect dust and grit as a part of their function, it is actually kind of difficult to get crap off of them. Without a control showing that their filters are clean of dust, there’s no way to be certain that this article isn’t actually a long survey examining the dust and foreign crap that can be found impregnating cellulose filters, since the SEM acceleration voltages are unquestionably high enough to be cutting through a thin, low-contrast biological layer on the top.

I won’t say more about the EDS.

I also wanted to address the particulate discussion a bit more directly.

First off, from the paper directly, there is no real effort at reproduction or control. The source of the particles mentioned could be the carbon adhesive, the cellulose membrane or the vaccine sample. Having thought about it, I personally would bet on that cellulose: you don’t use them this way! They claim to be making preparations in a flow hood to keep dust out, but that doesn’t mean the dust isn’t already on any of the components being brought into the hood.

I stand by my original criticism of Orac’s post that these particles can’t be effectively quantified by molarity: those shown in the paper are all clearly micron scale objects, meaning that they have relatively large mass in and of themselves and constitute significant quantities of material. A better concentration unit for describing them would be mg/mL. I repeat that we don’t know the source of these objects for certain because the experiment is performed without true replication! If the vaccines are the source, the authors should have been able to perform a simple filtration of a vaccine specimen by a 0.22 um or 0.1 um filter and show that this drastically reduces contamination because many of their micrographs are of objects that should not have passed through such a filter… but they did no comparable experiment.

As I’ve been thinking about it, there are a couple of different kinds of particles that could be observed under these conditions. The first is dust, as already detailed. The second possible source is vaccine components, but from a non-contaminating perspective. Orac used a quote by Skeptical Raptor, who was rebutting the idea of aluminum hydroxide being a strong contaminant by again mistaking particles for molecules. I won’t get into his difficulty calculating concentration since it was similar to what happened to Orac, but he was speaking about aluminum hydroxide as a chemical present at a tiny fraction of a nanogram in a vaccine and therefore much less than environmental exposure to aluminum. I know I probably annoyed Orac with my thoughts about this as I was thinking out loud, but aluminum hydroxide is not any sort of contaminant in the Cervarix vaccine that friend Raptor was talking about: it’s the adjuvant! Here’s a product insert for a Cervarix vaccine.

[Image: Cervarix package insert]

In this vaccine, I found that there is approximately 500 ug of aluminum hydroxide adjuvant added per 0.5 mL vaccine dose. If you look in the aluminum hydroxide MSDS, there is no LD50 for this compound, no carcinogen warnings and no other special health precautions for chronic exposure –it irritates your eyes on contact, but what doesn’t? It got a 1 as a chemical hazard. Antivaxxers are crazy about being anti-aluminum based upon decades-old information that has since been rebutted, but for all intents and purposes, this material is pretty harmless. One special thing about it is that it’s actually very insoluble unless you drop an acid or a strong base on it, meaning that it should be no surprise if it shows up as a particulate in a vaccine at neutral physiological pH (Ksp = 3×10^-34)! In vaccine design –and I haven’t spent a huge amount of time looking– the main point of the adjuvant is to cause the antigen to be retained at the site of injection for a prolonged time so that the body can be exposed to it for a longer period. The adjuvant adheres to the vaccine antigen and, by being an insoluble particle, it lodges in your tissues upon injection and stays there, holding the antigen with it. I found immunology papers on pubmed calling this the establishment of an ‘immune depot’ for stimulating immune cells. Over a prolonged period, the low Ksp will allow this compound to gradually dissolve and release the antigen out of the injection site, but aluminum hydroxide will never have a very high concentration in the body as a whole: that’s what Ksp says, that the soluble phase of the salt components can be no greater than about 2.4 nM, which is well below the established exposure limits recommended in the MSDS of between 30 nM and 100 nM (by my calculation).
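Here is a minimal Python sketch of the kind of solubility estimate quoted above, assuming simple dissolution of Al(OH)3 into Al3+ and 3 OH- with Ksp = 3×10^-34 and ignoring the complications of real physiological chemistry (buffering, complexation and so on), so treat the results as order-of-magnitude only:

```python
# Al(OH)3  <->  Al3+ + 3 OH-,   Ksp = [Al3+][OH-]^3
KSP = 3e-34

# Naive estimate: let s = [Al3+]; then [OH-] = 3s and Ksp = 27 * s^4
s_naive = (KSP / 27.0) ** 0.25
print(f"naive solubility: {s_naive * 1e9:.1f} nM")   # a couple of nM, the scale quoted above

# With the pH pinned near 7.4, [OH-] is about 2.5e-7 M, so [Al3+] = Ksp / [OH-]^3
oh_molar = 2.5e-7
s_ph74 = KSP / (oh_molar ** 3)
print(f"solubility at pH 7.4: {s_ph74:.1e} M")        # far lower still
```

Either way, the dissolved aluminum stays well below the exposure limits mentioned above; the adjuvant simply sits there as an insoluble particle and releases antigen slowly.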

But, if you look at vaccine adjuvant under SEM, it will be a colloidal particle with a core of aluminum in the EDS! You can even see examples of this in the target paper itself: the SEM in figure 1 looks like a colloid fractal (they call it ‘crystals,’ but it looks like a precipitate deposition fractal), and the colloids are probably aluminum hydroxide particles caked with antigen protein (again, EDS can’t distinguish between aluminum hydroxide mixed with PBS and aluminum phosphate, contrary to what the caption says). And, these colloids are INTENDED TO BE THERE by the manufacturer of the vaccine. Note, this is a structure designed into the vaccine to help prolong the immune response.

I’ve been debating the source of the singleton particles that the authors of this paper take many SEM pictures of in the remainder of their work. They are mostly not regular enough to be designed nanoparticles or precipitate colloids and they often look like dust (Orac mentions as much). I’ve been skeptical of the sample preparation practices outlined in the paper: I think adding the cellulose membrane to the sample is asking for trouble. You use substrates in SEM to avoid contaminant issues and to provide surfaces that are easily cleaned prior to use. The cellulose polymer and vaccine antigens are all low contrast… at 30 kV accelerating voltage, the SEM could actually be interacting down into the volume of the filter (as I mentioned above). If this isn’t dust sitting on the filter prior to dropping the vaccine onto it, it might also be dust dropped randomly into the cellulose monomer during the manufacturing process and trapped there while polymerizing the membrane. The filter won’t care about most of this sort of contamination because the polymer will immobilize it. Another possibility, but the paper tests almost no hypotheses for purposes of error checking, so we’ll never know.

Overall, I found that paper incompetent. There’s no reason to take it seriously. I hope that my writing this blog post will help balance the previous post which attacked science advocates for misusing the science.

Calculating Molarity (mole/L)

As a preface to this post, I want to make doubly clear my stance on vaccines. There is no good scientific evidence to support the notion that vaccination is in any way an unsafe practice or that it is responsible for any manner of health problem above and beyond the diseases that vaccines protect against. Vaccination is the single most powerful health intervention created in the last 150 years of medicine. There is, in my opinion, some potential for this post to be used to damage the credibility of a person whom I believe to be a necessary positive force in the healthcare scene, and I want to make it clear that this was not the intention of my writing here. Orac is a tireless advocate for science and for clear, skeptical thought in general, and I respect him quite deeply for the time he puts in and for the static he puts up with.

That said, I believe that science advocacy is a double-edged sword: if you don't get it right, it can come back to bite you.

I love Respectful Insolence, but I’ve got to ding Orac for failing to calculate molarity correctly. He is profoundly educated, but I think he’s a surgeon and not a physicist. We all have our weak points! (Thank heaven above I’m not ever in the operating room with the knife!)

In this post, which he may now have edited for correctness (and it seems he has), he makes the following statement:

More importantly, look at the numbers of precipitates found per sample. It ranges from two to 1,821.

O.M.G.! 1,821 particles! Holy crap! That’s horrible! The antivaxers are right that vaccines are hopelessly contaminated!

No. They. Are. Not.

Look at it this way. This is what was found in 20 μl (that’s microliters) of liquid. That’s 0.00002 liters. That means, in a theoretical liter of the vaccine, the most that one would find is 91,050,000 (9.105 x 10^7) particles! Holy hell! That’s a lot. We should be scared, shouldn’t we? well, no. Let’s go back to our homeopathy knowledge and look at Avogadro’s number. One mole of particles = 6.023 x 10^23. So divide 91,050,000 by Avogadro’s number, and you’ll get the molarity of a solution of 91,050,000 particle in a liter, as a 1 M solution would contain 6.023 x 10^23 particles. So what’s the concentration:

1.512 x 10^-16 M. That’s 0.15 femtomolar (fM) (or 150 attomolar), an incredibly low concentration. And that’s the highest amount the investigators found.

Anybody see the mistake? Let’s start here: Avogadro’s number is a scaling constant for a linear relationship and it has a unit! The units on this number are atoms (or molecules) per mole. It converts a number of atoms or molecules into a number of moles.

'Moles' is a convenient person-sized number that is standardized around 'molecular weight,' a weight scale that arbitrarily assigns a single carbon-12 atom a weight of '12' and so gives atomic hydrogen a weight of about '1.' That's atomic mass units (or AMU), which is usually very convenient for calculating relative weights of molecules by adding up all the AMU of their atomic constituents. To use molarity, we usually need a molecular weight in the form of Daltons, or grams/mole. Grams per mole says that it takes this many grams in mass of a substance for that substance to contain a single mole's worth of molecules (or atoms), where it is then implicit that the number of molecules or atoms is Avogadro's number.

'Mole' is extremely special. It refers to a collection of objects that are atomically identical! If you have a mole of a kind of protein, it means that you have 6.02 x 10^23 of this kind of identical object. If you compare two proteins of different molecular weights, the same molar number of each is a different overall mass. Consider Insulin (5,808 g/mole) compared to the 70S Ribosome (2,500,000 g/mole)… one mole of Insulin would weigh 5.8 kg while one mole of 70S Ribosome would weigh 2.5 metric tons!!! If they have roughly the average density of proteins (about 1.35 g/mL), what would be the volume of 1 mole of 70S Ribosome as compared to 1 mole of Insulin? It would be about 430 times greater for the Ribosome: roughly 1,850 L for the 70S Ribosome while Insulin is only about 4.3 L!
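
Here is a minimal sketch of that comparison; the molecular weights are the ones quoted above and the ~1.35 g/mL protein density is a typical literature value I am assuming for both:

```python
# Mass and volume of one mole of two very different proteins.
# Molecular weights as quoted in the text; the protein density is an assumed typical value.
PROTEIN_DENSITY_G_PER_ML = 1.35     # assumed average protein density

for name, mw_g_per_mol in [("Insulin", 5808.0), ("70S Ribosome", 2.5e6)]:
    mass_g = mw_g_per_mol                        # grams in one mole, by definition of g/mol
    volume_L = mass_g / PROTEIN_DENSITY_G_PER_ML / 1000.0
    print(f"1 mole of {name}: {mass_g/1000:.1f} kg, ~{volume_L:,.0f} L")
```

Same number of objects, wildly different amounts of stuff.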

Notice something here: at the same molarity, an object with a big molecular weight occupies a bigger volume (and represents a bigger mass) than an object with a smaller molecular weight. Molarity as a number depends strongly on the molecular weight of the substance in question in order to mean anything at all. For the Ribosome, the same molar concentration as for Insulin means a solution containing a much larger amount of solute.

In the post in question on Respectful Insolence, Orac is talking about a paper which observes particulate matter derived from vaccine specimens in an SEM. It is clear from the authorship and publication of the paper that the intent is to find fault in vaccines based upon the contents of the materials examined by this probing… from what little I know about the paper, it does not seem to produce anything truly informative. But, you can't fault a paper on a point that may not actually be as flawed as an initial interpretation would imply. The paper reports the number of particles observed per 20 uL of a solvent. They find as many as 1,821 particles per 20 uL. We are not told for certain what these particles are composed of, except that the investigators aren't sure and shot an overpowered EDS at everything and reported even the spurious results. Orac scales this number up to 1 L to get 9.1 x 10^7 particles and then divides by Avogadro's number to find what proportion this is of one mole of these particles, never mind that we don't know how big the particles are in terms of molecular weight or how dense they are. He declares it to be about a tenth of a femtomolar and runs on with how tiny the concentration is. As I initially wrote this, I focused on the gleeful way in which Orac does his deconstruction, in large part because it really isn't a valid thing to laugh at when the deconstruction is not properly done.

Here is how someone of my background approaches the same series of observations. I can see from the micrograph in the blog post that the scale bar is something like 2 mm (2000 microns)… the objects in question are maybe tens to hundreds of microns in size. Let's make a physicist supposition here and think about it: pulling this out of my ass, I'll claim these are 1,821 approximately spherical, identical particles of sodium chloride, each of roughly 150 microns diameter (toward the larger end of what the micrograph shows). That gives a volume of 4/3*Pi*75^3 um^3, or about 1.8 x 10^-12 m^3 per particle and 3.2 x 10^-9 m^3 for the whole collection of particles. Now, density is usually given in terms of g/cm^3 or g/mL… there are 100 cm per meter and you must convert three times to cube it, so 3.2 x 10^-9 m^3 x 100^3 = 3.2 x 10^-3 cm^3. Wait a minute, we're now at a volume of 3.2 uL!!! Did you see that? A cubic centimeter is a mL and 0.0032 mL is 3.2 uL, or about 16% of the original 20 uL sample volume! What molarity is this? The density of sodium chloride is 2.16 g/mL or 2.16 mg/uL… which gives about 7 mg. That's 7 mg of salt dissolved in 20 uL. The molecular weight of sodium chloride is 58.44 g/mole or 58.44 mg/mmole, which gives 0.12 mmole. From this, 0.12 mmole in 0.02 mL is about 6 mmole/mL.

That's 6 mole/L……. 6 M!!!!
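
If you want to check that arithmetic, here is a minimal sketch under the same made-up assumptions (1,821 identical NaCl spheres of 150 um diameter in a 20 uL sample); only the particle count comes from the paper, everything else is my supposition:

```python
import math

# Back-of-envelope molarity of the hypothetical salt particles described above.
# Assumptions (mine, not the paper's): 1,821 identical NaCl spheres, 150 um diameter,
# in a 20 uL sample, fully dissolved.
n_particles = 1821
diameter_um = 150.0
sample_volume_mL = 0.020                 # 20 uL
nacl_density_g_per_mL = 2.16
nacl_mw_g_per_mol = 58.44

radius_cm = (diameter_um / 2) * 1e-4                    # 1 um = 1e-4 cm
particle_volume_mL = (4 / 3) * math.pi * radius_cm**3   # 1 cm^3 = 1 mL
total_volume_mL = n_particles * particle_volume_mL
total_mass_g = total_volume_mL * nacl_density_g_per_mL
moles = total_mass_g / nacl_mw_g_per_mol
molarity = moles / (sample_volume_mL / 1000.0)          # moles per liter

print(f"Total particle volume: {total_volume_mL * 1000:.1f} uL of the 20 uL sample")
print(f"Equivalent concentration: ~{molarity:.1f} M")   # ~6 M, nowhere near femtomolar
```

Swap in a different particle size or composition and the number moves around, but it never lands anywhere near femtomolar.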

Let’s pause for a second. Is that femtomolar?

Orac missed the science here! I initially wrote that he should be apologizing for it, but I've revised this so that my respect for his work is more apparent. The volume of these particles and their composition is everything. A single particle with a molecular weight in the gigadalton or teradalton range is suddenly a very substantial mass even at a low particle count. If these particles are as I specified and composed of simple salt, they are at a molarity that is abruptly appreciable. If we make these into tiny balls of Ricin, that's unquestionably a fatally toxic quantity!

As with all things, the dose makes the poison, and there's no Ricin in evidence, but the argument Orac has made about concentration is, in this particular case, catastrophically wrong. A femtomole of a big particle that can be dissolved could be a large dose!

I forgive him and I love his blog, but let this be a lesson… you don’t just divide by Avogadro’s number in order to get meaningful concentrations!

A Physicist Responds to “The Three Body Problem” part 2

To start with, this post will be almost pure spoiler. I’m assuming, if you got through part 1, that you’ve read Cixin Liu’s book.

I’ve gotten partway through the second book in the trilogy myself, meaning that I’ve had some additional time to think about the contents of this post, but that I don’t know the ultimate outcome of the series.

This post addresses a central conclusion of the first book, a major piece of science fiction that I didn't address in the previous post because it is so intrinsic to the plot. This is the idea of the Sophon-induced 'science lockdown.' An alien race is going to invade the planet Earth in 400 years, and this race is concerned that human technology will advance in that time to become more powerful than its own, so the aliens have played a trick that blocks humans from performing fundamental scientific research, preventing human technology from developing.

The key to this is the idea of the “Sophon.” As mentioned in the previous post, the word 'proton' was chosen over the name of an actual fundamental particle in order to facilitate a wordplay in Chinese… particularly the Chinese word that got translated into English as “Sophon.” This word is a modification of the word “Sophont.” As any science fiction aficionado can tell you, this word means “intelligent creature.” A Sophon is intended to be an intelligent proton, a robot the size and mass of a subatomic particle. These Sophons are capable, to some extent, of changing their size and shape and can communicate back to the aliens instantaneously. Sophons can also travel, as subatomic particles, at very nearly the speed of light.

You can see right from that paragraph the first place where the Sophon (and therefore the idea of science lockdown) is broken. Sophons communicate with the aliens instantaneously by means of quantum entanglement. If you've read anything else I've written, you know how I feel about the cliche of the 'Ansible.' Entanglement can't be used to pass information: the quantum mechanics doesn't allow for this, no matter how you misinterpret it. Entanglement means correlation, not necessarily communication. This quantum mechanical effect is an interesting and very real phenomenon, but to understand what it actually means, you need to understand more about the rest of what quantum mechanics is… the story of 'Three Body Problem' never goes there. I won't go there either, except to suggest learning about the Bell Inequality.

The reason that Sophons are capable of producing science lockdown is that they can falsify data coming out of particle accelerators. Sophons can fly through the sensors in particle detectors and trigger them falsely, creating intelligently designed noise. On the surface, this is a horrible prospect, making it impossible for humans to probe the deep structure of matter and therefore attain the understandings necessary to build Sophons ourselves. Do not pass go; no 'correct' results means no good science!

Obviously, this looks really bad. Very interesting science fiction idea. On the other hand, it also demands a bit of discussion, both about how particle accelerators work and on how science works.

Particle accelerators are the wrecking ball of the scientific enterprise. They generate data almost entirely by accelerating charged particles up to substantial fractions of the speed of light and slamming them into each other and into stationary targets. Particle physicists are all about impact cross sections and statistical probabilities of outcomes. The gold standard of a discovery in particle physics is a 5-sigma observation. 'Sigma' is, of course, the standard deviation, the statistical yardstick by which scientists use the Gaussian distribution to judge probability of occurrence; it's the Bell Curve. The average is the peak of this curve, while one standard deviation is one sigma to the left or right of the average. Particle physics is set up around a simple statistical weight tabulation which can be couched as a question: "How likely is it that my observation is false/true?" If an event observed in the accelerator is spurious, that is, if the event is noise, the statistical machinery of particle physics places it close to the peak of the Bell Curve, at the average, which is to say that the event observed is 'not different' from noise. A 5-sigma event is an event which has been so well observed statistically that the difference from noise is five standard deviations from the peak of the Bell Curve out into the tail, leaving only about three parts in ten million of the curve's area beyond that point! This is essentially like saying that the observation is overwhelmingly unlikely to be a mere fluctuation of the noise.
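
To put a number on that threshold, here is a tiny sketch of the Gaussian tail arithmetic; nothing in it is specific to any particular experiment:

```python
import math

# Probability that pure Gaussian noise fluctuates at least 5 sigma above its mean
# (the one-sided tail of the normal distribution), plus the complementary area.
sigma = 5.0
p_fluke = 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"One-sided {sigma:.0f}-sigma tail probability: {p_fluke:.2e}")    # ~2.9e-7
print(f"Area within {sigma:.0f} sigma (one-sided): {1 - p_fluke:.7f}")   # ~0.9999997
```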

Do you know how big a particle accelerator data set is? They include billions of events. Particle accelerators run for months to years on end, collecting data automatically 24 hours a day. And, the whole enterprise is based on the assumption that every observation independently might be a false outcome. Statistical weight determines the correctness of an observation. Physical theory exists to model both the trends and noise of an experiment.

As I said above, the purpose of the Sophons is to produce false results within the sensors of an accelerator's detector apparatus. The major detection devices in modern systems are calorimeters and photomultipliers. Calorimeters simply detect heat deposition within the sensor volume, while photomultipliers give a small current when they are perturbed by a passing electric charge. Usually, detector assemblies contain layers of sensors wrapped around the collision target, where photomultipliers form multiple inner layers and calorimeters reside around the outside of the whole assembly. There are usually also magnetic fields applied through the detector so that charged particles will tend to follow curving paths as they pass outward through the different layers away from the collision site. There are other detector technologies and refinements of these ideas, but this gives a basic taste.

Here is the ATLAS detector at the Large Hadron Collider:

[image: the ATLAS detector]

Using this layered design, the photomultipliers can resolve the paths of outward-flying particles, determining their charges based upon their path curvature through the magnetic fields established by the solenoids, and the calorimeters then determine how much energy was in each particle when it crashes into them and deposits its heat. Certain particle types penetrate shielding differently, necessitating layers of calorimeters with different structural characteristics in order to resolve different particle types. Computers correlate detection traces between the layers and tabulate which heat depositions relate to which flight paths. Particle physicists can then do simple arithmetic to count up all the heats and all the charges on all the particles detected for one collision event and deduce which subatomic particles appeared during that collision. Momentum and energy/mass are conserved relativistically while charge is conserved directly, and you simply add up what went in in order to account for what comes out during a collision.
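
As a tiny example of the sort of arithmetic the tracking layers feed into, the curvature of a track in the solenoid field hands you the particle's momentum directly; the field strength and radius below are illustrative numbers, not the specifications of any particular detector:

```python
# Momentum of a singly charged particle from the curvature of its track
# in a magnetic field: p = q * B * r. The field and radius are illustrative values.
Q_E = 1.602e-19                 # elementary charge, C
KG_M_S_PER_GEV = 5.344e-19      # 1 GeV/c expressed in kg*m/s

B_tesla = 2.0                   # assumed solenoid field
r_meters = 1.0                  # assumed radius of curvature of the track

p_si = Q_E * B_tesla * r_meters                    # momentum in kg*m/s
print(f"p ~ {p_si / KG_M_S_PER_GEV:.2f} GeV/c")    # ~0.6 GeV/c
```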

In order to falsify data within such a detector, the smart subatomic particle, the Sophon, would need to fly back and forth through the detector layers, switching its charge polarity between passes and somehow dumping heat into calorimeters without being destroyed or lost in some way. How the Sophons get their kinetic energy is somewhat opaque in the story, and I spent some time abortively rereading TBP trying to figure this out, but it can be assumed that they possess a self-contained power supply which enables them either to recharge themselves from their surroundings or simply to dip into a long-term battery reserve whenever they need it. They are clearly able to accelerate to highly relativistic velocities in a self-contained manner, since they flew across the void from the alien homeworld to Earth and then slowed down at Earth without external assistance. You could presume that they are able to write completely fake collision events into the detector, pretending to travel at the wrong velocities and masquerading as false charges and masses.

Now, like I said, this is terrible! The experiments can’t always give reliable results. Never mind that the real experiments must always be filtered for the fact that false results exist in the data set anyway.

In the paragraph above, I said "can't always give reliable results" because the real data set of collision events still exists behind the fake data set. The Sophon flying back and forth can't prevent real particle collisions from occurring and also interacting with the detector. The particle physicists would actually know right away that something isn't right with the systematic structure of the experiment, because they know how many particles are in their particle beams and also know the cross-sections of interaction, meaning that they start the experiment knowing statistically how many collision events to expect in a unit of time: Sophon interference with the experiment would only add to the expected number. What you get is two overlapping data sets, one false and one true. If the false data is much different from the true data, you inevitably bin them as distinct results because they would create a bimodal distribution in your data set… some measurements add up to five-sigma toward one result while a distinct set will ultimately add up to five-sigma toward something distinctly different. Then, you just let the theorists work out what's what.
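
A toy version of that first sanity check: compare the logged event count to what the beam parameters predict. Both counts below are invented for illustration:

```python
import math

# Toy check: does the observed event count exceed what beam physics predicts?
# Both numbers are invented for illustration only.
expected_events = 1.0e6     # from luminosity x cross-section (assumed)
observed_events = 1.2e6     # what the detector actually logged (assumed)

# For a counting (Poisson) process, the expected fluctuation is ~sqrt(mean).
excess_sigma = (observed_events - expected_events) / math.sqrt(expected_events)
print(f"Excess over expectation: ~{excess_sigma:.0f} sigma")   # an unmissable excess
```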

In the story, the scientists just throw up their hands and declare a 'sophon barrier,' saying that science 'can't advance' because it can't discern correctness.

This prospect has really kind of sat in the back of my mind, nagging me. I'm not completely certain that the author understands the overall scientific mindset or philosophy. Science starts out assuming that all results might be false! Having a falsehood layered on top of other potential falsehoods is really not that much of a deterrent to me, particularly since the scientists know the Sophon interference is present by the end of the story. Science as a process is intrinsically concerned with error checking and finding systematic interference, even intelligent fabrication of data within the scientific community. You think the Sophons are bad? Somebody simply altering the data set as they see fit, completely independent of the experiment, is worse. And, we deal with this in reality! At least with the Sophons, a real data set must sit behind the mixture of false events. If the data set is merely bimodal or multimodal with statistics backing up each conclusion, you design experiments to address each… at some point, consistency of a result must ultimately dominate. Sorting out this noise would take time, but it would not stop progress overall, especially since the scientists know the noise is present!

Now, giving false data is actually somewhat different from prohibiting data collection. This facet of the story is somewhat unclear to me; my memory fails. You can imagine that the aliens realize that the humans know about the tampering, and rather than leaving humans with a data set that contains some good data, they would simply have their Sophons swamp the detectors. In this scenario, the Sophons fly back and forth within the detector giving so many false events that they prevent the detector from being able to trigger on real events. They could simply white us out!

While this would indeed be a bad thing, it would have a sort of perverse effect on a real scientist. Consider: you know how fast your instrument triggers and you know the latency required for it to recover… this gives you a measure of how quickly and how often the Sophon must act! You can just imagine the particle beam physicist salivating at the prospect of a Nobel prize in the nascent field of Sophon physics. Imagine the flood of grant proposals around the subject of baiting a Sophon into a particle beam line with the performance of basic science, only to turn the particle beam against the Sophon in order to smash it apart and see how it works!

Really, if you were a high energy physicist and you knew unequivocally that a smart particle was flying around inside your instrument, how could you not be trying to figure out a way to probe it? It’s like getting Maxwell’s demon handed to you on a shiny platter!

A realistic outcome here is actually not the prohibition of science. It would be an arm-wrestling match with the Aliens: at the very best, leaving us with a partial data set that we can ultimately advance with, or giving us the chance to probe the Sophons directly.

The prospect of probing the Sophons directly contains the danger that it would be hard to distinguish engineered results from real ones, but every demonstration by the Sophons of some other confusing behavior is in fact data itself. The author made a huge argument in "Three Body Problem" that Sophons are typically point-like and would probably subscribe to the notion that they can't be probed since they would essentially have no collision cross-section; I would resist this idea because it either violates or misunderstands quantum mechanics, which I detailed a bit in the previous post. The author might even suggest that Sophons can't be probed because they can dodge collisions with other particles in the collider, but I would doubt that simply because of the inability of the Sophon to know things about other particles, due to simple quantum mechanics and the effect of relativity altering the rates of information flow: the decision would need to be made very quickly and it would have a built-in imprecision from the uncertainty principle! Moreover, the more time the Sophons spend performing confusing behavior in order to foil their own direct examination, the less time they can spend faking data in the experiments directed at basic research. As you may be aware, machines like the LHC are actually devoted to many lines of research simultaneously, and physicists are remarkably adept at piggybacking one experiment on top of another in order to conserve resources and obtain additional bang for the same buck.

One final aspect of the "science lockdown" with which I take some umbrage is the notion that only particle accelerators are responsible for fundamental research. They aren't. There is a huge branch of physics and chemistry probing quantum mechanics based on spectroscopy. Lasers are unequivocally quantum mechanical devices, and much probing into basic quantum mechanics is performed by some variation on the theme of lasing. The Nobel prize-winning discovery of the Bose-Einstein condensed matter phase did not occur in a super-collider; it occurred on an optical bench. Most super-precise clock mechanisms used by the human race at this point are optical devices and have absolutely nothing to do with particle accelerators; optical gratings and optical metrology are driving the expansion of precision measurement! The leaps which are in the process of producing quantum computers (one device the author specifically prohibits in book 2 under the science lockdown!) are not being made at particle accelerators at all: they are being made in optical lattice traps on lab benches and in photo-etched masks used to produce nano-scale solid state resonators. We are currently in the process of building analog quantum computers for the purpose of simulating quantum chromodynamic systems using optical and nano-resonator devices… and this development has nothing to do with particle accelerators, except as a means of reproducing results! The author made the argument that humans couldn't build massive super-collider accelerators, synchrotrons and linacs, fast enough to match the production capacity that the aliens have for making the Sophons needed to foil these instruments, but the author never even touched on the rapidly expanding field of plasma wakefield acceleration, which uses lasers to accelerate particles to relativistic speeds in bench-top apparatuses for a fraction of the price of a super-collider.

The bleeding edge of physics is very multi-pronged; the Higgs boson discovery carried out in a synchrotron may someday be reproduced by a bench-top plasma wakefield accelerator for a tiny fraction of the price. Can 'locking down' big particle accelerators like the LHC prohibit the extensive physical exploration that is occurring due to a mostly unrelated black-swan technological development like lasers? I really don't think it can. Tying one arm behind your back leaves you with the other arm. It's true that the mothballing of the Superconducting Super Collider in the United States delayed the definitive discovery of the Higgs boson by more than a decade, but that isn't to say that there aren't other avenues to the same discovery.

Do I think that science lockdown is possible by the means suggested by the author? Not really. And, especially not for devices like quantum computers, which is one critical development that the author suggests is prohibited by sophon interference in the second book.

Don’t get me wrong, this is a good piece of science fiction and it’s a wonderful thought experiment, but like many thought experiments, it’s arguable.

edit 2-16-17

I saw a physics colloquium yesterday delivered by a Nobel prize winner. His lab is currently working on a molecular spectroscopy experiment directed at measuring the electric dipole moment of the electron. A precision measurement of this value ties directly to the existence (or not) of supersymmetric particle theory… which is one candidate expansion of the Standard Model of particle physics. This experiment is not being done in a super-collider, but on an optics bench, for a fraction of the price. Experiments like this one completely invalidate the thesis of Three Body Problem: that by locking down colliders, there is no other way for particle physics to advance. There are other ways, comparatively cheap and requiring fewer resources and less manpower. Physics would find a way.

A Physicist Responds to “The Three Body Problem”

I've not had much motivation to post recently: it seems like I read another article every week or so where some fool is drawing the same wrong conclusions about Quantum Mechanics or Relativity or AI, or all of the above simultaneously. It gets exhausting to read. I also haven't had time to construct a post on my recent problem work, in part because I'm prepping for a major exam.

But, I need some time to take a break and change my focus. So, I decided to write a bit about some things I saw in Liu Cixin's "The Three-Body Problem," of which I read Ken Liu's translation. If you're not familiar with this book, I would highly recommend it. This book deservedly won the Hugo award and was nominated for the Nebula, and it is one piece of science fiction that is truly worth going through.

One of my non-spoiling responses here is that it shows how another culture, namely Chinese culture, can go to extremes in how it treats scientists and the intelligentsia, and all the different ways that this relationship can oscillate back and forth. It shows, too, the humanity of scientists, both for better and worse. Based on the structure of the story, it's clear to me that the author has a respect for the scientific disciplines which is not so present in Western literature anymore. I was also quite happy that characters were not meaninglessly fed to the meat grinder in the way they too often are in many Western books in the supposed name of 'authenticity.'

With that said and my badge of worthiness placed, we will get to the actual purpose of this post… some places where Liu Cixin's science fiction Author-itis shows through.


The great problem with many science fiction writers is that they know just enough to be dangerous, but not enough to be right. Where they fall apart is when they start to over-explain the phenomenology of what’s happening in their stories in order to ‘make it work.’ There are two places I will talk about where this happened in ‘3BP’.

The first is the Zither.

To start with, I loved the idea of the zither. It was a very classy, ingenious use of the cliche of the monofilament wire. Note first that this is a cliche (a 'trope' maybe, but I detest that word for its own cliched overuse). In the form that appeared in 3BP, a nanomaterial monofilament 1/1000th the thickness of a hair is strung, like the strings of a zither, between pilings across a straight section of the Panama Canal as an ambush trap for an oil tanker being used by the villains. The strings are strung between the banks of the canal, attached to chains that can be raised and lowered so that ships which aren't the target can be allowed through the canal unhindered. When the target ship approaches, the monofilaments are pulled up across the canal by tightening the chains so that the filaments are held in an invisible web of horizontal strands above the water line, spaced from each other by only a few feet, like a big hard-boiled egg slicer. The author even makes allowances for how the monofilaments can be attached to the chains so as not to shred the anchoring when the target ship pushes against them. When the ship hits the zither, it sails silently through and continues on until the engine of the ship rips itself to pieces and causes the whole boat to slide apart in sections.

You have to admit, it’s a nifty trap. The monofilament in question is described as a material intended for use building orbital elevators and is dubbed ‘nanotechnology’ by the story.

The great stumbling point most people have about nanotechnology is that it is not tiny without limit: it exists in a scale gap between about 1 nanometer and 1 micron. For comparison, a hair is about 100 microns thick and the length of a carbon-carbon sigma bond is about 0.1 nanometers; the zither monofilaments, at 1/1000th the thickness of a hair, are therefore about 100 nanometers. This is sort of a crossover regime where building structures by top-down bulk techniques, like photo etching, becomes hard, while building from the bottom up by chemistry is also hard. In general, this is a big enough scale that quantum mechanical effects become small and statistical mechanics tends to dominate manipulation. At the nanoscale, everything we understand about how the basic level of material stuff holds together remains true. In a way, the nanoscale is small, but not so small that objects are markedly described by quantum mechanics, and also not so big that they behave like bulk objects. That's why 'nano' is difficult: it sits at an uncomfortable seam between the classical and quantum universes, where the tools for one or the other aren't quite right for doing what needs to be done.

Cutting a material happens by a process called scission. The act of 'scission' is, by definition, the breakage of a long-chain molecule into two shorter-chain molecules. It means separating at least one chemical bond in order to free a unified mass into two independent parts. And, a chemical bond always has at least two electrons, since the bond state must consist of spin-up and spin-down parts in order to cancel out angular momentum… and that's pretty much the theme of chemistry: stable states mostly have angular momentum canceled. There are some special exceptions, but these do not define the rule. Still, since you can't subdivide an electron, splitting a bond means intact electrons residing somewhere, no longer in a quantum mechanical ground state, and also nuclei lacking complete valence shells. This means that the system, immediately after scission, will have a strong desire to rearrange by chemical reaction into a more stable state. What will it react with? Whatever is close by… in this case, the monofilament wire! This kind of process is part of why blades dull over time: for a conventional metal knife cutting a metal structure, the structure is literally 'cutting' the knife too and blunting its edge. With a nanofiber, there isn't much mass to wear away.

This is one of the difficulties in scaling up nanotechnology: the structures are usually fragile!

Overlooking this fragility issue, one can argue that the process of making this nanofiber yielded a structure that is exceptionally strong and perhaps robust to chemical processes occurring around it. This is presumably what you would want in such a material that would be useful for building orbital elevators. If you want a tether from Earth up into orbit, you could bundle many of these fibers together and add coatings on the surface to help render them inert to chemistry. Many materials used in construction of advanced structures work in a manner like this: you’ve certainly heard of “Composites!”

Now then, singling one of these fibers out and stringing it across the Panama Canal produces a second major issue. The energy necessary to allow the zither to slice apart the ship comes from the kinetic energy of the ship coasting along the waterway: the ship hits the zither and the monofibers of the zither redirect parts of the ship infinitesimally away from each other, so that their tensile strength is not great enough to resist going in different directions… causing them to rend apart microscopically. This redirection is arrested because the parts separated from one another can't pass through the bulk materials holding them in place. This 'motion' is then completely incoherent and can only be tabulated as heat deposited into the material bulk at the location of the nanofilament. So, part of the kinetic energy of the ship's motion is deposited as heat around the monofilament cut. This might not be such a huge problem except that the monofilament has an intrinsically tiny mass and therefore a minuscule heat capacity: its electrical structure has relatively few valence modes where it can stuff higher-energy vibrational states. Moreover, the fiber is located at the origin of the heat and the materials heating up surround it from all sides, so there is no place the fiber can dump heat except linearly along its own body. If the heat doesn't dissipate through the hull of the ship fast enough, how hot can the fiber get before its electrical structure starts sampling continuum states? However tough the fiber is, if it can't dump the heat somewhere, its temperature might well rise until it literally ionizes into a plasma. For such a tiny mass, only a little heat input is a substantial thing.
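
To put a rough number on 'tiny mass, tiny heat capacity,' here is a deliberately crude sketch; every value in it (the fiber diameter, the length of fiber engaged in the hull, the graphite-like density and specific heat, and the single joule of deposited heat) is my own assumption, not anything from the book:

```python
import math

# Crude temperature-rise estimate for a nanofiber absorbing cutting heat.
# All numbers below are assumptions for illustration, not values from the story.
diameter_m = 100e-9              # ~100 nm fiber
length_m = 50.0                  # length of fiber engaged in the hull (assumed)
density_kg_m3 = 2000.0           # graphite-like density (assumed)
specific_heat_J_kgK = 700.0      # graphite-like specific heat (assumed)
heat_in_J = 1.0                  # a single joule deposited along the cut (assumed)

mass_kg = math.pi * (diameter_m / 2)**2 * length_m * density_kg_m3
delta_T = heat_in_J / (mass_kg * specific_heat_J_kgK)

print(f"Fiber mass: ~{mass_kg * 1e9:.2f} micrograms")    # a bit under a microgram
print(f"Temperature rise from 1 J: ~{delta_T:.1e} K")    # millions of kelvin
```

One joule is nothing next to the kinetic energy of a tanker, and on paper it is already enough to vaporize the fiber if the heat has nowhere to go.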

This is a difficulty, but one a clever writer can probably still explain away (maybe better left as a black box). You might argue that the fiber can cope with this abuse by conducting the heat along its length and then radiating it into the air or emitting it as light. That might work, I suppose, but it would mean increasing complexity in the structure of the nanomaterial. Not an impossibility, but now the fiber glows at least as a black body and is no longer invisible! As anybody familiar with super-resolution microscopy knows, emission of light can render visible objects far tinier than the optical resolution limit.

Maybe the classiest way would be to convert the fiber into a thermoelectric couple of some sort and get rid of the heat using an electrical current. Some of the well-known modern nanofibers, the fullerenes and such, are also very good electrical conductors because of their bonding structures. In reality, this would also probably limit the cutting rate: the rate of heat deposition in the line must not exceed the rate at which the cooling mechanism can suck heat away! An unfortunate fact about very thin conductors is that their resistance tends to be high; the conduction rate goes up as the channel of the conductor is thickened… and you are unfortunately crippled by using a nanofiber, which is very skinny indeed. I won't mention superconductors except to say that they have a limited range of temperatures where they can superconduct… using a superconductor in a thermoelectric couple is asking for trouble.
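
To see why 'very skinny conductor' is the killer, here is a quick sketch of R = ρL/A; the copper-like resistivity, the 100 nm diameter, and the 150 m span are all invented for illustration:

```python
import math

# Resistance of a long, very thin conducting fiber: R = resistivity * length / area.
# All values are assumptions for illustration.
resistivity_ohm_m = 1.7e-8     # copper-like resistivity (generous for a nanofiber)
length_m = 150.0               # span of fiber that has to carry the current away (assumed)
diameter_m = 100e-9            # ~100 nm fiber

area_m2 = math.pi * (diameter_m / 2)**2
resistance_ohm = resistivity_ohm_m * length_m / area_m2
print(f"Resistance: ~{resistance_ohm:.1e} ohms")   # hundreds of megaohms, even for 'copper'
```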

My big complaint about the zither boils down to that: heat and wear. Because the applications are so different, a material which is suitable for building an orbital elevator is not necessarily suitable for building a monofilament cutter. I would also offer that a real monofilament cutter would be specifically engineered to the task and not a windfall of a second technology. The applications are just too different and don't boil down to merely 'strength' and 'tiny size.'

Having addressed the zither, I’ll talk about a second major point which suffered from too much description and too little plausibility. I’ll try to describe this part of the story without giving away a major plot point.

In this section of the story, someone is trying to use a colossal factory hovering in orbit above the planet to take a proton and expand it from a point-like object into a three dimensional structure. The author makes the case that a simple object, like a proton, which is essentially point-like when viewed from our place in spacetime, is actually an object with extensive higher dimensional structure and that some technological application can be carried out where this higher dimensionality can be expanded so that it can be manipulated in our three dimensional space. He even makes the case that these higher dimensions contain considerable volume and may be big enough to harbor entire universes. As he repeatedly emphasizes, a whole universe of complexity, but only a proton’s worth of mass.

To start with, I have little to say about string theory. For one thing, I don't really understand it. A major argument in string theory is that the tiniest bits of space in our universe can actually have seven or eight additional dimensions hidden away where we three-dimensional creatures can't see them. Perhaps that's true, but as yet, string theory has made no predictions that have been verified by experiment. None!

From the standpoint of a person, it's certainly true that a proton might seem point-like, but this is actually false! Unlike an electron, which is truly point-like for all that physics currently understands of it, a proton has a known structure that occupies a definable three-dimensional volume. The size here is tiny, only about 10^-15 meters, but it is a volume with a few working parts. A proton is constructed of two "Up" quarks and a "Down" quark held together by the strong nuclear force (making the proton a baryon with spin 1/2, and so obeying Fermi statistics).

I have considered that the use of a 'proton' in the story is perhaps a missed translation and that the author really wanted a point-like particle such as a quark (which is never observed outside of bound sets of two or three) or an electron (which can be a free particle). After writing the previous sentence, I spent some time looking at translator notes for this book, and I found that the choice of the Chinese word for 'proton' facilitated a word play in the author's native language that did not quite translate to English. I won't detail this word play because it gives away a plot point of the book that is beyond the scope of what I wish to write about. A lesson here is that the author's loyalty is definitely to his literature above scientific accuracy.

One significant issue that must be brought up here is that 'point-like' is a relative description when you start talking about particles like these. An electron is fundamentally point-like, but it is also quantum mechanical, meaning that it tends to occupy a finite volume of space that varies quite strongly depending on the shape and boundaries of that space, as given by the wave function. Reaching in and 'grabbing' the electron reveals what appears to be a point, but that 'point' can be distributed in non-intuitive ways across the volume it occupies. We have no real capacity to describe it as having a shape, and one might certainly consider that 'point-like' dimensionless object to be a singularity in exactly the same way that a black hole is a singularity. I have half a mind to say that the only reason an electron is not a black hole is that the diameter of the volume it occupies, as described by the uncertainty principle, is larger than its Schwarzschild radius. This statement is limited by the fact that quantum mechanics doesn't play well with general relativity, and the limits of the Schwarzschild radius may not coincide with the limits of the uncertainty principle; both are physically true, but they each have a context where they are most valid, and no unifying math exists to link one case directly to the other.
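
Just to show how lopsided that comparison is, here is a back-of-envelope sketch comparing the electron's Schwarzschild radius to its reduced Compton wavelength (the latter standing in, loosely, for the quantum scale the uncertainty principle assigns it):

```python
# Electron: Schwarzschild radius vs. reduced Compton wavelength (a rough stand-in
# for the quantum-mechanical scale the uncertainty principle assigns to it).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg

r_schwarzschild = 2 * G * m_e / c**2
lambda_compton = hbar / (m_e * c)

print(f"Schwarzschild radius:       {r_schwarzschild:.2e} m")    # ~1.4e-57 m
print(f"Reduced Compton wavelength: {lambda_compton:.2e} m")      # ~3.9e-13 m
print(f"Ratio: ~{lambda_compton / r_schwarzschild:.1e}")          # ~44 orders of magnitude
```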

Now then, in 3BP, a point-like elementary particle with the mass and dimensionality of such a particle is shifted by a machine so that its higher-dimensional properties are exhibited as a proportionate volume or geometric shape in three dimensions. In the first flawed experiment, the particle expands into a one-dimensional thread which snaps off and comes wafting down everywhere onto the planet in nearly weightless tufts that annoy everybody. After the author spent so much time laboring over the invisible nature of a monofilament wire, he decided that a one-dimensional thread could be visible! Note, a monofilament wire has a small but finite width, while a one-dimensional line has no width at all! Which is 'thinner?' The 1D line is thinner by an infinite degree!

In the next flawed experiment, the higher dimensions of the point-like particle turn out to contain a super-intelligent civilization which realizes that the particle where they reside is about to be destroyed during the experiment. This civilization distends the structure of their particle into a huge mirror, which they then use to focus sunlight as a weapon onto the surface of the planet in order to attack their enemy, whom they recognize to be the scientists running the experiment, and they start leveling cities! This is creative writing, but the author makes the explicit point that the mirror-structure formed from the elementary particle, while big, has only the mass of that particle, which is infinitesimal. If you're versed in physics, you'll see the first problem: light has momentum (Poynting vector!). When you reflect a beam of light, you change the direction of the momentum in that light. Conservation of momentum then requires the existence of a force causing the mirror to rebound. Reflecting enough light to thermally combust a city is a large intensity of light, easily megawatts per square meter. An electron has a minuscule mass of about 10^-31 kilograms (or 10^-27 kg if you insist on it being a proton). Force equals mass times acceleration, and pressure equals force per area, where light intensity is easily converted to pressure and pressure to force. When you rearrange Newton's second law to solve for acceleration, the big 'force' number ends up on top while the tiny 'mass' number ends up on the bottom of the ratio, giving a catastrophically huge value for the acceleration (conservatively on the order of 10^18 to 10^21 m/s^2 even at an intensity of only a watt per square meter on a square-meter mirror, and far more at city-burning intensities). That's right, the huge mirror with the mass of a 'proton' accelerates away from the planet at a highly relativistic rate the instant light bounces off of it!
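
A sketch of that last ratio, with the intensity, mirror area, and particle mass all chosen for illustration (and the 'mirror' assumed to reflect perfectly):

```python
# Acceleration of a proton-mass 'mirror' due to radiation pressure.
# Perfectly reflected, normally incident light exerts a force of 2 * I * A / c.
# Intensity, area, and mass values are illustrative assumptions.
c = 2.998e8               # speed of light, m/s
intensity_W_m2 = 1.0      # a deliberately gentle 1 W/m^2 (sunlight is ~1000x this)
area_m2 = 1.0             # one square meter of mirror
mass_kg = 1.67e-27        # proton mass

force_N = 2 * intensity_W_m2 * area_m2 / c
acceleration = force_N / mass_kg                    # Newton's second law
print(f"Acceleration: ~{acceleration:.1e} m/s^2")   # ~4e18 m/s^2, and that's the gentle case
```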

Yeah, I know, physicists and science fiction authors don’t often get along even though they both pretend to love each other.

I had significant problems with the idea of making a single electric charge into a reflective surface, but I've rewritten this point twice without being satisfied that the physics is at all instructive to my actual objection. In a real reflective surface, like a mirror, the existence of the reflected light wave can be understood as coherent bulk scattering from many scattering centers, which are all themselves individual charged particles. In this sort of system, the amount of reflected wave quite obviously depends on the amount of charged surface present to interact with incoming waves. The amount of surface available to reflect is conceptually dodgy when you're talking about only a single charge, no matter how big an area this charge is spread out to cover. This is why a half-silvered mirror reflects less intensity than a fully silvered mirror. Though I have failed, in my own opinion, to encapsulate the physical argument well, an individual charge has a finite average rate at which it can exchange information with the universe around it, and reflecting photons en masse is an act of exchanging a great deal of information for such a tiny coupling. Since the quantum mechanics of scattering depends on a probability of overlap, the probability of simultaneously overlapping with many photons is small for only a single charge. The number densities are overwhelmingly different.

All said, the mirror is likely a very transparent mirror unless it has more than one charged particle’s worth of charge.

Despite all this analysis, I don't believe that it detracts from the story. I really didn't mind the flight of fancy in a well-written piece of fiction. It's unlikely that the casual reader will ever care.