The problem I have is a very simple one. You cannot argue with a flat earther. Flat earthers are so certain that they have the truth of it that they simply cannot be argued out of their stance. The belief is so deeply wrong and so deeply contrary to reality that a good portion of the thinking supporting the claim involves completely discarding any evidence that anyone might point to. They will not accept satellite photos. They will not accept videos. They will not accept pictures obtained by any source that is “part of the conspiracy.” The evidence has to come through their own eyes. Even then, if you look at that CNN article, it seems that you can’t tell them where to look and expect them to see what’s in front of their faces.

As a scientist, I did question and I do question. I understand why we think the Earth is round and I’ve tested this in the world around me. What I would say, you can read here, here, here and here. I have gone and looked in the best way I can, and I can tell you that aspects pointing to the roundness of the Earth are indisputably visible. The flat earth model that is cited in the CNN article, the disc with an ice wall, is complete trash; it does not stand up to physical inquiry at even the most basic level. You can junk this model just by asking what path the sun should take across the sky as opposed to what path the sun actually takes, and that’s enough. It appalls me deeply that these people think of themselves as scientists and are so completely inept at checking their models.

Read this quote from the CNN article and cringe:

But most adherents say they’re just curious, as all good scientific minds should be. “We love science,” Davidson insists.

That is the biggest flat-out lie that has ever been uttered. No, you don’t. No, you *absolutely* do not! Davidson says this first:

“Let’s just say there is an adversary, there is a devil, there is a Satan. His whole job would be to try to convince the world that God doesn’t exist. He’s done an incredible job convincing people with the idea that we’re just on a random speck in an infinite universe.”

There are reasons why people think such a thing about this universe. It is based on physical observations, not random supposition. The subject is so big that you cannot throw everything that has been seen before out the window and pretend like you can start climbing the mountain from its base all by yourself. The mountain of science is completely insurmountable if you do it alone.

Part of the reason for my willingness to post today about flat eartherism is because I watched a documentary on Amazon Prime this weekend about Mike Hughes, called “Rocketman.” I mentioned him in one of my previous posts on the flat earth; he’s the fellow who built the steam rocket and flew about 1,800 ft into the air in the name of figuring out what the shape of the Earth is.

I feel truly truly sorry for this fellow. I won’t say anything about his character, but watching him in that movie made me very sad. If he should google his name and end up reading this post, all I can say is… he really doesn’t deserve death threats. I don’t believe that anyone does. However, he is not merely wrong; he is prodigiously wrong. Mike, if you read this, hurry up and build that bigger rocket… the sooner you can take a flight higher, the sooner you will see exactly how wrong you are.

Fact is, I say hurry up SpaceX, hurry up Blue Origin, hurry up Virgin Galactic. The sooner people are flying where they don’t have to work at making the measurement, the sooner we can put this flat earth bullshit behind us and get on with more important things.

I will counter one of the Mike Hughes claims that I saw in the documentary. There is a Mike Hughes talk show appearance where he says that the thing which convinced him that the Earth is flat was that airliners flying from one place to another don’t have to tip their noses forward in order to navigate the curve going around the world, therefore there is no curve. This idea is totally wrong.

There are multiple reasons why airplanes don’t have to make this maneuver.

The first reason is that airplanes depend quite strongly on air pressure in order to fly; the geometry of the atmosphere confines where airplanes can go, restricting them ultimately to the densest, roughly 40,000 ft thick layer near the surface. An airfoil doesn’t work without air. And, since the atmosphere follows the curve of the Earth, airplanes have to stay within it.

Second, airplanes are kept in the air by balancing the force of lift against the force of weight. One must note that the center of lift and the center of mass, where the force of gravity acts, are not necessarily in the same spot, meaning that interactions between lift and weight can put torque on the body of the airplane. Weight changes direction as the airplane goes around the curve, creating a natural torque if lift is not also adjusted. Consider the design of a Cessna 172:

This airplane has an interesting design, don’t you think? The center of mass is somewhere inside the body of the airplane directly below the center of the wing while the center of lift is between the wings at the top of the roof. The airplane’s mass hangs below the wing. As you go over the curve, the weight of the airplane continues to point toward the center of the planet, creating a torque since the lift vector is at a lever arm from the center of mass, bringing the nose forward until the vectors for lift and weight are coaxial again. The pilot needs to make no adjustments because physics built the adjustment straight into the design of the airplane.

Note, the decomposition of the Red lift force makes the Green vector bigger than the Yellow one. At the same lever arm from the center of mass (the big green spot), the Green vector will dominate, creating a net torque that tends to tip the nose of the plane forward until Red and Blue point along the same line again and balance. In most normal situations, the angle between the Blue (weight) and Red (lift) vectors is much smaller than illustrated here, meaning that the analogous Green vector is usually much bigger than the matching Yellow vector (which would have nearly zero length). The process is totally passive; the body of the plane will simply tip forward to follow the curve. In more advanced aircraft than this one, the management of flight characteristics is performed electronically, meaning that an airliner with a glass cockpit is using its computers to keep lift and weight balanced, totally hiding whether the aircraft is even dynamically unstable and, further, building the adjustment to follow the Earth’s curve straight into the calculation simply to keep the aircraft flying.
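To put a number on how gentle this passive correction is, here is a quick back-of-the-envelope sketch. The cruise speed and radius are my own illustrative values, not figures from the post: an aircraft holding constant altitude over a sphere of radius R at speed v must pitch down at an angular rate of v/R.

```python
import math

# Rough estimate of the pitch-down rate needed to follow Earth's curve.
# Both numbers below are my own illustrative assumptions.
EARTH_RADIUS_M = 6.371e6   # mean radius of the Earth, meters
cruise_speed = 250.0       # typical airliner cruise speed, m/s (~560 mph)

# Following a circle of radius R at speed v requires an angular rate v/R.
pitch_rate_rad_s = cruise_speed / EARTH_RADIUS_M
pitch_rate_deg_hr = math.degrees(pitch_rate_rad_s) * 3600.0

print(f"required pitch rate: {pitch_rate_deg_hr:.1f} degrees per hour")
```

The answer comes out to roughly eight degrees per hour, far too slow for any pilot to notice; the passive torque described above supplies it automatically.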

Several additional thoughts. First, YouTube has done huge damage to our ability to view reality in the world around us. Combine a D+ understanding of physics with a dependence on “Googling it” and damn, you are screwed. The shape of the Earth was understood a long time before the infrastructure for a crazy huge government conspiracy ever existed – never mind that organized religion was the earliest infrastructure capable of crazy conspiracies, and the irony that flat earthers also seem to include a large number of born-again Christians. Second, you have to wonder if the rise in flat eartherism isn’t also rooted in the explosion of marijuana as a legal recreational drug. One of the basic symptoms of marijuana overuse, even overdose, is paranoia. Conspiracy theory thinking is rooted in paranoia. I make the basic observation that people who use marijuana a lot also tend to be unusually paranoid, if not quite psychotically so. Does the population Venn diagram for marijuana use also include an unusual number of modern flat earthers? If Eddie Bravo is a typical example, I’d say yes. In legalizing, are we shooting ourselves in the foot because we’re normalizing people dosing themselves recreationally with a drug that predisposes them to thought processes that can become weaponized in an anti-reality bent? One wonders…

I end by shaking my head.


Moreover, understanding why these points imply roundness depends on both synthesis of the observations together and on use of Occam’s razor to suggest that imaginative flat earth alternatives are more complicated than necessary or would have unintended side-observations.

This one is an unavoidable first-place observation that is profoundly hard to ignore. It’s so hard to ignore that flat-earthers turn backflips to try to add it to their models. This is why the U.S. is mostly awake at the time when China is mostly asleep. This is why that first day you travel to Europe for a vacation, you’re so completely screwed up: the sun is up, but your body is demanding that you sleep. How simple a thing, the sun rising at different times everywhere on Earth. The time of sunrise is delayed by traveling west and advanced by traveling east; if you’re on an airliner crossing the Pacific Ocean, going west can prolong your daylight hours, giving you daylight through a period which would otherwise contain both a day and a night, while going east reverses that, giving you a night of only a couple of hours.

The reason it works is quite simple. The curve of the round Earth hides the sun from some locations and not others. If the Earth were flat, a sun that radiates in all directions equally would light the entire plane of the Earth at the same time because there would be no place on the surface hidden from it. That’s the problem with being flat: all of a flat surface is visible at once! The east-west delay of sunrise is due to the rotation axis of the planet; that axis is strung from the north pole to the south pole and is *nearly* at right angles to the direction pointing to the sun. I do say “nearly” because the deviation of 23 degrees gives us seasons.
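The east-west delay can be quantified with nothing but arithmetic: the Earth turns 360 degrees in 24 hours, so sunrise shifts by about four minutes of clock time per degree of longitude. A small sketch, where the city longitudes are my own illustrative values:

```python
# Sunrise shifts ~4 minutes per degree of longitude (360 deg / 24 hr).
MINUTES_PER_DEGREE = 24 * 60 / 360   # = 4.0

def sunrise_delay_minutes(lon_east_a, lon_east_b):
    """Roughly how much later the sun rises at longitude b than at a
    (east longitudes positive); ignores latitude and season."""
    return (lon_east_a - lon_east_b) * MINUTES_PER_DEGREE

# New York (~74 W) vs. Los Angeles (~118 W): the sun rises about
# three hours later on the west coast.
print(sunrise_delay_minutes(-74.0, -118.0))
```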

If you have not appreciated the effect of jet lag as a direct consequence of the roundness of the Earth, shame on you!

In North America, why is daytime long in the summer, but short in the winter? It’s a consequence of the curvature of the Earth as associated with the tilt of the Earth’s rotation axis relative to the plane of the ecliptic!

When I said that the sun rises at different times everywhere on Earth, I did mean it. In addition to the time zones, there is a seasonal variation to the sunrise time as well which is linked to your latitude. The reason this point is important is because time zones are reproducible by a planet that is shaped like a cylinder… this isn’t flat, but it’s also not technically round. The north-south variation of the daylight period has to do with how your local reference measurement of “flat” varies on the surface of the Earth relative to how you travel around the rotation axis of the planet. To a good first approximation, there are only two days a year when everyone along a single longitude line has the same sunrise… the days of the equinoxes, which is to say the first day of spring and the first day of autumn. This occurs because the Earth reaches a place in its orbit where the 23 degree tilt of the rotation axis is in a direction that is exactly perpendicular to the direction toward the sun, canceling out the effect of the tilt on that day so that the north pole is neither leaning toward nor away from the sun.
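The latitude dependence of day length can be checked with the standard sunrise-equation approximation (a textbook formula, not something from the post): cos H = −tan(latitude) × tan(solar declination), where H is the hour angle of sunrise and 15 degrees of hour angle corresponds to one hour of clock time.

```python
import math

def day_length_hours(lat_deg, decl_deg):
    """First-order day length from the sunrise equation:
    cos(H) = -tan(latitude) * tan(solar declination).
    Ignores refraction and the finite size of the solar disc."""
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl_deg))
    if x <= -1.0:
        return 24.0   # midnight sun: the sun never sets
    if x >= 1.0:
        return 0.0    # polar night: the sun never rises
    return 2.0 * math.degrees(math.acos(x)) / 15.0

print(day_length_hours(40.0, 0.0))     # equinox at 40 N: exactly 12 hours
print(day_length_hours(40.0, 23.44))   # midsummer at 40 N: ~14.8 hours
print(day_length_hours(80.0, 23.44))   # high Arctic midsummer: 24 hours
```

Note that at equinox (declination zero) the formula returns exactly twelve hours for every latitude, which is the single-sunrise-along-a-longitude-line effect described above.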

You will note that this explains very clearly the track which the sun takes across the sky based on latitude. At far north latitudes, the sun never rises very high when it’s up and it takes a grazing path across the southern horizon before it sets. The opposite is true in the far south, where the sun takes a grazing path across the northern horizon. At other less extreme latitudes, the path of the sun can deviate either somewhat north or somewhat south of the apex of the sky; for example it always favors traveling south of that apex when seen from the United States or Europe, or north of it when seen from South Africa or Australia. At equinox, when seen from the equator, the sun rises dead in the east, goes straight overhead through the apex of the sky and sets dead in the west. At midsummer, in Iceland, the sun rolls around the horizon in a clockwise manner without ever quite setting. This is because the north part of the rotation axis of the Earth tips toward the sun during the summer just enough that no part of the local surface is obscured from sunlight throughout the day.

Flat earthers misinterpret the behavior of the sun during the day near the north pole region as evidence of a flat earth in large part because the behavior in that local region -only- is a close approximation of what a flat disc-shaped planet, like a record turning around the Earth’s axis, can do during the day at the appropriate season. They then ignore the fact that mid-latitudes behave more like a turning cylinder and frequently omit that daylight in the extreme south behaves like a disc turning in the opposite direction from the north (if the sun is visible there). Only a globe can tie all these local behaviors together. The fact that the sun disappears below the horizon for months on end during winter in the far north is conveniently ignored by flat earth arguments… without a rounded curve to the Earth for the sun to hide behind, the sun would never be able to fall below any horizon.

The track the sun takes across the sky, and therefore the time it rises in the morning, relative to your latitude of observation throughout the year is best explained by a round Earth.

Every planet or moon that you can look at through a telescope is always circular in shape, seen from every possible angle. It may be possible to argue that the moon is also a disc whose face is pointed toward the Earth and which is simply far enough away that it doesn’t appear to deform when seen from different places on the Earth, but this ignores the hemispheric shadow patterns of the moon lit by the sun. Full moon always rises just as the sun sets; half moon is near its apex when the sun either rises or sets; new moon sets at nearly the same time as the sun. The shadowing of the half or quarter moon shows quite clearly that the moon has a spherical shape and that sunlight is obscured from some surfaces of the moon by the shape of the moon itself, in a pattern that can only be spherical. All large planetary celestial bodies visible through an average telescope are apparently spherical by the same argument.

The argument that the Earth *only* has a disc-like shape in light of this is an argument of undue exclusivism. Why should the Earth, which is a body known to be bigger than the spherical Mars based on gravitational mass measurements, be a different shape than Mars, when it is known that all observable planet-sized masses are spherical in shape? Why should Earth be different? There are some tiny moons that have non-spherical shapes, but these are known to be much less massive than Earth. Truth is that nobody has ever looked through a telescope and seen even one (continuous) astronomical body with a flat, disc-like shape. (Galaxies don’t count because they are entirely discontinuous structures.)

When the Earth comes between the sun and the moon, a circular shadow can be seen to cut across the face of the moon, rendering the moon dark. The aspect of this shadow is always circular, no matter the inclination of the moon in the sky when it happens. If the moon sits near the horizon when it moves into the Earth’s shadow, the edge of the Earth’s shadow appears curved. If the moon is high up near the apex of the sky when it passes into the Earth’s shadow, the edge of the shadow appears to have the same curve. A disc would have a round shadow from one aspect, but differing curves from other aspects and at least one aspect where a shadow has a definite straight edge. The Earth casts no such shadow!

The shadow the Earth casts on the moon always has a rounded edge. This is only possible if the Earth has a globe-like shape which always projects shadows of circular aspect.

This is a subtle but very important point. Suppose you’re on a hike at the local open-space. You look off and marvel at the line of the horizon; the edge is quite sharp. If you try the same experiment in the window seat of an airplane (suppose you’re a deviant who unplugs from your personal screen during the airplane flight in order to actually look at the world around you for a moment) you would see that the horizon is no longer sharp. It becomes diffuse and very hard to see. Fact is that the horizon can only become sharpish again if you get up into space out of the atmosphere!

The distance to your local horizon on Earth depends on your altitude over the surface. The higher you go, the farther away you can look before the curve of the Earth hides what lies beyond. The complication is that the atmosphere is imperfectly transparent; over a distance of miles, particulates in the air and heat fluctuations tend to scramble up the straight lines that light rays would prefer to travel. As such, the farther your horizon is from you, with intervening atmosphere all the way, the harder it is to see clearly. This variation of horizon clarity as a function of your altitude is a direct consequence of the roundness of the Earth, and you can observe it yourself by comparing how the horizon looks depending on whether you’re standing on the ground or sitting in an airplane. The sharpest horizon will always be the one encountered at the lowest observation altitude.
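The altitude-to-horizon relationship is the standard geometric approximation d ≈ √(2Rh), valid when the eye height h is tiny compared to the Earth’s radius R and ignoring refraction. The sample altitudes below are my own illustrative choices:

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean radius of the Earth, meters

def horizon_distance_km(height_m):
    """Geometric distance to the horizon for an eye height h,
    using d = sqrt(2*R*h); atmospheric refraction is ignored."""
    return math.sqrt(2.0 * EARTH_RADIUS_M * height_m) / 1000.0

print(f"standing on a beach (1.7 m):  {horizon_distance_km(1.7):.1f} km")
print(f"Empire State deck (~370 m):   {horizon_distance_km(370):.0f} km")
print(f"airliner cruise (~11,000 m):  {horizon_distance_km(11000):.0f} km")
```

The beach observer sees a razor-sharp horizon only a few kilometers away, while the airline passenger is trying to resolve an edge hundreds of kilometers off through all that intervening air, which is exactly the clarity gradient described above.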

Now, this shows an important effect of a flat earth that’s rarely ever discussed. If the Earth is flat, the horizon line seen while standing on the surface is pushed out to the edge of the disc, which is as far away as the edge can possibly be. The path light must travel to bring an image of that distant edge runs through a huge amount of atmosphere (even flat earthers would have a hard time denying the existence of the very air they breathe). This means that images of the horizon line would be very scrambled up and difficult to see. So, on a flat earth, the surface of the Earth would merge smoothly with the sky and you would likely not really see a line dividing them. Note, you would need to be away from obstructions like mountains and hills to see this… you would need to be on a flat plain with an unobstructed view of the “horizon.”

Given the example of the above image, we’re all sort of conditioned to think that the Earth can never actually look curved while standing on the surface. When the horizon is very close, it’s true: the curve is very slight and difficult to see. There is, however, a conditional way in which you can start to see some things you may not have expected. If you use a true straight edge to compare with what your eyes might otherwise think to be flat, you will find curves… for instance, looking at the surface of the ocean with a straight edge in hand. Here, you might suddenly realize that the ocean can be drawn up into a bulge over a distance of miles, perhaps due to pressure shifts or the vagaries of the tide. More than that, if you do this experiment at the top of a skyscraper, say the Empire State Building in New York looking east, the deviation predicted by the actual curvature of the Earth relative to the sight horizon is appreciable enough to be visible on comparison to a straight edge (as in, ~2 millimeters of gap at either end of a ruler held in front of your face). Sitting in the pilot seat of an airliner or a jet fighter at cruising altitude, with a full view of the horizon, the curve should be qualitatively visible even if the horizon line is very diffuse in the distance.

I started doing calculations about this after I noticed a curve to the ocean surface while I was standing on the sea shore. I was trying to determine how visible the curve of the earth actually is and it turns out that it is visible to an unexpected degree at relatively low altitudes if you have a good straight edge to compare it to. As I said, you can absolutely see it from a skyscraper if you use the correct tools to look.

Edit 9-24-19:

When I started doing calculations on this, I also started to look for it deliberately myself. I particularly tried to look out of the window whenever I got to fly in airliners, but I failed most of the time because the clarity of the horizon is just very bad without absolutely perfect conditions. Recently, I got lucky and managed one flight where I had perfect conditions.

Look closely at the following picture and tell me what you see:

Looks pretty flat there, doesn’t it? In this particular case, I got very clear skies with low-lying clouds marking exactly where the horizon is. There are some scattering effects to the light which make the horizon very difficult to see – to be expected at this altitude in an airplane (note the previous point); I tried this experiment both with and without clouds, and the clouds here only serve to make the effect more visible. This picture was taken with a plain old Samsung Galaxy S8 pressed up against the window. No special filters, no fish-eye lenses. To the unaided eye, you may not see it, but check out what happens when you add a computer-perfect straight edge for comparison:

The horizontal red line was applied to the picture in PowerPoint and shifted downward until it grazes the horizon in the center of the picture. If you then follow the rim of the horizon to the edge of the picture, you’ll note that there’s a gap between the red line and the horizon! The gap is marked here with the small yellow bracket on the right edge. On the opposite side, near the airplane wing, the red line is also above the horizon, showing the horizon to be a distinctly convex curve. This, folks, is the curve of the Earth seen from the window of an airliner at cruising altitude!

If you doubt me, I beg you, go and do the same experiment. Look for yourself! It isn’t hard. Further, this is really a tiny picture… when you see it with your eyes, you’ll note that it’s easier to see because of the actual large size of the sky! That it appears in this picture is a testament to how visible it actually is.

For the inevitable fool who claims that the window of the airliner acts as a fish-eye lens, look at the wing of the airplane. It’s straight, or maybe even bent in the wrong direction to suggest a fish-eye. Some of these windows do bulge and deform, but you can shift your head around and see for yourself where it does, then look out of a region of the window that minimizes this distortion. The problem with random fish-eye lens claims usually comes down to the pesky fact that foreground features in the picture have straight edges, which wouldn’t happen with a true fish-eye. Moreover, you could always go to the open-air observation deck of the Empire State Building and look for the curve from there… no windows to block you then!

Why is it called a cyclone? Because it has a distinct cyclic motion… it turns in a circle!

A cyclone north of the equator turns in a counter clockwise fashion when seen from above. A cyclone south of the equator turns clockwise. This always occurs. Why?

The typical party line of what causes this is simply the “Coriolis effect,” which is absolutely true. The reason Coriolis effect occurs is because of the slight decoupling between the atmosphere and surface of the planet on the large scale of the storm. When examined in the northern hemisphere, as the Earth rotates (in a right-hand sense going from West to East as seen from space) at the southern edge of the storm, the surface of the Earth is traveling slightly faster to the East than at the northern edge. As a result, the surface of the Earth *drags* the southern edge of the storm just a bit more strongly than the northern edge. The established fluid mechanics that hold the storm together reflect this by creating a gyre that goes fast to the East along the south edge, circulates to the north, swings around back south, and is accelerated again to the East at the southern edge where the surface drags on it most strongly. Many people misunderstand that Coriolis Effect is the same thing as Coriolis force, which it actually isn’t quite. The reason for the difference is because a basic assumption of the pure Coriolis force is that objects experiencing Coriolis force are completely decoupled from the rotating frame when they are moving, which is actually not true on real Earth where the atmosphere is only loosely decoupled from the surface… a rotating storm rotates because it can’t feel the full Coriolis force since the atmosphere isn’t moving totally decoupled from the surface. You could almost say that the highest altitude part of the storm is feeling the fullest Coriolis force, while the lowest altitude part is being sheared from the top of the storm by its coupling to the surface.
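The hemisphere dependence can be made concrete with the standard Coriolis parameter f = 2Ω sin(latitude) — a textbook quantity, not the author’s notation — whose sign flip across the equator is exactly the circulation reversal described above. The sample latitudes are my own:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude).
    Positive in the northern hemisphere, negative in the southern."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

print(coriolis_parameter(40.0))    # > 0: counterclockwise cyclones
print(coriolis_parameter(-40.0))   # < 0: clockwise cyclones
print(coriolis_parameter(0.0))     # 0 at the equator: no preferred spin
```

No flat, one-sided disc can produce an f that changes sign as you cross the middle of the map; a rotating sphere does it automatically.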

The reason flat Earth models can’t handle Coriolis effect is because the flat Earth can’t reverse the direction of storm circulation when observed south of the equator. In the southern hemisphere, the northern edge of the storm is dragged most strongly to the East, causing clockwise circulation by the same mechanism I outlined above.

How do you observe all this without a satellite photo? This would be fairly subtle because you need to know where you are on the surface in relation to the center of the storm, to know which direction the wind should be expected to travel. In other words, it would take some dedicated thought and careful observation, which flat earthers don’t seem to do that well!

As a small aside, the direction that water circulates as it goes down the toilet or is let out of the sink is actually not a reflection of Coriolis effect, regardless of the popularity of the claim. You can establish circulation in either direction under these circumstances because the coupling of the water to the surface here is very strong and there are only tiny differential forces on the water across the diameter of your toilet bowl, so effects of local water flow tend to dominate over effects of the rotating Earth.

The constellations of the zodiac are only visible for half a year. Not the same half a year for each sign, to be sure, but only half a year at a time. Why is this? Because the period of time that we mark as a day is totally decoupled from the period of time that it takes for the planet to go around the sun. A year takes 365.25 something or other days (New Year’s Eve is the most arbitrary and ridiculous holiday ever… the Earth basically never actually finishes its orbit around the sun when the Ball drops in Times Square). For half of the Earth’s orbit, the sun lies in the same direction as certain zodiac constellations when they would otherwise be up in the sky, washing them out. Those constellations then rise at night for the other half of the Earth’s orbit, visible when the sun is no longer in the way. Literally, the geometry of the Earth and sun blocks you from seeing certain things at certain times, but not at other times. This in itself could be explicable on a flat Earth, except for the next part.

Certain constellations, depending on whether they lie in the far northern or the far southern part of the sky, are never visible from certain locations on the surface of the Earth. The north star, Polaris, is not visible from south of the equator. The Southern Cross, on the other hand, is not visible from north of the equator. When you’re at the north (or south) of the planet, the body of the planet blocks constellations in the south (or north) part of the sky. Traveling smoothly from north to south and seeing certain constellations become visible above the local horizon is a direct result of the Earth having curvature.
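On a globe, which stars you can ever see follows from simple spherical geometry: a star’s highest altitude above your horizon is 90° minus |latitude − declination|, and its lowest is |latitude + declination| − 90°. A sketch of that rule, using approximate declinations I looked up (treat them as assumptions):

```python
def star_visibility(lat_deg, decl_deg):
    """Classify whether a star at a given declination is ever visible
    from a given latitude (pure spherical geometry, no refraction)."""
    highest = 90.0 - abs(lat_deg - decl_deg)   # altitude at upper culmination
    lowest = abs(lat_deg + decl_deg) - 90.0    # altitude at lower culmination
    if highest < 0.0:
        return "never visible"
    if lowest > 0.0:
        return "circumpolar"
    return "rises and sets"

# Approximate declinations: Polaris ~ +89.3, Acrux (Southern Cross) ~ -63.1
print(star_visibility(45.0, 89.3))    # mid-northern latitude: circumpolar
print(star_visibility(-35.0, 89.3))   # Australia: Polaris never visible
print(star_visibility(45.0, -63.1))   # Southern Cross from 45 N: never visible
print(star_visibility(0.0, 89.3))     # equator: barely rises and sets
```

No flat model reproduces both exclusions at once; the curvature term |latitude − declination| is doing all the work.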

I’ve included this point because it’s true. I’ve also specialized this argument to suggest a mode of observation. Leave the telephoto lens at home. Just watch for it as you drive across the prairie toward a big mountain range.

This is the old “mast of a ship disappearing over the horizon” argument. The one big problem with this argument is that it depends on you making the observation in such a way that you don’t pick up the non-linearities possible in the travel of light. For the region of sky very near to the horizon, light grazing along the planet’s surface has a maximal probability of interacting with abrupt shifts of density in the air caused by local coupling of the atmosphere to the ground (say by heat radiating off the surface, or cold water sucking heat out of the air). Within these regions, light rays can be uniformly bent so that images of the sky are projected apparently into the ground or images of the ground are projected up into the sky. With a telephoto lens, you can see these images, which may be originating from somewhere beyond the local horizon. This is of course part of why the horizon line becomes increasingly hard to see when examining it from high over the surface as mentioned before; light doesn’t always travel in straight lines and the longer the light interacts with the atmosphere, the more likely it is to bend in some unpredictable fashion.

Looking for mountains coming up over the horizon can get the view you’re looking for up above this cluttered region where the light does weird things. (Keep in mind that Flat Earthers use weird optical effects as an argument for the flatness of the Earth).

I’m concluding here with an example of a Fata Morgana – the cool name for a type of mirage. In this kind of mirage, the surface of the ocean is hotter than the air above it, meaning that the accompanying density change through the volume of the air places high-density air above low-density air. The shift in the index of refraction allows certain rays of light at very shallow angles of incidence to experience “total internal reflection,” where the air can act as a mirror. There are a great many photoshopped fake Fata Morgana pictures showing boats hovering above the water. To be a true Fata Morgana, the image will include both the actual boat, seen directly, and, beneath it, its reflection off the density-shift mirror formed in the air over the water, making it look like twin boats joined at the bottom. You can see Fata Morgana-type mirages for yourself on a hot day driving down a long, flat, empty road. It’s simply an image of the region of sky just above the horizon. What this mirage proves is simply that you must always be careful with your eyes. Some things you think you see are true, while some are false. I’ve listed a series of true observations above, pending an extraordinary reinterpretation that I know (from satellite data) not to be forthcoming.

I may or may not add to this list as I encounter other unique, observable points.

With the steady improvement of my skill using GAMESS, I was able to find the chemical reaction I was looking for in that earlier post. The basis set strongly influenced the reaction pathway in this one. Here is the transfer of a proton in water from one hydroxide to another.

I won’t talk a huge amount about this. The intent was simply to put up the pictures. This is the reaction by which protons are transferred around in water. This is the alkaline version of the reaction. I haven’t looked for it yet, but I suspect there is a homologous reaction under acidic conditions, involving hydronium only. It may be possible for this reaction to occur directly between hydronium and hydroxide, but I’m not sure there’s a stationary state possible in that system: the electrostatics will probably create a very steep potential energy gradient between the two molecules which may not have a stationary point in the middle (may depend on the solvent model to introduce screening).

The lesson: protons are probably never actually free in water.

This ab initio chemical reaction simulation took a huge amount of time and effort to generate.

Depicted here is an attack by hydroxide (-1 charge) on aminophosphate (-2 charge). The black ball is phosphorus, red is oxygen, white is hydrogen and blue is nitrogen. As should be evident from my usual mode of operation, the surface is an equipotential surface containing 90% of all electron density. The coloration of that surface is in electrostatic potential, with blue as negative and red as positive. Dashed lines are hydrogen bonds, single sticks are single covalent bonds and double sticks are double covalent bonds. The reaction produces singly protonated phosphate (-2 charge) and deprotonated hydroxylamine (-1 charge). A polarizable continuum model is in use in the simulation to place the reaction effectively in water (you need this to help dampen the repulsive forces that the charged portions of these molecules exert on each other). This structure was calculated using the Pople 6-31G** basis set, literally at the limit of my computer, in order to include some polarizability in the atoms and to create enough variational flexibility in the diffuse region of the model to pick up the interactions between the molecules approaching each other. This basis set is very big and costly, meaning that I can only do fairly small systems with it on my computer using this strategy.
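For readers who want to attempt something similar, a GAMESS input header for this kind of run might look like the sketch below. This is my own guess at a plausible setup, not the author’s actual input deck: RUNTYP=SADPOINT requests a saddle-point (transition-state) search, the $BASIS group spells out 6-31G**, $PCM turns on the water continuum, and ICHARG=-3 assumes the combined hydroxide (-1) plus aminophosphate (-2) system.

```
 $CONTRL SCFTYP=RHF RUNTYP=SADPOINT ICHARG=-3 COORD=CART $END
 $BASIS  GBASIS=N31 NGAUSS=6 NDFUNC=1 NPFUNC=1 $END   ! 6-31G** (d,p polarization)
 $PCM    SOLVNT=WATER $END                            ! polarizable continuum water
 $STATPT HESS=CALC NSTEP=200 $END                     ! initial Hessian for the TS search
 $DATA
  (title line, symmetry group, and Cartesian coordinates go here)
 $END
```

The starting geometry would need to be a reasonable guess at the transition state, which, as the author notes below, is most of the battle.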

I’ve been trying to find a version of this particular reaction for literally months now, hampered in large part by my lack of skill with ab initio methods and by my sometimes shaky knowledge of organic chemistry. The last reaction that I posted, with phosphate and water, was a byproduct of my search for this one. This particular reaction is one possible prototype for the reaction by which DNA is polymerized from dNTPs. In this case, the hydroxide is standing in for the 3′ hydroxyl of oligomeric DNA and the hydroxylamine is standing in for pyrophosphate. The reaction is a form of transesterification. In DNA, the resulting products would be phosphodiester linkages between nucleotide bases.

To make this reaction proceed, a deprotonated hydroxyl must attack through the face of the tetrahedral phosphate opposite the desired leaving group. Further, the attacking group must be a poorer leaving group than the desired one –I ran into this problem early in my search. For the search to work, I needed to guess the appropriate transition state, which I didn’t think too clearly about until I already had four or five failures under my belt. As it turns out, the phosphorus must go from a tetrahedral geometry to a trigonal bipyramidal state. I then needed to tease out other spurious “reactions,” including a rotation by the hydroxylamine. This becomes difficult with more complicated molecules because additional degrees of freedom allow for a lot of unimportant motion that can totally screw up the search.

The highest occupied molecular orbital (HOMO) throughout this reaction is, I think, very interesting.

When these chemicals are separated from one another, the polarization of the water continuum allows the molecular orbitals to have negative energy, meaning that they can stay occupied (theoretically, if you believe the MO energies, which are known to be off). But, as the hydroxide is placed near the aminophosphate, the orbitals, particularly the HOMO, shift to positive energy, meaning that the hydroxide wants to give its electrons up. You can then see the lobe of the hydroxide HOMO rotate into the open region of the phosphate tetrahedron during the attack, whereby the HOMO is transferred to the hydroxylamine, which is then displaced. This new HOMO still has positive energy and, even though it actually starts to rotate to face the phosphate, the potential energy surface of the reaction –meaning the charge forces and orbital exclusion– forces the potential reactants apart. The hydroxide HOMO peaks at 0.238 Hartrees, while the hydroxylamine HOMO peaks at 0.243 Hartrees. I would actually sort of expect hydroxylamine to be the better attacker, but I was also aware that the nitrogen-based leaving group should be better than the hydrogen in hydroxide and would give me a better chance at finding this reaction. To be clear, the hydroxylamine HOMO is definitely antibonding (just look at it) and this molecule would still be reactive elsewhere.

(Quick edit 7-1-19: Also, looking at the HOMO, you can see a point where it spreads from the hydroxide over onto the phosphate just before the reaction has “officially” taken place by reaching the transition state. This means that electrons from the hydroxide have leaked over onto the phosphate and have mixed with the electrons already present on the oxygens located there. This comes back to the Born-Oppenheimer approximation: because of the difference in masses, the time scale governing the motion of the electrons is very different from that of the nuclei. When the HOMO has shown up on the phosphate, it means that the electrons are tunneling over and back even as the nucleus is still moving in.)

Just some other views of the same thing.

Most of my reason for doing this post was to just add some kind of pretty pictures. You can’t blame me; this took a ridiculous amount of time to produce and I feel like showing it.

I’ve continued to learn with GAMESS. The functionality that interests me the most is the capacity to use the machinery of automated quantum mechanical calculations to simulate the occurrence of chemical reactions. A constellation of eigenstates containing electrons, as shaped by the positioning of atomic nuclei, can be sculpted by the constraints of energy minimization to reveal which configurations are more likely to occur than others. You can sort of think about nuclei as tiny pearls trapped in a continuum of cotton candy: if you grab a pearl and try to move it, the candy clings to it like springy rubber to try to dampen the movement. The only trick is that the candy is not distributed in a uniform fashion and is actually redistributed by the movement of these pearls; it tends to spring and flex in preferred directions. As with most physics, this becomes essentially a system of masses on springs where you can calculate how the deflections occur in a potential energy “surface” that is a function of the positions of the nuclei. I put “surface” in quotes because it’s a hyper-surface with many more than simply two dimensions: it has three dimensions for every nucleus.

Figuring out a chemical reaction is very much about intuiting the shape of this hyper-surface from clues in the 3D structure of the atoms. You use the potential energy surface to generate a mathematical entity called a “Hessian,” a 3N×3N matrix of second derivatives of the potential energy taken with respect to pairs of the 3N coordinates. Derivatives of energies are forces, and the Hessian gives what are called force constants (as I understand it) of the potential energy surface, enabling the identification of minima and maxima by curvature (second derivative!). Eigenvalues of the Hessian matrix reveal vibration modes within the molecule and, if you find imaginary vibrations, you have stumbled over transits in the free energy surface between one configuration and another… literally non-vibratory motions inside the model corresponding to the occurrence of chemical reactions.
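That diagnostic (all positive Hessian eigenvalues at a minimum, exactly one negative eigenvalue –the “imaginary” mode– at a transition state) can be demonstrated on a toy two-dimensional surface. This is my own illustrative sketch in Python, not anything GAMESS produces:

```python
import math

def V(x, y):
    """Toy potential energy surface: a double well in x, harmonic in y.
    Minima sit at (+1, 0) and (-1, 0); a saddle point sits at (0, 0)."""
    return (x*x - 1.0)**2 + y*y

def hessian(f, x, y, h=1e-5):
    """2x2 Hessian of f at (x, y) by central finite differences."""
    fxx = (f(x+h, y) - 2*f(x, y) + f(x-h, y)) / h**2
    fyy = (f(x, y+h) - 2*f(x, y) + f(x, y-h)) / h**2
    fxy = (f(x+h, y+h) - f(x+h, y-h) - f(x-h, y+h) + f(x-h, y-h)) / (4*h*h)
    return fxx, fxy, fyy

def eigenvalues(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    m = 0.5 * (a + c)
    r = math.sqrt((0.5 * (a - c))**2 + b*b)
    return m - r, m + r

# At a minimum, both eigenvalues are positive (real vibrational modes);
# at the saddle, exactly one is negative (the one "imaginary" mode).
lo, hi = eigenvalues(*hessian(V, 1.0, 0.0))
print("minimum:", lo > 0 and hi > 0)    # True
lo, hi = eigenvalues(*hessian(V, 0.0, 0.0))
print("saddle :", lo < 0 < hi)          # True
```

The analytic values bear this out: at (1, 0) the curvatures are 8 and 2; at (0, 0) they are -4 and 2.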

To find a chemical reaction with GAMESS, the first task is to find features in the potential energy surface called “saddle points.” These saddle points are locations in the space of the molecular coordinates where the vibratory modes of the Hessian are all minimized while the one imaginary mode is maximized (again, as I understand it… these constructs involve such huge numbers of variables that the computer sits between you and the calculation, and I have not done the math myself directly). Saddle points can seem very arcane described in that way, but they mark transitory states in the geometric structure: literally the intermediate structural state of the chemicals positioned between the chemical products and reactants, called a transition state. If you can guess accurately at the form of a transition state, the computer can use energy minimization to massage that configuration into the stationary configuration (the flat points along the potential energy surface where the derivative goes to zero) where the imaginary vibratory mode is maximized and the other modes are minimized in potential energy.

If you can find a saddle point, which turns out to be very tricky, the computer is then able to follow the curves of quickest descent in the potential energy surface forward to the reaction products and backward to the reactants. You can then stitch these half-paths together to produce the whole chemical reaction.
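The descent-following step can be mimicked on a toy two-dimensional surface with a saddle at the origin and two minima standing in for “reactant” and “product.” A minimal Python sketch (my own, purely illustrative, using naive fixed-step steepest descent rather than a real IRC integrator):

```python
import math

def V(x, y):
    # Toy surface: minima (the "reactant" and "product") at (-1, 0) and
    # (+1, 0); the saddle (transition state) sits at (0, 0).
    return (x*x - 1.0)**2 + y*y

def grad(x, y, h=1e-6):
    """Numerical gradient of V by central differences."""
    gx = (V(x+h, y) - V(x-h, y)) / (2*h)
    gy = (V(x, y+h) - V(x, y-h)) / (2*h)
    return gx, gy

def descend(x, y, step=0.01, n=2000):
    """Follow the path of steepest descent from a starting geometry."""
    for _ in range(n):
        gx, gy = grad(x, y)
        x, y = x - step*gx, y - step*gy
    return x, y

# Nudge off the saddle along the downhill (imaginary) mode in both
# directions, then relax: forward reaches the "product" near (+1, 0),
# backward reaches the "reactant" near (-1, 0). Stitching the two
# half-paths together gives the whole toy "reaction."
print(descend(+0.05, 0.0))
print(descend(-0.05, 0.0))
```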

Here is a chemical reaction…

This is the entire reaction pathway between a water molecule and a molecule of singly protonated phosphate. The ball-and-stick model above is the usual modeling produced by the GAMESS satellite program wxMacMolPlt: oxygens are red balls, phosphorus is black and hydrogen is white. The wireframe surface is an electron equipotential surface and its coloration marks regions of negative electrostatic potential (blue) and positive potential (red). Bonds are drawn based on electron occupancy of orbitals in that region, with dashed lines depicting essentially hydrogen bonds, single sticks depicting single covalent bonds (~two electrons) and double sticks depicting double covalent bonds (~four electrons). Orbitals are closed shells using restricted Hartree-Fock and the basis set was relatively small, only 3-21G. There is a polarized continuum model in the simulation in order to pretend, to some extent, that this takes place in a medium like continuous water.

In this reaction, phosphate with a charge of -2 sucks a proton (charge +1) off of water, creating a transition state of hydroxide (charge -1) which then sucks a proton back off of the phosphate to recover water.

Since the water is turned away from the camera, here’s another set of images of the same thing:

In performing this set of calculations, which took a ridiculous amount of time, my initial objective was to try to find a proton exchange. Happily, I finally found that, though not without five iterations of the work. Even in this version, I was backward in my guess of the transition state: I was guessing the transition would be hydronium, with both protons present on the water… it ended up with both on the phosphate, since the phosphorus apparently doesn’t withdraw electrons strongly enough from the oxygens to make unbonded water better at holding protons.

I found this in multiple steps. First, I modeled water with triply deprotonated phosphate and looked for an energy minimum. This ended up with water doubly hydrogen bonded to the phosphate (it looked like a guy in stirrups). I then plunked a third proton onto the water, creating hydronium (+1 charge). Since this ended up with the third proton added in a funny direction, I used part of the saddle point search as if I were doing a regular optimization to get to the proper shape of hydronium. Finally, as I was looking for the actual saddle point, the phosphate dragged the hydronium in by a pincer move and ripped off both of its arms.

This sort of surprised me since I wasn’t expecting such violence, but the transition state ended up with both exchanged protons on the phosphate and only one remaining in the water as hydroxide.

Following the intrinsic reaction coordinate using the imaginary Hessian mode allowed me to slide along the potential energy surface to give the reaction depicted above. You can see the bonds shrink and swell as the electrons are redistributed in the molecule to accept the exchange with the water, electron density pulling away from the phosphorus as the proton is accepted and filling back in on the other arm as the proton is reciprocally donated.

I have a strong feeling that this is generally similar to the state of acids and bases in water, where protons are passed around like bad kleenex along lines of hydrogen bonding. Likely, acidic protons are never free and hydronium and hydroxide constitute transition states. Since water has an overall concentration of about 55 molar, and aqueous strong acids and bases have concentrations of only about 12 molar, plenty of intact water exists even in strong acid or strong base where protons are constantly moving around. In base, the proton is scarce, so the transition state is probably hydroxide, while acid has the proton as common, giving hydronium as the transition state instead. It’s an interesting speculation.
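The concentration argument is easy to check with back-of-envelope arithmetic (standard values assumed for water’s density and molar mass; the 12 M figure is the one quoted above):

```python
# Molarity of pure water: density ~997 g/L at room temperature divided
# by molar mass ~18.015 g/mol gives the famous ~55 M figure.
water_molarity = 997.0 / 18.015      # mol/L
print(round(water_molarity, 1))      # 55.3

# Even in a ~12 M strong acid, most of the molecules present are still
# intact water, so there is always a hydrogen-bonded network to carry protons.
fraction_water = (water_molarity - 12.0) / water_molarity
print(round(fraction_water, 2))      # 0.78
```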

This was an attempted saddle point search in GAMESS trying to find whether a transition state exists in the transfer of a proton from one water molecule to another (for formation of hydronium and hydroxide).

This is the weirdest thing I’ve seen using GAMESS yet. This is not exactly a time simulation, it’s an attempted geometry minimization showing the computer trying different geometries in an attempt to locate a saddle point in the potential energy surface. I’m befuddled a bit by this search type because I’m trying to study a reaction pathway in my own work. Unfortunately, this sort of geometry search is operating in a counter-intuitive fashion and I’m not certain whether or not it’s broken in the program. However, when you see two oxygens fighting over a proton… well… that’s just cool. If the waters are set close enough so that they enjoy a hydrogen bond, the energy surface appears to have no extrema except where the protons are located as two waters. If you back the waters off from one another so that they are out of hydrogen bonding distance and pull the hydrogen out so that it is hydrogen bonding with both oxygens, you get this weird behavior where the proton bounces around until the nuclei are close enough to go back to the hydrogen bonded water configuration. I need to pin the oxygens away from each other, which won’t happen in reality.

Not sure what I think.

(Edit 5-10-19:)

This post seems to be a good place to put weird and interesting things. Here is an ab initio computation for the structure of a Guanine Quartet.

The surface is again an equiprobability surface and is colored by electrical potential, with green being most negative and red being most positive. This calculation took my meager computer 527 minutes (nearly nine hours). The central atom is a coordinating sodium and the purine rings are hydrogen bonded to each other along the dashed lines. The basis set was 3-21G, which is as big as I dare use while still being able to make the nitrogens lie flat when they’re in those purine rings. The structure definitely seems to approach C4h point symmetry, though I could not build the model with that symmetry to start out and simplify the calculation, due to the complexity of the structure.

This aggregate has a very wickedly interesting HOMO (highest occupied molecular orbital). Because the system is not quite perfectly relaxed to planarity, the symmetry is a little tiny bit broken and the state is not quite but nearly 4-fold degenerate. If you plot all the (nearly) degenerate orbitals together, a very interesting structure emerges.

I thought this was really cool. You can see the peaks and troughs of the electron waves running around the plane. That’s eight electrons circulating in sort of a pi-bond-like structure. This is the highest energy occupied eigenstate of the complex.

Edit 7-17-19:

I’ve learned some things about G-quartet recently that have had me going back and revisiting the calculations presented above. In the sets shown above, the geometry never quite reached convergence; the optimization step size was set too big and the geometry steps were oscillating around the minimum without ever reaching it for several hundred steps at a time and many hours –just a function of my inexperience at the time.

As it turns out, the structure of the G-quartet isn’t actually C4h in symmetry, as I suggested above. It might be C2, or even C1. When I was performing the calculation mentioned above, I was operating under the assumption that the G-quartet is a planar assembly, much like Watson-Crick base pairs. This is not an unreasonable assumption, and I’ve found published G-quadruplex NMR structure papers that seem to make similar assumptions –an NMR structure is built from computed correlations, and certain features of how those correlations are interpreted rest on assumptions, such as that G-quartets are flat. When I flattened the quartet into a plane and attempted to optimize the geometry, it turned out that the structure has no stationary state that is flat. The calculation simply never converges. The G-quartet, whether liganded to a monovalent sodium ion or not, appears to prefer a saddle splay configuration. The thing has a shape like a Pringle.

This form of the G-quartet is produced using the Pople 6-31G** basis set, including a single D-function on atoms in the second row and a single P-function on hydrogens. The structure is not hugely different from the structures posted above, but the saddle splay it demonstrates is actually a geometry-minimized feature of the structure. The hydrogen bonds holding this structure together are mainly unmarked because they turn out to be on the long side, ~2 Angstroms. (Certain G-quadruplex papers I found report these to be longer still, as much as 3 Angstroms.)

Part of why I went back and repeated this calculation was that I discovered the 3-21G basis set I was using above does not properly represent hydrogen bonding. 3-21G is on the small side and is apparently pretty good for a rough draft. For an object the size of a G-quartet, a fully converged geometry optimization with just this small basis set took 239 steps and 451 minutes (7.5 hours). This was even with my more recent discovery that direct SCF (calculating integrals as needed without ever storing them) is more efficient than conventional SCF (calculating integrals once and then storing them) when computing in parallel –recovering an integral from a large storage file is slower than simply computing it again. For big structures, you need to stay as small in the basis set as practical: the bigger the molecule, the more a compact basis that still captures its features pays off time-wise. 3-21G seemed a good choice until I realized that it was producing a non-planar structure for the G-quartet when I fully converged it.

At around this time, I had just discovered that 3-21G massively shortens hydrogen bonds… so much so, in fact, that you can’t really use it to simulate the chemistry of proton transfer in water. Noting that the G-quartet had this big saddle splay in 3-21G and that the quartet structure is dependent on hydrogen bonding, I had the <sarcasm>ingenious</sarcasm> notion that maybe the splay is a result of bond shortening and that a bigger basis set might capture the complex as flat.

So, I embarked on optimizing the structure with 6-31G and supplemented it with a couple of polarization functions (for 6-31G**) to try to cover the diffuse region where hydrogen bonding occurs. 6-31G doubles the function density of 3-21G, meaning that it increases the required number of integrals by about 16-fold. This is a ten-fold increase in computation time at minimum. I ended up running the thing overnight several nights in a row. The optimization took about four days with a bunch of restarts (I learned something cool about GAMESS restarting which I’ll keep to myself).
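That 16-fold figure comes from the fact that the count of two-electron repulsion integrals in Hartree-Fock grows roughly as the fourth power of the number of basis functions; the arithmetic is trivial to spot-check:

```python
# Naive Hartree-Fock cost heuristic: the number of two-electron
# integrals scales as ~N^4 in the number of basis functions N, so
# scaling the basis by a factor f multiplies the integral count by ~f^4.
def integral_growth(f):
    return f ** 4

print(integral_growth(2))    # 16 -> doubling the basis, ~16x the integrals
print(integral_growth(1.5))  # ~5x for a basis only 50% bigger
```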

Remarkably enough, the saddle splay did not go away. In fact, 6-31G** increased the twisting! It got more splayed by a few degrees. This geometry does not appear to be an artifact: I switched the coordinate system from the internal coordinates I was using during optimization to a Cartesian external system, recreated the Hessian and tried to repeat the optimization on the converged geometry, and it refused to optimize any further. I was worried about the internal coordinates because GAMESS kept fretting that they were too interdependent, but switching systems didn’t obliterate the stationary point, suggesting it is common across possible coordinate systems.

On the other hand, the more detailed basis set only helps to give more accurate values. Qualitatively, it still very much resembles the previous work, even the poorly converged work above. Here is the cluster of HOMO orbitals, which are simply a retread of the image shown above but with the bond lengths and angles a little more accurate.

You can still see the wavelengths of the electrons as they circulate around the plane. You could actually calculate the approximate momentum from that: the de Broglie wavelength is visually obvious in there.

I’ve still got so much to learn. I’m looking at other more recent basis sets. It may well be that someone has invented an effective core potential basis set which captures the diffuse region better than 6-31G** and manages to thin up the number of functions lost in the core in order to cut the calculation time. 6-31G** is a bear on something like a G-quartet using only a laptop computer.

Edit 7-22-19:

Here is an animated .gif showing the potato chip-like shape of a sodium-liganded Guanine quartet.

The very notion of Quantum University sets my heart on fire. I want to take away that funhouse mirror they use to admire themselves and put them in front of a real mirror so that they understand why people with actual comprehension laugh at them (or should be given the *opportunity* to laugh and point and maybe throw some rotten cabbage).

Still, the reality is that you can’t fix a believer. The one great problem with cranks of this sort is that a lot of them genuinely believe they’re onto something. It never quite occurs to them that basically nothing they ever do achieves anything and that any recognition they come across only comes from fellow travelers who also believe. A believer can only butt their uncomprehending head against the granite block that is reality and stop to wonder why there’s blood. They do not actually achieve, ever; they waste time running in circles, doing everything they can to collect testimonials from dupes to mark their “achievements.” Oh, and uttering curses about the vast conspiracies being leveled to keep what they believe down. Still, if they can get people to *believe them*, they can manage one achievement that is meaningful in society: they can make money.

The fellow in the comments honestly believes that there’s a “brand” of quantum physics out there that doesn’t require you to know how to use calculus.

The profession of physics has a very distinct and simple structure. The entire purpose of a physicist is to translate a series of real world observations into numerical representations and then fit those values onto mathematical formulae. If the fitting is sufficiently good, the process can be reversed: the mathematical formulae discovered in the fitting process can be used to predict what real world conditions are required to reproduce certain observational outcomes. Note, this is flat-out crystal ball stuff; physicists predict what *will happen* observationally if conditions for a given formula are met, and to what precision that outcome can be expected. I’m not saying “some brand of physicist” or “sometimes this is one thing we do”… this is what physicists do, end of story. If you cannot carry out this function, you are by definition not a physicist. Physics is completely inseparable from the math, so much so that the profession is divided down the middle into two classes: the people who wrangle the math, called the “Theorists,” and the people who wrangle the observations to plug into the math, called the “Experimentalists.” Theorists and Experimentalists work together to make physics operate.

Any jackass bleating, “Well, you don’t know the Real(tm) science because you haven’t gotten around your evil, malicious logical right brain and circumvented the math to find the Real Reality,” has essentially shoved his own hand down the garbage disposal. By dumping the math, that person has admitted to not being a physicist –despite his or her claims to the contrary– because without math, there is no physics. Period. End of story. This is totally non-negotiable. You cannot redefine reality and expect the rest of the universe to suddenly adhere to your declaration.

Since understanding this subtlety is a real challenge for those of Quantum University, I’m going to make an example here of just what it is that a physicist does and why physicists are deserving of the street cred that they’ve earned. These Quantum U jackasses crave the legitimacy of that word: “Physicist.” There is no other reason why anyone would accuse an actual physicist of being uncomprehending of the nature of physics. From what I intend to add here, anybody reading this blog post will be able to make an assessment of themselves as to whether they could ever be qualified to call themselves a physicist.

What I’m going to add is a quiz containing a series of questions that a genuine quantum physicist would have no difficulty at least attempting to answer –some will be very easy, but some may require more than transient thought. If you have any hope of completing it, you will have to do some math. I will write the problems in order of increasing difficulty, then detail what each problem gives to the overall puzzle of exploring quantum physics and try to add a real life outcome from the given type of calculation to show why physicists have credibility in society in the first place. Credibility is the point here; this is why Quantum U craves the word “Physicist” and is willing to rewrite reality for it. My point is that if you jettison the part of physics that allows it to attain credibility, you lose the right to claim credibility by association.

**Problem 1)** You suspend a 5 kg bowling ball on a 2 meter cable from the ceiling. With the cable taut, you pull the ball aside until the cable is at an angle 30 degrees from vertical. You release the ball and allow it to swing. What is the maximum speed of the ball as it swings and where is it achieved?

**Why does this matter to Quantum Physics?** This is a very basic classical physics problem that would be encountered midway through your first semester of introductory physics. The Quantum U jackass would immediately scoff, “Well, this is classical, quantum allows us to escape that!” Well, no, actually it doesn’t. This problem is the root from which quantum physics grows. It is one of the simplest Conservation of Energy problems imaginable, and the layout of the calculation sets up the Hamiltonian formalism, meaning that it is almost exactly the same as the layout of the time-independent Schrödinger equation. If you lose the Schrödinger equation, you’d better have something impressive ready to replace it, because you can’t do quantum without this.

**Why is this important to Physicist cred?** Most introductory physics does not seem like it should be all that important. If you can solve this problem, does it mean you can load heavy things into your car without straining your back? Maybe, maybe not. This problem is important to society because it involves exchange of potential and kinetic energy in a conservative situation. With a tiny bit of tweaking, this particular problem can be rewritten to estimate how much hydroelectric power can be generated from a particular design of hydroelectric dam. What? You mean to say physics has real world implications? That sound you just heard was me driving a nail into the third eye of a quantum U jackass.
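For readers working along, Problem 1 reduces to energy conservation. Here is a hedged numerical sketch (g taken as 9.81 m/s²), not a substitute for doing it yourself; note that the 5 kg mass cancels out entirely:

```python
import math

# Energy conservation for a pendulum released from rest at angle theta:
#   m*g*L*(1 - cos(theta)) = (1/2)*m*v^2
# The mass m cancels, and the speed peaks where all the height has been
# traded for kinetic energy: the lowest point of the arc.
g, L, theta = 9.81, 2.0, math.radians(30.0)
v_max = math.sqrt(2.0 * g * L * (1.0 - math.cos(theta)))
print(round(v_max, 2), "m/s at the bottom of the swing")  # 2.29 m/s
```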

**Problem 2)**

In this picture is an electronic circuit. I’ve labeled all the components. The switch connects the unlabeled wire to either wire 1) or wire 2). It starts connected at position 1). What happens when you turn it to position 2)… in other words, what’s the time varying behavior after the connection is closed? That’s the easy part of the question; to be a physicist, you have to answer this: what values of ‘L’ and ‘C’ could you pick to get a period of 2 milliseconds?

**Why does this matter to Quantum Physics?** I debated for a long time what sort of basic electromagnetism problems to add. I thought originally to keep it to one, but I decided instead on two because you really can never get away from electromagnetism while you’re doing quantum physics. There are four known fundamental forces and this is one; electromagnetism crops up in everything. This particular problem involves an oscillator and is therefore a forerunner to wave behavior. If you can’t do oscillators, you can’t do probability waves. If you know a thing about physics, this problem is actually extremely easy and is typically encountered in second semester basic electromagnetism and in whatever electronics classes you’re forced to take. The chemists, who do quantum physics of one sort, may have some difficulty with this problem, but the physicists really shouldn’t. If you call yourself a physicist, this should be as easy as wiping your ass.

**Why is this important to Physicist cred?** You have an evolved, heavily engineered offspring of this little doodad in every connected device carried on your person at this moment. The oscillators have all changed faces and the components to achieve them are probably almost unrecognizable at this point, but the physics is not. The thing in the picture above could be converted into the tank circuit of a radio. This was a gift to the 20th century by the hard work of 19th century physicists. Radio, electric power and the associated ability to communicate instantaneously over long distances built our world. If you stop to realize that William Thomson, the Lord Kelvin, made a mint off laying a telegraph cable across the Atlantic to connect England and North America for communications purposes, you will understand the power that all the offshoots of this technology had. The circuit above is significant in two ways: it relies on the electric conduction physics upon which Thomson’s telegraph infrastructure depended, and it could also be used to facilitate the generation of electromagnetic waves transmitted through the air, as performed by Marconi (and Tesla… the real one, not the car maker). If you know what you’re doing, you can turn this device into a small EMP generator… you’re welcome. (As an aside, I always feel a little sorry for William Thomson: modern people mostly only ever call him Lord Kelvin and forget his actual name… the title of Lord Kelvin was created for him because of his success as a physicist, and so his success deprived history of his actual name!)
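The quantitative half of Problem 2 hangs on the ideal LC resonance formula, T = 2π√(LC). There are infinitely many valid (L, C) pairs; the inductance below is my own arbitrary choice, just to show the method:

```python
import math

# An ideal LC tank oscillates with period T = 2*pi*sqrt(L*C).
# Fix the desired period, pick any L, and solve for C.
T = 2e-3                               # desired period: 2 milliseconds
L = 0.1                                # henries (an arbitrary choice)
C = (T / (2.0 * math.pi))**2 / L
print(C)                               # ~1.01e-6 F, i.e. about 1 microfarad

# Sanity check: the chosen pair reproduces the requested period.
print(2.0 * math.pi * math.sqrt(L * C))  # 2e-3 seconds
```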

**Problem 3)** You’re stranded on a deserted island. You go to hunt for food along the flood plain around the island when the tide comes in. You see a fish swimming along the sand beneath the flat surface of unperturbed water, by eye 60 degrees below the horizon line of the ocean. You stand 1.8 m tall and you have a 1.5 m spear. Measuring with your spear, you know the depth of the water is 2/3 the length of your spear. The index of refraction of water is about 1.333. You have a calculator for a brain. If you thrust the spear from your shoulder, at what angle must you launch it in order to hit the fish?

**Why does this matter to Quantum Physics?** Good question. This is the second EM question that I will add, and it’s here because it deals directly with the physics of light. Snell’s law is a product of electromagnetism and it emerges from applying Maxwell’s equations to a boundary situation much like the one detailed in the problem above. Index of refraction is the direct ratio of the speed of light in a vacuum over the speed of that same light in a substance (like water). The phenomenon of light bending its path as it passes through the boundary between two translucent substances is a direct consequence of the wave-like properties of light. I have no doubt that the Quantum U jackasses love waves and vibrations. Can they handle this one? As I chose to add a problem about the electromagnetic force, I needed to add a problem about the basics of light, which is directly connected to the EM force. Light is pivotal to quantum physics because nearly every observation people ever make involves some measurement of light.

**Why is this important to Physicist cred?** The lens maker equation is expanded from this foundation. Without this, there would be no glasses, no contact lenses and no corrective laser eye surgery. The work of physicists actually corrects vision in the two eyes that matter.
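Problem 3 leaves some geometry to the reader, so this sketch pins it down with assumptions of my own: your feet are on the sand (putting your eye 0.8 m above the surface), the fish lies on the sand 1.0 m down, the shoulder is taken as 0.3 m below eye level, and the spear is assumed to travel in a straight line (no gravity, no deflection at the surface). Only the givens come from the problem; everything else here is illustrative:

```python
import math

n_water = 1.333
depth = (2.0 / 3.0) * 1.5            # 1.0 m of water, measured by spear
eye = 1.8 - depth                    # eye height above the surface (assumed)

# The apparent sightline, 60 degrees below horizontal, meets the surface
# at 30 degrees from vertical; Snell's law bends it toward the normal.
theta_air = math.radians(30.0)
theta_water = math.asin(math.sin(theta_air) / n_water)  # ~22 degrees

# Trace the refracted ray down to the fish's true horizontal position.
x = eye * math.tan(theta_air) + depth * math.tan(theta_water)
print(round(x, 3), "m out")          # ~0.867 m

# Aim straight at the true position from the shoulder (assumed 0.3 m
# below the eye, so 0.5 m above the surface, 1.5 m above the fish).
shoulder = eye - 0.3
angle = math.degrees(math.atan((shoulder + depth) / x))
print(round(angle, 1), "degrees below horizontal")
```

The point of the exercise survives the assumptions: the fish is not where it appears to be, and aiming along the sightline misses.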

**Problem 4)** The half life of a muon is 2.2 microseconds. If it’s a cosmic ray traveling at 99.999% of the speed of light, on average how long does that muon appear to last if you happen to see it fly by while you’re standing on Earth?

**Why does this matter to Quantum Physics?** This is a token special relativity problem. A large portion of quantum physics does not require relativity, but an equal amount does. As such, you can’t get away from relativity; you need to know at least some of it to be a quantum physicist. Quantum U jackasses clearly want to marginalize all those “particles and math-ematical equations” and beg that something exists beyond them, never mind that by removing the math, they have zero chance of ever defining what… I say fine, remove what you like, I’ll steamroll you flat anyway. I could as easily have said “You will live 79 years and 10 seconds; how long does *that* appear to be to somebody watching you run past at human foot speed for your entire life?” The relativity will probably say 79 years and 10.000001 seconds or something (I didn’t calculate it), but at least this is better than begging the limits of human potential and claiming the person ran by at 99.999% the speed of light. *Somebody* has to be realistic about human potential. Relativity is pretty important because it marks the first time humans changed Newtonian physics. That precedent is important to understand in light of quantum physics (which was about the third time humans changed Newtonian physics, General Relativity being the second). Quantum physics didn’t emerge by immaculate conception… there was a huge background of math that led to it. Discard it at your risk.

**Why is this important to Physicist cred?** Congratulations, you can now perform one of the clock calculations needed to make the Global Positioning System (GPS) work. You’re welcome; physicists just saved you from getting lost… again. Note, we’re also responsible for the military ability to drop a bomb down your chimney from a flying aircraft. I’d love to see you astral project out of that.
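
For anyone who wants to see the arithmetic behind Problem 4, here is a sketch; the numbers come straight from the problem statement:

```python
import math

beta = 0.99999             # v/c from the problem
half_life_rest = 2.2e-6    # muon half-life in its own rest frame, seconds

# Lorentz factor: gamma = 1 / sqrt(1 - v^2/c^2)
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# To an Earth-bound observer the moving muon's clock runs slow,
# so its apparent half-life is stretched by gamma.
half_life_lab = gamma * half_life_rest

print(f"gamma = {gamma:.1f}")                                         # ~223.6
print(f"lab-frame half-life = {half_life_lab * 1e6:.0f} microseconds")  # ~492
```

That factor of a couple hundred is exactly why cosmic-ray muons survive the trip to the ground.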

**Problem 5)** What do the ‘A’ and ‘B’ constants refer to in Einstein’s stimulated emission equations?

**Problem context:** To detail the situation for the mathematically illiterate, who are nonetheless following along because they are genuinely interested: Einstein’s set-up is a Bohr atom… a nucleus with electrons orbiting it at levels. He postulated that a passing electromagnetic wave causes a lower energy electron to hop up to a higher energy level orbit if the wave matches the energy difference between the two levels (absorbing a photon). The electron in this now excited state can either spontaneously hop back down to the lower state, giving off a photon, or it can be ‘jarred’ into giving off the photon and hopping back down to the lower state by an electromagnetic wave that happens to match the energy difference between the two states – called stimulated emission.

**Why does this matter to Quantum Physics?** Einstein’s work on stimulated emission occurred in 1917, in the framework of what’s called the “Old Quantum.” This is my first genuine quantum physics question for you. Oh goody, right? Tired of the equations yet? Sorry, but if you can’t handle equations, you’re not a physicist. This work is the forerunner of the Fermi Golden Rule. I’m skipping most of the rest of the Old Quantum because it was still too incomplete.

**Why is this important to Physicist cred?** Without us, no lasers bio-tch! And, in the interest of full disclosure, the laser is one example of short-sightedness in physicists. Einstein had this realization in 1917, but failed to see the significance himself. Physicists then hurried on and found their focus on other shiny things while nobody thought more carefully about it. It took some 40 years until Maiman, Gould, Townes and Schawlow (physicists whose names you may not know, though Maiman was also an engineer) had the critical insights to finally make it work. I ended up including this problem on a lark mainly because it also helps to put guided missiles through windows militarily. Gotta put the p’chank of fear into somebody’s chakra. How many CD players do you suppose were built because of us?
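
A hedged sketch of the punchline relation: thermodynamic consistency with Planck’s law ties the A and B coefficients together as A/B = 8πhν³/c³. The 650 nm test wavelength below is an arbitrary illustrative choice of mine:

```python
import math

# Einstein's A coefficient gives the spontaneous emission rate; B gives the
# stimulated emission/absorption rate per unit spectral energy density.
# They are not independent: A / B = 8 * pi * h * nu^3 / c^3.

h = 6.62607015e-34   # Planck's constant, J*s
c = 299_792_458.0    # speed of light, m/s
nu = c / 650e-9      # illustrative optical frequency (650 nm red light), Hz

A_over_B = 8 * math.pi * h * nu**3 / c**3

print(f"A/B at 650 nm: {A_over_B:.2e} J*s/m^3")  # ~6e-14
```

The ν³ in that ratio is why spontaneous emission swamps stimulated emission at optical frequencies unless you engineer a population inversion, i.e., a laser.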

**Problem 6)** A drunken hobo, who weighs 70 kg including his tattered blanket and a full bottle of peach schnapps, shambles along at about 0.5 m/s. If he were to stumble through a two-slit apparatus, how far apart would the slits need to be spaced for him to exhibit quantum mechanical interference? Can this setup be built?

**Why does this matter to Quantum Physics?** This question involves the de Broglie equation, the beating heart of modern quantum physics. This equation is one of several reasons why Quantum University craves the word “Quantum.” For those less versed, the de Broglie relation is the first equation written that explores the ‘wave’-ness of physical objects and is the source of particle-wave duality in matter waves. With the way that most quantum mechanical wave equations are written, the de Broglie relation is always hidden somewhere inside the argument (particularly in time-independent cases). In essence, because they do no math, Quantum U gets this wrong: they fail to include Planck’s constant. Ask yourself what came first, an “institution” calling itself “Quantum U” or Planck’s constant?

**Why is this important to Physicist cred?** Do I really need to say it?
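
The arithmetic for Problem 6 is essentially one line; the hobo’s numbers are straight from the problem:

```python
h = 6.62607015e-34   # Planck's constant, J*s

m = 70.0     # hobo plus blanket and schnapps, kg
v = 0.5      # shamble speed, m/s

# de Broglie relation: lambda = h / p = h / (m * v)
wavelength = h / (m * v)

print(f"de Broglie wavelength: {wavelength:.2e} m")  # ~1.9e-35 m
```

The slits would need to be spaced on roughly that scale, which is some twenty orders of magnitude smaller than a proton. No, this setup cannot be built, and that tiny Planck’s constant in the numerator is the whole reason why.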

**Problem 7)** The hobo from the previous problem shambles along for a moment, then stumbles to a stop. He stands there wavering about, struggling to keep his balance, foot speed now reduced to 0 m/s. Because of the alcohol induced gaps in his memory, he may certainly think that this happens, but why doesn’t he ever just suddenly *pop* into existence in front of the hardware store or soup kitchen? Careful examination of the previous problem would suggest that if he stops moving, maybe he can!

**Why does this matter to Quantum Physics?** Are you kidding? This is the weird-ass core of quantum physics! I never did claim that weird stuff doesn’t happen. What I claimed was that there are specific expectations for how the weirdness can emerge. What is written in this problem should be analyzed with the Heisenberg Uncertainty Principle. The cranks typically use the Uncertainty principle as a get out of jail free card, “Well, there is uncertainty, so anything is possible, right?” The actuality is that the Uncertainty Principle acts like a governor, telling how much weird is possible depending on the set-up of a given situation. How exactly stopped must the bum be for his position to grow so uncertain that he can teleport around town? Note, the argument here would actually also work if he’s still walking, despite the hole in de Broglie’s relation, but his speed must be very perfectly uniform… the uncertainty of his momentum must be nil.

**Why is this important to Physicist cred?** This stuff is one of the fundamental reasons why quantum U jackasses covet the word “physicist.” Did the uncertainty principle come first, or the slack-jaws desperate to misunderstand it in order to promote their woo?
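
To put numbers on that “governor,” here is a sketch. The residual speed uncertainty assigned to the stopped hobo is a made-up illustrative value; the point is that even an absurdly generous one leaves his position pinned down to far below nuclear size:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s

m = 70.0                 # the hobo's mass, kg
# He can't be *exactly* stopped; assign a hypothetical, absurdly tiny
# residual speed uncertainty (this number is made up for illustration):
delta_v = 1e-15          # m/s

# Heisenberg: delta_x * delta_p >= hbar / 2, with delta_p = m * delta_v
delta_x_min = hbar / (2 * m * delta_v)

print(f"minimum position uncertainty: {delta_x_min:.1e} m")  # ~7.5e-22 m
```

For comparison, a nucleus is about 10⁻¹⁵ m across. The hobo stays exactly where he is wavering.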

**Problem 8)** A lightning bolt strikes for about 30 microseconds, creating a radio frequency EMP. What is the frequency spread of the interference it causes in radio/microwave transmissions occurring around it?

**Why does this matter to Quantum Physics?** This is a second application of the Uncertainty Principle. In this form, it addresses a different pair of uncertainties, but it’s the same principle. I’ve included this problem to show the stark quantitative nature of the equation. There is nothing at all qualitative or indecisive about the Heisenberg Principle. It says something extremely specific and if you lose the math, it becomes a lie, period.

**Why is this important to Physicist cred?** We invented the Uncertainty Principle and we damn well have a say in how it works.
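
A sketch of the estimate; the factor of 2π is a convention choice among several, so treat this as order-of-magnitude only:

```python
import math

# A pulse of duration delta_t cannot be a single pure frequency; its
# spectral width is of order delta_f ~ 1 / (2 * pi * delta_t).
# (Conventions differ by factors of order one; this is an estimate.)
delta_t = 30e-6   # lightning stroke duration from the problem, seconds

delta_f = 1.0 / (2 * math.pi * delta_t)

print(f"frequency spread: roughly {delta_f / 1e3:.1f} kHz")  # ~5.3 kHz
```

A hard number, not a vibe: the shorter the event, the wider the smear across the dial.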

**Problem 9)** A tiny, effectively featureless quantum mechanical tiger of mass ‘m’ is caught in a prison of only one dimension. He runs back and forth trying to get out, but the walls on either end are infinitely high. The prison is large compared to the actual physical shape of the tiger and this tiger lives by feeding on heat energy. Further, the prison is sized so that it’s on about the same size-scale as the tiger’s de Broglie wavelength for the low temperature where this tiger lives –and in fact keeps the tiger alive under those circumstances where he’s starving. The zoo keeper must fire photons into the tiger’s cell one at a time to try to hit the tiger and see where he is. The frequency of the photon is very high and the zoo keeper can tell exactly where the photon went in and will be able to tell exactly where the photon comes back out, thus giving him an accurate understanding of the location of whatever the photon bounces off of. The photon will interact elastically with the tiger and the interaction is independent of the photon’s frequency. If the tiger has been allowed to starve and has the smallest energy a tiger of this impossible sort can, what is the probability of finding him at any particular place in this prison with a photon? After you hit him that first time with a photon, finding exactly where he is, how many of the prison’s eigenstates are needed to describe his location thereafter?

**Why does this matter to Quantum Physics?** This is the most basic Schrodinger equation problem, the particle-in-the-box. You should substitute ‘electron’ for ‘tiger’ in the interests of reality, but I can choose how I write the problem. A part of why I wrote this problem the way I did is to give a little bit of a feeling for what the quantum mechanics is like and how it works. In this kind of problem, you are outside the system looking in and the system is completely dark; you cannot see what’s going on. You could be a zoo keeper facing an angry tiger in a sealed crate; your only way to find this tiger is to shove a prod through a breathing hole and see if you bump something. If he’s sleeping, you may discover a mass distributed somewhere in the middle of the crate. If he’s lunging back and forth, the prod may bounce off of something now and then, but it appears as if the tiger is distributed everywhere in the box. I added an embellishment too. In my version of the problem, I’ve included a prepared state and then a state collapse: I would recommend asking yourself what the difference is between the Hilbert space associated with the photon probe (designed around a position space representation) and the Hilbert space of the box (which would be the eigenspace solving the Hamiltonian of the tiger trapped in the box).

**Why is this important to Physicist cred?** The particle-in-the-box problem has actual physical applications. The 1D version can be used to approximate the absorbance spectra of aliphatic molecules containing stretches of conjugated bonds. A 3D version of this problem can be invoked to describe the light absorption characteristics of quantum dots. Ever seen one of those beautiful Samsung quantum dot TVs? You’re welcome.
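
For the starving tiger’s ground state, the probability density is the standard infinite-well result, P(x) = (2/L)·sin²(πx/L). Here is a quick numerical sanity check, with the box length set to 1 in arbitrary units:

```python
import math

# Ground state of the 1D infinite well: psi_1(x) = sqrt(2/L) * sin(pi*x/L),
# so the probability density is P(x) = (2/L) * sin^2(pi*x/L).
L = 1.0          # box length, arbitrary units
N = 100_000      # integration steps

def prob_density(x):
    return (2.0 / L) * math.sin(math.pi * x / L) ** 2

# Sanity check: the density integrates to 1 over the box (midpoint sum).
dx = L / N
total = sum(prob_density((i + 0.5) * dx) * dx for i in range(N))

print(f"integral of P(x) over the box: {total:.4f}")  # ~1.0000
print(f"P at the center: {prob_density(L / 2):.4f}")  # 2/L, the maximum
```

The tiger is most likely found mid-cell and never at the walls, and after the first photon hit collapses him to a point, infinitely many of the box’s eigenstates are needed to rebuild that localized state.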

**Problem 10)** Suppose you did hit that tiger in the previous problem with a photon, momentarily finding his exact location in the box. What happens to the probability of finding him again at that location over time afterward?

**Why does this matter to Quantum Physics?** This is a time-dependent Schrodinger equation question. If you can’t understand why this is important to quantum mechanics, I feel truly sorry for you.

**Why is this important to Physicist cred?** The sort of logic in this problem is used in pump-probe experiments to see how excited states evolve, for instance. This is a real life example of Deepak Chopra’s “ceaselessly flowing quantum soup,” and I mean it in the sense that this is how it would actually be employed in reality by physicists that actually do quantum physics. In one sense only, Chopra is not wrong: the physics *can be* weird. But, for it to work in weird ways, you must match the circumstances where the effect is seen… the confinement must be on the size-scale of the matter wave. Failing to invoke the appropriate scale, involving Planck’s constant and the size of the confinement relative to the size of the matter wave of the object being considered, is where it becomes a lie. That’s why math is needed… it saves the reality from flowing over into becoming a lie.
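
As an illustration (not a solution to the full problem), here is a toy numerical version of the idea: localize a state where the photon “found” the tiger, expand it in the well’s eigenstates, and let the phases run. Units are chosen so ħ = 2m = L = 1, and the peak position and width are arbitrary illustrative choices of mine:

```python
import numpy as np

# Toy model: after the photon "finds" the tiger, the state is sharply
# localized. Expand that state in the box eigenstates and let each one
# pick up its phase exp(-i*E_n*t); the sharp peak promptly spreads out.
L, n_max = 1.0, 60
x = np.linspace(0.0, L, 400)
dx = x[1] - x[0]

def eigenstate(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def energy(n):
    return (n * np.pi / L) ** 2   # E_n in these units

# Initial state: narrow Gaussian at the "measured" position x0 = 0.3.
psi0 = np.exp(-((x - 0.3) ** 2) / (2 * 0.02 ** 2)).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

# Expansion coefficients c_n = <phi_n | psi0>
coeffs = [np.sum(eigenstate(n) * psi0) * dx for n in range(1, n_max + 1)]

def psi_at(t):
    return sum(c * eigenstate(n) * np.exp(-1j * energy(n) * t)
               for n, c in zip(range(1, n_max + 1), coeffs))

p_initial = np.abs(psi_at(0.0)) ** 2
p_later = np.abs(psi_at(0.01)) ** 2

print(f"peak probability density at t=0:    {p_initial.max():.1f}")
print(f"peak probability density at t=0.01: {p_later.max():.1f}")  # lower
```

The sharp spike where you found the tiger dissolves as the eigenstate phases dephase: exactly the time-dependent Schrodinger behavior the problem is probing.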

**Problem 11)** In order to make a point about the nature of quantum mechanical tunneling, a physics professor lecturing a group of graduate students turns and runs across the classroom and crashes face-first into a wall. He has just insisted that one day he knows he’ll tunnel through and reach the other side. For a 0.25 m thick wall and a 70 kg physics professor, estimate the ratio of probability amplitude for the professor’s wave function on either side of the wall (or better, estimate the probability flux). Assume that the actual potential of the wall is constant over its width and can be approximated from the knowledge that the wall is just a little stronger than the normal force required to decelerate a 70 kg physics professor from human foot speed to stopped in a tenth of a second over the space of a hundredth of an inch. How many times would the professor need to try this experiment in order to achieve his dream of tunneling through?

**Why does this matter to Quantum Physics?** Quantum mechanical tunneling is a real thing. This is the effect where a physical object pops through a barrier, unimpeded. Think Kitty Pryde. To perform this, you need to do the particle in the box problem, but backward (a real physicist will understand my recommendation). This is prime weirdness, exactly why the cranks love quantum. I would recommend trying the same problem with an object the mass of an electron where the thickness of the barrier in question is about the same as the object’s de Broglie wavelength. This problem is based in part on a real-life anecdote, where the experiment in question was initiated by a real physics professor. When asked why he wouldn’t try it again since he knows that the probability is small and a large numbers of trials would improve his chances of success, he answered that the university only pays him enough to perform the experiment once a semester.

**Why is this important to Physicist cred?** Tunneling is responsible for radioactive decay –indeed, we just gave you nuclear power. Also, some of the best microscopes ever built, scanning tunneling microscopes (STM), rely on this physics.
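
A sketch of the order of magnitude; the barrier height V − E below is an assumed placeholder value of mine, because the conclusion is spectacularly insensitive to it:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s

m = 70.0       # physics professor, kg
L = 0.25       # wall thickness, m
# Whatever V - E you estimate from the problem's deceleration argument,
# the answer barely changes; take an illustrative (assumed) value:
V_minus_E = 1.0e4        # joules

# WKB-style estimate: T ~ exp(-2 * kappa * L), kappa = sqrt(2m(V-E)) / hbar
kappa = math.sqrt(2 * m * V_minus_E) / hbar
exponent = 2 * kappa * L

print(f"kappa = {kappa:.2e} per meter")
print(f"T ~ exp(-{exponent:.1e})")   # hopeless: e^(-10^36)
```

The professor would need something like e^(10³⁶) attempts. One per semester is, in fairness, about the right budget for that.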

**Problem 12)** You have a cubic (or rhombohedral) crystal of Ammonium Dihydrogen Phosphate whose optic axis is 52.4 degrees from normal to the crystal faces. You shine a 325 nm He-Cd laser through this crystal at some known angle to the optic axis. If the laser output is reduced so that you’re at the shot noise limit, hitting the crystal with one photon at a time, every so often, you see two photons coming out of the crystal. Many measurements show that the output photons lie in the same plane as the input photon, where both out-bound photons possess ordinary polarization and the same wavelength as each other and that they depart from the crystal along beam paths on the surface of a cone away from the incident beam –in other words, they leave at the same angle in opposite directions. Why are these new photons produced and what’s special about them? Suppose I tell you the output angle is 50 mrad; use physics to tell me the wavelengths of the output photons. Supposing the two photons are detected by detectors positioned equal distances from the crystal, what’s the time delay between detections?

**Why does this matter to Quantum Physics?** I spent some significant time thinking about this problem –this addresses a piece of quantum physics badly abused by everyone and their brother, but most intensely by the cranks. What’s written above is in basic structure an actual experiment dating from 1970. I avoided writing about this experiment in the typical pop-culture manner so that you can see what the reality actually looks like. I won’t name the quantum mechanical phenomenon that this demonstrates, but I will refer you to a paper by Einstein, Podolsky and Rosen from 1935. I’m hoping that it looks superficially boring because people want to see something really crazy here without thinking about what they’re actually seeing.

**Why is this important to Physicist cred?** I won’t be snarky this time. I want people to genuinely think about what’s written here for themselves. Preferably, you read the papers and really try to process it. Can you separate even the initial idea from the math? Believe me, it’s there in all its blazing, bizarre glory. What’s the point of this observation? Asking this question is the core of an education that is devoid of indoctrination. Don’t take my word for it, damn well do the work for yourself!
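
The wavelength part of Problem 12 is pure energy conservation, and I can sketch that much without spoiling the deeper question. This only covers the same-wavelength statement given in the problem, nothing more:

```python
# One pump photon splits into two output photons. Energy conservation:
#   1/lambda_pump = 1/lambda_1 + 1/lambda_2
# The problem states the two outputs share the SAME wavelength, so each
# carries half the pump photon's energy.

lambda_pump = 325e-9   # He-Cd laser line from the problem, meters

lambda_out = 2 * lambda_pump   # each photon at twice the pump wavelength

print(f"each output photon: {lambda_out * 1e9:.0f} nm")  # 650 nm
# The pair is born in one elementary process, so with equal path lengths
# the detection delay is zero to within detector resolution.
```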

**Problem 13)** You have a proton and an electron interacting by electromagnetic force. Find the eigenstates of the electron. Impress me by finding the *unbounded* eigenstates of the electron (for electron energies greater than zero).

**Why does this matter to Quantum Physics?** This is at its heart a very basic problem that every physicist sees. If you haven’t seen it and you’re calling yourself a quantum physicist, you’re not from a place where they teach quantum physics and, no, you are not a quantum physicist. Tired of the math yet? Sorry, but you can’t be a physicist if you’re afraid of math. In all honesty, I’ve met physicists who claim to be afraid of the math, but these are people who do derivatives as well as they breathe and then get scared of what *mathematicians* do.

**Why is this important to Physicist cred?** The periodic table of the elements is largely understood based on the *bound* states found in this problem. The unbounded states are important for understanding how atoms collide in a low energy, non-relativistic collider. We’ll get to the relativistic ones soon enough…
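
The bound-state ladder, evaluated numerically. The Rydberg energy here is the standard tabulated constant, not derived; deriving it from the Coulomb Hamiltonian is the actual point of Problem 13:

```python
# Bound-state energies of the hydrogen problem: E_n = -13.6 eV / n^2.

rydberg_eV = 13.605693   # hydrogen binding energy (Rydberg), eV

energies = {n: -rydberg_eV / n**2 for n in range(1, 5)}

for n, E in energies.items():
    print(f"n={n}: {E:+.3f} eV")
# States with E > 0 are the unbounded (scattering) states the problem asks
# for; they form a continuum rather than a discrete ladder.
```

That discrete ladder, filled by the Pauli principle, is most of the periodic table in one formula.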

**Problem 14)** You have a 4 Tesla magnet. You stick your hand into the bore and somebody across the room fires up a computer program to shoot radio waves into the cavity of the magnet. What frequency and pulse duration must you fire into your arm in order to set your protons to clamoring most noisily? To what radio frequency must you listen to pick up that clamor? Should the input be polarized? Are you able to feel or hear this clamor? Why or why not?

**Why does this matter to Quantum Physics?** If you read my blog, you know that this problem can be approached in part classically. If you want to impress me as a quantum physicist, I expect the *quantum* version. This problem involves spin.

**Why is this important to Physicist cred?** This problem is about MRI. Yes, we’re responsible for MRI too. If I microwave somebody’s chi long enough, does a mystical turkey timer pop out to tell me it’s metaphysically done? I suggest we do an experiment and see; we can jam the safety on the door of a microwave oven and stick somebody’s face in there… any takers? (Oh, right, physicists also gave us microwave ovens and invented the safety screen in the window. Was it a mechanical engineer who suggested the door latch with the safety interlink? Actually, that was probably us too; we’ve been shooting holes through our own heads at particle accelerators for years.)
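
A sketch of the frequency arithmetic, which is the classical half of the answer; the proton gyromagnetic ratio is the standard tabulated value:

```python
# Protons in a magnetic field precess at the Larmor frequency
#   f = gamma_p * B / (2 * pi)
# where gamma_p / (2*pi) = 42.577 MHz/T for the proton.

gamma_over_2pi = 42.577e6   # proton gyromagnetic ratio / 2*pi, Hz per tesla
B = 4.0                     # field from the problem, tesla

f_larmor = gamma_over_2pi * B

print(f"Larmor frequency at 4 T: {f_larmor / 1e6:.1f} MHz")  # ~170.3 MHz
```

You transmit and listen at that same frequency; the quantum version reframes it as the spin-up/spin-down Zeeman splitting divided by h.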

**Problem 15)** If I say a certain perturbative interaction involves spin-orbit coupling, write the term which would go into the Hamiltonian. From the symmetry of the term, are there any forbidden matrix elements? Use the eigenstates found in problem 13 to calculate the first order perturbation between the ground state and the first excited state.

**Why does this matter to Quantum Physics?** I am gradually turning up the heat here. State of the art modern quantum physics is still way up somewhere ahead. This problem is about a component of Fine structure.

**Why is this important to Physicist cred?** Fine structure and hyperfine structure are basics necessary to explain spectroscopy. This tool is one of many that people use to engineer materials, from medications to coatings for prescription glasses to the plastics used to build the chair you’re sitting in. Spectroscopy is how we know about the atmospheres of planets orbiting nearby stars (yes, this is a measurement that has been made in a few cases).
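
For scale only (this is emphatically not the perturbation calculation the problem asks for): fine-structure splittings are suppressed relative to the Bohr energies by roughly α², with α the fine-structure constant:

```python
# Order-of-magnitude scale of fine structure in hydrogen: alpha^2 * Rydberg.

alpha = 1.0 / 137.036    # fine-structure constant (dimensionless)
rydberg_eV = 13.6057     # hydrogen ground-state binding energy, eV

fine_structure_scale = alpha**2 * rydberg_eV

print(f"fine-structure energy scale: ~{fine_structure_scale * 1e3:.2f} meV")
```

Sub-meV splittings riding on eV-scale levels: small, but spectroscopy resolves them routinely.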

**Problem 16)** A certain transition involves the quadrupole moment operator. Determine selection rules for the operator and estimate the transition rate between levels connected by this operator. If you want to use the eigenstates from problem 13, go for it.

**Why does this matter to Quantum Physics?** I have a lot of these mechanistic problems floating around. These are middling level quantum physics. The Wigner-Eckart theorem and the Fermi Golden Rule are both essentials; if you haven’t even heard of them, shame on you.

**Why is this important to Physicist cred?** These things are needed for modern laser engineering and are the product of physicists. I’m sorry, but this is what physicists do.

**Problem 17)** What is the set of matrices that can be used to represent the group of all proper rotations?

**Why does this matter to Quantum Physics?** I’ve asked a couple questions here that involve rotation in some form or another. Truth is that I just like this problem and have been thinking about adding it since I started writing these. This is hitting higher level quantum physics and it is actually peripherally a math problem rather than a physics problem.

**Why is this important to Physicist cred?** If you aren’t a physicist, you won’t understand why group theory is interesting. Your reaction to this problem should tell you something very strong about whether you should use the word “physicist” to describe yourself or not. Sorry, I can’t change the reality of what we are.
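
A quick numerical check of the defining properties of that set of matrices (the answer being the 3×3 real orthogonal matrices with determinant +1, the group SO(3)); the rotation angle below is arbitrary:

```python
import numpy as np

# A rotation by angle theta about the z-axis, as a 3x3 matrix.
theta = 0.7   # any angle, radians

R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

print("R^T R = I:", np.allclose(R.T @ R, np.eye(3)))  # orthogonal
print("det R =", round(float(np.linalg.det(R)), 6))   # +1: proper rotation
```

Orthogonality preserves lengths and angles; determinant +1 excludes reflections. That pair of conditions is the whole group.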

**Problem 18)** Write the character table for translational symmetry (Correction: *discrete translations* on a 1D lattice). Propose a viable candidate for the 1D representation and explain the associated eigenstates.

**Why does this matter to Quantum Physics?** Can’t mention group symmetry without spending a moment talking about Bloch theory. This is like taking the particle-in-a-box problem and putting it between mirrors.

**Why is this important to Physicist cred?** If you truly understand this, you can go tell Intel how to dope their semi-conductors. Yes, I just gave you microchips; without us, you wouldn’t be poring over this screed on your smartphone.
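
A minimal sketch of the character structure for discrete 1D translations, with arbitrary illustrative k and lattice constant. Since the group is abelian, every irreducible representation is one-dimensional and the character table is just a family of phases:

```python
import cmath

# For discrete translations by lattice constant 'a', the irreducible
# representations are labeled by a wave number k: translating n sites
# multiplies a Bloch state by the character
#   chi_k(n) = exp(-i * k * n * a)

a = 1.0          # lattice constant (arbitrary units)
k = 0.3          # a wave number in the first Brillouin zone

def chi(n):
    return cmath.exp(-1j * k * n * a)

# Group-homomorphism check: translating n then m sites = translating n + m.
lhs = chi(2) * chi(5)
rhs = chi(7)
print("chi(2)*chi(5) == chi(7):", abs(lhs - rhs) < 1e-12)
```

The associated eigenstates are the Bloch waves, e^{ikx} times a lattice-periodic function, which is where band structure starts.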

**Problem 19)** Is a Cooper pair a Majorana fermion? How is the Fermi temperature associated with the disappearance of electrical resistivity in a cold solid?

**Why does this matter to Quantum Physics?** Can’t mention semi-conductors without going whole hog and mentioning superconductors. Majorana fermions are a concept that is still argued in many domains of quantum physics. This question is actually fairly qualitative… if you want to go the physicist route, I would suggest using the eigenstates from the particle-in-a-box problem and describing a fermion next to a boson. If you really want to impress me, pull a page out of a Feynman book and derive the partition function for fermions.

**Why is this important to Physicist cred?** Remember that 4 Tesla magnet in problem 14? Probably can’t build that without the super-conductivity mentioned here (full disclosure, we can do rare earth magnets that are that strength too, but again, *real* physicists are responsible for this). Maybe someday superconductors will give us floating trains.
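
Not the full partition-function derivation I suggested above, but a sketch of the fermionic occupation function that falls out of it; energies and temperature below are in arbitrary matching units:

```python
import math

# Fermions obey the Fermi-Dirac distribution: the mean occupation of a
# single-particle level at energy E is
#   n(E) = 1 / (exp((E - mu)/(kT)) + 1)
# which never exceeds 1 (Pauli principle), unlike the bosonic case.

def fermi_dirac(E, mu, kT):
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

mu, kT = 1.0, 0.05   # illustrative chemical potential and temperature

print(f"well below mu: {fermi_dirac(0.5, mu, kT):.4f}")   # ~1: filled
print(f"at mu:         {fermi_dirac(1.0, mu, kT):.4f}")   # exactly 0.5
print(f"well above mu: {fermi_dirac(1.5, mu, kT):.6f}")   # ~0: empty
```

Cooper pairs matter precisely because a *pair* of fermions can condense like a boson, which single electrons obeying this distribution cannot.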

**Problem 20)** Use the Roothaan equations to do a restricted self-consistent field calculation in order to determine what the ground state energy of propane is.

**Why does this matter to Quantum Physics?** I’ve recently done a version of this problem from scratch on my own time and I couldn’t rightly produce this quiz without adding it. This is starting to push against the limits of quantum physics. This problem matters because it is one of the few ways that we can determine wave functions of real systems more complicated than problem 13. If you legitimately try to do this problem from scratch by hand, you will discover that it is one of the most frighteningly difficult things you’ve ever done. As a supplement to this problem, when do relativistic corrections become necessary? What’s the Hartree-Fock limit and what do people do to try to get around it?

**Why is this important to Physicist cred?** This is one of the chief tools by which we understand the structure of atoms heavier than hydrogen. A Nobel prize was awarded for work automating the solution to this problem. For full disclosure, this prize was awarded in *chemistry*, but keep in mind that it is pure physics in the sense that modern chemistry is almost totally dependent on quantum physics. The automation for solving this problem is broadly disseminated in the hands of normal chemists so that they can design molecules without having to trudge through the nightmare of this physics problem for themselves.

**Problem 21)** Why does the Klein-Gordon equation imply antimatter?

**Problem context:** Buckle up, sports fans, the ride gets bumpy from here. For mathematical context, the Klein-Gordon equation is a low-level relativistic analog of the Schrodinger equation.

**Why does this matter to Quantum Physics?** Schrodinger equation is actually a manifestly classical construction. I’m sure this probably throws a wrench at the Quantum U worldview with me just somehow colliding the words “classical” and “quantum,” since Schrodinger’s equation is fundamentally the backbone of quantum physics as far as most people understand it. But, it’s actually true; Schrodinger’s equation has a pseudoclassical limit in that it assumes that information travels between particles without a speed limit –you derive Schrodinger’s equation by putting non-commuting operators into the equation I initially introduced you to all the way back in *problem 1*. Klein-Gordon is derived the same way, but from putting the non-commuting operators into the *relativistic* energy-momentum relation. In this sense, Schrodinger’s equation is a form of classical (in the sense of being non-relativistic) physics. One upshot from this is that you must be very careful about claims of simultaneity that hinge on non-relativistic quantum physics; like say, collapse of entanglement (you cannot *tell* the other guy that he should look at his particle, or what you saw when you looked at your particle at faster than the speed of light). Klein-Gordon implies antimatter, but this is actually understood in retrospect; Paul Dirac (another luminary you may not have heard of) suggested it from the Dirac equation, which is a fermionic analog to the bosonic Klein-Gordon. Disturbed by all this reference to math? Don’t be; this is what physicists do… they look at math used to represent reality and then make claims about reality based on that math. For the tenth time, a “physicist” who does no math is not a physicist.

**Why is this important to Physicist cred?** Physicists suggested antimatter to the world. Antimatter isn’t exactly sitting on every table or in every gas tank, but it does have at least one practical application. Have you ever gone to the hospital to get a PET scan? That’s **p**ositron **e**mission **t**omography, which uses antimatter to make tomographic images of the human body. What, another *real* medical application that actually is known to work. Don’t believe me? That’s fine, go back to tending your cupping bruises and hope that nobody screwed up.
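
The algebraic seed of the matter, sketched numerically; the electron momentum below is an arbitrary illustrative value of mine:

```python
import math

# The Klein-Gordon equation is built from the relativistic relation
#   E^2 = (p*c)^2 + (m*c^2)^2
# Being quadratic in E, it admits BOTH roots, E = +/- sqrt(...); the
# negative-energy branch is what, in retrospect, points at antimatter.

c = 299_792_458.0        # speed of light, m/s
m_e = 9.1093837015e-31   # electron mass, kg
p = 1e-22                # illustrative momentum, kg*m/s

E_plus = math.sqrt((p * c) ** 2 + (m_e * c**2) ** 2)
E_minus = -E_plus

print(f"positive-energy root: {E_plus:.3e} J")
print(f"negative-energy root: {E_minus:.3e} J")
```

Classically you discard the negative root as unphysical; quantum mechanically you can’t, and making sense of it is where positrons come from.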

**Problem 22)** Show that non-relativistic path integral formalism is equivalent to Schrodinger’s equation.

**Problem context:** Yes, integrating along a path is mathematical. No, you can’t escape math if you’re in physics.

**Why does this matter to Quantum Physics?** Path integral formalism is a big part of everything that is used in high energy physics. Path integrals are introduced early in your quantum physics education, but they don’t become really important until gauge symmetry is introduced and you start working with functionals on fields. That’s right, no quantum field theory without path integrals. A more basic demonstration of path integral formalism is to show that in non-relativistic terms, it’s equal to the more basic Schrodinger’s equation. It’s a tricky conceptual proof that shows you really understand your quantum physics. And, no, I won’t do it for you.

**Why is this important to Physicist cred?** You want modern quantum physics? This is one route to it.

**Problem 23)** Two uncharged thin metal plates are placed in a vacuum such that they lie with surfaces parallel to one another. Explain why they spontaneously exert force on one another. How much force do they exert and how does it vary with distance between the plates?

**Why does this matter to Quantum Physics?** This is the set-up for Casimir effect. Welcome to the bizarre world of zero-point energy and vacuum fluctuations. Yes, this is a real thing.

**Why is this important to Physicist cred?** Someday, maybe this will form the basis of a science fictional star drive which requires no exhaust. Until then, it’s pretty curious and kind of cool. One thing to remember about the bleeding edge of physics is that many things we learn about do not always find technological application. Sometimes, the insight which leads to an application is years away. But, it requires having the real basis and not just the ability to spout nonsense technobabble. If you can apply Casimir effect to build something useful, be my guest… my hat will be off to you if you can actually prove you’re doing it.
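
For the curious, here is the standard ideal-plate result evaluated at an assumed 1-micron separation (my choice of number, for illustration):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s

# Casimir pressure between two ideal parallel plates separated by d:
#   P = - pi^2 * hbar * c / (240 * d^4)
# Negative sign: attractive. Falls off as the fourth power of separation.

d = 1e-6   # illustrative plate separation of 1 micron, meters

pressure = -math.pi**2 * hbar * c / (240 * d**4)

print(f"Casimir pressure at 1 micron: {pressure:.2e} Pa")  # ~-1.3e-3 Pa
```

Milli-pascals at a micron: tiny, but real and measured, and growing like 1/d⁴ as the plates close.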

**Problem 24)** Taken from INSPIRE high energy physics, these are Feynman diagrams for production of the Higgs Boson. Use these diagrams to write the Lagrangian for coupling to the Higgs field.

**Why does this matter to Quantum Physics?** Several somebodies won a Nobel prize for this. If you don’t get why and are claiming to be a physicist, shame on you.

**Why is this important to Physicist cred?** Good question. You tell me. Why?

This quiz could go on a long way. I have to tie it off because I only know so much myself (words of wisdom: know thy limits!). There is so much real physics in quantum mechanics that specialists in the various subfields could add questions forever beyond my single class in QFT and nearly non-existent solid state. Why does renormalization work? What is a topological insulator? Why do people try to build computers using atomic spins for bits? How is it that Chinese scientists are passing undecryptable messages to themselves? Why why why? A thousand questions with a thousand real answers. Anybody wasting time pretending they are learning anything about reality at Quantum University will never be able to answer any of them. They will continue to putz around and make believe that they know more than everybody else, calling themselves physicists, even though they do no math and therefore no physics.

I know there’s a huge number of physics cranks out there and I know that attacking one or ten or fifty will probably not even scratch the surface. I started writing this in a fit of rage, if you couldn’t tell from my response in the original version of this post. These people utterly piss me off. They think very highly of their own essentially non-existent attainment and pretty much can’t be convinced of their own self-deficit. Writing this, I feel a bit like Don Quixote, astride my horse, charging as fast as I can at that windmill blade on the downswing. It’s a fool’s errand; Quantum University still stands and it still pulls in chumps paying money, no matter what I write.

One thing that angered me most was the insinuation that people essentially throw out their ability to be creative when they strive to accomplish in real physical sciences, that somehow the creative “right brain” atrophies and becomes a ghost of itself under the weight of the cold, calculating “left brain” and that the higher reaches of my soul are made off-limits because of reductionism. This is the view of somebody who knows no scientists. The reality is very much more nuanced and complicated. I’ve met so many scientists with acid wit and magnificent creative bents that I can’t stand the thought of just lumping them all into some big pail of monolithic monotony. Science itself is an act of enormous creativity in designing and executing the right experiment. Almost none of it is reached solely on cold calculation. The people who inhabit the discipline are a broad cross-section of humanity, possessing all the faults and strengths that that implies, but also there are some amazing geniuses the like of which pretty much no other walk of life can claim. Faking genius is what Quantum U wants to package.

My rage is spent. I have little else to say right now.

(Edit 5-7-19:)

There was a discouraged comment provoked by this post that I would like to try to respond to.

The comment, paraphrasing, was that this quiz was very good, but that it showed the commenter that he or she should essentially leave physics to the professionals.

Professional physicists have to deal with the feelings that have apparently been elicited by this quiz. Physics scales in difficulty to match your capacity for understanding it –it literally gets harder and harder until you can’t go any further. As a discipline, it was created by a collaborative effort among some of the smartest people who have ever lived. The physics written in books is one big act of genius, the sum total of all the eureka moments of these smartest people. It is all of their lives’ work and piercing insight at once! Nobody measures up to that. Nobody understands it all. Of physics as a whole, quantum physics is one of the hardest parts.

This is maybe one of the most difficult things that *human beings* have ever learned in the history of the world.

If it feels daunting to you, that’s the way the truth works. Coming to grips with that is necessary in order to move forward. Nobody understands it all. At the oceanside, it’s easy to walk on the beach and visit the shallows. But, if you swim out into it, at some point it gets deeper than you can handle. Not even Michael Phelps can swim from San Francisco to Tokyo.

There is a reward for coming to grips with that. Physics is built on the genius moments of some of the greatest geniuses that there ever were. If you study what they did and come to understand what their work actually means, you can have that spark of insight that the very best of us have had. If you want to understand what Einstein’s genius was, for instance, studying his work directly is a way to commune with him. Working really hard and finally breaking through and really seeing it is like nothing else.

Nobody gets it all, but most of us come to grips with the fact that nobody has to. See what you can see and enjoy the trip. There are gems even in the shallows.

A comment has led me to believe that the august body of Quantum University is stung by my opinion, as a professional physicist, of their validity. Can you believe it? Right here on my little ol’ insignificant blog. Great! If I’m enough to rattle them, maybe I ought to keep writing articles about them.

If you want me to respect you, next time, bring physics, not pseudo-pop psychology. There *is* a right way to do quantum mechanics.

I do have some other thoughts about this comment. I will quote it here in its entirety so that you can see what the thinking looks like:

As someone who appears to be heavily indoctrinated in a “material-empirical” orientation with regards to science, it would be very hard for you to appreciate the type of education fostered at a place like Quantum University.

The material-empirical science oriented individual tends to live out of touch with Reality, for this to him/her is composed only of particles and myriad physical phenomena proven by mathematical formulas.

What does any of this have to do with your Life, your Consciousness, your Relationships… your own Soul. It’s only a Grand Illusion that you’ve been unable to perceive/discern the connection between these and the brand of physics you’re pursuing.

If you’re ever fortunate enough to transition from the “ordinary mode consciousness,” dominated by obsessive “left-brained,” rational/analytical thought, and shift toward the higher, more transrational states for a break, you just may discover that indeed there is a connection to it all.

There is a lot in this little blurb. I think it may even have been written by the same fellow who wrote the “ontological prison” quote I used above, though I couldn’t confirm that. He accuses me of being indoctrinated and claims that if only I escaped my rational analytical mind that maybe I would see the truth that we all live in the matrix or some such.

What is indoctrination?

According to the dictionary, it is “the process of teaching a person or group to accept a set of beliefs uncritically.” Direct quote from Google by my lazy ass.

The critical word here is “uncritically.” What does this mean?

Uncritically: “with a lack of criticism or consideration of whether something is right or wrong.” Another direct quote from Google by my even lazier and more tired ass.

So, indoctrination is an education where the student is not critical of the content of what they’ve been taught.

You have only my word to take for it, but I’ve walked all up and down physics. I’ve read 1920s articles on quantum mechanical spin translated from the original German trying to see what claims were being made about it. I’ve read Einstein. I’ve read Feynman. I’ve read Schrödinger… the real guys, their own words. I have worked probably thousands of hours rederiving math made famous by people dead sometimes hundreds of years ago just to be certain I understood how it worked. (Do you really believe the Pythagorean theorem?) I’ve marched past the appendix of the textbook and gone to the original papers when I thought the author was lying to me or leaving something important out. And yes, I’ve found a few mistakes in the primary literature by noted physicists. Does that sound uncritical to you? In the 3 years since I originally wrote the Quantum U post above, I’ve earned a genuine physics PhD from a major accredited university.

I would turn this analysis back on the fellow in the comments: have you done this kind of due diligence on what Quantum U taught you? Did you attack them to check if *they* were wrong? If not, you’ve been indoctrinated. Since they are about as wrong as it’s possible to be, my guess is that no, he didn’t and he isn’t about to… he’s a believer.

The next thought about this comment which pops up is a little claim about my dim-witted nature. I am clearly without a third eye and my life is definitely in the crapper because I am not seeing that other level beyond the workaday world where I could be mystically synergizing with some deeper aspect of reality in the hands of the Real truth. My dreadful left brain is clearly overwhelming my potential as a person. Do you actually believe that you know me?

By design I don’t speak often about my personal life on this blog. Fact is I’m not an unhappy or unfulfilled person. If you take the spine of that comment, the implication that if only I had a Soul, I’d see that Quantum U would have something to give me, truth is that I can say for certain that I need nothing from them in that regard. I came to a point in my life where I don’t need the training wheels… I, as a person, am enough. That has nothing to do with my scientist education, but everything to do with my complicated path through life. That path has led me a long way and through a lot. Walk one mile in my shoes –I dare you!

Do not make assumptions about the soul of a person you know next to nothing about.

I have one piece of experience that I feel would inform a searcher who sees the allure of Quantum University and its “ability” to give students some deeper insight into consciousness, soul and self-actualization. The most difficult thing that people can ever grasp about themselves is the fact that we are all flawed in the sense that our very capacity to interact with reality is fundamentally confused about what’s real. Your brain, the generator of your reality, is not perfect and you can believe in a lie as if it were actually true. Did they find WMDs in Iraq?

I have to laugh at his “transrationalist” higher state of being nonsense because it seems that he’s bitten off the biggest lie imaginable. He believes that everything he thinks about the world is true! Why else would he sneer at material-empirical rationalist analytical mindsets? He wants to disconnect his mind from being connected to tangible reality… you can see that in every word he’s written, right down to the carefully chosen yet inappropriate caps.

The problem I have with that is a simple one: by decoupling your mind from everything else, you remove from yourself the ability to do an external error check based upon what is physically true in the world around you. This is pattern recognition with a broken compass. If you have no way of checking whether or not what you believe matches what is actually real, you have no way of confirming what, if anything, is false in what you see. Everybody can dream and imagine they have psychically contacted a dead relative or telepathically commanded a poodle to piss on a baby. There is no badge of honor to be gained by believing you can lay your hands on someone and heal them with the strength of your Chi because anybody can believe that. You can sit around, do deep breathing, and listen to the white noise in your own anatomy and ascribe all sorts of meanings to it. The hardest thing in the world is sorting out whether anything you imagine is actually true, particularly when you want something to be true. Your mind can dredge up some utter unreality that seems absolutely real in that instant. How can you ever be completely sure?

In my experience, the truth is true regardless of whether or not I believe in it.

That’s the thing about empirical reality. You have a chance to come back and interrogate something, or someone, external to yourself about whether or not you are seeing true things in the world around you. This is a timely subject, I think, because people have turned to filter silos –pocket realities where groups of people are telling you what you want to hear– to avoid having to do really painful self-checks. Empirical reality is imperfect because we never know everything about it, but at least it’s basically invariant and can serve as a good calibration point. That’s the thing about the truth: two contradictory things can’t be simultaneously true. Empiricism at least gives a stationary ground that every observer (literally every observer) can share. If we can all come back and agree that the sky is blue, we at least have something in common to work with, no matter what murmurings are pressing on the backs of our heads. You can’t show that “transrationalist” higher state of being is anything different from a schizophrenic fantasy because they have equal connectedness to the external world; there is no internal frame of reference by which to prove that the first isn’t actually the second. That somebody at Quantum U told you it’s so and you uncritically decided to believe them does not suddenly make it true… that’s almost like a filter bubble; you’re just using someone in particular as your authority whom you wish to believe. Never mind that the person you picked is, maliciously or not, lying their ass off to you.

I think the hardest thing in the world is facing when you’re really wrong about something you deeply want to believe. Sometimes people do get these things wrong. Are you among them? Clearly, the fellow in the comment understands that people can be wrong, or he wouldn’t accuse me of *being wrong*. Does he never turn his optics against himself?

Now, you may want to call me a hypocrite. Am I a believer? Surely I *believe* in physics, being a physicist. My answer here might surprise you. Only kind of. Quite a lot of it I don’t fully understand. I’m either agnostic or skeptical about the parts I don’t understand. And, I’ve gone to some pretty extreme ends to try to decide that I understand it well enough to believe certain things about it. This leads to two things, first, I know I don’t know everything and, second, I freely admit that I get things wrong. But that doesn’t mean that I have no idea what I’m talking about… what skill I have with Quantum Mechanics is well earned.

Let this serve as a warning: anybody else making comments about my soul or implying with heavy hand that there is a lack, I will delete what you say out of hand. That’s ad hominem, as far as I’m concerned. You don’t know me. That you make any such statement shows that you didn’t understand word one about human potential that anyone at any school tried to teach you. You have no idea who I am.

Because I made a different set of points in my immediate direct response to the original comment, here is that as well:

I will approve this comment so that people can read it.

First, there is no “brand” of physics. There is physics and then there is not physics. Because of how it’s fundamentally designed, physics is physics. It must truly burn you up that the words “quantum theory” were coined by someone who was indoctrinated to a “material-empirical” outlook on the world. I find it especially funny that the like of you, oh so high and mighty in your supposed depth and vision, are not creative enough to create anything believable without stealing your entire foundation from my ilk. Ask yourself if you would even have a Quantum University to defend if it wasn’t for us.

“The material-empirical science oriented individual tends to live out of touch with Reality” —Wow, that’s an amazing oxymoron. Great job!

“If you’re ever fortunate enough to transition from the “ordinary mode consciousness,” dominated by obsessive “left-brained,” rational/analytical thought, and shift toward the higher, more transrational states for a break, you just may discover that indeed there is a connection to it all.” —And if you stopped eschewing the math, you might eventually realize that lying to yourself doesn’t actually get you out of the garden. But, sure, go ahead, put on the blindfold, spin yourself around a few more times and try to pin the tail on the donkey. I don’t mind.

As an aside, I would recommend this guy for a writing gig on Star Trek; he has an amazing capacity for inventing jargon that sounds like it should mean something. Transrational? We have rational and irrational. Argue for me as to where it helps to mix the two. I suppose this fellow and I are in agreement about something; nobody who isn’t in some turbid state of translucid parasanity would willfully spend money on Quantum University.

If you doubt the level of bile the idea of Quantum University brings up in me, please understand that if we lived 700 years ago, I would probably be riding out to help put these witches to the sword. If I can help to spread a single genuinely deep thought about them and what they do through the internet, I will.

I’ve said repeatedly that Organic Chemistry is along the spectrum of pursuits that use Quantum Mechanics. Organic Chemists learn a brutal regimen of details for constructing ball-and-stick models of complicated molecules. I’ve also recently discovered that chemistry –to this day– is teaching a fundamental lie to undergraduates about quantum mechanics… not because they don’t actually know the truth, but because it’s easier and more systematic to teach.

As a basic example, let’s use the model of methane (CH₄) for a small demonstration.

This image is taken pretty much at random from The New World Encyclopedia via a Google image search. The article on that link is titled “covalent bond” and they actually do touch briefly on the lie.

A covalent bond is a structure that is formed between two atoms where each atom donates one electron to form a paired structure. You have probably heard of sigma- and pi- bonds.

This image of Ethylene (a fairly close relative of methane) is taken from Brilliant and shows details of the two most major types of covalent bonds. Along this path, you might even remember my playing around in the first post I made in this series, where I directly plotted sigma- and pi- bonds from linear combinations of hydrogenic orbitals.

These bond structure ideas seem to emerge predominantly based on papers by Linus Pauling in the 1930s. The notion is that the molecule is fabricated out of overlapping atomic orbitals to make a structure sort of resembling a balloon animal, as seen in the figure above containing ethylene. Organic chemistry is largely about drawing sticks and balls.

With methane, you have four sticks joining the balls together. We understand the carbon to be in Sp3 hybridization, a construct offered directly by Linus Pauling in 1931: a four-orbital system with four sigma bonds, involving carbon with tetrahedral symmetry, where each hybrid is three parts p and one part s. The orbitals are formed specifically from hydrogenic s- and p- types. If you count, you’ll see that there are 8 electrons involved in the bonding in this model.
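Pauling’s construction is concrete enough to write down. Here is a minimal numpy sketch (my own illustration, not taken from any chemistry package) that builds the four Sp3 hybrids as linear combinations of the s, px, py and pz orbitals and confirms the “three parts p and one part s” bookkeeping:

```python
import numpy as np

# Rows are the four Sp3 hybrids expressed in the (s, px, py, pz) basis.
# Each hybrid points at one vertex of a tetrahedron.
sp3 = 0.5 * np.array([
    [1,  1,  1,  1],   # hybrid toward (+1, +1, +1)
    [1,  1, -1, -1],   # hybrid toward (+1, -1, -1)
    [1, -1,  1, -1],   # hybrid toward (-1, +1, -1)
    [1, -1, -1,  1],   # hybrid toward (-1, -1, +1)
])

# The four hybrids are orthonormal...
print(np.allclose(sp3 @ sp3.T, np.eye(4)))  # True

# ...and each one is 25% s character against 75% p character:
# "three parts p and one part s."
s_fraction = sp3[0, 0] ** 2
p_fraction = np.sum(sp3[0, 1:] ** 2)
print(s_fraction, p_fraction)  # 0.25 0.75
```

Squaring the coefficients is what gives the s/p weighting; the coefficient of 1/2 on every entry is exactly what normalization demands for four equivalent tetrahedral directions.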

I used to think this was the story.

The molecular orbital calculations tell me something different. First I will recall for you the calculated density for methane achieved by closed-shell Hartree-Fock.

This density sort of looks like the thing above, I will admit. To see the lie, you have to look a little closer.

This is a collection of the molecular orbitals calculated by STO-3G and the energy axis is not to perfect scale. The reported energies are high given the incompleteness of the basis. The arrows show the distribution of the electrons in the ground state with one spin up and spin down electron in each orbital. The -11.03 Hartree orbital is the deep 1s electrons of the carbon and these are so tightly held that the density is not very visible at this resolution. The -0.93 orbital is the next out and the density is mainly like a 2s orbital, though when you threshold to see the diffuse part of the wave function, it has a sort of tetrahedral shape. Note, this shape only emerges if you threshold so that it becomes visible. The next three orbitals at -0.53 are degenerate in energy and have these weird blob-like shapes that actually don’t really look like anything; one of them sort of looks like a Linus Pauling Sp-hybrid, but we’re stumped by the pesky fact that there are three rather than four. The next four orbitals above zero are virtual orbitals and are unpopulated in the ground state of the molecule –these could be called anti-bonding states.

Focusing on the populated degenerate orbitals:

These three seem to throw a wrench at everything that you might ever think from Linus Pauling. They do not look like the stick-like bonds that you would expect from your freshman chemistry balloon animal intuition. Fact is that these three are selected in the Hartree-Fock calculation as a composite rather than as individual orbitals. They occur at the same energy, meaning that they are fundamentally entangled with each other and the filter placed on finding them finds all three together in a mixture. This has to be the case because these orbitals examined in isolation do not preserve the symmetry of the molecule.

With methane, we must expect the eigenstates to have tetrahedral symmetry: the symmetry transformations for tetrahedral symmetry (120 degree rotations around each of the points) would leave the Hamiltonian unaltered (it transforms back into itself), so that the Hamiltonian and the symmetry operators commute. If these operators commute, the eigenstates of the molecule’s Hamiltonian must be simultaneous eigenstates of tetrahedral symmetry. This is basic quantum mechanics.

You can see by eye that these orbitals are not.

Now, with this in mind, you can look at the superposition of these which was found during the Hartree-Fock calculation:

This is the probability distribution for the superposition of the three degenerate eigenstates above. Now we have a thing that’s tetrahedral. Note, there is no thresholding here, this is the real intensity distribution for this orbital collection. This manifold structure contains 6 electrons in three up-down spin pairs where they are in superpositions of three unknown (unknowable) degenerate states.
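A toy model shows why only the sum looks tetrahedral. For a symmetric three-site ring (again just an illustrative stand-in, not methane itself), the density of one degenerate eigenvector alone is lopsided, but the density summed over the complete degenerate pair is exactly uniform, respecting the full symmetry of the Hamiltonian:

```python
import numpy as np

# Symmetric three-site ring Hamiltonian with a two-fold degenerate level.
H = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
E, V = np.linalg.eigh(H)  # eigenvalues -1, -1, 2; columns 0 and 1 degenerate

# Per-site density of a single degenerate member: depends on whatever
# arbitrary basis the solver picked inside the subspace, and is not uniform.
print(V[:, 0] ** 2)

# Density summed over the whole degenerate pair: exactly 2/3 on every site.
# The superposition restores the symmetry that each member lacks.
print(V[:, 0] ** 2 + V[:, 1] ** 2)
```

This is the same mechanism behind the tetrahedral cloud: the summed density over a complete degenerate set is basis-independent and symmetric even though no individual member is.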

The next lower energy set has two electrons in up-down and looks like this:

This is the -0.93 orbital without thresholding so that you can see where the orbital is mostly distributed as a 2s-like orbital close to the Carbon atom in the center. It does have a diffuse fringe that reaches the hydrogens, but it’s mainly held to the carbon.

I have to conclude that the tetrahedral superposed orbital thing is what holds the hydrogens onto the molecule.

Where are my stick-like bonds? If you stop and think about the Linus Pauling Sp-hybrids, you realize that those orbitals in isolation also *don’t preserve symmetry*! Further, we’ve got a counting conundrum: the orbitals holding the molecule together have six electrons, while the ball-and-stick covalent sigma-bonded model has eight. In the molecular orbital version, two of the electrons have been drawn in close to the carbon, leaving the hydrogen atoms sitting out in a six-electron tetrahedral shell state.

This vividly shows the effect of electronegativity: carbon is withdrawing two of the electrons to itself while only six remain to hold the four hydrogen nuclei. There is not even one spin-up-down two electron sigma-bond in sight!

And so we hit the lie: there is no such thing as sigma- and pi- bonds!

…there is no spoon…

The ideas of the sigma- and pi-bonds come from a model not that different from the Bohr atom. They have power to describe the multiplicity that comes from angular momentum closure, having originated as a description in the 1930s explaining bonding effects noticed in the 1910 to 1920 range, but they are not a complete description. The techniques to produce the molecular orbitals originated later: the ’50s, ’60s, ’70s and ’80s. These newer ideas are crazily different from the older ones and require a good dose of pure quantum mechanics to understand. I have a Physical Chemistry book for chemists from the early 2000s that does not contain a good treatment of molecular orbital theory, stopping at basically the variational methods Pauling and his contemporaries were using in the 1930s. I asked one of my coworkers, who is versed in organic chemistry models, how many electrons she thought were in the methane bonding system and she said “8,” exactly as I would have prior to this little undertaking.

There’s a conspiracy! We’re living in a lie, man!

Edit 2-12-19:

I spent some time looking at Ethylene, which is the molecule featuring the example of the balloon animal Pi-bond in the image above. I found a structure that resembles a Pi-bond at the highest energy occupied orbital of the molecule.

Density of Ethylene:

Density of Ethylene:

I’ve added two images of the density so that you can see the three dimensional structure.

Ethylene -0.32 hartrees molecular orbital, looks like a pi-bond:

The -0.53 hartrees orbital looks sort of sigma-like between the carbons:

The rest of the orbitals look nothing like conventional sigma- or pi- bonds. The hydrogens are again attached by a manifold of probability density which probably allows the entire system to be entangled and invertible based on symmetry.

Admittedly, ethylene has only one pi-bond and the first image above probably qualifies as the pi-bond. I would point out, however, that in the case of ethylene, the stereotypical sigma- and pi- configurations between the carbons matches the symmetry of the molecule, which has a reflection symmetry plane between the carbons and a 180 degree rotation axis along the long axis of the molecule. The sigma- and pi- bond configurations can be symmetry preserving here, but for the carbons only.

One other interesting observation is that the deep electrons in the 1s orbitals of the carbons are degenerate in energy, leading these orbitals to be entangled:

This also matches the reflection symmetry of the molecule (and would in fact be required by it). There are four electrons in this orbital and you can’t tell which are which, so the probability distribution allows them to be in both places at once… either on the one carbon or on the other. Note, this does not mean that they are actually in both places; it means that you could find them in one place or the other and that you cannot know where they are unless you look –I think this distinction is important and frequently overlooked.

An interesting accessory question here is what happens if you twist ethylene? Molecules like ethylene are not friendly to rotation along the long axis of the double bond because that supposedly breaks the pi-bonding. So, I did that. The total energy of the molecule increases from -77.07 to -76.86; that isn’t a huge amount, but it would constitute a barrier to rotation around the double bond.
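To put that difference in more familiar chemistry units, the conversion from hartrees to kcal/mol is simple arithmetic (1 hartree ≈ 627.5 kcal/mol; worth noting that a single-determinant method like plain Hartree-Fock is known to overestimate this particular barrier, since twisted ethylene has diradical character one determinant can’t represent):

```python
# Rotation barrier from the two total energies quoted above, in kcal/mol.
HARTREE_TO_KCAL_PER_MOL = 627.509

barrier_hartree = -76.86 - (-77.07)  # twisted minus planar total energy
barrier_kcal = barrier_hartree * HARTREE_TO_KCAL_PER_MOL
print(round(barrier_kcal, 1))  # 131.8
```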

Twisted Ethylene density, rotated about the bond by 90 degrees:

In this case, you do get what appear –sort of– to look like four-fold degenerate sigma-bonds attaching the hydrogens:

But, the multiplicity is about two-fold degenerate, suggesting only four electrons in the orbital instead of eight, which badly breaks the sigma-bond idea (of two electrons to a sigma-bond). This again suggests strong electron withdrawing by carbon, and stronger with ethylene than methane.

The highest energy occupied state has an energy increased from -0.32 in the planar state to -0.177 in the twisted state… and it looks like a broken pi-bond:

I think that the conventional idea about why ethylene is rigid is probably fairly accurate. The pictures here might be regarded as a transition state between the two planar cases where the molecule has a barrier to twisting, but is permitted to do so at some slow rate.

In the twisted case, the deep 1s electrons on the carbons are broken from reflection symmetry and they become distinctly localized to one carbon or the other.

Overall, I can see why you would teach the ideas of the sigma- and pi- bonds, even though they are probably best regarded as special cases. If you’re not completely aware that they *are* special cases, and that pictures like the one on Brilliant.org are broken, then we have a problem.

This exercise has been a very helpful one for me, I think. I’ve heard a huge amount about symmetries and about different organic chemistry conventions. Performing this series of calculations really helps to bridge the gap. Seeing actual examples is eye-opening. Why aren’t there more out there?

Edit 2-23-19:

As I’ve continued to learn more about electronic bonds, I’ve learned that the structural details have been continuously argued for a long time. It becomes clear pretty quickly that the molecular orbital structures tend to exclude those notions you encounter early in schooling. Still, molecular orbitals have broken-physics problems themselves when you try to pull them apart by splitting a molecule in half. You end up having to be molecular orbital-like when the molecule is intact, but atomic orbital-like when the molecule is pulled apart into its separate atoms.

I found a paper from 1973 by Goddard and company which rescues some of the valence bond ideas as Generalized Valence Bonds (GVB). Within this framework, the molecular orbitals are again treated as linear combinations of atomic parts, and the protestations of symmetry are answered by saying simply that if you can make a combination of atomic orbitals that globally preserves the symmetry in a molecule, then that combination is an acceptable answer. GVB adds to the older ideas the notion that bonds can push and deform each other, which certainly fits with the things you start to see when you examine the molecular orbitals.

You can have sigma and pi bonds if you make adjustments. I’m not sure yet how the GVB version of methane would be constructed, but the direct treatment of carbon in the paper slays the idea of Sp-hybridization, as I understand it, while still producing the expected geometry of molecules.

Still thinking about this.

edit 3-5-19:

I’ve been strongly aware that my little Python program is simply not going to cut it in the long haul if I desire to be able to make some calculations that are actually useful to a modern level of research. I decided to learn how to use GAMESS.

For a poor academic with some desire to do quantum mechanics/ molecular mechanics type calculations, GAMESS is a godsend.

More than that actually. GAMESS is like stumbling over an Aston Martin Vanquish sitting in an alleyway, unlocked, with the keys in the ignition, where the vanity plate says “wtng4U.” It isn’t actually shareware, but it could be called licensed freeware. GAMESS is an academic project whose roots existed clear back in the 1970s, roughly parallel to Gaussian, which still exists today and is accessible to people whom the curators deem reasonable. My academic email address probably helped with the vetting and I can’t say I know exactly how far they are willing to distribute their admittedly precious program.

To give you an idea of the performance gap between my little go-cart and this Porsche: the methane calculations I made above took 17 seconds for my Python program… GAMESS did it in 0.1 seconds. Roughly 170-fold! This would bring benzene down from two hours for my program to maybe a few minutes with GAMESS.

This image, produced by a GAMESS satellite program called wxMacMolPlt, is a methane coordinate model with a GAMESS calculated electron density depicted as a mesh to demonstrate a probability isosurface. What GAMESS adds to where I was in my own efforts is a sophistication including direct calculations of orbital electron occupancy. Under these calculations, it’s clear that electrons are withdrawn from the hydrogens, but maybe not quite as extremely as my crude estimations above would suggest: the orbitals associated with the hydrogens have 93% to 96% electron occupancy… withdrawn, but not so withdrawn as to be empty (I estimated 6 for 8 electrons above, or more like 75% occupancy, which was relatively naive). This presumably comes from the fringes of the 2s orbital centered on the carbon. Again, the analysis is very different from the simple notions of sigma- and pi-bonding, where the electrons are clearly set in clouds defined by the whole molecule rather than as distinct localizations.

I’ve really just learned how to make GAMESS work, so my ability to do this is very much limited. And, admittedly, since I have no access to real computer infrastructure (just a quadcore CPU) it will *never* reach its full profound ability. In my hands, GAMESS is still an atomic bomb used as a fly swatter. We’ll see if I can improve upon that.

edit 3-10-19:

Hit a few bumps learning how to make GAMESS dance, but it seems I’ve managed to turn it against the basic pieces I was able to attack on my own.

Here is Ethylene, both a model and a thresholded form of the total electron density.

I also went and found those orbitals in the carbon-carbon bond.

The first is the sigma-like bond at -0.54 and the second is the pi-like bond at -0.33. The numbers here are slightly off from what I quote above because the geometry is optimized and STO-3G ends up optimizing slightly shorter than X-ray observed bond lengths. These are somewhat easier to see than the clouds I was able to produce with my own program (though I think my work might be a little prettier). I’ve also noticed that you can’t plot density of orbital superpositions with the available GAMESS associated programs, as I did with methane above. I can probably get tricky by processing molecular orbitals on my own to create the superpositions and *then* plot them –GAMESS handily supplies all the eigenvectors and basis functions in its log files.
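Assembling such a superposition density by hand is mostly bookkeeping with the eigenvector columns. Below is a hedged sketch of that bookkeeping: a made-up 1D two-Gaussian basis and an invented coefficient matrix stand in for the real basis functions and eigenvectors that a GAMESS log supplies.

```python
import numpy as np

# Toy 1D basis: two s-like Gaussians on different centers. These stand in for
# the real basis functions GAMESS reports; centers and exponents are made up.
def phi1(x): return np.exp(-(x + 0.7) ** 2)
def phi2(x): return np.exp(-(x - 0.7) ** 2)

# Hypothetical MO coefficient matrix: columns are MOs in the (phi1, phi2)
# basis, e.g. a bonding and an antibonding combination. Real coefficients
# would be parsed out of the GAMESS log file instead.
C = np.array([[0.6,  0.9],
              [0.6, -0.9]])

chosen = [0]  # indices of the MOs (e.g. a degenerate set) to superpose

x = np.linspace(-4.0, 4.0, 801)
basis = np.stack([phi1(x), phi2(x)])  # shape (n_basis, n_points)
mos = C.T @ basis                     # each row: one MO evaluated on the grid

# rho(x) = sum over the chosen set of |MO_i(x)|^2
rho = np.sum(mos[chosen] ** 2, axis=0)

print(rho.min() >= 0.0)             # True: a density is nonnegative
print(np.allclose(rho, rho[::-1]))  # True: symmetric MO, symmetric density
```

The same recipe extends to 3D: evaluate each basis function on a grid, contract with the eigenvector columns for the degenerate set, square, and sum.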

In the build of GAMESS that I acquired, I’ve stumbled over an apparent bug. The program can’t distinguish tetrahedral symmetry in a normal manner… it converts the Td point group of methane into what appears to be a D2h point group. I was able to work around this by calling the symmetry C1. Considering that I started out with no idea how to enter anything at all, I take this as a victory. As open freeware, they work with a smaller budget and team, so I think the goof is probably understandable –though it sure felt malicious when I realized that the problem was with GAMESS itself. I’m not savvy enough with programming to dig in and fix this one myself, I think, though the pseudo-open source nature of GAMESS would certainly allow that.
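For anyone who hits the same wall, the workaround amounts to declaring the group in the $DATA block of the input deck. Something like the following sketch (reconstructed from memory of the card format, so check it against the GAMESS documentation before trusting it; the coordinates are idealized tetrahedral positions for a ~1.09 Å C–H bond):

```
 $CONTRL SCFTYP=RHF RUNTYP=ENERGY $END
 $BASIS  GBASIS=STO NGAUSS=3 $END
 $DATA
Methane, symmetry declared as C1 to dodge the Td point-group bug
C1
C 6.0   0.000   0.000   0.000
H 1.0   0.629   0.629   0.629
H 1.0   0.629  -0.629  -0.629
H 1.0  -0.629   0.629  -0.629
H 1.0  -0.629  -0.629   0.629
 $END
```

Declaring C1 means listing every atom explicitly and giving up whatever speed GAMESS could have gained by exploiting symmetry, but it sidesteps the point-group detection entirely.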

Given how huge an effort my own python SCF program ended up requiring, I’m not too surprised that GAMESS has small problems floating around. As an academic product, they have funding limits. At the very least, I’m impressed that it cranks out in seconds what took my program minutes… that speed extends my range a lot. I was able to experiment with true geometry optimization in GAMESS where my program stopped with me scrounging atomic coordinates out of the literature.

(edit 4-10-19):

This is an image of pyrophosphate, calculated with the 6-311G basis set in GAMESS by restricted Hartree-Fock. This includes geometry optimization and is in a polarized continuum model for representation of solvation in water. The wireframe itself represents an equi-probability surface in the electron density profile while the coloration of the wireframe represents the electrostatic potential at that surface (blue for negative, red for positive).

Edit 5-7-19:

This was an attempted saddle point search in GAMESS trying to find if a transition state exists in the transfer of a proton from one water molecule to another (for formation of hydronium and hydroxide).

This is the weirdest thing I’ve seen using GAMESS yet. It is not exactly a time simulation; it’s an attempted geometry minimization showing the computer trying different geometries in an attempt to locate a saddle point in the potential energy surface. I’m a bit befuddled by this search type because I’m trying to study a reaction pathway in my own work. Unfortunately, this sort of geometry search operates in a counter-intuitive fashion and I’m not certain whether or not it’s broken in the program. However, when you see two oxygens fighting over a proton… well… that’s just cool. If the waters are set close enough to enjoy a hydrogen bond, the energy surface appears to have no extrema except the configuration where the protons sit as two intact waters. If you back the waters off from one another so that they are out of hydrogen bonding distance and pull the hydrogen out so that it is hydrogen bonding with both oxygens, you get this weird behavior where the proton bounces around until the nuclei are close enough to fall back into the hydrogen-bonded water configuration. To see it, I need to pin the oxygens away from each other, which won’t happen in reality.

Not sure what I think.

I will use this post to talk about my time spent learning how to apply the classic quantum mechanical calculation of Hartree-Fock (note, this is plain old Hartree-Fock rather than multi-configuration Hartree-Fock or something newer that gives more accurate results). I’ve spoken some about my learning of this theory in a previous post. Since writing that other post, I’ve passed through numerous travails and learned quite a lot more about the process of ab initio molecular calculation.

My original goal was threefold. I decided that I wanted a structural tool that, at the very least, would allow me access to some new ways of looking at things in my own research. I chose it as a project to help me acquire some skill in a computer programming language. Finally, I also chose to pursue it because it turned out to be a very interesting question.

With several months of effort behind me, I know now several things. First, I *do* think it’s an interesting tool which will give new insight into my line of research, provided I access the tool correctly. Second, I think I was incredibly naive in my approach: the art and science of ab initio calculation is a much bigger project than can bear high-quality fruit in the hands of one overly ambitious individual. It was a labor of years for a lot of people, and the time spent getting around my deficits in programming is doubly penalized by the sheer scope of the project. My little program will never produce a calculation at a modern level! Third, I chose Python for my programming language for its ease of availability and ubiquity, but I think a better version of the self-consistent field theory would be written in C or Fortran. Without having this be my full-time job, which it isn’t, I doubt there’s any hope of migrating my efforts to a language better suited to the task. For any other intrepid explorers seeking to tread this ground in the future, I would recommend asking yourself how pressing your needs are: you will never catch up with Gaussian or GAMESS or any of the legion of other professionally designed programs intended to perform ab initio quantum mechanics.

Still, I did get somewhere.

The study of Hartree-Fock is a parallel examination of Quantum Mechanics and the general history of how computers and science have become entangled. You cannot perform Hartree-Fock by hand; it is so huge and so involved that a computer is needed to hold it together. I talked about the scope of the calculation previously and what I said before still holds. It cannot be done by hand. That said, the physics were still worked out mostly by hand.

I would say that part of the story started almost 90 years ago. Linus Pauling wrote a series of papers connecting the then newly devised quantum mechanics of Schrodinger and his ilk to the puzzle of molecular structure. Pauling took hydrogenic atomic orbitals and used linear combinations of these assemblies to come up with geometric arrangements for molecules like water and methane and benzene. A sigma-orbital is built from two atomic orbitals placed side-by-side so that they overlap, with energy optimization then picking the right distance. A pi-orbital is the same, but with two p-orbitals placed side-by-side and turned so that they lie parallel to one another.

Much of Pauling’s insight now forms the backbone of what you learn in Organic Chemistry. The geometry of molecules as taught in that class came out of these years of development, and Pauling’s spell of ground-breaking papers from that time will have you doing a double-take regarding exactly how much impact his work had on chemistry. Still, in the 1930s, Pauling and his peers had only approximations, with limited accuracy for geometry and no real ability to calculate spectra.

Hartree-Fock came together gradually. C. C. J. Roothaan published what are now called the Roothaan equations, which constitute the core of more modern closed-shell Hartree-Fock, in 1951. Nearly simultaneously, Frank Boys published a treatment of gaussian functions, showing that all the integrals needed for molecular overlap could be calculated in closed form with the gaussian function family, something not possible with the Slater functions that were to that point being used in place of the hydrogenic functions. Hydrogenic functions do show one exact case of what these wave functions actually look like, but they are basically impossible to calculate for any atom except hydrogen and pretty much impossible to adapt to broader use. Slater functions took over in place of the exact hydrogenic functions because they were easier to use as approximations. Gaussian functions then took over from Slater functions because they are easier still and much easier to put into computers, a development largely kicked off by Boys. There is a whole host of names that stick out in the literature after that, including John Pople, who duly won a Nobel prize in 1998 for his work leading to the creation of Gaussian, which to this day is a dominant force in molecular ab initio calculation (and will do everything you could imagine needing to do as a chemist if you’ve got like $1,000 to afford the academic program license… or $30,000 if you’re commercial).

The depth of this field set me to thinking. Sitting here in the modern day, I am reminded slightly of Walder Frey and the Frey brethren in Game of Thrones. This may seem an unsightly and perhaps unflattering comparison, but stick with me for a moment. In Game of Thrones, the Freys own a castle which doubles as a bridge to span the waters of the Green Fork in the lands of Riverrun. The Frey castle is the only ford for miles, and if you want to cut time on your trade (or marching your army), you have no choice but to deal with the Freys. They can charge whatever price they like for the service of providing a means of commerce –or, as the case may be, war– and if you don’t go with them, you have to go the long way around. Programs like Gaussian (and GAMESS, though it is basically protected freeware) are a bridge across a nearly uncrossable river. They have such a depth of provenance in the scientific service that they provide that you are literally up a creek if you try to go the long way around. This is something I’ve been learning the hard way. In truth, there are many more programs out there which can do these calculations, but they are not necessarily cheap, or –conversely– stable.

I think this feature is interesting on its own. There is a big gap between the Quantum Mechanics which everybody knows about, which began in the 1920s, and what can be done now. The people writing the textbooks now in many cases came into their own in an environment where the deepest parts of ab initio calculation were mainly already solved. Two of the textbooks I delved into, the one by Szabo and Ostlund, and work by Helgaker, clearly show experts who are deeply knowledgeable of the field, but have characteristics suggesting that these authors themselves have never actually been able to cross the river between classical quantum mechanics and modern quantum chemistry fully unaided (Szabo and Ostlund never give theory that can handle more than gaussian s-orbitals, where what they give is merely a nod to Boys, while Helgaker was still quoting, as recently as 2010, from a paper that, to the best of my ability to tell, actually gives faulty theory pending some deep epistemological insight guarded by the cloistered brotherhood of Quantum Chemists). The workings hidden within the bridge of the Freys are rather impenetrable. The effort of going from doing toy calculations, as seen in Linus Pauling’s already difficult work, to doing modern real calculations is genuinely herculean. Some of the modern textbooks cost hundreds of dollars and are still incomplete stories on how to get from here to there. Note, this is only for gaining the underpinnings of Hartree-Fock, a flawed technique in itself without Configuration Interaction or other more modern adjustments, and even those get short shrift if you don’t have ways of dealing with the complexities of the boundary conditions.

Several times in the past couple months, I’ve been wishing for Arya Stark’s help.

I will break this story up into sections.

The core of Hartree-Fock is perhaps as good a place to start as any. Everybody knows about the Schrodinger equation. If you’ve gone through physical chemistry, you may have cursed at it a couple times as you struggled to learn how to do the Particle-in-a-Box toy problem. You may be a physicist and have solved the hydrogen atom, or seen Heisenberg’s way of deriving spherical harmonics and might be aware that more than just Schrodinger was responsible for quantum mechanics.

Sadly, I would say you basically haven’t seen anything.

As the Egyptian Book of the Dead claims that Death is only a beginning, Schrodinger’s equation is but the surface of Quantum Mechanics. I will pick up our story in this unlikely place by pointing out that Schrodinger’s equation was put through the grinder in the 1930s and 1940s in order to spit out a ton of insight involving molecular symmetry and a lot of other thoughts about group symmetry and representation. Hartree and Fock had already spent time on their variational methods and systematic techniques for a combined method called Hartree-Fock began to emerge by the 1950s. Heck, an atomic bomb came out of that era. The variant of Schrodinger’s equation where I pick up my story is a little ditty now called the Roothaan equation.

It definitely doesn’t *look* like the Schrodinger equation. In fact, it looks almost as small and simple as E=mc^2 or F = ma, but that’s actually somewhat superficial. I won’t go terribly deeply into the derivation of this math because it would balloon what will already be a long post into a nightmare post. My initial brush with this form of the Roothaan equation came from Szabo and Ostlund, but I’ve since gone and tracked down Roothaan’s original paper… only to find that Szabo and Ostlund’s notation, which I found to be quite elegant, is actually almost directly Roothaan’s notation. Roothaan’s purpose seems to have been collecting prior insight regarding Hartree-Fock into a systematic method.
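The equation itself, which appeared as an image in the original post, is compact. In Roothaan’s notation it reads:

```latex
F C = S C \varepsilon
```

with ‘F’ the Fock matrix, ‘S’ the overlap matrix, ‘C’ the matrix of orbital coefficients, and ε a diagonal matrix of orbital energies.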

This equation emerges from taking the Schrodinger equation and expanding it into a many-body system where the Hamiltonian has been applied onto a wave equation that preserves electron fermion exchange anti-symmetry –literally a Slater determinant wave function, where you may have like 200! terms or more. ‘F’, ‘S’ and ‘C’ are all usually square matrices.

‘F’ is called the Fock matrix and it contains all of the terms of the Hamiltonian. Generally speaking, once the equation is in this form, you’re actually out in numerical calculation land and no mathematical dongles remain in the matrix. The matrix contains only numbers. The Fock matrix is a square matrix which is symmetric, meaning that the terms above and below the diagonal equal each other, which is an outcome of quantum mechanics using hermitian operators. To construct the Fock matrix, you’ve already done a ton of integrals and a huge amount of addition and subtraction on terms that look sort of like pieces of the Schrodinger equation. You can think of the Fock matrix as being a version of the Hamiltonian. Within the Fock matrix are terms referred to as the Core Hamiltonian, which looks like the Schrodinger Hamiltonian, and the ‘G’ matrix, which is a sum of electron repulsion and electron exchange terms, which only occur when you’ve expanded the Schrodinger equation to a many body system. The Fock matrix is usually symmetric rather than just hermitian because the Roothaan equations assume that every molecular orbital is closed… that is, every orbital has one spin up and one spin down electron which are degenerate and indistinguishable. The eigenstates are therefore spatial equations instead of spin-orbitals where spin was integrated out.
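In the closed-shell notation Szabo and Ostlund use, the decomposition just described is conventionally written as (with ‘P’ the density matrix built from the N/2 occupied columns of ‘C’):

```latex
F_{\mu\nu} = H^{\mathrm{core}}_{\mu\nu} + G_{\mu\nu},
\qquad
G_{\mu\nu} = \sum_{\lambda\sigma} P_{\lambda\sigma}
\left[ (\mu\nu \mid \sigma\lambda) - \tfrac{1}{2} (\mu\lambda \mid \sigma\nu) \right],
\qquad
P_{\lambda\sigma} = 2 \sum_{a}^{N/2} C_{\lambda a} C_{\sigma a}
```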

‘C’ is a way to represent the eigenstates of the Hamiltonian. Note, I did not say that ‘C’ is a wave function, because these wave functions are actually impossible to write down (how many terms is ~200! or 400!?) ‘C’ is a representation of a way to write down the eigenstates that you might use to construct a wave function in the space of the Fock matrix. It actually isn’t even the eigenstates directly, but the coefficients for a basis set that could be used to represent the eigenstates you desire. ‘C’ is a square unitary matrix, meaning that multiplying it by its own transpose (in the orthogonalized basis) produces identity. The eigenstates contained in ‘C’ are orbitals that are associated with the Hamiltonian in the form of the Fock matrix.

‘S’ is called the “overlap matrix.” The overlap matrix is a symmetric matrix that is constructed by use of the basis set. As you may have read in my other post on this subject, the basis set may be a bunch of gaussian functions or a bunch of slater functions or some other miscellaneous basis set that you would use to represent the system at hand. The overlap matrix is introduced because, mathematically, whatever basis you chose may be composed of functions that are not orthogonal to one another. Gaussian basis functions are useful, but they are not orthogonal, so you need some way to account for the non-orthogonality. The purpose of the overlap matrix is to carry the calculus necessary to construct orthogonal combinations of the basis functions.
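For s-type gaussian primitives, the entries of this matrix actually have a closed form via the gaussian product theorem. A minimal sketch, assuming unnormalized primitives:

```python
import math

def overlap_s(alpha, A, beta, B):
    """Closed-form 3D overlap of two unnormalized s-type gaussians
    exp(-alpha*|r-A|^2) and exp(-beta*|r-B|^2)."""
    p = alpha + beta                                   # combined exponent
    AB2 = sum((a - b) ** 2 for a, b in zip(A, B))      # squared separation
    # Gaussian product theorem: the product of two gaussians is another
    # gaussian on a weighted midpoint, damped by exp(-alpha*beta/p * |A-B|^2)
    return (math.pi / p) ** 1.5 * math.exp(-alpha * beta / p * AB2)

# Two identical primitives on the same center give S = (pi/(2*alpha))^(3/2)
print(overlap_s(1.0, (0, 0, 0), 1.0, (0, 0, 0)))
```

Notice the damping factor: well-separated functions overlap very little, which is why off-diagonal S entries shrink with distance.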

The form of the Roothaan equation written above is an adapted form for an eigenvalue equation where ε is the eigenvalue. In the case of molecular orbitals, this eigenvalue is an energy that is called the orbital energy. These eigenstates are non-orthogonal as accommodated by the ‘S’ matrix, where the eigenvalues are distributed across combinations of basis functions, as expressed in ‘C’, that are orthogonal to each other.

What makes this equation truly a monstrosity is that the Fock matrix is dependent itself on the ‘C’ matrix. The way this dependence appears is that the integrals which are used to construct the Fock matrix are calculated from the values of the ‘C’ matrix. The Roothaan equation is a sort of feedback loop: the ‘C’ matrix is calculated from working an eigenvalue equation involving ‘F’ and ‘S’ to find ε, where ‘C’ is then used to calculate ‘F’. In practice, this operates as an iteration: you guess at a starting Fock matrix and calculate a ‘C’ matrix, which is then used to calculate a new Fock matrix, from which you calculate a new ‘C’ matrix. The hope is that eventually the new ‘C’ matrix you calculate during each cycle of calculation converges to a constant value.
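The feedback loop can be sketched with a deliberately tiny toy model: a 2×2 symmetric “Fock” matrix whose elements depend on the density built from its own lowest eigenvector. Everything here (the invented core matrix, the 0.1 coupling) is purely for illustration; a real SCF builds ‘F’ from the electron-repulsion integrals.

```python
import math

def lowest_eig_2x2(a, b, d):
    """Lowest eigenvalue and normalized eigenvector of [[a, b], [b, d]]."""
    m, r = 0.5 * (a + d), math.hypot(0.5 * (a - d), b)
    lam = m - r
    # (A - lam*I) v = 0  =>  v is proportional to (b, lam - a)
    v = (b, lam - a) if abs(b) > 1e-15 else ((1.0, 0.0) if a < d else (0.0, 1.0))
    n = math.hypot(*v)
    return lam, (v[0] / n, v[1] / n)

def toy_scf(coupling=0.1, tol=1e-10, max_iter=200):
    H = [[-1.0, -0.2], [-0.2, -0.5]]    # invented stand-in "core hamiltonian"
    P = [[0.0, 0.0], [0.0, 0.0]]        # initial density guess: zero
    for _ in range(max_iter):
        # build F from the current density (stand-in for the 'G' matrix)
        F = [[H[i][j] + coupling * P[i][j] for j in range(2)] for i in range(2)]
        lam, c = lowest_eig_2x2(F[0][0], F[0][1], F[1][1])
        # closed-shell density from the one doubly occupied orbital
        P_new = [[2.0 * c[i] * c[j] for j in range(2)] for i in range(2)]
        diff = max(abs(P_new[i][j] - P[i][j]) for i in range(2) for j in range(2))
        P = P_new
        if diff < tol:
            return lam, True    # converged: C reproduces the F it came from
    return lam, False

print(toy_scf())
```

The structure is the whole point: solve, rebuild, re-solve, until the orbital that comes out is the same orbital that went in.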

An ouroboros eating its own tail.

This is the spine of Hartree-Fock: you’re looking for a convergence to give constant output values of ‘C’ and ‘F’. As I articulated poorly in my previous attempt at this topic, this is the self-consistent electron field. Electrons occupy some combination of the molecular orbitals expressed by ‘C’ at the energies of ε, forming an electrostatic force field that governs the form of ‘F’, for which ‘C’ is the only acceptable solution. ‘C’ is used to calculate the values of the hamiltonian inside ‘F’, where the integrals are the repulsions of electrons in the ‘C’ orbitals against each other or attractions of those electrons toward nuclei, giving the kinetic and potential energies that you usually expect in a hamiltonian.

Here is the scale of the calculation: a minimal basis set for methane (four hydrogens and a carbon) is 9 basis functions. The simplest, most basic basis set in common use is Pople’s STO-3G basis, which creates orbitals from sums of gaussian functions (called a contraction)… “3G” meaning three gaussians to a single orbital. One overlap integral between two contracted functions therefore involves nine primitive integrals (3 × 3). Generation of the 9 x 9 S-matrix mentioned above then involves 9 × 9 function pairs times 9 primitive integrals each, or 729 integrals. Kinetic energy and nuclear attraction terms would each involve another 729 (3 × 729 = 2,187 integrals), which can be shortened by the fact that the matrices are symmetric, so that only a few more than half actually need to be calculated. The electron-electron interactions, including repulsions and exchanges, are a larger number still: one quarter of 9^4 × 9, or ~14,700 integrals (symmetry allows you to avoid the full 9^4, where basically the whole matrix must influence each matrix element, giving a square of a square). Roughly 17,000 integration operations for a molecule of only 5 atoms using the least expensive form of basis set.

The only way to do this calculation is by computer. Literally thousands of calculations go into making ‘F’ and then hundreds more to create ‘C’ and this needs to be done repeatedly. It’s all an intractably huge amount of busy work that begs for automation.

One big problem I faced in dealing with the Roothaan equations was trying to understand how to solve big eigenvalue problems.

Most of my experience with calculating eigenvalues has been while working analytically by hand. You may remember this sort of problem from your linear algebra class: you basically set up a characteristic equation by setting the determinant of the matrix equal to zero after having subtracted a dummy variable for the eigenvalue from the diagonal and then solving that characteristic equation. It’s kind of complicated by itself and depends on the eigenvalues being separable. A 3 x 3 matrix produces a cubic equation –which you hope to God is separable because nobody ever wants to do more than the quadratic equation. If it isn’t separable, you are up a creek without a paddle even at just 3 x 3.

For the example of the methane minimal basis set, the resulting matrices of the Roothaan equation are 9 x 9.

This is past where you can go by hand. Ideally, one would prefer to not be confined to molecules as small as molecular hydrogen, so you need some method of calculating eigenvalues that can be scaled and –preferably– automated.

This was actually where I started trying to write my program. Since I didn’t know at the time whether I would be able to build the matrix tools necessary to approach the math, I used solving the eigenvalue problem as my barometer for whether or not I should continue. If I couldn’t do even this, there would be no way to approach the Roothaan equations.

The first technique I figured out was a technique called Power Iteration. At the time, this seemed like a fairly straight-forward, accessible method to pull eigenvalues from a big matrix.

To perform power iteration, all you do is operate a square matrix onto a vector, normalize the resulting vector, then act the matrix again on *that* new vector. If you do this 10,000 times, you will eventually find a point where the resulting vector is just the initial vector times some constant factor. The constant ends up being the biggest eigenvalue in the matrix and the normalized vector is the associated eigenvector. This gives *only* the biggest eigenvalue in the matrix; you access the next smaller eigenvalue by “deflating” the matrix. This is accomplished by forming the outer product of the eigenvector with itself, multiplying it by the eigenvalue and subtracting the result from the initial matrix, which produces a new matrix where the already determined eigenvalue has been “deactivated.” Performing this set of actions repeatedly allows you to work your way through each eigenvalue in the matrix.
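A minimal pure-python sketch of the loop (no numpy), assuming a small symmetric matrix:

```python
def power_iteration(A, iters=2000):
    """Dominant eigenvalue/eigenvector of a square matrix by repeated
    multiplication and normalization."""
    n = len(A)
    v = [1.0] * n                          # starting guess
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient recovers the eigenvalue (with its sign)
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))
    return lam, v

def deflate(A, lam, v):
    """Subtract lam * (v outer v) to 'deactivate' a found eigenvalue."""
    n = len(A)
    return [[A[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]

A = [[4.0, 1.0], [1.0, 3.0]]               # eigenvalues (7 ± sqrt(5)) / 2
lam1, v1 = power_iteration(A)
lam2, _ = power_iteration(deflate(A, lam1, v1))
print(lam1, lam2)
```

The convergence rate depends on the ratio between the two largest eigenvalues, which is why the iteration count is generous.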

There are some difficulties with Power Iteration. In particular, you’re kind of screwed if an eigenvalue happens to be zero since you no longer have the ability to find the eigenvector.

Much of my initial work on self-consistent field theory used Power Iteration as the core solving technique. When I started to run into significant problems later in my efforts, and couldn’t tell whether my problems were due to the way I was finding eigenvalues or some other darker crisis, I ended up switching to a different solving technique.

The second solving technique that I learned was Jacobi Diagonalization. I had stumbled onto Power Iteration in a list of computational eigenvalue methods discovered on-line; Jacobi, on the other hand, was recommended by the Szabo and Ostlund Quantum Chemistry book. Power Iteration is an iterative method while Jacobi is a direct calculation method.

To my somewhat naive eye, the Jacobi method seems ready-made for quantum mechanics problems. A necessary precondition for this method is that the matrix of choice be at least a symmetric matrix, if not actually a hermitian matrix. And, since quantum chemistry seems to mostly reduce its basis sets to non-complex symmetric forms, the Fock matrix is assured to be symmetric as a result of the hermiticity of ground-level quantum mechanics.

Jacobi operates on the observation that the off-diagonal elements of a symmetric matrix can be reduced to zeros by a sequence of unitary rotations. The rotation matrix (called a Givens matrix) can be directly calculated to convert one particular off-diagonal element to a zero. If you do this repeatedly, you can work your way through each off-diagonal element, zeroing each in turn, until the matrix is diagonal. This works best if diagonalization proceeds in a particular order, where you pick the Givens matrix that zeros the largest off-diagonal element present on any particular turn. This largest element is referred to as “the pivot” since it’s the crux of a mathematical rotation. As the pivot is never assured to be in any particular spot during the process, the program must work its way through the off-diagonal elements in an almost random order, picking only the largest present at that time.

Once the matrix is diagonalized, all the eigenvalues lie along the diagonal… easy peasy. Further, the product of all the Givens matrices is a unitary matrix containing the eigenvectors for each eigenvalue, encoded in order by column.
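The whole procedure can be sketched compactly in plain python (a minimal illustration, choosing the largest off-diagonal pivot each pass as described; real codes update rows and columns in place rather than doing full matrix multiplies):

```python
import math

def jacobi_eigen(A, tol=1e-12, max_rot=100):
    """Diagonalize a symmetric matrix by successive Givens rotations.
    Returns (eigenvalues, V) with eigenvectors in the columns of V."""
    n = len(A)
    A = [row[:] for row in A]                       # work on a copy
    V = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(max_rot):
        # find the largest off-diagonal element: the pivot
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:
            break
        # rotation angle that zeroes A[p][q]: tan(2t) = 2*Apq / (App - Aqq)
        theta = 0.5 * math.atan2(2.0 * A[p][q], A[p][p] - A[q][q])
        c, s = math.cos(theta), math.sin(theta)
        G = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        G[p][p], G[q][q], G[p][q], G[q][p] = c, c, -s, s
        # A <- G^T A G ;  V <- V G  (accumulating eigenvectors)
        GT_A = [[sum(G[k][i] * A[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
        A = [[sum(GT_A[i][k] * G[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        V = [[sum(V[i][k] * G[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return [A[i][i] for i in range(n)], V

vals, vecs = jacobi_eigen([[4.0, 1.0], [1.0, 3.0]])
print(sorted(vals))
```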

In the process of learning these numerical methods, I discovered a wonderful diagnostic tool for quantum mechanics programs. As a check for whether or not Jacobi can be applied to a particular matrix, one should look at whether or not the matrix is symmetric. This isn’t an explicit precondition of Power Iteration and I didn’t write any tools to look for it while I was relying on that technique. After I started using Jacobi, I wrote a tool for checking whether or not an input matrix is symmetric and discovered that other routines in my program were failing in their calculations to produce the required symmetric matrices. This diagnostic helped me root out some very deep programming issues elsewhere in what turned out to be a very complex program (for an idiot like me, anyway).

I made an early set of design choices in order to construct the basis. As mentioned in detail in my previous post, the preferred basis sets are Gaussian functions.

It may seem trivial to most people who may read this post, but I was particularly proud of myself for learning how to import a comma-delimited .csv file into a python list while converting select character strings into floating points. In the final version, I figured out how to exploit an exception call as a choice for whether an input was intended to be text or floating point.
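The exception trick looks something like this sketch (the file contents here are invented for illustration):

```python
import csv
import io

def parse_token(tok):
    """Try to make a float; fall back to the raw string on failure."""
    try:
        return float(tok)
    except ValueError:
        return tok

# stand-in for an open basis-set file (contents invented for illustration)
raw = "H,S,3\n3.42525091,0.15432897\n0.62391373,0.53532814\n"
rows = [[parse_token(t) for t in row] for row in csv.reader(io.StringIO(raw))]
print(rows[0], rows[1])
```

Letting the `ValueError` itself make the text-versus-number decision avoids writing any fragile pattern-matching on the strings.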

In the earliest forms of the program, most of my work was done in lists, but I eventually discovered python dictionaries. If you’re working in python, I caution you to not use lists for tasks that require searchability: dictionaries are easier! For any astute computer scientists out there, I have no doubt they can chime in with plenty of advice about where I’m wrong, but boy oh boy, I fell in love with dictionaries really quick when I discovered the functionality they promote.

For dealing with the basis sets, python has a fair number of tools in its math and cmath libraries. This is limited to basic level operations, however. It may seem a no-brainer, but teaching a program how to do six-dimensional integrals is really not as easy as discovering the right programming library. This intended task defined my choices for how I stored my basis sets.

Within the academic papers, most of the gaussian basis sets can be found in tables stripped of everything but the vital constants. The orbitals are fabricated from “primitive” contractions, where a “contraction” is simply a sum of several bare-bones “primitive” gaussian functions with each identified uniquely by a weighting coefficient and an exponential coefficient. There is frequently also a standard exponent to scale a particular contraction to fit the orbitals for a desired atom. The weighting coefficient tells the magnitude of the gaussian function (often chosen so that the sequence of primitives has an overlap integral that is normalized to equal 1) while the exponential coefficient tells how wide the bell-curve of a particular gaussian primitive spreads. The standard exponent is then applied uniformly across an entire contraction to make it bigger or smaller for a particular atom.

In the earliest papers, these gaussian contractions are frequently intended to mimic atomic orbitals. The texts often refer to “linear combinations of atomic orbitals” when calculating molecular orbital functions. In later years, it seems pretty clear that these gaussian contractions are not necessarily specified atomic orbitals so much as an easy basis set which has the correct densities to give good approximations to atoms with relatively few functions. It’s simply an economical basis for the task at hand.

Since python doesn’t automatically know how to deal specifically with gaussian functions, my programming choice was to create a gaussian primitive class. Each primitive object automatically carried around all the numbers needed to identify a particular gaussian primitive. Within the class there were a few class methods necessary to establish the object and identify the associated constants and position. The orbitals were then lists of these primitive class objects. Later in my programming, I even learned how to make the class object callable so that the primitive could spit out the value of the gaussian for a particular position in space.
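A minimal sketch of such a callable primitive class (the names here are my own invention; a real version would also carry the angular-momentum powers needed for p- and d-type functions):

```python
import math

class GaussianPrimitive:
    """One bare s-type gaussian: weight * exp(-exponent * |r - center|^2)."""
    def __init__(self, weight, exponent, center):
        self.weight = weight        # weighting coefficient (magnitude)
        self.exponent = exponent    # exponential coefficient (spread)
        self.center = center        # (x, y, z) position in space

    def __call__(self, x, y, z):
        # value of the primitive at a point in space
        r2 = sum((p - c) ** 2 for p, c in zip((x, y, z), self.center))
        return self.weight * math.exp(-self.exponent * r2)

# an "orbital" as a list of primitives (a contraction), evaluated by summing
orbital = [GaussianPrimitive(0.5, 1.0, (0.0, 0.0, 0.0)),
           GaussianPrimitive(0.3, 0.2, (0.0, 0.0, 0.0))]
value_at_origin = sum(g(0.0, 0.0, 0.0) for g in orbital)
print(value_at_origin)
```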

This is certainly trivial to the overall story, but it was no small amount of work. That I learned how to do class objects is a point of pride.

Constructing generalized basis functions turns out to be a distinct set of actions from fabricating the particular basis to attack a select problem. After a generalized basis is available to the program, I decided to build *The Basis* as a list identified by atoms at positions in space. This gave a list of lists, where each entry in the top list was itself a list of primitive objects constituting an orbital, where each orbital was associated with a particular center in space as connected to an atom. It’s not really necessary that these basis functions be centered at the locations of the atoms, but their distributions are ideally suited if they are.

With a basis in hand, what do you do with it?

I’ve been debating how much math I actually want to add to this post. I think probably the less the better. If any human eyes ever actually read this post and are curious about the deeper mathematical machinations, I will happily give whatever information is in my possession. In the end, I want merely to tell the story of what I did.

For calculating a quantum mechanical self-consistent electron field, there is a nightmare of integrals that need to be used. These integrals are a cutting rain that forms the deluge which will warn any normal human being away from trying to do this calculation. It isn’t small and the numbers do not scale linearly with the size of the problem. They get big very fast.

The basic paper which sits at the bottom of all contracted gaussian basis sets is the paper by Frank Boys in 1950. I ended up in this paper after I realized that Szabo and Ostlund had no intention of telling me how to do anything deeper than s-orbital gaussians. Being poor, I can’t just randomly buy textbooks to search around for a particular author who will tell how to do all the integrals I desired. So, I took advantage of being in an academic position and I turned over the scientific literature to find how this problem was actually addressed historically. This got me to Boys.

Boys lays out the case as to why one would ever use gaussian functions in these quantum mechanical calculations. As it turns out there are basically just four integral forms that are needed to perform Hartree-Fock: you need an overlap integral, a kinetic energy integral built from several derivatives of the overlap, a nuclear attraction integral and a two-electron repulsion integral (which covers both repulsion and exchange interaction). For many basis function types that you might use, including the vanilla hydrogenic orbitals, each of these types of integrals is unique to the basis functions you put into them. This means that no specific method is common to the integration for whatever family you’re talking about, and some may not even have analytic solutions. This makes the problem very computationally expensive in many cases, and sometimes impossible. With the gaussian functions, you can perform this clever trick where a p-type gaussian can be accessed from a derivative on an s-type gaussian… if the derivative is on a free variable somewhere in the function, the operations of integration and differentiation can be reversed since they aren’t on the same variable. Instead of doing the integral on the p-type gaussian, you do the integral on the s-type gaussian and then perform the derivative to find the associated result for the p-type. Derivatives are always easier!

Boys showed that all the needed integrals can be analytically solved for the s-type gaussian function, meaning that any gaussian class function can be integrated just by integrating the s-type function. In the process he introduced a crazy special function related to the Error Function (erf) which is now often called the “Boys Function.” The Boys function is an intriguing machine because it’s a completely new way of dealing with the 1/distance propagator that makes electromagnetism so hard (for one example, refer to this post).

Boys Function family with member ‘n’:
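The family, which appeared as an image in the original post, has the standard integral form, with the erf connection for the first member:

```latex
F_n(x) = \int_0^1 t^{2n} e^{-x t^2} \, dt,
\qquad
F_0(x) = \frac{1}{2} \sqrt{\frac{\pi}{x}}\, \operatorname{erf}\!\left(\sqrt{x}\right)
```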

While this is a simplification, I’ll tell you that it’s still not just easy.

I did troll around in the literature for a time looking for more explicit ways of implementing this whole gaussian integral problem and grew depressed that most of what I found seemed too complicated for what I had in mind. Several sources gave useful insight into how to systematize some of these integrals to higher angular momentum gaussians, but not all. Admittedly, in my earliest pass, I think I didn’t understand the needs of the math well enough. Dealing with this whole problem ended up being an evolving process.

My earliest effort was an attempt to simply, methodically teach the computer how to do symbolic derivatives using the chain rule and product rule. This was not a small thing, and I figured that I would probably end up with a slow computer program, but –by god– it would be mine. I got a semi-working system out of it.
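My own minimal sketch of that idea (not the author's actual code): if a term c·(x−A)^n·exp(−α(x−A)²) is stored as a dictionary entry {n: c}, then the product and chain rules map every term to at most two new terms, and differentiating an s-type gaussian immediately yields a p-type one.

```python
# Minimal symbolic differentiator for gaussian primitives (illustrative
# sketch). A sum of terms c * (x-A)^n * exp(-alpha*(x-A)^2) is stored as a
# dict {n: c}; d/dx maps each term to two terms via product and chain rule.
def differentiate(terms, alpha):
    """d/dx of sum_n c_n (x-A)^n exp(-alpha (x-A)^2), same representation."""
    out = {}
    for n, c in terms.items():
        if n > 0:                                 # product rule on the monomial
            out[n - 1] = out.get(n - 1, 0.0) + n * c
        out[n + 1] = out.get(n + 1, 0.0) - 2.0 * alpha * c   # chain rule
    return out

s_type = {0: 1.0}                      # a bare s-type gaussian
p_type = differentiate(s_type, alpha=0.5)
print(p_type)                          # {1: -1.0}: a p-type gaussian appears
```

The derivative here is with respect to x; differentiating with respect to the center A is the same operation up to an overall sign.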

A parallel problem that cropped up here was learning how to deal with the Boys function. I started out with only one form of the Boys function and knew that I needed erf to use it. Unfortunately, I rapidly discovered that I also needed derivatives of the Boys function (each derivative is related to the 'n' in the function above). I trolled around in the literature for a long time trying to figure out how to perform these derivatives and eventually worked out, analytically, a series expansion built around erf that successfully produced the Boys function derivatives from the derivatives of erf. Probably the smartest and least useful thing I did during this entire project.
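For reference, the standard identity behind those derivatives is F_n'(x) = −F_{n+1}(x). A quick finite-difference check confirms it (a sketch using the integral definition of the Boys function, not the author's erf series):

```python
# Check the identity F_n'(x) = -F_{n+1}(x) numerically, evaluating the Boys
# function straight from its integral definition.
import math
from scipy.integrate import quad

def boys_direct(n, x):
    """F_n(x) = integral over t in [0,1] of t^(2n) * exp(-x t^2)."""
    val, _ = quad(lambda t: t ** (2 * n) * math.exp(-x * t * t), 0.0, 1.0)
    return val

x, h = 0.7, 1e-6
fd = (boys_direct(0, x + h) - boys_direct(0, x - h)) / (2 * h)
print(fd, -boys_direct(1, x))          # central difference matches -F_1(x)
```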

In its initial form, using these methods, I was able to successfully calculate the minimal basis hydrogen molecule and the HeH+ molecule, both of which have a minimal basis that contains only s-type gaussian functions.

This image is the electron density of the hydrogen molecule, plotted with the python mayavi package. Literally just two sigma-bonded hydrogen atoms, where the scales are in Bohr radii (1 = 0.53 Angstrom). This is the simplest quantum chemistry system, and the hardest system I could manage using strictly my own methods. Sadly, this system can be done by hand in Hartree-Fock.

When I tried to use more complex basis functions for atoms above lithium, attempting molecules like carbon monoxide (CO), my system failed rather spectacularly. Many of the difficulties appeared as non-symmetric matrices (mentioned above) and inexplicably ballooning integral values; the Hartree-Fock loop would oscillate rather than converge. Most of this traced back to problems among my integrals.

One of the problems I discovered was that the error function in the python math library couldn't handle input values approaching zero. I supplied the zero-limit value by hand originally, since the function is non-inclusive of zero, but I also found that the python math library erf function starts to freak out and generate crazy numbers when you get really close to zero, say within 10^-5. So I got the appropriate value directly at zero thanks to my patch and mostly good values above 10^-4, but crazy, weird values in the little band near zero. In one version of the correction, I simply introduced a cut-off that sent my Boys function to its zero-limit value once the input got sufficiently close, but this felt like a very inelegant fix. I searched the literature and around on-line and had collected six different representations of the Boys function before I was finished. The most stable version was a formulation that pulled the Boys function and all its derivatives out of the confluent hypergeometric function 1F1, which I was able to find implemented in the scipy special functions library. (I felt fortunate; I had added scipy to my python environment for a completely unrelated reason that turned out to be a dead-end, and ended up needing this special function... lucky!)

I write the formulation here because somebody may someday benefit from it:

F_n(x) = \frac{1}{2n+1} \, {}_1F_1\!\left(n + \tfrac{1}{2};\, n + \tfrac{3}{2};\, -x\right)

In this, F_n differs from the nth derivative of the Boys function by (-1)^n (the 'n' functions are all positive while the derivatives of the Boys function alternate positive and negative; the official "Boys Function" is the n = 0 member).
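A sketch of that formulation in code (my own variable names; scipy's hyp1f1 supplies the confluent hypergeometric function), checked against the integral definition, including tiny inputs where an erf-based route can struggle:

```python
# Boys function family through the confluent hypergeometric function:
#     F_n(x) = 1F1(n + 1/2; n + 3/2; -x) / (2n + 1)
import math
from scipy.special import hyp1f1
from scipy.integrate import quad

def boys(n, x):
    """Boys function F_n(x) via 1F1; smooth all the way through x = 0."""
    return hyp1f1(n + 0.5, n + 1.5, -x) / (2 * n + 1)

# Compare with the integral definition F_n(x) = int_0^1 t^(2n) exp(-x t^2) dt
for n in (0, 1, 3):
    for x in (0.0, 1e-8, 0.5, 10.0):
        direct, _ = quad(lambda t: t ** (2 * n) * math.exp(-x * t * t), 0.0, 1.0)
        assert abs(boys(n, x) - direct) < 1e-9

print(boys(0, 0.0))                    # 1.0, the correct zero limit of F_0
```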

Initially, I could not distinguish whether the errors in my program were in the Boys function or in the method being used to form the gaussian derivatives. I knew I was having problems with my initial erf-based implementation of the Boys function, but problems persisted no matter how I altered the Boys function itself. The best conclusion was that the Boys function was not my only stumbling block. Eventually I waved the white flag and surrendered to the fact that my derivative routine was not going to cut it.

This led to a period where I was back in the literature learning how gaussian derivatives are made by professional quantum chemists. I found a number of different strategies. Pople and Hehre produced a paper in 1978 outlining a method used in their Gaussian software which apparently performs a cartesian rotation that causes most of the busy work to go away, and which is supposedly really fast. Dupuis, Rys and King published a method in the 1970s which generates higher angular momentum integrals by a form of quadrature. A paper by McMurchie and Davidson in 1978 detailed a recursion which generates the higher angular momentum gaussians from an s-type seed using hermite polynomials. Another, by Obara and Saika in 1986, broke the system down into a three-center gaussian integral and used recurrence relations to generate the integrals by a different recursion. And a further paper involving Pople elaborated something similar (I found that paper, but never read it).

Because I had found a secondary source from a more modern quantum chemist detailing the method, I focused on McMurchie and Davidson. This method provided several fairly interesting techniques that I was able to successfully program; I learned how to write recursive functions here, since McMurchie and Davidson calculate their integrals through a recurrence relation.
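A minimal sketch of a McMurchie-Davidson-style recursion in one dimension, following the textbook presentation (the function names and check values are my own, not from the original paper): the Hermite expansion coefficients E_t of a product of two cartesian gaussians are built recursively, and the t = 0 coefficient gives the overlap.

```python
# Hermite expansion coefficients E_t^{ij} for the product of two 1D cartesian
# gaussians with exponents a, b and center separation Qx = Ax - Bx.
import math

def E(t, i, j, Qx, a, b):
    """Recursively build E_t^{ij}; out-of-range t gives zero."""
    p, mu = a + b, a * b / (a + b)
    if t < 0 or t > i + j:
        return 0.0
    if i == j == t == 0:
        return math.exp(-mu * Qx * Qx)           # pre-exponential factor K_AB
    if i > 0:                                     # lower angular momentum on A
        return (E(t - 1, i - 1, j, Qx, a, b) / (2 * p)
                - (mu * Qx / a) * E(t, i - 1, j, Qx, a, b)
                + (t + 1) * E(t + 1, i - 1, j, Qx, a, b))
    return (E(t - 1, i, j - 1, Qx, a, b) / (2 * p)   # otherwise lower it on B
            + (mu * Qx / b) * E(t, i, j - 1, Qx, a, b)
            + (t + 1) * E(t + 1, i, j - 1, Qx, a, b))

def overlap_1d(i, j, Qx, a, b):
    """1D overlap of unnormalized primitives (x-Ax)^i g_a and (x-Bx)^j g_b."""
    return E(0, i, j, Qx, a, b) * math.sqrt(math.pi / (a + b))

# s|s and p|s overlaps for arbitrary test exponents
print(overlap_1d(0, 0, 0.5, 0.8, 1.2), overlap_1d(1, 0, 0.5, 0.8, 1.2))
```

The other integral classes are built from these same coefficients; this sketch only carries the recursion far enough to show the overlap.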

About this time, I was also butting my head against the difficulty that I had no way of really checking whether my integrals were functioning properly. In Szabo and Ostlund, values are only given for integrals involving s-type orbitals; I had nothing in the way of numbers for p-type orbitals. I could tell that my Hartree-Fock routine was not converging, but I couldn't tell why. I tried making a comparison calculation in Mathematica, but the integration failed miserably. I might've been able to go further with Mathematica if the learning curve of that programming weren't so steep –when the functions aren't directly in their colossal library, implementation can become pretty hard in Mathematica. Beyond hammering on the Boys function until I could build nothing more stable, and checking whether my routines produced symmetric matrices, I had no other way of telling where faults existed. These integrals are not easy by hand.

I dug up a 1966 paper by Taketa, Huzinaga and O-hata, about the only paper I could find which actually reported values for these integrals for a particular set-up. Apparently, after 1966, it stopped being a novelty to show that you were able to calculate these integrals, so nobody has published values in the last fifty years! Another paper, by Shavitt and Karplus a few years earlier, references values calculated at MIT earlier still, but aside from these I struggled to find reference values. This experience was a formative one because it shows how hard you have to work to be competitive if you're not actually in the in-club of the field –for modern workers, the problem is solved and you refer to a program built during *that time* which can do these operations.

Comparing to Taketa using the McMurchie-Davidson method, I was able to confirm that my overlap and kinetic energy integrals were functioning properly. The nuclear attraction integral was a bust, and the electron repulsion integrals were no better: they worked for s-orbitals, but not for p-type and higher. Unfortunately, Taketa and company had mistransliterated one of their basis functions from a previous paper, leading me to worry that maybe the paper was also lying about the values it reported. I eventually decided that Shavitt was probably not lying too, meaning that there was still something wrong with my integration, even though I had hammered on McMurchie until smoke was coming out of my ears and was sure I had implemented it correctly.

This was rock bottom for me. You can sort of see the VH1 special: and here he hit rock bottom. I didn’t know what else to do; I was failing at finding alternative ways to generate appropriate reference checks and simply could not see what was wrong in my programming. I had no small amount of doubt about my fitness to perform computer programming.

My selected path forward was to learn how to implement Obara-Saika and to turn that against McMurchie. Obara-Saika is also a recursive method that performs exactly the same derivatives without ever writing out a long-form derivative. Initially, it too gave values different from Taketa and Shavitt, but I was able to track down a stray -1 that changed everything. Suddenly, Obara-Saika was giving values right in line with Taketa.
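A sketch of an Obara-Saika-style overlap recurrence in one dimension (my own naming and test values, assuming unnormalized cartesian primitives): angular momentum is raised one unit at a time starting from the closed-form s|s overlap, with no symbolic differentiation anywhere.

```python
# Obara-Saika style upward recursion for 1D overlap integrals.
import math

def os_overlap(i, j, a, b, Ax, Bx):
    """Overlap of (x-Ax)^i exp(-a(x-Ax)^2) with (x-Bx)^j exp(-b(x-Bx)^2)."""
    if i < 0 or j < 0:
        return 0.0
    p  = a + b
    Px = (a * Ax + b * Bx) / p                   # center of the product gaussian
    if i == 0 and j == 0:                        # closed-form s|s seed
        mu = a * b / p
        return math.sqrt(math.pi / p) * math.exp(-mu * (Ax - Bx) ** 2)
    if i > 0:                                    # raise angular momentum on A
        return ((Px - Ax) * os_overlap(i - 1, j, a, b, Ax, Bx)
                + ((i - 1) * os_overlap(i - 2, j, a, b, Ax, Bx)
                   + j * os_overlap(i - 1, j - 1, a, b, Ax, Bx)) / (2 * p))
    return ((Px - Bx) * os_overlap(i, j - 1, a, b, Ax, Bx)   # or raise it on B
            + (i * os_overlap(i - 1, j - 1, a, b, Ax, Bx)
               + (j - 1) * os_overlap(i, j - 2, a, b, Ax, Bx)) / (2 * p))

# p|p overlap for arbitrary test parameters
print(os_overlap(1, 1, 0.8, 1.2, 0.5, 0.0))
```

Because the seed and the recurrence are both closed-form, results from a routine like this can be checked term by term against any independent method for the same primitives.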

When I compared, by hand, the outcomes of McMurchie-Davidson against those of Obara-Saika on the very simplest case of p-type primitives, I found that the two methods produce different values: for the same problem, Obara-Saika does not give the same result as McMurchie-Davidson. I conclude either that I misunderstood McMurchie-Davidson, which is possible, or that the technique is willfully mangled in the paper to protect a possibly valuable piece of work from a competitor, or that it's simply wrong (somebody tell Helgaker: he actually teaches this method in classes, reproducing it faithfully, which is part of why I originally trusted it). I do not know why McMurchie-Davidson fails in my hands, because everything they do *looks* right!

After a huge amount of work, I broke through. My little python program was able to reproduce some historical Hartree-Fock calculations.

This image is a mayavi plot depicting the quantum mechanical electron density of water (H_{2}O). The oxygen is centered while the two hydrogens are at the points of the stems. This calculation used the minimal basis set STO-3G, which builds each orbital from three primitive gaussians. Each hydrogen contributes only a 1s orbital while the oxygen contributes a 1s, a 2s and three 2p orbitals along the x, y and z directions. The image above is threshold filtered to show the low density parts of the distribution, where the intensities are near zero; this was necessary because the oxygen has so much higher density than the hydrogens that you cannot see anything but the inner shell of the oxygen when plotting on absolute density. This system has ten electrons in five closed molecular orbitals, and the electron density represents the superposition of those orbitals. The reported energies were right on the values expected for an STO-3G calculation.

With water, I worried initially that the calculation wouldn't converge unless I made certain symmetry considerations about the molecule, but that turned out to be unnecessary: after I solved my integral problems, the calculation converged immediately.

I also spent some time on methane using the same tools and basis…

With methane (CH_{4}) you can see quite clearly the tetrahedral shape of the molecule with the carbon at center and the hydrogens arranged in a halo around it. This image was also filtered to show the low density regions.

I had some really stunning revelations about organic chemistry when I examined the molecular orbital structure of methane somewhat more closely. Turns out that what you learn in organic chemistry class is a lie! I’ll talk about this in an independent blog post because it deserves to be highlighted, front and center.

As kind of a conclusion here, I will note that STO-3G is far from the best modern basis set for quantum mechanical calculations... and even with the most modern basis, you can't quite get there. Hartree-Fock does not include correlation between electrons of differing spin and therefore converges to a limit called the Hartree-Fock limit. The difference between molecular energies calculated at the Hartree-Fock limit and those actually observed in experiment is referred to as the correlation energy, which can be recovered with greater accuracy using post-Hartree-Fock techniques and higher quality basis sets than STO-3G. With a basis of infinite size and calculations that include the correlation energy, you get close to the truth. What is seen here is still just another approximation –better than the last, but still just short of reality.

My little python program probably can't go there without a serious redesign that would take more time than I currently have available (and would probably involve me learning FORTRAN). The methane calculation took 12 seconds, as compared to molecular hydrogen, which took 5% of a second. Given the scaling of the problem, benzene (6 carbons and 6 hydrogens) would take something close to 2 hours to calculate and maybe all night to plot. And this is with only STO-3G, three gaussians per orbital, which is dinky and archaic compared to a more modern basis set that might place 50 or 60 functions on a single atom. Compared to what modern programs can do, benzene itself is but a toy.
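A back-of-envelope sketch of that scaling estimate (my own arithmetic, assuming the conventional ~N^4 growth of the two-electron integral count with the number of basis functions; the methane timing is the one quoted above):

```python
# Rough N^4 scaling estimate from STO-3G basis-function counts.
sto3g_funcs = {"H": 1, "C": 5}        # per atom: H -> 1s; C -> 1s, 2s, 2p x 3

def n_basis(formula):
    """Count STO-3G basis functions for a composition like {"C": 1, "H": 4}."""
    return sum(sto3g_funcs[el] * count for el, count in formula.items())

n_methane = n_basis({"C": 1, "H": 4})             # 9 functions
n_benzene = n_basis({"C": 6, "H": 6})             # 36 functions

t_methane = 12.0                                  # seconds, measured above
t_benzene = t_methane * (n_benzene / n_methane) ** 4
print(t_benzene / 60.0)               # roughly 51 minutes: the same order as
                                      # the "close to 2 hours" guessed above
```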
