At Play With GAMESS (4)

Another in my GAMESS series on quantum chemistry. GAMESS is unquestionably a magnificent piece of work. All the people who have produced it should be very proud.

With the steady improvement of my skill using GAMESS, I was able to find the chemical reaction I was looking for in that earlier post. The basis set strongly influenced the reaction pathway in this one. Here is the transfer of a proton in water from one hydroxide to another (a water molecule hands its proton to a neighboring hydroxide, regenerating a water and a fresh hydroxide).

[Figure: water hydroxide rxn v3 – Acid-base reaction in water, proton transfer, 6-31G**.]

I won't talk a huge amount about this. The intent was simply to put up the pictures. This is the reaction by which protons are transferred around in water. This is the alkaline version of the reaction. I haven't looked for it yet, but I suspect there is a homologous reaction under acidic conditions, involving hydronium only. It may be possible for this reaction to occur directly between hydronium and hydroxide, but I'm not sure a stationary state is possible in that system: the electrostatics will probably create a very steep potential energy gradient between the two molecules, with no stationary point in the middle (this may depend on the solvent model to introduce screening).

The lesson: protons are probably never actually free in water.

[Figure: water hydroxide rxn v2 – Acid-base reaction, the transfer of a proton, 6-31G**.]

At Play With GAMESS (3)

The results of some calculations just need to be showcased.

This ab initio chemical reaction simulation took a huge amount of time and effort to generate.

[Figure: Phosphate rxn v7 – Hydroxide attacking aminophosphate.]

Depicted here is an attack by hydroxide (-1 charge) on aminophosphate (-2 charge). The black ball is phosphorus, red is oxygen, white is hydrogen and blue is nitrogen. As should be evident from my usual mode of operation, the surface is a density isosurface enclosing 90% of the total electron density. The coloration of that surface is the electrostatic potential, with blue as negative and red as positive. Dashed lines are hydrogen bonds, single sticks are single covalent bonds and double sticks are double covalent bonds. The reaction produces singly protonated phosphate (-2 charge) and deprotonated hydroxylamine (-1 charge). A polarizable continuum model is in use in the simulation to place the reaction effectively in water (you need this to help dampen the repulsive forces that the charged portions of these molecules exert on each other). This structure was calculated using the Pople 6-31G** basis set, literally at the limit of my computer, in order to include some polarizability in the atoms and to create enough variational flexibility in the diffuse region of the model to pick up the interactions between the molecules approaching each other. This basis set is very big and costly, meaning that I can only do fairly small systems with it on my computer using this strategy.

I've been trying to find a version of this particular reaction for literally months now, hampered in large part by my lack of skill with ab initio methods and by my sometimes shaky knowledge of organic chemistry. The last reaction that I posted, with phosphate and water, was a byproduct of my search for this one. This particular reaction is one possible prototype for the reaction by which DNA is polymerized from dNTPs. In this case, the hydroxide is standing in for the 3′ hydroxyl of oligomeric DNA and the hydroxylamine is standing in for pyrophosphate. The reaction is a form of transesterification. In DNA, the resulting product would be the phosphodiester linkage between nucleotides.

To make this reaction proceed, a deprotonated hydroxyl must attack through the face of the tetrahedral phosphate opposite the desired leaving group. Further, the attacking group must be a poorer leaving group than the desired one –I had this problem early in my search. For the search to work, I needed to guess the appropriate transition state, which I didn't think too clearly about until I already had four or five failures under my belt. As it turns out, the phosphorus must go from tetrahedral geometry to a trigonal-bipyramidal state. I then needed to tease out other spurious "reactions," including a rotation of the hydroxylamine. This becomes difficult with more complicated molecules because additional degrees of freedom allow for a lot of unimportant motion that can totally screw up the search.

[Figure: Phosphate rxn v3 – HOMO of the reaction throughout the pathway.]

The highest occupied molecular orbital (HOMO) throughout this reaction is, I think, very interesting.

When these chemicals are separated from one another, the polarization of the water continuum allows the molecular orbitals to have negative energy, meaning that they can stay occupied (theoretically, if you believe the MO energies, which are known to be off). But as the hydroxide is placed near the aminophosphate, the orbitals, particularly the HOMO, shift to positive energy, meaning that the hydroxide wants to give its electrons up. You can then see the lobe of the hydroxide HOMO rotate into the open face of the phosphate tetrahedron during the attack, whereby the HOMO is transferred to the hydroxylamine, which is then displaced. This new HOMO still has positive energy and, even though it actually starts to rotate back to face the phosphate, the potential energy surface of the reaction, meaning the charge forces and orbital exclusion, forces the potential reactants apart. The hydroxide HOMO peaks at 0.238 Hartrees, while the hydroxylamine HOMO peaks at 0.243 Hartrees. I would actually have guessed that hydroxylamine would be the better attacker, but I was also aware that the nitrogen-based leaving group should be better than the hydrogen in hydroxide and would give me a better chance of finding this reaction. To be clear, the hydroxylamine HOMO is definitely antibonding (just look at it) and this molecule would still be reactive elsewhere.

(Quick edit 7-1-19: Also looking at the HOMO, you can see a point where the HOMO spreads from hydroxide over onto the phosphate just before the reaction has "officially" taken place by reaching the transition state. This means that electrons from the hydroxide have leaked over onto the phosphate and have mixed with the electrons already present on the oxygens located there. This comes back to the Born-Oppenheimer approximation; because of the difference in masses, the time scale governing the motion of the electrons is very different from that of the nuclei. When the HOMO has shown up on the phosphate, it means that the electrons are tunneling over and back even as the nucleus still moves in.)

[Figure: Phosphate rxn v6]

Just some other views of the same thing.

[Figure: Phosphate rxn v4]

Most of my reason for doing this post was to just add some kind of pretty pictures. You can’t blame me; this took a ridiculous amount of time to produce and I feel like showing it.

At Play With GAMESS (2)

Rather than making one post that is six miles long with updates, new posts with new content seem like a good idea. For most people this would be an obvious idea!

I've continued to learn with GAMESS. The functionality that interests me the most is the capacity to use automated quantum mechanical calculations to simulate the occurrence of chemical reactions. A constellation of eigenstates containing electrons, as shaped by the positioning of atomic nuclei, can be sculpted by the constraints of energy minimization to reveal which configurations are more likely to occur than others. You can sort of think about nuclei as tiny pearls trapped in a continuum of cotton candy: if you grab a pearl and try to move it, the candy clings to it like springy rubber to try to dampen the movement. The only trick is that the candy is not distributed in a uniform fashion and is actually redistributed by the movement of these pearls; it tends to spring and flex in preferred directions. As with most physics, this becomes essentially a system of masses on springs where you can calculate how the deflections occur in a potential energy "surface" that is a function of the positions of the nuclei. I put "surface" in quotes because it's a hyper-surface with many more than simply two dimensions: it has three dimensions for every nucleus.
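For the mathematically inclined, the "masses on springs" picture is just the second-order Taylor expansion of the potential energy about a stationary geometry x0 (the linear term vanishes there):

$$V(\mathbf{x}) \approx V(\mathbf{x}_{0}) + \frac{1}{2}\sum_{i,j=1}^{3N} \left.\frac{\partial^{2} V}{\partial x_{i}\,\partial x_{j}}\right|_{\mathbf{x}_{0}} \delta x_{i}\,\delta x_{j},$$

and that matrix of second derivatives is exactly the "hessian" discussed next.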

Figuring out a chemical reaction is very much about intuiting the shape of this hyper-surface from clues in the 3D structure of the atoms. You use the potential energy surface to generate a mathematical entity called a "hessian," a 3N×3N matrix of second derivatives of the potential energy taken with respect to pairs of the 3N nuclear coordinates. Derivatives of energies are forces, and the hessian gives what are called the force constants (as I understand it) of the potential energy surface, enabling the identification of minima and maxima through curvature (the second derivative!). Eigenvalues of the hessian matrix reveal vibrational modes within the molecule and, if you find imaginary frequencies, you have stumbled over transits in the potential energy surface between one configuration and another… literally non-vibratory motions inside the model corresponding to the occurrence of chemical reactions.
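To make that concrete, here is a minimal Python sketch, with a toy two-dimensional potential of my own standing in for the real 3N-dimensional surface (purely illustrative; this is not GAMESS's machinery): build a finite-difference hessian at a stationary point and inspect its eigenvalues. A negative eigenvalue is a negative force constant, which is what turns into an "imaginary frequency" when converted to a vibrational mode.

```python
import numpy as np

def V(x):
    # Toy 2D potential with a stationary point at the origin:
    # a maximum along x[0] (downhill both ways) and a minimum along x[1].
    return -x[0]**2 + x[1]**2 + 0.1 * x[0]**4

def hessian(f, x, h=1e-4):
    """Finite-difference matrix of second derivatives of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h**2)
    return H

x0 = np.array([0.0, 0.0])                 # a stationary point (gradient = 0)
evals, evecs = np.linalg.eigh(hessian(V, x0))
print(evals)                              # approximately [-2, 2]
# One negative force constant: omega = sqrt(k/m) comes out imaginary,
# i.e. the non-vibratory "reaction" motion described above.
```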

To find a chemical reaction with GAMESS, the first task is to find features in the potential energy surface that are called "saddle points." These saddle points are locations in the space of the molecular coordinates where the vibratory modes of the hessian are all minimized while the one imaginary mode is maximized (again, as I understand it… these constructs involve such huge numbers of variables that you have the computer sitting between you and the calculation, and I have not done the math myself directly). Saddle points can seem very arcane described in that way, but they mark transitory states in the geometric structure: literally the intermediate structural state of the chemicals positioned between the products and the reactants, called a transition state. If you can guess accurately at the form of a transition state, the computer can use energy minimization to massage that configuration into a stationary configuration –a flat point on the potential energy surface where the derivative goes to zero– at which the imaginary vibratory mode is maximized and the other modes are minimized in potential energy.

If you can find a saddle point, which turns out to be very tricky, the computer is then able to follow the curves of quickest descent in the potential energy surface forward to the reaction products and backward to the reactants. You can then stitch these half-paths together to produce the whole chemical reaction.
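Continuing the toy example (again, a cartoon of the idea, not the actual integrator GAMESS uses for the intrinsic reaction coordinate): nudge the geometry off the saddle point along the downhill eigenvector found in the previous sketch, then creep along the negative gradient until each half-path bottoms out.

```python
import numpy as np

def V(x):
    # Same toy 2D potential as the previous sketch: saddle at the origin,
    # minima near (+sqrt(5), 0) and (-sqrt(5), 0).
    return -x[0]**2 + x[1]**2 + 0.1 * x[0]**4

def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy(); xp[i] += h
        xm = x.copy(); xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2 * h)
    return g

def descend(f, x0, step=1e-3, tol=1e-6, max_steps=200000):
    """Crude steepest-descent path: follow -grad(f) down to a minimum."""
    x = x0.copy()
    path = [x.copy()]
    for _ in range(max_steps):
        g = grad(f, x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
        path.append(x.copy())
    return np.array(path)

# Nudge off the saddle along the imaginary-mode direction and relax.
mode = np.array([1.0, 0.0])               # the negative-eigenvalue eigenvector
saddle = np.array([0.0, 0.0])
forward = descend(V, saddle + 1e-3 * mode)    # rolls toward "products"
backward = descend(V, saddle - 1e-3 * mode)   # rolls toward "reactants"
print(forward[-1], backward[-1])          # the two minima flanking the saddle
# Reversing one half-path and appending the other stitches together the
# whole reaction pathway, saddle point in the middle.
```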

Here is a chemical reaction…

[Figure: phosphate proton exchange – Proton exchange between mono-protonated phosphate and water.]

This is the entire reaction pathway between a water molecule and a molecule of singly protonated phosphate. The ball-and-stick model above is the usual modeling produced by the GAMESS satellite program wxMacMolPlt: oxygens are red balls, phosphorus is black and hydrogen is white. The wireframe surface is an electron equipotential surface and its coloration marks regions of negative electrostatic potential (blue) and positive potential (red). Bonds are drawn based on electron occupancy of orbitals in that region, with dashed lines depicting what are essentially hydrogen bonds, single sticks depicting single covalent bonds (~two electrons) and double sticks depicting double covalent bonds (~four electrons). Orbitals are closed shells using restricted Hartree-Fock and the basis set was relatively small, only 3-21G. There is a polarizable continuum model in the simulation in order to pretend, to some extent, that this takes place in a medium like continuous water.

In this reaction, phosphate with a charge of -2 sucks a proton (charge +1) off of water, creating a transition state of hydroxide (charge -1) which then sucks a proton back off of the phosphate to recover water.

Since the water is turned away from the camera, here’s another set of images of the same thing:

[Figure: phosphate proton exchange2]

In performing this set of calculations, which took a ridiculous amount of time, my initial objective was to find a proton exchange. Happily, I finally found it, though not without five iterations of the work. Even in this version, I was backward in my guess of the transition state: I was guessing the transition would be hydronium, with both protons present on the water… it ended up with both on the phosphate, since the phosphorus apparently doesn't withdraw electrons strongly enough from the oxygens to make unbonded water better at holding protons.

I found this in multiple steps. First, I modeled water with triply deprotonated phosphate and looked for an energy minimum. This ended up with water doubly hydrogen bonded to the phosphate (looked like a guy in stirrups). I then plunked a third proton onto the water, creating hydronium (+1 charge). Since this ended up with the third proton added in a funny direction, I used part of the saddle point search as if I was doing a regular optimization to get to the proper shape of hydronium. Finally, as I was looking for the actual saddle point, phosphate dragged the hydronium in by a pincer move and ripped off both of his arms.

This sort of surprised me since I wasn’t expecting such violence, but the transition state ended up with both exchanged protons on the phosphate and only one remaining in the water as hydroxide.

Following the intrinsic reaction coordinate using the imaginary hessian mode allowed me to slide along the potential energy surface to give the reaction depicted above. You can see the bonds shrink and swell as the electrons are redistributed in the molecule to accept the exchange with the water, electron density pulling out away from the phosphorus as the proton is accepted and filling back in on the other arm as the proton is reciprocally donated.

I have a strong feeling that this is generally similar to the state of acids and bases in water, where protons are passed around like bad kleenex along lines of hydrogen bonding. Likely, acidic protons are never free, and hydronium and hydroxide constitute transition states. Since water has an overall concentration of about 55 molar, and aqueous strong acids and bases have concentrations of only about 12 molar, plenty of intact water exists even in strong acid or strong base, where protons are constantly moving around. In base, the proton is scarce, so the transition state is probably hydroxide, while in acid the proton is common, giving hydronium as the transition state instead. It's an interesting speculation.

At Play with GAMESS

(Upgraded to its own post from here.)

This was an attempted saddle point search in GAMESS, trying to find whether a transition state exists in the transfer of a proton from one water molecule to another (for the formation of hydronium and hydroxide).


[Figure: water_bounce – Saddle point search for proton transfer in water.]

This is the weirdest thing I've seen using GAMESS yet. This is not exactly a time simulation; it's an attempted geometry optimization showing the computer trying different geometries in an attempt to locate a saddle point in the potential energy surface. I'm a bit befuddled by this search type because I'm trying to study a reaction pathway in my own work. Unfortunately, this sort of geometry search operates in a counter-intuitive fashion and I'm not certain whether or not it's broken in the program. However, when you see two oxygens fighting over a proton… well… that's just cool. If the waters are set close enough that they enjoy a hydrogen bond, the energy surface appears to have no extrema except the one where the protons are arranged as two intact waters. If you back the waters off from one another so that they are out of hydrogen bonding distance and pull the hydrogen out so that it is hydrogen bonding with both oxygens, you get this weird behavior where the proton bounces around until the nuclei are close enough to go back to the hydrogen-bonded water configuration. I would need to pin the oxygens away from each other, which won't happen in reality.

Not sure what I think.

(Edit 5-10-19:)

This post seems to be a good place to put weird and interesting things. Here is an ab initio computation for the structure of a Guanine Quartet.

[Figure: G-quartet in water 3-21G – G-quartet, probability density equipotential surface, colored by local electrostatic potential.]

The surface is again an equiprobability surface and is colored by electrostatic potential, with green being most negative and red being most positive. This calculation took my meager computer 527 minutes (nearly nine hours). The central atom is a coordinating sodium and the purine rings are hydrogen bonded to each other along the dashed lines. The basis set was 3-21G, which is as big as I dare use while still being able to make the nitrogens lie flat when they're in those purine rings. The structure definitely seems to approach C4h point symmetry, though I could not build the model with that symmetry to start out and simplify the calculation, due to the complexity of the structure.

This aggregate has a wickedly interesting HOMO (highest occupied molecular orbital). Because the system is not quite perfectly relaxed to planarity, the symmetry is a tiny bit broken and the state is not quite, but nearly, 4-fold degenerate. If you plot all the (nearly) degenerate orbitals together, a very interesting structure emerges.

[Figure: G-quartet in water 3-21G HOMO – G-quartet HOMO, all four nearly degenerate states.]

I thought this was really cool. You can see the peaks and troughs of the electron waves running around the plane. That’s eight electrons circulating in sort of a pi-bond-like structure. This is the highest energy occupied eigenstate of the complex.


Edit 7-17-19:

I’ve learned some things about G-quartet recently that have had me going back and revisiting the calculations presented above. In the sets shown above, the geometry never quite reached convergence; the optimization step size was set too big and the geometry steps were oscillating around the minimum without ever reaching it for several hundred steps at a time and many hours –just a function of my inexperience at the time.

As it turns out, the structure of the G-quartet isn't actually C4h in symmetry, as I note above. It might be C2, or even C1. When I was performing the calculation mentioned above, I was operating under the assumption that the G-quartet is a planar assembly, much like Watson-Crick base pairs. This is not an undue assumption, and I've found published G-quadruplex NMR structure papers that seem to make similar assumptions –an NMR structure is built from computed correlations, and certain features of how those correlations are expressed are themselves assumptions, such as that G-quartets are flat. When I flattened it into a plane and attempted to optimize the geometry, it turned out that the structure has no stationary state that is flat. The calculation simply never converges. The G-quartet, liganded to a monovalent sodium ion or not, appears to prefer a saddle splay configuration. The thing has a shape like a Pringle.

[Figure: Converged G-quartet geometry optimization, basis set 6-31G**.]

This form of the G-quartet is produced using the Pople 6-31G** basis set, which includes a single d-function on second-row atoms and a single p-function on hydrogens. The structure is not hugely different from the structures posted above, but the saddle splay it demonstrates is actually a geometry-minimized feature of the structure. The hydrogen bonds holding this structure together are mostly unmarked because they turn out to be on the long side, ~2 Angstroms. (Certain G-quadruplex papers I found report these to be longer still, as much as 3 Angstroms.)

Part of why I went back and repeated this calculation was that I discovered the 3-21G basis set I was using above does not properly represent hydrogen bonding. 3-21G is on the small side and is apparently pretty good for a rough draft. For an object the size of a G-quartet, a fully converged geometry optimization with just this small basis set took 239 steps and 451 minutes (7.5 hours). That was even with my more recent discovery that direct SCF (calculating integrals as needed without ever storing them) is more efficient than conventional SCF (calculating integrals once and then storing them) when computing in parallel –recovering an integral from a large storage file is slower than simply computing it again. For big structures, you need to stay with as small a basis set as practical; the bigger the structure, the more a small basis that still captures the features of the molecule pays off time-wise. 3-21G seemed a good choice until I realized that it was producing a non-planar structure for the G-quartet when I fully converged it.

At around this time, I had just discovered that 3-21G massively shortens hydrogen bonds… so much so, in fact, that you can’t really use it to simulate the chemistry of proton transfer in water. Noting that the G-quartet had this big saddle splay in 3-21G and that the quartet structure is dependent on hydrogen bonding, I had the <sarcasm>ingenious</sarcasm> notion that maybe the splay is a result of bond shortening and that a bigger basis set might capture the complex as flat.

So, I embarked on optimizing the structure with 6-31G and supplemented it with polarization functions (for 6-31G**) to try to cover the diffuse region where hydrogen bonding occurs. 6-31G doubles the function density of 3-21G, meaning that it increases the required number of integrals by about 16-fold. This is a ten-fold increase in computation time at minimum. I ended up running the thing overnight several nights in a row. The optimization took about 4 days with a bunch of restarts (I learned something cool about GAMESS restarting which I'll keep to myself).
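The 16-fold figure is just the formal scaling of the two-electron integral count with the number of basis functions N; a quick back-of-envelope:

$$N_{\text{integrals}} \sim N^{4} \;\Rightarrow\; \frac{(2N)^{4}}{N^{4}} = 16.$$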

Remarkably enough, the saddle splay did not go away. In fact, 6-31G** increased the twisting! It got more splayed by a few degrees. This geometry appears not to be an artifact; I switched from the internal coordinates I was using during optimization to an external cartesian system, recreated the hessian and tried to repeat the optimization on the converged geometry, and it refused to optimize any further. I was worried about the internal coordinates because GAMESS kept fretting that they were too interdependent, but switching systems didn't obliterate the stationary point, suggesting it is common across possible coordinate systems.

On the other hand, the more detailed basis set only gives more accurate values. Qualitatively, it still very much resembles the previous work, even the poorly converged work above. Here is the cluster of HOMO orbitals, which is simply a retread of the image shown above but with the bond lengths and angles a little more accurate.

[Figure: G-quartet, degenerate HOMO orbitals.]

You can still see the electron wavelengths as they circulate around the plane. You could actually calculate the approximate momentum from that: the de Broglie wavelength is visually obvious in there.

I've still got so much to learn. I'm looking at other, more recent basis sets. It may well be that someone has invented an effective core potential basis set which captures the diffuse region better than 6-31G** and manages to thin out the number of functions lost in the core in order to cut the calculation time. 6-31G** is a bear on something like a G-quartet using only a laptop computer.

Quantum Physicist Self Evaluation

(This post was generated in response to a Quantum University Crank over the last month and tacked at the end of my Quantum University post. I left it there for the Quantum U jackasses to come and gawk and pretend their pseudoscience will give them some supernatural powers, but I also post it here because I wish for it to stand on its own.)

The very notion of Quantum University sets my heart on fire. I want to take away that funhouse mirror they use to admire themselves and put them in front of a real mirror so that they understand why people with actual comprehension laugh at them (or should be given the opportunity to laugh and point and maybe throw some rotten cabbage).

Still, the reality is that you can't fix a believer. The one great problem with cranks of this sort is that a lot of them genuinely believe they're onto something. It never quite occurs to them that basically nothing they ever do achieves anything, and that any achievements they come across come only from fellow travelers who also believe. A believer can only butt their uncomprehending head against the granite block that is reality and stop to wonder why there's blood. They do not actually achieve, ever; they waste time running in circles, doing everything they can to collect testimonials from dupes to mark their "achievements." Oh, and utter curses about the vast conspiracies being leveled to keep what they believe down. Still, if they can get people to believe them, they can manage one achievement that is meaningful in society: they can make money.

The fellow in the comments honestly believes that there’s a “brand” of quantum physics out there that doesn’t require you to know how to use calculus.

The profession of physics has a very distinct and simple structure. The entire purpose of a physicist is to translate a series of real world observations into numerical representations and then fit those values onto mathematical formulae. If the fitting is sufficiently good, the process can be reversed: the mathematical formulae discovered in the fitting process can be used to predict what real world conditions are required to reproduce certain observational outcomes. Note, this is flat-out crystal ball stuff; physicists predict what will happen observationally if conditions for a given formula are met and to what precision that outcome can be expected. I’m not saying “some brand of physicist” or “sometimes this is one thing we do”… this is what physicists do, end of story. If you cannot carry out this function, you are by definition not a physicist. Physics is completely inseparable from the math, so much so that the profession is divided down the middle into two classes: the people who wrangle the math, called the “Theorists,” and the people who wrangle the observations to plug into the math, called the “Experimentalists.” Theorists and Experimentalists work together to get physics to operate.

Any jackass bleating, "Well, you don't know the Real(tm) science because you haven't gotten around your evil, malicious logical right brain and circumvented the math to find the Real Reality," has essentially shoved his own hand down the garbage disposal. By dumping the math, that person has admitted to not being a physicist –despite his/her claims to the contrary. Without math, there is no physics. Period. End of story. This is totally non-negotiable. You cannot redefine reality and expect the rest of the universe to suddenly adhere to your declaration.

Since understanding this subtlety is a real challenge for those of Quantum University, I’m going to make an example here of just what it is that a physicist does and why physicists are deserving of the street cred that they’ve earned. These Quantum U jackasses crave the legitimacy of that word: “Physicist.” There is no other reason why anyone would accuse an actual physicist of being uncomprehending of the nature of physics. From what I intend to add here, anybody reading this blog post will be able to make an assessment of themselves as to whether they could ever be qualified to call themselves a physicist.

Quantum Physicist Self-Evaluation Quiz:

What I’m going to add is a quiz containing a series of questions that a genuine quantum physicist would have no difficulty at least attempting to answer –some will be very easy, but some may require more than transient thought. If you have any hope of completing it, you will have to do some math. I will write the problems in order of increasing difficulty, then detail what each problem gives to the overall puzzle of exploring quantum physics and try to add a real life outcome from the given type of calculation to show why physicists have credibility in society in the first place. Credibility is the point here; this is why Quantum U craves the word “Physicist” and is willing to rewrite reality for it. My point is that if you jettison the part of physics that allows it to attain credibility, you lose the right to claim credibility by association.

Problem 1) You suspend a 5 kg bowling ball on a 2 meter cable from the ceiling. With the cable taut, you pull the ball aside until the cable is at an angle 30 degrees from vertical. You release the ball and allow it to swing. What is the maximum speed of the ball as it swings and where is it achieved?

Why does this matter to Quantum Physics? This is a very basic classical physics problem that would be encountered midway through your first semester in introductory physics. The Quantum U jackass would immediately scoff, “Well, this is classical, quantum allows us to escape that!” Well, no, actually it doesn’t. This problem is the root from which quantum physics grows. This is one of the simplest Conservation of Energy problems imaginable and the layout of the calculation sets the root of Hamiltonian formalism, meaning that it is almost exactly the same as the layout of the time-independent Schrodinger equation. If you lose the Schrodinger equation, you’d better have something impressive ready to replace it because you can’t do quantum without this.

Why is this important to Physicist cred? Most introductory physics does not seem like it should be all that important. If you can solve this problem, does it mean you can load heavy things into your car without straining your back? Maybe, maybe not. This problem is important to society because it involves exchange of potential and kinetic energy in a conservative situation. With a tiny bit of tweaking, this particular problem can be rewritten to estimate how much hydroelectric power can be generated from a particular design of hydroelectric dam. What? You mean to say physics has real world implications? That sound you just heard was me driving a nail into the third eye of a quantum U jackass.
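For anyone checking their own work, a minimal worked sketch (taking g ≈ 9.8 m/s²; the mass cancels out): the ball drops through a height h = L(1 − cos θ), so conservation of energy gives

$$v_{\max} = \sqrt{2gL(1-\cos\theta)} = \sqrt{2(9.8)(2)(1-\cos 30^{\circ})} \approx 2.3\ \text{m/s},$$

achieved at the bottom of the swing, where all of the potential energy gained by pulling the ball aside has become kinetic energy.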

Problem 2)

[Figure: LC circuit]

In this picture is an electronic circuit. I’ve labeled all the components. The switch connects the unlabeled wire to either wire 1) or wire 2). It starts connected at position 1). What happens when you turn it to position 2)… in other words, what’s the time varying behavior after the connection is closed? That’s the easy part of the question; to be a physicist, you have to answer this: what values of ‘L’ and ‘C’ could you pick to get a period of 2 milliseconds?

Why does this matter to Quantum Physics? I debated for a long time what sort of basic electromagnetism problems to add. I thought originally to keep it to one, but I decided instead on two because you really can never get away from electromagnetism while you’re doing quantum physics. There are four known fundamental forces and this is one; electromagnetism crops up in everything. This particular problem involves an oscillator and is therefore a forerunner to wave behavior. If you can’t do oscillators, you can’t do probability waves. If you know a thing about physics, this problem is actually extremely easy and is typically encountered in second semester basic electromagnetism and in whatever electronics classes you’re forced to take. The chemists, who do quantum physics of one sort, may have some difficulty with this problem, but the physicists really shouldn’t. If you call yourself a physicist, this should be as easy as wiping your ass.

Why is this important to Physicist cred? You have an evolved, heavily engineered offspring of this little doodad in every connected device carried on your person at this moment. The oscillators have all changed faces and the components to achieve them are probably almost unrecognizable at this point, but the physics is not. The thing in the picture above could be converted into the tank circuit of a radio. This was a gift to the 20th century by the hard work of 19th century physicists. Radio, electric power and the associated ability to instantaneously communicate long distances has built our world. If you stop to realize that William Thomson, the Lord Kelvin, made a mint off laying a telegraph cable across the Atlantic to connect England and North America for communications purposes, you will understand the power that all the offshoots of this technology had. The circuit above is two-fold; it relies on the electric conduction physics upon which Thomson’s telegraph infrastructure depended and also could be used to facilitate the generation of electromagnetic waves that could be transmitted through the air, as performed by Marconi (and Tesla… the real one, not the car maker). If you know what you’re doing, you can turn this device into a small EMP generator… you’re welcome. (As an aside, I always feel a little sorry for William Thomson: modern people mostly only ever call him Lord Kelvin and forget his actual name… the title of Lord Kelvin was created for him because of his success as a physicist, and so, his success deprived history of his actual name!)
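A sketch of the quantitative half, assuming position 1 is what charged the capacitor (component values are not unique; any pair with the right product works):

$$T = 2\pi\sqrt{LC} = 2\ \text{ms} \;\Rightarrow\; LC = \left(\frac{T}{2\pi}\right)^{2} \approx 1.0\times10^{-7}\ \text{s}^{2}, \qquad \text{e.g. } L = 0.1\ \text{H},\ C \approx 1.0\ \mu\text{F}.$$

Qualitatively, when the switch moves to position 2, the charged capacitor rings against the inductor and the current oscillates sinusoidally with this period, damped by whatever resistance the real components have.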

Problem 3) You're stranded on a deserted island. You go and hunt for food along the flood plain around the island when the tide comes in. You see a fish swimming along the sand beneath the flat surface of unperturbed water, by eye 60 degrees below the horizon line of the ocean. You stand 1.8 m tall and you have a 1.5 m spear. Measuring with your spear, you know the depth of the water is 2/3 the length of your spear. The index of refraction of water is about 1.333. You have a calculator for a brain. If you thrust the spear from your shoulder, what angle must you launch it at in order to hit the fish?

Why does this matter to Quantum Physics? Good question. This is the second EM question that I will add and it’s added because it deals directly with the physics of light. Snell’s law is a product of electromagnetism and it emerges from applying Maxwell’s equations to a boundary situation much like I’ve detailed in the problem above. Index of refraction is a direct ratio of speed of light in a vacuum over speed of that same light in a substance (like water). The phenomenon of light bending its path as it passes through the boundary between two translucent substances is a direct consequence of the wave-like properties of light. I have no doubt that the Quantum U jackasses love waves and vibrations. Can they handle this one? As I chose to add a problem about electromagnetic force, I needed to add a problem about the basics of light, which is directly connected to the EM force. Light is very pivotal to Quantum physics because most every observation people ever make involves some measurement of light.

Why is this important to Physicist cred? The lens maker equation is expanded from this foundation. Without this, there would be no glasses, no contact lenses and no corrective laser eye surgery. The work of physicists actually corrects vision in the two eyes that matter.
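Not a full solution (the throw geometry is left as the exercise), but a sketch of the refraction step that everything else hangs on: the line of sight dips 60 degrees below the horizon, so it meets the water at 30 degrees from the vertical (the surface normal), and Snell's law gives the true underwater angle:

$$\sin\theta_{\text{air}} = n\,\sin\theta_{\text{water}} \;\Rightarrow\; \sin\theta_{\text{water}} = \frac{\sin 30^{\circ}}{1.333} \approx 0.375, \qquad \theta_{\text{water}} \approx 22^{\circ}.$$

The ray bends toward the vertical underwater, so the fish sits closer to the entry point than its image suggests; the rest is trigonometry with the 1.8 m eye height and the 1.0 m water depth (two-thirds of the 1.5 m spear).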

Problem 4) The mean lifetime of a muon is 2.2 microseconds. If it's a cosmic ray traveling at 99.999% of the speed of light, on average how long does that muon appear to last if you happen to see it fly by while you're standing on Earth?

Why does this matter to Quantum Physics? This is a token special relativity problem. A large portion of Quantum physics does not require relativity, but an equal amount does. As such, you can't get away from relativity. You need to know at least some to be a quantum physicist. Quantum U jackasses clearly want to marginalize all those "particles and math-ematical equations" and beg that something exists beyond that, never mind that by removing the math, they have zero chance of ever defining what… I say fine, remove what you like, I'll steamroll you flat anyway. I could as easily have said "You will live 79 years and 10 seconds, how long does that appear to be to somebody watching you run past at human foot speed for your entire life?" The relativity will probably say 79 years and 10.000001 seconds or something (I didn't calculate it), but at least this is better than begging the limits of human potential and claiming the person ran by at 99.999% the speed of light. Somebody has to be realistic about human potential. Relativity is pretty important because it's the first time humans changed Newtonian physics. That precedent is important to understand in light of quantum physics (which was about the third time humans changed Newtonian physics, General Relativity being the second). Quantum physics didn't emerge by immaculate conception… there was a huge background of math that led to it. Discard it at your risk.

Why is this important to Physicist cred? Congratulations, you can now perform one of the clock calculations needed to make the Global Positioning System (GPS) work. You’re welcome; physicists just saved you from getting lost… again. Note, we’re also responsible for the military ability to drop a bomb down your chimney from a flying aircraft. I’d love to see you astral project out of that.
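For anyone checking their work, the core of the calculation (numbers rounded):

$$\gamma = \frac{1}{\sqrt{1-\beta^{2}}}\bigg|_{\beta=0.99999} \approx 224, \qquad \tau_{\text{lab}} = \gamma\,\tau_{0} \approx 224 \times 2.2\ \mu\text{s} \approx 0.49\ \text{ms}.$$

The muon appears to last over two hundred times longer than its rest-frame lifetime, which is exactly why cosmic ray muons survive the trip down through the atmosphere.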

Problem 5) What do the ‘A’ and ‘B’ constants refer to in Einstein’s stimulated emission equations?

Problem context: To detail the situation for the mathematically illiterate, who are nonetheless following along because they are genuinely interested, Einstein's set-up is a Bohr atom… a nucleus with electrons orbiting it at levels; he postulated that a passing electromagnetic wave causes a lower energy electron to hop up to a higher energy level orbit if the wave matches the energy difference between the two levels (absorbing a photon). The electron in this now excited state can either spontaneously hop back down to the lower state, giving off a photon, or it can be 'jarred' into giving off the photon and hopping back down to the lower state by being subjected to an electromagnetic wave that happens to match the energy difference between the two states –called stimulated emission.

Why does this matter to Quantum Physics? Einstein's work on stimulated emission occurred in 1917, in the framework of what's called the "Old Quantum." This is my first genuine quantum physics question for you. Oh goody, right? Tired of the equations yet? Sorry, but if you can't handle equations, you're not a physicist. This work is the forerunner of the Fermi Golden Rule. I'm skipping most of the rest of the Old Quantum because it was still too incomplete.

Why is this important to Physicist cred? Without us, no lasers bio-tch! And, in the interest of full disclosure, the laser is one example of short-sightedness in physicists. Einstein had this realization in 1917, but failed to see the significance himself. Physicists then hurried on and found their focus on other shiny things while nobody thought more carefully about it. It took some 40 years until Maiman, Gould, Townes and Schawlow (physicists whose names you may not know, though Maiman was also an engineer) had the critical insights to finally make it work. I ended up including this problem on a lark mainly because it also helps to put guided missiles through windows militarily. Gotta put the p’chank of fear into somebody’s chakra. How many CD players do you suppose were built because of us?
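For orientation, a sketch of the standard two-level rate equation in which the A and B coefficients live (N1 and N2 are the level populations and ρ(ν) is the spectral energy density of the radiation):

$$\frac{dN_{2}}{dt} = -A_{21}N_{2} - B_{21}\,\rho(\nu)\,N_{2} + B_{12}\,\rho(\nu)\,N_{1}.$$

A21 governs spontaneous emission, while the two B coefficients govern stimulated emission and absorption; the laser lives in that middle term.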

Problem 6) A drunken hobo, who weighs 70 kg including his tattered blanket and a full bottle of peach schnapps, shambles along at about 0.5 m/s. If he were to stumble through a two-slit apparatus, how far apart would the slits need to be spaced for him to exhibit quantum mechanical interference? Can this setup be built?

Why does this matter to Quantum Physics? This question involves the de Broglie equation, the beating heart of modern quantum physics. This equation is one of several reasons why Quantum University craves the word "Quantum." For those less versed, the de Broglie relation is the first equation written that explores the 'wave'-ness of physical objects and is the source of particle-wave duality in matter waves. With the way that most quantum mechanical wave equations are written, the de Broglie relation is always hidden somewhere inside the argument (particularly in time-independent cases). In essence, because they do no math, Quantum U gets it wrong: they fail to include Planck's constant. Ask yourself what came first, an "institution" calling itself "Quantum U" or Planck's constant?

Why is this important to Physicist cred? Do I really need to say it?
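Fine, a sketch anyway: the de Broglie wavelength of the hobo is

$$\lambda = \frac{h}{mv} = \frac{6.63\times10^{-34}\ \text{J·s}}{(70\ \text{kg})(0.5\ \text{m/s})} \approx 1.9\times10^{-35}\ \text{m},$$

so the slits would need spacing on roughly that scale: some twenty-five orders of magnitude smaller than an atom, and in the neighborhood of the Planck length. No, it cannot be built.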

Problem 7) The hobo from the previous problem shambles along for a moment, then stumbles to a stop. He stands there wavering about, struggling to keep his balance, foot speed now reduced to 0 m/s. Because of the alcohol induced gaps in his memory, he may certainly think that this happens, but why doesn’t he ever just suddenly *pop* into existence in front of the hardware store or soup kitchen? Careful examination of the previous problem would suggest that if he stops moving, maybe he can!

Why does this matter to Quantum Physics? Are you kidding? This is the weird-ass core of quantum physics! I never did claim that weird stuff doesn’t happen. What I claimed was that there are specific expectations for how the weirdness can emerge. What is written in this problem should be analyzed with the Heisenberg Uncertainty Principle. The cranks typically use the Uncertainty principle as a get out of jail free card, “Well, there is uncertainty, so anything is possible, right?” The actuality is that the Uncertainty Principle acts like a governor, telling how much weird is possible depending on the set-up of a given situation. How exactly stopped must the bum be for his position to grow so uncertain that he can teleport around town? Note, the argument here would actually also work if he’s still walking, despite the hole in de Broglie’s relation, but his speed must be very perfectly uniform… the uncertainty of his momentum must be nil.

Why is this important to Physicist cred? This stuff is one of the fundamental reasons why quantum U jackasses covet the word “physicist.” Did the uncertainty principle come first, or the slack-jaws desperate to misunderstand it in order to promote their woo?
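A scale estimate (the residual speed uncertainty of 1 mm/s is my own illustrative number, not part of the problem): the Heisenberg relation Δx Δp ≥ ħ/2 gives

$$\Delta x \gtrsim \frac{\hbar}{2m\,\Delta v} = \frac{1.05\times10^{-34}}{2(70)(10^{-3})} \approx 7.5\times10^{-34}\ \text{m}.$$

Even a hobo "stopped" to within a millimeter per second has a quantum positional fuzziness about eighteen orders of magnitude smaller than a proton. The soup kitchen is safe.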

Problem 8) A lightning bolt strikes for about 30 microseconds, creating a radio frequency EMP. What is the frequency spread of the interference it causes in radio/microwave transmissions occurring around it?

Why does this matter to Quantum Physics? This is a second application of the Uncertainty Principle. In this form, it addresses a different pair of uncertainties, but it’s the same principle. I’ve included this problem to show the stark quantitative nature of the equation. There is nothing at all qualitative or indecisive about the Heisenberg Principle. It says something extremely specific and if you lose the math, it becomes a lie, period.

Why is this important to Physicist cred? We invented the Uncertainty Principle and we damn well have a say in how it works.
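A sketch of the estimate (the exact prefactor depends on the pulse shape and on which convention you carry through):

$$\Delta\nu \sim \frac{1}{\Delta t} = \frac{1}{30\ \mu\text{s}} \approx 33\ \text{kHz}, \qquad \text{or} \quad \Delta\nu \geq \frac{1}{4\pi\,\Delta t} \approx 2.7\ \text{kHz} \ \text{ from } \Delta E\,\Delta t \geq \hbar/2.$$

Either way, the 30 microsecond strike smears into a kilohertz-scale band of interference around any carrier it couples into.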

Problem 9) A tiny, effectively featureless quantum mechanical tiger of mass ‘m’ is caught in a prison of only one dimension. He runs back and forth trying to get out, but the walls on either end are infinitely high. The prison is large compared to the actual physical shape of the tiger and this tiger lives by feeding on heat energy. Further, the prison is sized so that it’s on about the same size-scale as the tiger’s de Broglie wavelength for the low temperature where this tiger lives –and in fact keeps the tiger alive under those circumstances where he’s starving. The zoo keeper must fire photons into the tiger’s cell one at a time to try to hit the tiger and see where he is. The frequency of the photon is very high and the zoo keeper can tell exactly where the photon went in and will be able to tell exactly where the photon comes back out, thus giving him an accurate understanding of the location of whatever the photon bounces off of. The photon will interact elastically with the tiger and the interaction is independent of the photon’s frequency. If the tiger has been allowed to starve and has the smallest energy a tiger of this impossible sort can, what is the probability of finding him at any particular place in this prison with a photon? After you hit him that first time with a photon, finding exactly where he is, how many of the prison’s eigenstates are needed to describe his location thereafter?

Why does this matter to Quantum Physics? This is the most basic Schrodinger equation problem, the particle-in-the-box. You should substitute 'electron' for 'tiger' in the interests of reality, but I can choose how I write the problem. A part of why I wrote this problem the way I did is to give a little bit of a feeling for what the quantum mechanics is like and how it works. In this kind of problem, you are outside the system looking in and the system is completely dark; you cannot see what's going on. You could be a zoo keeper facing an angry tiger in a sealed crate; your only way to find this tiger is to shove a prod through a breathing hole and see if you bump something. If he's sleeping, you may discover a mass distributed somewhere in the middle of the crate. If he's lunging back and forth, the prod may bounce off of something now and then, but it appears as if the tiger is distributed everywhere in the box. I added an embellishment too. In my version of the problem, I've included a prepared state and then a state collapse: I would recommend asking yourself what the difference is between the Hilbert space associated with the photon probe (designed around a position space representation) and the Hilbert space of the box (which would be the eigenspace solving the Hamiltonian of the tiger trapped in the box).

Why is this important to Physicist cred? The particle-in-the-box problem has actual physical applications. The 1D version can be used to approximate the absorbance spectra of aliphatic molecules containing stretches of conjugated bonds. A 3D version of this problem can be invoked to describe the light absorption characteristics of quantum dots. Ever seen one of those beautiful Samsung quantum dot TVs? You're welcome.
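For reference, a sketch of the standard one-dimensional box machinery the problem is built on (L is the width of the prison; use the tiger's mass for m):

$$\psi_{n}(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right), \qquad E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}},$$

so the starving tiger's whereabouts follow |ψ1(x)|², peaked at the center of the prison. After the photon pins him to one spot, the collapsed, delta-like state requires, in principle, the entire infinite tower of box eigenstates to describe.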

Problem 10) Suppose you did hit that tiger in the previous problem with a photon, momentarily finding his exact location in the box. What happens to the probability of finding him again at that location over time afterward?

Why does this matter to Quantum Physics? This is a time-dependent Schrodinger equation question. If you can’t understand why this is important to quantum mechanics, I feel truly sorry for you.

Why is this important to Physicist cred? The sort of logic in this problem is used in pump-probe experiments to see how excited states evolve, for instance. This is a real life example of Deepak Chopra’s “ceaselessly flowing quantum soup,” and I mean it in the sense that this is how it would actually be employed in reality by physicists that actually do quantum physics. In one sense only, Chopra is not wrong: the physics can be weird. But, for it to work in weird ways, you must match the circumstances where the effect is seen… the confinement must be on the size-scale of the matter wave. When you fail to invoke the appropriate scale, involving Planck’s constant and the size of the confinement relative to the size of matter wave of the object being considered, is where it becomes a lie. That’s why math is needed… it saves the reality from flowing over into becoming a lie.
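As a sketch of the machinery: expand the collapsed state over the box eigenstates and let each term's phase rotate,

$$\Psi(x,t) = \sum_{n} c_{n}\,\psi_{n}(x)\,e^{-iE_{n}t/\hbar},$$

so the cross terms in |Ψ|² oscillate at frequencies (E_n − E_m)/ħ: the probability spike at the measured location immediately decays and sloshes around the box (and, for the ideal box, periodically revives, since all the level spacings are commensurate).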

Problem 11) In order to make a point about the nature of quantum mechanical tunneling, a physics professor lecturing a group of graduate students turns and runs across the classroom and crashes face-first into a wall. He has just insisted that one day he knows he’ll tunnel through and reach the other side. For a 0.25 m thick wall and a 70 kg physics professor, estimate the ratio of probability amplitude for the professor’s wave function on either side of the wall (or better, estimate the probability flux). Assume that the actual potential of the wall is constant over its width and can be approximated from the knowledge that the wall is just a little stronger than the normal force required to decelerate a 70 kg physics professor from human foot speed to stopped in a tenth of a second over the space of a hundredth of an inch. How many times would the professor need to try this experiment in order to achieve his dream of tunneling through?

Why does this matter to Quantum Physics? Quantum mechanical tunneling is a real thing. This is the effect where a physical object pops through a barrier, unimpeded. Think Kitty Pryde. To perform this, you need to do the particle in the box problem, but backward (a real physicist will understand my recommendation). This is prime weirdness, exactly why the cranks love quantum. I would recommend trying the same problem with an object the mass of an electron where the thickness of the barrier in question is about the same as the object’s de Broglie wavelength. This problem is based in part on a real-life anecdote, where the experiment in question was initiated by a real physics professor. When asked why he wouldn’t try it again since he knows that the probability is small and a large numbers of trials would improve his chances of success, he answered that the university only pays him enough to perform the experiment once a semester.

Why is this important to Physicist cred? Tunneling is responsible for radioactive decay –indeed, we just gave you nuclear power. Also, some of the best microscopes ever built, scanning tunneling microscopes (STM), rely on this physics.
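A scale sketch, taking the barrier height to be of order a few hundred joules (my own number, chosen only to set the scale):

$$T \sim e^{-2\kappa a}, \qquad \kappa = \frac{\sqrt{2m(V_{0}-E)}}{\hbar} \sim \frac{\sqrt{2(70)(10^{2})}}{10^{-34}} \sim 10^{36}\ \text{m}^{-1},$$

so with a = 0.25 m the exponent is of order 10³⁵. The professor would need something like e^(10³⁵) attempts; the university's once-a-semester budget is, frankly, adequate.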

Problem 12) You have a cubic (or rhombohedral) crystal of Ammonium Dihydrogen Phosphate whose optic axis is 52.4 degrees from normal to the crystal faces. You shine a 325 nm He-Cd laser through this crystal at some known angle to the optic axis. If the laser output is reduced so that you're at the shot noise limit, hitting the crystal with one photon at a time, every so often you see two photons coming out of the crystal. Many measurements show that the output photons lie in the same plane as the input photon, that both out-bound photons possess ordinary polarization and the same wavelength as each other, and that they depart from the crystal along beam paths on the surface of a cone away from the incident beam –in other words, they leave at the same angle in opposite directions. Why are these new photons produced and what's special about them? Suppose I tell you the output angle is 50 mradians; use physics to tell me the wavelengths of the output photons. Supposing the two photons are detected by detectors positioned equal distances from the crystal, what's the time delay between detections?

Why does this matter to Quantum Physics? I spent some significant time thinking about this problem –this addresses a piece of quantum physics badly abused by everyone and their brother, but most intensely by the cranks. What’s written above is in basic structure an actual experiment dating from 1970. I avoided writing about this experiment in the typical pop-culture manner so that you can see what the reality actually looks like. I won’t name the quantum mechanical phenomenon that this demonstrates, but I will refer you to a paper by Einstein, Podolsky and Rosen from 1935. I’m hoping that it looks superficially boring because people want to see something really crazy here without thinking about what they’re actually seeing.

Why is this important to Physicist cred? I won’t be snarky this time. I want people to genuinely think about what’s written here for themselves. Preferably, you read the papers and really try to process it. Can you separate even the initial idea from the math? Believe me, it’s there in all its blazing, bizarre glory. What’s the point of this observation? Asking this question is the core of an education that is devoid of indoctrination. Don’t take my word for it, damn well do the work for yourself!
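Without naming the phenomenon (the post deliberately doesn't), the wavelength piece follows from energy conservation alone. For the degenerate case described, where both output photons share one wavelength:

$$\frac{hc}{\lambda_{p}} = \frac{hc}{\lambda_{1}} + \frac{hc}{\lambda_{2}}, \qquad \lambda_{1}=\lambda_{2} \;\Rightarrow\; \lambda_{1,2} = 2\lambda_{p} = 650\ \text{nm}.$$

And since the pair is born in a single event and the detectors sit at equal distances, the delay between detections is zero to within the timing resolution of the apparatus.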

Problem 13) You have a proton and an electron interacting by electromagnetic force. Find the eigenstates of the electron. Impress me by finding the unbound eigenstates of the electron (for electron energies greater than zero).

Why does this matter to Quantum Physics? This is at its heart a very basic problem that every physicist sees. If you haven’t seen it and you’re calling yourself a quantum physicist, you’re not from a place where they teach quantum physics and, no, you are not a quantum physicist. Tired of the math yet? Sorry, but you can’t be a physicist if you’re afraid of math. In all honesty, I’ve met physicists who claim to be afraid of the math, but these are people who do derivatives as well as they breathe and then get scared of what mathematicians do.

Why is this important to Physicist cred? The periodic table of the elements is largely understood based on the bound states found in this problem. The unbound states are important for understanding how atoms collide in a low energy, non-relativistic collider. We'll get to the relativistic ones soon enough…
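For reference, the bound-state energies that fall out of this problem (the E > 0 continuum states are the Coulomb scattering functions, which take considerably more work):

$$E_{n} = -\frac{m_{e}e^{4}}{2(4\pi\epsilon_{0})^{2}\hbar^{2}n^{2}} = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots$$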

Problem 14) You have a 4 Tesla magnet. You stick your hand into the bore and somebody across the room fires up a computer program to shoot radio waves into the cavity of the magnet. What frequency and pulse duration must you fire into your arm in order to set your protons to clamoring most noisily? To what radio frequency must you listen to pick up that clamor? Should the input be polarized? Are you able to feel or hear this clamor? Why or why not?

Why does this matter to Quantum Physics? If you read my blog, you know that this problem can be approached in part classically. If you want to impress me as a quantum physicist, I expect the quantum version. This problem involves spin.

Why is this important to Physicist cred? This problem is about MRI. Yes, we’re responsible for MRI too. If I microwave somebody’s chi long enough, does a mystical turkey timer pop out to tell me it’s metaphysically done? I suggest we do an experiment and see; we can jam the safety on the door of a microwave oven and stick somebody’s face in there… any takers? (Oh, right, physicists also gave us microwave ovens and invented the safety screen in the window. Was it a mechanical engineer who suggested the door latch with the safety interlink? Actually, that was probably us too; we’ve been shooting holes through our own heads at particle accelerators for years.)
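The frequency piece, at least, is quick. Using the proton gyromagnetic ratio, γ/2π ≈ 42.58 MHz/T:

$$\nu = \frac{\gamma B}{2\pi} \approx 42.58\ \text{MHz/T} \times 4\ \text{T} \approx 170\ \text{MHz},$$

which is also the frequency you listen on; for a resonant pulse, the duration sets the tip angle of the spins. And no, nothing in your arm can follow a 170 MHz signal well enough to feel it.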

Problem 15) If I say a certain perturbative interaction involves spin-orbit coupling, write the term which would go into the Hamiltonian. From the symmetry of the term, are there any forbidden matrix elements? Use the eigenstates found in problem 13 to calculate the first order perturbation between the ground state and the first excited state.

Why does this matter to Quantum Physics? I am gradually turning up the heat here. State of the art modern quantum physics is still way up somewhere ahead. This problem is about a component of Fine structure.

Why is this important to Physicist cred? Fine structure and Hyperfine structure are basics necessary to explain spectroscopy. This tool is one of many that people use to engineer materials, from medications to coatings for prescription glasses to the plastics used to build the chair you're sitting in. Spectroscopy is how we know about the atmospheres of planets orbiting nearby stars (yes, this is a measurement that has been made in a few cases).

Problem 16) A certain transition involves the quadrupole moment operator. Determine selection rules for the operator and estimate the transition rate between levels connected by this operator. If you want to use the eigenstates from problem 13, go for it.

Why does this matter to Quantum Physics? I have a lot of these mechanistic problems floating around. These are middling level quantum physics. Wigner-Eckart theorem and Fermi Golden rule are both essentials; if you haven’t even heard of them, shame on you.

Why is this important to Physicist cred? These things are needed for modern laser engineering and are the product of physicists. I’m sorry, but this is what physicists do.

Problem 17) What is the set of matrices that can be used to represent the group of all proper rotations?

Why does this matter to Quantum Physics? I’ve asked a couple questions here that involve rotation in some form or another. Truth is that I just like this problem and have been thinking about adding it since I started writing these. This is hitting higher level quantum physics and it is actually peripherally a math problem rather than a physics problem.

Why is this important to Physicist cred? If you aren’t a physicist, you won’t understand why group theory is interesting. Your reaction to this problem should tell you something very strong about whether you should use the word “physicist” to describe yourself or not. Sorry, I can’t change the reality of what we are.
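For the record, a sketch of the defining answer (higher-dimensional representations exist as well, labeled by angular momentum):

$$SO(3) = \left\{\, R \in \mathbb{R}^{3\times 3} \;:\; R^{\mathsf{T}}R = I,\ \det R = +1 \,\right\}.$$

Orthogonality preserves lengths and angles; the unit determinant throws out the reflections, leaving only the proper rotations.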

Problem 18) Write the character table for translational symmetry (Correction: discrete translations on a 1D lattice). Propose a viable candidate for the 1D representation and explain the associated eigenstates.

Why does this matter to Quantum Physics? Can’t mention group symmetry without spending a moment talking about Bloch theory. This is like taking the particle-in-a-box problem and putting it between mirrors.

Why is this important to Physicist cred? If you truly understand this, you can go tell Intel how to dope their semi-conductors. Yes, I just gave you microchips; without us, you wouldn’t be poring over this screed on your smartphone.
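As a hint at the structure (a sketch, not the full worked answer): discrete translations on a 1D lattice form an abelian group, so every irreducible representation is one-dimensional and labeled by a wavenumber k. The character for a translation by n lattice spacings a is

χ_k(n) = e^(ikna),

and the associated eigenstates are the Bloch states, which pick up only a phase under translation:

ψ_k(x + a) = e^(ika) ψ_k(x).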

Problem 19) Is a Cooper pair a Majorana fermion? How is the Fermi temperature associated with the disappearance of electrical resistivity in a cold solid?

Why does this matter to Quantum Physics? Can’t mention semi-conductors without going whole hog and mentioning superconductors. Majorana fermions are a concept that is still argued in many domains of quantum physics. This question is actually fairly qualitative… if you want to go the physicist route, I would suggest using the eigenstates from the particle-in-a-box problem and describing a fermion next to a boson. If you really want to impress me, pull a page out of a Feynman book and derive the partition function for fermions.

Why is this important to Physicist cred? Remember that 4 Tesla magnet in problem 14? You probably can’t build that without the superconductivity mentioned here (full disclosure, we can do rare earth magnets at that strength too, but again, real physicists are responsible for those). Maybe someday superconductors will give us floating trains.

Problem 20) Use the Roothaan equations to do a restricted self-consistent field calculation to determine the ground state energy of propane.

Why does this matter to Quantum Physics? I’ve recently done a version of this problem from scratch on my own time and I couldn’t rightly produce this quiz without adding it. This is starting to push against the limits of quantum physics. This problem matters because it is one of the few ways that we can determine wave functions of real systems more complicated than problem 13. If you legitimately try to do this problem from scratch by hand, you will discover that it is one of the most frighteningly difficult things you’ve ever done. As a supplement to this problem, when do relativistic corrections become necessary? What’s the Hartree-Fock limit and what do people do to try to get around it?

Why is this important to Physicist cred? This is one of the chief tools by which we understand the structure of atoms heavier than hydrogen. A Nobel prize was awarded for work automating the solution to this problem. For full disclosure, this prize was awarded in chemistry, but keep in mind that it is pure physics in the sense that modern chemistry is almost totally dependent on quantum physics. The automation for solving this problem is broadly disseminated in the hands of normal chemists so that they can design molecules without having to trudge through the nightmare of this physics problem for themselves.

Problem 21) Why does the Klein-Gordon equation imply antimatter?

Problem context: Buckle up sports fans, the ride gets bumpy from here. For mathematical context, the Klein-Gordon equation is a low-level relativistic analog to the Schrodinger equation.

Why does this matter to Quantum Physics? Schrodinger’s equation is actually a manifestly classical construction. I’m sure this probably throws a wrench at the Quantum U worldview with me just somehow colliding the words “classical” and “quantum,” since Schrodinger’s equation is fundamentally the backbone of quantum physics as far as most people understand it. But, it’s actually true; Schrodinger’s equation has a pseudoclassical limit in that it assumes that information travels between particles without a speed limit –you derive Schrodinger’s equation by putting non-commuting operators into the equation I initially introduced you to all the way back in problem 1. Klein-Gordon is derived the same way, but from putting the non-commuting operators into the relativistic energy-momentum relation. In this sense, Schrodinger’s equation is a form of classical (in the sense of being non-relativistic) physics.

One upshot from this is that you must be very careful about claims of simultaneity that hinge on non-relativistic quantum physics; like, say, collapse of entanglement (you cannot tell the other guy that he should look at his particle, or what you saw when you looked at your particle, at faster than the speed of light).

Klein-Gordon implies antimatter, but this is actually understood in retrospect; Paul Dirac (another luminary you may not have heard of) suggested it from the Dirac equation, which is a fermionic analog to the bosonic Klein-Gordon. Disturbed by all this reference to math? Don’t be; this is what physicists do… they look at math used to represent reality and then make claims about reality based on that math. For the tenth time, a “physicist” who does no math is not a physicist.
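To make the operator-substitution step above concrete (the standard derivation, sketched): take E² = p²c² + m²c⁴ and insert the operators E → iħ ∂/∂t and p → −iħ∇, acting on a wave function ψ. You get

−ħ² ∂²ψ/∂t² = −ħ²c² ∇²ψ + m²c⁴ ψ,

which is the Klein-Gordon equation; the same substitution into E = p²/2m + V gives you Schrodinger’s equation.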

Why is this important to Physicist cred? Physicists suggested antimatter to the world. Antimatter isn’t exactly sitting on every table or in every gas tank, but it does have at least one practical application. Have you ever gone to the hospital to get a PET scan? That’s positron emission tomography, which uses antimatter to make tomographic images of the human body. What, another real medical application that actually works? Don’t believe me? That’s fine, go back to tending your cupping bruises and hope that nobody screwed up.

Problem 22) Show that non-relativistic path integral formalism is equivalent to Schrodinger’s equation.

Problem context: Yes, integrating along a path is mathematical. No, you can’t escape math if you’re in physics.

Why does this matter to Quantum Physics? Path integral formalism is a big part of everything that is used in high energy physics. Path integrals are introduced early in your quantum physics education, but they don’t become really important until gauge symmetry is introduced and you start working with functionals on fields. That’s right, no quantum field theory without path integrals. A more basic demonstration of path integral formalism is to show that in non-relativistic terms, it’s equal to the more basic Schrodinger’s equation. It’s a tricky conceptual proof that shows you really understand your quantum physics. And, no, I won’t do it for you.

Why is this important to Physicist cred? You want modern quantum physics, this is one route to it.

Problem 23) Two uncharged thin metal plates are placed in a vacuum such that they lie with surfaces parallel to one another. Explain why they spontaneously exert force on one another. How much force do they exert and how does it vary with distance between the plates?

Why does this matter to Quantum Physics? This is the set-up for Casimir effect. Welcome to the bizarre world of zero-point energy and vacuum fluctuations. Yes, this is a real thing.
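For reference, the standard idealized result for perfectly conducting plates (real materials need corrections) is an attractive pressure of

F/A = −π²ħc/(240 d⁴)

for plate separation d; the fourth-power falloff with distance is the signature of the effect.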

Why is this important to Physicist cred? Someday, maybe this will form the basis of a science fictional star drive which requires no exhaust. Until then, it’s pretty curious and kind of cool. One thing to remember about the bleeding edge of physics is that many things we learn about do not always find technological application. Sometimes, the insight which leads to an application is years away. But, it requires having the real basis and not just the ability to spout nonsense technobabble. If you can apply Casimir effect to build something useful, be my guest… my hat will be off to you if you can actually prove you’re doing it.

Problem 24)

Higgs Feynman diagrams

Taken from INSPIRE high energy physics, these are Feynman diagrams for production of the Higgs Boson. Use these diagrams to write the Lagrangian for coupling to the Higgs field.

Why does this matter to Quantum Physics? Several somebodies won a Nobel prize for this. If you don’t get why and are claiming to be a physicist, shame on you.

Why is this important to Physicist cred? Good question. You tell me. Why?

Conclusion

This quiz could go on a long way. I have to tie it off because I only know so much myself (words of wisdom: know thy limits!). There is so much real physics in quantum mechanics that specialists in the various subfields could add questions forever beyond my single class in QFT and nearly non-existent solid state. Why does renormalization work? What is a topological insulator? Why do people try to build computers using atomic spins for bits? How is it that Chinese scientists are passing undecryptable messages to themselves? Why why why? A thousand questions with a thousand real answers. Anybody wasting time pretending they are learning anything about reality at Quantum University will never be able to answer any of them. They will continue to putz around and make believe that they know more than everybody else, calling themselves physicists, even though they do no math and therefore no physics.

I know there’s a huge number of physics cranks out there and I know that attacking one or ten or fifty will probably not make a dent. I started writing this in a fit of rage, if you couldn’t tell from my response in the original version of this post. These people utterly piss me off. They think very highly of their own essentially non-existent attainment and pretty much can’t be convinced of their own self-deficit. Writing this, I feel a bit like Don Quixote, astride my horse, charging as fast as I can at that windmill blade on the downswing. It’s a fool’s errand; Quantum University still stands and it still pulls in chumps paying money, no matter what I write.

One thing that angered me most was the insinuation that people essentially throw out their ability to be creative when they strive to accomplish in real physical sciences, that somehow the “Right brain” atrophies and becomes a ghost of itself under the weight of the cold, calculating “Left brain” and that the higher reaches of my soul are made off-limits because of reductionism. This is the view of somebody who knows no scientists. The reality is very much more nuanced and complicated. I’ve met so many scientists with acid wit and magnificent creative bents that I can’t stand the thought of just lumping them all into some big pail of monolithic monotony. Science itself is an act of enormous creativity in designing and executing the right experiment. Almost none of it is reached solely on cold calculation. The people who inhabit the discipline are a broad cross-section of humanity, possessing all the faults and strengths that that implies, but also there are some amazing geniuses the like of which pretty much no other walk in life can claim. Faking genius is what Quantum U wants to package.

My rage is spent. I have little else to say right now.

(Edit 5-7-19:)

There was a discouraged comment provoked by this post that I would like to try to respond to.

The comment was that this quiz was very good, but that it showed the speaker that he/she should (essentially) leave physics to the professionals. This is a paraphrase.

Professional physicists have to deal with the feelings that have apparently been elicited by this quiz. Physics scales in difficulty to match your capacity for understanding it –it literally gets harder and harder until you can’t go any further. As a discipline, it was created by a collaborative effort among some of the smartest people who have ever lived. The physics written in books is one big act of genius, the sum total of all the eureka moments of these smartest people. It is all of their lives’ work and piercing insight at once! Nobody measures up to that. Nobody understands it all. Of physics as a whole, quantum physics is one of the hardest parts.

This is maybe one of the most difficult things that human beings have ever learned in the history of the world.

If it feels daunting to you, that’s the way the truth works. Coming to grips with that is necessary in order to move forward. Nobody understands it all. At the oceanside, it’s easy to walk on the beach and visit the shallows. But, if you swim out into it, at some point it gets deeper than you can handle. Not even Michael Phelps can swim from San Francisco to Tokyo.

There is a reward for coming to grips with that. Physics is built on the genius moments of some of the greatest geniuses that there ever were. If you study what they did and come to understand what their work actually means, you can have that spark of insight that the very best of us have had. If you want to understand what Einstein’s genius was, for instance, studying his work directly is a way to commune with him. Working really hard and finally breaking through and really seeing it is like nothing else.

Nobody gets it all, but most of us come to grips with the fact that nobody has to. See what you can see and enjoy the trip. There are gems even in the shallows.

Grade A crankery at Quantum University (part 2)

This is a repost of a section of my original quantum university post. I decided that I wanted to put it up as its own blog entry so that it would have some opportunity to be read in its own right.

A comment has led me to believe that the august body of Quantum University is stung by my opinion as a professional physicist of their validity. Can you believe it? Right here on my little ol’ insignificant blog. Great! If I’m enough to rattle them, maybe I ought to keep writing articles about them.

If you want me to respect you, next time, bring physics, not pseudo-pop psychology. There is a right way to do quantum mechanics.

I do have some other thoughts about this comment. I will quote it here in its entirety so that you can see what the thinking looks like:

As someone who appears to be heavily indoctrinated in a “material-empirical” orientation with regards to science, it would be very hard for you to appreciate the type of education fostered at a place like Quantum University.

The material-empirical science oriented individual tends to live out of touch with Reality, for this to him/her is composed only of particles and myriad physical phenomena proven by mathematical formulas.

What does any of this have to do with your Life, your Consciousness, your Relationships… your own Soul. It’s only a Grand Illusion that you’ve been unable to perceive/discern the connection between these and the brand of physics you’re pursuing.

If you’re ever fortunate enough to transition from the “ordinary mode consciousness,” dominated by obsessive “left-brained,” rational/analytical thought, and shift toward the higher, more transrational states for a break, you just may discover that indeed there is a connection to it all.

There is a lot in this little blurb. I think it may even have been written by the same fellow who wrote the “otological prison” quote I used above, though I couldn’t confirm that. He accuses me of being indoctrinated and claims that if only I escaped my rational, analytical mind, maybe I would see the truth that we all live in the matrix or some such.

What is indoctrination?

According to the dictionary, it is “the process of teaching a person or group to accept a set of beliefs uncritically.” Direct quote from Google by my lazy ass.

The critical word here is “uncritically.” What does this mean?

Uncritically: “with a lack of criticism or consideration of whether something is right or wrong.” Another direct quote from Google by my even lazier and more tired ass.

So, indoctrination is an education where the student is not critical of the content of what they’ve been taught.

You have only my word to take for it, but I’ve walked all up and down physics. I’ve read 1920s articles on quantum mechanical spin translated from the original German trying to see what claims were being made about it. I’ve read Einstein. I’ve read Feynman. I’ve read Schrodinger… the real guys, their own words. I have worked probably thousands of hours rederiving math made famous by people dead sometimes hundreds of years ago just to be certain I understood how it worked (Do you really believe the Pythagorean theorem?) I’ve marched past the appendix of the textbook and gone to the original papers when I thought the author was lying to me or leaving something important out. And yes, I’ve found a few mistakes in the primary literature by noted physicists. Does that sound uncritical to you? In the 3 years since I originally wrote the Quantum U post above, I’ve earned a genuine physicist PhD from a major accredited university.

I would turn this analysis back on the fellow in the comments: have you done this kind of due diligence on what Quantum U taught you? Did you attack them to check if they were wrong? If not, you’ve been indoctrinated. Since they are about as wrong as it’s possible to be, my guess is that no, he didn’t and he isn’t about to… he’s a believer.

The next thought about this comment which pops up is a little claim about my dim-witted nature. I am clearly without a third eye and my life is definitely in the crapper because I am not seeing that other level beyond the workaday world where I could be mystically synergizing with some deeper aspect of reality in the hands of the Real truth. My dreadful left brain is clearly overwhelming my potential as a person. Do you actually believe that you know me?

By design I don’t speak often about my personal life on this blog. Fact is I’m not an unhappy or unfulfilled person. If you take the spine of that comment, the implication that if only I had a Soul, I’d see that Quantum U would have something to give me, truth is that I can say for certain that I need nothing from them in that regard. I came to a point in my life where I don’t need the training wheels… I, as a person, am enough. That has nothing to do with my scientist education, but everything to do with my complicated path through life. That path has led me a long way and through a lot. Walk one mile in my shoes –I dare you!

Do not make assumptions about the soul of a person you know next to nothing about.

I have one piece of experience that I feel would inform a searcher who sees the allure of Quantum University and its “ability” to give students some deeper insight into consciousness, soul and self-actualization. The most difficult thing that people can ever grasp about themselves is that we are all flawed: our very capacity to interact with reality can be fundamentally confused about what’s real. Your brain, the generator of your reality, is not perfect and you can believe in a lie as if it were actually true. Did they find WMDs in Iraq?

I have to laugh at his “transrationalist” higher state of being nonsense because it seems that he’s bitten off the biggest lie imaginable. He believes that everything he thinks about the world is true! Why else would he sneer at material-empirical rationalist analytical mindsets? He wants to disconnect his mind from being connected to tangible reality… you can see that in every word he’s written, right down to the carefully chosen yet inappropriate caps.

The problem I have with that is a simple one: by decoupling your mind from everything else, you remove from yourself the ability to do an external error check based upon what is physically true in the world around you. This is pattern recognition with a broken compass. If you have no way of checking whether or not what you believe matches what is actually real, you have no way of confirming what, if anything, is false in what you see. Everybody can dream and imagine they have psychically contacted a dead relative or telepathically commanded a poodle to piss on a baby. There is no badge of honor to be gained by believing you can lay your hands on someone and heal them with the strength of your Chi because anybody can believe that. You can sit around, do deep breathing, and listen to the white noise in your own anatomy and ascribe all sorts of meanings to it. The hardest thing in the world is sorting out whether anything you imagine is actually true, particularly when you want something to be true. Your mind can dredge up some utter unreality that seems absolutely real in that instant. How can you ever be completely sure?

In my experience, the truth is true regardless of whether or not I believe in it.

That’s the thing about empirical reality. You have a chance to come back and interrogate something, or someone, external to yourself about whether or not you are seeing true things in the world around you. This is a timely subject, I think, because people have turned to filter silos –pocket realities where groups of people are telling you what you want to hear– to avoid having to do really painful self-checks. Empirical reality is imperfect because we never know everything about it, but at least it’s basically invariant and can serve as a good calibration point. That’s the thing about the truth: two contradictory things can’t be simultaneously true. Empiricism at least gives a stationary ground that every observer (literally every observer) can share. If we can all come back and agree that the sky is blue, we at least have something in common to work with, no matter what murmurings are pressing on the backs of our heads. You can’t show that “transrationalist” higher state of being is anything different from a schizophrenic fantasy because they have equal connectedness to the external world; there is no internal frame of reference by which to prove that the first isn’t actually the second. That somebody at Quantum U told you it’s so and you uncritically decided to believe them does not suddenly make it true… that’s almost like a filter bubble; you’re just using someone in particular as your authority whom you wish to believe. Never mind that the person you picked is, maliciously or not, lying their ass off to you.

I think the hardest thing in the world is facing when you’re really wrong about something you deeply want to believe. Sometimes people do get these things wrong. Are you among them? Clearly, the fellow in the comment understands that people can be wrong, or he wouldn’t accuse me of being wrong. Does he never turn his optics against himself?

Now, you may want to call me a hypocrite. Am I a believer? Surely I believe in physics, being a physicist. My answer here might surprise you. Only kind of. Quite a lot of it I don’t fully understand. I’m either agnostic or skeptical about the parts I don’t understand. And, I’ve gone to some pretty extreme ends to try to decide that I understand it well enough to believe certain things about it. This leads to two things, first, I know I don’t know everything and, second, I freely admit that I get things wrong. But that doesn’t mean that I have no idea what I’m talking about… what skill I have with Quantum Mechanics is well earned.

Let this serve as a warning: anybody else making comments about my soul or implying with heavy hand that there is a lack, I will delete what you say out of hand. That’s ad hominem, as far as I’m concerned. You don’t know me. That you make any such statement shows that you didn’t understand word one about human potential that anyone at any school tried to teach you. You have no idea who I am.

Because I made a different set of points in my immediate direct response to the original comment, here is that as well:

I will approve this comment so that people can read it.

First, there is no “brand” of physics. There is physics and then there is not physics. Because of how it’s fundamentally designed, physics is physics. It must truly burn you up that the words “quantum theory” were coined by someone who was indoctrinated to a “material-empirical” outlook on the world. I find it especially funny that the like of you, oh so high and mighty in your supposed depth and vision, are not creative enough to create anything believable without stealing your entire foundation from my ilk. Ask yourself if you would even have a Quantum University to defend if it wasn’t for us.

“The material-empirical science oriented individual tends to live out of touch with Reality” —Wow, that’s an amazing oxymoron. Great job!

“If you’re ever fortunate enough to transition from the “ordinary mode consciousness,” dominated by obsessive “left-brained,” rational/analytical thought, and shift toward the higher, more transrational states for a break, you just may discover that indeed there is a connection to it all.” —And if you stopped eschewing the math, you might eventually realize that lying to yourself doesn’t actually get you out of the garden. But, sure, go ahead, put on the blindfold, spin yourself around a few more times and try to pin the tail on the donkey. I don’t mind.

As an aside, I would recommend this guy for a writing gig on Star Trek, he has an amazing capacity for inventing jargon that sounds like it should mean something. Transrational? We have rational and irrational. Argue for me as to where it helps to mix the two. I suppose this fellow and I are in agreement about something; nobody who isn’t in some turbid state of translucid parasanity would willfully spend money on Quantum University.

If you doubt the level of bile the idea of Quantum University brings up in me, please understand that if we lived 700 years ago, I would probably be riding out to help put these witches to the sword. If I can help to spread a single genuinely deep thought about them and what they do through the internet, I will.

The Organic Chemistry Lie

Fallout from my learning about Molecular Orbital theory and Hartree-Fock.

I’ve said repeatedly that Organic Chemistry is on the spectrum of pursuits that use Quantum Mechanics. Organic Chemists learn a brutal regimen of details for constructing ball-and-stick models of complicated molecules. I’ve also recently discovered that chemistry –to this day– is teaching a fundamental lie to undergraduates about quantum mechanics… not because they don’t actually know the truth, but because it’s easier and more systematic to teach.

As a basic example, let’s use the model of methane (CH4) for a small demonstration.

4 methane schematic

This image is taken pretty much at random from The New World Encyclopedia via a Google image search. The article on that link is titled “covalent bond” and they actually do touch briefly on the lie.

A covalent bond forms between two atoms when each atom donates one electron to make a shared pair. You have probably heard of sigma- and pi- bonds.

5 sigma pi bonds

This image of Ethylene (a fairly close relative of methane) is taken from Brilliant and shows details of the two most major types of covalent bonds. Along this path, you might even remember my playing around in the first post I made in this series, where I directly plotted sigma- and pi- bonds from linear combinations of hydrogenic orbitals.

 

 

These bond structure ideas seem to emerge predominantly based on papers by Linus Pauling in the 1930s. The notion is that the molecule is fabricated out of overlapping atomic orbitals to make a structure sort of resembling a balloon animal, as seen in the figure above containing ethylene. Organic chemistry is largely about drawing sticks and balls.

6 methane bonding

With methane, you have four sticks joining the balls together. We understand the carbon to be in sp3 hybridization, a construct offered directly by Linus Pauling in 1931: a four-orbital system with four sigma bonds around a tetrahedrally symmetric carbon, each hybrid three parts p and one part s. The orbitals are formed specifically from hydrogenic s- and p- types. If you count, you’ll see that there are 8 electrons involved in the bonding in this model.

I used to think this was the story.

The molecular orbital calculations tell me something different. First I will recall for you the calculated density for methane achieved by closed-shell Hartree-Fock.

Density low threshold

This density sort of looks like the thing above, I will admit. To see the lie, you have to glance a little closer.

7 molecular orbitals

This is a collection of the molecular orbitals calculated by STO-3G; the energy axis is not to perfect scale. The reported energies are high given the incompleteness of the basis. The arrows show the distribution of the electrons in the ground state, with one spin-up and one spin-down electron in each orbital. The -11.03 hartree orbital is the deep 1s electrons of the carbon; these are so tightly held that the density is not very visible at this resolution. The -0.93 orbital is the next out and the density is mainly like a 2s orbital, though when you threshold to see the diffuse part of the wave function, it takes on a sort of tetrahedral shape. The next three orbitals at -0.53 are degenerate in energy and have weird blob-like shapes that don’t really look like anything; one of them sort of looks like a Linus Pauling sp-hybrid, but we’re stumped by the pesky fact that there are three rather than four. The next four orbitals above zero are virtual orbitals and are unpopulated in the ground state of the molecule –these could be called anti-bonding states.

Focusing on the populated degenerate orbitals:

 

 

These three seem to throw a wrench at everything that you might ever think from Linus Pauling. They do not look like the stick-like bonds that you would expect from your freshman chemistry balloon animal intuition. Fact is that these three are selected in the Hartree-Fock calculation as a composite rather than as individual orbitals. They occur at the same energy, meaning that any mixture of them is an equally valid set of solutions; the calculation finds all three together, not one at a time. This has to be the case because these orbitals examined in isolation do not preserve the symmetry of the molecule.

With methane, we must expect the eigenstates to have tetrahedral symmetry: the symmetry transformations for tetrahedral symmetry (120 degree rotations around each of the points) would leave the Hamiltonian unaltered (it transforms back into itself), so that the Hamiltonian and the symmetry operators commute. If these operators commute, the eigenstates of the molecule’s Hamiltonian must be simultaneous eigenstates of tetrahedral symmetry. This is basic quantum mechanics.

You can see by eye that these orbitals are not.
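Stated compactly, since the argument deserves the symbols: if R is any symmetry operation of the molecule, then R H R⁻¹ = H, which is the same as [H, R] = 0. A non-degenerate eigenstate of H must then transform into itself (up to a phase) under R, and a degenerate set must transform among themselves, as the three orbitals above evidently do only when taken together.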

Now, with this in mind, you can look at the superposition of these which was found during the Hartree-Fock calculation:

density of superposition states 3 4 5 v2

This is the probability distribution for the superposition of the three degenerate eigenstates above. Now we have a thing that’s tetrahedral. Note, there is no thresholding here, this is the real intensity distribution for this orbital collection. This manifold structure contains 6 electrons in three up-down spin pairs where they are in superpositions of three unknown (unknowable) degenerate states.

The next lower energy set has two electrons in up-down and looks like this:

state 2 -0.925

This is the -0.93 orbital without thresholding so that you can see where the orbital is mostly distributed as a 2s-like orbital close to the Carbon atom in the center. It does have a diffuse fringe that reaches the hydrogens, but it’s mainly held to the carbon.

I have to conclude that the tetrahedral superposed orbital thing is what holds the hydrogens onto the molecule.

 

 

Where are my stick-like bonds? If you stop and think about the Linus Pauling sp-hybrids, you realize that those orbitals in isolation also don’t preserve symmetry! Further, we’ve got a counting conundrum: the orbitals holding the molecule together have six electrons, while the ball-and-stick covalent sigma-bonded model has eight. In the molecular orbital version, two of the electrons have been drawn in close to the carbon, leaving the hydrogen atoms sitting out in a six-electron tetrahedral shell state.

This vividly shows the effect of electronegativity: carbon is withdrawing two of the electrons to itself while only six remain to hold the four hydrogen nuclei. There is not even one spin-up-down two electron sigma-bond in sight!

And so we hit the lie: there is no such thing as sigma- and pi- bonds!

…there is no spoon…

The ideas of the sigma- and pi-bonds come from a model not that different from the Bohr atom. They have power to describe the multiplicity that comes from angular momentum closure, having originated as a description in the 1930s explaining bonding effects noticed in the 1910 to 1920 range, but they are not a complete description. The techniques to produce the molecular orbitals originated later: ’50s, ’60s, ’70s and ’80s. These newer ideas are crazily different from the older ones and require a good dose of pure quantum mechanics to understand. I have a Physical Chemistry book for Chemists from the early 2000s that does not contain a good treatment of molecular orbital theory, stopping at basically the variational methods Pauling and his contemporaries were using in the 1930s. I asked one of my coworkers, who is versed in organic chemistry models, how many electrons she thought were in the methane bonding system and she said “8,” exactly as I would have prior to this little undertaking.

There’s a conspiracy! We’re living in a lie, man!

 

Edit 2-12-19:

I spent some time looking at Ethylene, which is the molecule featuring the example of the balloon animal Pi-bond in the image above. I found a structure that resembles a Pi-bond at the highest energy occupied orbital of the molecule.

Density of Ethylene:

ethylene density threshold

Density of Ethylene:

ethylene density threshold 2

I’ve added two images of the density so that you can see the three dimensional structure.

Ethylene -0.32 hartrees molecular orbital, looks like a pi-bond:

-0.323 ethylene

The -0.53 hartrees orbital looks sort of sigma-like between the carbons:

-0.528 ethylene

The rest of the orbitals look nothing like conventional sigma- or pi- bonds. The hydrogens are again attached by a manifold of probability density which probably allows the entire system to be entangled and invertible based on symmetry.

Admittedly, ethylene has only one pi-bond and the first image above probably qualifies as the pi-bond. I would point out, however, that in the case of ethylene, the stereotypical sigma- and pi- configurations between the carbons match the symmetry of the molecule, which has a reflection symmetry plane between the carbons and a 180 degree rotation axis along the long axis of the molecule. The sigma- and pi- bond configurations can be symmetry preserving here, but for the carbons only.

One other interesting observation is that the deep electrons in the 1s orbitals of the carbons are degenerate in energy, leading these orbitals to be entangled:

-11.02 ethylene

This also matches the reflection symmetry of the molecule (and would in fact be required by it). There are four electrons in this orbital and you can’t tell which are which, so the probability distribution allows them to be in both places at once… either on the one carbon or on the other. Note, this does not mean that they are actually in both places; it means that you could find them in one place or the other and that you cannot know where they are unless you look –I think this distinction is important and frequently overlooked.

An interesting accessory question here is what happens if you twist ethylene? Molecules like ethylene are not friendly to rotation along the long axis of the double bond because that supposedly breaks the pi-bonding. So, I did that. The total energy of the molecule increases from -77.07 to -76.86 hartrees; that’s a small fraction of the total energy, but it would constitute a substantial barrier to rotation around the double bond.
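To put that difference in chemical units (plain unit conversion from the numbers I just quoted; STO-3G energies shouldn’t be trusted quantitatively anyway):

ΔE = (−76.86) − (−77.07) = 0.21 hartree ≈ 0.21 × 627.5 kcal/mol ≈ 130 kcal/mol,

which is tiny against the total energy but a very stiff wall against thermal rotation.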

Twisted Ethylene density, rotated about the bond by 90 degrees:

twist ethylene density threshold 1

In this case, you do get what appear –sort of– to be four-fold degenerate sigma-bonds attaching the hydrogens:

T ethylene -0.551

But, the multiplicity is about two-fold degenerate, suggesting only four electrons in the orbital instead of eight, which badly breaks the sigma-bond idea (of two electrons to a sigma-bond). This again suggests strong electron withdrawing by carbon, and stronger with ethylene than methane.

The highest energy occupied state rises from -0.32 hartrees in the planar state to -0.177 in the twisted state… and it looks like a broken pi-bond:

T ethylene -0.177

I think that the conventional idea about why ethylene is rigid is probably fairly accurate. The pictures here might be regarded as a transition state between the two planar cases where the molecule has a barrier to twisting, but is permitted to do so at some slow rate.

In the twisted case, the deep 1s electrons on the carbons are broken from reflection symmetry and they become distinctly localized to one carbon or the other.

 

 

Overall, I can see why you would teach the ideas of the sigma- and pi- bonds, even though they are probably best regarded as special cases. If you’re not completely aware that they are special cases, and that pictures like the one on Brilliant.org are broken, then we have a problem.

This exercise has been a very helpful one for me, I think. I’ve heard a huge amount about symmetries and about different organic chemistry conventions. Performing this series of calculations really helps to bridge the gap. Seeing actual examples is eye-opening. Why aren’t there more out there?

 

Edit 2-23-19:

As I’ve continued to learn more about electronic bonds, I’ve learned that the structural details have been argued continuously for a long time. It becomes clear pretty quickly that the molecular orbital structures tend to exclude the notions you encounter early in schooling. Still, molecular orbitals have broken-physics problems of their own when you try to pull them apart by splitting a molecule in half. The wave function ends up having to be molecular orbital-like when the molecule is intact, but atomic orbital-like when the molecule is pulled apart into its separate atoms.

I found a paper from 1973 by Goddard and company which rescues some of the valence bond ideas as Generalized Valence Bonds (GVB). Within this framework, the molecular orbitals are again treated as linear combinations of atomic parts, and it answers the protestations about symmetry by saying simply that if you can make a combination of atomic orbitals that globally preserves the symmetry in a molecule, then that combination is an acceptable answer. GVB adds to the older ideas by putting in the notion that bonds can push and deform each other, which certainly fits with the things you start to see when you examine the molecular orbitals.

You can have sigma and pi bonds if you make adjustments. I’m not sure yet how the GVB version of methane would be constructed, but the direct treatment of carbon in the paper slays the idea of sp-hybridization, as I understand it, while still producing the expected geometry of molecules.

Still thinking about this.

 

edit 3-5-19:

I’ve been strongly aware that my little Python program is simply not going to cut it in the long haul if I desire to be able to make some calculations that are actually useful to a modern level of research. I decided to learn how to use GAMESS.

For a poor academic with some desire to do quantum mechanics/ molecular mechanics type calculations, GAMESS is a godsend.

More than that actually. GAMESS is like stumbling over an Aston Martin Vanquish sitting in an alleyway, unlocked, with the keys in the ignition, where the vanity plate says “wtng4U.” It isn’t actually shareware, but it could be called licensed freeware. GAMESS is an academic project with roots clear back in the 1970s, roughly parallel to Gaussian; it still exists today and is accessible to people whom the curators deem reasonable. My academic email address probably helped with the vetting and I can’t say I know exactly how far they are willing to distribute their admittedly precious program.

To give you an idea of the performance gap between my little go-cart and this Porsche: the methane calculations I made above took 17 seconds for my Python program… GAMESS did it in 0.1 seconds. Roughly 170-fold! This would bring benzene down from two hours for my program to maybe a few minutes with GAMESS.

methane density

This image, produced by a GAMESS satellite program called wxMacMolPlt, is a methane coordinate model with a GAMESS-calculated electron density depicted as a mesh to demonstrate a probability isosurface. What GAMESS adds, relative to where I was in my own efforts, is sophistication, including direct calculation of orbital electron occupancy. Under these calculations, it’s clear that electrons are withdrawn from the hydrogens, but maybe not quite as extremely as my crude estimations above would suggest: the orbitals associated with the hydrogens have 93% to 96% electron occupancy… withdrawn, but not so withdrawn as to be empty (I estimated 6 for 8 electrons above, or more like 75% occupancy, which was relatively naive). This presumably comes from the fringes of the 2s orbital centered on the carbon. Again, the analysis is very different from the simple notations of sigma- and pi-bonding; the electrons are clearly set in clouds defined by the whole molecule rather than as distinct localizations.

I’ve really just learned how to make GAMESS work, so my ability to do this is very much limited. And, admittedly, since I have no access to real computer infrastructure (just a quadcore CPU) it will never reach its full profound ability. In my hands, GAMESS is still an atomic bomb used as a fly swatter. We’ll see if I can improve upon that.

 

edit 3-10-19:

Hit a few bumps learning how to make GAMESS dance, but it seems I’ve managed to turn it against the basic pieces I was able to attack on my own.

Here is Ethylene, both a model and a thresholded form of the total electron density.

I also went and found those orbitals in the carbon-carbon bond.

The first is the sigma-like bond at -0.54 and the second is the pi-like bond at -0.33. The numbers here are slightly off from what I quote above because the geometry is optimized and STO-3G ends up optimizing to slightly shorter than X-ray observed bond lengths. These are somewhat easier to see than the clouds I was able to produce with my own program (though I think my work might be a little prettier). I’ve also noticed that you can’t plot the density of orbital superpositions with the available GAMESS-associated programs, as I did with methane above. I can probably get tricky by processing molecular orbitals on my own to create the superpositions and then plot them –GAMESS handily supplies all the eigenvectors and basis functions in its log files.

In the build of GAMESS that I acquired, I’ve stumbled over an apparent bug. The program can’t distinguish tetrahedral symmetry in a normal manner… it’s converting the Td point group of methane into what appears to be a D2h point group. I was able to work around this by calling the symmetry C1. Considering that I started out with no idea how to enter anything at all, I take this as a victory. As open freeware, they work with a smaller budget and team, so I think the goof is probably understandable –though it sure felt malicious when I realized that the problem was with GAMESS itself. I’m not savvy enough with programming to dig in and fix this one myself, I think, though the pseudo-open source nature of GAMESS would certainly allow that.

Given how huge an effort my own python SCF program ended up requiring, I’m not too surprised that GAMESS has small problems floating around. As an academic product, they have funding limits. At the very least, I’m impressed that it cranks out in seconds what took my program minutes… that speed extends my range a lot. I was able to experiment with true geometry optimization in GAMESS where my program stopped with me scrounging atomic coordinates out of the literature.

(edit 4-10-19):

pyrophosphate -4 in water
Pyrophosphate, fully deprotonated.

This is an image of pyrophosphate, calculated with the 6-311G basis set in GAMESS by restricted Hartree-Fock. This includes geometry optimization and uses a polarizable continuum model to represent solvation in water. The wireframe itself represents an equi-probability surface in the electron density profile while the coloration of the wireframe represents the electrostatic potential at that surface (blue for negative, red for positive).

Edit 5-7-19:

This was an attempted saddle point search in GAMESS trying to find whether a transition state exists in the transfer of a proton from one water molecule to another (for formation of hydronium and hydroxide).

 

water_bounce
Saddle point search for proton transfer in water.

This is the weirdest thing I’ve seen using GAMESS yet. This is not exactly a time simulation, it’s an attempted geometry minimization showing the computer trying different geometries in an attempt to locate a saddle point in the potential energy surface. I’m befuddled a bit by this search type because I’m trying to study a reaction pathway in my own work. Unfortunately, this sort of geometry search is operating in a counter-intuitive fashion and I’m not certain whether or not it’s broken in the program. However, when you see two oxygens fighting over a proton… well… that’s just cool. If the waters are set close enough so that they enjoy a hydrogen bond, the energy surface appears to have no extrema except where the protons are located as two waters. If you back the waters off from one another so that they are out of hydrogen bonding distance and pull the hydrogen out so that it is hydrogen bonding with both oxygens, you get this weird behavior where the proton bounces around until the nuclei are close enough to go back to the hydrogen bonded water configuration. I need to pin the oxygens away from each other, which won’t happen in reality.

Not sure what I think.

Building a Molecule; Time Spent in the Gap

How is a molecule built? Rather, what exactly is required to predict the electronic structure of a molecule using modern tools?

I will use this post to talk about my time spent learning how to apply the classic quantum mechanical calculation of Hartree-Fock (note, this is plain old Hartree-Fock rather than multi-configuration Hartree-Fock or something newer that gives more accurate results). I’ve spoken some about my learning of this theory in a previous post. Since writing that other post, I’ve passed through numerous travails and learned quite a lot more about the process of ab initio molecular calculation.

My original goal was several-fold. I decided that I wanted a structural tool that, at the very least, would allow me access to some new ways of looking at things in my own research. I chose it as a project to help me acquire some skill in a computer programming language. Finally, I also chose to pursue it because it turned out to be a very interesting question.

With several months of effort behind me, I now know several things. First, I do think it’s an interesting tool which will give new insight into my line of research, provided I access the tool correctly. Second, I think I was incredibly naive in my approach: the art and science of ab initio calculation is a much bigger project than can bear high quality fruit in the hands of one overly ambitious individual. It was a labor of years for a lot of people, and the time spent getting around my deficits in programming is doubly penalized by the sheer scope of the project. My little program will never produce a calculation at a modern level! Third, I chose Python for my programming language for its ease of availability and ubiquity, but I think a better version of the self-consistent field theory would be written in C or Fortran. Without having this be my full-time job, which it isn’t, I doubt there’s any hope of migrating my efforts to a language better suited to the task. For any other intrepid explorers seeking to tread this ground in the future, I would recommend asking yourselves how pressing your needs are: you will never catch up with Gaussian or GAMESS or any of the legion of other professionally designed programs intended to perform ab initio quantum mechanics.

Still, I did get somewhere.

The study of Hartree-Fock is a parallel examination of Quantum Mechanics and the general history of how computers and science have become entangled. You cannot perform Hartree-Fock by hand; it is so huge and so involved that a computer is needed to hold it together. I talked about the scope of the calculation previously and what I said before still holds. It cannot be done by hand. That said, the physics was still worked out mostly by hand.

I would say that part of the story started almost 90 years ago. Linus Pauling wrote a series of papers connecting the then newly devised quantum mechanics of Schrodinger and his ilk to the puzzle of molecular structure. Pauling took hydrogenic atomic orbitals and used linear combinations of them to come up with geometric arrangements for molecules like water and methane and benzene. A sigma-orbital is built from two atomic orbitals placed side-by-side with an overlap, with the right distance picked by energy optimization. A pi-orbital is the same, but with two p-orbitals placed side-by-side and turned so that they lie parallel to one another.

Much of Pauling’s insight now forms the backbone of what you learn in Organic Chemistry. The geometry of molecules as taught in that class came out of these years of development, and Pauling’s spell of ground-breaking papers from that time will have you doing a double-take regarding exactly how much impact his work had on chemistry. Still, in the 1930s, Pauling and his peers had only approximations, with limited accuracy for the geometry and no real ability to calculate spectra.

Hartree-Fock came together gradually. C. C. J. Roothaan published what are now called the Roothaan equations, which constitute the core of more modern closed-shell Hartree-Fock, in 1951. Nearly simultaneously, Frank Boys published a treatment of gaussian functions, showing that all needed integrals for molecular overlap could be calculated in closed form with the gaussian function family, something not possible with the Slater functions that were to that point being used in place of the hydrogenic functions. Hydrogenic functions do show one exact case of what these wave functions actually look like, but they are basically impossible to calculate for any atom except hydrogen and pretty much impossible to adapt to broader use. Slater functions took over in place of the exact hydrogenic functions because they were easier to use as approximations. Gaussian functions then took over from Slater functions because they are easier still and much easier to put into computers, a development largely kicked off by Boys. There is a whole host of names that stick out in the literature after that, including John Pople, who duly won a Nobel prize in 1998 for his work leading to the creation of Gaussian, which to this day is a dominant force in molecular ab initio calculation (and will do everything you could imagine needing to do as a chemist if you’ve got like $1,000 for the academic program license… or $30,000 if you’re commercial).
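To show what “closed form” buys you, here is a minimal Python sketch of the simplest case, the overlap of two normalized s-type gaussian primitives (this is the standard textbook formula; p- and d-type integrals take considerably more machinery):

import numpy as np

def s_overlap(alpha, A, beta, B):
    # overlap of normalized s-gaussians exp(-alpha|r-A|^2) and exp(-beta|r-B|^2)
    A, B = np.asarray(A, float), np.asarray(B, float)
    norm = (2 * alpha / np.pi) ** 0.75 * (2 * beta / np.pi) ** 0.75
    p = alpha + beta
    R2 = np.dot(A - B, A - B)
    return norm * (np.pi / p) ** 1.5 * np.exp(-alpha * beta / p * R2)

print(s_overlap(0.5, [0, 0, 0], 0.5, [0, 0, 0]))    # a function overlaps itself: 1.0
print(s_overlap(0.5, [0, 0, 0], 0.5, [0, 0, 1.4]))  # decays smoothly with separation

No numerical quadrature anywhere, which is the whole point; Boys showed the entire family of needed integrals could be handled this way.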

The depth of this field set me to thinking. Sitting here in the modern day, I am reminded slightly of Walder Frey and the Frey brethren in the Game of Thrones. This may seem an unsightly and perhaps unflattering comparison, but stick with me for a moment. In the Game of Thrones, the Freys own a castle which doubles as a bridge to span the waters of the Green Fork in the lands of Riverrun. The Frey castle is the only crossing for miles and if you want to cut time on your trade (or marching your army), you have no choice but to deal with the Freys. They can charge whatever price they like for the service of providing a means of commerce –or, as the case may be, war– and if you don’t go with them, you have to go the long way around. Programs like Gaussian (and GAMESS, though it is basically protected freeware) are a bridge across a nearly uncrossable river. They have such a depth of provenance in the scientific service that they provide that you are literally up a creek if you try to go the long way around. This is something I’ve been learning the hard way. In truth, there are many more programs out there which can do these calculations, but they are not necessarily cheap or, conversely, stable.

I think this feature is interesting on its own. There is a big gap between the Quantum Mechanics which everybody knows about, which began in the 1920s, and what can be done now. The people writing the textbooks now in many cases came into their own in an environment where the deepest parts of ab initio were mainly already solved. Two of the textbooks I delved into, the one by Szabo and Ostlund, and work by Helgaker, clearly show experts who are deeply knowledgeable of the field, but have characteristics suggesting that these authors themselves have never actually been able to cross the river between classical quantum mechanics and modern quantum chemistry fully unaided (Szabo and Ostlund never give theory that can handle more than gaussian S-orbitals, where what they give is merely a nod to Boys, while Helgaker is given to quoting as recently as 2010 from a paper that, to the best of my ability, actually gives faulty theory pending some deep epistemological insight guarded by the cloistered brotherhood of Quantum Chemists). The workings hidden within the bridge of the Freys are rather impenetrable. The effort of going from doing toy calculations as seen in Linus Pauling’s already difficult work to doing modern real calculations is genuinely a herculean effort. Some of the modern textbooks cost hundreds of dollars and are still incomplete stories on how to get from here to there. Note, this is only for gaining the underpinnings of Hartree-Fock, itself a flawed technique without Configuration Interaction or other more modern adjustments –and even those get short shrift if you don’t have ways of dealing with the complexities of the boundary conditions.

Several times in the past couple months, I’ve been wishing for Arya Stark’s help.

I will break this story up into sections.

 

The Spine

The core of Hartree-Fock is perhaps as good a place to start as any. Everybody knows about the Schrodinger equation. If you’ve gone through physical chemistry, you may have cursed at it a couple times as you struggled to learn how to do the Particle-in-a-Box toy problem. You may be a physicist and have solved the hydrogen atom, or seen Heisenberg’s way of deriving spherical harmonics and might be aware that more than just Schrodinger was responsible for quantum mechanics.

Sadly, I would say you basically haven’t seen anything.

As the Egyptian Book of the Dead claims that Death is only a beginning, Schrodinger’s equation is but the surface of Quantum Mechanics. I will pick up our story in this unlikely place by pointing out that Schrodinger’s equation was put through the grinder in the 1930s and 1940s in order to spit out a ton of insight involving molecular symmetry and a lot of other thoughts about group symmetry and representation. Hartree and Fock had already spent time on their variational methods, and systematic techniques for a combined method called Hartree-Fock began to emerge by the 1950s. Heck, an atomic bomb came out of that era. The variant of Schrodinger’s equation where I pick up my story is a little ditty now called the Roothaan equation.

FC = SCε

It definitely doesn’t look like the Schrodinger equation. In fact, it looks almost as small and simple as E=mc^2 or F = ma, but that’s actually somewhat superficial. I won’t go terribly deeply into the derivation of this math because it would balloon what will already be a long post into a nightmare post. My initial brush with this form of the Roothaan equation came from Szabo and Ostlund, but I’ve since gone and tracked down Roothaan’s original paper… only to find that Szabo and Ostlund’s notation, which I found to be quite elegant, is actually almost directly Roothaan’s notation. Roothaan’s purpose seems to have been collecting prior insight regarding Hartree-Fock into a systematic method.

This equation emerges from taking the Schrodinger equation and expanding it into a many-body system where the Hamiltonian has been applied to a wave function that preserves electron exchange anti-symmetry –literally a Slater determinant wave function, where you may have like 200! terms or more. ‘F’, ‘S’ and ‘C’ are all usually square matrices.

'F' is called the Fock matrix and it contains all of the terms of the Hamiltonian. Once the equation is in this form, you're out in numerical calculation land: no mathematical dongles remain in the matrix, only numbers. The Fock matrix is symmetric, meaning that the terms above and below the diagonal equal each other, an outcome of quantum mechanics using hermitian operators. To construct the Fock matrix, you've already done a ton of integrals and a huge amount of addition and subtraction on terms that look like pieces of the Schrodinger equation. You can think of the Fock matrix as a version of the Hamiltonian. Within it are the Core Hamiltonian, which looks like the Schrodinger Hamiltonian, and the 'G' matrix, a sum of electron repulsion and electron exchange terms that only appear once the Schrodinger equation is expanded to a many-body system. The Fock matrix is real and symmetric rather than merely hermitian because the Roothaan equations assume a closed-shell system… every occupied molecular orbital holds one spin-up and one spin-down electron, degenerate and indistinguishable. The eigenstates are therefore spatial orbitals rather than spin-orbitals; the spin has been integrated out.

'C' is a way to represent the eigenstates of the Hamiltonian. Note, I did not say that 'C' is a wave function, because these wave functions are effectively impossible to write down (how do you write ~200! or 400! terms?). 'C' is not even the eigenstates directly, but the coefficients of the basis set used to represent them in the space of the Fock matrix. Each column of 'C' is one molecular orbital, and the columns are orthonormal in the metric of the overlap matrix (C^T S C = 1; in an orthogonalized basis this reduces to C^T C = 1, making the matrix unitary). The orbitals contained by 'C' are the ones associated with the Hamiltonian in the form of the Fock matrix.

'S' is called the "overlap matrix," a symmetric matrix constructed from the basis set. As you may have read in my other post on this subject, the basis may be a bunch of gaussian functions, a bunch of Slater functions, or some other miscellaneous function set used to represent the system at hand. The overlap matrix is introduced because whatever basis you chose may be composed of functions that are not orthogonal to one another –gaussian basis functions are useful, but they are not orthogonal– and you need some way to account for that non-orthogonality when constructing orthogonal combinations of the basis functions.

The form of the Roothaan equation written above is a generalized eigenvalue equation, where ε is the eigenvalue. For molecular orbitals, this eigenvalue is an energy called the orbital energy. The basis functions are non-orthogonal, as accommodated by the 'S' matrix, and each eigenvalue belongs to a combination of basis functions, expressed in a column of 'C', with those combinations orthogonal to each other.

What makes this equation truly a monstrosity is that the Fock matrix is dependent itself on the ‘C’ matrix. The way this dependence appears is that the integrals which are used to construct the Fock matrix are calculated from the values of the ‘C’ matrix. The Roothaan equation is a sort of feedback loop: the ‘C’ matrix is calculated from working an eigenvalue equation involving ‘F’ and ‘S’ to find ε, where ‘C’ is then used to calculate ‘F’. In practice, this operates as an iteration: you guess at a starting Fock matrix and calculate a ‘C’ matrix, which is then used to calculate a new Fock matrix, from which you calculate a new ‘C’ matrix.  The hope is that eventually the new ‘C’ matrix you calculate during each cycle of calculation converges to a constant value.
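In code, the loop is small even if the machinery behind it is not. Here is a minimal sketch of the iteration, assuming a hypothetical build_fock(P) routine standing in for all the integral work (it returns the Fock matrix for a given density matrix) and using scipy's generalized symmetric eigensolver for the F C = S C ε step:

import numpy as np
from scipy.linalg import eigh

def scf(build_fock, S, n_occ, max_iter=100, tol=1e-8):
    # build_fock(P) is a stand-in for the integral machinery:
    # it assembles the Fock matrix from a density matrix P
    n = S.shape[0]
    P = np.zeros((n, n))                 # crude initial guess
    for _ in range(max_iter):
        F = build_fock(P)                # F depends on the orbitals...
        eps, C = eigh(F, S)              # ...and the orbitals come from F
        C_occ = C[:, :n_occ]             # fill the lowest-energy orbitals
        P_new = 2.0 * C_occ @ C_occ.T    # closed-shell density matrix
        if np.max(np.abs(P_new - P)) < tol:
            return eps, C                # self-consistency reached
        P = P_new
    raise RuntimeError("SCF failed to converge")

Checking convergence on the density matrix rather than on 'C' itself sidesteps the arbitrary sign flips an eigensolver can put on the columns from one cycle to the next.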

An ouroboros eating its own tail.

This is the spine of Hartree-Fock: you're looking for a convergence that gives constant output values of 'C' and 'F'. As I articulated poorly in my previous attempt at this topic, this is the self-consistent electron field. Electrons occupy some combination of the molecular orbitals expressed by 'C' at the energies ε, forming an electrostatic field that governs the form of 'F', for which 'C' is the only acceptable solution. 'C' is used to calculate the values of the Hamiltonian inside 'F', where the integrals are the repulsions of electrons in the 'C' orbitals against each other and the attractions of those electrons toward the nuclei, giving the kinetic and potential energies you usually expect in a Hamiltonian.

Here is the scale of the calculation: a minimal basis set for methane (four hydrogens and a carbon) is 9 basis functions. The simplest basis set in common use is Pople's STO-3G basis, which creates each orbital from a sum of gaussian functions (called a contraction)… "3G" meaning three gaussians to a single orbital. One overlap integral between two contracted functions therefore involves 3×3 = 9 primitive integrals, so generating the 9 x 9 S-matrix involves 9*9*9 = 729 primitive integrals. The kinetic energy and nuclear attraction terms each cost roughly another 729 (call it ~2,200 so far), shortened by the fact that these matrices are symmetric, so only a little more than half actually need to be calculated. The electron-electron interactions, including repulsions and exchanges, are a much larger number still: there are 9^4 = 6,561 two-electron integrals, permutational symmetry cuts those to about 1,000 unique ones, and each runs over 3^4 = 81 primitive combinations, giving roughly 84,000 primitive integrals. Call it on the order of 100,000 integration operations for a molecule of only 5 atoms using the least expensive form of basis set.

The only way to do this calculation is by computer. Literally thousands of calculations go into making ‘F’ and then hundreds more to create ‘C’ and this needs to be done repeatedly. It’s all an intractably huge amount of busy work that begs for automation.

 

Solving Big Eigenvalue Problems

One big problem I faced in dealing with the Roothaan equations was trying to understand how to solve big eigenvalue problems.

Most of my experience with calculating eigenvalues has been working analytically by hand. You may remember this sort of problem from your linear algebra class: you set up a characteristic equation by subtracting a dummy variable for the eigenvalue from the diagonal, setting the determinant of the matrix equal to zero, and then solving. It's kind of complicated by itself, and it depends on the characteristic polynomial factoring nicely. A 3 x 3 matrix produces a cubic equation –which you hope to God factors, because nobody ever wants to go beyond the quadratic formula. If it doesn't, you are up a creek without a paddle even at just 3 x 3.

For the example of the methane minimal basis set, the resulting matrices of the Roothaan equation are 9 x 9.

This is past where you can go by hand. Ideally, one would prefer to not be confined to molecules as small as molecular hydrogen, so you need some method of calculating eigenvalues that can be scaled and –preferably– automated.

This was actually where I started trying to write my program. Since I didn’t know at the time whether I would be able to build the matrix tools necessary to approach the math, I used solving the eigenvalue problem as my barometer for whether or not I should continue. If I couldn’t do even this, there would be no way to approach the Roothaan equations.

The first technique I figured out was Power Iteration. At the time, this seemed like a fairly straightforward, accessible method for pulling eigenvalues from a big matrix.

To perform power iteration, you operate a square matrix onto a vector, normalize the resulting vector, then act the matrix on the new vector again. Do this enough times (thousands of iterations, perhaps) and you eventually reach a point where the resulting vector is just the previous vector times a constant factor. The constant is the largest-magnitude eigenvalue of the matrix and the normalized vector is the associated eigenvector. To access the next smaller eigenvalue, you "deflate" the matrix: form the outer product of the eigenvector with itself, multiply it by the eigenvalue, and subtract the result from the matrix, producing a new matrix in which the already-determined eigenvalue has been "deactivated." Performing this set of actions repeatedly lets you work through every eigenvalue of the matrix.

There are some difficulties with Power Iteration. In particular, you're kind of screwed if an eigenvalue happens to be zero, since you lose the ability to find its eigenvector, and convergence slows to a crawl when two eigenvalues are close in magnitude.

Much of my initial work on self-consistent field theory used Power Iteration as the core solving technique. When I started to run into significant problems later in my efforts, and couldn’t tell whether my problems were due to the way I was finding eigenvalues or some other darker crisis, I ended up switching to a different solving technique.

The second solving technique I learned was Jacobi Diagonalization. I had stumbled onto Power Iteration in a list of computational eigenvalue methods found on-line; Jacobi, on the other hand, was recommended by the Szabo and Ostlund Quantum Chemistry book. Power Iteration pulls out one eigenvalue at a time, while Jacobi diagonalizes the whole matrix through a sequence of directly calculated rotations.

To my somewhat naive eye, the Jacobi method seems ready-made for quantum mechanics problems. A necessary precondition is that the matrix be at least symmetric, if not actually hermitian. And, since quantum chemistry mostly reduces its basis sets to real, symmetric forms, the Fock matrix is assured to be symmetric as a result of the hermiticity of ground-level quantum mechanics.

Jacobi operates on the observation that the off-diagonal elements of a symmetric matrix can be reduced to zeros by a sequence of unitary rotations. The rotation matrix (called a Givens matrix) can be directly calculated to convert one particular off-diagonal element to a zero. Do this repeatedly and the matrix converges toward diagonal form –each rotation can disturb previously zeroed elements a little, so the off-diagonal weight shrinks progressively rather than vanishing in one pass. Convergence is fastest if, on each turn, you pick the Givens matrix that zeros the largest off-diagonal element currently present. This largest element is referred to as "the pivot" since it's the crux of a mathematical rotation. As the pivot is never assured to be in any particular spot during the process, the program must work its way through off-diagonal elements in an almost random order, picking only the largest present at that time.

Once the matrix is diagonalized, all the eigenvalues lie along the diagonal… easy peasy. Further, the product of all the Givens matrices is an orthogonal matrix whose columns are the corresponding eigenvectors, in order.

In the process of learning these numerical methods, I discovered a wonderful diagnostic tool for quantum mechanics programs. As a check for whether or not Jacobi can be applied to a particular matrix, one should look at whether or not the matrix is symmetric. This isn’t an explicit precondition of Power Iteration and I didn’t write any tools to look for it while I was relying on that technique. After I started using Jacobi, I wrote a tool for checking whether or not an input matrix is symmetric and discovered that other routines in my program were failing in their calculations to produce the required symmetric matrices. This diagnostic helped me root out some very deep programming issues elsewhere in what turned out to be a very complex program (for an idiot like me, anyway).

 

Dealing with the Basis

I made an early set of design choices in order to construct the basis. As mentioned in detail in my previous post, the preferred basis sets are Gaussian functions.

It may seem trivial to most people who may read this post, but I was particularly proud of myself for learning how to import a comma-delimited .csv file into a python list while converting select character strings into floating points. In the final version, I figured out how to exploit exception handling to decide whether an input was intended to be text or a floating point number.

In the earliest forms of the program, most of my work was done in lists, but I eventually discovered python dictionaries. If you're working in python, I caution you not to use lists for tasks that require searchability: dictionaries are easier! Any astute computer scientists out there can no doubt chime in with plenty of advice about where I'm wrong, but boy oh boy, I fell in love with dictionaries really quick when I discovered the functionality they promote.

For dealing with the basis sets, python has a fair number of tools in its math and cmath modules. These are limited to basic operations, however. It may seem a no-brainer, but teaching a program how to do six-dimensional integrals is really not as easy as finding the right library. This intended task defined my choices for how I stored my basis sets.

Within the academic papers, most of the gaussian basis sets can be found in tables stripped of everything but the vital constants. The orbitals are fabricated from "contractions," where a contraction is simply a sum of several bare-bones "primitive" gaussian functions, each identified uniquely by a weighting coefficient and an exponential coefficient. There is frequently also a standard scale factor to fit a particular contraction to the orbitals of a desired atom. The weighting coefficient sets the magnitude of the gaussian function (often chosen so that the contraction has an overlap integral normalized to equal 1), while the exponential coefficient sets how widely the bell curve of a particular primitive spreads. The scale factor is then applied uniformly across an entire contraction to make it bigger or smaller for a particular atom.

In the earliest papers, these gaussian contractions are frequently intended to mimic atomic orbitals. The texts often refer to “linear combinations of atomic orbitals” when calculating molecular orbital functions. In later years, it seems pretty clear that these gaussian contractions are not necessarily specified atomic orbitals so much as an easy basis set which has the correct densities to give good approximations to atoms with relatively few functions. It’s simply an economical basis for the task at hand.

Since python doesn't automatically know how to deal with gaussian functions, my programming choice was to create a gaussian primitive class. Each primitive object automatically carries around all the numbers needed to identify a particular gaussian primitive, with a few methods to establish the object and report its associated constants and position. The orbitals were then lists of these primitive objects. Later in my programming, I even learned how to make the object callable, so that a primitive could spit out its value at a particular position in space.

This is certainly trivial to the overall story, but it was no small amount of work. That I learned how to do class objects is a point of pride.

Constructing generalized basis functions turns out to be a distinct set of actions from fabricating the particular basis used to attack a select problem. Once a generalized basis was available to the program, I decided to build The Basis as a list keyed to atoms at positions in space. This gave a list of lists: each entry in the top list was itself a list of primitive objects constituting an orbital, with each orbital associated with a particular center in space connected to an atom. It's not strictly necessary that these basis functions be centered at the locations of the atoms, but their distributions are ideally suited if they are.

With a basis in hand, what do you do with it?

 

The Nightmare of Integration

I’ve been debating how much math I actually want to add to this post. I think probably the less the better. If any human eyes ever actually read this post and are curious about the deeper mathematical machinations, I will happily give whatever information is in my possession. In the end, I want merely to tell the story of what I did.

For calculating a quantum mechanical self-consistent electron field, there is a nightmare of integrals that need to be evaluated. These integrals are the cutting rain that forms a deluge which will warn any normal human being away from attempting this calculation. The count isn't small, and it does not scale linearly with the size of the problem. It gets big very fast.

The basic paper sitting at the bottom of all contracted gaussian basis sets is the 1950 paper by Frank Boys. I ended up in this paper after I realized that Szabo and Ostlund had no intention of telling me how to do anything deeper than s-orbital gaussians. Being poor, I can't just randomly buy textbooks hunting for a particular author who will explain all the integrals I desired. So, I took advantage of being in an academic position and turned over the scientific literature to find how this problem was actually addressed historically. That got me to Boys.

Boys lays out the case for why one would ever use gaussian functions in these quantum mechanical calculations. As it turns out, only four integral forms are needed to perform Hartree-Fock: an overlap integral, a kinetic energy integral built from derivatives of the overlap, a nuclear attraction integral (which adds the nucleus as an extra center) and an electron repulsion integral over two electron coordinates (which covers both repulsion and exchange). For many basis function types you might use, including the vanilla hydrogenic orbitals, each of these integrals is unique to the functions you put into them; no single method covers the whole family, and some cases may not even have analytic solutions, making the problem computationally very expensive and quite possibly intractable. With gaussian functions, you can perform a clever trick: a p-type gaussian can be generated by differentiating an s-type gaussian with respect to its center. Because that derivative acts on a free parameter rather than the integration variable, the operations of integration and differentiation can be interchanged –instead of integrating the p-type gaussian, you integrate the s-type gaussian and then take the derivative to recover the p-type result. Derivatives are always easier!

Boys showed that all the needed integrals can be analytically solved for the s-type gaussian function, meaning that any gaussian class function can be integrated just by integrating the s-type function. In the process he introduced a crazy special function related to the Error Function (erf) which is now often called the “Boys Function.” The Boys function is an intriguing machine because it’s a completely new way of dealing with the 1/distance propagator that makes electromagnetism so hard (for one example, refer to this post).

Boys Function family with member ‘n’:

F_n(x) = ∫₀¹ t^(2n) e^(−x t²) dt

While this is a simplification, I'll tell you that it's still not easy.

I trawled the literature for a time looking for more explicit ways of implementing this whole gaussian integral problem, and grew depressed that most of what I found seemed too complicated for what I had in mind. Several sources gave useful insight into how to systematize some of these integrals to higher angular momentum gaussians, but not all. Admittedly, in my earliest pass, I don't think I understood the needs of the math well enough. Dealing with this whole problem ended up being an evolving process.

My earliest effort was an attempt to simply and methodically teach the computer how to do symbolic derivatives using the chain and product rules. This was not a small thing, and I figured I would probably end up with a slow program, but –by god– it would be mine. I had a semi-working system which achieved this.

A parallel problem that cropped up here was learning how to deal with the Boys function. I started out with but one form of the Boys function and knew that I needed erf to use it. Unfortunately, I rapidly discovered that I needed derivatives of the Boys function (each derivative is related to the 'n' in the function above). I trawled the literature for a long time trying to figure out how to perform these derivatives and eventually worked out, analytically, a series expansion built around erf that successfully formed the Boys function derivatives from the derivatives of erf. Probably the smartest and least useful thing I did in this entire project.

In its initial form, using these methods, I was able to successfully calculate the minimal basis hydrogen molecule and the HeH+ molecule, both of which have a minimal basis that contains only s-type gaussian functions.

hydrogen density

This image is the electron density of the hydrogen molecule, plotted using the python mayavi package. Literally just two sigma-bonded hydrogen atoms, where the scales are in Bohr radii (1 Bohr = 0.53 Angstrom). This is the simplest quantum chemistry system and the hardest system I could manage using strictly my own methods. Sadly, this system can be done by hand in Hartree-Fock.

When I tried to use more complex basis functions for atoms heavier than lithium, trying things like carbon monoxide (CO), my system failed rather spectacularly. Many of the difficulties appeared as non-symmetric matrices (mentioned above) and inexplicably ballooning integral values. The Hartree-Fock loop would oscillate rather than converge. Most of this tracked back to issues among my integrals.

One of the problems I discovered was that my erf-based Boys function couldn't handle input values approaching zero. The likely culprit: that formulation divides erf by half-integer powers of the input, producing a 0/0 form at the origin. I patched in the exact zero-limit values (the function itself is perfectly well defined there), and I got mostly good values for inputs larger than about 10^-4, but crazy, ballooning values in the little block within roughly 10^-5 of zero. In one version of correction, I simply introduced a cut-off to send my Boys function to its zero-limit value when I got sufficiently close, but this felt like a very inelegant fix. I searched the literature and around on-line and had collected 6 different representations of the Boys function before I was finished. The most stable version was a formulation that pulls the Boys function and all its derivatives out of the confluent hypergeometric function 1F1, which I was able to find implemented in the scipy special functions library (I felt fortunate; I had added scipy to my python environment for a completely unrelated reason that turned out to be a dead end, and ended up needing this special function… lucky!)

F_n(x) = 1F1(n + 1/2; n + 3/2; −x) / (2n + 1)

I write the formulation here because somebody may someday benefit from it. In this, F_n differs from the nth derivative of the Boys function by a factor of (−1)^n: the 'n' functions are all positive, while the derivatives of the Boys function alternate in sign. The official "Boys Function" is the n = 0 member.

Initially, I could not distinguish whether the errors in my program were in the Boys function or in the method being used to form the gaussian derivatives. I knew I was having problems with my initial implementation of the Boys function from a modified erf, but problems persisted no matter how I altered my Boys function. The best conclusion was that the Boys function was not my only stumbling block. Eventually I waved the white flag and surrendered to the fact that my derivative routine was not going to cut it.

This led to a period where I was back in the literature, learning how gaussian derivatives were made by the professional quantum chemists. I found a number of different strategies: Pople and Hehre produced a paper in 1978 outlining a method used in their Gaussian software which apparently performs a cartesian rotation to make most of the busy work vanish, and which is supposedly really fast. There was a method by Dupuis, Rys and King in the 1970s which generates the higher angular momentum integrals by a form of numerical quadrature. A paper by McMurchie and Davidson in 1978 detailed a recursion which builds the higher angular momentum gaussians from an s-type seed using hermite polynomials. Another, by Obara and Saika in 1986, broke the system down into a three-center gaussian integral and generated the integrals through a different set of recurrence relations. And still a further paper involving Pople elaborated something similar (I found the paper, but never read this one).

Because I had found a secondary source from a more modern quantum chemist detailing the method, I focused on McMurchie and Davidson. This method offered several fairly interesting techniques that I was able to program successfully, and it's where I learned to write recursive functions, since McMurchie and Davidson build their integrals from recurrence relations.

About this time, I was also butting my head against the difficulty that I had no way of checking whether my integrals were functioning properly. Szabo and Ostlund give values only for integrals involving s-type orbitals; I had nothing in the way of numbers for p-type orbitals. I could tell that my Hartree-Fock routine was not converging, but I couldn't tell why. I tried making a comparison calculation in Mathematica, but the integration failed miserably. I might have been able to go further with Mathematica if the learning curve of that programming weren't so steep –when the functions aren't directly in its colossal library, implementation can become pretty hard. Having hammered on the Boys function until I could build nothing more stable, and having found a way to diagnose whether my routines produced symmetric matrices, I had no other way of telling where the faults lay. These integrals are not easy by hand.

I dug up a 1966 paper by Taketa, Huzinaga and O-ohata, about the only paper I could find which actually reports values for these integrals given a particular setup. Apparently, after 1966, it stopped being a novelty to show that you could calculate these integrals, so nobody has published values in the last fifty years! Another paper, by Shavitt and Karplus a few years earlier, references values calculated at MIT earlier still, but aside from these, I struggled to find reference values. The experience was formative because it shows how hard you have to work to be competitive if you're not in the in-club of the field –for modern workers, the problem is solved, and you simply refer to a program built during that era which can do these operations.

Comparing to Taketa using the McMurchie-Davidson method, I was able to confirm that my overlap and kinetic energy integrals were functioning properly. The nuclear attraction integral was a bust, no dice, and the electron repulsion integrals were no better: they worked for s-orbitals, but not for p-type and higher. Unfortunately, Taketa and company had mistransliterated one of their basis functions from a previous paper, leading me to worry that maybe the values they reported were wrong too. I eventually decided that Shavitt was probably not also wrong, meaning that something was still broken in my integration, even though I had hammered on McMurchie until smoke was coming out of my ears and I was sure I had implemented it correctly.

This was rock bottom for me. You can sort of see the VH1 special: and here he hit rock bottom. I didn’t know what else to do; I was failing at finding alternative ways to generate appropriate reference checks and simply could not see what was wrong in my programming. I had no small amount of doubt about my fitness to perform computer programming.

My selected path forward was to learn how to implement Obara-Saika and turn it against McMurchie. Obara-Saika is also a recursive method that performs exactly the same derivatives without ever writing out a long-form derivative. Initially, Obara-Saika also gave values different from Taketa and Shavitt, but I was able to track down a stray -1 that changed everything. Suddenly, Obara-Saika was giving values right in line with Taketa.

When I compared, by hand, the outcomes of McMurchie-Davidson against those of Obara-Saika on the very simplest case of p-type primitives, I found that the two methods produced different values. For the same problem, my Obara-Saika did not reproduce my McMurchie-Davidson. I conclude either that I misunderstood McMurchie-Davidson, which is possible, or that the technique as printed is mangled –willfully, to protect a valuable piece of work from competitors, or simply by error (somebody tell Helgaker: he teaches this method in classes, reproducing it faithfully, which is part of why I originally trusted it). I do not know why McMurchie-Davidson fails in my hands, because everything they do looks right!

 

Pretty Pictures

After a huge amount of work, I broke through. My little python program was able to reproduce some historical Hartree-Fock calculations.

water density filter low 2

This image is a mayavi plot depicting the quantum mechanical electron density of water (H2O). The oxygen is centered while the two hydrogens are at the points of the stems. The calculation used the STO-3G minimal basis set, which builds each orbital from three primitive gaussians. Each hydrogen contributes only a 1s orbital, while the oxygen contributes a 1s, a 2s and three 2p orbitals in the x, y and z directions. The image above is threshold-filtered to show the low density parts of the distribution, where the intensities are near zero; this was necessary because the oxygen has so much higher density than the hydrogens that you cannot see anything but the oxygen's inner shell when plotting on absolute density. This system has ten electrons in five closed molecular orbitals, and the plotted electron density is the superposition of those orbitals. The reported energies were right on the values expected for an STO-3G calculation.

With water, I worried initially that the calculation wouldn’t converge if I didn’t make certain symmetry considerations about the molecule, but that turned out to be unnecessary. After I solved my integral problems, the calculation converged immediately.

I also spent some time on methane using the same tools and basis…

Density low threshold

With methane (CH4) you can see quite clearly the tetrahedral shape of the molecule with the carbon at center and the hydrogens arranged in a halo around it. This image was also filtered to show the low density regions.

I had some really stunning revelations about organic chemistry when I examined the molecular orbital structure of methane somewhat more closely. Turns out that what you learn in organic chemistry class is a lie! I’ll talk about this in an independent blog post because it deserves to be highlighted, front and center.

As kind of a conclusion here, I will note that STO-3G is not the most perfect modern basis set for quantum mechanical calculations… and even with the most modern basis, you can't quite get there. Hartree-Fock does not include the correlated motion of electrons of opposite spin and therefore converges to a bound called the Hartree-Fock limit. The difference between molecular energies calculated at the Hartree-Fock limit and those actually observed in experiment is called the correlation energy, which can be recovered with greater accuracy using more modern post-Hartree-Fock techniques and higher quality basis sets than STO-3G. With a basis of infinite size and with calculations that include the correlation energy, you approach the truth. What is seen here is still just another approximation… better than the last, but still short of reality.

My little python program probably can't go there without a serious redesign that would take more time than I currently have available (and would probably involve me learning FORTRAN). The methane calculation took 12 seconds, compared to a twentieth of a second for molecular hydrogen. Given the roughly fourth-power scaling of the problem, benzene (6 carbons and 6 hydrogens, 36 minimal-basis functions) would take something close to an hour or two to calculate and maybe all night to plot. This using only STO-3G, three gaussians per orbital, which is dinky and archaic compared to a modern basis set that might put 50 or 60 functions on a single atom. Compared to what modern programs can do, benzene itself is but a toy.

The Classical version of NMR

As I’ve been delving quite deeply into numerical solutions of quantum mechanics lately, I thought I would take a step back and write about something a little less… well… less. One thing about quantum mechanics that is sometimes a bit mind-boggling is that classical interpretations of certain systems can be helpful to understand things about the quantum mechanics of the same system.

Nuclear magnetic resonance (NMR) dovetails quite nicely with my on-going series about how magnetism works. You may be familiar with NMR from a common medical technique that makes heavy use of it: Magnetic Resonance Imaging (MRI). MRI uses NMR to build density maps of human anatomy, exciting a radio signal from NMR-active atomic nuclei with magnetism and then mapping the 3D distribution of that signal's intensity in space. NMR itself is due specifically to the quantum mechanics of spin, particularly spin flipping, but it also has a classical interpretation which can aid in understanding what the more difficult physics mean. The system is very quantum mechanical, don't get me wrong, but the classical version is actually a pretty good analog for once.

I touched very briefly on the entry gate to classical NMR in this post. The classical physics describing the behavior of a compass needle depicts a magnetized needle which rotates to follow the lines of an external magnetic field. For a magnetic dipole, compass needle-like behavior will tend to dominate how that dipole interacts with a magnetic field unless the rotational moments of inertia of that dipole are very small. In that limit, the compass needle no longer simply swings back and forth. So, what does it do?

Let’s consider again the model of a compass needle awash in a uniform magnetic field…

1 compass needle new

This model springs completely from part 3 of my magnetism series. The only difference is that the dipole now points in some direction while the field is along the z-axis. The definition of the dipole is cribbed straight from part 4 of my magnetism series and expresses quantum mechanical spin as 'S.' We can back off from this a little and recognize that spin is simply angular momentum, so I switch to calling it 'L' instead, to slip away from the quantum. In this particular post, I'm not delving into quantum!

μ = (g q / 2m) L

In this formula, 'q' is the electric charge, 'm' is the mass of the object and 'g' is the dimensionless g-factor that connects spin angular momentum to a classical rotational moment (the combination gq/2m is the gyromagnetic ratio).

I will crib one more of the usual suspects from my magnetism series.

τ = μ × B

I originally derived this torque expression to show how compass needles swing back and forth in a magnetic field. Here, it helps to stop and think about the relationship between torque and angular momentum: the two are related in much the same manner as plain old linear momentum and force. Torque is the time rate of change of angular momentum, τ = dL/dt. Given that the magnetic moment can be expressed from angular momentum, as can torque, I rewrite the equation above in terms of angular momentum.

dL/dt = (g q / 2m) L × B

This differential equation sets the time derivative of the angular momentum equal to a cross product involving the angular momentum and the magnetic field. If you decompress the cross product, you get to a fairly interesting little coupled differential equation system.

L × B = (L_y B) x̂ − (L_x B) ŷ,  with B = B ẑ

 

This simplifies the cross product to the two relevant surviving terms after considering that the B-field only lies along one axis. This gives a vector equation…

(dL_x/dt) x̂ + (dL_y/dt) ŷ + (dL_z/dt) ẑ = (g q B / 2m) (L_y x̂ − L_x ŷ)

I've expressed the vector equation in component form so that you can see how it breaks apart. You get three equations, one for each hatted vector, connecting to each dimension of the three dimensional angular momentum. These can all be written separately.

dL_x/dt = (g q B / 2m) L_y
dL_y/dt = −(g q B / 2m) L_x
dL_z/dt = 0

I've grouped the B-field into the coefficient because it's a constant, and I've tried to take control of my reckless highlighting problem so that you can see how these differential equations are coupled. The z-component of the angular momentum is easy: it solves to a constant and is decoupled from 'x' and 'y'. The other two are not so easy. The coefficient is a special quantity called the Larmor frequency.

ω_L = g q B / 2m

This gives us a fairly tidy package.

dL_x/dt = ω_L L_y
dL_y/dt = −ω_L L_x
dL_z/dt = 0

I’ve always loved the solution of this little differential equation. There’s a neat trick here from wrapping the ‘x’ and ‘y’ components up as the two parts of a complex number.

ℓ = L_x + i L_y

You then just take a derivative of the complex number with respect to time and work your way through the definitions of the differential equation.

dℓ/dt = dL_x/dt + i dL_y/dt = ω_L L_y − i ω_L L_x = −i ω_L (L_x + i L_y) = −i ω_L ℓ

After working through this substitution, the differential equation is reduced to maybe the simplest first order differential equation you could possibly solve. The answer is but a guess.

ℓ(t) = A e^(−i ω_L t)

Which can be broken up into the original ‘x’ and ‘y’ components of angular momentum using the Euler relation.

L_x(t) = A cos(ω_L t)
L_y(t) = −A sin(ω_L t)

There’s an argument here that ‘A’ is determined by the initial conditions of the system and might contain a complex phase, but I’m going to just say that we don’t really care. You can more or less just say that all angular momentum is distributed between the x, y and z components of the angular momentum, part of it a linear combination that lies in the x-y plane and the rest pointed along the z-axis.

L(t) = A cos(ω_L t) x̂ − A sin(ω_L t) ŷ + L_z ẑ

And, as the original casting of the problem is in terms of the magnetic dipole moment, I can switch the angular momentum back to the dipole moment. Specifically, I can use the pseudo-quantum argument that the individual dipoles possess spin-1/2 magnitude angular momentum, ħ/2.

μ(t) = (g |q| ħ / 4m) [ sin θ cos(ω_L t) x̂ − sin θ sin(ω_L t) ŷ + cos θ ẑ ]

(θ is the fixed tip angle between the dipole moment and the z-axis.)

This gives an expression for how the classical atom-sized spin dipole will move in a uniform magnetic field. The absolute value on the charge in the coefficient constrains the expression to reflect only the size of the magnetic moment, given that the angular momentum was considered to be only a magnitude. Charge appears a second time inside the sine and cosine terms through the Larmor frequency: if the charge is negative, the sign flip on the frequency flips the sign of the sine while leaving the cosine unaffected, since sine is odd and cosine is even.

15 larmor precession

A classical magnetic dipole trapped in a uniform magnetic field pointed along the z-axis will undergo a special motion called gyroscopic "precession." In this picture, the ellipses are drawn to show the surfaces of a cylinder, in order to follow the positions of the dipole moment vectors with time; they are _not_ an electrical current loop as depicted in the first image above. The dipole moment vector traces out the surface of a cone as it moves. When viewed from above, the tip of a dipole moment with +q charge sweeps clockwise while one with -q sweeps counterclockwise. This motion is very similar to a child's top or a gyroscope…

gyroscope

This image is taken from Real World Physics; hopefully, you've had the opportunity to play with one of these. As mentioned, the direction of the gyroscopic motion is determined by the sign of the charge carrying the dipole moment. As also mentioned, this is a classical model of the motion, and it breaks down when you get to the quantum mechanics, but it is remarkably accurate in explaining how a tiny, atom-sized dipole "moves" under the influence of a magnetic field.

Dipolar precession appears in NMR during the process of free induction decay. As illustrated in my earlier blog post on NMR, you can see the precession:

NMR free induction decay signal (good shim).

In the sense of classical magnetization, you can see the signal from the dipolar gyroscopes in the plot above. One period of oscillation in this signal is one full sweep of the dipole moment around the z-axis. As the signal is nested in the "magnetization" of the NMR sample, the energy bleeding out of the magnetic dipoles into the observed radiowave signal saps the system and causes the precession in the magnetization to die down until the magnetization lies fully along the z-axis (again, the classical view!) In its relaxed state, the magnetization points along the axis of the external field, much as a compass needle does. The compass needle, of course, can't precess the way an atomic dipole moment can. And, as I keep pointing out, this is a classical interpretation of NMR… the quantum mechanics are appreciably similar, though decidedly not the same.

Because such a rotating dipole moment cannot oscillate indefinitely without radiating its energy away as electromagnetism, some action must be undertaken to set the dipole into precession in the first place: you must somehow tip it away from the external magnetic field, at which point it will begin to precess.

In my previous post on the topic, I gave several different explanations for how the dipoles are tipped in order to excite free induction decay. Verily, I said that you blast them with a radio frequency pulse in order to tip them. That is true, but very heavy handed. Classical NMR offers a very elegant interpretation for how dipoles are tipped.

To start, I will dial back to the picture we started with for the precession oscillation. In this set-up, the dipole starts relaxed, pointing along the z-axis B-field, and is subjected to a radio frequency pulse polarized so that the B-field of the radio wave lies in the x-y plane: the Poynting vector is along the x-axis and the radiowave magnetic field is along the y-axis.

16 setup 2nd field

In this, the radiowave magnetic field is understood to be much weaker than the powerful static magnetic field.

You can intuitively anticipate what must happen for a naive choice of frequency ‘ω.’ The direction of the magnetic dipole will bobble a tiny bit in an attempt to precess around the superposition of the B0 and B2 magnetic fields. But, because the B0 field is much stronger than the B2 field, the dipole will remain pointing nearly entirely along the z-axis. We could write it out in the torque equation in order to be explicit.

dL/dt = (g q / 2m) L × [ B₀ ẑ + B₂ cos(ωt) ŷ ]

Without thinking about the small time dependence of the B2 field, we know the solution to this equation from the work above for atomic-scale dipoles: the Larmor frequency would just depend on the vector sum of the two fields. This is, of course, a very naive response, and the expected precession would be very small and hard to detect, since the dipole is never displaced far from the average direction of the field, B2 being very small. And, if B2 is oscillatory, there is no point where the time average of the total field lies off the z-axis. The static field dominates, and precession would be weak at best.

Now, there is a condition where an arbitrarily weak B2 field can actually have a major impact on the direction of the magnetic dipole moment.

B₂ cos(ωt) ŷ = (B₂/2) [ cos(ωt) ŷ + sin(ωt) x̂ ] + (B₂/2) [ cos(ωt) ŷ − sin(ωt) x̂ ]

This series of algebraic manipulations takes the cosinusoidal radiowave B-field and splits it into two parts. If you squint at the math, the time dependent B-fields in the last line will spring out at you as counter-rotating magnetic fields. I got away with doing this by basically adding zero.

19 counter rotation

Why in the world did I do this? This seems like a horrible complexification of an already hard-to-visualize system.

To understand the reason for doing this, I need to make a digression. In physics, one of the most useful and sometimes overlooked tools you can run across is the idea of frame of reference: the standpoint from which you define your measurements. You can think of it as where you have decided to park your lawn chair in order to take stock of the events occurring around you. Watching a train trundle past on its tracks, I have clearly decided my frame of reference is someplace near the tracks, where I can measure that the train is moving with respect to me. I could also examine the same situation from inside the train car, looking out the window and watching the scenery scroll past. Both situations yield the same mathematical observations from two different ways of looking at the same thing.

In this case, the frame of reference that is useful to step into is a rotating frame. On the playground, when you sit down on a moving merry-go-round, you have shifted to a rotating frame of reference where the world appears to rotate around you. Sitting on that moving merry-go-round, if you watch someone toss a baseball over your head, you need to add some sort of fictitious force to your calculation to properly capture the path the ball follows from your point of view. This means reinventing your derivative with respect to time.

(dA/dt)_ext = (dA/dt)_in + Ω × A

This description of the rotating frame time derivative is simply a matter of tabulating all the vectors that contribute to the final derivative. (The vectors in my figure were misdrawn slightly because I initially had the rotating vector backward.) A vector seen in the rotating frame moves through displacements due both to the internal (in) rotation and to whatever external (ext) changes contribute to its final state. The portion due to the rotation (rot) is a position vector shifted by the rotation through an angle I called 'α', where positive rotation is defined in the right-handed sense –literally backward (left-handed) when seen from within the rotating frame. The angular displacement α equals the angular speed Ω times time, α = Ωt, and can be represented by a vector defined to point along the z-axis. A little bit of trig shows that the rotating frame derivative requires an extra term: a cross product between the rotational velocity vector and the vector being differentiated.

How does this help me?

dL/dt = ω_L (L × ẑ) + (g q / 2m) L × [ (B₂/2)(cos(ωt) ŷ + sin(ωt) x̂) + (B₂/2)(cos(ωt) ŷ − sin(ωt) x̂) ]

I've once again converted torque and magnetic moment into angular momentum in order to reveal the time derivative. It is noteworthy that the term involving the Larmor frequency directly, the first term on the right, looks very similar to the form of the rotating frame derivative if the Larmor frequency is taken to be the angular velocity of the rotating frame. Moreover, I have already defined two other magnetic field terms, rotating in opposition to each other, whose frequencies of rotation I have not yet selected.

22 cancellation in rotating frame

A rotating frame can be chosen where the term involving the static magnetic field is canceled by the rotation: a clockwise rotation at the Larmor frequency. If the rotation frequency of B2 is also chosen to be the Larmor frequency, the clockwise-rotating B2 term enters the rotating frame without time dependence, while the frequency of the counter-rotating term doubles. As such, one version of the B2 field can be chosen to rotate with the rotating frame.

23 cancellation 2

In the final line, the primed unit vectors are taken to be with the rotating frame of reference. So, two things have happened: the effect of the powerful static field is canceled purely by the rotation of the frame, and the effect of the counter-rotating field, spinning at twice the Larmor frequency in the opposite direction, averages out to no direction at all. The only remaining significant term is the field that is stationary with respect to the rotating frame, which I've taken to lie along the y'-axis.

The differential equation I've ended up with is exactly like the one solved for the powerful static field by itself far above, but the precession now occurs around the y'-axis at a frequency ω2 –the Larmor-style frequency built from the surviving B2/2 field (ω2 = g q B2 / 4m).

24 rotating frame solutions

If I take the starting state of the magnetic dipole moment to be relaxed along the z-axis, no component will ever exist along the y’-axis… the magnetic dipole moment will purely inhabit the z-x’ plane in the rotating frame.

25 motion in rotating frame

As long as the oscillating radiowave magnetic field is present, the magnetic dipole moment will continue to revolve around the y’-axis in the rotating frame. In the stationary frame, the dipole moment will tend to follow a complicated helical path both going around the static B-field and around the rotating y’-axis.

If you irradiate the sample with radiowaves for a quarter of the period associated with ω2 (a duration of π/2ω2, the classic "π/2 pulse"), the magnetic dipole moment rotates around until it lies in the x-y plane. You then shut off the radiowave source and watch as the NMR sample undergoes a free induction decay until the magnetization lies back along the static B-field.

This is the classical view of what's going on: a polarized radiowave applied at the Larmor frequency torques the atomic magnetic dipoles around in the sample until they can undergo free precession. Once the radiowave is shut off, the magnetization performs a free induction decay. Applying the radiowave at the Larmor frequency is called driving the system at resonance; only at that frequency can the comparatively weak radiowave field produce a large cumulative effect against the overwhelming static B-field.

I’ve completely avoided the quantum mechanics of this system. The rotating frame of Larmor precession is fairly accurate for describing what’s happening here until you need to consider other quantum mechanical effects present in the signal, such as spin-spin coupling of neighboring NMR active nuclei. Quantum mechanics are ultimately what’s going on, but you want to avoid the headaches associated with that wherever possible.

I do have it in mind to rewrite a long defunct post that described the quantum mechanics of the two-state system specifically in how it describes NMR. It will happen someday, honestly!

The Quantum Mechanics in the Gap

A pet project of mine is trying to fill the space between my backgrounds, to understand how one thing leads to another.

When a biochemist learns quantum mechanics (QM), it happens from a background where little mathematical sophistication is required; maybe particle-in-a-box appears in the middle of a low grade Physical Chemistry class, and many results of QM are qualitatively encountered in General Chemistry or perhaps in greater detail in Organic Chemistry. A biochemist does not need to be perfect at these things, since the meat of biochemistry is a highly specialized corner of organic chemistry dealing with a relatively small number of molecule types, where the complexity of the molecules tends to force the details into profound abstraction. Proteins and DNA, membranes and so on are all expressed mostly as symbols, sequences or structural motifs. Reactions occur symbolically, where chemists have worked out the details of how a reaction proceeds (or not) without really saying anything very profound about it. This created a situation of deep frustration for me once upon a time, because it always seemed like I was relying on someone else to tell me the specifics of how something actually worked. I always felt helpless. Enzymatic reaction mechanisms always drove me crazy because they seem very ad hoc; no reason they shouldn't be, since evolution is always ad hoc, but the symbology used always made it opaque to me what was actually happening.

When I was purely a biochemist, an undergraduate once asked me whether they could learn QM in chemistry and I honestly answered “Yes” that everything was based on QM, but withheld the small disquiet I felt that I really didn’t believe that I understood how it fit in. Background that I had in QM being as it was at that point, I didn’t truly know a quantum dot from a deviled egg. Yes, quantum defines everything, but what does a biochemist know of quantum? Where does bond geometry come from? Everything seems like arbitrary tinker toys using O-Chem models. Why is it that these things stick together as huge, solid ball-and-stick masses when everything is supposed to be puffy wave clouds? Where is this uncertainty principle thing people vaguely talk about in hushed tones when referring to the awe inspiring weirdness that is QM? You certainly would never know such details looking at model structures of DNA. This frustration eventually drove me to multiple degrees in physics.

In physics, QM takes on a whole other dimension. The QM that a physicist learns is concerned with gaining the mathematical skill to deal with the core of QM while retaining the flexibility to specialize in a needed direction. Quantum Theory is a gigantic topic which no physicist knows in entirety. There are two general cousins of theory which move in different directions with Hamiltonian formalisms diverging from the Lagrangian. They connect, but have power in different situations. Where you get very specific on a topic is sometimes not well presented –you have to go a long way off the beaten path to hit either the Higgs Boson or General Relativity. Physicists in academia are interested in the weird things lying at the limits of physics and focus their efforts on pushing to and around those weirdnesses; you only focus efforts on specializations of quantum mechanics as they are needed to get to the untouched things physicists actually care to examine. This means that physicists sometimes focus little effort on tackling topics that are interesting to other fields, like chemistry… and the details of the foundations of chemistry, like the specifics of the electronic structure of the periodic table, are under the husbandry of chemists.

If you read my post on the hydrogen atom radial equation, you saw the most visible model atom. The expanded geometries of this model inform the structure of the periodic table. Most of the superficial parts of chemistry can be qualitatively understood from examining this model. S, P, D, F and so on orbitals are assembled from hydrogenic wave equations… at least they can be on the surface.

Unfortunately, the hydrogenic orbitals can only be taken as an approximation to all the other atoms. There are basically no analytic solutions to the wave functions of any atom beyond hydrogen.

Fine structure, hyperfine structure and other atomic details emerge from perturbations of the hydrogenic orbitals. Perturbation is a powerful technique, except that it's not an exact solution. Perturbation theory approaches a solution by assuming that some effect is a small departure from a much bigger situation that is already solved; you then do an expansion in which successive terms approach the perturbative part more and more closely. Hydrogenic orbitals can be used as a basis for this. Kind of. If the "perturbation" becomes too big relative to the base situation, the expansion necessary to approximate it becomes too big to express. Technically, you can express any solution for any situation from a "complete" basis, but if the perturbation is too large compared to the context of the basis, the fraction of the basis required for an accurate expression becomes bigger than the "available" basis before you know it.

When I refer to a “basis” here, I’m talking about Hilbert spaces: the use of orthogonal function sets as a method to compose wave equations. This works like a Fourier series, which is built on one of the most common Hilbert space basis sets. Many Hilbert spaces contain infinitely many basis functions, which is more than the biggest number of functions any computer can use. The reality is that you can only ever actually use a small portion of a basis.
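A quick toy picture of what truncating a basis does, in Python (my own illustration, nothing to do with GAMESS): approximate a square wave with a partial Fourier sine series and watch the error shrink, but never vanish, as more basis functions are kept.

import numpy as np

x = np.linspace(0.0, 2.0*np.pi, 2000)
target = np.sign(np.sin(x))  # a square wave as the "true" function

def partial_sum(x, n_terms):
    # Fourier sine series of the square wave: (4/pi) * sum of sin(k x)/k over odd k
    total = np.zeros_like(x)
    for k in range(1, 2*n_terms, 2):
        total += (4.0/np.pi) * np.sin(k*x) / k
    return total

for n in (1, 5, 25, 100):
    err = np.mean(np.abs(target - partial_sum(x, n)))
    print(f"{n:4d} basis functions kept: mean error = {err:.4f}")

An infinite series nails it; any computer-sized truncation is an approximation. The same tradeoff shows up in every basis set discussed below.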

The hydrogen situation is merely a prototype. If you want to think about helium or lithium or so on, the hydrogenic basis becomes merely one choice of how to approach the problem. The Hamiltonians of other atoms are structures that can in some cases be bigger than is easily approachable by the hydrogenic basis. Honestly, I’d never really thought very hard about the other basis sets that might be needed, but technically they are a very large subject, since they are needed for the 120-odd other atoms on the periodic table beyond hydrogen. These other atoms have wave functions that are kind of like those of hydrogen, but different. The S-orbital of hydrogen is a good example of the S-orbitals found in many atoms, even though the functional form for other atoms is definitely different.

This all became interesting to me recently on the question of how to get to molecular bonds as more than the qualitative expression of hydrogenic orbital combinations. How do you actually calculate bond strengths and molecular wave functions? These are important to understanding the mechanics of chemistry… and to poking a finger from quantum mechanics over into biochemistry. My QM classes touched on it, admittedly, deep in the quagmire of the other miscellaneous quantum necessary to deal with a hundred different things. I decided to take a sojourn into the bowels of quantum chemistry and develop a competence with the Hartree-Fock method and molecular orbitals.

The quantum mechanics of quantum chemistry is, surprisingly enough, mechanically simpler than one might immediately expect. This is splitting hairs, considering that all quantum is difficult, but it is actually somewhat easier than the jump from no quantum to some quantum. Once you know the basics, you pretty much have everything needed to get started. Still, as with all QM, this is not to be scoffed at; there are challenges in it.

This form of QM is a Hamiltonian formalism whose first mathematics originated in the 1930s. The basics revolve around the time-independent Schrödinger equation. Where it jumps to being modern QM is in the utter complexity of the construct… simple individual parts, just crazily many of them. This type of QM is referred to as “Many Body theory” because it involves wave equations containing dozens to easily hundreds of interactions between individual electrons and atomic nuclei. If you thought the Hamiltonian I wrote in my hydrogen atom post was complicated, consider that it was only for one electron being attracted to a fixed center… and did not even include the components necessary to describe the mechanics of the nucleus. The many body theory used to build up atoms with many electrons works for molecules as well, so learning generalities about the one case is learning about the other case too.

As an example of how complicated these Schrödinger equations become, here is the time-independent Schrödinger equation for lithium.

Lithium Schrodinger

This equation is already simplified to atomic units to make it tractable, and the part describing the kinetic energy of the nucleus is left in. All four of those Del-squared (Laplacian) operators open up into 3D differentials like the single one present in the hydrogen atom. The next six terms describe the electrostatic interactions of the three electrons among themselves and with the nucleus. This is only one nucleus and three electrons.
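In case the image doesn’t render for you, here is my reconstruction of that equation in LaTeX, in atomic units (M is the lithium nuclear mass, R is the nuclear coordinate, the r_i are the three electron coordinates, and the nuclear charge is Z = 3):

\left[ -\frac{1}{2M}\nabla_N^2 \;-\; \frac{1}{2}\sum_{i=1}^{3}\nabla_i^2 \;-\; \sum_{i=1}^{3}\frac{3}{\left|\mathbf{r}_i-\mathbf{R}\right|} \;+\; \sum_{i<j}\frac{1}{\left|\mathbf{r}_i-\mathbf{r}_j\right|} \right]\Psi = E\,\Psi

Count them up: four Laplacians (one nuclear, three electronic) and six Coulomb terms (three electron-nucleus attractions, three electron-electron repulsions), exactly as described.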

As I already mentioned, there are no closed-form analytical solutions for structures more complicated than hydrogen, so many body theory is about figuring out how to make useful approximations. And, because of the complexity, it must make some very elegant approximations.

One of the first useful features of QM for addressing situations like this is one I personally overlooked when I initially learned it. With QM, most situations that you might encounter have no exact solutions. Outside of a scant handful of cases, you can’t truly “solve” anything. But, for all the histrionics that go along with that, the true ground state, the lowest-lying eigenstate, is the case of lowest possible energy for the given circumstance. If you make a totally random guess about the form of the wave function which solves a given Hamiltonian, you are assured that the actual ground state has a lower (or at worst equal) energy. Since that’s the case, you can play a game: if I make some random guess about the form of the solution, another guess that has a lower energy is a better guess regarding the actual form. You can iterate on this, always making adjustments to the guess such that it achieves a lower energy, until eventually it won’t go any lower. The actual solution still ends up lower, but maybe not by very far. Designing energy-minimizing guesses this way converges toward the actual solution and is usually accomplished by systematic mathematical minimization. This method is called “Variation” and is one of the major methods for constructing approximations of an eigenstate. Also, as you might expect, this is a numerical strategy and it makes heavy use of computers in the modern day, since the guesses are generally very big, complicated mathematical functions. Variational strategies are responsible for most of our knowledge of the electronic structure of the periodic table.

Using computers to make guesses has been elevated to a high art. Literally, a trial function with a large number of unknown constants is tried against the Hamiltonian; you then take a derivative of the energy to see how it varies as a function of any one constant and adjust that constant until the energy is at a minimum, where the first derivative is near zero and the second derivative is positive, indicating a minimum. Do this over and over again with all the available constants in the function and eventually the trial wave function converges toward the actual solution.
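Here’s the smallest possible version of that game, in Python (a textbook exercise, not anybody’s production code): use a single Gaussian e^(-a r²) as the trial wave function for the hydrogen atom. The energy expectation works out analytically to E(a) = 3a/2 − 2√(2a/π) in hartrees, so the whole variational procedure collapses to minimizing one function of one constant:

import numpy as np
from scipy.optimize import minimize_scalar

def trial_energy(a):
    # <T> = 3a/2 and <V> = -2*sqrt(2a/pi) for the normalized Gaussian exp(-a r^2)
    return 1.5*a - 2.0*np.sqrt(2.0*a/np.pi)

best = minimize_scalar(trial_energy, bounds=(1e-3, 10.0), method="bounded")
print(f"optimal exponent a = {best.x:.4f}")           # analytically 8/(9*pi) = 0.2829
print(f"minimized energy   = {best.fun:.4f} hartree") # -0.4244, vs the exact -0.5

The minimized guess sits above the exact -0.5 hartree, just as the variational principle guarantees; adding more Gaussians (more constants to turn) closes the gap.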

Take that in for a moment. We understand the periodic table mainly by guessing at it! A part of what makes these wave functions so complicated is that the state of any one electron in any system more complicated than hydrogen is dependent on every other electron and charged body present, as shown in the lithium equation above. The basic orbital shapes are not that different from hydrogen, even requiring spherical harmonics to describe the angular shape, but the specific radial scaling and distribution is not solvable. These electrons influence each other in several ways. First, they place plain old electrostatic pressure on one another: all electrons push against each other by their charges and shift each other’s orbitals in subtle ways. Second, they exert what’s called “exchange pressure” on one another. In this, every electron in the system is indistinguishable from every other, and electrons specifically deal with this by requiring that the wave function be antisymmetric, such that no electron can occupy the same state as any other. You may have heard this called the Pauli Exclusion Principle, and it is just a counting effect. In a way, this may be why quantum classes tend to place less weight on the hydrogen atom radial equation: even though it holds for hydrogen, it works for nothing else.

Multi-atom molecules stretch the situation even further. Multiple atoms, unsolvable in and of themselves, are placed together in some regularly positioned array in space, with unsolvable atoms now compounded into unsolvable molecules. Electrons from these atoms are then all lumped together collectively in some exchange antisymmetric wave function where the orbitals are dependent on all the bodies present in the system. These orbitals are referred to in quantum chemistry as molecular orbitals and describe how an electron cloud is dispersed among the many atoms present. Covalent electron bonds and ionic bonds are forms of molecular orbital, where electrons are dispersed between two atoms and act to hold these atoms in some fixed relation with respect to one another. The most basic workhorse method for dealing with this highly complicated arrangement is a technique referred to as the Hartree-Fock method. Modern quantum chemistry is all about extensions beyond Hartree-Fock, which often use this method as a spine for producing an initial approximation and then switch to other variational (or perturbative) techniques to improve the accuracy of the initial guess.

Within Hartree-Fock, molecular orbitals are built up out of atomic orbitals. The approximation postulates, in part, that each electron sits in some atomic orbital which has been contributed to the system by a given atom where the presence of many atoms tends to mix up the orbitals among each other. To obey exchange, each electron literally samples every possible contributed orbital in a big antisymmetric superposition.

Hartree-Fock is sometimes referred to as Self-Consistent Field theory. It uses linear superpositions of atomic orbitals to describe the molecular orbitals that actually contain the valence electrons. In this, the electrons don’t really occupy any one atomic orbital, but some combination of many orbitals all at once. For example, a version of the stereotypical sigma covalent bond is actually a symmetric superposition of two atomic S-orbitals. The sigma bond contains two electrons and is made antisymmetric overall by the spin states, which pair into the antisymmetric singlet so that the spatial part built from the contributing atoms’ S-orbitals can enter as a symmetric combination. This gets weird when you consider that you can’t tell which electron is spin up and which is spin down, so they’re both in a superposition.

Sigma bond

The sigma bond shown here in Mathematica was actually produced from two m=0 hydrogenic p-orbitals. The density plot reflects probability density. The atom locations were marked afterward in PowerPoint. The length of the bond here is arbitrary and not energy-minimized to any actual molecule. This was not produced by Hartree-Fock (though it would occur in Hartree-Fock) and is added only to show what molecular bonds look like.

For completeness, here is a pi bond.

Pi bond
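If you’d like to make plots in this spirit without Mathematica, here’s a rough Python sketch (my own toy: hydrogenic-style p functions, an arbitrary separation, no energy minimization, exactly as cautioned above) that draws both the sigma-like and pi-like densities from superposed orbitals on two centers:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-9.0, 9.0, 400)
z = np.linspace(-13.0, 13.0, 400)
X, Z = np.meshgrid(x, z)
d = 3.0  # half the separation between the two centers; arbitrary

def p_along(X, Z, zc):
    # m=0 (p_z-like) function centered at (0, zc): lobes point along the bond axis
    r = np.sqrt(X**2 + (Z - zc)**2)
    return (Z - zc) * np.exp(-r/2.0)

def p_perp(X, Z, zc):
    # p_x-like function centered at (0, zc): lobes point across the bond axis
    r = np.sqrt(X**2 + (Z - zc)**2)
    return X * np.exp(-r/2.0)

sigma_density = (p_along(X, Z, -d) - p_along(X, Z, +d))**2  # head-on overlap
pi_density    = (p_perp(X, Z, -d) + p_perp(X, Z, +d))**2    # side-on overlap

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, dens, name in zip(axes, (sigma_density, pi_density), ("sigma-like", "pi-like")):
    ax.imshow(dens, extent=(x.min(), x.max(), z.min(), z.max()), origin="lower")
    ax.set_title(name)
    ax.set_xlabel("x"); ax.set_ylabel("z")
plt.tight_layout()
plt.show()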

At the start of Hartree-Fock, the molecular orbitals are not known, so the initial wave function guess is that every electron is present in a particular atomic orbital within the mixture. Electron density is then determined throughout the molecule and used to furnish the repulsion and exchange terms among the electrons. This is then solved for energy eigenvalues and spits out a series of linear combinations describing the orbitals where the electrons are actually located, which turn out to be different from the initial guess. These new linear combinations are then thrown back into the calculation to determine electron density and exchange, which are once more used to find energy eigenvalues and orbitals, which are once again different from the previous guess. As the crank is turned repeatedly, the output orbitals converge onto the orbitals used to calculate the electron density and exchange. When these no longer particularly change between cycles, the states describing electron density will be equal to those associated with the eigenvalues: the input becomes self-consistent with the output, hence giving the technique its name by production of a self-consistent field.
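In skeleton form, the cycle reads something like the following Python sketch. This is only the shape of the loop; the matrix names are the conventional textbook ones, the two-electron builder is hidden behind a function, and none of this is the actual GAMESS machinery:

import numpy as np
from scipy.linalg import eigh

def scf(H_core, S, build_G, n_occ, max_cycles=100, tol=1e-8):
    # H_core:  one-electron integrals (kinetic + nuclear attraction)
    # S:       overlap matrix of the basis functions
    # build_G: function producing the repulsion/exchange matrix from a density P
    # n_occ:   number of doubly occupied orbitals (closed shell)
    P = np.zeros_like(H_core)                 # starting guess: zero density (bare core Hamiltonian)
    E_old = 0.0
    for cycle in range(max_cycles):
        F = H_core + build_G(P)               # Fock matrix from the current density
        eps, C = eigh(F, S)                   # solve F C = S C eps for orbitals
        C_occ = C[:, :n_occ]                  # occupy the lowest-energy orbitals
        P = 2.0 * C_occ @ C_occ.T             # new density from the new orbitals
        E = 0.5 * np.sum(P * (H_core + F))    # electronic energy for this cycle
        if abs(E - E_old) < tol:              # output matches input: self-consistent
            return E, eps, C
        E_old = E
    raise RuntimeError("SCF did not converge")

Turning the crank is literally the for loop; the convergence test is the “no longer particularly change between cycles” criterion.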

Once the self-consistent electron field is reached, the atomic nuclei can be repositioned within it in order to minimize the electrostatic stresses on the nuclei. Typically, the initial locations of the nuclei must be guessed, since they are themselves not usually exactly known. A basic approximation of the Hartree-Fock method is the Born-Oppenheimer approximation, in which the massive atomic nuclei are expected to move on a much slower time scale than the electrons, meaning that the atomic nuclei create a stationary electrostatic field which arranges the electrons, but are later moved by the average dispersion of the electrons around them. Minimizing the atomic positions necessitates re-calculation of the electron field, which in turn may require that the atomic positions be readjusted again, until eventually the electron field no longer alters the atomic positions and the positions are consistent with the configuration of the surrounding electrons. With the output energy of the Hartree-Fock method minimized by rearranging the nuclei, this gives the relaxed configuration of the molecule. And, from this, you automatically know the bonding angles and bond lengths.

The Born-Oppenheimer approximation is a natural simplification of the real wave function which splits the wave functions of the nuclei away from the wave functions of the electrons; it can be considered valid predominantly because of the huge difference in mass (a factor of ~100,000) between electrons and nuclei, where the nuclei are essentially not very wave-like relative to the electrons. In the lithium case above, it would simply mean removing the first term of the Schrödinger equation, the one involving the nuclear kinetic energy, and understanding that E is then no longer the total energy of the system. For most of the shape of a molecule, the atomic nuclei can be treated as point-like, while the electrons and their orbitals constitute pretty much all of the important molecular structure.

As you can see from the description, a huge number of calculations is required; I’ve described them only topically. Figuring out the best way to run Hartree-Fock has been an ongoing process since the 1930s and has been raised to a high art nearly 90 years later. At the superficial level, the Hartree-Fock approximation is hampered by not placing the nuclei directly in the wave function and by not allowing full correlation among the electrons. This weakness is remedied by variational and perturbative post-Hartree-Fock techniques that have come to flourish with the steady increase of computational power under Moore’s Law. That said, the precision calculation of overlap integrals is so computationally demanding on the scale of molecules that the hydrogen atom eigenstate solutions are impractical as a basis set.

This actually really caught me by surprise. Hartree-Fock has a very weird and interesting basis set type which is used in place of the hydrogen atom orbitals. And, the reason for the choice is predominantly to reduce a completely intractable computational problem to an approachable one. When I say “completely intractable,” I mean that even the best supercomputers available today still cannot calculate the full, completely real wave functions of even small molecules. With how powerful computers have become, this should be a stunning revelation. This is actually one of the big motivating factors toward using quantum computers to make molecular calculations; the quantum mechanics arise naturally within the quantum computer enabling the approximations to strain credulity less. The approximation used for the favored Hartree-Fock basis sets is very important to conserving computational power.

The orbitals built up around the original hydrogen atom solution to approximate higher atoms have a radial structure that has come to be known as Slater orbitals. Slater orbitals are variational functions that resemble the basic hydrogen atom orbital, which, as you may be aware, is an exponential-Laguerre polynomial combination. Slater orbitals are basically combinations of exponentials which are then minimized by variation to fit the Hamiltonians of higher atoms. As I understand it, Slater orbitals can be calculated through at least the first two rows of the periodic table. These orbitals, which are themselves approximations, are actually not the preferred basis set for molecular calculations, but they ended up being one jumping-off point for producing early versions of the preferred basis set.

The basis set that is used for molecular calculations is the so-called “Gaussian” orbital basis set. The Gaussian radial orbitals were first produced by simple least-squares fits to Slater orbitals. In this, the Slater orbital is taken as a prototype and several Gaussian functions in a linear combination are fitted to it until chi-squared becomes as small as possible… while the Slater orbital can only be exactly reproduced by an infinite number of Gaussians, it can be fairly closely reproduced by typically just a handful. Later Gaussian basis sets were also produced by skipping the Slater orbital prototype and applying Hartree-Fock directly to atomic Hamiltonians (as I understand it). The Gaussian fit to the Slater orbital is pretty good across most of the volume of the function, except at the center, where the Slater orbital has a cusp (from the exponential) while the Gaussian is smooth… with an infinite number of Gaussians in the fit, the cusp can be reproduced, but it is a relatively small part of the function.
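Here is what such a fit looks like in Python, schematically (a plain least-squares fit on the radial shape, which is the simplest version of the idea; the published basis sets were produced with more care than this):

import numpy as np
from scipy.optimize import curve_fit

r = np.linspace(0.0, 6.0, 600)
slater_1s = np.exp(-r)  # the Slater 1s radial prototype, zeta = 1

def three_gaussians(r, c1, a1, c2, a2, c3, a3):
    # A contraction of three s-type Gaussian primitives
    return c1*np.exp(-a1*r**2) + c2*np.exp(-a2*r**2) + c3*np.exp(-a3*r**2)

p0 = [0.4, 0.1, 0.4, 0.4, 0.2, 2.0]  # rough starting guesses for the fit
params, _ = curve_fit(three_gaussians, r, slater_1s, p0=p0)
residual = np.abs(three_gaussians(r, *params) - slater_1s)
print(f"worst deviation: {residual.max():.4f} (it sits near the cusp at r = 0)")

Three Gaussians already hug the exponential over most of its volume; the stubborn leftover error is the cusp.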

Orbitals comparison

Here is a comparison of a Gaussian orbital with the equivalent Slater orbital from my old hydrogen atom post. The scaling of the Slater orbital is specific to the hydrogen atom, while the Gaussian scaling is not specific to any atom.

The reason that the Gaussian orbitals are the preferred model is strictly a computational efficiency issue. Within the application of Hartree-Fock, there are several integral calculations that must be done repeatedly. Performing these integrations is computationally very, very costly on functions like the original hydrogen atom orbitals. With Gaussian radial orbitals, products of Gaussians are themselves Gaussians, and the integrals all end up having the same closed forms, meaning that one can simply transfer constants from one formula to another without doing any numerical busy work at all. Further, the Gaussian orbitals can be expressed in straightforward Cartesian forms, allowing them to be translated around space with little difficulty and generally making them easy to work with (I dare you: try displacing a hydrogen orbital away from the origin while it remains in spherical-polar form. You’ll discover you need the entire Hilbert space to do it!). As such, with Gaussians, very big calculations can be performed extremely quickly on a limited computational budget. The advantage here is a huge one.
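The identity doing all the work here is the Gaussian product theorem, which you’ll find near the front of any quantum chemistry text: the product of two s-type Gaussians on centers A and B is a single Gaussian on an intermediate center P,

e^{-\alpha\left|\mathbf{r}-\mathbf{A}\right|^{2}}\, e^{-\beta\left|\mathbf{r}-\mathbf{B}\right|^{2}} = \exp\!\left(-\frac{\alpha\beta}{\alpha+\beta}\left|\mathbf{A}-\mathbf{B}\right|^{2}\right) e^{-(\alpha+\beta)\left|\mathbf{r}-\mathbf{P}\right|^{2}}, \qquad \mathbf{P}=\frac{\alpha\mathbf{A}+\beta\mathbf{B}}{\alpha+\beta}

Every overlap, kinetic and repulsion integral between Gaussian orbitals collapses onto closed forms built from this, which is exactly the constant-shuffling shortcut described above.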

One way to think about it is like this: Gaussian orbitals can be used in molecular calculations roughly the same way that triangles are used to build polyhedral meshes in computer graphics renderings.

Gaussians are not the only basis set used with Hartree-Fock. I’ve learned only a little so far about this alternative implementation, but condensed matter folk also use the conventional Fourier basis set of sines and cosines while working on a crystal lattice. Sines and cosines are very handy in situations with periodic boundaries, which is what you find in the regimented array of a crystal lattice.

Admittedly, as far as I’ve read, Hartree-Fock is an imperfect solution to the whole problem. I’ve mentioned some of the aspects of the approximation above, and it must always be remembered that it fails to capture certain aspects of the real phenomenon. That said, Hartree-Fock provides predictions that are remarkably close to actual measured values, and the approximation lends itself well to post-processing that further improves the outcomes to an impressive degree (if you have the computational budget).

I found this little project a fruitful one. This is one of those rare times when I actually blew through a textbook as if I were reading a novel. Some of the old citations regarding self-consistent field theory are truly pivotal, important papers: I found one from about the mid-1970s which had 10,000 citations on Web of Science! In the textbook I read, the chemists goofed up an important derivation necessary to produce a workable Hartree-Fock program, and I was able to hunt down the 1950 paper detailing said calculation. Molecular orbital theory is a very interesting subject, and I think I’ve made some progress toward understanding where molecular bonds come from and what tools are needed to describe how QM produces molecules.

(Edit 11-6-18):

One cannot walk away from this problem without learning exactly how monumental the calculation is.

In Hartree-Fock theory, the wave equations are expressed in the form of determinants in order to encapsulate the antisymmetry of the electron wave equation. These determinants are an antisymmetrized sum of permutations over the orbital basis set. Each permutation ends up being its own term in the total wave equation. The number of such terms goes as the factorial of the number of electrons contained in the wave. Moreover, the probability density is the square of the wave equation.

Factorials become big very quickly.

Consider a single carbon atom. This atom contains 6 electrons. From this, the total wave equation for carbon has 6! terms. 6! = 720. The probability density then is 720^2 terms… which is 518,400 terms!

That should make your eyes bug out. You cannot ever write that in its full glory.

Now, for a simple molecule, let’s consider benzene. That’s six carbons and six hydrogens. So, 6×6+6 = 42 electrons. The determinant would contain 42! terms. That is 1.4×10^51 terms!!!! The probability density is about 2×10^102 terms…

Avogadro’s number is only 6.02×10^23.
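These counts take one line of Python each to confirm, since Python integers don’t overflow:

import math

print(math.factorial(6), math.factorial(6)**2)  # 720 and 518400 (carbon)
print(f"{math.factorial(42):.2e}")              # about 1.41e+51 (the benzene determinant)
print(f"{float(math.factorial(42))**2:.2e}")    # about 1.97e+102 (its square)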

If you are trying to graph the probability density as a function of position, the cross terms are important to determining the value of the density at any location, meaning that you have all 10^102 terms. This assures that you can never graph it in order to visualize it! But if you integrate that across all of space for the coordinates of each electron (an integral with 42 3D measures), every cross term containing an electron in two different states dies by orthogonality. Because no term can survive if it has even one zero among its 42 3D measures, only the diagonal terms survive, allowing the normalized probability density, integrated over space, to evaluate simply to the number of electrons in the wave function. Integrating the wave function totally cleans up the mess, meaning that you can still do integrals to find expectation values while thinking only about sums across the 42 electrons. This orthogonality is why you can do quantum chemistry at all: for an operator working in a single electron space, every overlap that doesn’t involve that electron must equal 1 for a given term to survive, which happens in a vast minority of cases.

For purposes of visualization, these equations are unmanageably huge. Not merely unmanageably, but unimaginably so. So huge, in fact, that they cannot be expressed in full except in the most simplistic cases. Benzene is only six carbons, and it’s effectively impossible to tackle in the sense of the total wave equation. The best you can do is look for expressions for the molecular orbitals… which may contain only N terms (as many as 42 for benzene). Molecular orbitals can be considered the eigenstates of the molecule, where each one can be approximated to contain only one electron (or one pair of electrons in the case of closed-shell calculations). The fully quantum weirdness here is that every electron samples every eigenstate, which is basically impossible to deal with.

For anyone who is looking, some of the greatest revelations which constructed organic chemistry as you might know it occurred as early as 1930. Linus Pauling wrote a wonderful paper in 1931 where he outlines one way of anticipating the tetrahedral bond geometry of carbon, performed without these crazy determinant wave functions and with simple consideration of the vanilla hydrogenic eigenstates. Sadly, without resorting to more modern methods, these are qualitative results.

(Edit 11-21-18):

Because I can never just leave a problem alone, I’ve been steadily cobbling together a program for running Hartree-Fock. If you know me, you’ll know I’m a better bench chemist than I am a programmer, despite how much time I’ve spent on the math. I got interested because I just understand things better if I do them myself. You can’t calculate these things by hand, only by computer, so off I went into a programming language that I am admittedly pretty incompetent at.

In my steady process of writing this program, I’ve just hit a point where I can calculate some basic integrals. Using the STO-3G basis set produced by John Pople’s lab in 1969, I used my routines to evaluate the ground state energy of the hydrogen atom. There is a lot of script in this program devoted to working the basic integrals, and it becomes really, really hard to diagnose whether the program is working or not because of the density of calculations. So, it spits out a number… is it the right number? This is very hard to tell.

I used the 1s orbital from STO-3G to compute the kinetic and nuclear interaction energies and then summed them together. With bated breath, one click of the key to convert to eV…

Bam: -13.47 eV!

You have no idea how good that felt. The accepted value of the hydrogen atom ground state is -13.6 eV. I’m only off by about 1%! That isn’t bad using an archaic basis set which was intended for molecular calculations. Since my little laptop is a supercomputer next to the machines that originally created STO-3G, I’d say I’m doing pretty well.

Not sure how many lines of code that is, but for me, it was a lot. Particularly since my program is designed to accommodate higher angular momenta than the S orbital and more complicated basis sets than STO-3G. Cranking out the right number here feels really good. I can’t help but goggle at how cheap computational power has become since the work that got Pople his Nobel prize.
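For anyone who wants to reproduce a number like this, here’s a compact standalone version of the calculation in Python (my own re-derivation, using the published STO-3G 1s contraction for a Slater exponent of 1.0 and the closed-form integrals for concentric s-type Gaussians; it is not my actual program):

import numpy as np

# STO-3G 1s contraction for zeta = 1.0 (Hehre, Stewart and Pople, 1969)
alpha = np.array([0.109818, 0.405771, 2.227660])  # Gaussian exponents
d     = np.array([0.444635, 0.535328, 0.154329])  # contraction coefficients
c = d * (2.0*alpha/np.pi)**0.75                   # fold in primitive normalization

S = T = V = 0.0
for i in range(3):
    for j in range(3):
        p = alpha[i] + alpha[j]
        s_ij = (np.pi/p)**1.5                     # overlap of two concentric s Gaussians
        S += c[i]*c[j] * s_ij
        T += c[i]*c[j] * (3.0*alpha[i]*alpha[j]/p) * s_ij  # kinetic energy integral
        V += c[i]*c[j] * (-2.0*np.pi/p)           # attraction to a Z = 1 nucleus at the center

E = (T + V) / S  # Rayleigh quotient, in hartrees
print(f"E = {E:.4f} hartree = {27.2114*E:.2f} eV")  # about -0.4949 hartree = -13.47 eV

The same -13.47 eV pops out, about 1% above the true -13.6 eV, which is the price of representing the 1s cusp with only three Gaussians.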

(edit 12-4-18):

Still spending time working on this puzzle. There are some other interesting adjustments to my thinking as I’ve been tackling this challenge which I thought I might write about.

First, I really didn’t specify the symmetry that I was referring to above which gives rise to the huge numbers of terms in the determinant-style wave functions. In this sort of wave function, which contains many electrons all at once, the fermionic structure must be antisymmetric on exchange. This is expressed with an operator called the “exchange operator” whose sole purpose is to trade electrons within the wave equation… the fermionic wave function has an eigenvalue of -1 when operated on by the exchange operator. This means that if you trade two electrons within the wave function, the wave function remains unchanged except for picking up a factor of -1. And this holds for any exchange you might perform between any two electrons in that wave function. The way to produce a wave function that preserves this symmetry is by permuting the positional variables of the electrons among the orbitals that they might occupy, as executed in a determinant where the orbitals form one axis and the electron coordinates form the other. The action of this permutation churns out huge numbers of terms, all of which are the same set of orbitals, but with the coordinates of the electrons permuted among them.
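Written out concretely, this is the Slater determinant; for two electrons in orbitals φ_a and φ_b it is

\Psi(1,2) = \frac{1}{\sqrt{2}}\begin{vmatrix} \phi_a(1) & \phi_b(1) \\ \phi_a(2) & \phi_b(2) \end{vmatrix} = \frac{1}{\sqrt{2}}\Big[\phi_a(1)\phi_b(2) - \phi_b(1)\phi_a(2)\Big]

Trading electrons 1 and 2 swaps two rows of the determinant, which flips its sign and produces the -1 automatically; expanding an N-electron determinant of this kind is what generates the N! permuted terms counted in the previous edit.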

A second item I wanted to mention is the interesting disconnect between molecular wave functions and atomic wave functions. In the early literature on the implementation of Hartree-Fock, the basis sets for the molecular calculation are constructed from fits to atomic wave functions; they often referred to this as Linear Combination of Atomic Orbitals. As I was playing around with one of these early basis sets, I was running the basis functions against the hydrogen atom Hamiltonian in order to error-check the calculus in my program by attempting to reproduce the hydrogenic state energies. Very frequently, these were giving erroneous energies, even though the Gaussians have symmetry very like the hydrogenic orbitals they were attempting to represent. Interestingly, as you may have read above, the lowest energy state, equivalent to the hydrogenic 1s, fit very closely to the ground state energy of hydrogen… and a basis with a larger number of Gaussians for the same orbital fit even more closely to that energy.

I spent a little time stymied by the fact that the higher energy functions in the basis, the 2s and 2p functions, fit very poorly to the higher energies of hydrogen. This was unnerving because processing these particular integrals required a pretty complicated bit of programming. I got accurate energies for 1s, but poor energies for 2s and 2p… maybe the program is working for 1s, but isn’t working for 2s or 2p. The infuriating part here is that 2s has very similar symmetry to 1s and is treated by the program in roughly the same manner, but the energy was off there too. I spent time analytically proving to myself that the simplest expression of the 2p orbital was being calculated correctly… and it is; I get consistent numbers across the board. There is just a baked-in inaccuracy in this particular set of basis functions which makes them not fit the equivalent hydrogenic orbital energies. It did not make much sense to me why the molecular community cited this particular basis set so consistently even though it really doesn’t fit hydrogen very well. I’m not yet totally convinced that my fundamental math isn’t somehow wrong, but when numbers emerge that are consistent with each other from different avenues, it usually means that my math isn’t failing. I still have some other error checks I’m thinking about, but one additional thought must be added.

In reality, the molecular orbitals are not required to mimic the atomic parts from which they can be composed. At locations in a molecule very close to the atomic nuclei, the basis functions need to look similar to the atomic functions in order to have the flexibility to mimic atoms, but the same is not true at locations where multiple nuclei have sway all at once. The choice of making orbitals atom-like is a convenience that might save some computational overhead; you could use any properly orthogonalized set of functions you want and still be able to calculate the molecular orbitals without looking very deeply at what the isolated atoms seem to be. For roughly the first two rows of the periodic table, up to fluorine, most of the electrons in an atom are within reach of the valence shell, meaning that they are contributed out into the molecule and distributed away from the nucleus. A convenient basis set for capturing this sort of appears atom-like around the nuclei, but it doesn’t have to… given an infinite number of Gaussians, Slater functions or simple sines and cosines, the flexibility of a properly orthogonalized basis set can capture the actual orbital as a linear combination. The choice of Gaussians is computationally convenient for the situation of having the electrons distributed in finite clouds around atom centers, making the basis set small, but nothing more than that. The puzzle is simply to have sufficient flexibility at every point in the molecule for the basis to capture the appropriate linear combination describing the molecule. An infinite sum of terms can come arbitrarily close.

In this, it isn’t necessary for the basis functions to exactly duplicate the true atomic orbitals since that isn’t what you’re looking for to begin with. In a way, the atomic orbitals are therefore disconnected from the molecular orbitals. Trying to exactly reproduce the atoms is misleading since you don’t actually have isolated atoms in a molecule. Presumably, a heavy atom will appear very atom-like deep within its localized potential, but not up on top.


(edit 12-11-18):

I’ve managed to build a working self-consistent field calculator for generating molecular wave functions. Hopefully, I’ll get a chance to talk more about it when there’s time.