Grade A crankery at Quantum University (part 2)

This is a repost of a section of my original quantum university post. I decided that I wanted to put it up as its own blog entry so that it would have some opportunity to be read in its own right.

A comment has led me to believe that the august body of Quantum University is stung by my opinion, as a professional physicist, of their validity. Can you believe it? Right here on my little ol’ insignificant blog. Great! If I’m enough to rattle them, maybe I ought to keep writing articles about them.

If you want me to respect you, next time, bring physics, not pseudo-pop psychology. There is a right way to do quantum mechanics.

I do have some other thoughts about this comment. I will quote it here in its entirety so that you can see what the thinking looks like:

As someone who appears to be heavily indoctrinated in a “material-empirical” orientation with regards to science, it would be very hard for you to appreciate the type of education fostered at a place like Quantum University.

The material-empirical science oriented individual tends to live out of touch with Reality, for this to him/her is composed only of particles and myriad physical phenomena proven by mathematical formulas.

What does any of this have to do with your Life, your Consciousness, your Relationships… your own Soul. It’s only a Grand Illusion that you’ve been unable to perceive/discern the connection between these and the brand of physics you’re pursuing.

If you’re ever fortunate enough to transition from the “ordinary mode consciousness,” dominated by obsessive “left-brained,” rational/analytical thought, and shift toward the higher, more transrational states for a break, you just may discover that indeed there is a connection to it all.

There is a lot in this little blurb. I think it may even have been written by the same fellow who wrote the “otological prison” quote I used above, though I couldn’t confirm that. He accuses me of being indoctrinated and claims that if only I escaped my rational analytical mind that maybe I would see the truth that we all live in the matrix or some such.

What is indoctrination?

According to the dictionary, it is “the process of teaching a person or group to accept a set of beliefs uncritically.” Direct quote from Google by my lazy ass.

The critical word here is “uncritically.” What does this mean?

Uncritically: “with a lack of criticism or consideration of whether something is right or wrong.” Another direct quote from Google by my even lazier and more tired ass.

So, indoctrination is an education where the student is not critical of the content of what they’ve been taught.

You have only my word for it, but I’ve walked all up and down physics. I’ve read 1920s articles on quantum mechanical spin translated from the original German trying to see what claims were being made about it. I’ve read Einstein. I’ve read Feynman. I’ve read Schrodinger… the real guys, their own words. I have worked probably thousands of hours rederiving math made famous by people dead sometimes hundreds of years ago just to be certain I understood how it worked (do you really believe the Pythagorean theorem?). I’ve marched past the appendix of the textbook and gone to the original papers when I thought the author was lying to me or leaving something important out. And yes, I’ve found a few mistakes in the primary literature by noted physicists. Does that sound uncritical to you? In the 3 years since I originally wrote the Quantum U post above, I’ve earned a genuine physics PhD from a major accredited university.

I would turn this analysis back on the fellow in the comments: have you done this kind of due diligence on what Quantum U taught you? Did you attack them to check if they were wrong? If not, you’ve been indoctrinated. Since they are about as wrong as it’s possible to be, my guess is that no, he didn’t and he isn’t about to… he’s a believer.

The next thought about this comment which pops up is a little claim about my dim-witted nature. I am clearly without a third eye and my life is definitely in the crapper because I am not seeing that other level beyond the workaday world where I could be mystically synergizing with some deeper aspect of reality in the hands of the Real truth. My dreadful left brain is clearly overwhelming my potential as a person. Do you actually believe that you know me?

By design I don’t speak often about my personal life on this blog. The fact is I’m not an unhappy or unfulfilled person. The spine of that comment is the implication that if only I had a Soul, I’d see that Quantum U has something to give me; the truth is that I can say for certain that I need nothing from them in that regard. I came to a point in my life where I don’t need the training wheels… I, as a person, am enough. That has nothing to do with my education as a scientist, but everything to do with my complicated path through life. That path has led me a long way and through a lot. Walk one mile in my shoes –I dare you!

Do not make assumptions about the soul of a person you know next to nothing about.

I have one piece of experience that I feel would inform a searcher who sees the allure of Quantum University and its “ability” to give students some deeper insight into consciousness, soul and self-actualization. The most difficult thing people can ever grasp about themselves is that we are all flawed in the sense that our very capacity to interact with reality is fundamentally confused about what’s real. Your brain, the generator of your reality, is not perfect and you can believe in a lie as if it were actually true. Did they find WMDs in Iraq?

I have to laugh at his “transrationalist” higher state of being nonsense because it seems that he’s bitten off the biggest lie imaginable. He believes that everything he thinks about the world is true! Why else would he sneer at material-empirical rationalist analytical mindsets? He wants to disconnect his mind from tangible reality… you can see that in every word he’s written, right down to the carefully chosen yet inappropriate caps.

The problem I have with that is a simple one: by decoupling your mind from everything else, you remove from yourself the ability to do an external error check based upon what is physically true in the world around you. This is pattern recognition with a broken compass. If you have no way of checking whether or not what you believe matches what is actually real, you have no way of confirming what, if anything, is false in what you see. Everybody can dream and imagine they have psychically contacted a dead relative or telepathically commanded a poodle to piss on a baby. There is no badge of honor to be gained by believing you can lay your hands on someone and heal them with the strength of your Chi because anybody can believe that. You can sit around, do deep breathing, and listen to the white noise in your own anatomy and ascribe all sorts of meanings to it. The hardest thing in the world is sorting out whether anything you imagine is actually true, particularly when you want something to be true. Your mind can dredge up some utter unreality that seems absolutely real in that instant. How can you ever be completely sure?

In my experience, the truth is true regardless of whether or not I believe in it.

That’s the thing about empirical reality. You have a chance to come back and interrogate something, or someone, external to yourself about whether or not you are seeing true things in the world around you. This is a timely subject, I think, because people have turned to filter silos –pocket realities where groups of people are telling you what you want to hear– to avoid having to do really painful self-checks. Empirical reality is imperfect because we never know everything about it, but at least it’s basically invariant and can serve as a good calibration point. That’s the thing about the truth: two contradictory things can’t be simultaneously true. Empiricism at least gives a stationary ground that every observer (literally every observer) can share. If we can all come back and agree that the sky is blue, we at least have something in common to work with, no matter what murmurings are pressing on the backs of our heads. You can’t show that the “transrationalist” higher state of being is anything different from a schizophrenic fantasy because they have equal connectedness to the external world; there is no internal frame of reference by which to prove that the first isn’t actually the second. That somebody at Quantum U told you it’s so and you uncritically decided to believe them does not suddenly make it true… that’s almost like a filter bubble; you’re just using someone in particular as your authority whom you wish to believe. Never mind that the person you picked is, maliciously or not, lying their ass off to you.

I think the hardest thing in the world is facing when you’re really wrong about something you deeply want to believe. Sometimes people do get these things wrong. Are you among them? Clearly, the fellow in the comment understands that people can be wrong, or he wouldn’t accuse me of being wrong. Does he never turn his optics against himself?

Now, you may want to call me a hypocrite. Am I a believer? Surely I believe in physics, being a physicist. My answer here might surprise you. Only kind of. Quite a lot of it I don’t fully understand. I’m either agnostic or skeptical about the parts I don’t understand. And I’ve gone to some pretty extreme ends to try to decide that I understand it well enough to believe certain things about it. This leads to two things: first, I know I don’t know everything and, second, I freely admit that I get things wrong. But that doesn’t mean that I have no idea what I’m talking about… what skill I have with Quantum Mechanics is well earned.

Let this serve as a warning: if anybody else makes comments about my soul or implies with a heavy hand that there is a lack, I will delete what you say out of hand. That’s ad hominem, as far as I’m concerned. You don’t know me. That you make any such statement shows that you didn’t understand word one about human potential that anyone at any school tried to teach you. You have no idea who I am.

Because I made a different set of points in my immediate direct response to the original comment, here is that as well:

I will approve this comment so that people can read it.

First, there is no “brand” of physics. There is physics and then there is not physics. Because of how it’s fundamentally designed, physics is physics. It must truly burn you up that the words “quantum theory” were coined by someone who was indoctrinated into a “material-empirical” outlook on the world. I find it especially funny that the likes of you, oh so high and mighty in your supposed depth and vision, are not creative enough to create anything believable without stealing your entire foundation from my ilk. Ask yourself if you would even have a Quantum University to defend if it wasn’t for us.

“The material-empirical science oriented individual tends to live out of touch with Reality” —Wow, that’s an amazing oxymoron. Great job!

“If you’re ever fortunate enough to transition from the “ordinary mode consciousness,” dominated by obsessive “left-brained,” rational/analytical thought, and shift toward the higher, more transrational states for a break, you just may discover that indeed there is a connection to it all.” —And if you stopped eschewing the math, you might eventually realize that lying to yourself doesn’t actually get you out of the garden. But, sure, go ahead, put on the blindfold, spin yourself around a few more times and try to pin the tail on the donkey. I don’t mind.

As an aside, I would recommend this guy for a writing gig on Star Trek; he has an amazing capacity for inventing jargon that sounds like it should mean something. Transrational? We have rational and irrational. Argue for me as to where it helps to mix the two. I suppose this fellow and I are in agreement about something; nobody who isn’t in some turbid state of translucid parasanity would willfully spend money on Quantum University.

If you doubt the level of bile the idea of Quantum University brings up in me, please understand that if we lived 700 years ago, I would probably be riding out to help put these witches to the sword. If I can help to spread a single genuinely deep thought about them and what they do through the internet, I will.


The Organic Chemistry Lie

Fallout from my learning about Molecular Orbital theory and Hartree-Fock.

I’ve said repeatedly that Organic Chemistry is along the spectrum of pursuits that use Quantum Mechanics. Organic Chemists learn a brutal regimen of details for constructing ball-and-stick models of complicated molecules. I’ve also recently discovered that chemistry –to this day– is teaching a fundamental lie to undergraduates about quantum mechanics… not because they don’t actually know the truth, but because it’s easier and more systematic to teach.

As a basic example, let’s use the model of methane (CH4) for a small demonstration.

4 methane schematic

This image is taken pretty much at random from The New World Encyclopedia via a Google image search. The article on that link is titled “covalent bond” and they actually do touch briefly on the lie.

A covalent bond is formed between two atoms where each atom donates one electron to make a shared pair. You have probably heard of sigma- and pi-bonds.

5 sigma pi bonds

This image of Ethylene (a fairly close relative of methane) is taken from Brilliant and shows details of the two major types of covalent bonds. Along this path, you might even remember my playing around in the first post I made in this series, where I directly plotted sigma- and pi-bonds from linear combinations of hydrogenic orbitals.

 

 

These bond structure ideas seem to emerge predominantly from papers by Linus Pauling in the 1930s. The notion is that the molecule is fabricated out of overlapping atomic orbitals to make a structure sort of resembling a balloon animal, as seen in the figure above containing ethylene. Organic chemistry is largely about drawing sticks and balls.

6 methane bonding

With methane, you have four sticks joining the balls together. We understand the carbon to be in Sp3 hybridization, a construct offered directly by Linus Pauling in 1931: a four-orbital system with four sigma bonds, in which the carbon has tetrahedral symmetry and each hybrid orbital is three parts p and one part s. The orbitals are formed specifically from hydrogenic s- and p-types. If you count, you’ll see that there are 8 electrons involved in the bonding in this model.
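For reference (written out in modern notation rather than Pauling’s original), the four sp3 hybrids are the familiar combinations of one s and three p orbitals:

```latex
h_1 = \tfrac{1}{2}(s + p_x + p_y + p_z), \qquad
h_2 = \tfrac{1}{2}(s + p_x - p_y - p_z), \qquad
h_3 = \tfrac{1}{2}(s - p_x + p_y - p_z), \qquad
h_4 = \tfrac{1}{2}(s - p_x - p_y + p_z)
```

Each hybrid points at one corner of a tetrahedron, and in the textbook picture each one overlaps a hydrogen 1s orbital to form one two-electron sigma bond.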

I used to think this was the story.

The molecular orbital calculations tell me something different. First I will recall for you the calculated density for methane achieved by closed-shell Hartree-Fock.

Density low threshold

This density sort of looks like the thing above, I will admit. To see the lie, you have to look a little closer.

7 molecular orbitals

This is a collection of the molecular orbitals calculated by STO-3G and the energy axis is not to perfect scale. The reported energies are high given the incompleteness of the basis. The arrows show the distribution of the electrons in the ground state with one spin up and spin down electron in each orbital. The -11.03 Hartree orbital is the deep 1s electrons of the carbon and these are so tightly held that the density is not very visible at this resolution. The -0.93 orbital is the next out and the density is mainly like a 2s orbital, though when you threshold to see the diffuse part of the wave function, it has a sort of tetrahedral shape. Note, this shape only emerges if you threshold so that it becomes visible. The next three orbitals at -0.53 are degenerate in energy and have these weird blob-like shapes that actually don’t really look like anything; one of them sort of looks like a Linus Pauling Sp-hybrid, but we’re stumped by the pesky fact that there are three rather than four. The next four orbitals above zero are virtual orbitals and are unpopulated in the ground state of the molecule –these could be called anti-bonding states.

Focusing on the populated degenerate orbitals:

 

 

These three seem to throw a wrench into everything that you might ever think from Linus Pauling. They do not look like the stick-like bonds that you would expect from your freshman chemistry balloon animal intuition. The fact is that these three are selected in the Hartree-Fock calculation as a composite rather than as individual orbitals. They occur at the same energy, meaning that they are fundamentally entangled with each other, and the procedure that finds them returns all three together as a mixture. This has to be the case because these orbitals, examined in isolation, do not preserve the symmetry of the molecule.

With methane, we must expect the eigenstates to respect tetrahedral symmetry: the symmetry transformations of the tetrahedral group (for instance, 120 degree rotations about the axis through each vertex) leave the Hamiltonian unaltered (it transforms back into itself), so that the Hamiltonian and the symmetry operators commute. If these operators commute, the eigenstates of the molecule’s Hamiltonian can be chosen as simultaneous eigenstates of the symmetry operations, and a degenerate level must transform into itself as a whole. This is basic quantum mechanics.
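In symbols (a generic statement of the argument, not tied to any particular basis set): if T is any operation of the tetrahedral point group and H is the molecular Hamiltonian, then

```latex
T H T^{-1} = H \quad\Longleftrightarrow\quad [H, T] = 0 ,
```

so a non-degenerate eigenstate must map onto itself (up to a phase) under every T, while a degenerate level is only required to map into itself as a whole. The symmetry lives in the level, not necessarily in any single orbital you pull out of it.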

You can see by eye that these orbitals, taken individually, are not.

Now, with this in mind, you can look at the superposition of these which was found during the Hartree-Fock calculation:

density of superposition states 3 4 5 v2

This is the probability distribution for the superposition of the three degenerate eigenstates above. Now we have a thing that’s tetrahedral. Note, there is no thresholding here, this is the real intensity distribution for this orbital collection. This manifold structure contains 6 electrons in three up-down spin pairs where they are in superpositions of three unknown (unknowable) degenerate states.
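As a minimal sketch of what is being plotted here (the names are illustrative, not the actual routines in my program), the density of the degenerate manifold is just the sum of the individual orbital densities evaluated over a grid:

```python
import numpy as np

def manifold_density(C, degenerate_indices, basis_values):
    """Summed probability density for a set of degenerate molecular orbitals.

    C                  -- molecular-orbital coefficients (n_basis x n_orbitals)
    degenerate_indices -- which orbitals belong to the degenerate level (here, the three at -0.53)
    basis_values       -- basis functions evaluated on the grid (n_basis x n_points)
    """
    density = np.zeros(basis_values.shape[1])
    for i in degenerate_indices:
        psi_i = C[:, i] @ basis_values   # orbital i evaluated at every grid point
        density += psi_i ** 2            # each orbital contributes |psi_i|^2
    return density                       # summing over the full level recovers the tetrahedral symmetry
```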

The next lower energy set has two electrons in up-down and looks like this:

state 2 -0.925

This is the -0.93 orbital without thresholding so that you can see where the orbital is mostly distributed as a 2s-like orbital close to the Carbon atom in the center. It does have a diffuse fringe that reaches the hydrogens, but it’s mainly held to the carbon.

I have to conclude that the tetrahedral superposed orbital thing is what holds the hydrogens onto the molecule.

 

 

Where are my stick-like bonds? If you stop and think about the Linus Pauling Sp-hybrids, you realize that those orbitals in isolation also don’t preserve symmetry! Further, we’ve got a counting conundrum: the orbitals holding the molecule together have six electrons, while the ball-and-stick covalent sigma-bonded model has eight. In the molecular orbital version, two of the electrons have been drawn in close to the carbon, leaving the hydrogen atoms sitting out in a six-electron tetrahedral shell state.

This vividly shows the effect of electronegativity: carbon is withdrawing two of the electrons to itself while only six remain to hold the four hydrogen nuclei. There is not even one spin-up-down two electron sigma-bond in sight!

And so we hit the lie: there is no such thing as sigma- and pi- bonds!

…there is no spoon…

The ideas of the sigma- and pi-bonds come from a model not that different from the Bohr atom. They have power to describe the multiplicity that comes from angular momentum closure, having originated as a description in the 1930s explaining bonding effects noticed in the 1910 to 1920 range, but they are not a complete description. The techniques to produce the molecular orbitals originated later: the ’50s, ’60s, ’70s and ’80s. These newer ideas are crazily different from the older ones and require a good dose of pure quantum mechanics to understand. I have a Physical Chemistry book for Chemists from the early 2000s that does not contain a good treatment of molecular orbital theory, stopping with basically the variational methods that Pauling and the other workers of the 1930s were using. I asked one of my coworkers, who is versed in organic chemistry models, how many electrons she thought were in the methane bonding system and she said “8,” exactly as I would have prior to this little undertaking.

There’s a conspiracy! We’re living in a lie man!

 

Edit 2-12-19:

I spent some time looking at Ethylene, which is the molecule featuring the example of the balloon animal Pi-bond in the image above. I found a structure that resembles a Pi-bond at the highest energy occupied orbital of the molecule.

Density of Ethylene:

ethylene density threshold

Density of Ethylene:

ethylene density threshold 2

I’ve added two images of the density so that you can see the three dimensional structure.

Ethylene -0.32 hartrees molecular orbital, looks like a pi-bond:

-0.323 ethylene

The -0.53 hartrees orbital looks sort of sigma-like between the carbons:

-0.528 ethylene

The rest of the orbitals look nothing like conventional sigma- or pi- bonds. The hydrogens are again attached by a manifold of probability density which probably allows the entire system to be entangled and invertible based on symmetry.

Admittedly, ethylene has only one pi-bond and the first image above probably qualifies as the pi-bond. I would point out, however, that in the case of ethylene, the stereotypical sigma- and pi-configurations between the carbons match the symmetry of the molecule, which has a reflection symmetry plane between the carbons and a 180 degree rotation axis along the long axis of the molecule. The sigma- and pi-bond configurations can be symmetry preserving here, but for the carbons only.

One other interesting observation is that the deep electrons in the 1s orbitals of the carbons are degenerate in energy, leading these orbitals to be entangled:

-11.02 ethylene

This also matches the reflection symmetry of the molecule (and would in fact be required by it). There are four electrons in these orbitals and you can’t tell which are which, so the probability distribution allows them to be in both places at once… either on the one carbon or on the other. Note, this does not mean that they are actually in both places; it means that you could find them in one place or the other and that you cannot know where they are unless you look –I think this distinction is important and frequently overlooked.

An interesting accessory question here: what happens if you twist ethylene? Molecules like ethylene are not friendly to rotation along the long axis of the double bond because that supposedly breaks the pi-bonding. So, I did that. The total energy of the molecule increases from -77.07 to -76.86 hartrees; that isn’t a huge amount compared to the total energy, but it would constitute a barrier to rotation around the double bond.

Twisted Ethylene density, rotated about the bond by 90 degrees:

twist ethylene density threshold 1

In this case, you do get what appear –sort of– to be four-fold degenerate sigma-bonds attaching the hydrogens:

T ethylene -0.551

But, the multiplicity is about two-fold degenerate, suggesting only four electrons in the orbital instead of eight, which badly breaks the sigma-bond idea (of two electrons to a sigma-bond). This again suggests strong electron withdrawing by carbon, and stronger with ethylene than methane.

The highest energy occupied state has an energy increased from -0.32 in the planar state to -0.177 in the twisted state… and it looks like a broken pi-bond:

T ethylene -0.177

I think that the conventional idea about why ethylene is rigid is probably fairly accurate. The pictures here might be regarded as a transition state between the two planar cases where the molecule has a barrier to twisting, but is permitted to do so at some slow rate.

In the twisted case, the deep 1s electrons on the carbons are broken from reflection symmetry and they become distinctly localized to one carbon or the other.

 

 

Overall, I can see why you would teach the ideas of the sigma- and pi- bonds, even though they are probably best regarded as special cases. If you’re not completely aware that they are special cases, and that pictures like the one on Brilliant.org are broken, then we have a problem.

This exercise has been a very helpful one for me, I think. I’ve heard a huge amount about symmetries and about different organic chemistry conventions. Performing this series of calculations really helps to bridge the gap. Seeing actual examples is eye-opening. Why aren’t there more out there?

 

Edit 2-23-19:

As I’ve continued to learn more about electronic bonds, I’ve learned that the structural details have been continuously argued over for a long time. It becomes clear pretty quickly that the molecular orbital structures tend to exclude the notions you encounter early in schooling. Still, molecular orbitals have broken-physics problems of their own when you try to pull them apart by splitting a molecule in half. You end up having to be molecular orbital-like when the molecule is intact, but atomic orbital-like when the molecule is pulled apart into its separate atoms.

I found a paper from 1973 by Goddard and company which rescues some of the valence bond ideas as Generalized Valence Bonds (GVB). Within this framework, the molecular orbitals are again treated as linear combinations of atomic parts, and it answers the protestations of symmetry by saying simply that if you can make a combination of atomic orbitals that globally preserves the symmetry in a molecule, then that combination is an acceptable answer. GVB adds to the older ideas the notion that bonds can push and deform each other, which certainly fits with the things you start to see when you examine the molecular orbitals.

You can have sigma and pi bonds if you make adjustments. I’m not sure yet how the GVB version of methane would be constructed, but the direct treatment of carbon in the paper slays the idea of Sp-hybridization, as I understand it, while still producing the expected geometry of molecules.

Still thinking about this.

 

edit 3-5-19:

I’ve been strongly aware that my little Python program is simply not going to cut it in the long haul if I desire to be able to make some calculations that are actually useful to a modern level of research. I decided to learn how to use GAMESS.

For a poor academic with some desire to do quantum mechanics/ molecular mechanics type calculations, GAMESS is a godsend.

More than that actually. GAMESS is like stumbling over an Aston Martin Vanquish sitting in an alley way, unlocked, with the keys in the ignition, where the vanity plate says “wtng4U.” It isn’t actually shareware, but it could be called licensed freeware. GAMESS is an academic project whose roots existed clear back in the 1970s, roughly parallel to Gaussian, which still exists today and is accessible to people whom the curators deem reasonable. My academic email address probably helped with the vetting and I can’t say I know exactly how far they are willing to distribute their admittedly precious program.

To give you an idea of the performance gap between my little go-cart and this Porsche: the methane calculations I made above took 17 seconds for my Python program… GAMESS did it in 0.1 seconds. Roughly 170-fold! This would bring benzene down from two hours for my program to maybe a few minutes with GAMESS.

methane density

This image, produced by a GAMESS satellite program called wxMacMolPlt, is a methane coordinate model with a GAMESS-calculated electron density depicted as a mesh to demonstrate a probability isosurface. What GAMESS adds beyond where I was in my own efforts is a level of sophistication that includes direct calculation of orbital electron occupancy. Under these calculations, it’s clear that electrons are withdrawn from the hydrogens, but maybe not quite as extremely as my crude estimations above would suggest: the orbitals associated with the hydrogens have 93% to 96% electron occupancy… withdrawn, but not so withdrawn as to be empty (I estimated 6 for 8 electrons above, or more like 75% occupancy, which was relatively naive). This presumably comes from the fringes of the 2s orbital centered on the carbon. Again, the analysis is very different from the simple notions of sigma- and pi-bonding, with the electrons clearly set in clouds defined by the whole molecule rather than as distinct localizations.

I’ve really just learned how to make GAMESS work, so my ability to do this is very much limited. And, admittedly, since I have no access to real computer infrastructure (just a quadcore CPU) it will never reach its full profound ability. In my hands, GAMESS is still an atomic bomb used as a fly swatter. We’ll see if I can improve upon that.

 

edit 3-10-19:

Hit a few bumps learning how to make GAMESS dance, but it seems I’ve managed to turn it against the basic pieces I was able to attack on my own.

Here is Ethylene, both a model and a thresholded form of the total electron density.

I also went and found those orbitals in the carbon-carbon bond.

The first is the sigma-like bond at -0.54 and the second is the pi-like bond at -0.33. The numbers here are slightly off from what I quote above because the geometry is optimized and STO-3G ends up optimizing to slightly shorter bond lengths than those observed by X-ray. These are somewhat easier to see than the clouds I was able to produce with my own program (though I think my work might be a little prettier). I’ve also noticed that you can’t plot the density of orbital superpositions with the available GAMESS-associated programs, as I did with methane above. I can probably get tricky by processing the molecular orbitals on my own to create the superpositions and then plot them –GAMESS handily supplies all the eigenvectors and basis functions in its log files.

In the build of GAMESS that I acquired, I’ve stumbled over an apparent bug. The program can’t handle tetrahedral symmetry in a normal manner… it converts the Td point group of methane into what appears to be a D2h point group. I was able to work around this by calling the symmetry C1. Considering that I started out with no idea how to enter anything at all, I take this as a victory. As open freeware, they work with a smaller budget and team, so I think the goof is probably understandable –though it sure felt malicious when I realized that the problem was with GAMESS itself. I’m not savvy enough with programming to dig in and fix this one myself, I think, though the pseudo-open source nature of GAMESS would certainly allow that.

Given how huge an effort my own python SCF program ended up requiring, I’m not too surprised that GAMESS has small problems floating around. As an academic product, they have funding limits. At the very least, I’m impressed that it cranks out in seconds what took my program minutes… that speed extends my range a lot. I was able to experiment with true geometry optimization in GAMESS where my program stopped with me scrounging atomic coordinates out of the literature.

(edit 4-10-19):

pyrophosphate -4 in water

Pyrophosphate, fully deprotonated.

This is an image of pyrophosphate, calculated with the 6-311G basis set in GAMESS by restricted Hartree-Fock. This includes geometry optimization and is in a polarized continuum model for representation of solvation in water. The wireframe itself represents an equi-probability surface in the electron density profile while the coloration of the wireframe represents the electrostatic potential at that surface (blue for negative, red for positive).

Building a Molecule; Time Spent in the Gap

How is a molecule built? Rather, what exactly is required to predict the electronic structure of a molecule using modern tools?

I will use this post to talk about my time spent learning how to apply the classic quantum mechanical calculation of Hartree-Fock (note, this is plain old Hartree-Fock rather than multi-configuration Hartree-Fock or something newer that gives more accurate results). I’ve spoken some about my learning of this theory in a previous post. Since writing that other post, I’ve passed through numerous travails and learned quite a lot more about the process of ab initio molecular calculation.

My original goal was several-fold. I decided that I wanted a structural tool that, at the very least, would allow me access to some new ways of looking at things in my own research. I chose it as a project to help me acquire some skill in a computer programming language. Finally, I also chose to pursue it because it turned out to be a very interesting question.

With several months of effort behind me, I know now several things. First, I do think it’s an interesting tool which will give new insight into my line of research, provided I access the tool correctly. Second, I think I was incredibly naive in my approach: the art and science of ab initio calculation is a much bigger project than can bear high quality fruit in the hands of one overly ambitious individual. It was a labor of years for a lot of people, and the time spent getting around my deficits in programming is doubly penalized by the sheer scope of the project. My little program will never produce a calculation at a modern level! Third, I chose Python for my programming language for its ease of availability and ubiquity, but I think a better version of the self-consistent field theory would be written in C or Fortran. Without having this be my full-time job, which it isn’t, I doubt there’s any hope of migrating my efforts to a language better suited to the task. For any other intrepid explorers seeking to tread this ground in the future, I would recommend asking yourself how pressing your needs are: you will never catch up with Gaussian or GAMESS or any of the legion of other professionally designed programs intended to perform ab initio quantum mechanics.

Still, I did get somewhere.

The study of Hartree-Fock is a parallel examination of Quantum Mechanics and the general history of how computers and science have become entangled. You cannot perform Hartree-Fock by hand; it is so huge and so involved that a computer is needed to hold it together. I talked about the scope of the calculation previously and what I said before still holds. It cannot be done by hand. That said, the physics were still worked out mostly by hand.

I would say that part of the story started almost 90 years ago. Linus Pauling wrote a series of papers connecting the then newly devised quantum mechanics of Schrodinger and his ilk to the puzzle of molecular structure. Pauling took hydrogenic atomic orbitals and used linear combinations of these assemblies to come up with geometric arrangements for molecules like water and methane and benzene. A sigma-orbital is the result of two atomic orbitals placed side by side with an overlap, with an energy optimization then adjusting the distance between them. A pi-orbital is the same, but with two p-orbitals placed side by side and turned so that they lie parallel to one another.

Much of Pauling’s insights now form the backbone of what you learn in Organic Chemistry. The geometry of molecules as taught in that class came out of these years of development and Pauling’s spell of ground-breaking papers from that time will have you doing a double-take regarding exactly how much impact his work had on chemistry. Still, for the work of the 1930s by Pauling and his peers, they only had approximations, with limited accuracy for the geometry and no real ability to calculate spectra.

Hartree-Fock came together gradually. C. C. J. Roothaan published what are now called the Roothaan equations, which constitute the core of more modern closed-shell Hartree-Fock, in 1951. Nearly simultaneously, Frank Boys published a treatment of gaussian functions, showing that all the integrals needed for molecular overlap could be calculated in closed form with the gaussian function family, something not possible with the Slater functions that were to that point being used in place of the hydrogenic functions. Hydrogenic functions do show one exact case of what these wave functions actually look like, but they are basically impossible to calculate for any atom except hydrogen and pretty much impossible to adapt to broader use. Slater functions took over in place of the exact hydrogenic functions because they were easier to use as approximations. Gaussian functions then took over from Slater functions because they are easier to use still and much easier to put into computers, a development largely kicked off by Boys. There is a whole host of names that stick out in the literature after that, including John Pople, who duly won a Nobel prize in 1998 for his work leading to the creation of Gaussian, which to this day is a dominant force in molecular ab initio calculation (and will do everything you could imagine needing to do as a chemist if you’ve got like $1,000 to afford the academic program license… or $30,000 if you’re commercial).

The depth of this field set me to thinking. Sitting here in the modern day, I am reminded slightly of Walder Frey and the Frey brethren in the Game of Thrones. This may seem an unsightly and perhaps unflattering comparison, but stick with me for a moment. In the Game of Thrones, the Freys own a castle which doubles as a bridge to span the waters of the Green Fork in the lands of Riverrun. The Frey castle is the only ford for miles and if you want to cut time on your trade (or marching your army), you have no choice but to deal with the Freys. They can charge whatever price they like for the service of providing a means of commerce –or, as the case may be, war– and if you don’t go with them, you have to go the long way around. Programs like Gaussian (and GAMESS, though it is basically protected freeware), are a bridge across a nearly uncrossable river. They have such a depth of provenance in the scientific service that they provide that you are literally up a creek if you try to go the long way around. This is something I’ve been learning the hard way. In truth, there are many more programs out there which can do these calculations, but they are not necessarily cheap, or -conversely- stable.

I think this feature is interesting on its own. There is a big gap between the Quantum Mechanics which everybody knows about, which began in the 1920s, and what can be done now. The people writing the textbooks now in many cases came into their own in an environment where the deepest parts of ab initio were mainly already solved. Two of the textbooks I delved into, the one by Szabo and Ostlund, and work by Helgaker, clearly show experts who are deeply knowledgeable of the field, but have characteristics suggesting that these authors themselves have never actually been able to cross the river between classical quantum mechanics and modern quantum chemistry fully unaided (Szabo and Ostlund never give theory that can handle more than gaussian s-orbitals, where what they give is merely a nod to Boys, while Helgaker is given to quoting as recently as 2010 from a paper that, as best I can tell, actually gives faulty theory pending some deep epistemological insight guarded by the cloistered brotherhood of Quantum Chemists). The workings hidden within the bridge of the Freys are rather impenetrable. Going from the toy calculations of Linus Pauling’s already difficult work to modern real calculations is genuinely a herculean effort. Some of the modern textbooks cost hundreds of dollars and are still incomplete stories on how to get from here to there. Note, this is only for gaining the underpinnings of Hartree-Fock, itself a flawed technique without Configuration Interaction or other more modern adjustments, and even those get short shrift if you don’t have ways of dealing with the complexities of the boundary conditions.

Several times in the past couple months, I’ve been wishing for Arya Stark’s help.

I will break this story up into sections.

 

The Spine

The core of Hartree-Fock is perhaps as good a place to start as any. Everybody knows about the Schrodinger equation. If you’ve gone through physical chemistry, you may have cursed at it a couple times as you struggled to learn how to do the Particle-in-a-Box toy problem. You may be a physicist and have solved the hydrogen atom, or seen Heisenberg’s way of deriving spherical harmonics and might be aware that more than just Schrodinger was responsible for quantum mechanics.

Sadly, I would say you basically haven’t seen anything.

As the Egyptian Book of the Dead claims that Death is only a beginning, Schrodinger’s equation is but the surface of Quantum Mechanics. I will pick up our story in this unlikely place by pointing out that Schrodinger’s equation was put through the grinder in the 1930s and 1940s in order to spit out a ton of insight involving molecular symmetry and a lot of other thoughts about group symmetry and representation. Hartree and Fock had already spent time on their variational methods, and systematic techniques for a combined method called Hartree-Fock began to emerge by the 1950s. Heck, an atomic bomb came out of that era. The variant of Schrodinger’s equation where I pick up my story is a little ditty now called the Roothaan equation.

1 Roothaan equation

It definitely doesn’t look like the Schrodinger equation. In fact, it looks almost as small and simple as E=mc^2 or F = ma, but that’s actually somewhat superficial. I won’t go terribly deeply into the derivation of this math because it would balloon what will already be a long post into a nightmare post. My initial brush with this form of the Roothaan equation came from Szabo and Ostlund, but I’ve since gone and tracked down Roothaan’s original paper… only to find that Szabo and Ostlund’s notation, which I found to be quite elegant, is actually almost directly Roothaan’s notation. Roothaan’s purpose seems to have been collecting prior insight regarding Hartree-Fock into a systematic method.
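For reference, in case the image doesn’t render for you, the equation is compact enough to write out in its standard matrix form,

```latex
F C = S C \varepsilon ,
```

where ε is a diagonal matrix of the orbital energies; the rest of this section unpacks what each of those letters means.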

This equation emerges from taking the Schrodinger equation and expanding it into a many-body system where the Hamiltonian has been applied onto a wave function that preserves electron fermion exchange anti-symmetry –literally a Slater determinant wave function, where you may have like 200! terms or more. ‘F’, ‘S’ and ‘C’ are all usually square matrices.

‘F’ is called the Fock matrix and it contains all of the terms of the Hamiltonian. Generally speaking, once the equation is in this form, you’re actually out in numerical calculation land and no mathematical dongles remain in the matrix. The matrix contains only numbers. The Fock matrix is a square matrix which is symmetric, meaning that the terms above and below the diagonal equal each other, which is an outcome of quantum mechanics using hermitian operators. To construct the Fock matrix, you’ve already done a ton of integrals and a huge amount of addition and subtraction on terms that look sort of like pieces of the Schrodinger equation. You can think of the Fock matrix as being a version of the Hamiltonian. Within the Fock matrix are terms referred to as the Core Hamiltonian, which looks like the Schrodinger Hamiltonian, and the ‘G’ matrix, which is a sum of electron repulsion and electron exchange terms, which only occur when you’ve expanded the Schrodinger equation to a many-body system. The Fock matrix is usually symmetric rather than just hermitian because the Roothaan equations assume that every molecular orbital is closed… that is, every orbital has one spin-up and one spin-down electron which are degenerate and indistinguishable. The eigenstates are therefore spatial orbitals, with the spin integrated out, rather than spin-orbitals.

‘C’ is a way to represent the eigenstates of the Hamiltonian. Note, I did not say that ‘C’ is a wave function, because these wave functions are actually impossible to write down (how many terms is ~200! or 400! or more?). ‘C’ is a representation of a way to write down the eigenstates that you might use to construct a wave function in the space of the Fock matrix. It actually isn’t even the eigenstates directly, but the coefficients for a basis set that could be used to represent the eigenstates you desire. ‘C’ is a square matrix that is unitary with respect to the overlap: multiplying it by its own transpose, with the overlap matrix sandwiched in between, produces the identity. The eigenstates contained by ‘C’ are orbitals that are associated with the Hamiltonian in the form of the Fock matrix.

‘S’ is called the “overlap matrix.” The overlap matrix is a symmetric matrix that is constructed by use of the basis set. As you may have read in my other post on this subject, the basis set may be a bunch of gaussian functions or a bunch of slater functions or a bunch of some other miscellaneous basis set that you would use to represent the system at hand. The overlap matrix is introduced because, mathematically, whatever basis you chose may be composed of functions that are not orthogonal to one another. Gaussian basis functions are useful, but they are not orthogonal. The purpose of the overlap matrix then is to work through the calculus necessary to construct orthogonal combinations of the basis functions. For a basis set that is not orthogonal you need some way to account for the non-orthogonality.

The form of the Roothaan equation written above is adapted into the form of an eigenvalue equation, where ε is the eigenvalue. In the case of molecular orbitals, this eigenvalue is an energy that is called the orbital energy. The basis functions themselves are non-orthogonal, as accommodated by the ‘S’ matrix, while the eigenstates are combinations of those basis functions, as expressed in ‘C’, that are orthogonal to each other.

What makes this equation truly a monstrosity is that the Fock matrix is dependent itself on the ‘C’ matrix. The way this dependence appears is that the integrals which are used to construct the Fock matrix are calculated from the values of the ‘C’ matrix. The Roothaan equation is a sort of feedback loop: the ‘C’ matrix is calculated from working an eigenvalue equation involving ‘F’ and ‘S’ to find ε, where ‘C’ is then used to calculate ‘F’. In practice, this operates as an iteration: you guess at a starting Fock matrix and calculate a ‘C’ matrix, which is then used to calculate a new Fock matrix, from which you calculate a new ‘C’ matrix.  The hope is that eventually the new ‘C’ matrix you calculate during each cycle of calculation converges to a constant value.

Ouroboros eating its own tail.

This is the spine of Hartree-Fock: you’re looking for a convergence to give constant output values of ‘C’ and ‘F’. As I articulated poorly in my previous attempt at this topic, this is the self-consistent electron field. Electrons occupy some combination of the molecular orbitals expressed by ‘C’ at the energies of ε, forming an electrostatic force field that governs the form of ‘F’, from which ‘C’ is the only acceptable solution. ‘C’ is used to calculate the values of the hamiltonian inside ‘F’, where the integrals are the repulsions of electrons in the ‘C’ orbitals against each other or attractions of those electrons toward nuclei, giving the kinetic and potential energies that you usually expect in a hamiltonian.
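To make the loop concrete, here is a bare-bones sketch of the iteration in Python. This is not my actual program: build_fock stands in for all the machinery that assembles ‘F’ from the current electron density, and I let scipy solve the generalized eigenvalue problem.

```python
import numpy as np
from scipy.linalg import eigh

def scf_loop(build_fock, H_core, S, n_occ, max_cycles=200, tol=1e-8):
    # starting guess: ignore the electron-electron terms and diagonalize the core Hamiltonian alone
    _, C = eigh(H_core, S)
    P = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T          # closed-shell density: 2 electrons per occupied orbital
    for cycle in range(max_cycles):
        F = build_fock(P)                            # 'F' depends on 'C' through the density P
        eps, C = eigh(F, S)                          # generalized eigenvalue problem: F C = S C eps
        P_new = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T
        if np.max(np.abs(P_new - P)) < tol:          # converged: the field reproduces itself
            return eps, C
        P = P_new
    raise RuntimeError("self-consistent field did not converge")
```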

Here is the scale of the calculation: a minimal basis set for methane (four hydrogens and a carbon) is 9 basis functions. The simplest, most basic basis set in common use is Pople’s STO-3G basis, which creates orbitals from sums of gaussian functions (called a contraction)… “3G” meaning three gaussians to a single orbital. One overlap integral between two functions therefore involves nine integrals. Generation of the 9 x 9 S-matrix mentioned above then involves 9*9*9 integrals, 729 integrals. Kinetic energy and nuclear attraction terms would each involve another 729 (3*729 = 2187 integrals) which can be shortened by the fact that the matrices are symmetric, so that only a few more than half actually need to be calculated. The electron-electron interactions, including repulsions and exchanges are a larger number still: one quarter of 9^4*9 or ~14,700 integrals (symmetry allows you to avoid the full 9^4 where basically the whole matrix must influence each matrix element, giving a square of a square). Roughly 17,000 integration operations for a molecule of only 5 atoms using the least expensive form of basis set.
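The bookkeeping is easy to reproduce; this just restates the rough counting in the paragraph above:

```python
n, g = 9, 3                                  # 9 contracted basis functions, 3 primitives per contraction
per_pair = g * g                             # primitive integrals per pair of contracted functions
one_electron = 3 * n * n * per_pair          # overlap + kinetic + nuclear attraction: 2187
two_electron = n**4 * per_pair // 4          # roughly a quarter of 9^4 pairs-of-pairs: ~14,762
print(one_electron, two_electron)            # about 17,000 primitive integrals in total
```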

The only way to do this calculation is by computer. Literally thousands of calculations go into making ‘F’ and then hundreds more to create ‘C’ and this needs to be done repeatedly. It’s all an intractably huge amount of busy work that begs for automation.

 

Solving Big Eigenvalue Problems

One big problem I faced in dealing with the Roothaan equations was trying to understand how to solve big eigenvalue problems.

Most of my experience with calculating eigenvalues has been while working analytically by hand. You may remember this sort of problem from your linear algebra class: you set up a characteristic equation by subtracting a dummy variable for the eigenvalue from the diagonal, setting the determinant of the matrix equal to zero, and then solving that characteristic equation. It’s kind of complicated by itself and depends on the characteristic polynomial being factorable. A 3 x 3 matrix produces a cubic equation –which you hope to God factors nicely because nobody ever wants to solve more than the quadratic equation. If it doesn’t, you are up a creek without a paddle even at just 3 x 3.

For the example of the methane minimal basis set, the resulting matrices of the Roothaan equation are 9 x 9.

This is past where you can go by hand. Ideally, one would prefer to not be confined to molecules as small as molecular hydrogen, so you need some method of calculating eigenvalues that can be scaled and –preferably– automated.

This was actually where I started trying to write my program. Since I didn’t know at the time whether I would be able to build the matrix tools necessary to approach the math, I used solving the eigenvalue problem as my barometer for whether or not I should continue. If I couldn’t do even this, there would be no way to approach the Roothaan equations.

The first technique I figured out was a technique called Power Iteration. At the time, this seemed like a fairly straight-forward, accessible method to pull eigenvalues from a big matrix.

To perform power iteration, all you do is operate a square matrix onto a vector, normalize the resulting vector, then act the matrix again on that new vector. If you do this 10,000 times, you will eventually find a point where the resulting vector is just the initial vector times some constant factor. The constant ends up being the biggest eigenvalue in the matrix and the normalized vector is the associated eigenvector. This gives only the biggest eigenvalue in the matrix; you access the next smaller eigenvalue by “deflating” the matrix. This is accomplished by forming the outer product of the eigenvector with itself, multiplying it by the eigenvalue, and subtracting the result from the initial matrix, which produces a new matrix where the already determined eigenvalue has been “deactivated.” Performing this set of actions repeatedly allows you to work your way through each eigenvalue in the matrix.
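A compact sketch of the method as described, for a real symmetric matrix (this is generic numerical-recipe material rather than a transcription of my program):

```python
import numpy as np

def power_iteration_spectrum(A, n_values, n_steps=10000, tol=1e-12):
    """Pull eigenpairs out of a symmetric matrix one at a time by power iteration plus deflation."""
    A = A.astype(float).copy()
    pairs = []
    for _ in range(n_values):
        v = np.random.rand(A.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(n_steps):
            w = A @ v
            norm = np.linalg.norm(w)
            if norm < tol:                             # a (near-)zero eigenvalue: the iteration stalls here
                break
            w /= norm
            if min(np.linalg.norm(w - v), np.linalg.norm(w + v)) < tol:
                v = w
                break                                  # the vector has stopped changing (up to sign)
            v = w
        lam = v @ A @ v                                # Rayleigh quotient gives the eigenvalue
        pairs.append((lam, v))
        A = A - lam * np.outer(v, v)                   # "deflate": deactivate the eigenvalue just found
    return pairs
```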

There are some difficulties with Power Iteration. In particular, you’re kind of screwed if an eigenvalue happens to be zero since you no longer have the ability to find the eigenvector.

Much of my initial work on self-consistent field theory used Power Iteration as the core solving technique. When I started to run into significant problems later in my efforts, and couldn’t tell whether my problems were due to the way I was finding eigenvalues or some other darker crisis, I ended up switching to a different solving technique.

The second solving technique that I learned was Jacobi Diagonalization. For Power Iteration, I stumbled over the technique from a list of computational eigenvalue calculation methods discovered on-line. Jacobi, on the other hand, was recommended by the Szabo and Ostlund Quantum Chemistry book. Power Iteration is an iterative method while Jacobi is a direct calculation method.

To my somewhat naive eye, the Jacobi method seems ready-made for quantum mechanics problems. A necessary precondition for this method is that the matrix of choice be at least a symmetric matrix, if not actually a hermitian matrix. And, since quantum chemistry seems to mostly reduce its basis sets to non-complex symmetric forms, the Fock matrix is assured to be symmetric as a result of the hermiticity of ground-level quantum mechanics.

Jacobi operates on the observation that the off-diagonal elements of a symmetric matrix can be reduced to zeros by a sequence of unitary rotations. The rotation matrix (called a Givens matrix) can be directly calculated to convert one particular off-diagonal element to a zero. If you do this repeatedly, you can work your way through each off-diagonal element, zeroing each in turn, until the matrix is diagonal. The classic version of the method proceeds in a particular order, where you pick the Givens matrix that zeros the largest off-diagonal element present on any particular turn. This largest element is referred to as “the pivot” since it’s the crux of a mathematical rotation. As the pivot is never assured to be in any particular spot during the process, the program must work its way through off-diagonal elements in an almost random order, picking only the largest present at that time.

Once the matrix is diagonalized, all the eigenvalues lie along the diagonal… easy peasy. Further, the product of all the Givens matrices is a unitary matrix containing the eigenvectors for each eigenvalue, encoded in order by column.
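A sketch of the classical Jacobi recipe just described (again an illustration, not my program): find the pivot, build the Givens rotation that zeroes it, apply, and repeat until the off-diagonal part is negligible.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-10, max_rotations=100000):
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                                      # accumulates the product of the Givens rotations
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)   # the pivot: largest off-diagonal element
        if off[p, q] < tol:
            break
        theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])   # angle that zeroes A[p, q]
        G = np.eye(n)
        G[p, p] = G[q, q] = np.cos(theta)
        G[p, q] = np.sin(theta)
        G[q, p] = -np.sin(theta)
        A = G.T @ A @ G
        V = V @ G
    return np.diag(A), V                               # eigenvalues on the diagonal, eigenvectors by column
```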

In the process of learning these numerical methods, I discovered a wonderful diagnostic tool for quantum mechanics programs. As a check for whether or not Jacobi can be applied to a particular matrix, one should look at whether or not the matrix is symmetric. This isn’t an explicit precondition of Power Iteration and I didn’t write any tools to look for it while I was relying on that technique. After I started using Jacobi, I wrote a tool for checking whether or not an input matrix is symmetric and discovered that other routines in my program were failing in their calculations to produce the required symmetric matrices. This diagnostic helped me root out some very deep programming issues elsewhere in what turned out to be a very complex program (for an idiot like me, anyway).
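The check itself is tiny, which is part of its charm; something along these lines:

```python
import numpy as np

def check_symmetric(M, tol=1e-10, name="matrix"):
    # complain loudly if a matrix that the physics says must be symmetric is not
    if not np.allclose(M, M.T, atol=tol):
        raise ValueError(f"{name} is not symmetric: some upstream routine is misbehaving")
```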

 

Dealing with the Basis

I made an early set of design choices in order to construct the basis. As mentioned in detail in my previous post, the preferred basis sets are Gaussian functions.

It may seem trivial to most people who may read this post, but I was particularly proud of myself for learning how to import a comma-delimited .csv file into a python list while converting select character strings into floating points. In the final version, I figured out how to exploit an exception as the decision for whether an input was intended to be text or a floating point number.
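The trick is roughly the following (a sketch of the idea rather than my actual parser):

```python
def read_table(path):
    """Read a comma-delimited file into a list of rows, converting numeric fields to floats."""
    rows = []
    with open(path) as f:
        for line in f:
            fields = []
            for token in line.strip().split(","):
                try:
                    fields.append(float(token))    # conversion succeeds: it's a number
                except ValueError:
                    fields.append(token)           # conversion throws: keep it as text
            rows.append(fields)
    return rows
```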

In the earliest forms of the program, most of my work was done in lists, but I eventually discovered python dictionaries. If you’re working in python, I caution you not to use lists for tasks that require searchability: dictionaries are easier! For any astute computer scientists out there, I have no doubt they can chime in with plenty of advice about where I’m wrong, but boy oh boy, I fell in love with dictionaries really quick when I discovered the functionality they promote.

For dealing with the basis sets, python has a fair number of tools in its math and cmath libraries. This is limited to basic level operations, however. It may seem a no-brainer, but teaching a program how to do six-dimensional integrals is really not as easy as discovering the right programming library. This intended task defined my choices for how I stored my basis sets.

Within the academic papers, most of the gaussian basis sets can be found in tables stripped of everything but the vital constants. The orbitals are fabricated as “contractions” of “primitives,” where a contraction is simply a sum of several bare-bones primitive gaussian functions, each identified uniquely by a weighting coefficient and an exponential coefficient. There is frequently also a standard exponent to scale a particular contraction to fit the orbitals for a desired atom. The weighting coefficient tells the magnitude of the gaussian function (often chosen so that the sequence of primitives has an overlap integral that is normalized to equal 1) while the exponential coefficient tells how wide the bell-curve of a particular gaussian primitive spreads. The standard exponent is then applied uniformly across an entire contraction to make it bigger or smaller for a particular atom.
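Written out for an s-type contraction (using the names from the paragraph above: d for the weighting coefficients, α for the exponential coefficients, ζ for the standard scaling exponent), an orbital centered on an atom at R_A looks like

```latex
\chi_A(\mathbf{r}) \;=\; \sum_i d_i \left(\frac{2\,\zeta^2\alpha_i}{\pi}\right)^{3/4} e^{-\zeta^2\alpha_i\,|\mathbf{r}-\mathbf{R}_A|^2} ,
```

with the sum running over the primitives in the contraction (three of them for STO-3G) and the scaling applied by multiplying every exponent by ζ².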

In the earliest papers, these gaussian contractions are frequently intended to mimic atomic orbitals. The texts often refer to “linear combinations of atomic orbitals” when calculating molecular orbital functions. In later years, it seems pretty clear that these gaussian contractions are not necessarily specified atomic orbitals so much as an easy basis set which has the correct densities to give good approximations to atoms with relatively few functions. It’s simply an economical basis for the task at hand.

Since python doesn’t automatically know how to deal specifically with gaussian functions, my programming choice was to create a gaussian primitive class. Each primitive object automatically carried around all the numbers needed to identify a particular gaussian primitive. Within the class there were a few class methods necessary to establish the object and identify the associated constants and position. The orbitals were then lists of these primitive class objects. Later in my programming, I even learned how to make the class object callable so that the primitive could spit out the value of the gaussian for a particular position in space.

This is certainly trivial to the overall story, but it was no small amount of work. That I learned how to do class objects is a point of pride.
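
Stripped down to the s-type case, the idea looks something like this (a simplified sketch, not my program verbatim):

    import math

    class GaussianPrimitive:
        # One bare-bones s-type primitive: a weighting coefficient, an
        # exponential coefficient and a center in space.
        def __init__(self, coeff, alpha, center):
            self.coeff = coeff
            self.alpha = alpha
            self.center = center             # (x, y, z)

        def __call__(self, x, y, z):
            # value of the primitive at a point in space
            dx = x - self.center[0]
            dy = y - self.center[1]
            dz = z - self.center[2]
            return self.coeff * math.exp(-self.alpha * (dx*dx + dy*dy + dz*dz))

    # an orbital (a contraction) is then just a list of primitives:
    # orbital = [GaussianPrimitive(...), GaussianPrimitive(...), ...]
    # value_at_point = sum(p(x, y, z) for p in orbital)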

Constructing generalized basis functions turns out to be a distinct set of actions from fabricating the particular basis to attack a select problem. After a generalized basis is available to the program, I decided to build The Basis as a list identified by atoms at positions in space. This gave a list of lists, where each entry in the top list was itself a list of primitive objects constituting an orbital, and each orbital was associated with a particular center in space connected to an atom. It’s not strictly necessary that these basis functions be centered at the locations of the atoms, but their distributions are ideally suited to the problem if they are.

With a basis in hand, what do you do with it?

 

The Nightmare of Integration

I’ve been debating how much math I actually want to add to this post. I think probably the less the better. If any human eyes ever actually read this post and are curious about the deeper mathematical machinations, I will happily give whatever information is in my possession. In the end, I want merely to tell the story of what I did.

For calculating a quantum mechanical self-consistent electron field, there is a nightmare of integrals that need to be evaluated. These integrals are a cutting rain that forms the deluge which will warn any normal human being away from trying to do this calculation. The count isn’t small and it does not scale linearly with the size of the problem. It gets big very fast.

The basic paper which sits at the bottom of all contracted gaussian basis sets is the paper by Frank Boys in 1950. I ended up in this paper after I realized that Szabo and Ostlund had no intention of telling me how to do anything deeper than s-orbital gaussians. Being poor, I can’t just randomly buy textbooks to search around for a particular author who will tell me how to do all the integrals I desired. So, I took advantage of being in an academic position and I turned over the scientific literature to find how this problem was actually addressed historically. This got me to Boys.

Boys lays out the case as to why one would ever use gaussian functions in these quantum mechanical calculations. As it turns out, there are basically just four integral forms needed to perform Hartree-Fock: an overlap integral, a kinetic energy integral built from several derivatives of the overlap, a nuclear attraction integral over one electron’s coordinates, and an electron repulsion integral over two electrons’ coordinates (which covers both repulsion and exchange interaction). For many basis function types that you might use, including the vanilla hydrogenic orbitals, each of these integrals has to be worked out anew for the particular functions you put into them. No single method covers the whole family, and some of the integrals may not even have analytic solutions. This makes the problem computationally very expensive in many cases, and sometimes effectively impossible. With the gaussian functions, you can perform this clever trick where a p-type gaussian can be accessed from a derivative of an s-type gaussian: if the derivative is taken with respect to a free variable (here, the center of the gaussian) rather than the variable of integration, the operations of integration and differentiation can be interchanged. Instead of doing the integral on the p-type gaussian, you do the integral on the s-type gaussian and then take the derivative to find the associated result for the p-type. Derivatives are always easier!
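
To see the trick concretely, differentiating an s-type gaussian with respect to one coordinate of its own center manufactures a p-type gaussian (my transcription of the standard identity):

    \frac{\partial}{\partial A_x} e^{-\alpha |\mathbf{r} - \mathbf{A}|^2} = 2\alpha\,(x - A_x)\,e^{-\alpha |\mathbf{r} - \mathbf{A}|^2}

The factor of (x − A_x) out front is exactly what makes the result p-type, and A_x is not the variable of integration, which is why the derivative can be pulled outside the integral.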

Boys showed that all the needed integrals can be analytically solved for the s-type gaussian function, meaning that any gaussian class function can be integrated just by integrating the s-type function. In the process he introduced a crazy special function related to the Error Function (erf) which is now often called the “Boys Function.” The Boys function is an intriguing machine because it’s a completely new way of dealing with the 1/distance propagator that makes electromagnetism so hard (for one example, refer to this post).
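
To give a taste of what “analytically solvable” buys you, the overlap between two bare s-type primitives has a tidy closed form (a textbook result, written here as a little function rather than anything from my actual program):

    import math

    def s_overlap(alpha, A, beta, B):
        # closed-form overlap of exp(-alpha*|r-A|^2) and exp(-beta*|r-B|^2),
        # unnormalized s-type primitives centered at points A and B
        p = alpha + beta
        AB2 = sum((a - b) ** 2 for a, b in zip(A, B))
        return (math.pi / p) ** 1.5 * math.exp(-alpha * beta * AB2 / p)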

Boys Function family with member ‘n’:

3 Boys function2
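
In standard notation (my transcription of the definition), the family is

    F_n(x) = \int_0^1 t^{2n}\, e^{-x t^2}\, dt

with the n = 0 member reducing to an erf expression, F_0(x) = (1/2)\sqrt{\pi/x}\,\mathrm{erf}(\sqrt{x}).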

While this is a simplification, I’ll tell you that it’s still not just easy.

I did trawl around in the literature for a time looking for more explicit ways of implementing this whole gaussian integral problem and grew depressed that most of what I found seemed too complicated for what I had in mind. Several sources gave useful insight into how to systematize some of these integrals for higher angular momentum gaussians, but not all of them. Admittedly, in my earliest pass, I think I didn’t understand the needs of the math well enough. Dealing with this whole problem ended up being an evolving process.

My earliest effort was an attempt to simply, methodically teach the computer how to do symbolic derivatives using chain rule and product rule. This was not a small thing and I figured that I would probably have a slow computer program, but –by god– it would be mine. I had a semi-working system which achieved this.

A parallel problem that cropped up here was learning how to deal with the Boys function. I started out with but one form of the Boys function and knew that I needed erf to use it. Unfortunately, I rapidly discovered that I needed to take derivatives of the Boys function (each derivative is related to the ‘n’ in the function above). I trawled around in the literature for a long time trying to figure out how to perform these derivatives and eventually worked out, analytically, a series expansion built around erf that successfully formed derivatives of the Boys function based on the derivatives of erf. Probably the smartest and least useful thing I did during this entire project.

In its initial form, using these methods, I was able to successfully calculate the minimal basis hydrogen molecule and the HeH+ molecule, both of which have a minimal basis that contains only s-type gaussian functions.

hydrogen density

This image is the electron density of the hydrogen molecule, plotted using the python mayavi package. Literally just two sigma-bonded hydrogen atoms, where the scales are in Bohr radii (1 = 0.53 Angstrom). This is the simplest quantum chemistry system and the hardest one I could handle using strictly my own methods. Sadly, this system can be done by hand in Hartree-Fock.

When I tried to use more complex basis functions for atoms above lithium, trying things like carbon monoxide (CO), my system failed rather spectacularly. Many of the difficulties appeared as non-symmetric matrices (mentioned above) and inexplicably ballooning integral values. The Hartree-Fock loop would oscillate rather than converge. Most of this tracked back to issues in my integrals.

One of the problems I discovered was that my erf-based form of the Boys function couldn’t handle input values approaching zero. I had patched in the zero-limit value by hand, since the erf form isn’t defined right at zero, but I found that the formula starts to freak out and generate crazy numbers when you get really close to zero, say within 10^-5. So, I got the appropriate values directly at zero thanks to my patch and mostly good values above 10^-4, but crazy weird values in this little window near zero. In one version of the correction, I simply introduced a cut-off to send my Boys function to its zero-limit value when I got sufficiently close, but this felt like a very inelegant fix to me. I searched the literature and around on-line and had myself 6 different representations of the Boys function before I was finished. The most stable version was a formulation that pulled the Boys function and all its derivatives out of the confluent hypergeometric function 1F1, which I was able to find implemented in the scipy special functions library (I felt fortunate; I had added scipy to my python environment for a completely unrelated reason that turned out to be a dead-end and ended up needing this special function… lucky!)

2 Boys function

I write the formulation here because somebody may someday benefit from it. In this, Fn differs from the nth derivative of the Boys function by a factor of (−1)^n (the ‘n’ functions are all positive while the derivatives of the Boys function alternate in sign). The official “Boys Function” is the n = 0 member.
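
In code, that formulation boils down to very little (my sketch of it, using scipy’s confluent hypergeometric function):

    from scipy.special import hyp1f1

    def boys(n, x):
        # F_n(x) = 1F1(n + 1/2; n + 3/2; -x) / (2n + 1)
        # well behaved all the way down to x = 0, where it returns 1/(2n + 1)
        return hyp1f1(n + 0.5, n + 1.5, -x) / (2.0 * n + 1.0)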

Initially, I could not distinguish whether the errors in my program were in the Boys function or in the method being used to form the gaussian derivatives. I knew I was having problems with my initial implementation of the Boys function from a modified erf, but problems persisted no matter how I altered my Boys function. The best conclusion was that the Boys function was not my only stumbling block. Eventually I waved the white flag and surrendered to the fact that my derivative routine was not going to cut it.

This led to a period where I was back in the literature learning how gaussian derivatives were being made by the professional quantum chemists. I found a number of different strategies: Pople and Hehre produced a paper in 1978 outlining a method used in their Gaussian software which apparently performs a cartesian rotation to make most of the busy work go away, and which is supposedly really fast. There was a method by Dupuis, Rys and King in the 1970s which generates higher angular momentum integrals by a method of quadrature. A paper by McMurchie and Davidson in 1978 detailed a method of recursion which generates the higher angular momentum gaussians from an s-type seed using Hermite polynomials. Another by Obara and Saika in 1986 broke the system down into a three center gaussian integral and used recurrence relations to generate the integrals by another method of recursion. And still a further paper involving Pople elaborated something similar (I found the paper, but never read this one).

Because I had found a secondary source from a more modern quantum chemist detailing the method, I focused on McMurchie and Davidson. This method involved several fairly interesting techniques that I was able to successfully program. I learned here how to program recursive functions since McMurchie and Davidson calculate their integrals by a recurrence relation.
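
For concreteness, here is my reading of that recurrence for the one-dimensional overlap case, following the way it’s usually presented (for example in Helgaker’s lecture notes); I make no promise that this matches the original paper line for line:

    import math

    def E(i, j, t, Qx, a, b):
        # Hermite expansion coefficient E_t^{ij} for the product of two 1D
        # cartesian gaussians with exponents a and b whose centers are
        # separated by Qx = Ax - Bx.
        p = a + b
        q = a * b / p
        if t < 0 or t > i + j:
            return 0.0
        if i == j == t == 0:
            return math.exp(-q * Qx * Qx)          # gaussian product prefactor
        if j == 0:
            # decrement the index on the first gaussian
            return (E(i - 1, j, t - 1, Qx, a, b) / (2 * p)
                    - (q * Qx / a) * E(i - 1, j, t, Qx, a, b)
                    + (t + 1) * E(i - 1, j, t + 1, Qx, a, b))
        # decrement the index on the second gaussian
        return (E(i, j - 1, t - 1, Qx, a, b) / (2 * p)
                + (q * Qx / b) * E(i, j - 1, t, Qx, a, b)
                + (t + 1) * E(i, j - 1, t + 1, Qx, a, b))

    def overlap_1d(i, j, a, b, Ax, Bx):
        # 1D overlap of x^i exp(-a(x-Ax)^2) with x^j exp(-b(x-Bx)^2), unnormalized
        return E(i, j, 0, Ax - Bx, a, b) * math.sqrt(math.pi / (a + b))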

About this time, I was also butting my head against the difficulty that I had no way of really checking whether my integrals were functioning properly or not. Szabo and Ostlund give values only for integrals involving s-type orbitals; I had nothing in the way of numbers for p-type orbitals. I could tell that my Hartree-Fock routine was not converging, but I couldn’t tell why. I tried making a comparison calculation in Mathematica, but the integration failed miserably. I might’ve been able to go further with Mathematica if the learning curve for that programming weren’t so steep –when the functions aren’t directly in its colossal library, implementation can become pretty hard in Mathematica. Beyond hammering on the Boys function until I could build nothing more stable, and checking whether or not my routines were producing symmetric matrices, I had no other way of telling where the faults existed. These integrals are not easy by hand.

I dug out a paper by Taketa, Huzinaga and O-hata from 1966, which was about the only paper I could find that actually reported values for these integrals given a particular set-up. Apparently, after 1966, it stopped being a novelty to show that you could calculate these integrals, so nobody has actually published values in the last fifty years! Another paper by Shavitt and Karplus a few years earlier references values calculated at MIT still earlier, but aside from these, I struggled to find reference values. This experience was a formative one because it shows how hard you have to work to be competitive if you’re not actually in the in-club of the field –for modern workers, the problem is solved and you refer to a program built during that era which can do these operations.

Comparing to Taketa, using the McMurchie-Davidson method, I was able to confirm that my overlap and kinetic energy integrals were functioning properly. The nuclear attraction integral was a bust, and the electron repulsion integrals were no better: they worked for s-orbitals, but not for p-type and higher. Unfortunately, Taketa and company had mistransliterated one of their basis functions from a previous paper, leading me to worry that maybe the paper was also wrong about the values it reported. I eventually decided that Shavitt was probably not wrong too, meaning that there was still something amiss with my integration, even though I had hammered on McMurchie until smoke was coming out of my ears and I was sure I had implemented it correctly.

This was rock bottom for me. You can sort of see the VH1 special: and here he hit rock bottom. I didn’t know what else to do; I was failing at finding alternative ways to generate appropriate reference checks and simply could not see what was wrong in my programming. I had no small amount of doubt about my fitness to perform computer programming.

My selected path forward was to learn how to implement Obara-Saika and to turn that against McMurchie. This is also a recursive method that performs exactly the same derivatives without actually doing a long-form derivative. Initially, Obara-Saika also gave values different from Taketa and from Shavitt, but I was able to track down a stray −1 that changed everything. Suddenly, Obara-Saika was giving values right on with Taketa.
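
For comparison, the Obara-Saika-style recurrence for the same one-dimensional overlap looks like this (again my sketch of the standard presentation, not a transcription of my program):

    import math

    def os_overlap_1d(i, j, a, b, Ax, Bx):
        # Obara-Saika style recursion for the 1D overlap of two unnormalized
        # cartesian gaussians x^i exp(-a(x-Ax)^2) and x^j exp(-b(x-Bx)^2).
        p = a + b
        mu = a * b / p
        Px = (a * Ax + b * Bx) / p            # the gaussian product center
        def S(i, j):
            if i < 0 or j < 0:
                return 0.0
            if i == 0 and j == 0:
                return math.sqrt(math.pi / p) * math.exp(-mu * (Ax - Bx) ** 2)
            if i > 0:
                # transfer angular momentum down from the first index
                return ((Px - Ax) * S(i - 1, j)
                        + ((i - 1) * S(i - 2, j) + j * S(i - 1, j - 1)) / (2 * p))
            # otherwise work on the second index
            return ((Px - Bx) * S(i, j - 1)
                    + (i * S(i - 1, j - 1) + (j - 1) * S(i, j - 2)) / (2 * p))
        return S(i, j)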

When I tried a comparison by hand of the outcomes of McMurchie-Davidson against those of Obara-Saika on the very simplest case of p-type primitives, I found that these two methods produced different values. For the same problem, Obara-Saika does not give the same result as McMurchie-Davidson. I conclude either that I misunderstood McMurchie-Davidson, which is possible, or that the technique is willfully mangled in the paper to protect a potentially valuable piece of work from competitors, or that it’s simply wrong (somebody tell Helgaker: he actually teaches this method in classes where he reproduces it faithfully, part of why I originally trusted it). I do not know why McMurchie-Davidson fails in my hands because everything they do looks right!

 

Pretty Pictures

After a huge amount of work, I broke through. My little python program was able to reproduce some historical Hartree-Fock calculations.

water density filter low 2

This image is a mayavi plot depicting the quantum mechanical electron density of water (H2O). The oxygen is centered while the two hydrogens are at the points of the stems. This calculation used the minimal STO-3G basis set, which uses three primitive gaussians for each orbital. Each hydrogen contributes only a 1s orbital while the oxygen contributes a 1s, a 2s and three 2p orbitals in the x, y and z directions. The image above is threshold filtered to show the low density parts of the distribution, where the intensities are near zero; this was necessary because the oxygen has so much higher density than the hydrogens that you cannot see anything but the inner shell of the oxygen when plotting on absolute density. This system has ten electrons in five closed molecular orbitals, and the electron density represents the superposition of those orbitals. The reported energies were right on the values expected for an STO-3G calculation.

With water, I worried initially that the calculation wouldn’t converge if I didn’t make certain symmetry considerations about the molecule, but that turned out to be unnecessary. After I solved my integral problems, the calculation converged immediately.

I also spent some time on methane using the same tools and basis…

Density low threshold

With methane (CH4) you can see quite clearly the tetrahedral shape of the molecule with the carbon at center and the hydrogens arranged in a halo around it. This image was also filtered to show the low density regions.

I had some really stunning revelations about organic chemistry when I examined the molecular orbital structure of methane somewhat more closely. Turns out that what you learn in organic chemistry class is a lie! I’ll talk about this in an independent blog post because it deserves to be highlighted, front and center.

As kind of a conclusion here, I will note that STO-3G is not the most perfect modern basis set for doing quantum mechanical calculations… and even with the most modern basis, you can’t quite get there. Hartree-Fock does not include the Coulomb correlation between electrons of differing spin and therefore converges to a limit called the Hartree-Fock limit. The difference between molecular energies calculated at the Hartree-Fock limit and those actually observed in experiment is referred to as the correlation energy, which can be recovered with greater accuracy by more modern post-Hartree-Fock techniques built on higher quality basis sets than STO-3G. With a basis of infinite size and with calculations that include the correlation energy, you get close to the truth. What is seen here is still just another approximation… better than the last, but still just short of reality.

My little python program probably can’t go there without a serious redesign that would take more time than I currently have available (and would probably involve me learning FORTRAN). The methane calculation took 12 seconds –as compared to molecular hydrogen, which took 5% of a second. Given the scaling of the problem (the two-electron integrals grow roughly as the fourth power of the number of basis functions), benzene (6 carbons and 6 hydrogens) would take something close to 2 hours to calculate and maybe all night to plot. And this is using only STO-3G, three gaussians per orbital, which is dinky and archaic compared to a more modern basis set that might have 50 or 60 functions for a single atom. Compared to what modern programs can do, benzene itself is but a toy.

The Classical version of NMR

As I’ve been delving quite deeply into numerical solutions of quantum mechanics lately, I thought I would take a step back and write about something a little less… well… less. One thing about quantum mechanics that is sometimes a bit mind-boggling is that classical interpretations of certain systems can be helpful to understand things about the quantum mechanics of the same system.

Nuclear magnetic resonance (NMR) dovetails quite nicely with my on-going series about how magnetism works. You may be familiar with NMR from a common medical technique that makes heavy use of it: Magnetic Resonance Imaging (MRI). The imaging technique of MRI makes use of NMR to build density maps of the human anatomy. The imaging technique accomplishes this feat by using magnetism to excite a radio signal from NMR active atomic nuclei and then create a map in space from the 3D distribution of NMR signal intensity. NMR itself is due specifically to the quantum mechanics of spin, particularly spin flipping, but it also has a classical interpretation which can aid in understanding what these more difficult physics mean. The system is very quantum mechanical, don’t get me wrong, but the classical version is actually a pretty good analog for once.

I touched very briefly on the entry gate to classical NMR in this post. The classical physics describing the behavior of a compass needle depicts a magnetized needle which rotates in order to follow the lines of an external magnetic field. For a magnetic dipole, compass needle-like behavior will tend to dominate how that dipole interacts with a magnetic field unless the rotational moments of inertia of that dipole are very small. In that case, the needle no longer swings back and forth. So, what does it do?

Let’s consider again the model of a compass needle awash in a uniform magnetic field…

1 compass needle new

This model springs completely from part 3 of my magnetism series. The only difference I’ve added is that the dipole points in some direction while the field is along the z-axis. The definition of the dipole is cribbed straight from part 4 of my magnetism series and expresses quantum mechanical spin as ‘S.’ We can back off from this a little bit and recognize that spin is simply angular momentum, which I’ll call ‘L’ instead so that I can slip away from the quantum. In this particular post, I’m not delving into quantum!

2 magnetic dipole classic nmr

In this formula, ‘q’ is the electric charge, ‘m’ is the mass of the object and ‘g’ is the dimensionless g-factor, which corrects the classical relation between angular momentum and magnetic moment to account for spin.

I will crib one more of the usual suspects from my magnetism series.

3 torque expression

I originally derived this torque expression to show how compass needles swing back and forth in a magnetic field. In this case, it helps to stop and think about the relationship between torque and angular momentum. It turns out that these two quantities are related in much the same manner as plain old linear momentum and force. You acquire torque by finding out how angular momentum changes with time. Given that magnetic moment can be expressed from angular momentum, as can torque, I rewrite the equation above in terms of angular momentum.

4 rewritten torque

This differential equation has the time derivative of angular momentum (signified in physicist shorthand as the ‘dot’ over the quantity of ‘L’) equal to a cross product involving angular momentum and the magnetic field. If you decompress the cross product, you can get to a fairly interesting little coupled differential equation system.

5 decompressing cross product

 

This simplifies the cross product to the two relevant surviving terms after considering that the B-field only lies along one axis. This gives a vector equation…

6 opening diff eqn

I’ve expressed the vector equation in component form so that you can see how it breaks apart. In this, you get three equations, one for each hatted vector, connecting to each dimension of the three dimensional angular momentum. These can all be written separately.

7 differential equations

I’ve grouped the B-field into the coefficient because it’s a constant, and I’ve tried to take control of my reckless highlighting problem so that you can see how these differential equations are coupled. The z-component of the angular momentum is easy since it must solve to a constant and since it’s decoupled from ‘x’ and ‘y’. The other two are not so easy. The coefficient is a special quantity called the Larmor frequency.

8 Larmor frequency

This gives us a fairly tidy package.

9 revised differential eqn

I’ve always loved the solution of this little differential equation. There’s a neat trick here from wrapping the ‘x’ and ‘y’ components up as the two parts of a complex number.

10 complex number

You then just take a derivative of the complex number with respect to time and work your way through the definitions of the differential equation.

11 Complex num diff eqn

After working through this substitution, the differential equation is reduced to maybe the simplest first order differential equation you could possibly solve. The answer is but a guess.

12 soln 1

Which can be broken up into the original ‘x’ and ‘y’ components of angular momentum using the Euler relation.

12 soln 1a

There’s an argument here that ‘A’ is determined by the initial conditions of the system and might contain a complex phase, but I’m going to just say that we don’t really care. You can more or less just say that the total angular momentum is distributed between the x, y and z components, part of it in a linear combination that lies in the x-y plane and the rest pointed along the z-axis.

13 basis solution

And, as the original casting of the problem is in terms of the magnetic dipole moment, I can switch the angular momentum back to the dipole moment. Specifically, I can use the pseudo-quantum argument that the individual dipoles possess a spin-1/2 magnitude of angular momentum, hbar over 2.

14 classical dipole moment

This gives an expression for how the classical atom-sized spin dipole will move in a uniform magnetic field. The absolute value on the charge in the coefficient constrains the expression to reflect only the size of the magnetic moment, given that the angular momentum was taken to be only a magnitude. Charge appears a second time inside the sine and cosine terms through the Larmor frequency: if the charge is negative, for example, the sign flip on the frequency flips the sign of the sine term while leaving the cosine unaffected.
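
If you don’t trust the algebra, you can watch the precession fall out of a quick numerical integration of the torque equation (the constants here are arbitrary, chosen only for illustration):

    import numpy as np
    from scipy.integrate import solve_ivp

    coeff = 1.0                                  # the g*q/2m coefficient, set to 1
    B = np.array([0.0, 0.0, 1.0])                # static field along z

    def dLdt(t, L):
        return coeff * np.cross(L, B)            # dL/dt = (gq/2m) L x B

    L0 = [1.0, 0.0, 1.0]                         # start tipped 45 degrees from z
    sol = solve_ivp(dLdt, (0.0, 20.0), L0, max_step=0.01)

    # L_z stays put while L_x and L_y trade off sinusoidally at the Larmor
    # frequency: the tip of the vector sweeps out a cone.
    print(sol.y[2].min(), sol.y[2].max())        # both very close to 1.0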

15 larmor precession

A classical magnetic dipole trapped in a uniform magnetic field pointed along the z-axis will undergo a special motion called gyroscopic “precession.” In this picture, the ellipses are drawn to show the surfaces of a cylinder in order to follow the positions of the dipole moment vectors with time. Here, the ellipses are _not_ an electrical current loop as depicted in the first image above. The dipole moment vector traces out the surface of a cone as it moves; when viewed from above, the tip of the dipole moment with a +q charge sweeps clockwise while one with a -q charge sweeps counterclockwise. This motion is very similar to that of a child’s top or a gyroscope…

gyroscope

This image is taken from Real world physics; hopefully, you’ve had the opportunity to play with one of these. As mentioned, the direction of the gyroscopic motion is determined by the sign of the charge of the dipole moment. As also mentioned, this is a classical model of the motion and it breaks down when you start getting into the quantum mechanics, but it is remarkably accurate in explaining how a tiny, atom-sized dipole “moves” when under the influence of a magnetic field.

Dipolar precession appears in NMR during the process of free induction decay. As illustrated in my earlier blog post on NMR, you can see the precession:

220px-nmr_fid_good_shim_en-svg

In the sense of classical magnetization, you can see the signal from the dipolar gyroscopes in the plot above. One period of oscillation in this signal is one full sweep of the dipole moment around the z-axis. As the signal here is nested in the “magnetization” of the NMR sample, the energy bleeding out of the magnetic dipoles into the radiowave signal that is actually observed causes the precession in the magnetization to die down until the magnetization lies fully along the z-axis (again, the classical view!) In its relaxed state, the magnetization points along the axis of the external field, much as a compass needle does. The compass needle, of course, can’t precess the way an atomic dipole moment can. And, as I continue repeatedly to point out, this is a classical interpretation of NMR… where the quantum mechanics are appreciably similar, though decidedly not the same.

Because such a rotating dipole moment cannot oscillate this way indefinitely without radiating its energy away as electromagnetism, some action must be undertaken to set the dipole into precession in the first place. You must somehow tip it away from pointing along the external magnetic field, at which time it will begin to precess.

In my previous post on the topic, I gave several different explanations for how the dipoles are tipped in order to excite free induction decay. Verily, I said that you blast them with a radio frequency pulse in order to tip them. That is true, but very heavy handed. Classical NMR offers a very elegant interpretation for how dipoles are tipped.

To start, I will dial back to the picture we started with for the precession oscillation. In this set up, the dipole starts in a relaxed position pointing along the z-axis B-field and is subjected to a radio frequency pulse that is polarized so that the B-field of the radio wave lies in the x-y plane. The Poynting vector is somewhere along the x-axis and the radiowave magnetic field is along the y-axis.

16 setup 2nd field.png

In this, the radiowave magnetic field is understood to be much weaker than the powerful static magnetic field.

You can intuitively anticipate what must happen for a naive choice of frequency ‘ω.’ The direction of the magnetic dipole will bobble a tiny bit in an attempt to precess around the superposition of the B0 and B2 magnetic fields. But, because the B0 field is much stronger than the B2 field, the dipole will remain pointing nearly entirely along the z-axis. We could write it out in the torque equation in order to be explicit.

17 2 field torque

Without thinking about the small time dependence of the B2 field, we know the solution to this equation from the work above for atomic scale dipoles: the Larmor frequency would just depend on the vector sum of the two fields. This is of course a very naive response, and the expected precession would be very small and hard to detect since the dipole is not displaced very far from the average direction of the field at any given time (again expecting B2 to be very small). And, if B2 is oscillatory, there is no point where the time average of the total field lies off the z-axis. The static field tends to dominate and the precession would be weak at best.

Now, there is a condition where an arbitrarily weak B2 field can actually have a major impact on the direction of the magnetic dipole moment.

18 split field

This series of algebraic manipulations takes a cosinusoidal radiowave B-field and splits it into two parts. If you squint closely at the math, the time dependent B-fields present in the last line will spring out to you as counter-rotating magnetic fields. I got away with doing this by basically adding zero.

19 counter rotation

Why in the world did I do this? This seems like a horrible complexification of an already hard-to-visualize system.

To understand the reason for doing this, I need to make a digression. In physics, one of the most useful and sometimes overlooked tools you can run across is the idea of frame of reference. Frame of reference is simply the circumstance by which you define your units of measurement. You can think about this as being synonymous with where you have decided to park your lawn chair in order to take stock of the events occurring around you. In watching a train trundle past on its tracks, clearly I’ve decided my frame of reference is sitting someplace near the tracks where I can measure that the train is moving with respect to me. I can also examine the same situation from inside the train car looking out the window, watching the scenery scroll past. Both situations will yield the same mathematical observations from two different ways of looking at the same thing.

In this case, the frame of reference that is useful to step into is a rotating frame. If you’re on the playground, when you sit down on a moving merry-go-round, you have shifted to a rotating frame of reference where the world will appear as if it rotates around you. Sitting on this moving merry-go-round, if you watch someone toss a baseball across over your head, you would need to add some sort of fictitious force into your calculation to properly capture the path the ball will follow from your point of view. This means reinventing your derivative with respect to time.

20 rotating frame time derivative

This description of the rotating frame time derivative is simply a matter of tabulating all the different vectors that contribute to the final derivative. (The vectors here are misdrawn slightly because I initially had the rotating vector backward.) The vector as seen in the frame of reference moves through the rotation according to displacements that are due both to the internal (in) rotation and whatever external (ext) displacements contribute to its final state. The portion due to the rotation (rot) is a position vector that is simply shifted by the rotation at an angle I called ‘α’ where the rotation is defined with positive being in the right-handed sense –literally backward (lefthanded) when seen from within the rotating frame. The angular displacement ‘α’ is equal to the angular speed ‘Ω’ times time as Ωt and it can be represented by a vector that is defined to point along the z-axis. The little bit of trig here shows that the rotating frame derivative requires an extra term that is a cross product between the vector being differentiated and the rotational velocity vector.

How does this help me?

21 intro rotating frame

I’ve once again converted torque and magnetic moment into angular momentum in order to reveal the time derivative. It is noteworthy here that the term involving the Larmor frequency directly, the first term on the right, looks very similar to the form of the rotating frame if the Larmor frequency is taken to be the angular velocity of the rotating frame. Moreover, I have already defined two other magnetic field terms that are rotating in opposition to each other, where I have not yet selected their frequencies of rotation.

22 cancelation is rotating frame

A rotating frame could be chosen where the term involving the static magnetic field will be canceled by the rotation. This will be a clockwise rotation at the speed of the Larmor frequency. If the frequency of rotation of B2 is chosen to be the Larmor frequency, the clockwise rotating B2 field term will enter into the rotating frame without time dependence while the frequency of the other term will double. As such, one version of the B2 field can be chosen to rotate with the rotating frame.

23 cancellation 2

In the final line, the primed unit vectors are taken to be in the rotating frame of reference. So, two things have happened here: the effect of the powerful static field is canceled out purely by the rotation of the rotating frame, and the effect of the counter-rotating field, spinning at twice the Larmor frequency in the opposite direction, averages away to nothing. The only remaining significant term is the field that is stationary with respect to the rotating frame, which I’ve taken to lie along the y’-axis.

The differential equation that I’ve ended up with here is exactly like the differential equation solved for the powerful static field by itself far above, but with a precession that will now occur at a frequency of ω2 around the y’-axis.

24 rotating frame solutions

If I take the starting state of the magnetic dipole moment to be relaxed along the z-axis, no component will ever exist along the y’-axis… the magnetic dipole moment will purely inhabit the z-x’ plane in the rotating frame.

25 motion in rotating frame

As long as the oscillating radiowave magnetic field is present, the magnetic dipole moment will continue to revolve around the y’-axis in the rotating frame. In the stationary frame, the dipole moment will tend to follow a complicated helical path both going around the static B-field and around the rotating y’-axis.

If you irradiate the sample with radiowaves for 1/4 of the period associated with the ω2 frequency, the magnetic dipole moment will rotate around until it lies in the x-y plane. You then shut off the radiowave source and watch as the NMR sample undergoes a free induction decay until the magnetization lies back along the static B-field.

This is the classical view of what’s going on: a polarized radiowave applied at the Larmor frequency will torque the atomic magnetic dipoles around in the sample until they are tipped far enough to precess freely. Once the radiowave is shut off, the magnetization performs a free induction decay. Applying the radiowave at the Larmor frequency is said to be driving the system at resonance; it’s only at resonance that the comparatively weak radiowave magnetic field can have a large effect, since the static B-field is otherwise strong enough to completely overwhelm it.
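
A quick numerical experiment makes the resonance condition vivid (arbitrary illustrative numbers, not a simulation of any real nucleus):

    import numpy as np
    from scipy.integrate import solve_ivp

    gamma = 1.0                      # the g*q/2m coefficient
    B0, B2 = 1.0, 0.02               # strong static field, weak radiowave field
    omega_L = gamma * B0             # Larmor frequency of the static field

    def dmu_dt(t, mu, omega):
        B = np.array([0.0, B2 * np.cos(omega * t), B0])
        return gamma * np.cross(mu, B)

    for omega in (omega_L, 0.5 * omega_L):
        sol = solve_ivp(dmu_dt, (0.0, 400.0), [0.0, 0.0, 1.0],
                        args=(omega,), max_step=0.05)
        print(omega, sol.y[2].min())
    # driven at the Larmor frequency, mu_z swings all the way down toward -1
    # (the dipole tips completely over); driven off resonance it barely budges.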

I’ve completely avoided the quantum mechanics of this system. The rotating frame of Larmor precession is fairly accurate for describing what’s happening here until you need to consider other quantum mechanical effects present in the signal, such as spin-spin coupling of neighboring NMR active nuclei. Quantum mechanics are ultimately what’s going on, but you want to avoid the headaches associated with that wherever possible.

I do have it in mind to rewrite a long defunct post that described the quantum mechanics of the two-state system specifically in how it describes NMR. It will happen someday, honestly!

The Quantum Mechanics in the Gap

A pet project of mine is trying to fill the space between my backgrounds to understand how one thing leads to another.

When a biochemist learns quantum mechanics (QM), it happens from a background where little mathematical sophistication is required; maybe particle-in-a-box appears in the middle of a low-grade Physical Chemistry class and many results of QM are qualitatively encountered in General Chemistry or perhaps in greater detail in Organic Chemistry. A biochemist does not need to be perfect at these things since the meat of biochemistry is a highly specialized corner of organic chemistry dealing with a relatively small number of molecule types where the complexity of the molecule tends to force the details into profound abstraction. Proteins and DNA, membranes and so on are all expressed mostly as symbols, sequences or structural motifs. Reactions occur symbolically where chemists have worked out the details of how a reaction proceeds (or not) without really saying anything very profound about it. This created a situation of deep frustration for me once upon a time because it always seemed like I was relying on someone else to tell me the specifics of how something actually worked. I always felt helpless. Enzymatic reaction mechanisms always drove me crazy because they seemed very ad hoc; there’s no reason they shouldn’t be, since evolution is always ad hoc, but the symbology used always made it opaque to me what was actually happening.

When I was purely a biochemist, an undergraduate once asked me whether they could learn QM in chemistry and I honestly answered “yes,” that everything is based on QM, but I withheld the small disquiet I felt that I didn’t really believe I understood how it fit in. My background in QM being what it was at that point, I didn’t truly know a quantum dot from a deviled egg. Yes, quantum defines everything, but what does a biochemist know of quantum? Where does bond geometry come from? Everything seems like arbitrary tinker toys using O-Chem models. Why is it that these things stick together as huge, solid ball-and-stick masses when everything is supposed to be puffy wave clouds? Where is this uncertainty principle thing people vaguely talk about in hushed tones when referring to the awe inspiring weirdness that is QM? You certainly would never know such details looking at model structures of DNA. This frustration eventually drove me to multiple degrees in physics.

In physics, QM takes on a whole other dimension. The QM that a physicist learns is concerned with gaining the mathematical skill to deal with the core of QM while retaining the flexibility to specialize in a needed direction. Quantum Theory is a gigantic topic which no physicist knows in entirety. There are two general cousins of theory which move in different directions with Hamiltonian formalisms diverging from the Lagrangian. They connect, but have power in different situations. Where you get very specific on a topic is sometimes not well presented –you have to go a long way off the beaten path to hit either the Higgs Boson or General Relativity. Physicists in academia are interested in the weird things lying at the limits of physics and focus their efforts on pushing to and around those weirdnesses; you only focus efforts on specializations of quantum mechanics as they are needed to get to the untouched things physicists actually care to examine. This means that physicists sometimes focus little effort on tackling topics that are interesting to other fields, like chemistry… and the details of the foundations of chemistry, like the specifics of the electronic structure of the periodic table, are under the husbandry of chemists.

If you read my post on the hydrogen atom radial equation, you saw the most visible model atom. The expanded geometries of this model inform the structure of the periodic table. Most of the superficial parts of chemistry can be qualitatively understood from examining this model. S, P, D, F and so on orbitals are assembled from hydrogenic wave equations… at least they can be on the surface.

Unfortunately, the hydrogenic orbitals can only be taken as an approximation to all the other atoms. There are basically no analytic solutions to the wave functions of any atom beyond hydrogen.

Fine structure, hyperfine structure and other atomic details emerge from perturbations of the hydrogenic orbitals. Perturbation is a powerful technique, except that it’s not an exact solution. Perturbation theory approaches a solution by assuming that some effect is a small departure from a much bigger situation that is already solved. You then perform an expansion in which successive terms approach the perturbative part more and more closely. Hydrogenic orbitals can be used as a basis for this. Kind of. If the “perturbation” becomes too big relative to the base situation, the expansion necessary to approximate it becomes too big to express. Technically, you can express the solution for any situation in a “complete” basis, but if the perturbation is too large compared to the context of the basis, the fraction of the basis required for an accurate expression becomes bigger than the “available” basis before you know it.

When I refer to a “basis” here, I’m talking about Hilbert spaces: the use of orthogonal function sets as a method to compose wave functions. This works like a Fourier series, which is built on one of the most common Hilbert space basis sets. Most Hilbert spaces of interest contain infinitely many basis functions, which is more than any computer can ever use. The reality is that you can only ever actually use a small portion of a basis.

The hydrogen situation is merely a prototype. If you want to think about helium or lithium and so on, the hydrogenic basis becomes merely one choice of how to approach the problem. The Hamiltonians of other atoms are structures that can in some cases be bigger than is easily approachable with the hydrogenic basis. Honestly, I’d never really thought very hard about the other basis sets that might be needed, but technically they are a very large subject since they are needed for the hundred-plus other elements of the periodic table beyond hydrogen. These other atoms have wave functions that are kind of like those of hydrogen, but different. The S-orbital of hydrogen is a good example of the S-orbitals found in many atoms, even though the functional form for other atoms is definitely different.

This all became interesting to me recently on the question of how to get to molecular bonds as more than the qualitative expression of hydrogenic orbital combinations. How do you actually calculate bond strengths and molecular wave functions? These are important to understanding the mechanics of chemistry… and to poking a finger from quantum mechanics over into biochemistry. My QM classes brushed on it, admittedly, deep in the quagmire of other miscellaneous quantum necessary to deal with a hundred different things. I decided to take a sojourn into the bowels of Quantum Chemistry and develop a competence with the Hartree-Fock method and molecular orbitals.

The quantum mechanics of quantum chemistry is, surprisingly enough, mechanically more simple than one might immediately expect. This is splitting hairs considering that all quantum is difficult, but it is actually somewhat easier than the difficulty of jumping from no quantum to some quantum. Once you know the basics, you pretty much have everything needed to get started. Still, as with all QM, this is not to scoff at; there are challenges in it.

This form of QM is a Hamiltonian formalism whose first mathematics originated in the late 1920s and 1930s. The basics revolve around the time independent Schroedinger equation. Where it jumps to being modern QM is in the utter complexity of the construct… simple individual parts, just crazily many of them. This type of QM is referred to as “Many Body theory” because it involves wave equations containing dozens to easily hundreds of interactions between individual electrons and atomic nuclei. If you thought the Hamiltonian I wrote in my hydrogen atom post was complicated, consider that it was only for one electron being attracted to a fixed center… and didn’t even include the components necessary to describe the mechanics of the nucleus. The many body theory used to build up atoms with many electrons works for molecules as well, so learning generalities about the one case is learning about the other case too.

As an example of how complicated these Schrodinger equations become, here is the time independent Schrodinger equation for Lithium.

Lithium Schrodinger

This equation is simplified to atomic units to make it tractable. The part describing the kinetic energy of the nucleus is left in. All four of those del-squared operators open up into 3D differentials like the single one present in the hydrogen atom. The next six terms describe the electrostatic interactions of the three electrons among themselves and with the nucleus. And this is only one nucleus and three electrons.
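
Written out in atomic units (my transcription of what the equation says), it reads:

    \left[ -\frac{1}{2M}\nabla_N^2 \;-\; \frac{1}{2}\sum_{i=1}^{3}\nabla_i^2 \;-\; \sum_{i=1}^{3}\frac{3}{r_{iN}} \;+\; \frac{1}{r_{12}} + \frac{1}{r_{13}} + \frac{1}{r_{23}} \right]\Psi \;=\; E\,\Psi

Here M is the nuclear mass, the r_{iN} are electron-nucleus distances, the r_{ij} are electron-electron distances, and the nuclear charge of 3 sits in the attraction terms.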

As I already mentioned, there are no closed-form analytical solutions for structures more complicated than hydrogen, so many body theory is about figuring out how to make useful approximations. And, because of the complexity, it must make some very elegant approximations.

One of the first useful features of QM for addressing situations like this is something I personally overlooked when I initially learned it. With QM, most situations that you might encounter have no exact solutions. Outside of a scant handful of cases, you can’t truly “solve” anything. But, for all the histrionics that goes along with that, the lowest-energy eigenstate has a special property: if you make a totally random guess about the form of the wave function for a given Hamiltonian, you are assured that the actual ground state has a lower (or at worst equal) energy. Since that’s the case, you can play a game: if I make some random guess about the form of the solution, another guess that has a lower energy is a better guess regarding the actual form. You can minimize this, always making adjustments to the guess such that it achieves a lower energy, until eventually it won’t go any lower. The actual solution still ends up lower, but maybe not by very much. Driving such energy-minimizing guesses downhill converges toward the actual solution (or at least toward the best the trial form can manage) and is usually accomplished by systematic mathematical minimization. This method is called “variation” and is one of the major methods for constructing approximations of an eigenstate. As you might expect, this is a numerical strategy and it makes heavy use of computers in the modern day, since the guesses are generally very big, complicated mathematical functions. Variational strategies are responsible for most of our knowledge of the electronic structure of the periodic table.

Using computers to make these guesses has been elevated to a high art. Literally, a trial function with a large number of unknown constants is tried against the Hamiltonian; you then take a derivative of the energy to see how it varies as a function of any one constant and adjust that constant until the energy is at a minimum, where the first derivative is near zero and the second derivative is positive, indicative of a minimum. Do this over and over again with all the available constants in the function and eventually the trial wave function converges toward the actual solution.
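
As a toy illustration of the game (nothing to do with my program), here is the classic warm-up of guessing a single gaussian for the hydrogen ground state and scanning its one adjustable constant:

    import numpy as np

    # trial wave function exp(-alpha r^2) for hydrogen; in atomic units the
    # energy expectation works out to E(alpha) = 3*alpha/2 - 2*sqrt(2*alpha/pi)
    alphas = np.linspace(0.05, 1.5, 2000)
    E = 1.5 * alphas - 2.0 * np.sqrt(2.0 * alphas / np.pi)

    best = np.argmin(E)
    print(alphas[best], E[best])   # about 0.283 and -0.4244 hartree

The true ground state sits at -0.5 hartree; the guess converges to the best energy available within the trial form, which is the whole point.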

Take that in for a moment. We understand the periodic table mainly by guessing at it! A part of what makes these wave functions so complicated is that the state of any one electron in any system more complicated than hydrogen is dependent on every other electron and charged body present, as shown in the Lithium equation above. The basic orbital shapes are not that different from hydrogen, even requiring spherical harmonics to describe the angular shape, but the specific radial scaling and distribution is not solvable. These electrons influence each other in several ways. First, they place plain old electrostatic pressure on one another –all electrons push against each other by their charges and shift each other’s orbitals in subtle ways. Second, they exert what’s called “exchange pressure” on one another. In this, every electron in the system is indistinguishable from every other and electrons specifically deal with this by requiring that the wave function be antisymmetric such that no electron can occupy the same state as any other. You may have heard this called the Pauli Exclusion Principle and it is just a counting effect. In a way, this may be why quantum classes tend to place less weight on the hydrogen atom radial equation: even though it holds for hydrogen, it works for nothing else.

Multi-atom molecules stretch the situation even further. Multiple atoms, unsolvable in and of themselves, are placed together in some regularly positioned array in space, with unsolvable atoms now compounded into unsolvable molecules. Electrons from these atoms are then all lumped together collectively in some exchange antisymmetric wave function where the orbitals are dependent on all the bodies present in the system. These orbitals are referred to in quantum chemistry as molecular orbitals and describe how an electron cloud is dispersed among the many atoms present. Covalent electron bonds and ionic bonds are forms of molecular orbital, where electrons are dispersed between two atoms and act to hold these atoms in some fixed relation with respect to one another. The most basic workhorse method for dealing with this highly complicated arrangement is a technique referred to as the Hartree-Fock method. Modern quantum chemistry is all about extensions beyond Hartree-Fock, which often use this method as a spine for producing an initial approximation and then switch to other variational (or perturbative) techniques to improve the accuracy of the initial guess.

Within Hartree-Fock, molecular orbitals are built up out of atomic orbitals. The approximation postulates, in part, that each electron sits in some atomic orbital which has been contributed to the system by a given atom where the presence of many atoms tends to mix up the orbitals among each other. To obey exchange, each electron literally samples every possible contributed orbital in a big antisymmetric superposition.

Hartree-Fock is sometimes referred to as Self Consistent Field theory. It uses linear superpositions of atomic orbitals to describe the molecular orbitals that actually contain the valence electrons. In this, the electrons don’t really occupy any atomic orbital, but some combination of many orbitals all at once. For example, a version of the stereotypical sigma covalent bond is actually a symmetric superposition of two atomic S-orbitals. The sigma bond contains two electrons and is made antisymmetric by the solitary occupancy of electron spin states so that the spatial part of the S-orbitals from the contributing atoms can enter in as a symmetric combination –this gets weird when you consider that you can’t tell which electron is spin up and which is spin down, so they’re both in a superposition.

Sigma bond

The sigma bond shown here in Mathematica was actually produced from two m=0 hydrogenic p-orbitals. The density plot reflects probability density. The atom locations were marked afterward in powerpoint. The length of the bond here is arbitrary, and not energy minimized to any actual molecule. This was not produced by Hartree-Fock (though it would occur in Hartree-Fock) and is added only to show what molecular bonds look like.

For completeness, here is a pi bond.

Pi bond

At the start of Hartree-Fock, the molecular orbitals are not known, so the initial guess is that every electron sits in a particular atomic orbital within the mixture. The electron density is then determined throughout the molecule and used to furnish the repulsion and exchange terms among the electrons. This is then solved for energy eigenvalues and spits out a series of linear combinations describing the orbitals where the electrons are actually located, which turns out to be different from the initial guess. These new linear combinations are then thrown back into the calculation to determine the electron density and exchange, which is once more used to find energy eigenvalues and orbitals, which are once again different from the previous guess. As the crank is turned repeatedly, the output orbitals converge onto the orbitals used to calculate the electron density and exchange. When these no longer particularly change between cycles, the states describing the electron density are equal to those associated with the eigenvalues –the input becomes self consistent with the output, hence the name of the technique: production of a self-consistent field.
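
In skeleton form, the loop that paragraph describes looks something like this (a bare-bones sketch of the restricted Hartree-Fock cycle, not my actual program; the integral arrays are assumed to have been computed already):

    import numpy as np
    from scipy.linalg import eigh          # generalized eigensolver, F C = S C e

    def scf_loop(H_core, S, eri, n_occ, max_cycles=50, tol=1e-8):
        # H_core: kinetic + nuclear attraction matrix; S: overlap matrix;
        # eri: two-electron integrals indexed (pq|rs); n_occ: doubly occupied MOs
        n = H_core.shape[0]
        D = np.zeros((n, n))               # starting guess for the density
        E_old = 0.0
        for cycle in range(max_cycles):
            J = np.einsum("pqrs,rs->pq", eri, D)        # repulsion from current density
            K = np.einsum("prqs,rs->pq", eri, D)        # exchange from current density
            F = H_core + J - 0.5 * K                    # the Fock matrix
            e, C = eigh(F, S)                           # new orbitals from old density
            C_occ = C[:, :n_occ]
            D = 2.0 * C_occ @ C_occ.T                   # rebuild density from new orbitals
            E = 0.5 * np.sum(D * (H_core + F))          # electronic energy
            if abs(E - E_old) < tol:                    # input matches output: self-consistent
                break
            E_old = E
        return E, e, C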

Once the self consistent electron field is reached, the atomic nuclei can be repositioned within it in order to minimize the electrostatic stresses on the nuclei. Typically, the initial locations of the nuclei must be guessed since they are themselves not usually exactly known. A basic approximation of the Hartree-Fock method is the Born-Oppenheimer approximation where massive atomic nuclei are expected to move on a much slower time scale than the electrons, meaning that the atomic nuclei create a stationary electrostatic field which arranges the electrons, but then are later moved by the average dispersion of the electrons around them. Minimizing the atomic positions necessitates re-calculation of the electron field, which in turn may require that atomic positions again be readjusted until eventually the electron field does not alter the atomic positions, whereby the atomic positions facilitate the configuration of the surrounding electrons. With the output energy of the Hartree-Fock method minimized by rearranging the nuclei, this gives the relaxed configuration of a molecule. And, from this, you automatically know the bonding angles and bond lengths.

The Born-Oppenheimer approximation is a natural simplification of the real wave function which splits the wave functions of the nuclei away from the wave functions of the electrons; it can be considered valid predominantly because of the huge difference in mass (a factor of thousands to tens of thousands, depending on the nucleus) between electrons and nuclei, where the nuclei are essentially not very wave-like relative to the electrons. In Lithium, above, it would simply mean removing the first term of the Schrodinger equation involving the nuclear kinetic energy and understanding that the total energy of the molecule is not E. Most of the shape of a molecule can treat atomic nuclei as point-like while electrons and their orbitals constitute pretty much all of the important molecular structure.

As you can see by the description, there are a huge number of calculations required. I’ve described them very topically. Figuring out the best way to run Hartree-Fock has been an ongoing process since the 1930s and has been raised to a high art nearly 90 years later. At the superficial level, the Hartree-Fock approximation is hampered by not placing the nuclei directly in the wave function and by not allowing full correlation among the electrons. This weakness is remedied by variational and perturbative post-Hartree-Fock techniques that have come to flourish with the steady increase of computational power during the advancement of Moore’s Law in transistors. That said, the precision calculation of the necessary integrals is so computationally demanding on the scale of molecules that the hydrogen atom eigenstate solutions are impractical as a basis set.

This actually really caught me by surprise. Hartree-Fock has a very weird and interesting basis set type which is used in place of the hydrogen atom orbitals. And, the reason for the choice is predominantly to reduce a completely intractable computational problem to an approachable one. When I say “completely intractable,” I mean that even the best supercomputers available today still cannot calculate the full, completely real wave functions of even small molecules. With how powerful computers have become, this should be a stunning revelation. This is actually one of the big motivating factors toward using quantum computers to make molecular calculations; the quantum mechanics arise naturally within the quantum computer enabling the approximations to strain credulity less. The approximation used for the favored Hartree-Fock basis sets is very important to conserving computational power.

The orbitals built up around the original hydrogen atom solution to approximate heavier atoms have a radial structure that has come to be known as Slater orbitals. Slater orbitals are variational functions that resemble the basic hydrogen atom orbital which, as you may be aware, is an exponential-Laguerre polynomial combination. Slater orbitals are basically linear combinations of exponentials whose parameters are then adjusted variationally to fit the Hamiltonians of heavier atoms. As I understand it, Slater orbitals can be calculated through at least the first two rows of the periodic table. These orbitals, which are themselves approximations, are actually not the preferred basis set for molecular calculations, but ended up being one jumping off point to produce early versions of the preferred basis set.
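For reference, the radial form of a Slater-type orbital is just a power of r times a decaying exponential; compare it with the hydrogen 1s, which in atomic units is a pure exponential:

\[
\chi_{n\ell m}(r,\theta,\phi) = N\, r^{\,n-1} e^{-\zeta r}\, Y_{\ell m}(\theta,\phi), \qquad \psi_{1s}^{\mathrm{H}}(r) = \frac{1}{\sqrt{\pi}}\, e^{-r}.
\]

The exponent ζ is the knob that gets tuned variationally to mimic heavier atoms.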

The basis set that is used for molecular calculations is the so-called “Gaussian” orbital basis set. The Gaussian radial orbitals were first produced by simple least-squares fits of Slater orbitals. In this, the Slater orbital is taken as a prototype and several Gaussian functions in a linear combination are fitted to it until chi-square becomes as small as possible… while the Slater orbital can only be exactly reproduced by an infinite number of Gaussians, it can be fairly closely reproduced by typically just a handful. Later Gaussian basis sets were also produced by skipping the Slater orbital prototype and applying Hartree-Fock directly to atomic Hamiltonians (as I understand it). The Gaussian fit to the Slater orbital is pretty good across most of the volume of the function except at the center, where the Slater orbital has a cusp (from the exponential) while the Gaussian is smooth… with an infinite number of Gaussians in the fit, the cusp can be reproduced, but it is a relatively small part of the function.
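Here is a toy version of that fitting procedure in Python, assuming SciPy is available. It does a plain least-squares fit of three Gaussians to the ζ = 1 Slater 1s function on a radial grid; the published STO-nG fits optimize the overlap with the proper 3D volume element rather than a flat radial chi-square, so the numbers will differ a bit, but the idea is the same.

```python
import numpy as np
from scipy.optimize import curve_fit

# Radial grid in atomic units; the zeta = 1 Slater 1s function is just exp(-r)
r = np.linspace(0.01, 8.0, 400)
slater_1s = np.exp(-r)

def three_gaussians(r, c1, a1, c2, a2, c3, a3):
    """Linear combination of three s-type Gaussians in the radial coordinate."""
    return (c1 * np.exp(-a1 * r**2) +
            c2 * np.exp(-a2 * r**2) +
            c3 * np.exp(-a3 * r**2))

# Rough starting guesses for the coefficients and exponents (my own choice)
p0 = [0.3, 0.15, 0.5, 0.6, 0.3, 3.0]
popt, _ = curve_fit(three_gaussians, r, slater_1s, p0=p0)

residual = three_gaussians(r, *popt) - slater_1s
print("fitted (c, a) pairs:", popt.reshape(3, 2))
print("worst deviation on the grid:", np.abs(residual).max())
# The fit is worst near r = 0, where the Slater cusp meets the smooth Gaussians.
```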

Orbitals comparison

Here is a comparison of a Gaussian orbital with the equivalent Slater orbital for my old hydrogen atom post. The scaling of the Slater orbital is specific to the hydrogen atom while the Gaussian scaling is not specific to any atom.

The reason that the Gaussian orbitals are the preferred model is strictly a matter of computational efficiency. Within the application of Hartree-Fock, there are several integral calculations that must be done repeatedly. Performing these integrations is computationally very costly on functions like the original hydrogen atom orbitals. With Gaussian radial orbitals, products of Gaussians centered on different atoms are themselves Gaussians, and the integrals all end up having the same closed forms, meaning that one can simply transfer constants from one formula to another without doing any numerical busy work at all. Further, the Gaussian orbitals can be expressed in straightforward Cartesian forms, allowing them to be translated around space with little difficulty and generally making them easy to work with (I dare you: try displacing a hydrogen orbital away from the origin while it remains in spherical-polar form. You’ll discover you need the entire Hilbert space to do it!). As such, with Gaussians, very big calculations can be performed extremely quickly on a limited computational budget. The advantage here is a huge one.
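The workhorse identity behind that efficiency claim is the Gaussian product theorem: the product of two s-type Gaussians sitting on different centers is itself a single Gaussian on a new center between them,

\[
e^{-\alpha|\mathbf{r}-\mathbf{A}|^2}\, e^{-\beta|\mathbf{r}-\mathbf{B}|^2}
 = \exp\!\left(-\frac{\alpha\beta}{\alpha+\beta}\,|\mathbf{A}-\mathbf{B}|^2\right)
   e^{-(\alpha+\beta)|\mathbf{r}-\mathbf{P}|^2},
\qquad \mathbf{P} = \frac{\alpha\mathbf{A}+\beta\mathbf{B}}{\alpha+\beta},
\]

which is why every overlap, kinetic, and repulsion integral over Gaussians collapses into closed forms rather than numerical quadrature.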

One way to think about it is like this: Gaussian orbitals can be used in molecular calculations roughly the same way that triangles are used to build polyhedral meshes in computer graphics renderings.

Gaussians are not the only basis set used with Hartree-Fock. I’ve only learned a little about this alternative implementation so far, but condensed matter folk also use the conventional Fourier basis of sines and cosines while working on a crystal lattice. Sines and cosines are very handy in situations with periodic boundaries, which is exactly what you find in the regimented array of a crystal lattice.

Admittedly, as far as I’ve read, Hartree-Fock is an imperfect solution to the whole problem. I’ve mentioned some of the aspects of the approximation above, and it must always be remembered that it fails to capture certain aspects of the real phenomenon. That said, Hartree-Fock provides predictions that are remarkably close to actual measured values, and the approximation lends itself well to post-processing that further improves the outcomes to an impressive degree (if you have the computational budget).

I found this little project a fruitful one. This is one of those rare times when I actually blew through a textbook as if I was reading a novel. Some of the old citations regarding self-consistent field theory are truly pivotal, important papers: I found one from about the middle 1970s which had 10,000 citations on Web of Science! In the textbook I read, the chemists goofed up an important derivation necessary to produce a workable Hartree-Fock program and I was able to hunt down the 1950 paper detailing said calculation. Molecular Orbital theory is a very interesting subject and I think I’ve made some progress toward understanding where molecular bonds come from and what tools are needed to describe how QM produces molecules.

(Edit 11-6-18):

One cannot walk away from this problem without learning exactly how monumental the calculation is.

In Hartree-Fock theory, the wave functions are expressed in the form of determinants in order to encapsulate the antisymmetry of the electronic wave function. These determinants are an antisymmetrized sum of permutations over the orbital basis set. Each permutation ends up being its own term in the total wave function. The number of such terms goes as the factorial of the number of electrons contained in the wave. Moreover, the probability density is the square of the wave function.
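Written out, the determinant form for N electrons in N spin-orbitals is (schematically)

\[
\Psi(\mathbf{x}_1,\dots,\mathbf{x}_N) = \frac{1}{\sqrt{N!}}
\begin{vmatrix}
\chi_1(\mathbf{x}_1) & \chi_2(\mathbf{x}_1) & \cdots & \chi_N(\mathbf{x}_1)\\
\chi_1(\mathbf{x}_2) & \chi_2(\mathbf{x}_2) & \cdots & \chi_N(\mathbf{x}_2)\\
\vdots & & \ddots & \vdots\\
\chi_1(\mathbf{x}_N) & \chi_2(\mathbf{x}_N) & \cdots & \chi_N(\mathbf{x}_N)
\end{vmatrix},
\]

and expanding the determinant produces one signed term for every permutation of the electron coordinates among the orbitals, which is where the N! comes from.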

Factorials become big very quickly.

Consider a single carbon atom. This atom contains 6 electrons. From this, the total wave function for carbon has 6! terms. 6! = 720. The probability density then has 720^2 terms… which is 518,400 terms!

That should make your eyes bug out. You cannot ever write that in its full glory.

Now, for a simple molecule, let’s consider benzene. That’s six carbons and six hydrogens. So, 6×6+6 = 42 electrons. The determinant would contain 42! terms. That is 1.4 ×10^51 terms!!!! The probability density is about 2×10^102 terms…
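The arithmetic is easy to check in a couple of lines of Python:

```python
import math

n_electrons = 6 * 6 + 6                      # benzene: six carbons, six hydrogens
wavefunction_terms = math.factorial(n_electrons)
density_terms = wavefunction_terms ** 2
print(f"{wavefunction_terms:.2e} terms in the wave function")   # ~1.4e+51
print(f"{density_terms:.2e} terms in the probability density")  # ~2.0e+102
```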

Avogadro’s number is only 6.02×10^23.

If you are trying to graph the probability density as a function of position, the cross terms matter to the value of the density at any given location, meaning that you are stuck with all 10^102 terms. This assures that you can never graph it in order to visualize it! But if you integrate the density across all of space, one 3D integral for each electron (42 of them), every cross term that pairs an electron with two different orbitals dies, because a term cannot survive if even one of its 42 single-electron integrals is zero. Only the diagonal terms survive, and the normalized probability simply evaluates to the number of electrons in the wave function. Integration totally cleans up the mess, meaning that you can still find expectation values by thinking only about sums across the 42 electrons. This orthogonality is why you can do quantum chemistry at all: for an operator acting in a single electron’s space, every overlap factor that doesn’t involve that electron has to come out equal to 1 for a term to survive, which happens in only a tiny minority of terms.
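The statement doing all the work there is just the orthonormality of the spin-orbitals,

\[
\int \chi_i^{*}(\mathbf{x})\,\chi_j(\mathbf{x})\,d\mathbf{x} = \delta_{ij},
\]

so any cross term in the integrated probability density that pairs some electron’s coordinate with two different orbitals picks up at least one factor of zero and vanishes.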

For purposes of visualization, these equations are unmanageably huge. Not merely unmanageably, but unimaginably so. So huge, in fact, that they cannot be expressed in full except in the most simplistic cases. Benzene is only six carbons and it’s effectively impossible to tackle in the sense of the total wave equation. The best you can do is look for expressions for the molecular orbitals… which may only contain N-terms (as many as 42 for benzene.) Molecular orbitals can be considered the eigenstates of the molecule, where each one can be approximated to contain only one electron (or one pair of electrons in the case of closed shell calculations). The fully quantum weirdness here is that every electron samples every eigenstate, which is basically impossible to deal with.

For anyone who is looking, some of the greatest revelations which constructed organic chemistry as you might know it occurred as early as 1930. Linus Pauling wrote a wonderful paper in 1931 where he outlines one way of anticipating the tetrahedral bond geometry of carbon… performed without use of these crazy determinant wave functions and with simple consideration of the vanilla hydrogenic eigenstates. Sadly, these remain qualitative results without resorting to more modern methods.

(Edit 11-21-18):

Because I can never just leave a problem alone, I’ve been steadily cobbling together a program for running Hartree-Fock. If you know me, you’ll know I’m a better bench chemist than I am a programmer, despite how much time I’ve spent on the math. I got interested because I just understand things better if I do them myself. You can’t calculate these things by hand, only by computer, so off I went into a programming language that I am admittedly pretty incompetent at.

In my steady process of writing this program, I’ve just hit a point where I can calculate some basic integrals. Using the STO-3G basis set produced from John Pople’s lab in 1969, I used my routines to evaluate the ground state energy of the hydrogen atom. There is a lot of script in this program in order to work the basic integrals and it becomes really really hard to diagnose whether the program is working or not because of the density of calculations. So, it spits out a number… is it the right number? This is very hard to tell.

I used the 1s orbital from STO-3G to compute the kinetic and nuclear interaction energies and then summed them together. With bated breath, one click of the key to convert to eV…

Bam: -13.47 eV!

You have no idea how good that felt. The accepted value of the hydrogen atom ground state is -13.6 eV. I’m only off by about 1%! That isn’t bad using an archaic basis set which was intended for molecular calculations. Since my little laptop is a supercomputer next to the machines that originally created STO-3G, I’d say I’m doing pretty well.

Not sure how many lines of code that is, but for me, it was a lot. Particularly since my program is designed to accommodate higher angular momenta than the S orbital and more complicated basis sets than STO-3G. Cranking out the right number here feels really good. I can’t help but goggle at how cheap computational power has become since the work that got Pople his Nobel prize.
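For anyone who wants to reproduce that number without writing a whole integral engine, here is a minimal sketch in Python. It uses the unscaled (ζ = 1) STO-3G 1s exponents and contraction coefficients from the 1969 Hehre-Stewart-Pople work, quoted approximately from memory, together with the standard same-center closed forms for s-type Gaussian overlap, kinetic, and nuclear-attraction integrals; this is my own toy reconstruction, not the program described above.

```python
import numpy as np

# Approximate unscaled (zeta = 1) STO-3G exponents and contraction
# coefficients for a single 1s function (Hehre, Stewart & Pople, 1969).
alpha = np.array([2.227660, 0.405771, 0.109818])
d     = np.array([0.154329, 0.535328, 0.444635])

# Same-center integrals between normalized s-type primitives, nucleus at the
# shared center with Z = 1 (standard closed forms, e.g. Szabo & Ostlund).
ai, aj = np.meshgrid(alpha, alpha, indexing='ij')
S = (2.0 * np.sqrt(ai * aj) / (ai + aj)) ** 1.5      # overlap
T = 3.0 * ai * aj / (ai + aj) * S                    # kinetic energy
V = -2.0 * np.sqrt((ai + aj) / np.pi) * S            # nuclear attraction

norm = d @ S @ d                          # ~1 for a properly normalized contraction
E_hartree = (d @ (T + V) @ d) / norm      # <T> + <V> for the contracted 1s
print(E_hartree * 27.2114, "eV")          # should land near -13.5 eV
```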

(edit 12-4-18):

Still spending time working on this puzzle. There are some other interesting adjustments to my thinking as I’ve been tackling this challenge which I thought I might write about.

First, I really didn’t specify the symmetry that I was referring to above which gives rise to the huge numbers of terms in the determinant-style wave functions. In this sort of wave function, which contains many electrons all at once, the fermionic structure must be antisymmetric on exchange. This is expressed with an operator called the ‘exchange operator’ whose sole purpose is to trade electrons within the wave function… the fermionic wave function has an eigenvalue of -1 when operated on by the exchange operator. This means that if you trade two electrons within the wave function, the wave function remains unchanged except for picking up a factor of -1. And this holds for any exchange you might perform between any two electrons in that wave function. The way to produce a wave function that preserves this symmetry is by permuting the positional variables of the electrons among the orbitals that they might occupy, as executed in a determinant where the orbitals form one axis and the electron coordinates form the other. The action of this permutation churns out huge numbers of terms, all of which involve the same set of orbitals, but with the coordinates of the electrons permuted among them.
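In symbols, for any pair of electrons i and j,

\[
\hat{P}_{ij}\,\Psi(\dots,\mathbf{x}_i,\dots,\mathbf{x}_j,\dots) = \Psi(\dots,\mathbf{x}_j,\dots,\mathbf{x}_i,\dots) = -\,\Psi(\dots,\mathbf{x}_i,\dots,\mathbf{x}_j,\dots),
\]

and the determinant construction above is exactly the bookkeeping device that guarantees this minus sign for every possible swap.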

A second item I wanted to mention is the interesting disconnect between molecular wave functions and atomic functions. In the early literature on the implementation of Hartree-Fock, the basis sets for the molecular calculation are constructed from fits to atomic wave functions. They often referred to this as Linear Combination of Atomic Orbitals. As I was playing around with one of these early basis sets, I was applying the basis functions to the hydrogen atom Hamiltonian in order to error check the calculus in my program by attempting to reproduce the hydrogenic state energies. Very frequently, these gave erroneous energies even though the Gaussians have symmetry very like the hydrogenic orbitals they were attempting to represent. Interestingly, as you may have read above, the lowest energy state, equivalent to the hydrogenic 1s, fit very closely to the ground state energy of hydrogen… and a basis with a larger number of Gaussians for the same orbital fit even more closely to that energy.

I spent a little time stymied on the fact that the higher energy functions in the basis, the 2s and 2p functions, fit very very poorly to the higher energies of hydrogen. This is unnerving because the processing of these particular integrals in my program required a pretty complicated bit of programming to facilitate. I got accurate energies for 1s, but poor energies for 2s and 2p… maybe the program is working for 1s, but isn’t working for 2s or 2p. The infuriating part here is that 2s has very similar symmetry to 1s and is treated by the program in roughly the same manner, but the energy was off then too. I spent time analytically proving to myself that the most simple expression of the 2p orbital was being calculated correctly… and it is; I get consistent numbers across the board, just that there is a baked in inaccuracy in this particular set of basis functions which makes them not fit the equivalent hydrogenic orbital energies. It did not make much sense to me why the molecular community was citing this particular basis set so consistently, even though it really doesn’t seem to fit hydrogen very well. I’m not yet totally convinced that my fundamental math isn’t somehow wrong, but when numbers start emerging that are consistent with each other from different avenues, usually it means that my math isn’t failing. I still have some other error checks I’m thinking about, but one additional thought must be added.

In reality, the molecular orbitals are not required to mimic the atomic parts from which they can be composed. At locations in a molecule very close to atomic nuclei, the basis functions need to look similar to the atomic functions in order to have the flexibility to mimic atoms, but the same is not true at locations where multiple nuclei have sway all at once. The choice of making orbitals atom-like is a convenience that might save some computational overhead; you could pick any sequence of orthogonal functions you want and still be able to calculate the molecular orbitals without looking very deeply at what the isolated atoms seem to be. For roughly the first two rows of the periodic table, up to fluorine, most of the electrons in an atom are within reach of the valence shell, meaning that they are contributed out into the molecule and distributed away from the nucleus. A convenient basis set for capturing this will sort of appear atom-like around the nuclei, but it doesn’t have to… if you have an infinite number of Gaussians, Slater functions or simple sines and cosines, the flexibility of a properly orthogonalized basis set can capture the actual orbital as a linear combination. The choice of Gaussians is computationally convenient for the situation of having the electrons distributed in finite clouds around atom centers, making the basis set small, but not more than that. The puzzle is simply to have sufficient flexibility at any point in the molecule for the basis to capture the appropriate linear combination describing the molecule. An infinite sum of terms can be arbitrarily close.

In this, it isn’t necessary for the basis functions to exactly duplicate the true atomic orbitals since that isn’t what you’re looking for to begin with. In a way, the atomic orbitals are therefore disconnected from the molecular orbitals. Trying to exactly reproduce the atoms is misleading since you don’t actually have isolated atoms in a molecule. Presumably, a heavy atom will appear very atom-like deep within its localized potential, but not up on top.

 

(edit 12-11-18):

I’ve managed to build a working self-consistent field calculator for generating molecular wave functions. Hopefully, I’ll get a chance to talk more about it when there’s time.

The Difference between Quantity and Quality

I decided that I felt some need to speak up about a recent Elon Musk interview I saw on YouTube. You probably know the one I mean since it’s been making the rounds for a few days in the media over an incident where Mr. Musk took a puff of weed on camera. This is the interview between Mr. Musk and Joe Rogan.

I won’t focus on the weed. I will instead focus on some overall impressions of the interview and on something that Musk said in the context of AI.

I admit that I watch Joe Rogan’s podcast now and then. I don’t agree with some of his outlooks regarding drug use (had it been me on camera instead of Musk, I would have politely turned down the pot) but I do feel that Rogan is often a fairly discerning thinker; he advocates pretty strongly for rational inquiry when you would expect him to just be another mook. That said, I usually only watch clips rather than entire podcasts. God help me, media content would fill my life more than it already does if I devoted the 2.5 hours necessary to consume it.

Firstly, I must say that I really wasn’t that pleased with how Joe Rogan treated Elon Musk. He might as well have just reached across the table and given the poor man a hand job with how much glad-handing he started with. He very significantly played up Musk’s singularity, likening him, not unfavorably, to Nikola Tesla. Later, he said flat out that “it’s as if Musk is an alien,” he’s so singular. Rogan jumped into talking about a dream where there were “a million” Nikola Teslas, or some such, and speculated how unbelievable the world would be if there were a million Elon Musks, how much innovation would be achieved. In response to that, I think he’s overblowing what is possible with innovation and not thinking very clearly about how Elon Musk got into the position he’s in.

I do not diminish Elon Musk as an innovator, to start with. The likelihood of my hitting it the way he has is not good, so I can’t say that he isn’t as singular as one might make him out to be. He is in a rarefied air of earning potential with the money he has to throw around; only a handful of people occupy the same room. Part of what made Elon Musk was an innovation that is shared across a few people, namely the money made from creating Paypal, for which Musk can’t take exclusive credit. Where Musk is now depends quite strongly on this foundation: the time which bootstrapped him into the stratosphere he currently occupies was the big tech boom of the Dotcom era, when the internet was rapidly expanding, when many people were trying many new ideas and the entire industry was in a phase of exponential growth. Big ideas were potentially very low-hanging fruit, in a way that is not possible to retread now. For instance, it would take a lot to get somewhere with a Paypal competitor today since you would have to justify your infrastructure as somehow preferable to Paypal, which has now had twenty years to entrench and fortify. It’s unlikely social networks will ever produce another Mark Zuckerberg without there being some unoccupied space to fill, which is more difficult to find with everyone trying to create yet another network. Musk is not that different; he landed on the field at a time when the getting was very good. Perhaps someone will hit it with an AI built in a garage and make a trillion dollars, but my feeling is that such an AI will emerge from a foundation that is already deep and hard to compete with, such as Google, which is itself an example of an entity that came into being when the soil was very fertile and which would be difficult to retread, or compete with, twenty years later. It is this environment that grew Elon Musk.

Elon Musk won his freedom in an innovation that he cannot take exclusive credit for. Having gained a huge amount of money, he’s no longer held back by the same checks that hold most everyone else in place. I think that were it not for this preexisting wallet, Musk would not be in the position to make the innovations he’s getting credit for today. This isn’t a bad thing, but you must hold it in context. The environment of the Dotcom era produced one Elon Musk and a bunch of others, like Pichai and Brin and Bezos, because there were a million people competing for those goals… and the ones that hit at the right time and worked hardest won out. This is why there can’t be a million Elon Musks; there aren’t really a million independent innovations worth that much money which won’t just cannibalize each other in the marketplace. Musk slipped through, as did Bezos, who is wielding as much if not more power for a similar reason (Steve Jobs was another of this scope, but he’s no longer on the field and Apple is simply coasting on what Jobs did.) There are not many checks holding Elon Musk back at this point because he has the spending power to more or less do whatever he feels like. This power counts for a lot. I would suggest that there are plenty of people alive right now who are capable of roughly the same thing as Musk, who haven’t hit a hole that lifts them quite so far.

As in the video, one can certainly focus on the idea mill that Elon Musk has in his head, but a distinguishing feature of Musk is not just ideas; he is definable by an incredible work ethic. Would you pull 100 hour work weeks? Somebody who is holding down more than 2 forty hour a week jobs is probably earning at least twice as much as you can earn for forty hours a week! I would point out that Elon Musk has five kids and I’ve got to wonder if he even knows their names. My little angel is at least forty hours of my week that I am totally happy to give, but it means I’ve only got like forty hours otherwise to work;-)

Is he an alien? No. He’s a smart guy who literally worked his ass off at great, huge, personal expense and managed to hit a lucky spot that facilitated his freedom. Maybe he would have made it just as well if displaced in time, say, forward or backward ten years, but my feeling is that the space currently occupied by his innovations would likely be occupied by someone else of similar qualities to Musk. The environment would have produced someone by simple selection. The idea mill in his head is also of dubious provenance given that sci-fi novelists had been writing about the things he’s trying to achieve since at least forty years before Musk arrived on the scene: propulsive rocket landings were written about by Robert Heinlein and Ray Bradbury and executed first by NASA in the 1960s to land on the moon… SpaceX is doing something amazing with it now, but it isn’t an original idea in and of itself. Musk’s hard work is amazing hard work to actualize the concept, even though the concept isn’t new. Others should probably get some credit for the inspiration.

Joe Rogan glad-handing Elon Musk for his singularity overlooks all of this. I do not envy Musk his position and I can’t really imagine what he must’ve been thinking being on the receiving end of that.

I feel that Musk has put himself in an unfortunate position of being a popularizer. He’s become a go-to guru culturally for what futurism should be. This has the unfortunate side effect of working two directions: Musk is in a position where he can say a lot and have people listen, at the expense of the fact that people are paying attention to him when he would probably rather they not be. Oh dear God, Elon Musk just took a puff of that marijuana! The media is grilling him for that moment. How many people are smoking it up, nailing themselves in an exposed vein with a needle and otherwise sitting on a street corner somewhere, masturbating in public right this very second that the media is not focused on?

For Musk, in particular, I think the pressure of his position is starting to chafe. He may not even be able to see it in himself. Musk has so much power that he’s subject to Trumpian exclusivism; actual reality has been veiled from him behind a mask of yes-men, personal assistants and sycophants to such a degree that Musk is beginning to buy (or has already completely bought) the glad-handing. Elon Musk can fire anyone who doesn’t completely fit the mold he envisions for them. There is a power differential that insulates him most of the time and he’s gotten used to wielding it. For instance, Elon Musk relates a story while talking about the dangers of AI to Joe Rogan where he says that “nobody listened to him.” Who was he talking to? “Nobody” is Barack Obama. “Nobody” is senators and Capitol Hill. As he said it, you can pretty clearly see that Elon Musk expected that these people should have listened to him! Not to say that someone like Obama should have ignored him about the existential threat posed by AI, but that Elon Musk felt that he personally should have been the standard bearer. Think about that. The mindset there is really rather amazing. The egotism is enormous. Egotism can certainly take you a long way by instilling confidence, but it has a nasty manner of insulating a person from his or her own shortcomings. As a man who works 100 hour work weeks, one has to wonder if Musk is anyone but the CEO. Can he deal with reality not bending to his will when he says “You’re fired”? Musk decided to play superhero with the Thai soccer team cave crisis when he built the personal-sized submarine to try to help out. Is it any wonder that he didn’t respond too well to being told that the concept wouldn’t have worked? I have no doubt he was being magnanimous and I feel bad that he certainly feels slighted for offering the help but being rebuffed. I don’t know that he was actually seeking the spotlight in the news so much as that he felt obligated to be the superhero that glad-handers are conditioning him to believe that he is. Elon Musk has gotten used to the notion that when he breathes, the wind is felt on the other side of the world, and he draws sustenance from people telling him on Twitter that they feel the air moving somewhere over there.

Beware the dangers of social media. It will intrinsically surface the extreme responses because it is designed to do exactly that. If you can’t handle the haters, stay clear of the lovers. Some fraction of the peanut gallery that you will never meet will always have something to say that you won’t like hearing…

(Yes, I am aware of the irony of being a member of the anonymous internet peanut gallery heckling the stage. Who will listen? Who knows; I’m comfortable with my voice being small. If Barack Obama reads what I’m saying, maybe he’ll read it to completion. If so, thanks!)

All that said, I think that Elon Musk is in a very difficult position psychologically. He spends nights sleeping on the floor of his office at Tesla (supposedly) working very very hard at managing people and projects, expecting that the things he says to do and is busy implementing go exactly as he says they should. For a 100 hour work week, this is tremendous isolation. He’s at the top locked in a box where his outlet, social media, always tells him that he is the man sitting on the top of the mountain, and then heckling him when he takes a second out to… do X, help rescue some children, take a puff on a joint, look away from the job at hand. Would you break? I’m happy I spend forty hours a week with my little angel. I’m happy my wife tells me when I’m full of shit. I couldn’t handle Elon Musk’s position. Can you imagine the fear of having the whole world looking over your shoulder, just waiting for one of your ideas to completely implode? Social isolation is profoundly dangerous in all its forms.

In answer to Joe Rogan, Elon Musk is not an alien and he isn’t singular. Maybe you don’t believe me, but I actually say this as a kindness to Elon Musk, in some hope that he finds a way around his isolation. He should find a better outlet than what he currently uses, or the pressure is going to break him. There are other people in this world whose minds are absolutely always exploding, who lie awake at night and struggle to keep it under control. I have no doubt that this takes different shapes for different people who feel it, but I definitely understand it as a guy who lies awake at night struggling to turn off the music, turn off the equations, turn off the visions. Some people do see things that lie just beyond where everyone else does and you don’t hear from them. They may work much smaller jobs and may not have a big presence on social media, but this doesn’t mean they don’t have clear vision. Poor old Joe Rogan, toking up on his joint, turns off the parts of himself that might work that way… he more or less admits that he can’t face himself and smokes the pot to shed the things he doesn’t like! Mr. Rogan went cold turkey on pot for a month and related a story during that time about having vivid dreams. What is your chance at vision? Is it like mine? Do you shuffle it under the rug?

Anyway, that’s a part of my response to how the interview was carried out. I want also to respond a little bit to some of the content that was said. For reference, here’s the relevant clip that has them talking about AI.

There is a section of that clip that has Elon Musk talking about some of the rationale for the startup Neuralink. He speaks about what he calls the “human bandwidth problem.” The idea here, as he relates it, is that one of the reasons humans can’t compete with AI is because we don’t acquire the breadth of information that a computer-based AI can as quickly. In this, a picture is worth more than a thousand words because a picture can deliver more information to the human brain in a much shorter space of time than other possible means by which a human can import information. The point of Neuralink then is to increase human bandwidth. An example that Musk gives is that smartphones imbue their users with superhuman abilities and information access; the ability to navigate traffic or find hotels or restaurants without previously knowing of these things. He asserts that possession of a smartphone already makes people cyborgs. He then reasons that by making a link that circumvents the five senses and places remote information access and control straight into the human mind, humans gain some parity with AI, since the AI will be able to gain access to information without the delay associated with seeing or hearing an input.

I think Elon Musk is being somewhat naive about this. Bandwidth is not the only problem we face here in light of what AI might potentially be capable of. Yes, AI in a computer has a tremendous advantage in being able to parse information with speed; this is fundamentally what computers are good for, taking huge amounts of information and quickly executing a simple, repetitious and very fast methodology in order to sort the depths. A smart computer program starts with the advantage of being faster than people. Elon Musk sort of asserts in what he says that humans can become better than we are by breaking the plane and putting essentially a smartphone interface straight into our heads, that speeding up our ability to get hold of the information would put us at an advantage.

I don’t really agree with him.

Having access to a smartphone has revealed a number of serious problems with the capacity for humans to deal with greater bandwidth. Texting and driving together has become a way for people to die since the advent of cellphones. Filter silos occur because people simply don’t have enough time to absorb (and I mean “absorb” in the sense of “to Grok” rather than in the sense of Read or Watch, and the subtlety means the universe in this case) the amount of information that the internet places at our disposal. Musk has voiced the assessment that if only we could get past our meagre rate of information uptake that we might somehow be at a better advantage. Having access to all the information in the world has not stopped fake news from becoming a problem; it has made people confident that they can get answers quickly without installing in them an awareness that maybe they don’t understand the answer they got. Getting to answers ever more quickly won’t change this problem.

Humans are saddled with a fundamental set of limits in our ability to process the information that we uptake. Getting to information faster does not guarantee that anyone makes better decisions with that information once they have it. Would people spend all day stuck in social media, doing nothing of use but literally contemplating their own navel lint in the next big time waster app-game, if they could get to that app more quickly? I don’t think they would. Getting to garbage information faster does not assure anything but reaching the outcomes of bad decisions more quickly.

AI has the fundamental potential to simply circumvent this entire cognitive problem by getting rid of everything that is human from the outset. In fact, the weight of what we currently judge as “valuable AI” is a machine that fundamentally makes good decisions based on the data it acquires in a computer’s time frame. By definition, the AI we’re trying to construct doesn’t make bad decisions that a human would otherwise make and would self-optimize to make better decisions than it initially started out making.

What Elon Musk is essentially suggesting with Neuralink is that a computer could be made to regulate the bandwidth of what is going into someone’s skull without there being a tangible intermediary, but that says nothing about the agent that is necessary on the outside to pick and choose what information is sent down the pipe into someone’s head by the hypothetical link. Even if you replaced the soft matter in someone’s head with a monolithic computer chip that does exactly the same thing as a wet brain, you are saddled with the fact that the brain you duplicated is only sometimes making good decisions. The AI we might create, from inception, is going to be built to make more good decisions than the equivalent human brain. Why include a brain at all?

This reveals part of the problem with Neuralink. The requirement that we make better decisions than we do suggests that by placing links into our brains from the outside, we need to include some artificial agent that ultimately has to judge for us whether our brain will make the best decision based upon whatever information the agent might pipe to that brain –time is money and following a wrong path is wasted time. This is required in order for us to remain competitive. That is fundamentally a super intelligence that circumvents our ability to decide what is in our own best interest since people are verifiably not always capable of deciding that: would people be ODing on pain meds so frequently if they made better decisions? Moreover, our brain doesn’t even necessarily need to know what decisions the super intelligence governing our rate of information uptake is making on our behalf. The company that employs the stripped down super-intelligence is more efficient than the one which might make bad decisions based upon the brain that super-intelligence is plugged into. The logical extent of this reasoning is that the computer-person interface is reduced to a person’s brain more or less just being kept occupied and happy while an overarching machine makes all the decisions.

I don’t really like what I see there. It’s a very happy pleasurable little prison which more or less just ultimately says that we’re done. If this kind of super intelligence is created, very likely, we won’t be in a position to stop it, even if we plug our brains into it and pretend we’re catching a ride on the rocket.

I don’t believe that Elon Musk hasn’t thought of it this way. If we are just a boot drive for something better at our niche than us, I don’t see that as different from how things have been throughout the advent of life. If humans as we are go extinct, maybe the world our successor inhabits will be a green, clean heaven. Surely, it will make better decisions than us.

I do understand why Musk is making the effort with Neuralink. Maybe something can be done to place us in a position where, if we create this thing, we will be able to benefit at some level. I suppose that would be the next form of the Bill and Melinda Gates Foundation…

(Edit 9-12-18)

As I am wont to do, I’ve been thinking about this post a bit for several days since I posted it. I feel now that I have a relevant extension.

When I responded to what Elon Musk had said about Neuralink, I interpreted his implication in such a way that would definitely not place a living brain on the same page as AI. It seemed to me, and still seems on looking back, that there is a distinct architectural division between the entity of the brain and the link being placed into it.

I think there is perhaps one way to blur the line a bit more. The internal machine link must be flexible and broadly parallel enough at interacting with the brain in such a way that the external component can become interleaved at the level of a neural network. It cannot be a separate neural network; there can be no apparent division for it to work. In such, the training of the brain itself would have to be in parallel to an external neural network in such a way that the network map smoothly spans between the two. In this case, “thinking together,” would have no duality. What it means is that you could probably only do it at this level with an infant whose brain is still rapidly growing and who doesn’t actually have a cohesive enough neural network to really have a full self.

I’m not sure this hybrid has a big advantage over a pure machine. The one possibility that could be open here is that the external part of the amalgamated neural network is open-ended; even though there is finite flexibility in the adult flesh-and-blood brain, awareness would have to be decentralized across the whole network, where the machine part continues to be flexible later in that person’s life. In this way, awareness could smoothly transition to additions into the machine neural network later.

Problem here is that I don’t know of any technology currently available that could build this sort of physical network. The interlinking of neurons in the brain is so casually parallel and flexible that it does not resemble the means by which neural networks are achieved in computers. I don’t believe it can happen with monolithic silicon; there would need to be something new. Given maturity of the technology, could such a thing be expanded to adults? I don’t know.

Science fiction is all well and good, but I think we’re probably not there yet. Maybe at the end of the century of biology using a combination of genetically tamed bacteria and organic semiconductors.

(edit 9-30-18):

One thing to add that I learned a bit earlier this week, which maybe pokes another little hole in the Cult of Elon. Please note that I never refer to him as “Elon”; I’ve never met him, I’m not on a first name basis with him and I definitely do not know him. To me, he’s Elon Musk or Mr. Musk, but not Elon. I will give him respect by not pretending familiarity with him. I do respect him, in as much as I can respect a celebrity whose exploits I hear and read about in the popular media, but I’m not a member of the Cult of Elon.

Elon Musk gets tremendous credit for Tesla the car company. He runs the company and is given a huge amount of credit for their existence. He does deserve credit for his hard work and his role in Tesla, but beware thinking of Tesla as his child or his creation. Elon Musk did not found Tesla.

Tesla was founded by Martin Eberhard and Marc Tarpenning. Elon Musk was apparently among the major round-one investors in the company and ended up as chairman of the company board since he put down a controlling investment share. Musk did not become CEO of Tesla until he helped oust Martin Eberhard from that role when Tesla apparently floundered. Eberhard and Tarpenning have since both departed from Tesla and it sounds as if the relationship is an acrimonious one, with Eberhard claiming that Musk has been rewriting history.

Who can say what claims are completely true, but if you read about Elon Musk, it seems like he doesn’t play very well with others if he isn’t in charge. And, being in charge, he gets a lion’s share of the credit for the vision and execution. Stan Lee gets this kind of credit too and is perhaps imbued with similar vision. It definitely overwrites the creativity of those other talented people who also had a hand in actualizing the creation.

The fact of Tesla is that someone other than Musk started the vision and Musk used his tremendous financial leverage to buy that vision. He now gets credit for it. I’ll let the reader decide how much credit he actually deserves.

Another thing I thought to spend a moment writing about is the reason why I chose the original title to this post. Why “Quality versus Quantity?” In the last part of the original blog post, I mentioned the dichotomy between humans being able to access information as quickly as AI and humans being able to make as good of decisions as AI. I think that making people faster does not equate to making people better. This is one of the potentially powerful (and dangerous) aspects of AI: the point is that AI could be made ab initio to convey human-like intelligence without incorporating the intrinsic, baked-in flaws in human reasoning that are the result of us being the evolved inhabitants of the African savanna rather than the engineered product of a lab.

The tech industry may not be thinking too carefully about this, but the AI that is being created right now is very savant-like; it incorporates mastery acquired in a manner that humans can also “sort of” achieve. Note, I say “sort of” because this superhumanity is achieved by humans at the expense of the parts of humanity that are recognizably human: autistic savants are not typical people and do not relate to typical people as a typical person would. I believe this kind of intelligence is valuable because many people exhibit qualities of it to the benefit of the rest of the human race, but I think these people are often weak in other regards that place them out of sorts with what is otherwise “human.” Machines duplicating this intelligence are not headed toward being more human because the human parts in the equation slow down the genius. There is an intrinsic advantage to building the AI without the humanity because the parts that are recognizable as human fundamentally do not make the choices which would be a coveted characteristic of a high-quality AI. This is not to say that such an AI would be unable to relate to people in a manner that humans would be able to regard as “human-like”… to the contrary; I think that these machines can be made so that the human talking to one would be unable to tell the difference, but it would be a mistake to claim that the AI thinks as a human does just because it sounds like a person.

If people given cybernetic interfaces with computers are able to make deep decisions many times more quickly than unaltered humans, does this make them as good as an AI? The quantity of decisions attempted will be offset by the number of times those quickly made decisions turn out to be failures. On the other hand, the AI that people aspire to create is defined by the specifically selected capacity to make successful decisions more frequently than people can. You can see this in the victory of AlphaGo over human opponents: the person and the machine made choices at the same rate, alternating turns so that their decision rate was 1:1, but the machine made right choices more frequently and tended to win. Would the person have been better if they had made choices faster? If the AI makes one decision of sufficient foresight and quality that humans are required to stumble through ten decisions in order to just keep up, what point is there in humans being faster than they are? While the AI is intrinsically faster just by being a machine, this does not begin to touch the potential that the AI need not be intrinsically faster. It just needs to be able to make that one decision that the fastest person had no hope of ever seeing. Smarter is not always faster.

That’s what I mean by quality versus quantity. Put another way, would Elon Musk have made his notorious “funding secured” Tweet, which has since gotten him sued by the SEC and lost him his position as chairman of the Tesla board, if he had a smartphone plugged straight into his brain? His out-of-control interface with his waistband-mounted internet box is what caused him problems in the first place; would an even more intimate interface have improved matters? Where an AI could’ve helped is by interceding: recognizing that the decision would run afoul of the SEC in two months and preventing the Tweet from being carried out.

Think about that. It should scare the literal piss out of you.

Magnets, how do they work? (part 4)

(footnote: here lies Quantum Mechanical Spin)

This post continues the discussion of how ferromagnets work as considered in part 1, part 2 and part 3. The previous parts dealt with the basics of electromagnetism, introducing the connections from Maxwell’s equations to the magnetic field, illustrating the origin of the magnetic dipole and finally demonstrating how force is exerted on a magnetic dipole by a magnetic field.

In this post, I will extend in a totally different direction. All of the previous work highlighted magnetism as it occurs with electromagnets, how electric currents create magnetic fields and respond to those fields. The magnetic dipoles I’ve outlined up to this point are loops of wire carrying electric current. Bar magnets have no actual electrical wires in them and do not possess any batteries or circuitry, so the magnetic field coming from them must be generated by some other means. The source of this is a cryptic phenomenon that is in its nature quantum mechanical. I did hint at it in part 3, but I will now address it head on.

In 1922, Walther Gerlach and Otto Stern published an academic paper where they brought to light a weird new phenomenon which nobody had seen before (it’s actually the third paper in a series that describes the development of the experiment, with the first appearing in 1921). That paper may be found here if you aren’t stuck behind a paywall. Granted, the paper is in German and will require you to find some means of translation, but that is the original paper. The paper containing the full hypothesis is here.

In their experiment, Stern and Gerlach built an evaporator furnace to volatilize silver. Under vacuum, as good as could be attained at the time, silver atomized from the furnace was piped through a series of slits to collimate a beam of flying silver atoms. This beam of silver atoms was then passed through the core of a magnetic field generated by an electromagnet, in a situation much as mentioned previously in the context of the Lorentz force.

Lorentz force diagram

As illustrated here, one would expect a flying positive charge ‘q’ with velocity ‘v’ to bend one way upon entering magnetic field ‘B’, while a negative charge bends the other way. Without charge, there is no deflection due to the Lorentz force. In the Stern-Gerlach experiment, the silver atom beam passing through the magnetic field then impinges on a plate of glass, where the atoms are deposited. This glass plate could be taken and subjected to photographic chemistry to “develop” and enhance the intensity of the silver deposited on the surface, enabling the experimenters to see more clearly any deposition on the surface of the glass. According to the paper, the atom beam was cast through the magnetic field for 8 hours at a stretch before the glass plate was developed to see the outcome.
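For reference, the magnetic part of the Lorentz force on a moving charge is

\[
\mathbf{F} = q\,\mathbf{v}\times\mathbf{B},
\]

so a neutral silver atom (q = 0) feels no deflection from this mechanism at all.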

The special thing about the magnetic field in the Stern Gerlach experiment is that, unlike the one in the figure above, it was intended to have inhomogeneity… that is, to be very non-uniform.

For the classical expectations, a silver atom is a silver atom is a silver atom, where all such atoms are identical to one another. From the evaporated source, the atoms are expected to have no charge and would be undeflected by a magnetic field due to conventional Lorentz force, as depicted above. So, what was the Stern-Gerlach experiment looking for?

Given the new quantum theory that was emerging at the time, Stern and Gerlach set out to examine quantization of angular momentum of a single atom. Silver is an interesting case because it has a full d-orbital, but only a half-filled s-orbital. In retrospect, s-orbitals are special because they have no orbital angular momentum themselves. This in addition to the other closed shells in the atom would suggest no orbital angular momentum for this atom. In 1922, the de Broglie matter wave was not yet proposed and Schrodinger and Heisenberg had not yet produced their mathematics; quantum mechanics was still “the old quantum” involving ideas like the Bohr atom. In the Bohr atom, electron orbits are allowed to have angular momentum because they explicitly ‘go’ around, exactly like the current loop that was used for calculations in the previous parts of this series. The idea then was to look for quantized angular momentum by trying to detect magnetic dipole moments. A detection would be exactly as detailed in part 3 of this series; magnetic moments are attracted or repelled depending on their orientation with respect to an external magnetic field.

In their experiment, Stern and Gerlach did what scientists do: they exposed a glass plate to the silver beam with the electromagnet turned off, and then they turned around and did the same experiment with the magnet turned on. It produced the following set of figures:

Stern-Gerlach figures 2 and 3

The first circle, seen at left, is Figure 2 from the paper, where there is no magnetic field exerted on the beam. The second circle, with the ruler in it, is Figure 3, where a magnetic field has now been turned on. In the region at the center of the image, the atom beam is clearly split into two bands relative to the control exposure. The section of field in the middle of the image contains a deliberate gradient, where the field points horizontally with respect to the image and changes strength going from left to right. One population of silver atoms diverts left under the influence of the magnetic field while a second population diverts right.

Why do they deviate?

What this observation means is that the S-orbital electron in an evaporated silver atom, having no magnetic dipole moment due to the orbital angular momentum of going around the silver atom nucleus, has an intrinsic dipole moment in and of itself that can feel force under the influence of an external magnetic field gradient. This is very special.

The figure above is an example of a quantum mechanical “observation” where what has appeared are eigenstates. As I’ve repeated many times, when you make an observation in quantum mechanics, you only ever actually see eigenstates. In this case, it is a very special eigenstate with no fully classical analog: spin. For fundamental spin, especially the spin of a silver atom with a single unpaired s-orbital electron, there are only two possible spin states, now called spin-up and spin-down. Spin appears by providing a magnetic dipole moment to a “spinning” quantum mechanical object. The electron, having a charge and a spin, has a magnetic dipole moment and is therefore responsive to a magnetic field gradient. The population of silver atoms passing into the magnetic field deflects according to this tiny electron dipole moment, where the nucleus is dragged along by the s-orbital electron state due to the electrostatic interaction between the electrons and the nucleus. The dipole moment is repelled or attracted in the magnetic field gradient exactly as described in part 3, and since this dipole is quantum mechanical, it samples only two possible states: oriented with the external field or oriented against it, giving two bands in the figure above.
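As in part 3, the force responsible for the splitting comes from the gradient of the field acting on a dipole,

\[
\mathbf{F} = \nabla\!\left(\boldsymbol{\mu}\cdot\mathbf{B}\right) \;\approx\; \mu_z\,\frac{\partial B_z}{\partial z}\,\hat{\mathbf{z}},
\]

where the approximate form holds when the field varies mainly along one axis; with only two allowed values of the dipole projection, the beam splits into exactly two bands.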

The conventional depiction of the magnetic dipole formed by a wire loop can be adapted to the quantum mechanical phenomenon of spin by adding a scale adjustment called the gyromagnetic ratio. This number enables the angular momentum actually associated with the spin quantum number to be scaled slightly to account for the strength of the magnetic dipole produced by that spin. This is necessary since a particle carrying a spin is not actually a wire loop: the great peculiarity of spin is that if it is postulated as the internal rotation of a given particle, the calculated distribution of the object in question tends to break relativity in order to generate the appropriate angular momentum, leading most physicists to consider spin to be a quantum mechanical phenomenon that is not actually the object ‘spinning’. For all intents and purposes, spin acts very much like actual rotational spin and it shows up in a way that is very similar to electric charges running around a wire loop.

spin magnetic moment

The math in this figure is quick and fairly painless; it converts magnetic dipole moment from a wire loop into a magnetic dipole moment that is due to spin angular momentum. The equation at the start is classical. The equation at the end is quantum mechanical. One thing that you often see in non-relativistic quantum mechanics is that classical quantities adopt into quantum mechanics as operators, so the thing at the very end is the magnetic dipole moment operator. This quantity can be recast various ways, including with the Bohr magneton and in various adjustments of g while the full operator is useful in Zeeman splitting and in NMR.
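Schematically, the chain goes from the current loop, to orbital angular momentum, to spin:

\[
\boldsymbol{\mu} = I\mathbf{A} \;\longrightarrow\; \boldsymbol{\mu} = \frac{q}{2m}\,\mathbf{L} \;\longrightarrow\; \hat{\boldsymbol{\mu}}_s = -\,g_s\,\frac{e}{2m_e}\,\hat{\mathbf{S}} = -\,g_s\,\frac{\mu_B}{\hbar}\,\hat{\mathbf{S}}, \qquad g_s \approx 2,
\]

where the minus sign appears because the electron’s charge is negative and the g-factor is the gyromagnetic scale adjustment mentioned above.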

The existence of spin gives us a very interesting quantity; this is a magnetic dipole moment that is intrinsic to matter in the same way as electric charge. It simply exists. You don’t have to create it, as in the wire loop electromagnet, because it is already just there. There is no requirement for batteries or wires. Spin is one candidate source for the magnetic dipole moment that is required to produce a bar magnet.

It is completely possible to attribute the magnetism of bar magnets to spin, but saying it this way is actually something of a cop-out. How are atoms organized so that the spin present in atoms of iron becomes large enough to create a field that can cause a piece of metal to literally jump out of your hand and go sliding across the table? Individual electronic and atomic spins are really very tiny and getting them to organize in such a  way that many of them can reinforce each other’s strengths is difficult. I’ve said previously that chemistry is wholly dependent on angular momentum closures and one will note that atomic orbitals fill or chemically bond in such a way as to negate angular momentum: for example, S-orbitals (and each and every available orbital) are filled by two electrons, one spin-up and one spin-down, so that no individual orbital is left with angular momentum. Sigma bonds and Pi bonds are formed so that unpaired electrons in any atom may be shared out to other atoms in order for participants to cancel their spin angular momentum. While there are exceptions, like radicals, nature generally abhors exposed spin. Even silver, the atoms of which are understood to have detectable spin, is not ferromagnetic: you can’t make a bar magnet out of silver! What conspires to make it possible for spin to become macroscopically big in bar magnets? This is the one big puzzle left unanswered.

As an interesting aside, in their paper, Stern and Gerlach add an acknowledgement thanking a “Mr. A. Einstein” for helping provide them with the electromagnet used in their experiment from the academic center he headed at the time.