A big preoccupation of mine is trying to fill the space between my backgrounds, to understand how one thing leads to another.

When a biochemist learns quantum mechanics (QM), it happens from a background where little mathematical sophistication is required; maybe particle-in-a-box appears in the middle of a lower-level Physical Chemistry class, and many results of QM are encountered qualitatively in General Chemistry, or perhaps in greater detail in Organic Chemistry. A biochemist does not need to be expert at these things, since the meat of biochemistry is a highly specialized corner of organic chemistry dealing with a relatively small number of molecule types, where the complexity of the molecules tends to force the details into profound abstraction. Proteins and DNA, membranes and so on are all expressed mostly as symbols, sequences or structural motifs. Reactions occur symbolically: chemists have worked out the details of how a reaction proceeds (or not) without really saying anything very profound about it. This created a situation of deep frustration for me once upon a time, because it always seemed like I was relying on someone else to tell me the specifics of how something actually worked. I always felt helpless. Enzymatic reaction mechanisms drove me crazy because they seem very ad hoc; no reason they shouldn’t, since evolution is always ad hoc, but the symbology used always made it opaque to me what was actually happening.

When I was purely a biochemist, an undergraduate once asked me whether they could learn QM in chemistry, and I honestly answered “Yes,” that everything was based on QM, but withheld the small disquiet I felt that I didn’t really believe I understood how it fit in. My background in QM being what it was at that point, I didn’t truly know a quantum dot from a deviled egg. Yes, quantum defines everything, but what does a biochemist know of quantum? Where does bond geometry come from? Everything seems like arbitrary tinker toys using O-Chem models. Why is it that these things stick together as huge, solid ball-and-stick masses when everything is supposed to be puffy wave clouds? Where is this uncertainty principle thing people vaguely talk about in hushed tones when referring to the awe-inspiring weirdness that is QM? You certainly would never know such details looking at model structures of DNA. This frustration eventually drove me to multiple degrees in physics.

In physics, QM takes on a whole other dimension. The QM that a physicist learns is concerned with gaining the mathematical skill to deal with the core of QM while retaining the flexibility to specialize in a needed direction. Quantum theory is a gigantic topic which no physicist knows in its entirety. There are two general cousins of the theory which move in different directions, the Hamiltonian formalism diverging from the Lagrangian. They connect, but have power in different situations. The very specific reaches of a topic are sometimes not well presented: you have to go a long way off the beaten path to hit either the Higgs boson or General Relativity. Physicists in academia are interested in the weird things lying at the limits of physics and focus their efforts on pushing to and around those weirdnesses; you only focus effort on specializations of quantum mechanics as they are needed to reach the untouched things physicists actually care to examine. This means that physicists sometimes spend little effort on topics that are interesting to other fields, like chemistry… and the details of the foundations of chemistry, like the specifics of the electronic structure of the periodic table, are under the husbandry of chemists.

If you read my post on the hydrogen atom radial equation, you saw the most visible model atom. The expanded geometries of this model inform the structure of the periodic table. Most of the superficial parts of chemistry can be qualitatively understood from examining this model. S, P, D, F and so on orbitals are assembled from hydrogenic wave functions… at least they can be on the surface.

Unfortunately, the hydrogenic orbitals can only be taken as an approximation for all the other atoms. There are basically no analytic solutions for the wave functions of any atom beyond hydrogen.

Fine structure, hyperfine structure and other atomic details emerge from perturbations of the hydrogenic orbitals. Perturbation is a powerful technique, except that it’s not an exact solution. Perturbation theory approaches a solution by assuming that some effect is a small departure from a much bigger situation that is already solved. You then do an expansion in which successive terms approach the perturbed answer more and more closely. Hydrogenic orbitals can be used as a basis for this. Kind of. If the “perturbation” becomes too big relative to the base situation, the expansion necessary to approximate it becomes too big to express. Technically, you can express any solution for any situation from a “complete” basis, but if the perturbation is too large compared to the context of the basis, the fraction of the basis required for an accurate expression outgrows the “available” basis before you know it.
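To make that concrete, here is the standard textbook (Rayleigh-Schrödinger) form of the idea: split the Hamiltonian as $H = H_0 + V$, with $H_0$ solved exactly, and the energy of state $n$ expands as

$$E_n \approx E_n^{(0)} + \langle \psi_n^{(0)} | V | \psi_n^{(0)} \rangle + \sum_{m \neq n} \frac{\left| \langle \psi_m^{(0)} | V | \psi_n^{(0)} \rangle \right|^2}{E_n^{(0)} - E_m^{(0)}} + \cdots$$

When $V$ is small, the series converges quickly; when $V$ is comparable to $H_0$, the higher corrections mix in ever more of the basis and the expansion runs away from you.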

When I refer to a “basis” here, I’m talking about Hilbert spaces: the use of orthogonal function sets as a method for composing wave functions. This works like Fourier series, which is one of the most common Hilbert space basis sets. These Hilbert spaces contain infinitely many basis functions, which is more functions than any computer can use. The reality is that you can only ever actually use a small portion of a basis.
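In other words, exactly as with Fourier coefficients, any wave function can be written over an orthonormal basis $\{\phi_n\}$ and then truncated:

$$\psi(\mathbf{r}) = \sum_{n=1}^{\infty} c_n \phi_n(\mathbf{r}) \approx \sum_{n=1}^{N} c_n \phi_n(\mathbf{r}), \qquad c_n = \int \phi_n^*(\mathbf{r})\, \psi(\mathbf{r})\, d^3r$$

The whole game is choosing a basis where a small $N$ already does the job.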

The hydrogen situation is merely a prototype. If you want to think about helium or lithium or so on, the hydrogenic basis becomes merely one choice of how to approach the problem. The Hamiltonians of other atoms are structures that can in some cases be bigger than is easily approachable by the hydrogenic basis. Honestly, I’d never really thought very hard about the other basis sets that might be needed, but technically they are a very large subject, since they are needed for the hundred-odd other atoms of the periodic table beyond hydrogen. These other atoms have wave functions that are kind of like those of hydrogen, but different. The S-orbital of hydrogen is a good example of the S-orbitals found in many atoms, even though the functional form for other atoms is definitely different.

This all became interesting to me recently on the question of how to get to molecular bonds as more than the qualitative expression of hydrogenic orbital combinations. How do you actually calculate bond strengths and molecular wave functions? These are important to understanding the mechanics of chemistry… and to poking a finger from quantum mechanics over into biochemistry. My QM classes brushed past it, admittedly, deep in the quagmire of other miscellaneous quantum necessary to deal with a hundred different things. I decided to take a sojourn into the bowels of quantum chemistry and develop a competence with the Hartree-Fock method and molecular orbitals.

The quantum mechanics of quantum chemistry is, surprisingly enough, mechanically simpler than one might immediately expect. This is splitting hairs, considering that all quantum is difficult, but it is actually somewhat easier than the jump from no quantum to some quantum. Once you know the basics, you pretty much have everything needed to get started. Still, as with all QM, this is not to scoff at; there are challenges in it.

This form of QM is a Hamiltonian formalism whose first mathematics originated in the late 1920s and 1930s. The basics revolve around the time-independent Schrödinger equation. Where it jumps to being modern QM is in the utter complexity of the construct… simple individual parts, just crazily many of them. This type of QM is referred to as “many-body theory” because it involves wave functions containing dozens to easily hundreds of interactions between individual electrons and atomic nuclei. If you thought the Hamiltonian I wrote in my hydrogen atom post was complicated, consider that it was only for one electron being attracted to a fixed center… and did not even include the components necessary to describe the mechanics of the nucleus. The many-body theory used to build up atoms with many electrons works for molecules as well, so learning generalities about the one case is learning about the other case too.

As an example of how complicated these Schrödinger equations become, here is the time-independent Schrödinger equation for lithium.

$$\left[ -\frac{1}{2M}\nabla_N^2 - \frac{1}{2}\nabla_1^2 - \frac{1}{2}\nabla_2^2 - \frac{1}{2}\nabla_3^2 - \frac{3}{r_1} - \frac{3}{r_2} - \frac{3}{r_3} + \frac{1}{r_{12}} + \frac{1}{r_{13}} + \frac{1}{r_{23}} \right] \Psi = E\, \Psi$$

Here $M$ is the nuclear mass in units of the electron mass, $r_i$ is the distance of electron $i$ from the nucleus, and $r_{ij}$ is the distance between electrons $i$ and $j$.

This equation is simplified to atomic units to make it tractable. The part describing the kinetic energy of the nucleus is left in. All four of those Laplacian (double del) operators open up into 3D differentials like the single one present in the hydrogen atom. The next six terms describe the electrostatic interactions of the three electrons among themselves and with the nucleus. This is only one nucleus and three electrons.

As I already mentioned, there are no closed-form analytical solutions for structures more complicated than hydrogen, so many-body theory is about figuring out how to make useful approximations. And, because of the complexity, it must make some very elegant approximations.

One of the first useful features of QM for addressing situations like this is one I personally overlooked when I initially learned it. In QM, most situations that you might encounter have no exact solutions; outside of a scant handful of cases, you can’t truly “solve” anything. But, for all the histrionics that go along with that, the eigenstate solutions have a special property: the ground state has the lowest possible energy for the given circumstance. If you make a totally random guess about the form of the wave function that solves a given Hamiltonian, you are assured that the actual solution has a lower energy. Since that’s the case, you can play a game: if I make some random guess about the form of the solution, another guess that has a lower energy is a better guess regarding the actual form. You can minimize this, always adjusting the guess so that it achieves a lower energy, until eventually it won’t go any lower. The actual solution still ends up lower, but maybe not by very far. Refining such energy-minimizing guesses converges toward the actual solution (or at least toward the best answer the guess’s form can offer) and is usually accomplished by systematic mathematical minimization. This method is called “variation” and is one of the major methods for constructing approximations of an eigenstate. As you might expect, it is a numerical strategy that makes heavy use of computers in the modern day, since the guesses are generally very big, complicated mathematical functions. Variational strategies are responsible for most of our knowledge of the electronic structure of the periodic table.

Using computers to make guesses has been elevated to a high art. Literally, a trial function with a large number of unknown constants is tried against the Hamiltonian; you then take a derivative of the energy to see how it varies as a function of any one constant and adjust that constant until the energy is at a minimum, where the first derivative is near zero and the second derivative is positive, indicating a minimum. Do this over and over again with all the available constants in the function, and eventually the trial wave function converges toward the actual solution.
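Here is a minimal sketch of that game in Python for the one case where the answer is known, the hydrogen ground state, using a single Gaussian trial function $e^{-\alpha r^2}$. The closed-form energy expectation $E(\alpha) = \tfrac{3\alpha}{2} - 2\sqrt{2\alpha/\pi}$ (in hartrees) is a standard textbook result, and the minimizer plays the derivative-chasing role described above:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Trial wave function psi(r) = exp(-alpha * r^2) against the hydrogen Hamiltonian.
# In atomic units, <T> = 3*alpha/2 and <V> = -2*sqrt(2*alpha/pi), so:
def trial_energy(alpha):
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

# Vary the single free constant until the energy stops going down.
result = minimize_scalar(trial_energy, bounds=(0.01, 10.0), method="bounded")
print("best alpha: ", result.x)    # ~0.2829 (analytically 8/(9*pi))
print("best energy:", result.fun)  # ~-0.4244 hartree; the true ground state, -0.5,
                                   # sits below the guess, as variation promises
```

A single Gaussian is a deliberately poor guess here (more on Gaussians below), which is why the minimum lands some fifteen percent above the true energy; richer trial functions close the gap.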

Take that in for a moment. We understand the periodic table mainly by guessing at it! Part of what makes these wave functions so complicated is that the state of any one electron in any system more complicated than hydrogen depends on every other electron and charged body present, as shown in the lithium equation above. The basic orbital shapes are not that different from hydrogen, even requiring spherical harmonics to describe the angular shape, but the specific radial scaling and distribution is not solvable. These electrons influence each other in several ways. First, they place plain old electrostatic pressure on one another: all electrons push against each other by their charges and shift each other’s orbitals in subtle ways. Second, they exert what’s called “exchange pressure” on one another. Every electron in the system is indistinguishable from every other, and electrons specifically deal with this by requiring that the wave function be antisymmetric, such that no electron can occupy the same state as any other. You may have heard this called the Pauli exclusion principle, and it is just a counting effect. In a way, this may be why quantum classes tend to place less weight on the hydrogen atom radial equation: even though it holds for hydrogen, it works for nothing else.

Multi-atom molecules stretch the situation even further. Multiple atoms, unsolvable in and of themselves, are placed together in some regularly positioned array in space, unsolvable atoms now compounded into unsolvable molecules. Electrons from these atoms are then all lumped together collectively in some exchange-antisymmetric wave function where the orbitals depend on all the bodies present in the system. These orbitals are referred to in quantum chemistry as molecular orbitals and describe how an electron cloud is dispersed among the many atoms present. Covalent and ionic bonds are molecular orbitals in which electrons are dispersed between two atoms and act to hold those atoms in some fixed relation to one another. The most basic workhorse method for dealing with this highly complicated arrangement is a technique referred to as the Hartree-Fock method. Modern quantum chemistry is all about extensions beyond Hartree-Fock, which often use this method as a spine for producing an initial approximation and then switch to other variational (or perturbative) techniques to improve the accuracy of the initial guess.

Within Hartree-Fock, molecular orbitals are built up out of atomic orbitals. The approximation postulates, in part, that each electron sits in some atomic orbital contributed to the system by a given atom, where the presence of many atoms tends to mix the orbitals among each other. To obey exchange, each electron literally samples every possible contributed orbital in a big antisymmetric superposition.

Hartree-Fock is sometimes referred to as self-consistent field theory. It uses linear superpositions of atomic orbitals to describe the molecular orbitals that actually contain the valence electrons. In this picture, the electrons don’t really occupy any single atomic orbital, but some combination of many orbitals all at once. For example, a version of the stereotypical sigma covalent bond is actually a symmetric superposition of two atomic S-orbitals. The sigma bond contains two electrons and is made antisymmetric by the solitary occupancy of electron spin states, so that the spatial part of the S-orbitals from the contributing atoms can enter as a symmetric combination. This gets weird when you consider that you can’t tell which electron is spin up and which is spin down, so they’re both in a superposition.
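In symbols, this is the standard two-electron singlet construction: atoms A and B each contribute an orbital $\phi$, the bonding combination is the symmetric sum, and the antisymmetry lives entirely in the spin factor:

$$\psi_\sigma(\mathbf{r}) \propto \phi_A(\mathbf{r}) + \phi_B(\mathbf{r})$$

$$\Psi(1,2) \propto \psi_\sigma(\mathbf{r}_1)\, \psi_\sigma(\mathbf{r}_2) \left[ \alpha(1)\beta(2) - \beta(1)\alpha(2) \right]$$

Swapping electrons 1 and 2 leaves the spatial part alone and flips the sign of the spin part, so the total wave function is antisymmetric, exactly as exchange demands.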

[Figure: probability density of a sigma bond]

The sigma bond shown here in Mathematica was actually produced from two m=0 hydrogenic p-orbitals. The density plot reflects probability density. The atom locations were marked afterward in PowerPoint. The length of the bond here is arbitrary and not energy-minimized to any actual molecule. This was not produced by Hartree-Fock (though it would occur in Hartree-Fock) and is added only to show what molecular bonds look like.

For completeness, here is a pi bond.

[Figure: probability density of a pi bond]

At the start of Hartree-Fock, the molecular orbitals are not known; the initial guess is simply that every electron sits in a particular atomic orbital within the mixture. Electron density is then determined throughout the molecule and used to furnish repulsion and exchange terms among the electrons. This is then solved for energy eigenvalues and spits out a series of linear combinations describing the orbitals where the electrons are actually located, which turn out to be different from the initial guess. These new linear combinations are then thrown back into the calculation to determine electron density and exchange, which is once more used to find energy eigenvalues and orbitals, which are once again different from the previous guess. As the crank is turned repeatedly, the output orbitals converge onto the orbitals used to calculate the electron density and exchange. When these no longer particularly change between cycles, the states describing the electron density are equal to those associated with the eigenvalues: the input becomes self-consistent with the output, hence the name of the technique, production of a self-consistent field.
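Here is a toy sketch of that loop in Python. The “integrals” are random stand-ins rather than anything physical, and the basis is assumed orthonormal so the overlap matrix drops out (a real code solves the generalized problem $FC = SC\varepsilon$ and computes its integrals from the basis set), but the cycle structure, density in, eigenvalue problem, new density out, is the real thing:

```python
import numpy as np

# Toy closed-shell SCF loop in an assumed-orthonormal basis.
# Hcore and eri are random stand-ins for the real one- and two-electron integrals.
rng = np.random.default_rng(0)
n, n_occ = 4, 1                                # 4 basis functions, 1 occupied orbital

Hcore = rng.normal(size=(n, n))
Hcore = 0.5 * (Hcore + Hcore.T)                # symmetric core Hamiltonian
eri = 0.1 * rng.normal(size=(n, n, n, n))      # stand-in (mu nu | lambda sigma) integrals

D = np.zeros((n, n))                           # initial guess: zero density
for cycle in range(100):
    J = np.einsum("pqrs,rs->pq", eri, D)       # Coulomb term from current density
    K = np.einsum("prqs,rs->pq", eri, D)       # exchange term from current density
    F = Hcore + 2.0 * J - K                    # closed-shell Fock matrix
    eps, C = np.linalg.eigh(F)                 # solve for orbitals and energies
    D_new = C[:, :n_occ] @ C[:, :n_occ].T      # density built from occupied orbitals
    if np.max(np.abs(D_new - D)) < 1e-10:      # output matches input: self-consistent
        print(f"self-consistent after {cycle} cycles; orbital energies: {eps}")
        break
    D = 0.5 * D + 0.5 * D_new                  # crude damping; real codes use DIIS
```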

Once the self-consistent electron field is reached, the atomic nuclei can be repositioned within it in order to minimize the electrostatic stresses on the nuclei. Typically, the initial locations of the nuclei must be guessed, since they are not usually known exactly. A basic approximation of the Hartree-Fock method is the Born-Oppenheimer approximation, in which the massive atomic nuclei are expected to move on a much slower time scale than the electrons; the atomic nuclei create a stationary electrostatic field which arranges the electrons, but are then later moved according to the average dispersion of the electrons around them. Moving the atomic positions necessitates re-calculation of the electron field, which in turn may require that the atomic positions be readjusted again, until eventually the electron field no longer alters the atomic positions and the atomic positions are consistent with the configuration of the surrounding electrons. With the output energy of the Hartree-Fock method minimized by rearranging the nuclei, this gives the relaxed configuration of the molecule. And, from this, you automatically know the bonding angles and bond lengths.

The Born-Oppenheimer approximation is a natural simplification of the real wave function which splits the wave functions of the nuclei away from the wave functions of the electrons; it can be considered valid predominantly because of the huge difference in mass (a factor of thousands to hundreds of thousands, depending on the nucleus) between electrons and nuclei, where the nuclei are essentially not very wave-like relative to the electrons. In the lithium equation above, it would simply mean removing the first term of the Schrödinger equation, the one involving the nuclear kinetic energy, and understanding that the total energy of the molecule is not just E. For most purposes involving the shape of a molecule, the atomic nuclei can be treated as point-like, while the electrons and their orbitals constitute pretty much all of the important molecular structure.
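Schematically, the approximation factorizes the total wave function so that the nuclear coordinates $\mathbf{R}$ enter the electronic problem only as parameters:

$$\Psi_{\mathrm{total}}(\mathbf{r}, \mathbf{R}) \approx \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})\, \chi_{\mathrm{nuc}}(\mathbf{R})$$

The electronic energy $E_{\mathrm{el}}(\mathbf{R})$, computed at fixed nuclear positions, then acts as the potential surface on which the nuclei are relaxed.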

As you can see from the description, a huge number of calculations is required; I’ve described them only topically. Figuring out the best way to run Hartree-Fock has been an ongoing process since the 1930s and has been raised to a high art nearly 90 years later. At the superficial level, the Hartree-Fock approximation is hampered by not placing the nuclei directly in the wave function and by not allowing full correlation among the electrons. This weakness is remedied by variational and perturbative post-Hartree-Fock techniques, which have come to flourish with the steady increase of computational power under Moore’s Law. That said, the precision calculation of overlap integrals is so computationally demanding on the scale of molecules that the hydrogen atom eigenstate solutions are impractical as a basis set.

This actually really caught me by surprise. Hartree-Fock uses a very weird and interesting type of basis set in place of the hydrogen atom orbitals, and the reason for the choice is predominantly to reduce a completely intractable computational problem to an approachable one. When I say “completely intractable,” I mean that even the best supercomputers available today still cannot calculate the full, completely real wave functions of even small molecules. Given how powerful computers have become, this should be a stunning revelation. It is one of the big motivating factors toward using quantum computers for molecular calculations: the quantum mechanics arise naturally within the quantum computer, letting the approximations strain credulity less. The approximation used for the favored Hartree-Fock basis sets is very important to conserving computational power.

The orbitals built up around the original hydrogen atom solution to approximate higher atoms have a radial structure that has come to be known as Slater orbitals. Slater orbitals are variational functions that resemble the basic hydrogen atom orbital, which, as you may be aware, is an exponential-Laguerre polynomial combination. Slater orbitals are basically linear combinations of exponentials which are then adjusted by variation to fit the Hamiltonians of higher atoms. As I understand it, Slater orbitals can be calculated through at least the first two rows of the periodic table. These orbitals, which are themselves approximations, are actually not the preferred basis set for molecular calculations, but ended up being one jumping-off point for producing early versions of the preferred basis set.
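For reference, a single Slater-type radial function has the form

$$\chi(r) = N\, r^{\,n-1} e^{-\zeta r}$$

where $N$ is a normalization constant and the exponent $\zeta$ plays the role of an effective nuclear charge fixed by variation; hydrogen’s own 1s orbital is the special case $n = 1$, $\zeta = 1$.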

The basis set that is used for molecular calculations is the so-called “Gaussian” orbital basis set. The Gaussian radial orbitals were first produced by simple least-squares fits of Slater orbitals. In this, the Slater orbital is taken as a prototype, and several Gaussian functions in a linear combination are fitted to it until chi-squared becomes as small as possible… while the Slater orbital can only be exactly reproduced by an infinite number of Gaussians, it can be fairly closely reproduced by typically just a handful. Later Gaussian basis sets were also produced by skipping the Slater orbital prototype and applying Hartree-Fock directly to atomic Hamiltonians (as I understand it). The Gaussian fit to the Slater orbital is pretty good across most of the volume of the function except at the center, where the Slater orbital has a cusp (from the exponential) while the Gaussian is smooth… with an infinite number of Gaussians in the fit, the cusp can be reproduced, but it is a relatively small part of the function.
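Here is a sketch of that fit in Python for the simplest case, a $\zeta = 1$ Slater 1s fitted with three Gaussians, the same game that produced STO-3G. (The historical fits maximize overlap with the proper $r^2$ radial weighting rather than doing a plain unweighted least squares, so the numbers below only roughly approach the published values; the function names are mine.)

```python
import numpy as np
from scipy.optimize import curve_fit

def slater_1s(r):
    return np.exp(-r)                            # zeta = 1 Slater 1s, up to normalization

def three_gaussians(r, c1, c2, c3, a1, a2, a3):  # STO-3G style contraction
    return (c1 * np.exp(-a1 * r**2) +
            c2 * np.exp(-a2 * r**2) +
            c3 * np.exp(-a3 * r**2))

r = np.linspace(0.0, 8.0, 400)
p0 = [0.4, 0.4, 0.2, 0.1, 0.5, 2.5]              # rough starting guesses
popt, _ = curve_fit(three_gaussians, r, slater_1s(r), p0=p0)
print("coefficients:", popt[:3])
print("exponents:   ", popt[3:])
# The fit is close everywhere except r = 0, where the Slater cusp cannot be
# matched by any finite number of smooth Gaussians.
```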

[Figure: comparison of a Gaussian orbital with the equivalent Slater orbital]

Here is a comparison of a Gaussian orbital with the equivalent Slater orbital from my old hydrogen atom post. The scaling of the Slater orbital is specific to the hydrogen atom, while the Gaussian scaling is not specific to any atom.

The reason that Gaussian orbitals are the preferred model is strictly a matter of computational efficiency. Within the application of Hartree-Fock, there are several integral calculations that must be done repeatedly. Performing these integrations is computationally very costly on functions like the original hydrogen atom orbitals. With Gaussian radial orbitals, products of Gaussians are themselves Gaussians, and the integrals all end up having the same closed forms, meaning that one can simply transfer constants from one formula to another without doing any numerical busy work at all. Further, the Gaussian orbitals can be expressed in straightforward Cartesian forms, allowing them to be translated around space with little difficulty and generally making them easy to work with (I dare you: try displacing a hydrogen orbital away from the origin while it remains in spherical-polar form. You’ll discover you need the entire Hilbert space to do it!). As such, with Gaussians, very big calculations can be performed extremely quickly on a limited computational budget. The advantage here is a huge one.
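The key identity, the Gaussian product theorem, says that the product of two s-type Gaussians sitting on different centers $\mathbf{A}$ and $\mathbf{B}$ is a single Gaussian on an intermediate center $\mathbf{P}$:

$$e^{-\alpha|\mathbf{r}-\mathbf{A}|^2}\, e^{-\beta|\mathbf{r}-\mathbf{B}|^2} = e^{-\frac{\alpha\beta}{\alpha+\beta}|\mathbf{A}-\mathbf{B}|^2}\, e^{-(\alpha+\beta)|\mathbf{r}-\mathbf{P}|^2}, \qquad \mathbf{P} = \frac{\alpha\mathbf{A} + \beta\mathbf{B}}{\alpha+\beta}$$

Every multi-center integral therefore collapses back to single-center Gaussian integrals with known closed forms, which is exactly the constant-shuffling shortcut described above.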

One way to think about it is like this: Gaussian orbitals can be used in molecular calculations roughly the same way that triangles are used to build polyhedral meshes in computer graphics renderings.

Gaussians are not the only basis set used with Hartree-Fock. I’ve learned only a little so far about this alternative implementation, but condensed matter folk also use the conventional Fourier basis of sines and cosines when working on a crystal lattice. Sines and cosines are very handy in situations with periodic boundaries, such as the regimented array of a crystal lattice.
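That is, basis functions of the plane-wave form

$$\phi_{\mathbf{G}}(\mathbf{r}) = \frac{1}{\sqrt{\Omega}}\, e^{i\mathbf{G}\cdot\mathbf{r}}$$

where $\Omega$ is the cell volume and $\mathbf{G}$ runs over reciprocal lattice vectors, so that each basis function automatically carries the periodicity of the crystal.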

Admittedly, as far as I’ve read, Hartree-Fock is an imperfect solution to the whole problem. I’ve mentioned some of the aspects of the approximation above, and it must always be remembered that it fails to capture certain aspects of the real phenomenon. That said, Hartree-Fock provides predictions that are remarkably close to actual measured values, and the approximation lends itself well to post-processing that further improves the outcomes to an impressive degree (if you have the computational budget).

I found this little project a fruitful one. This is one of those rare times when I actually blew through a textbook as if I were reading a novel. Some of the old citations regarding self-consistent field theory are truly pivotal, important papers: I found one from about the mid-1970s which had 10,000 citations on Web of Science! In the textbook I read, the chemists goofed up an important derivation necessary to produce a workable Hartree-Fock program, and I was able to hunt down the 1950 paper detailing said calculation. Molecular orbital theory is a very interesting subject, and I think I’ve made some progress toward understanding where molecular bonds come from and what tools are needed to describe how QM produces molecules.

(Edit 11-6-18):

One cannot walk away from this problem without learning exactly how monumental the calculation is.

In Hartree-Fock theory, the wave functions are expressed in the form of determinants in order to encapsulate the antisymmetry of the electronic wave function. These determinants are an antisymmetrized sum of permutations over the orbital basis set. Each permutation ends up being its own term in the total wave function. The number of such terms goes as the factorial of the number of electrons contained in the wave. Moreover, the probability density is the square of the wave function.

Factorials become big very quickly.

Consider a single carbon atom. This atom contains 6 electrons. From this, the total wave function for carbon has 6! terms, and 6! = 720. The probability density then has 720^2 terms… which is 518,400 terms!

That should make your eyes bug out. You cannot ever write that in its full glory.

Now, for a simple molecule, let’s consider benzene. That’s six carbons and six hydrogens. So, 6×6 + 6 = 42 electrons. The determinant would contain 42! terms. That is 1.4×10^51 terms! The probability density has about 2×10^102 terms…

Avogadro’s number is only 6.02×10^23.
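The arithmetic, for anyone who wants to check it:

```python
import math

print(math.factorial(6))               # 720 terms in the carbon wave function
print(math.factorial(6) ** 2)          # 518,400 terms in its probability density
print(float(math.factorial(42)))       # ~1.4e51 terms in the benzene determinant
print(float(math.factorial(42)) ** 2)  # ~2.0e102 terms in its probability density
```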

If you are trying to graph the probability density as a function of position, the cross terms are important to the value of the density at any location, meaning that you have ~10^102 terms. This assures that you can never graph it in order to visualize it! But if you integrate that density across all of space in the coordinates of each electron (an integral with 42 3D measures), every cross term containing an electron in two different states dies by orthogonality. Because a term cannot survive if it has even one zero among its 42 3D measures, only the diagonal terms survive, and the normalized probability simply evaluates to the number of electrons in the wave function. Integrating the wave function totally cleans up the mess, meaning that you can still do integrals to find expectation values while thinking only about sums across the 42 electrons. This orthogonality issue is why you can do quantum chemistry at all: for an operator working in a single electron’s space, every overlap that doesn’t involve that electron must evaluate to 1 for a given term to survive, which happens in a vanishing minority of cases.
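This collapse is the simplest of the standard Slater-Condon rules: for a determinant built from orthonormal spin-orbitals $\phi_a$, the expectation value of any sum of one-electron operators reduces to a plain sum over the occupied orbitals,

$$\left\langle \Psi \left| \sum_{i=1}^{N} \hat{h}(i) \right| \Psi \right\rangle = \sum_{a=1}^{N} \left\langle \phi_a \right| \hat{h} \left| \phi_a \right\rangle$$

with all of the 10^51-odd cross terms killed by orthogonality.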

For purposes of visualization, these equations are unmanageably huge. Not merely unmanageably, but unimaginably so. So huge, in fact, that they cannot be expressed in full except in the most simplistic cases. Benzene is only six carbons, and it’s effectively impossible to tackle in the sense of the total wave function. The best you can do is look for expressions for the molecular orbitals… which may contain only N terms (as many as 42 for benzene). Molecular orbitals can be considered the eigenstates of the molecule, where each one can be approximated to contain only one electron (or one pair of electrons in the case of closed-shell calculations). The fully quantum weirdness here is that every electron samples every eigenstate, which is basically impossible to deal with.

For anyone who is looking, some of the greatest revelations that constructed organic chemistry as you might know it occurred as early as 1930. Linus Pauling wrote a wonderful paper in 1931 in which he outlines one way of anticipating the tetrahedral bond geometry of carbon… performed without these crazy determinant wave functions, using simple consideration of the vanilla hydrogenic eigenstates. Sadly, these are qualitative results without resorting to more modern methods.
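The flavor of Pauling’s construction, in modern notation, is to mix the one s and three p hydrogenic states into four equivalent hybrids pointing at the corners of a tetrahedron, for example

$$h_1 = \tfrac{1}{2}\left(s + p_x + p_y + p_z\right), \qquad h_2 = \tfrac{1}{2}\left(s + p_x - p_y - p_z\right), \quad \ldots$$

with the remaining two sign patterns completing the set; the 109.5° angles of carbon’s bonds fall out of the geometry of these combinations.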

(Edit 11-21-18):

Because I can never just leave a problem alone, I’ve been steadily cobbling together a program for running Hartree-Fock. If you know me, you’ll know I’m a better bench chemist than I am a programmer, despite how much time I’ve spent on the math. I got interested because I just understand things better if I do them myself. You can’t calculate these things by hand, only by computer, so off I went into a programming language that I am admittedly pretty incompetent at.

In the steady process of writing this program, I’ve just hit a point where I can calculate some basic integrals. Using the STO-3G basis set produced by John Pople’s lab in 1969, I used my routines to evaluate the ground state energy of the hydrogen atom. There is a lot of script in this program just to work the basic integrals, and it becomes really hard to diagnose whether the program is working or not because of the density of calculations. So, it spits out a number… is it the right number? That is very hard to tell.

I used the 1s orbital from STO-3G to compute the kinetic and nuclear interaction energies and then summed them together. With bated breath, one click of the key to convert to eV…

Bam: -13.47 eV!

You have no idea how good that felt. The accepted value of the hydrogen atom ground state is -13.6 eV. I’m only off by about 1%! That isn’t bad using an archaic basis set which was intended for molecular calculations. Since my little laptop is a supercomputer next to the machines that originally created STO-3G, I’d say I’m doing pretty well.
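For anyone who wants to reproduce that number without building a whole integrals engine: at a single center, the overlap, kinetic, and nuclear-attraction integrals between s-type Gaussians all have tiny closed forms, and the published $\zeta = 1$ STO-3G exponents and contraction coefficients (Hehre, Stewart & Pople, 1969) give the same -13.47 eV in a dozen lines. This is a from-scratch sketch for checking the arithmetic, not a piece of my actual program:

```python
import numpy as np

# STO-3G fit to a zeta = 1 Slater 1s (Hehre, Stewart & Pople, 1969).
alpha = np.array([2.227660584, 0.405771156, 0.109818036])  # Gaussian exponents
d     = np.array([0.154328967, 0.535328142, 0.444634542])  # contraction coefficients
c = d * (2.0 * alpha / np.pi) ** 0.75                      # fold in primitive norms

a = alpha[:, None] + alpha[None, :]        # pairwise exponent sums
S = (np.pi / a) ** 1.5                     # overlap (both primitives at the nucleus)
T = 3.0 * np.outer(alpha, alpha) / a * S   # kinetic energy integrals
V = -2.0 * np.pi / a                       # nuclear attraction for Z = 1

E = (c @ (T + V) @ c) / (c @ S @ c)        # <psi|H|psi> / <psi|psi>
print(E, "hartree =", E * 27.2114, "eV")   # ~ -0.4949 hartree = -13.47 eV
```

The missing 1% is largely the cusp problem from the Slater-to-Gaussian fit discussed above.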

I’m not sure how many lines of code that is, but for me, it was a lot, particularly since my program is designed to accommodate higher angular momenta than the S orbital and more complicated basis sets than STO-3G. Cranking out the right number here feels really good. I can’t help but goggle at how cheap computational power has become since the work that got Pople his Nobel prize.

(Edit 12-4-18):

Still spending time working on this puzzle. There are some other interesting adjustments to my thinking as I’ve been tackling this challenge, which I thought I might write about.

First, I really didn’t specify the symmetry I was referring to above, which gives rise to the huge numbers of terms in the determinant-style wave functions. A wave function of this sort, which contains many electrons all at once, must be antisymmetric under exchange because electrons are fermions. This is expressed with an operator called the ‘exchange operator,’ whose sole purpose is to trade electrons within the wave function; a fermionic wave function has an eigenvalue of -1 when operated on by the exchange operator. This means that if you trade two electrons within the wave function, the wave function remains unchanged except for a factor of -1. And this holds for any exchange you might perform between any two electrons in that wave function. The way to produce a wave function that preserves this symmetry is by permuting the positional variables of the electrons among the orbitals that they might occupy, as executed in a determinant where the orbitals form one axis and the electron coordinates form the other. The action of this permutation churns out huge numbers of terms, all of which are the same set of orbitals, but with the coordinates of the electrons permuted among them.
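Written out, this is the Slater determinant, orbitals along one axis and electron coordinates along the other:

$$\Psi(x_1, \ldots, x_N) = \frac{1}{\sqrt{N!}} \begin{vmatrix} \phi_1(x_1) & \phi_2(x_1) & \cdots & \phi_N(x_1) \\ \phi_1(x_2) & \phi_2(x_2) & \cdots & \phi_N(x_2) \\ \vdots & \vdots & & \vdots \\ \phi_1(x_N) & \phi_2(x_N) & \cdots & \phi_N(x_N) \end{vmatrix}$$

Trading two electrons swaps two rows, which flips the determinant’s sign: exactly the -1 eigenvalue of the exchange operator. Expanding the determinant is what produces the N! permuted terms.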

A second item I wanted to mention is the interesting disconnect between molecular wave functions and atomic functions. In the early literature on the implementation of Hartree-Fock, the basis sets for the molecular calculation were constructed from fits to atomic wave functions; this was often referred to as Linear Combination of Atomic Orbitals. As I was playing around with one of these early basis sets, I tried the basis functions against the hydrogen atom Hamiltonian in order to error-check the calculus in my program by attempting to reproduce the hydrogenic state energies. Very frequently, these were giving erroneous energies, even though the Gaussians have symmetry very like the hydrogenic orbitals they were attempting to represent. Interestingly, as you may have read above, the lowest energy state, equivalent to the hydrogenic 1s, fit very closely to the ground state energy of hydrogen… and a basis with a larger number of Gaussians for the same orbital fit even more closely to that energy.

I spent a little time stymied by the fact that the higher energy functions in the basis, the 2s and 2p functions, fit very poorly to the higher energies of hydrogen. This was unnerving because the processing of these particular integrals in my program required a pretty complicated bit of programming. I got accurate energies for 1s, but poor energies for 2s and 2p… maybe the program was working for 1s, but not for 2s or 2p. The infuriating part is that 2s has very similar symmetry to 1s and is treated by the program in roughly the same manner, but the energy was off there too. I spent time analytically proving to myself that the simplest expression of the 2p orbital was being calculated correctly… and it is; I get consistent numbers across the board. There is simply a baked-in inaccuracy in this particular set of basis functions which makes them not fit the equivalent hydrogenic orbital energies. It did not make much sense to me why the molecular community cited this particular basis set so consistently even though it doesn’t fit hydrogen very well. I’m not yet totally convinced that my fundamental math isn’t somehow wrong, but when numbers emerge that are consistent with each other from different avenues, it usually means my math isn’t failing. I still have some other error checks I’m thinking about, but one additional thought must be added.

In reality, the molecular orbitals are not required to mimic the atomic parts from which they can be composed. At locations in a molecule very close to the atomic nuclei, the basis functions need to look similar to the atomic functions in order to have the flexibility to mimic atoms, but the same is not true at locations where multiple nuclei hold sway all at once. The choice of making orbitals atom-like is a convenience that might save some computational overhead; you could use any set of orthogonal functions you want and still be able to calculate the molecular orbitals without looking very deeply at what the isolated atoms look like. For roughly the first two rows of the periodic table, up to fluorine, most of the electrons in an atom are within reach of the valence shell, meaning that they are contributed out into the molecule and distributed away from the nucleus. A convenient basis set for capturing this sort of appears atom-like around the nuclei, but it need not: given an infinite number of Gaussians, Slater functions, or simple sines and cosines, the flexibility of a properly orthogonalized basis set can capture the actual orbital as a linear combination. The choice of Gaussians is computationally convenient for the situation of having the electrons distributed in finite clouds around atom centers, keeping the basis set small, but no more than that. The puzzle is simply to have sufficient flexibility at every point in the molecule for the basis to capture the appropriate linear combination describing the molecule. An infinite sum of terms can come arbitrarily close.

In this light, it isn’t necessary for the basis functions to exactly duplicate the true atomic orbitals, since that isn’t what you’re looking for to begin with. In a way, the atomic orbitals are therefore disconnected from the molecular orbitals. Trying to exactly reproduce the atoms is misleading, since you don’t actually have isolated atoms in a molecule. Presumably, a heavy atom will appear very atom-like deep within its localized potential, but not up on top.

 

(Edit 12-11-18):

I’ve managed to build a working self-consistent field calculator for generating molecular wave functions. Hopefully, I’ll get a chance to talk more about it when there’s time.
