The Classical version of NMR

As I’ve been delving quite deeply into numerical solutions of quantum mechanics lately, I thought I would take a step back and write about something a little less… well… less. One of the mind-boggling things about quantum mechanics is that a classical interpretation of a system can sometimes help you understand the quantum mechanics of that same system.

Nuclear magnetic resonance (NMR) dovetails quite nicely with my on-going series about how magnetism works. You may be familiar with NMR from a common medical technique that makes heavy use of it: Magnetic Resonance Imaging (MRI). MRI uses NMR to build density maps of human anatomy: it uses magnetism to excite a radio signal from NMR-active atomic nuclei and then builds a map in space from the 3D distribution of NMR signal intensity. NMR itself is due specifically to the quantum mechanics of spin, particularly spin flipping, but it also has a classical interpretation which can aid in understanding what these more difficult physics mean. The system is very quantum mechanical, don’t get me wrong, but the classical version is actually a pretty good analog for once.

I touched very briefly on the entry gate to classical NMR in this post. The classical physics describing the behavior of a compass needle depicts a magnetized needle which rotates in order to follow the lines of an external magnetic field. For a magnetic dipole, compass needle-like behavior will tend to dominate how that dipole interacts with a magnetic field unless the rotational moments of inertia of that dipole are very small. In that case, the compass needle no longer swings back and forth. So, what does it do?

Let’s consider again the model of a compass needle awash in a uniform magnetic field…

[Figure: a compass needle modeled as a magnetic dipole in a uniform magnetic field pointing along the z-axis]

This model springs completely from part 3 of my magnetism series. The only difference I’ve added is that the dipole points in some direction while the field is along the z-axis. The definition of the dipole is cribbed straight from part 4 of my magnetism series and expresses quantum mechanical spin as ‘S.’ We can back off from this a little bit and recognize that spin is simply angular momentum, where I transition to calling it ‘L’ instead so that I can slip away from the quantum. In this particular post, I’m not delving into quantum!

$$\vec{\mu} = \frac{gq}{2m}\,\vec{L}$$

In this formula, ‘q’ is electric charge, ‘m’ is the mass of the object and ‘g’ is the dimensionless g-factor (often loosely called the gyromagnetic ratio), which scales the spin angular momentum relative to a classical rotational moment.

I will crib one more of the usual suspects from my magnetism series.

$$\vec{\tau} = \vec{\mu}\times\vec{B}$$

I originally derived this torque expression to show how compass needles swing back and forth in a magnetic field. In this case, it helps to stop and think about the relationship between torque and angular momentum. It turns out that these two quantities are related in much the same manner as plain old linear momentum and force. You acquire torque by finding out how angular momentum changes with time. Given that magnetic moment can be expressed from angular momentum, as can torque, I rewrite the equation above in terms of angular momentum.

$$\dot{\vec{L}} = \frac{gq}{2m}\,\vec{L}\times\vec{B}$$

This differential equation has the time derivative of angular momentum (signified in physicist shorthand as the ‘dot’ over the quantity of ‘L’) equal to a cross product involving angular momentum and the magnetic field. If you decompress the cross product, you can get to a fairly interesting little coupled differential equation system.

$$\vec{L}\times\vec{B} = \left(L_y B_z - L_z B_y\right)\hat{x} + \left(L_z B_x - L_x B_z\right)\hat{y} + \left(L_x B_y - L_y B_x\right)\hat{z} = L_y B\,\hat{x} - L_x B\,\hat{y}$$

This simplifies the cross product to the two relevant surviving terms after considering that the B-field only lies along one axis. This gives a vector equation…

$$\dot{L}_x\,\hat{x} + \dot{L}_y\,\hat{y} + \dot{L}_z\,\hat{z} = \frac{gq}{2m}\left(L_y B\,\hat{x} - L_x B\,\hat{y}\right)$$

I’ve expressed the vector equation in component form so that you can see how it breaks apart. In this, you get three equations, one for each hatted vector, each connecting to one dimension of the three-dimensional angular momentum. These can all be written separately.

$$\dot{L}_x = \frac{gqB}{2m}L_y \qquad \dot{L}_y = -\frac{gqB}{2m}L_x \qquad \dot{L}_z = 0$$

I’ve grouped the B-field into the coefficient because it’s a constant, and written things so that you can see how these differential equations are coupled. The z-component of the angular momentum is easy since it’s decoupled from ‘x’ and ‘y’ and must solve as a constant. The other two are not so easy. The coefficient is a special quantity which is called the Larmor frequency.

$$\omega = \frac{gqB}{2m}$$

This gives us a fairly tidy package.

$$\dot{L}_x = \omega L_y \qquad \dot{L}_y = -\omega L_x \qquad \dot{L}_z = 0$$

I’ve always loved the solution of this little differential equation. There’s a neat trick here from wrapping the ‘x’ and ‘y’ components up as the two parts of a complex number.

$$\ell = L_x + iL_y$$

You then just take a derivative of the complex number with respect to time and work your way through the definitions of the differential equation.

$$\dot{\ell} = \dot{L}_x + i\dot{L}_y = \omega L_y - i\omega L_x = -i\omega\left(L_x + iL_y\right) = -i\omega\ell$$

After working through this substitution, the differential equation is reduced to maybe the simplest first order differential equation you could possibly solve. The answer is but a guess.

$$\ell(t) = Ae^{-i\omega t}$$

Which can be broken up into the original ‘x’ and ‘y’ components of angular momentum using the Euler relation.

$$L_x(t) = A\cos(\omega t) \qquad L_y(t) = -A\sin(\omega t)$$

There’s an argument here that ‘A’ is determined by the initial conditions of the system and might contain a complex phase, but I’m going to just say that we don’t really care. You can more or less just say that all angular momentum is distributed between the x, y and z components of the angular momentum, part of it a linear combination that lies in the x-y plane and the rest pointed along the z-axis.

$$\vec{L}(t) = |\vec{L}|\left(\sin\theta\cos(\omega t)\,\hat{x} - \sin\theta\sin(\omega t)\,\hat{y} + \cos\theta\,\hat{z}\right)$$

And, as the original casting of the problem is in terms of the magnetic dipole moment, I can switch angular momentum back to the dipole moment. Specifically, I can use the pseudo-quantum argument that the individual dipoles possess spin angular momentum of half-integer magnitude, ħ/2.

$$\vec{\mu}(t) = \frac{g|q|}{2m}\frac{\hbar}{2}\left(\sin\theta\cos(\omega t)\,\hat{x} - \sin\theta\sin(\omega t)\,\hat{y} + \cos\theta\,\hat{z}\right), \qquad \omega = \frac{gqB}{2m}$$

This gives an expression for how the classical atom sized spin dipole will move in a uniform magnetic field. The absolute value on the charge in the coefficient constrains the situation to reflect only the size of the magnetic moment given that the angular momentum was considered to be only a magnitude. Charge appears a second time inside the sine and cosine terms concerning the Larmor frequency: for example, if the charge is negative, the negative sign on the frequency will cause the sine to switch from negative to positive while the cosine is unaffected.
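If you’d rather not trust my algebra, here is a minimal numerical sketch (Python with numpy and scipy; the parameter values are a proton’s, and the angular momentum is normalized to unit magnitude purely for illustration) that integrates the torque equation directly and checks it against the analytic solution above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Proton parameters, SI units
g = 5.5857            # proton g-factor
q = 1.602176634e-19   # charge (C)
m = 1.67262192e-27    # proton mass (kg)
B = 1.0               # static field along z (T)

gamma = g * q / (2 * m)   # coefficient from the dipole definition above
omega = gamma * B         # Larmor frequency (rad/s)

def torque(t, L):
    # the torque equation: dL/dt = gamma * L x B, with B along the z-axis
    return gamma * np.cross(L, [0.0, 0.0, B])

L0 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # tipped 45 degrees off the z-axis
t_end = 5 * 2 * np.pi / omega                 # five precession periods
sol = solve_ivp(torque, (0, t_end), L0, rtol=1e-9, atol=1e-12, dense_output=True)

t = np.linspace(0, t_end, 1000)
Lx, Ly, Lz = sol.sol(t)
print("Lz stays constant:", np.allclose(Lz, L0[2]))           # decoupled, as derived
print("Larmor frequency:", omega / (2 * np.pi) / 1e6, "MHz")  # ~42.6 MHz at 1 T
# Lx follows A*cos(omega*t) and Ly follows -A*sin(omega*t), matching the solution
print("worst |Lx - A cos(wt)|:", np.max(np.abs(Lx - L0[0] * np.cos(omega * t))))
```

For a proton in a 1 tesla field, the printed Larmor frequency lands near the well-known 42.6 MHz, and the z-component never budges.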

[Figure: Larmor precession: dipole moment vectors sweeping around the z-axis, tracing out the surface of a cone]

A classical magnetic dipole trapped in a uniform magnetic field pointed along the z-axis will undergo a special motion called gyroscopic “precession.” In this picture, the ellipses are drawn to show the surfaces of a cylinder in order to follow the positions of the dipole moment vectors with time. Here, the ellipses are _not_ an electrical current loop as depicted in the first image above. The dipole moment vector traces out the surface of a cone as it moves; when viewed from above, the tip of the dipole moment with a +q charge sweeps clockwise while the -q charge sweeps counterclockwise. This motion is very similar to a child’s top or a gyroscope…

[Image: a toy gyroscope precessing, taken from Real World Physics]

This image is taken from Real World Physics; hopefully, you’ve had the opportunity to play with one of these. As mentioned, the direction of the gyroscopic motion is determined by the charge of the dipole moment. As also mentioned, this is a classical model of the motion and it breaks down when you start getting to the quantum mechanics, but it is remarkably accurate in explaining how a tiny, atom-sized dipole “moves” when under the influence of a magnetic field.

Dipolar precession appears in NMR during the process of free induction decay. As illustrated in my earlier blog post on NMR, you can see the precession:

[Figure: an NMR free induction decay, a decaying oscillation of signal intensity with time]

In the sense of classical magnetization, you can see the signal from the dipolar gyroscopes in the plot above. One period of oscillation in this signal is one full sweep of the dipole moment around the z-axis. As the signal here is nested in the “magnetization” of the NMR sample, the energy bleeding out of the magnetic dipoles into the observed radiowave signal saps energy from the system and causes the precession in the magnetization to die down until it lies fully along the z-axis (again, the classical view!). In its relaxed state, the magnetization points along the axis of the external field, much as a compass needle does. The compass needle, of course, can’t precess the way an atomic dipole moment can. And, as I keep pointing out, this is a classical interpretation of NMR… where the quantum mechanics are appreciably similar, though decidedly not the same.

Because such a rotating dipole moment cannot oscillate indefinitely without radiating its energy away as electromagnetic waves, its relaxed state lies along the external field, and some action must be undertaken in order to set the dipole into precession. You must somehow tip it away from pointing along the external magnetic field, at which time it will begin to precess.

In my previous post on the topic, I gave several different explanations for how the dipoles are tipped in order to excite free induction decay. Verily, I said that you blast them with a radio frequency pulse in order to tip them. That is true, but very heavy handed. Classical NMR offers a very elegant interpretation for how dipoles are tipped.

To start, I will dial back to the picture we started with for the precession oscillation. In this setup, the dipole starts in a relaxed position pointing along the z-axis B-field and is subjected to a radio frequency pulse that is polarized so that the B-field of the radio wave lies in the x-y plane. The Poynting vector is somewhere along the x-axis and the radiowave magnetic field is along the y-axis.

[Figure: the two-field setup: the static field B0 along the z-axis plus a radiowave field of amplitude B2 polarized along the y-axis, $\vec{B} = B_0\,\hat{z} + B_2\cos(\omega t)\,\hat{y}$]

In this, the radiowave magnetic field is understood to be much weaker than the powerful static magnetic field.

You can intuitively anticipate what must happen for a naive choice of frequency ‘ω.’ The direction of the magnetic dipole will bobble a tiny bit in an attempt to precess around the superposition of the B0 and B2 magnetic fields. But, because the B0 field is much stronger than the B2 field, the dipole will remain pointing nearly entirely along the z-axis. We could write it out in the torque equation in order to be explicit.

$$\dot{\vec{L}} = \frac{gq}{2m}\,\vec{L}\times\left(B_0\,\hat{z} + B_2\cos(\omega t)\,\hat{y}\right)$$

Without thinking about the tiny time dependence of the B2 field, we know the solution to this equation from above for atomic scale dipoles: the Larmor frequency would just depend on the vector sum of the two fields. This is of course a very naive response, and the expected precession would be very small and hard to detect, since the dipole is never displaced very far from the average direction of the field (remember, B2 is expected to be very small). And, if B2 is oscillatory, there is no point where the time average of the total field lies off the z-axis. The static field tends to dominate and precession would be weak at best.

Now, there is a condition where an arbitrarily weak B2 field can actually have a major impact on the direction of the magnetic dipole moment.

$$B_2\cos(\omega t)\,\hat{y} = \frac{B_2}{2}\left(\cos(\omega t)\,\hat{y} + \sin(\omega t)\,\hat{x}\right) + \frac{B_2}{2}\left(\cos(\omega t)\,\hat{y} - \sin(\omega t)\,\hat{x}\right)$$

This series of algebraic manipulations takes a cosinusoidal radiowave B-field and splits it into two parts. If you squint closely at the math, the time dependent B-fields present in the last line will spring out to you as counter-rotating magnetic fields. I got away with doing this by basically adding zero.
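If the counter-rotation doesn’t spring out at you from the math, here’s a two-line sanity check (a sympy sketch) that the two pieces really do sum back to the original linearly polarized field:

```python
import sympy as sp

B2, w, t = sp.symbols('B_2 omega t', positive=True)

# the two counter-rotating fields, written as (x, y) components
cw  = (B2 / 2) * sp.Matrix([ sp.sin(w * t), sp.cos(w * t)])   # one rotation sense
ccw = (B2 / 2) * sp.Matrix([-sp.sin(w * t), sp.cos(w * t)])   # the opposite sense

print(sp.simplify(cw + ccw))   # Matrix([[0], [B_2*cos(omega*t)]]): the original y-axis field
```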

[Figure: the two counter-rotating field components of the linearly polarized radiowave field]

Why in the world did I do this? This seems like a horrible complexification of an already hard-to-visualize system.

To understand the reason for doing this, I need to make a digression. In physics, one of the most useful and sometimes overlooked tools you can run across is the idea of frame of reference. Frame of reference is simply the circumstance by which you define your units of measurement. You can think about this as being synonymous with where you have decided to park your lawn chair in order to take stock of the events occurring around you. In watching a train trundle past on its tracks, clearly I’ve decided my frame of reference is sitting someplace near the tracks where I can measure that the train is moving with respect to me. I can also examine the same situation from inside the train car looking out the window, watching the scenery scroll past. Both situations will yield the same mathematical observations from two different ways of looking at the same thing.

In this case, the frame of reference that is useful to step into is a rotating frame. If you’re on the playground, when you sit down on a moving merry-go-round, you have shifted to a rotating frame of reference where the world will appear as if it rotates around you. Sitting on this moving merry-go-round, if you watch someone toss a baseball over your head, you would need to add some sort of fictitious force into your calculation to properly capture the path the ball will follow from your point of view. This means reinventing your derivative with respect to time.

[Figure: vector construction of the time derivative as seen from a rotating frame]

$$\left(\frac{d\vec{A}}{dt}\right)_{\mathrm{ext}} = \left(\frac{d\vec{A}}{dt}\right)_{\mathrm{in}} + \vec{\Omega}\times\vec{A}$$

This description of the rotating frame time derivative is simply a matter of tabulating all the different vectors that contribute to the final derivative. (The vectors here are misdrawn slightly because I initially had the rotating vector backward.) The vector as seen in the frame of reference moves through the rotation according to displacements that are due both to the internal (in) rotation and whatever external (ext) displacements contribute to its final state. The portion due to the rotation (rot) is a position vector that is simply shifted by the rotation through an angle I called ‘α’, where the rotation is defined as positive in the right-handed sense (literally backward, left-handed, when seen from within the rotating frame). The angular displacement ‘α’ is equal to the angular speed ‘Ω’ times time, as Ωt, and it can be represented by a vector that is defined to point along the z-axis. The little bit of trig here shows that the rotating frame derivative requires an extra term that is a cross product between the vector being differentiated and the rotational velocity vector.

How does this help me?

$$\dot{\vec{L}} = \vec{L}\times\omega_L\hat{z} + \frac{gq}{2m}\,\vec{L}\times\left[\frac{B_2}{2}\left(\cos(\omega t)\,\hat{y}+\sin(\omega t)\,\hat{x}\right)+\frac{B_2}{2}\left(\cos(\omega t)\,\hat{y}-\sin(\omega t)\,\hat{x}\right)\right], \qquad \omega_L = \frac{gqB_0}{2m}$$

I’ve once again converted torque and magnetic moment into angular momentum in order to reveal the time derivative. It is noteworthy here that the term involving the Larmor frequency directly, the first term on the right, looks very similar to the form of the rotating frame derivative if the Larmor frequency is taken to be the angular velocity of the rotating frame. Moreover, I have already defined two other magnetic field terms that are both rotating in opposition to each other, where I have not yet selected their frequencies of rotation.

$$\left(\frac{d\vec{L}}{dt}\right)_{\mathrm{rot}} = \vec{L}\times\left(\omega_L\hat{z} + \vec{\Omega}\right) + \frac{gq}{2m}\,\vec{L}\times\vec{B}_2(t), \qquad \vec{\Omega} = -\omega_L\hat{z}$$

A rotating frame could be chosen where the term involving the static magnetic field will be canceled by the rotation. This will be a clockwise rotation at the speed of the Larmor frequency. If the frequency of rotation of B2 is chosen to be the Larmor frequency, the clockwise rotating B2 field term will enter into the rotating frame without time dependence while the frequency of the other term will double. As such, one version of the B2 field can be chosen to rotate with the rotating frame.

$$\left(\frac{d\vec{L}}{dt}\right)_{\mathrm{rot}} = \omega_2\,\vec{L}\times\hat{y}\,', \qquad \omega_2 = \frac{gqB_2}{4m}$$

In the final line, the primed unit vectors are taken to move with the rotating frame of reference. So, two things have happened here: the effect of the powerful static field is canceled out purely by the rotation of the rotating frame, and the effect of the counter-rotating field, spinning around at twice the Larmor frequency in the opposite direction, averages out to no direction at all. The only remaining significant term is the field that is stationary with respect to the rotating frame, which I’ve taken to be along the y’-axis.

The differential equation that I’ve ended up with here is exactly like the differential equation solved for the powerful static field by itself far above, but with a precession that will now occur at a frequency of ω2 around the y’-axis.

$$L_{x'}(t) = -A\sin(\omega_2 t) \qquad L_{y'}(t) = \text{constant} \qquad L_{z}(t) = A\cos(\omega_2 t)$$

If I take the starting state of the magnetic dipole moment to be relaxed along the z-axis, no component will ever exist along the y’-axis… the magnetic dipole moment will purely inhabit the z-x’ plane in the rotating frame.

[Figure: in the rotating frame, the dipole moment revolves around the y’-axis, sweeping through the z-x’ plane]

As long as the oscillating radiowave magnetic field is present, the magnetic dipole moment will continue to revolve around the y’-axis in the rotating frame. In the stationary frame, the dipole moment will tend to follow a complicated helical path both going around the static B-field and around the rotating y’-axis.

If you irradiate the sample with radiowaves for 1/4 of the period associated with the ω2 frequency, the magnetic dipole moment will rotate around until it lies in the x-y plane. You then shut off the radiowave source and watch as the NMR sample undergoes a free induction decay until the magnetization lies back along the static B-field.
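You can watch this whole sequence happen numerically. The following sketch (Python; the field strengths are toy values chosen only to keep the integration short, with B2 a hundred times weaker than B0) integrates the lab-frame torque equation with the radiowave driven exactly at the Larmor frequency for a quarter of the ω2 period:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0      # g*q/2m, set to 1 in these toy units
B0 = 100.0       # strong static field along z
B2 = 1.0         # weak radiowave field amplitude along y
wL = gamma * B0        # Larmor frequency of the static field
w2 = gamma * B2 / 2    # rotating-frame precession frequency around y'

def dLdt(t, L):
    # lab-frame torque equation with the radiowave driven exactly at resonance
    B = np.array([0.0, B2 * np.cos(wL * t), B0])
    return gamma * np.cross(L, B)

t_pulse = 0.25 * 2 * np.pi / w2    # a quarter of the omega_2 period: a 90 degree pulse
L0 = [0.0, 0.0, 1.0]               # start relaxed along the static field
sol = solve_ivp(dLdt, (0, t_pulse), L0, rtol=1e-10, atol=1e-12, max_step=0.005)

Lx, Ly, Lz = sol.y[:, -1]
print(f"after the pulse: Lz = {Lz:.3f}, |Lxy| = {np.hypot(Lx, Ly):.3f}")
# Lz lands near 0 and |Lxy| near 1: the dipole has been tipped into the x-y plane
```

After the pulse, Lz lands near zero with the dipole tipped into the x-y plane, even though B2 never amounts to more than 1% of the static field.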

This is a classical view of what’s going on: a polarized radiowave applied at the Larmor frequency will cause atomic magnetic dipoles to torque around in the sample until they are able to undergo oscillation. Once the radiowave is shut off, the magnetization performs a free induction decay. Applying the radiowave at the Larmor frequency is said to be driving the system at resonance: even though the static B-field always overwhelms the comparably weak radiowave magnetic field at any instant, a drive synchronized with the natural precession frequency lets that weak field accumulate a large effect over many cycles.

I’ve completely avoided the quantum mechanics of this system. The rotating frame of Larmor precession is fairly accurate for describing what’s happening here until you need to consider other quantum mechanical effects present in the signal, such as spin-spin coupling of neighboring NMR active nuclei. Quantum mechanics are ultimately what’s going on, but you want to avoid the headaches associated with that wherever possible.

I do have it in mind to rewrite a long defunct post that described the quantum mechanics of the two-state system specifically in how it describes NMR. It will happen someday, honestly!


The Quantum Mechanics in the Gap

A long-running preoccupation of mine is trying to fill the space between my backgrounds to understand how one thing leads to another.

When a biochemist learns quantum mechanics (QM), it happens from a background where little mathematical sophistication is required; maybe particle-in-a-box appears in the middle of a low grade Physical Chemistry class and many results of QM are qualitatively encountered in General Chemistry or perhaps in greater detail in Organic Chemistry. A biochemist does not need to be perfect at these things since the meat of biochemistry is a highly specialized corner of organic chemistry dealing with a relatively small number of molecule types where the complexity of the molecule tends to force the details into profound abstraction. Proteins and DNA, membranes and so on are all expressed mostly as symbols, sequences or structural motifs. Reactions occur symbolically where chemists have worked out the details of how a reaction proceeds (or not) without really saying anything very profound about it. This created a situation of deep frustration for me once upon a time because it always seemed like I was relying on someone else to tell me the specifics of how something actually worked. I always felt helpless. Enzymatic reaction mechanisms always drove me crazy because they seem very ad hoc; no reason they shouldn’t since evolution is always ad hoc, but the symbology used always made it opaque to me as to what was happening.

When I was purely a biochemist, an undergraduate once asked me whether they could learn QM in chemistry and I honestly answered “Yes” that everything was based on QM, but withheld the small disquiet I felt that I really didn’t believe that I understood how it fit in. Background that I had in QM being as it was at that point, I didn’t truly know a quantum dot from a deviled egg. Yes, quantum defines everything, but what does a biochemist know of quantum? Where does bond geometry come from? Everything seems like arbitrary tinker toys using O-Chem models. Why is it that these things stick together as huge, solid ball-and-stick masses when everything is supposed to be puffy wave clouds? Where is this uncertainty principle thing people vaguely talk about in hushed tones when referring to the awe inspiring weirdness that is QM? You certainly would never know such details looking at model structures of DNA. This frustration eventually drove me to multiple degrees in physics.

In physics, QM takes on a whole other dimension. The QM that a physicist learns is concerned with gaining the mathematical skill to deal with the core of QM while retaining the flexibility to specialize in a needed direction. Quantum Theory is a gigantic topic which no physicist knows in its entirety. There are two general cousins of theory which move in different directions, the Hamiltonian formalisms diverging from the Lagrangian. They connect, but have power in different situations. Where you get very specific on a topic is sometimes not well presented; you have to go a long way off the beaten path to hit either the Higgs Boson or General Relativity. Physicists in academia are interested in the weird things lying at the limits of physics and focus their efforts on pushing to and around those weirdnesses; you only focus efforts on specializations of quantum mechanics as they are needed to get to the untouched things physicists actually care to examine. This means that physicists sometimes focus little effort on tackling topics that are interesting to other fields, like chemistry… and the details of the foundations of chemistry, like the specifics of the electronic structure of the periodic table, are under the husbandry of chemists.

If you read my post on the hydrogen atom radial equation, you saw the most visible model atom. The expanded geometries of this model inform the structure of the periodic table. Most of the superficial parts of chemistry can be qualitatively understood from examining this model. S, P, D, F and so on orbitals are assembled from hydrogenic wave equations… at least they can be on the surface.

Unfortunately, the hydrogenic orbitals can only be taken as an approximation to all the other atoms. There are basically no analytic solutions to the wave functions of any atom beyond hydrogen.

Fine structure, hyperfine structure and other atomic details emerge from perturbations of the hydrogenic orbitals. Perturbation is a powerful technique, except that it’s not an exact solution. Perturbations approach solutions by assuming that some effect is a small departure from a much bigger situation that is already solved. You then do an expansion in which successive terms tend to approach the perturbative part more and more closely. Hydrogenic orbitals can be used as a basis for this. Kind of. If the “perturbation” becomes too big relative to the basis situation, the expansion necessary to approximate it becomes too big to express. Technically, you can express any solution for any situation from a “complete” basis, but if the perturbation is too large compared to the context of the basis, the fraction of the basis required for an accurate expression becomes bigger than the “available” basis before you know it.

When I refer to “basis” here, I’m talking about Hilbert spaces. This is the use of orthogonal function sets as a method to compose wave equations. This works like Fourier series, which is one of the most common Hilbert space basis sets. Many Hilbert spaces contain infinitely many basis functions, which is bigger than the biggest number of functions any computer can use. The reality is that you can only ever actually use a small portion of a basis.

The hydrogen situation is merely a prototype. If you want to think about helium or lithium or so on, the hydrogenic basis becomes merely one choice of how to approach the problem. The Hamiltonians of other atoms produce structures that can in some cases be bigger than is easily approachable by the hydrogenic basis. Honestly, I’d never really thought very hard about the other basis sets that might be needed, but technically they are a very large subject since they are needed for the hundred-odd other atoms on the periodic table beyond hydrogen. These other atoms have wave functions that are kind of like those of hydrogen, but are different. The S-orbital of hydrogen is a good example of the S-orbitals found in many atoms, even though the functional form for other atoms is definitely different.

This all became interesting to me recently on the question of how to get to molecular bonds as more than the qualitative expression of hydrogenic orbital combinations. How do you actually calculate bond strengths and molecular wave functions? These are important to understanding the mechanics of chemistry… and to poking a finger from quantum mechanics over into biochemistry. My QM classes brushed on it, admittedly, deep in the quagmire of other miscellaneous quantum necessary to deal with a hundred different things. I decided to take a sojourn into the bowels of Quantum Chemistry and develop a competence with the Hartree-Fock method and molecular orbitals.

The quantum mechanics of quantum chemistry is, surprisingly enough, mechanically more simple than one might immediately expect. This is splitting hairs considering that all quantum is difficult, but it is actually somewhat easier than the difficulty of jumping from no quantum to some quantum. Once you know the basics, you pretty much have everything needed to get started. Still, as with all QM, this is not to scoff at; there are challenges in it.

This form of QM is a Hamiltonian formalism whose first mathematics originated in the 1930s. The basics revolve around the time independent Schrödinger equation. Where it jumps to being modern QM is in the utter complexity of the construct… simple individual parts, just crazily many of them. This type of QM is referred to as “Many Body theory” because it involves wave equations containing dozens to easily hundreds of interactions between individual electrons and atomic nuclei. If you thought the Hamiltonian I wrote in my hydrogen atom post was complicated, consider that it was only for one electron being attracted to a fixed center… and not even including the components necessary to describe the mechanics of the nucleus too. The many body theory used to build up atoms with many electrons works for molecules as well, so learning generalities about the one case is learning about the other case too.

As an example of how complicated these Schrödinger equations become, here is the time independent Schrödinger equation for lithium.

$$\left[-\frac{1}{2M}\nabla_N^2 - \frac{1}{2}\left(\nabla_1^2+\nabla_2^2+\nabla_3^2\right) - \frac{3}{r_{1N}} - \frac{3}{r_{2N}} - \frac{3}{r_{3N}} + \frac{1}{r_{12}} + \frac{1}{r_{13}} + \frac{1}{r_{23}}\right]\Psi = E\,\Psi$$

This equation is simplified to atomic units to make it tractable. The part describing the kinetic energy of the nucleus is left in. All four of those double Del operators open up into 3D differentials like the single one present in the hydrogen atom. The next six terms describe electrostatic interactions between the three electrons among themselves and with the nucleus. This is only one nucleus and three electrons.

As I already mentioned, there are no closed-form analytical solutions for structures more complicated than hydrogen, so many body theory is about figuring out how to make useful approximations. And, because of the complexity, it must make some very elegant approximations.

One of the first useful features of QM for addressing situations like this I personally overlooked when I initially learned it. With QM, most situations that you might encounter have no exact solutions. Outside of a scant handful of cases, you can’t truly “solve” anything. But, for all the histrionics that go along with that, the true solutions, what are called the eigenstates, include a ground state with the lowest possible energy for the given circumstance. If you make a totally random guess about the form of the wave function which solves a given Hamiltonian, you are assured that the actual ground state has a lower energy. Since that’s the case, you can play a game: if I make some random guess about the form of the solution, another guess that has a lower energy is a better guess regarding the actual form. You can minimize this, always making adjustments to the guess such that it achieves a lower energy, until eventually it won’t go any lower. The actual solution still ends up being lower, but maybe not very far. Designing such energy minimizing guesses inevitably converges toward the actual solution and is usually accomplished by systematic mathematical minimization. This method is called “Variation” and is one of the major methods for constructing approximations of an eigenstate. Also, as you might expect, this is a numerical strategy and it makes heavy use of computers in the modern day since the guesses are generally very big, complicated mathematical functions. Variational strategies are responsible for most of our knowledge of the electronic structure of the periodic table.

Using computers to make guesses has been elevated to a high art. Literally, a trial function with a large number of unknown constants is tried against the Hamiltonian; you then take a derivative of the energy to see how it varies as a function of any one constant, and adjust that constant until the energy is at a minimum, where the first derivative is near zero and the second derivative indicates a minimum. Do this over and over again with all the available constants in the function and eventually the trial wave function converges toward the actual solution.
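To make this concrete, here is maybe the simplest possible version of the game (a Python sketch in atomic units): guess a one-parameter Gaussian e^(−αr²) for the hydrogen ground state and minimize the energy over α. The closed-form energy expectation for this trial function, E(α) = 3α/2 − 2√(2α/π), is a standard textbook result.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy(alpha):
    # <H> in atomic units for the normalized trial function exp(-alpha r^2):
    # kinetic term 3*alpha/2 minus nuclear attraction 2*sqrt(2*alpha/pi)
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(energy, bounds=(1e-4, 10.0), method='bounded')
print(f"best alpha = {res.x:.4f}")        # analytic optimum: 8/(9*pi) ~ 0.2829
print(f"E_min = {res.fun:.4f} hartree")   # ~ -0.4244, above the exact -0.5
```

The minimized energy sits above the exact −0.5 hartree, just as the variational principle demands; a single Gaussian simply doesn’t have the flexibility to do better.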

Take that in for a moment. We understand the periodic table mainly by guessing at it! A part of what makes these wave functions so complicated is that the state of any one electron in any system more complicated than hydrogen is dependent on every other electron and charged body present, as shown in the lithium equation above. The basic orbital shapes are not that different from hydrogen, even requiring spherical harmonics to describe the angular shape, but the specific radial scaling and distribution is not solvable. These electrons influence each other in several ways. First, they place plain old electrostatic pressure on one another: all electrons push against each other by their charges and shift each other’s orbitals in subtle ways. Second, they exert what’s called “exchange pressure” on one another. In this, every electron in the system is indistinguishable from every other, and electrons specifically deal with this by requiring that the wave function be antisymmetric such that no electron can occupy the same state as any other. You may have heard this called the Pauli Exclusion Principle and it is just a counting effect. In a way, this may be why quantum classes tend to place less weight on the hydrogen atom radial equation: even though it holds for hydrogen, it works for nothing else.

Multi-atom molecules stretch the situation even further. Multiple atoms, unsolvable in and of themselves, are placed together in some regularly positioned array in space, with unsolvable atoms now compounded into unsolvable molecules. Electrons from these atoms are then all lumped together collectively in some exchange antisymmetric wave function where the orbitals are dependent on all the bodies present in the system. These orbitals are referred to in quantum chemistry as molecular orbitals and describe how an electron cloud is dispersed among the many atoms present. Covalent electron bonds and ionic bonds are forms of molecular orbital, where electrons are dispersed between two atoms and act to hold these atoms in some fixed relation with respect to one another. The most basic workhorse method for dealing with this highly complicated arrangement is a technique referred to as the Hartree-Fock method. Modern quantum chemistry is all about extensions beyond Hartree-Fock, which often use this method as a spine for producing an initial approximation and then switch to other variational (or perturbative) techniques to improve the accuracy of the initial guess.

Within Hartree-Fock, molecular orbitals are built up out of atomic orbitals. The approximation postulates, in part, that each electron sits in some atomic orbital which has been contributed to the system by a given atom where the presence of many atoms tends to mix up the orbitals among each other. To obey exchange, each electron literally samples every possible contributed orbital in a big antisymmetric superposition.

Hartree-Fock is sometimes referred to as Self Consistent Field theory. It uses linear superpositions of atomic orbitals to describe the molecular orbitals that actually contain the valence electrons. In this, the electrons don’t really occupy any atomic orbital, but some combination of many orbitals all at once. For example, a version of the stereotypical sigma covalent bond is actually a symmetric superposition of two atomic S-orbitals. The sigma bond contains two electrons and is made antisymmetric by the solitary occupancy of electron spin states, so that the spatial part of the S-orbitals from the contributing atoms can enter in as a symmetric combination (this gets weird when you consider that you can’t tell which electron is spin up and which is spin down, so they’re both in a superposition).

[Figure: probability density of a sigma bonding molecular orbital between two atoms]

The sigma bond shown here in Mathematica was actually produced from two m=0 hydrogenic p-orbitals. The density plot reflects probability density. The atom locations were marked afterward in PowerPoint. The length of the bond here is arbitrary, and not energy minimized to any actual molecule. This was not produced by Hartree-Fock (though it would occur in Hartree-Fock) and is added only to show what molecular bonds look like.

For completeness, here is a pi bond.

[Figure: probability density of a pi bonding molecular orbital]

At the start of Hartree-Fock, the molecular orbitals are not known, so the initial wave function guess is that every electron is present in a particular atomic orbital within the mixture. Electron density is then determined throughout the molecule and used to furnish repulsion and exchange terms among the electrons. This is then solved for energy eigenvalues and spits out a series of linear combinations describing the orbitals where the electrons are actually located, which turns out to be different from the initial guess. These new linear combinations are then thrown back into the calculation to determine electron density and exchange, which is once more used to find energy eigenvalues and orbitals, which are once again different from the previous guess. As the crank is turned repeatedly, the output orbitals converge onto the orbitals used to calculate the electron density and exchange. When these no longer particularly change between cycles, the states describing electron density will be equal to those associated with the eigenvalues; the input becomes self-consistent with the output, hence the name of the technique: production of a self-consistent field.
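Stripped of all the integral machinery, the loop structure looks something like this sketch (Python; the integral values are invented placeholders for a toy closed-shell, two-electron, two-basis-function system; a real program computes them from the basis set):

```python
import numpy as np

# one-electron (core) Hamiltonian and overlap; values invented for illustration
H_core = np.array([[-1.5, -0.4],
                   [-0.4, -1.0]])
# two-electron repulsion integrals (pq|rs), chemists' notation, invented values
eri = np.zeros((2, 2, 2, 2))
eri[0,0,0,0] = 0.70; eri[1,1,1,1] = 0.65
eri[0,0,1,1] = eri[1,1,0,0] = 0.50
eri[0,1,0,1] = eri[1,0,1,0] = eri[0,1,1,0] = eri[1,0,0,1] = 0.15

D = np.zeros((2, 2))                       # initial guess: zero electron density
for cycle in range(50):
    # Fock matrix: core + Coulomb - exchange, built from the current density
    J = np.einsum('pqrs,rs->pq', eri, D)
    K = np.einsum('prqs,rs->pq', eri, D)
    F = H_core + J - 0.5 * K
    # solve for the orbitals, fill the lowest one with the electron pair
    eps, C = np.linalg.eigh(F)
    C_occ = C[:, :1]
    D_new = 2.0 * C_occ @ C_occ.T
    if np.max(np.abs(D_new - D)) < 1e-8:   # input density matches output: done
        break
    D = D_new

E = 0.5 * np.sum(D * (H_core + F))         # closed-shell electronic energy
print(f"converged in {cycle} cycles, electronic energy = {E:.6f}")
```

The crank-turning is exactly the loop body: the density that goes into the Fock matrix eventually equals the density that comes back out.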

Once the self consistent electron field is reached, the atomic nuclei can be repositioned within it in order to minimize the electrostatic stresses on the nuclei. Typically, the initial locations of the nuclei must be guessed since they are themselves not usually exactly known. A basic approximation of the Hartree-Fock method is the Born-Oppenheimer approximation, where massive atomic nuclei are expected to move on a much slower time scale than the electrons, meaning that the atomic nuclei create a stationary electrostatic field which arranges the electrons, but then are later moved by the average dispersion of the electrons around them. Minimizing the atomic positions necessitates re-calculation of the electron field, which in turn may require that atomic positions again be readjusted, until eventually the electron field does not alter the atomic positions and the atomic positions are consistent with the configuration of the surrounding electrons. With the output energy of the Hartree-Fock method minimized by rearranging the nuclei, this gives the relaxed configuration of a molecule. And, from this, you automatically know the bonding angles and bond lengths.

The Born-Oppenheimer approximation is a natural simplification of the real wave function which splits the wave functions of the nuclei away from the wave functions of the electrons; it can be considered valid predominantly because of the huge difference in mass (a factor of ~100,000) between electrons and nuclei, where the nuclei are essentially not very wave-like relative to the electrons. In lithium, above, it would simply mean removing the first term of the Schrödinger equation involving the nuclear kinetic energy and understanding that the total energy of the molecule is not E. Most of the shape of a molecule can treat atomic nuclei as point-like while electrons and their orbitals constitute pretty much all of the important molecular structure.

As you can see by the description, there are a huge number of calculations required. I’ve described them very topically. Figuring out the best way to run Hartree-Fock has been an ongoing process since the 1930s and has been raised to a high art nearly 90 years later. At the superficial level, the Hartree-Fock approximation is hampered by not placing the nuclei directly in the wave function and by not allowing full correlation among the electrons. This weakness is remedied by usage of variational and perturbative post-Hartree-Fock techniques that have come to flourish with the steady increase of computational power during the advancement of Moore’s Law in transistors. That said, the precision calculation of overlap integrals is so computationally demanding on the scale of molecules that the hydrogen atom eigenstate solutions are impractical as a basis set.

This actually really caught me by surprise. Hartree-Fock has a very weird and interesting basis set type which is used in place of the hydrogen atom orbitals. And, the reason for the choice is predominantly to reduce a completely intractable computational problem to an approachable one. When I say “completely intractable,” I mean that even the best supercomputers available today still cannot calculate the full, completely real wave functions of even small molecules. With how powerful computers have become, this should be a stunning revelation. This is actually one of the big motivating factors toward using quantum computers to make molecular calculations; the quantum mechanics arise naturally within the quantum computer, so the approximations strain credulity less. The approximation used for the favored Hartree-Fock basis sets is very important to conserving computational power.

The orbitals built up around the original hydrogen atom solution to approximate higher atoms have a radial structure that has come to be known as Slater orbitals. Slater orbitals are variational functions that resemble the basic hydrogen atom orbital which, as you may be aware, is an exponential-Laguerre polynomial combination. Slater orbitals are basically linear combinations of exponentials which are then minimized by variation to fit the Hamiltonians of higher atoms. As I understand it, Slater orbitals can be calculated through at least the first two rows of the periodic table. These orbitals, which are themselves approximations, are actually not the preferred basis set for molecular calculations, but ended up being one jumping off point to produce early versions of the preferred basis set.

The basis set that is used for molecular calculations is the so-called “Gaussian” orbital basis set. The Gaussian radial orbitals were first produced by use of simple least-squares fits to Slater orbitals. In this, the Slater orbital is taken as a prototype and several Gaussian functions in a linear combination are fitted to it until chi-squared becomes as small as possible… while the Slater orbital can only be exactly reproduced by use of an infinite number of Gaussians, it can be fairly closely reproduced by typically just a handful. Later Gaussian basis sets were also produced by skipping the Slater orbital prototype and jumping to Hartree-Fock application directly on atomic Hamiltonians (as I understand it). The Gaussian fit to the Slater orbital is pretty good across most of the volume of the function except at the center, where the Slater orbital has a cusp (from the exponential) while the Gaussian is smooth… with an infinite number of Gaussians in the fit, the cusp can be reproduced, but it is a relatively small part of the function.

[Figure: a Gaussian orbital compared with the equivalent Slater orbital; the curves agree except near the cusp at the origin]

Here is a comparison of a Gaussian orbital with the equivalent Slater orbital from my old hydrogen atom post. The scaling of the Slater orbital is specific to the hydrogen atom while the Gaussian scaling is not specific to any atom.
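You can reproduce the spirit of the fit in a few lines. This sketch (Python, using scipy’s curve_fit with rough, hypothetical starting guesses) least-squares fits three Gaussians to the 1s Slater function e^(−r), which is essentially how the STO-nG prototypes were built:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(r, c1, a1, c2, a2, c3, a3):
    # a linear combination of three s-type Gaussian radial functions
    return (c1 * np.exp(-a1 * r**2) +
            c2 * np.exp(-a2 * r**2) +
            c3 * np.exp(-a3 * r**2))

r = np.linspace(0.0, 6.0, 400)
slater = np.exp(-r)          # unnormalized 1s Slater orbital, zeta = 1

# rough starting guesses: one broad, one medium, one narrow Gaussian
p0 = [0.4, 0.2, 0.4, 1.0, 0.2, 4.0]
popt, _ = curve_fit(three_gaussians, r, slater, p0=p0, maxfev=10000)

fit = three_gaussians(r, *popt)
print("worst error away from the cusp:", np.abs(fit - slater)[r > 0.5].max())
print("error right at the cusp (r = 0):", abs(fit[0] - slater[0]))
```

The error away from the origin comes out tiny, while the error right at the r = 0 cusp stays comparatively large: exactly the behavior described above.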

The reason that the Gaussian orbitals are the preferred model is strictly a computational efficiency issue. Within the application of Hartree-Fock, there are several integral calculations that must be done repeatedly. Performing these integrations is computationally very, very costly on functions like the original hydrogen atom orbitals. With Gaussian radial orbitals, products of Gaussians are themselves Gaussians, and the integrals all end up having the same closed forms, meaning that one can simply transfer constants from one formula to another without doing any numerical busy work at all. Further, the Gaussian orbitals can be expressed in straightforward cartesian forms, allowing them to be translated around space with little difficulty and generally making them easy to work with (I dare you: try displacing a hydrogen orbital away from the origin while it remains in spherical-polar form. You’ll discover you need the entire Hilbert space to do it!). As such, with Gaussians, very big calculations can be performed extremely quickly on a limited computational budget. The advantage here is a huge one.
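To see what the closed forms buy you, here is a quick check (a Python sketch) of the textbook overlap formula for two 1D Gaussians on different centers, which follows from the Gaussian product rule, against brute-force quadrature:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.8, 1.3      # Gaussian exponents
A, B = -0.5, 1.0     # Gaussian centers

# brute-force numerical overlap of exp(-a(x-A)^2) and exp(-b(x-B)^2)
numeric, _ = quad(lambda x: np.exp(-a*(x - A)**2 - b*(x - B)**2), -np.inf, np.inf)

# closed form: the product of the two Gaussians is itself a Gaussian
closed = np.sqrt(np.pi / (a + b)) * np.exp(-a * b * (A - B)**2 / (a + b))

print(numeric, closed)   # the two agree to machine precision
```

A molecular code just plugs exponents and centers into formulas like this instead of ever touching a numerical integrator.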

One way to think about it is like this: Gaussian orbitals can be used in molecular calculations roughly the same way that triangles are used to build polyhedral meshes in computer graphics renderings.

Gaussians are not the only basis set used with Hartree-Fock. I’ve learned only a little about this alternative implementation so far, but condensed matter folks also use the conventional Fourier series basis set of sines and cosines while working on a crystal lattice. Sines and cosines are very handy in situations with periodic boundaries, which you would find in the regimented array of a crystal lattice.

Admittedly, as far as I’ve read, Hartree-Fock is an imperfect solution to the whole problem. I’ve mentioned some of the aspects of the approximation above, and it must always be remembered that it fails to capture certain aspects of the real phenomenon. That said, Hartree-Fock provides predictions that are remarkably close to actual measured values, and the approximation lends itself well to post-processing that further improves the outcomes to an impressive degree (if you have the computational budget).

I found this little project a fruitful one. This is one of those rare times when I actually blew through a textbook as if I was reading a novel. Some of the old citations regarding self-consistent field theory are truly pivotal, important papers: I found one from about the middle 1970s which had 10,000 citations on Web of Science! In the textbook I read, the chemists goofed up an important derivation necessary to produce a workable Hartree-Fock program and I was able to hunt down the 1950 paper detailing said calculation. Molecular Orbital theory is a very interesting subject and I think I’ve made some progress toward understanding where molecular bonds come from and what tools are needed to describe how QM produces molecules.

(Edit 11-6-18):

One cannot walk away from this problem without learning exactly how monumental the calculation is.

In Hartree-Fock theory, the wave equations are expressed in the form of determinants in order to encapsulate the antisymmetry of the electron wave equation. These determinants are an antisymmetrized sum of permutations over the orbital basis set. Each permutation ends up being its own term in the total wave equation. The number of such terms goes as the factorial of the number of electrons contained in the wave. Moreover, probability density is the square of the wave equation.

Factorials become big very quickly.

Consider a single carbon atom. This atom contains 6 electrons. From this, the total wave equation for carbon has 6! terms. 6! = 720. The probability density then is 720^2 terms… which is 518,400 terms!

That should make your eyes bug out. You cannot ever write that in its full glory.

Now, for a simple molecule, let’s consider benzene. That’s six carbons and six hydrogens. So, 6×6+6 = 42 electrons. The determinant would contain 42! terms. That is 1.4 ×10^51 terms!!!! The probability density is about 2×10^102 terms…

Avogadro’s number is only 6.02×10^23.
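The counting is easy to check, since Python’s integers never overflow:

```python
import math

print(math.factorial(6))               # 720 terms in carbon's wave function
print(math.factorial(6) ** 2)          # 518400 terms in its probability density
print(f"{math.factorial(42):.3e}")     # ~1.405e51 terms for benzene
print(f"{math.factorial(42)**2:.3e}")  # ~1.974e102 for its probability density
```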

If you are trying to graph the probability density with position, the cross terms are important to determining the value of the density at any location, meaning that you have 10^102 terms. This assures that you can never graph it in order to visualize it! If you integrate that across all of space for the spaces of each electron (an integral with 42 3D measures), every term with an electron in two different states dies, killing cross terms. And, because no integral can survive if it has even one zero among its 42 3D measures, only the diagonal terms survive in 42 cases, allowing the normalized probability to simply evaluate to the number of electrons in the wave function. Integrating the wave function totally cleans up the mess, meaning that you can basically still do integrals to find expectation values thinking only about sums across the 42 electrons. This orthogonality issue is why you can do quantum chemistry at all: for an operator working in a single electron space, every overlap that doesn’t involve that electron must only be 1 for a given term to survive, which is a vast minority of cases.

For purposes of visualization, these equations are unmanageably huge. Not merely unmanageably, but unimaginably so. So huge, in fact, that they cannot be expressed in full except in the most simplistic cases. Benzene is only six carbons and it’s effectively impossible to tackle in the sense of the total wave equation. The best you can do is look for expressions for the molecular orbitals… which may only contain N-terms (as many as 42 for benzene.) Molecular orbitals can be considered the eigenstates of the molecule, where each one can be approximated to contain only one electron (or one pair of electrons in the case of closed shell calculations). The fully quantum weirdness here is that every electron samples every eigenstate, which is basically impossible to deal with.

For anyone who is looking, some of the greatest revelations which constructed organic chemistry as you might know it occurred as early as 1930. Linus Pauling wrote a wonderful paper in 1931 where he outlines one way of anticipating the tetrahedral bond geometry of carbon… performed without use of these crazy determinant wavefunctions, with simple consideration of the vanilla hydrogenic eigenstates. Sadly, these are qualitative results without resorting to more modern methods.

(Edit 11-21-18):

Because I can never just leave a problem alone, I’ve been steadily cobbling together a program for running Hartree-Fock. If you know me, you’ll know I’m a better bench chemist than I am a programmer, despite how much time I’ve spent on the math. I got interested because I just understand things better if I do them myself. You can’t calculate these things by hand, only by computer, so off I went into a programming language that I am admittedly pretty incompetent at.

In my steady process of writing this program, I’ve just hit a point where I can calculate some basic integrals. Using the STO-3G basis set produced from John Pople’s lab in 1969, I used my routines to evaluate the ground state energy of the hydrogen atom. There is a lot of script in this program in order to work the basic integrals and it becomes really really hard to diagnose whether the program is working or not because of the density of calculations. So, it spits out a number… is it the right number? This is very hard to tell.

I used the 1s orbital from STO-3G to compute the kinetic and nuclear interaction energies and then summed them together. With bated breath, one click of the key to convert to eV…

Bam: −13.47 eV!

You have no idea how good that felt. The accepted value of the hydrogen atom ground state is -13.6 eV. I’m only off by about 1%! That isn’t bad using an archaic basis set which was intended for molecular calculations. Since my little laptop is a supercomputer next to the machines that originally created STO-3G, I’d say I’m doing pretty well.

Not sure how many lines of code that is, but for me, it was a lot. Particularly since my program is designed to accommodate higher angular momenta than the S orbital and more complicated basis sets than STO-3G. Cranking out the right number here feels really good. I can’t help but goggle at how cheap computational power has become since the work that got Pople his Nobel prize.
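For anyone who wants to replay this check, here is a minimal sketch of the same calculation (Python, assuming the widely tabulated unscaled STO-3G 1s contraction; a different exponent scaling will shift the number, so don’t expect it to reproduce my result digit for digit). The closed forms for same-center s-type Gaussian integrals are standard:

```python
import numpy as np

# STO-3G 1s contraction for hydrogen (unscaled, zeta = 1): three primitives
# exp(-a r^2) with contraction coefficients d over normalized primitives
a = np.array([3.42525091, 0.62391373, 0.16885540])
d = np.array([0.15432897, 0.53532814, 0.44463454])
N = (2.0 * a / np.pi) ** 0.75       # normalization of each s-type primitive

E, S = 0.0, 0.0
for i in range(3):
    for j in range(3):
        p = a[i] + a[j]
        Sij = (np.pi / p) ** 1.5                 # overlap, same-center primitives
        Tij = 3.0 * a[i] * a[j] / p * Sij        # kinetic energy integral
        Vij = -2.0 * np.pi / p                   # nuclear attraction (Z = 1)
        w = d[i] * d[j] * N[i] * N[j]
        E += w * (Tij + Vij)
        S += w * Sij

E /= S                                           # normalize the contracted orbital
print(f"H 1s energy: {E:.5f} hartree = {E * 27.2114:.2f} eV")
```

With these unscaled exponents, the sketch lands close to, but above, the exact −13.6 eV, which is all STO-3G can promise for a lone hydrogen atom.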

(edit 12-4-18):

Still spending time working on this puzzle. There are some other interesting adjustments to my thinking as I’ve been tackling this challenge which I thought I might write about.

First, I really didn’t specify the symmetry that I was referring to above which gives rise to the huge numbers of terms in the determinant style wave functions. In this sort of wave function, which contains many electrons all at once, the fermionic structure must be antisymmetric on exchange. This relies on an operator called the ‘exchange operator’ whose sole purpose is to trade electrons within the wave equation… the fermionic wave function has an eigenvalue of -1 when operated on by the exchange operator. This means that if you trade two electrons within the wave function, the wave function remains unchanged except to produce a -1. And, this is for any exchange you might perform between any two electrons in that wave function. The way to produce a wave function that preserves this symmetry is by permuting the positional variables of the electrons among the orbitals that they might occupy, as executed in a determinant where the orbitals form one axis and the electron coordinates form the other. The action of this permutation turns out huge numbers of terms, all of which are the same set of orbitals, but with the coordinates of the electrons permuted among them.
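The whole symmetry can be demonstrated with a two-electron determinant in a couple of lines (a sympy sketch; chi_a and chi_b are arbitrary placeholder orbitals):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
chi_a, chi_b = sp.Function('chi_a'), sp.Function('chi_b')

# 2x2 Slater determinant: orbitals along one axis, electron coordinates the other
Psi = sp.Matrix([[chi_a(x1), chi_b(x1)],
                 [chi_a(x2), chi_b(x2)]]).det()

# trade the two electrons and check the exchange eigenvalue
Psi_swapped = Psi.subs({x1: x2, x2: x1}, simultaneous=True)
print(sp.expand(Psi + Psi_swapped))   # prints 0: swapping electrons flips the sign
```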

A second item I wanted to mention is the interesting disconnect between molecular wave functions and atomic functions. In the early literature on the implementation of Hartree-Fock, the basis sets for the molecular calculation are constructed from fits to atomic wave functions. They often referred to this as Linear Combination of Atomic Orbitals. As I was playing around with one of these early basis sets, I was using these basis functions against the hydrogen atom Hamiltonian in order to try to error check the calculus in my program by attempting to reproduce the hydrogenic state energies. Very frequently, these were giving erroneous energies even though the gaussians have symmetry very like the hydrogenic orbitals they were attempting to represent. Interestingly, as you may have read above, the lowest energy state, equivalent to the hydrogenic 1s, fit very closely to the ground state energy of hydrogen… where a basis with a larger number of gaussians for the same orbital fit even more closely to that energy.

I spent a little time stymied on the fact that the higher energy functions in the basis, the 2s and 2p functions, fit very very poorly to the higher energies of hydrogen. This is unnerving because the processing of these particular integrals in my program required a pretty complicated bit of programming to facilitate. I got accurate energies for 1s, but poor energies for 2s and 2p… maybe the program is working for 1s, but isn’t working for 2s or 2p. The infuriating part here is that 2s has very similar symmetry to 1s and is treated by the program in roughly the same manner, but the energy was off then too. I spent time analytically proving to myself that the most simple expression of the 2p orbital was being calculated correctly… and it is; I get consistent numbers across the board, just that there is a baked in inaccuracy in this particular set of basis functions which makes them not fit the equivalent hydrogenic orbital energies. It did not make much sense to me why the molecular community was citing this particular basis set so consistently, even though it really doesn’t seem to fit hydrogen very well. I’m not yet totally convinced that my fundamental math isn’t somehow wrong, but when numbers start emerging that are consistent with each other from different avenues, usually it means that my math isn’t failing. I still have some other error checks I’m thinking about, but one additional thought must be added.

In reality, the molecular orbitals are not required to mimic the atomic parts from which they can be composed. At the locations in a molecule very close to atomic nuclei, the basis functions need to look similar to the atomic functions in order to contain the flexibility to mimic atoms, but the same is not true at locations where multiple nuclei have sway all at once. The choice of making orbitals atom-like is a convenience that might save some computational overhead; you could have a sequence of any set of orthogonal functions you want and be able to calculate the molecular orbitals without looking very deeply at what the isolated atoms seem to be. For roughly the first two rows of the periodic table, up to fluorine, most of the electrons in an atom are within reach of the valence band, meaning that they are contributed out into the molecule and distributed away from the nucleus. A convenient basis set for capturing this appears sort of atom-like around the nuclei, but not necessarily… if you have an infinite number of Gaussians, Slater functions or simple sines and cosines, the flexibility of a properly orthogonalized basis set can capture the actual orbital as a linear combination. The choice of using Gaussians is computationally convenient for the situation of having the electron distributed in finite clouds around atom centers, making the basis set small, but not more than that. The puzzle is simply to have sufficient flexibility at any point in the molecule for the basis to capture the appropriate linear combination describing the molecule. An infinite sum of terms can come arbitrarily close.

In this, it isn’t necessary for the basis functions to exactly duplicate the true atomic orbitals since that isn’t what you’re looking for to begin with. In a way, the atomic orbitals are therefore disconnected from the molecular orbitals. Trying to exactly reproduce the atoms is misleading since you don’t actually have isolated atoms in a molecule. Presumably, a heavy atom will appear very atom-like deep within its localized potential, but not up on top.

 

(edit 12-11-18):

I’ve managed to build a working self-consistent field calculator for generating molecular wave functions. Hopefully, I’ll get a chance to talk more about it when there’s time.

The Difference between Quantity and Quality

I decided that I felt some need to speak up about a recent Elon Musk interview I saw on YouTube. You probably know the one I mean since it’s been making the rounds for a few days in the media over an incident where Mr. Musk took a puff of weed on camera. This is the interview between Mr. Musk and Joe Rogan.

I won’t focus on the weed. I will instead focus on some overall impressions of the interview and on something that Musk said in the context of AI.

I admit that I watch Joe Rogan’s podcast now and then. I don’t agree with some of his outlooks regarding drug use (had it been me on camera instead of Musk, I would have politely turned down the pot) but I do feel that Rogan is often a fairly discerning thinker; he advocates pretty strongly for rational inquiry when you would expect him to just be another mook. That said, I usually only watch clips rather than entire podcasts. God help me, media content would fill my life more than it already does if I devoted the 2.5 hours necessary to consume it.

Firstly, I must say that I really wasn’t that pleased with how Joe Rogan treated Elon Musk. He might well have just reached across the table and given the poor man a hand job with how much glad-handing he started with. He very significantly played up Musk’s singularity, likening him (not unfavorably) to Nikola Tesla. Later, he said flat out that “it’s as if Musk is an alien,” he’s so singular. Rogan jumped into talking about a dream where there were “a million” Nikola Teslas, or some such, and speculated how unbelievable the world would be if there were a million Elon Musks, how much innovation would be achieved. In response to that, I think he’s over-blowing what is possible with innovation and not thinking that clearly about how Elon Musk got into the position he’s in.

I do not diminish Elon Musk as an innovator, to start with. The likelihood of my hitting it the way he has is not good, so I can’t say that he isn’t as singular as one might make him out to be. He is in rarefied air in terms of earning potential and the money he has to throw around; only a handful of people occupy the same room. But a part of what made Elon Musk was an innovation that is shared across a few people, namely the money made from creating Paypal, for which Musk can’t take exclusive credit. Where Musk is now depends quite strongly on this foundation: the time which bootstrapped him into the stratosphere he currently occupies was the big tech boom of the Dotcom era, when the internet was rapidly expanding, when many people were trying many new ideas and when the entire industry was in a phase of exponential growth. Big ideas were very low-hanging fruit in a way that is not possible to retread now. For instance, it would take a lot to get somewhere with a Paypal competitor today since you would have to justify your infrastructure as somehow preferable to Paypal, which has now had twenty years to entrench and fortify. It’s unlikely social networks will ever produce another Mark Zuckerberg without there being some unoccupied space to fill, which is more difficult to find with everyone trying to create yet another network. Musk is not that different; he landed on the field at a time when the getting was very good. Perhaps someone will hit it with an AI built in a garage and make a trillion dollars, but my feeling is that such an AI will emerge from a foundation that is already deep and hard to compete with, such as Google, which is itself an example of an entity that came into being when the soil was fertile and which would be difficult to retread, or compete with, twenty years later. It is this environment that grew Elon Musk.

Elon Musk won his freedom with an innovation that he cannot take exclusive credit for. Having gained a huge amount of money, he’s no longer beholden to the same checks that hold most everyone else in place. I think that were it not for this preexisting wallet, Musk would not be in a position to make the innovations he’s getting credit for today. This isn’t a bad thing, but you must hold it in context. The environment of the Dotcom era produced one Elon Musk and a bunch of others, like Pichai and Brin and Bezos, because there were a million people competing for those goals… and the ones who hit at the right time and worked hardest won out. This is why there can’t be a million Elon Musks; there aren’t really a million independent innovations worth that much money which won’t just cannibalize each other in the marketplace. Musk slipped through, as did Bezos, who is wielding as much if not more power for a similar reason (Steve Jobs was another of this scope, but he’s no longer on the field and Apple is simply coasting on what Jobs did.) There are not many checks holding Elon Musk back at this point because he has the spending power to more or less do whatever he feels like. This power counts for a lot. I would suggest that there are plenty of people out there right now who are capable of roughly what Musk did, who simply haven’t hit the kind of break that lifts them quite so far.

As in the video, one can certainly focus on the idea mill that Elon Musk has in his head, but a distinguishing feature of Musk is not just ideas; he is definable by an incredible work ethic. Would you pull 100-hour work weeks? Somebody who is holding down the equivalent of more than two forty-hour-a-week jobs is probably earning at least twice what you can earn in forty hours a week! I would point out that Elon Musk has five kids and I’ve got to wonder if he even knows their names. My little angel is at least forty hours of my week that I am totally happy to give, but it means I’ve only got about forty hours otherwise to work;-)

Is he an alien? No. He’s a smart guy who literally worked his ass off at great, huge, personal expense and managed to hit a lucky spot that facilitated his freedom. Maybe he would have made it just as well if displaced ten years forward or backward in time, but my feeling is that the space currently occupied by his innovations would likely be occupied by someone else of similar qualities to Musk. The environment would have produced someone by simple selection. The idea mill in his head is also of dubious provenance given that sci-fi novelists had been writing about the things he’s trying to achieve for at least forty years before Musk arrived on the scene: propulsive rocket landings were written about by Robert Heinlein and Ray Bradbury and executed first by NASA in the 1960s to land on the moon… SpaceX is doing something amazing with it now, but it isn’t an original idea in and of itself. Musk’s hard work is amazing hard work to actualize the concept, even though the concept isn’t new. Others should probably get some credit for the inspiration.

Joe Rogan glad-handing Elon Musk for his singularity overlooks all of this. I do not envy Musk his position and I can’t really imagine what he must’ve been thinking being on the receiving end of that.

I feel that Musk has put himself in an unfortunate position of being a popularizer. He’s become a cultural go-to guru for what futurism should be. This has the unfortunate side effect of cutting in two directions: Musk is in a position where he can say a lot and have people listen, at the expense of having people pay attention to him when he would probably rather they not be. Oh dear God, Elon Musk just took a puff of that marijuana! The media is grilling him for that moment. How many people are smoking it up, nailing themselves in an exposed vein with a needle and otherwise sitting on a street corner somewhere, masturbating in public right this very second, whom the media is not focused on?

For Musk, in particular, I think the pressure of his position is starting to chafe. He may not even be able to see it in himself. Musk has so much power that he’s subject to Trumpian exclusivism; actual reality has been veiled to him behind a mask of yes-men, personal assistants and sycophants to such a degree that Musk is beginning to buy (or has already completely bought) the glad-handing. Elon Musk can fire anyone who doesn’t completely fit the mold he envisions for that employee. There is a power differential that insulates him most of the time and he’s gotten used to wielding it. For instance, Elon Musk relates a story while talking about the dangers of AI to Joe Rogan where he says that “nobody listened to him.” Who was he talking to? “Nobody” is Barack Obama. “Nobody” is senators and Capitol Hill. As he said it, you could pretty clearly see that Elon Musk expected that these people should have listened to him! Not to say that someone like Obama should have ignored him about the existential threat posed by AI, but Elon Musk felt that he personally should have been the standard bearer. Think about that. The mindset there is really rather amazing. The egotism is enormous. Egotism can certainly take you a long way by instilling confidence, but it has a nasty manner of insulating a person from his or her own shortcomings. For a man who works 100-hour work weeks, one has to wonder if Musk is ever anyone but the CEO. Can he deal with reality not bending to his will when he says “You’re fired”? Musk decided to play superhero during the Thai soccer team cave crisis when he built the personal-sized submarine to try to help out. Is it any wonder that he didn’t respond well to being told that the concept wouldn’t have worked? I have no doubt he was being magnanimous and I feel bad that he certainly feels slighted for offering the help but being rebuffed. I don’t know that he was actually seeking the spotlight in the news so much as that he felt obligated to be the superhero that the glad-handers are conditioning him to believe that he is. Elon Musk has gotten used to the notion that when he breathes, the wind is felt on the other side of the world, and he draws sustenance from people telling him on Twitter that they feel the air moving over there.

Beware the dangers of social media. It will intrinsically surface the extreme responses because it is designed to do exactly that. If you can’t handle the haters, stay clear of the lovers. Some fraction of the peanut gallery that you will never meet will always have something to say that you won’t like hearing…

(Yes, I am aware of the irony of being a member of the anonymous internet peanut gallery heckling the stage. Who will listen? Who knows; I’m comfortable with my voice being small. If Barack Obama reads what I’m saying, maybe he’ll read it to completion. If so, thanks!)

All that said, I think that Elon Musk is in a very difficult position psychologically. He spends nights sleeping on the floor of his office at Tesla (supposedly), working very, very hard at managing people and projects, expecting the things he orders and is busy implementing to go exactly as he says they should. For a 100-hour work week, this is tremendous isolation. He’s at the top, locked in a box where his outlet, social media, always tells him that he is the man sitting on top of the mountain, and then heckles him when he takes a second out to… do X, help rescue some children, take a puff on a joint, look away from the job at hand. Would you break? I’m happy I spend forty hours a week with my little angel. I’m happy my wife tells me when I’m full of shit. I couldn’t handle Elon Musk’s position. Can you imagine the fear of having the whole world looking over your shoulder, just waiting for one of your ideas to completely implode? Social isolation is profoundly dangerous in all its forms.

In answer to Joe Rogan, Elon Musk is not an alien and he isn’t singular. Maybe you don’t believe me, but I actually say this as a kindness to Elon Musk, in some hope that he finds a way around his isolation. He should find a better outlet than what he currently uses, or the pressure is going to break him. There are other people in this world whose minds are absolutely always exploding, who lie awake at night and struggle to keep it under control. I have no doubt that this takes different shapes for different people who feel it, but I definitely understand it as a guy who lies awake at night struggling to turn off the music, turn off the equations, turn off the visions. Some people do see things that lie just beyond where everyone else does and you don’t hear from them. They may work much smaller jobs and may not have a big presence on social media, but this doesn’t mean they don’t have clear vision. Poor old Joe Rogan, toking up on his joint, turns off the parts of himself that might work that way… he more or less admits that he can’t face himself and smokes the pot to shed the things he doesn’t like! Mr. Rogan went cold turkey on pot for a month and related a story from that time about having vivid dreams. What is your chance at vision? Is it like mine? Do you shuffle it under the rug?

Anyway, that’s a part of my response to how the interview was carried out. I want also to respond a little bit to some of the content that was said. For reference, here’s the relevant clip that has them talking about AI.

There is a section of that clip that has Elon Musk talking about some of the rationale for the startup Neuralink. He speaks about what he calls the “human bandwidth problem.” The idea here, as he relates it, is that one of the reasons humans can’t compete with AI is that we don’t acquire the breadth of information that a computer-based AI can as quickly. In this view, a picture is worth more than a thousand words because a picture can deliver more information to the human brain in a much shorter space of time than any other means by which a human can import information. The point of Neuralink, then, is to increase human bandwidth. An example that Musk gives is that smartphones imbue their users with superhuman abilities and information access: the ability to navigate traffic or find hotels or restaurants without previously knowing of these things. He asserts that possession of a smartphone already makes people cyborgs. He then reasons that by making a link that circumvents the five senses and places remote information access and control straight into the human mind, humans gain some parity with AI, since an AI can gain access to information without the delay associated with seeing or hearing an input.

I think Elon Musk is being somewhat naive about this. Bandwidth is not the only problem we face here in light of what AI might potentially be capable of. Yes, AI in a computer has a tremendous advantage in being able to parse information with speed; this is fundamentally what computers are good for, taking huge amounts of information and quickly executing a simple, repetitious and very fast methodology in order to sort the depths. A smart computer program starts with the advantage of being faster than people. Elon Musk sort of asserts in what he says that humans can become better than we are by breaking the plane and putting essentially a smartphone interface straight into our heads, that speeding up our ability to get hold of the information would put us at an advantage.

I don’t really agree with him.

Having access to a smartphone has revealed a number of serious problems with the capacity for humans to deal with greater bandwidth. Texting while driving has become a way for people to die since the advent of cellphones. Filter silos occur because people simply don’t have enough time to absorb (and I mean “absorb” in the sense of “to grok” rather than in the sense of “to read” or “to watch,” and the subtlety means the universe in this case) the amount of information that the internet places at our disposal. Musk has voiced the assessment that if only we could get past our meagre rate of information uptake, we might somehow be at an advantage. But having access to all the information in the world has not stopped fake news from becoming a problem; it has made people confident that they can get answers quickly without installing in them an awareness that maybe they don’t understand the answer they got. Getting to answers ever more quickly won’t change this problem.

Humans are saddled with a fundamental set of limits in our ability to process the information that we take up. Getting to information faster does not guarantee that anyone makes better decisions with that information once they have it. Would people stop spending all day stuck in social media, doing nothing of use but literally contemplating their own navel lint in the next big time-waster app-game, just because they could get to that app more quickly? I don’t think they would. Getting to garbage information faster does not assure anything but reaching the outcomes of bad decisions more quickly.

AI has the fundamental potential to simply circumvent this entire cognitive problem by getting rid of everything that is human from the outset. In fact, the weight of what we currently judge as “valuable AI” is a machine that fundamentally makes good decisions based on the data it acquires in a computer’s time frame. By definition, the AI we’re trying to construct doesn’t make bad decisions that a human would otherwise make and would self-optimize to make better decisions than it initially started out making.

What Elon Musk is essentially suggesting with Neuralink is that a computer could be made to regulate the bandwidth of what is going into someone’s skull without there being a tangible intermediary, but that says nothing about the agent that is necessary on the outside to pick and choose what information is sent down the pipe into someone’s head by the hypothetical link. Even if you replaced the soft matter in someone’s head with a monolithic computer chip that does exactly the same thing as a wet brain, you are saddled with the fact that the brain you duplicated is only sometimes making good decisions. The AI we might create, from inception, is going to be built to make more good decisions than the equivalent human brain. Why include a brain at all?

This reveals part of the problem with Neuralink. The requirement that we make better decisions than we currently do means that, in placing links into our brains from the outside, we need to include some artificial agent that ultimately judges for us whether our brain will make the best decision based upon whatever information the agent might pipe to that brain –time is money and following a wrong path is wasted time. This is required in order for us to remain competitive. That is fundamentally a superintelligence that circumvents our ability to decide what is in our own best interest, since people are verifiably not always capable of deciding that: would people be ODing on pain meds so frequently if they made better decisions? Moreover, our brain doesn’t even necessarily need to know what decisions the superintelligence governing our rate of information uptake is making on our behalf. The company that employs the stripped-down superintelligence is more efficient than the one which might make bad decisions based upon the brain that superintelligence is plugged into. The logical extent of this reasoning is that the computer-person interface is reduced to a person’s brain more or less just being kept occupied and happy while an overarching machine makes all the decisions.

I don’t really like what I see there. It’s a very happy, pleasurable little prison which more or less just says that we’re done. If this kind of superintelligence is created, very likely we won’t be in a position to stop it, even if we plug our brains into it and pretend we’re catching a ride on the rocket.

I find it hard to believe that Elon Musk hasn’t thought of it this way. If we are just a boot drive for something better at our niche than us, I don’t see that as different from how things have been throughout the history of life. If humans as we are go extinct, maybe the world our successor inhabits will be a green, clean heaven. Surely, it will make better decisions than us.

I do understand why Musk is making the effort with Neuralink. Maybe something can be done to place us in a position where, if we create this thing, we will be able to benefit at some level. I suppose that would be the next form of the Bill and Melinda Gates Foundation…

(edit 9-12-18):

As I am wont to do, I’ve been thinking about this post a bit for several days since I posted it. I feel now that I have a relevant extension.

When I responded to what Elon Musk had said about Neuralink, I interpreted his implication in such a way that would definitely not place a living brain on the same page as AI. It seemed to me, and still seems on looking back, that there is a distinct architectural division between the entity of the brain and the link being placed into it.

I think there is perhaps one way to blur the line a bit more. The internal machine link must be flexible and broadly parallel enough in its interaction with the brain that the external component can become interleaved at the level of the neural network. It cannot be a separate neural network; there can be no apparent division for this to work. The training of the brain itself would have to proceed in parallel with the external neural network in such a way that the network map spans smoothly between the two. In this case, “thinking together” would have no duality. What this means is that you could probably only do it at this level with an infant whose brain is still rapidly growing and who doesn’t yet have a cohesive enough neural network to really have a full self.

I’m not sure this hybrid has a big advantage over a pure machine. The one possibility that could be open here is that the external part of the amalgamated neural network is open-ended; even though there is finite flexibility in the adult flesh-and-blood brain, awareness would have to be decentralized across the whole network, where the machine part continues to be flexible later in that person’s life. In this way, awareness could smoothly transition into additions to the machine neural network later.

The problem here is that I don’t know of any technology currently available that could build this sort of physical network. The interlinking of neurons in the brain is so profusely parallel and flexible that it does not resemble the means by which neural networks are achieved in computers. I don’t believe it can happen with monolithic silicon; there would need to be something new. Given a mature technology, could such a thing be extended to adults? I don’t know.

Science fiction is all well and good, but I think we’re probably not there yet. Maybe at the end of the century of biology using a combination of genetically tamed bacteria and organic semiconductors.

(edit 9-30-18):

One thing to add that I learned earlier this week, which may poke another little hole in the Cult of Elon. Please note that I never refer to him as “Elon”; I’ve never met him, I’m not on a first-name basis with him and I definitely do not know him –to me, he’s Elon Musk or Mr. Musk, but not Elon. I will give him respect by not pretending familiarity with him. I do respect him, in as much as I can respect a celebrity whose exploits I hear and read about in the popular media, but I’m not a member of the Cult of Elon.

Elon Musk gets tremendous credit for Tesla the car company. He runs the company and is given a huge amount of credit for their existence. He does deserve credit for his hard work and his role in Tesla, but beware thinking of Tesla as his child or his creation. Elon Musk did not found Tesla.

Tesla was founded by Martin Eberhard and Marc Tarpenning. Elon Musk was apparently among the major round-one investors of the company and ended up as chairman of the company board since he put down a controlling investment share. Musk did not become CEO of Tesla until he helped oust Martin Eberhard from that role when Tesla apparently floundered. Eberhard and Tarpenning have since both departed from Tesla and it sounds as if the relationship is acrimonious, with Eberhard claiming that Musk is rewriting history.

Who can say which claims are completely true, but if you read about Elon Musk, it seems like he doesn’t play very well with others if he isn’t in charge. And, being in charge, he gets the lion’s share of the credit for the vision and execution. Stan Lee gets this kind of credit too and is perhaps imbued with similar vision. It definitely overwrites the creativity of the other talented people who also had a hand in actualizing the creation.

The fact of Tesla is that someone other than Musk started the vision and Musk used his tremendous financial leverage to buy that vision. He now gets credit for it. I’ll let the reader decide how much credit he actually deserves.

Another thing I thought to spend a moment writing about is the reason why I chose the original title of this post. Why the difference between quantity and quality? In the last part of the original blog post, I mentioned the dichotomy between humans being able to access information as quickly as AI and humans being able to make decisions as good as an AI’s. I think that making people faster does not equate to making people better. This is one of the potentially powerful (and dangerous) aspects of AI: the point is that AI could be made ab initio to convey human-like intelligence without incorporating the intrinsic, baked-in flaws in human reasoning that are the result of us being the evolved inhabitants of the African savanna rather than the engineered product of a lab.

The tech industry may not be thinking too carefully about this, but the AI being created right now is very savant-like; it incorporates mastery acquired in a manner that humans can also “sort of” achieve. Note, I say “sort of” because this superhumanity is achieved by humans at the expense of the parts of humanity that are recognizably human: autistic savants are not typical people and do not relate to typical people as a typical person would. I believe this kind of intelligence is valuable because many people exhibit qualities of it to the benefit of the rest of the human race, but I think these people are often weak in other regards that place them out of sorts with what is otherwise “human.” Machines duplicating this intelligence are not headed toward being more human because the human parts in the equation slow down the genius. There is an intrinsic advantage to building the AI without the humanity because the parts that are recognizable as human fundamentally do not make the choices which would be a coveted characteristic of a high-quality AI. This is not to say that such an AI would be unable to relate to people in a manner that humans would regard as “human-like”… to the contrary; I think these machines can be made so that the human talking to one would be unable to tell the difference, but it would be a mistake to claim that the AI thinks as a human does just because it sounds like a person.

If people given cybernetic interfaces with computers are able to make deep decisions many times more quickly than unaltered humans, does this make them as good as an AI? The quantity of decisions attempted will be offset by the number of times those quickly made decisions turn out to be failures. On the other hand, the AI that people aspire to create is defined by the specifically selected capacity to make successful decisions more frequently than people can. You can see this in the victory of AlphaGo over human opponents: the person and the machine made choices at the same rate, alternating turns so that their decision rate was 1:1, but the machine made right choices more frequently and tended to win. Would the person have been better if they had made choices faster? If the AI makes one decision of sufficient foresight and quality that humans are required to stumble through ten decisions just to keep up, what point is there in humans being faster than they are? While the AI is intrinsically faster just by being a machine, this does not begin to touch the potential that the AI need not be intrinsically faster. It just needs to be able to make that one decision that the fastest person had no hope of ever seeing. Smarter is not always faster.

That’s what I mean by quality versus quantity. Put another way, would Elon Musk have made his notorious “funding secured” Tweet, which has since gotten him sued by the SEC and cost him his position as chairman of the Tesla board, if he had a smartphone plugged straight into his brain? His out-of-control interface with his waistband-mounted internet box is what caused him problems in the first place; would an even more intimate interface have improved matters? Where an AI could’ve helped is by interceding: recognizing that the decision would run afoul of the SEC in two months and preventing the Tweet from being carried out.

Think about that. It should scare the literal piss out of you.

Magnets, how do they work? (part 4)

(footnote: here lies Quantum Mechanical Spin)

This post continues the discussion of how ferromagnets work as considered in part 1, part 2 and part 3. The previous parts dealt with the basics of electromagnetism, introducing the connections from Maxwell’s equations to the magnetic field, illustrating the origin of the magnetic dipole and finally demonstrating how force is exerted on a magnetic dipole by a magnetic field.

In this post, I will extend in a totally different direction. All of the previous work highlighted magnetism as it occurs with electromagnets, how electric currents create magnetic fields and respond to those fields. The magnetic dipoles I’ve outlined to this point are loops of wire carrying electric current. Bar magnets have no actual electrical wires in them and do not possess any batteries or circuitry, so the magnetic field coming from them must be generated by some other means. The source of this is a cryptic phenomenon that is in its nature quantum mechanical. I did hint at it in part 3, but I will address it now head-on.

In 1922, Walther Gerlach and Otto Stern published an academic paper where they brought to light a weird new phenomenon which nobody had seen before (it’s actually the third paper in a series that describes the development of the experiment, with the first appearing in 1921). That paper may be found here if you aren’t stuck behind a paywall. Granted, the paper is in German and will require you to find some means of translation, but that is the original paper. The paper containing the full hypothesis is here.

In their experiment, Stern and Gerlach built an evaporator furnace to volatilize silver. Under vacuum, as good as could be attained at the time, silver atomized from the furnace was piped through a series of slits to collimate a beam of flying silver atoms. This beam of silver atoms was then passed through the core of a magnetic field generated by an electromagnet, in a situation much as mentioned previously in the context of the Lorentz force.

2000px-lorentz_force-svg

As illustrated here, one would expect a flying positive charge ‘q’ with velocity ‘v’ to bend one way upon entering magnetic field ‘B’, while a negative charge bends the other way. Without charge, there is no deflection due to the Lorentz force. In the Stern-Gerlach experiment, the silver atom beam passing through the magnetic field then impinges on a plate of glass, where the atoms are deposited. This glass plate could be taken and subjected to photographic chemistry to “develop” and enhance the intensity of the silver deposited on the surface, enabling the experimenters to see more clearly any deposition on the glass. According to the paper, the atom beam was cast through the magnetic field for 8 hours at a stretch before the glass plate was developed to see the outcome.

The special thing about the magnetic field in the Stern Gerlach experiment is that, unlike the one in the figure above, it was intended to have inhomogeneity… that is, to be very non-uniform.

For the classical expectations, a silver atom is a silver atom is a silver atom, where all such atoms are identical to one another. From the evaporated source, the atoms are expected to have no charge and would be undeflected by a magnetic field due to conventional Lorentz force, as depicted above. So, what was the Stern-Gerlach experiment looking for?

Given the new quantum theory that was emerging at the time, Stern and Gerlach set out to examine quantization of angular momentum in a single atom. Silver is an interesting case because it has a full d-orbital, but only a half-filled s-orbital. In retrospect, s-orbitals are special because they have no orbital angular momentum themselves. This, in addition to the other closed shells in the atom, would suggest no orbital angular momentum for this atom. In 1922, the de Broglie matter wave had not yet been proposed and Schrodinger and Heisenberg had not yet produced their mathematics; quantum mechanics was still “the old quantum” involving ideas like the Bohr atom. In the Bohr atom, electron orbits are allowed to have angular momentum because they explicitly ‘go’ around, exactly like the current loop that was used for calculations in the previous parts of this series. The idea then was to look for quantized angular momentum by trying to detect magnetic dipole moments. A detection would be exactly as detailed in part 3 of this series; magnetic moments are attracted or repelled depending on their orientation with respect to an external magnetic field.

In their experiment, Stern and Gerlach did what scientists do: they exposed a glass plate to the silver beam with the electromagnet turned off, and then they turned around and did the same experiment with the magnet turned on. It produced the following set of figures:

Stern gerlach figure 2 and 3

The first circle, seen at left, is Figure 2 from the paper, where there is no magnetic field exerted on the beam. The second circle, with the ruler in it, is Figure 3, where a magnetic field has now been turned on. In the region at the center of the image, the atom beam is clearly split into two bands relative to the control exposure. The section of field in the middle of the image contains a deliberate gradient, where the field points horizontally with respect to the image and changes strength going from left to right. One population of silver diverts left under the influence of the magnetic field while a second population diverts right.

Why do they deviate?

What this observation means is that the S-orbital electron in an evaporated silver atom, having no magnetic dipole moment due to the orbital angular momentum of going around the silver atom nucleus, has an intrinsic dipole moment in and of itself that can feel force under the influence of an external magnetic field gradient. This is very special.

The figure above is an example of a quantum mechanical “observation” where what appears are “eigenstates.” As I’ve repeated many times, when you make an observation in quantum mechanics, you only ever actually see eigenstates. In this case, it is a very special eigenstate with no fully classical analog: spin. For fundamental spin, especially the spin of a silver atom with a single unpaired S-orbital electron, there are only two possible spin states, now called spin-up and spin-down. Spin appears by providing a magnetic dipole moment to a “spinning” quantum mechanical object. The electron, having a charge and a spin, has a magnetic dipole moment and is therefore responsive to a magnetic field gradient. The population of silver atoms passing into the magnetic field deflects according to this tiny electron dipole moment, where the nucleus is dragged along by the “S-orbital” electron state through the electrostatic interaction between the electrons and the nucleus. The dipole moment is repelled or attracted in the magnetic field gradient exactly as described in part 3 and, since this dipole is quantum mechanical, it samples only two possible states: oriented with the external field or oriented against it, giving the two bands in the figure above.
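To put a rough scale on this, here is a back-of-the-envelope sketch in Python. Every number below is an order-of-magnitude assumption of mine (the gradient, beam speed and path length are not taken from the 1922 paper), but it shows how a tiny electron-sized dipole moment translates into a visible splitting on the plate:

```python
# Rough scale of the Stern-Gerlach splitting. All numbers are
# order-of-magnitude assumptions, not values from the 1922 paper.
mu_B  = 9.274e-24   # Bohr magneton (J/T), the electron dipole scale
dBdz  = 1.0e3       # assumed field gradient across the beam (T/m)
m_Ag  = 1.79e-25    # mass of a silver atom (kg)
v     = 600.0       # assumed beam speed out of the furnace (m/s)
L_mag = 0.035       # assumed path length through the gradient (m)

F = mu_B * dBdz              # force on the dipole, one sign per spin state
t = L_mag / v                # time each atom spends in the gradient
deflection = 0.5 * (F / m_Ag) * t**2

print(f"each band deflects ~{deflection * 1e3:.2f} mm")  # roughly 0.09 mm
```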

The conventional depiction of the magnetic dipole formed by a wire loop can be adapted to the quantum mechanical phenomenon of spin by adding a scale adjustment called the gyromagnetic ratio. This number enables the angular momentum actually associated with the spin quantum number to be scaled slightly to account for the strength of the magnetic dipole produced by that spin. This is necessary since a particle carrying a spin is not actually a wire loop –the great peculiarity of spin is that if it is postulated as the internal rotation of a given particle, the calculated distribution of the object in question tends to break relativity in order to generate the appropriate angular momentum, leading most physicists to consider spin a quantum mechanical phenomenon that is not actually the object ‘spinning.’ For all intents and purposes, though, spin behaves very much like actual rotational spin and it shows up in a way that is very similar to electric charges running around a wire loop.

spin magnetic moment

The math in this figure is quick and fairly painless; it converts magnetic dipole moment from a wire loop into a magnetic dipole moment that is due to spin angular momentum. The equation at the start is classical. The equation at the end is quantum mechanical. One thing that you often see in non-relativistic quantum mechanics is that classical quantities adopt into quantum mechanics as operators, so the thing at the very end is the magnetic dipole moment operator. This quantity can be recast various ways, including with the Bohr magneton and in various adjustments of g while the full operator is useful in Zeeman splitting and in NMR.
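For reference, here is my reconstruction of the chain that figure walks through, built from the definitions used earlier in this series (treat it as a sketch; ‘g’ is the gyromagnetic ratio discussed above):

```latex
% a single charge q circulating at speed v on a ring of radius r:
\mu = I A = \frac{q v}{2 \pi r} \, \pi r^2
          = \frac{q}{2 m} \, (m v r)
          = \frac{q}{2 m} \, L
\qquad \longrightarrow \qquad
\hat{\mu} = g \, \frac{q}{2 m} \, \hat{S}
```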

The existence of spin gives us a very interesting quantity; this is a magnetic dipole moment that is intrinsic to matter in the same way as electric charge. It simply exists. You don’t have to create it, as in the wire loop electromagnet, because it is already just there. There is no requirement for batteries or wires. Spin is one candidate source for the magnetic dipole moment that is required to produce a bar magnet.

It is completely possible to attribute the magnetism of bar magnets to spin, but saying it this way is actually something of a cop-out. How are atoms organized so that the spin present in atoms of iron becomes large enough to create a field that can cause a piece of metal to literally jump out of your hand and go sliding across the table? Individual electronic and atomic spins are really very tiny and getting them to organize in such a way that many of them reinforce each other’s strengths is difficult. I’ve said previously that chemistry is wholly dependent on angular momentum closures and one will note that atomic orbitals fill or chemically bond in such a way as to negate angular momentum: for example, S-orbitals (and each and every available orbital) are filled by two electrons, one spin-up and one spin-down, so that no individual orbital is left with angular momentum. Sigma bonds and Pi bonds form so that unpaired electrons in any atom may be shared out to other atoms, allowing the participants to cancel their spin angular momentum. While there are exceptions, like radicals, nature generally abhors exposed spin. Even silver, the atoms of which are understood to have detectable spin, is not ferromagnetic: you can’t make a bar magnet out of silver! What conspires to make it possible for spin to become macroscopically big in bar magnets? This is the one big puzzle left unanswered.

As an interesting aside, in their paper, Stern and Gerlach add an acknowledgement thanking a “Mr. A. Einstein” for helping provide them with the electromagnet used in their experiment from the academic center he headed at the time.

Magnets, how do they work? (part 3)

In this section I intend to detail the source of magnetic force, particularly as experienced by loops of wire in the form of magnetic dipoles. The intent here is to address ultimately how compass needles turn and how ferromagnets attract each other.

I should start by asking for forgiveness. I’ve recently defended my PhD. While the weight is off now, the experience has hobbled my writing voice. It really should be easier at this point but there’s a hollowness that gnaws at me every time I sit down to write. Please forgive the listless undercurrent I’m trying to shake off. The protracted effort of finishing an advanced degree is not small by itself, but it was combined in this case with the first couple months of my daughter’s life. If you’ve ever tried to finish a PhD and survive the first six weeks of an infant’s life simultaneously, you will perhaps know the scope of this strain. I feel thin. But, I’m surviving. This post has lingered for a few months with me going back and forth trying to find the strength to soldier through.

If you will recall the previous sections I posted, part 1 and part 2, you’ll remember that I’m pursuing the lofty goal of explaining how magnets work. In part 1, I detailed some of the very basic equations for magnetism, including connections from Biot-Savart to Ampere’s Law, producing some of the basic definitions of the magnetic field. In part 2, I tackled the construct of the magnetic dipole in the form of a loop of wire. My ultimate goal is to explain how it is that an object like a compass needle can possess and respond to magnetic fields without anything like a loop of wire present. The goal today is to tackle where magnetic force comes from, that is how an object like a magnetic dipole can be dragged through space or rotated so that it changes its orientation in a magnetic field.

As you may already know, the fundamental equation describing magnetic force is the Lorentz force equation.

Lorentz force

This particular version combines electric force (from the E-field) with magnetic force (from the B-field). In this equation ‘F’ is force, ‘q’ is electric charge, ‘v’ is the velocity of that charge, ‘E’ is the E-field and ‘B’ is the B-field. The electric field part of the equation is not needed and we can focus solely on the magnetic part. Magnetic force is a cross product, signified by the ‘x’, which means that the force of the interaction occurs at right angles to the magnetic field acting on the object and the path that object is traveling. If you stop and think about it, this is kind of weird since it means that an electrically charged object must be moving in order to feel a magnetic force. But magnets appear to feel force even if they aren’t moving, right?

A fundamental part of what makes electronics special is that, while the mass of the circuitry stays firmly fixed in position, the electric charges within the wires are able to move. The electricity inside moves even while the computer sits stupidly on the desk. I know this comes as a surprise to no one, but electricity is definitely something that moves even though the object it moves through appears to remain stationary.

One typical way to deal with the magnetic part of the Lorentz force equation is to cast it in a form conducive to electric current (defined as ‘moving charge’) rather than directly considering ‘a charge that is moving.’ To do this, you break the total force into the fragment of force exerted on a fragment of the charge present in the electric current.

equation 1 lorentz eqn rejigger

In this recast, the force is considered to be due to that tiny fraction of charge. Velocity opens up into length traveled per time where the length contains the fragment of charge ‘dq’. The differential for time is shifted from the length to the charge, creating a current present within the length, “electric current” being defined as “amount of charge passing a point of measurement during a length of time.” In the final form, the fragment of force is due to a current in a length of wire as crossed into the B-field. You could add up all the lengths of a wire containing the current and find the sum of all magnetic force on that wire. One thing to note is that the sign on the current by convention follows the vector direction associated with the length, where the current is considered to be moving positive charges traveling along the length. The direction on the differential length is residual from the velocity. In reality, for real electric current, the current ‘I’ carries a negative sign for the ‘minus’ value of electric charge, creating a negative sign on the force. Negative current will behave as if it is positive current traveling backward.
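If you like seeing this numerically, here is a minimal Python sketch of the recast law, dF = I dl × B, summed over the pieces of a wire. The function name and all of the numbers are mine, purely for illustration:

```python
import numpy as np

def magnetic_force(I, dl, B):
    """Total force on a wire chopped into straight pieces.

    I  : current (A), signed along the segment directions
    dl : (N, 3) array of vector lengths for each piece of wire (m)
    B  : (N, 3) array of the B-field at each piece (T)
    """
    dF = I * np.cross(dl, B)   # dF = I dl x B, piece by piece
    return dF.sum(axis=0)      # add up all the fragments of force

# A 1 m straight wire along z carrying 2 A in a uniform 0.1 T field along x:
dl = np.tile([0.0, 0.0, 0.01], (100, 1))   # 100 pieces, 1 cm each
B  = np.tile([0.1, 0.0, 0.0], (100, 1))
print(magnetic_force(2.0, dl, B))          # ~[0, 0.2, 0] N, pointing along +y
```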

From previous work, all the elements now exist for dealing with an electromagnet: where the magnetic field comes from and how force is exerted. As illustrated in the previous post, a magnetic field is a mathematical object which is produced in a region of space around a moving charge. As demonstrated here, a basic force is felt by a moving charge when it passes through a magnetic field. These are empirical observations which can be condensed to say simply that one moving charge has a way of exerting force on a second moving charge, where the force between them is strictly dependent on their movement. If neither charge were moving, they would feel no magnetic force and, if only one charge or the other were moving, they would also feel no magnetic force. The idea of magnetic force is in this way a profoundly alien thing; we can understand it only as a result of the basic precondition of having electric charges moving with respect to one another.

The savvy, science-literate reader may stop and think hard about this and say “Wait a minute, neutron stars, objects made of material lacking any electric charge, have a very powerful magnetic field.” To this, I would smile and refer you to quantum mechanics. The fact that an electrically uncharged neutron can possess or respond to a magnetic field is one of the pieces of evidence suggesting that the protons and neutrons in atomic nuclei are themselves divisible into smaller objects, quarks. One of the great successes of Quantum Electrodynamics was the precision calculation of the gyromagnetic ratio of the electron, connecting the magnetic dipole moment of a stationary electron to that electron’s quantum mechanical spin. Spin can be regarded very simply as true to its name: a motion undertaken by an object that does not shift the location of that object’s center of mass. Therefore, a magnetic field resulting from spin is still a product of some sort of motion. I probably will never talk very deeply about Quantum Electrodynamics because I don’t believe I have a very good understanding of it.

There is also a mathematical trick that one can play using Einstein’s Special Relativity to unify electric force and magnetic force, showing that magnetic field is a frame of reference effect and that electric fields are essentially the same thing as magnetic fields, but I will speak no more of this in the current post.

The bottom line, though, is simply this: magnetic force and field, in the terms put forward by the Lorentz force law written above and the Biot-Savart Law written previously, are due to the motion of charges as currents, whether fractional (quarks) or integer (electrons, protons and ions). This motion can be either translational, such that the charge moves in some direction, or rotational, such that an apparently stationary charge sits there “spinning” sort of like a top.

How these moving currents exert force can be illustrated using the math derived above. The most basic assembly that usually appears in physics classes is the example of two metal wires, each conducting an electric current.

two wires

In this image, I’ve sketched the basic situation where two wires exist in a Cartesian space. The arrangement is in forced perspective because I felt like trying to be artistic. These wires are parallel to each other and the separation between them is constant everywhere along their lengths. Both wires contain an electric current of positive sign that is moving parallel to the z-direction with both currents moving in the same direction. We will assume for simplicity that the separation is much larger than the cross-sectional width of the wire so that we don’t have to do more math than is necessary… in other words, the current is traveling along a line placed at the center of the wire. Here, both wires will produce magnetic fields and, conversely, the currents inside both wires will feel force exerted on them by the magnetic field produced by the other wire.

Electric currents remain trapped within wires because these objects stay electrically neutral: a moving electron is held from leaving the wire by the force of oppositely charged atoms arranged in the crystal lattice of the wire. Force exerted on the current by the magnetic field is transferred to the mass of the wire by these electrical interactions. In a metal wire, “loose” electrons reside in a quantum mechanical structure called a “conduction band” that only exists within the lattice of the crystalline host. Electrons are able to flow freely within this conduction band and cannot leave unless they have been provided with enough energy to jump out of the crystal, an amount of energy called the work function, as illustrated –for instance– by the photoelectric effect. Even under magnetic force, which is felt by the moving charges within the wire and not directly by the mass of the wire, moving charges don’t suddenly jump out of the stationary wire. Magnetic forces on such a current-carrying wire can cause the entire wire to move, where the magnetically responsive current drags the entire mass of the wire with it by electrostatic interactions. If enough energy is supplied to loose charges within the wire bulk, these charges can be forced to jump out of the wire, but they usually won’t since most interactions do not provide sufficient energy to exceed the work function. Einstein won his Nobel prize for essentially explaining this in the form of the photoelectric effect.

These details notwithstanding, the magnetic field produced by one wire can be calculated using the Ampere’s Law result derived in the previous post.

Loop integral

This magnetic field is the magnetic field of the wire. The only thing you truly need to know here is that the magnetic field will wrap around the wire in the direction of the arrow in the figure above, assuming that the current with positive sign is coming straight out of the page at you. It is noteworthy that the field strength will tend to fall off something like 1/distance moving away from the wire.

Here is the force on the second wire given the magnetic field (from above) imposed on it from the first wire.

attracting wires

With the currents pointed parallel, the wires will tend to experience forces that are directed inward between them. They will tend to pull together.

Suppose we flip the direction of the current in wire 2…

repelling wires

Here, the situation is reversed. The forces are outward such that the wires tend to repel each other. Consider that I’ve done a very soft calculation to see this: all I did was use the direction of the magnetic field at one wire as generated by the opposite wire, filled in the direction of the current for the relevant wire and worked the cross product in my head. There is a subtlety due to the fact that real currents in real wires have the negative charge of real electrons, but the result doesn’t change: parallel currents going the same direction tend to attract while parallel currents going in opposite directions tend to repel.
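The attraction and repulsion can also be put in numbers. Using the 1/distance field from Ampere’s Law quoted above, the standard result for the force per unit length between two long parallel wires is mu0·I1·I2/(2·pi·d). A tiny sketch, with arbitrary currents and separation of my choosing:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # permeability of free space (T*m/A)

def force_per_length(I1, I2, d):
    """Force per meter between long parallel wires separated by d (m).

    Positive = attraction (currents in the same direction);
    negative = repulsion (opposite directions, via a signed current).
    """
    return MU0 * I1 * I2 / (2 * np.pi * d)

print(force_per_length(10.0, 10.0, 0.01))    # parallel currents: attract
print(force_per_length(10.0, -10.0, 0.01))   # antiparallel: repel
```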

With the simple construct of two parallel wires, we have the basic tools necessary to go crazy and build us one of these:

railgun

Here, we’ve got two parallel wires with current running in opposite directions where we place a third wire perpendicular, in a current arc, between the two. Here is the arrangement:

railgun diagram

In this case, the Lorentz force on the third wire is directed parallel to the first two wires. If the third wire is just a sliding bridge, the magnetic force will accelerate it parallel to the direction of the first two wires: given very high currents and a long accelerating path, this could produce very high velocities.

The advantage is actually quite remarkable in the case of a railgun. For a conventional gun, the muzzle velocity is limited by the detonation rate of the gunpowder, so the projectile can’t ever go faster than the explosion of the gunpowder expands. For a railgun, there is no such limit. Further, this suggests some architectural requirements for the railgun: the two rails are parallel to each other and carry current in opposite directions, meaning that the rails push outward against each other, so the railgun wants to explode apart. The barrel of the railgun must therefore be built strongly enough to prevent this explosion from occurring. The device is ridiculously simple, but it has been militarily difficult to realize because nobody has had an electrical generator compact and powerful enough to reach velocities beyond what gunpowder can achieve while remaining transportable with the mechanism.
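For a sense of scale, here is a toy estimate; every number in it is an assumption of mine chosen just to show the bookkeeping, not a real railgun specification:

```python
import math

# Toy railgun estimate; all numbers are assumptions for scale only.
I     = 1.0e6   # rail current (A)
B     = 2.0     # assumed average field between the rails (T)
L     = 0.1     # rail separation / length of the sliding bridge (m)
m     = 3.0     # projectile mass (kg)
rails = 10.0    # accelerating path length (m)

F = I * L * B                  # Lorentz force on the sliding bridge
a = F / m                      # no gunpowder ceiling, just F = ma
v = math.sqrt(2 * a * rails)   # speed after the full accelerating path
print(f"force {F:.1e} N, muzzle velocity ~{v:.0f} m/s")
```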

The railgun is really just a momentary curiosity in this post to show that the basic idea of magnetic force has a tangible realization. The next objective is to pursue the compass needle…

For this, we come back to the notion of a current loop as seen in the magnetic dipole post. To begin with, you could fabricate a simplified version of the current loop by simply expanding the model used for the railgun.

self force of loop

In this construction, the wires are all physically connected to each other with the current of wire 1 spilling into wire 4, then from 4 into 2 and so on, going around. The currents in each wire would therefore all be equal. Further, the magnetic field would also be equal on each wire and pointed upward normal to the plane of the loop –if you look back at the images of the magnetic field produced by a wire loop as in the previous post, you can convince yourself that this is the case. The cross product would therefore cause the force to be pointing outward at every location in the plane of the loop. Since the magnitudes of the forces are all equal and the directions are all in opposition, there would be no net force on the object. This is not to say no force; the forces just all balance. For a current loop, as in the railgun, the self-forces are making the loop want to explode outward. The magnetic field of a loop on itself therefore can’t cause that object to translate, but if you increase the current high enough, the force would exceed the tensile strength of the loop and cause it to explode apart.

As I’ve previously mentioned, the wire loop is an analog to the magnetic dipole. I will once again assert totally without proof that a compass needle is essentially a magnetic dipole and will have the same behaviors as a magnetic dipole. If we learn how a current carrying wire loop moves, we will have shown how a compass needle also moves.

Consider first the wire loop immersed in an external magnetic field. This magnetic field will be at an angle to the loop and will be uniform everywhere, which is to say that the strength of the external field is the same on all parts of the loop. Once again, the loop will carry a circulating current of ‘I’.

Current loop in field at angle

First, we could calculate the net force exerted on this wire loop by the external field. You may have an intuition about it, but I’ll calculate it anyway.

Here, I will set up the Lorentz force so that I can calculate each element of the loop and then sum them up by integral. This will ultimately lead me to finding the net force of a uniform magnetic field on a current loop.

Integrating force on loop p1

This converts the force into a Cartesian form that can be calculated in a polar geometry, integrating only over the angle Phi in the x-y plane.

Integrating force on loop p2

After working through the cross product, of which only four terms survive, careful examination of these terms shows that there are only two unique integrals in terms of Phi. When you see which they are, since I’m integrating over the full circle, you should know instantly what will happen…

Integrating force on loop p3

Despite the fact that there’s an angle in this calculation, a uniform magnetic field on a current loop will not cause the loop to translate since there is no net force, meaning that the loop cannot be dragged in any direction.

Even though I explicitly ran the calculation so that the astute observer notes where the structure collapses to zero, a little bit of simple logic should also reveal the truth. For the ring of current, any two points along the ring which are diametrically opposed experience forces of the same magnitude, but in opposite directions. Therefore, for any such pair of points selected on the ring, the forces cancel to zero, even though the magnetic field is at an angle to the ring, and the pairs cover every location along the ring. This depends on the fact that the magnetic field is everywhere uniform. If the strength of the B-field had been dependent on Phi in the calculation above, there could have been four unique terms, of which maybe none would have integrated to zero.
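For the numerically inclined, the cancellation is easy to verify by brute force. This sketch (arbitrary numbers of mine) chops the ring into pieces, applies dF = I dl × B with a uniform tilted field, and sums:

```python
import numpy as np

I, R = 2.0, 0.5                 # arbitrary current and loop radius
phi  = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
dphi = 2 * np.pi / phi.size

# differential length elements around a ring lying in the x-y plane
dl = R * dphi * np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)

B = np.array([0.3, 0.0, 0.8])           # uniform field, tilted to the loop
F = (I * np.cross(dl, B)).sum(axis=0)   # sum of dF = I dl x B

print(F)   # ~[0, 0, 0]: the loop feels no net drag
```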

I’ve concluded here that the ring cannot be dragged in any direction. Note, I did not say that the ring doesn’t move! A more interesting case is to consider what happens if we look instead for torque on the ring. Remember that torque is the rotational equivalent of force, which can cause an object to turn without actually dragging it in any direction.

For convenience, I will calculate the torque from an origin at the center of the ring. I can place my origin anywhere in space that I like, but I’ll fix it to a location which removes a few mathematical steps. I would also note that the magnetic field and the differential length element for a section of ring also have the same forms that I found for them above.

Integrating torque on loop p1

The vector identity I’ve used here is a very simple one which removes the intricacy of the cross product and leaves me with just a vector dot product. I’ve used the fact that the vector describing the location of the unit length of the ring is perpendicular to that unit length at every location where this calculation would ever be made, so long as I calculate torque from the center of the ring.

I already found ‘B’ and ‘dl’ above, so I just need to find a compatible form for the position vector ‘r.’

Integrating torque on loop p2

With this I can finally put all the elements together and start integrating.

$$\vec{\tau} = IR^2\int_0^{2\pi}\left(B_x\cos\phi+B_y\sin\phi\right)\left(-\sin\phi\,\hat{x}+\cos\phi\,\hat{y}\right)d\phi$$

After cleaning up the vectors and performing a bit of algebra to consolidate terms, we see that there are only three unique integrals sitting inside that mess. I chose the limits of integration because I want to work the integral through 360 degrees of the current loop, so 0 to 2π. I will work each in turn, but they are easy integrals.

$$\int_0^{2\pi}\sin\phi\cos\phi\,d\phi = \frac{1}{2}\int_0^{2\pi}\sin 2\phi\,d\phi = 0$$

The first integral simply goes to zero, meaning that the first term in the torque will die. What about the next integral?

$$\int_0^{2\pi}\cos^2\phi\,d\phi = \int_0^{2\pi}\frac{1+\cos 2\phi}{2}\,d\phi = \pi$$

This integral didn’t die. It gave me a piece of pi. The next integral works in a similar manner.

$$\int_0^{2\pi}\sin^2\phi\,d\phi = \int_0^{2\pi}\frac{1-\cos 2\phi}{2}\,d\phi = \pi$$

So, we substitute these three results into the torque equation.

$$\vec{\tau} = I\pi R^2\left(-B_y\,\hat{x}+B_x\,\hat{y}\right)$$

If you squint at the vector portion of that final result there, you might realize that it looks very much like a cross product.

$$\vec{\tau} = I\pi R^2\left(\hat{z}\times\vec{B}\right)$$

So, a current loop does experience torque when immersed in a magnetic field. Moreover, the vector quantity in that cross product that I left unpacked should look eerily familiar. You might look back at that previous post I did on the magnetic dipole in order to recognize the magnetic dipole moment.

$$\vec{\tau} = \vec{\mu}\times\vec{B}\,,\qquad \vec{\mu} = I\pi R^2\,\hat{z} = IA\,\hat{n}$$

I have achieved a compact expression that says that the current loop will experience a torque within a magnetic field. If the magnetic field is uniform in strength everywhere over the loop, the loop will not be dragged in any direction, but it can be rotated since it will experience a torque. The nature of this rotation can be predicted from the form of the cross product.
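Before unpacking the geometry of this result, here's a quick numerical cross-check, again a sketch with arbitrary stand-in values rather than anything from the original derivation: it sums r × (I dl × B) around the ring and compares the total against μ × B.

```python
import numpy as np

# Same arbitrary stand-ins as before: radius (m), current (A), tilted field (T)
R, I = 0.05, 2.0
B = np.array([0.3, 0.0, 0.4])

phi = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
dphi = 2.0 * np.pi / len(phi)

# Position of each segment relative to the ring's center, and its length element
r = R * np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)
dl = R * dphi * np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)

# Torque about the center: sum of r x (I dl x B) over every segment
tau_sum = np.cross(r, I * np.cross(dl, B)).sum(axis=0)

# Compact result: mu x B with mu = I * (pi R^2) * z-hat
mu = I * np.pi * R**2 * np.array([0.0, 0.0, 1.0])
print(tau_sum, np.cross(mu, B))  # the two agree
```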

Diagram: the rotation axis sits perpendicular to the plane containing the dipole moment and the field.

If a plane is formed between the magnetic dipole moment of the loop and the magnetic field, the loop will tend to rotate around an axis perpendicular to that plane. Also, because of the form of the cross product, the torque is maximal when the angle between the dipole and the field is 90 degrees; if the vectors point in the same direction (or in exactly opposite directions), the sine drives the torque to zero. So, the magnetic dipole will tend to oscillate about the direction of the magnetic field, and if the action involves friction –so that energy imparted by work done from the torque can be dispersed– the two vectors will eventually settle pointing in the same direction.

Does this description remind you of anything?

1200px-Kompas_Sofia

Image from wikipedia

If the needle of a magnetic compass contains a magnetic dipole that points along the needle’s axis, this equation perfectly describes how that needle behaves.

Magnetic dipoles tend to rotate to point along magnetic fields.

There is a non-trivial provision in this statement. The rotation effect I've described occurs only if the current or moving charge carries a trivially small angular momentum relative to the total rotational inertia of the rotating object. If the angular momentum is large, something very different happens: the magnetic dipole moment will instead precess around the axis of the magnetic field… that is, it will move more like a gyroscope than a compass needle. I won't back this statement up right now; I hope instead to write a bit more about NMR, whose classical picture involves exactly this magnetic precession (precession fits into the quantum mechanical view of NMR as well, but the effect is much more difficult to see).

This bit of physics also explains why bar magnets tend to rotate in magnetic fields, which is one of the original objectives of this series of posts. This is how magnets (and I use that in the ICP sense of the word) tend to rotate.

How bar magnets move in a magnetic field can be accessed with just a bit more work.

After collapsing away the directionality of the vectors to produce a scalar version of magnetic torque that shows only its magnitude (so that you can see the sine in the equation), it's possible to construct a magnetic energy involving the magnetic dipole moment and the field by simply finding the work performed in rotation. The rotational analog of work is torque applied over a rotation, yielding another integral.

$$U(\theta) = -\int\tau_\theta\,d\theta = \int\mu B\sin\theta\,d\theta = -\mu B\cos\theta = -\vec{\mu}\cdot\vec{B}$$
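A quick symbolic check of that integration, sketched with sympy; the sign convention here, a restoring torque τθ = -μB sin θ about the field axis, is the assumption doing the work:

```python
import sympy as sp

theta = sp.symbols('theta')
mu, B = sp.symbols('mu B', positive=True)

# Restoring torque about the field axis (it tends to shrink theta)
tau = -mu * B * sp.sin(theta)

# Potential energy is minus the work done by the torque over the rotation
U = -sp.integrate(tau, theta)
print(U)  # -> -B*mu*cos(theta), i.e. U = -mu . B
```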

The potential here is a very special one because it’s also the Hamiltonian for spin in a magnetic field in quantum mechanics. I’ll stop short of jumping into the quantum and simply manipulate classical physics. One thing to note here is that I earlier stated that a magnetic dipole experiences no net force if the magnetic field is uniform. What if the magnetic field is no longer uniform?

This sort of potential depends not only on the angle between the vectors, but on the form of the vectors themselves. One way to return to a directional force from a potential is to take the negative (spatial) gradient of the potential. It's important to note that the vectors above sit in a dot product, reducing the combination to a scalar; taking the gradient of this dot product runs backward through the calculus that produces work from force, instead producing a vectorial force from a scalar potential.

$$\vec{F} = -\nabla U = \nabla\left(\vec{\mu}\cdot\vec{B}\right)$$

It’s initially difficult to see what this will do, so I’m going to create a situation of simple constructs to demonstrate it. Suppose we have a magnetic dipole sitting in a magnetic field where the dipole and field are pointing in the same direction. Now, suppose that the intensity of this magnetic field gets weaker in some direction, conveniently along the axis that is shared by both vectors.

$$\vec{F} = \nabla\left(\mu B_z(z)\right) = \mu\frac{\partial B_z}{\partial z}\,\hat{z}\,,\qquad \frac{\partial B_z}{\partial z} < 0$$

In this particular case, the magnetic dipole feels a force, as indicated, running opposite the z-axis: it is literally running toward where the magnetic field gets stronger. Note that if you flip the direction of the magnetic dipole, you also flip the sign of the force, making the dipole accelerate toward where the magnetic field is weaker. What do "toward" and "away from" mean for a real magnetic dipole? Recall my fancy picture of the dipolar magnetic field from three neighboring dipoles:

mag_dipole1

Here, the colors show the intensity of the field, with red as strong and blue as weak. The fields are red where the dipoles are located and blue further away, meaning that the intensity of the magnetic field decreases as you move away from a magnetic dipole. In the demonstration of magnetic force above, if the dipole is oriented in the same direction as the field, it will accelerate toward stronger field… or toward the source of that field if that field is from another dipole. Conversely, if the dipole is flipped so that it points against the field, it will be pushed toward weaker field. In the case of a dipole pointed parallel to the z-axis and positioned at (0,0), the directions of the field look like this:

magnetic dipole

The intensity of the field decreases going away from the origin. A second dipole positioned at location (0,5) and pointed along the z-axis will accelerate toward the origin (be attracted), but if rotated to point along -z, it will accelerate away (be repelled).
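As a sketch of that attract/repel claim, assuming the textbook point-dipole field formula (which I haven't derived in these posts) and made-up unit moments, one can build U = -μ·B numerically and take its gradient by central differences:

```python
import numpy as np

def dipole_field(m, r):
    """Field of a point dipole with moment m at displacement r (SI; mu0/4pi = 1e-7)."""
    rmag = np.linalg.norm(r)
    rhat = r / rmag
    return 1e-7 * (3.0 * np.dot(m, rhat) * rhat - m) / rmag**3

def force_on_dipole(m1, m2, pos, h=1e-6):
    """F = -grad U with U = -m2 . B1(pos), via central differences."""
    F = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3); dp[i] = h
        U_plus  = -np.dot(m2, dipole_field(m1, pos + dp))
        U_minus = -np.dot(m2, dipole_field(m1, pos - dp))
        F[i] = -(U_plus - U_minus) / (2.0 * h)
    return F

m1 = np.array([0.0, 0.0, 1.0])    # source dipole at the origin, pointing +z
pos = np.array([0.0, 0.0, 5.0])   # second dipole sitting a distance 5 up the z-axis

print(force_on_dipole(m1, np.array([0.0, 0.0, 1.0]), pos))   # F_z < 0: attracted
print(force_on_dipole(m1, np.array([0.0, 0.0, -1.0]), pos))  # F_z > 0: repelled
```

The signs of the printed z-components are the whole story: negative means pulled toward the source dipole, positive means pushed away.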

This actually sums up all the behaviors of bar magnets. In the case of bar magnets, the ends are assigned polarity as the north and south poles. If the magnets are faced with their north ends pointed at each other, they tend to repel, while with north end facing south end, they tend to attract. If two magnets are allowed to accelerate toward each other with south end pointed to north end, they impact and stick. Meanwhile, if they are positioned to repel, north to north, they tend to accelerate away from one another, unless the orientation of one is bumped, whereby that magnet abruptly rotates around 180 degrees (given the non-zero torque mentioned above) and both magnets attract each other again and may accelerate together to stick.

Wow, huh? That sums up how bar magnets work.

So, why doesn’t a compass needle jump out of your hand and accelerate toward one of the poles of planet Earth? Both are dipoles, right. It’s mainly because the field of the Earth is nearly uniform at the location where the compass needle experiences it and therefore with such small gradient, it can’t pull the compass out of your hand.

One subtlety a physics student may note here is that magnetic fields are universally understood to do no work. But, two magnets accelerating across the table and sticking to each other sounds a lot like work. The force I’ve provided as the source of this work is actually due to a spatial derivative of the magnetic field, a gradient, which turns out to be an electric field of a sort. What? Yeah, I know. Weird, but true.

Keep in mind that I haven’t actually solved the final problem of the original post: all of my magnetic dipoles to this point are generated by electrical currents in wires. I still need to show where the magnetic dipole comes from in a metal like iron since there aren’t any batteries in a bar magnet or a compass needle. This is actually a very hard question that dips directly into quantum mechanics and I will end this post here because quantum is its own arena.

Disagreeing with “Our Mathematical Universe”

My wife and I have been listening to Max Tegmark’s book “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality” as an audiobook during our trips to and from work lately.

When he hit his chapter explaining Quantum Mechanics and his "Level 3 multiverse," I found that I profoundly disagree with this guy. It's clear that he's a grade A cosmologist, but I think he skirts dangerously close to being a quantum crank when it comes to multi-universe theory. I've been disagreeing with his take for the last couple of driving sessions, and I will do my best to sum up from memory the specific issues I've taken with it. Since this is a physicist making these claims, it's important that I be accurate about my disagreement. In fact, I'll start with just one and see whether I feel like going further from there…

The first place where I disagree is where he seems to show physicist Dunning-Kruger regarding fields in which he is not an expert. Physicists are very smart people, but they have a nasty habit of overestimating their competence in neighboring sciences… particularly biology. I am in a unique position in that I've been doubly educated; I have a solid background in biochemistry and cell molecular biology in addition to my background in quantum mechanics. I can speak at a fair level on both.

Professor Tegmark uses an anecdote (got to be careful here; anecdotes inflate mathematical imprecision) to illustrate how he feels quantum mechanics connects to events at a macroscopic level in organisms. There are many versions, but essentially he says this: when he is biking, the quantum mechanical behavior of an atom crossing through a gated ion channel in his brain affects whether or not he sees an oncoming car, which then may or may not hit him. By quantum mechanics, whether he gets hit or not by the car should be a superposition of states depending on whether or not the atom passes through the membrane of a neuron and enables him to have the thought to save himself or not. He ultimately elaborates this by asserting that “collapse free” quantum mechanics states that there is one universe where he saved himself and one universe where he didn’t… and he uses this as a thought experiment to justify what he calls a “level 3” multiverse with parallel realities that are coherent to each other but differ by the direction that a quantum mechanical wave function collapse took.

I feel his anecdote is a massive oversimplification that more or less throws the baby out with the bath water. The quantum event in question, "whether or not a calcium ion in his brain passes through a calcium gate," is connected to the macroscopic biological phenomenon of "whether he decides to bike through traffic," or alternatively "whether or not he decides to turn his eye in the appropriate direction," or "whether or not he sees a car coming when he starts to bike."

You may notice this as a variant of the Schrodinger "Cat in a box" thought experiment. In this experiment, a cat is locked in a perfectly closed box with a sample of radioactive material and a Geiger counter that will dump acid onto the cat if it detects a decay; as long as the box is closed, the cat remains in some superposition of states, conventionally considered "alive" or "dead," connected with whether or not the isotope emitted a radioactive decay. I've made my feelings about this thought experiment known before here.

The fundamental difficulty comes down to what the superposition of states means when you start connecting an object with a very simple spectrum of states, like an atom, to an object with a very complex spectrum of states, like a whole cat. You could suppose that the cat and the radioactive emission become entangled, but I feel there's some question whether you could ever actually know that they were entangled, simply because you can't discretely figure out what the superposition should mean: "alive" and "dead" for the cat are not a binary on-off difference from one another the way "emitted or not" is for the radioactive atom. There are a huge number of states the cat might occupy that are very similar to one another in energy, and the spectrum spanning "alive" to "dead" is so complicated that it might as well just be a thermal universe. Whether or not the entanglement actually happened, classical thermodynamics and statistical mechanics should be enough to tell you, in classically "accurate enough" terms, what you find when you open the box. If you wait one half-life of a bulk radioactive sample, when you open the box you'll find a cat that is burned by acid to some degree or another. At some point, quantum mechanics does give rise to classical reality, but where?

The “but where” is always where these arguments hit their wall.

In the anecdote Tegmark uses, as I’ve written above, the “whether a calcium ion crossed through a channel or not” is the quantum mechanical phenomenon connected to “whether an oncoming car hit me or not while I was biking.”

The problem that I have with this particular argument is that it loses scale. This is where quantum flapdoodle comes from. Does the scale make sense? Is all the cogitation associated with seeing a car and operating a bike on the same scale at which you can actually see quantum mechanical phenomena? No, it isn't.

First, all the information coming to your brain from your eyes telling you that the car is present originates from many, many cells in your retina, involving billions of interactions with light. The muscles that move your eyes and your head to see the car are instructed by thousands of nerves firing simultaneously, and these nerves fire from gradients of calcium and other ions… molar-scale quantities of atoms! A nerve doesn't fire or not based on the collapse of possibilities for a single calcium ion. It fires based on thermodynamic quantities of ions flowing through many gated ion channels all at once. The net effect of one particular atom experiencing quantum mechanical ambivalence is swamped under statistically large quantities of atoms picking all of the choices they can pick from the whole range of possibilities available to them, giving rise to the bulk phenomenon of the neuron firing. Let's put it this way: for the nerve to fire or not based on quantum mechanical superposition of calcium ions would demand that the nerve visit that single thermodynamic state where all the ions fail to flow through all the open ion gates in the membrane of the cell all at once… and there are statistically few states where this happens compared to the statistically many states where some or many ions have passed through the gated pore (this is what underpins the chemical potential that drives the functioning of the cell). If you've learned any stat mech at all, you know that such a state is so rare that it would probably not be visited even once in the entire age of the universe. Voltage gradients in nerve cells are established and maintained through copious application of chemical energy, which is ultimately constructed from quantum mechanics but expressed in bulk by plain old classical thermodynamics. And this is merely whether a single nerve "fired or not"; on top of that, your capacity for "thought" doesn't depend on any single nerve –if one nerve in your retina failed to fire, all the sister nerves around it would still deliver an image of the car speeding toward you to your brain.
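To put a number on how rare that state is, here's a deliberately generous toy calculation: pretend only a million ions are in play and each one independently "declines" to cross an open channel with even odds. Both numbers are made up for illustration; real fluxes are larger and heavily biased.

```python
from math import log10

# Made-up illustrative numbers: a million ions, each independently
# "choosing" whether to cross an open channel with probability 1/2
n_ions = 1_000_000
p_decline = 0.5

# Probability that every single ion declines at once: p^n.
# Work in log10 because the value underflows any float.
log10_p_all = n_ions * log10(p_decline)
print(f"P(all decline) ~ 10^{log10_p_all:.0f}")  # ~10^-301030

# For comparison, roughly 10^17 seconds have elapsed since the Big Bang;
# even sampling this system a trillion times per second for that whole
# span, you would never expect to catch such a state once.
```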

Do atoms like a single calcium ion subsist in quantum mechanical ambivalence when left to their own devices? Yes, they do. But when you put together a large collection of these atoms, it is physically improbable that every single atom will make the same choice all at once. At some point you get bulk thermodynamic behavior, and the decisions that your brain makes are based on bulk thermodynamic behaviors, not isolated quantum mechanical events.

Pretending that a person made a cognitive choice based on the quantum mechanical outcome of a single atom is a reductio ad absurdum, and it is profoundly disingenuous to start talking about entire parallel universes where you swerved right on your bike instead of left based on that single calcium atom (regardless of how liberally you wave around the butterfly effect). The nature of physiology in a human being, at all levels, is about biasing fundamentally random behavior into directed, ordered action, so focusing on one potential speck of randomness doesn't mean that the aggregate should fail to behave as it always does. All the air in the room where you're standing right now could suddenly pop into the far corner, leaving you to suffocate (there is one such state in the statistical ensemble), but that doesn't mean that it will… Closer to home, you might win a $500 million Power Ball jackpot, but that doesn't mean you will!

I honestly do not know what I think about the multiverse or about parallel universes. I would say I'm agnostic on the subject. But if all parallel universe theory is built on such breathtaking Dunning-Kruger as Professor Tegmark exhibits when talking about the connection between quantum mechanics and the actualization of biological systems, the only stance I'm motivated to take is that we don't know nearly enough to be speculating. If Tegmark is supporting multiverse theory based on such thinking, he hasn't thought about the subject deeply enough. Scale matters here, and neglecting the scale means you're neglecting the math! Is he neglecting the math elsewhere, in his other huge, generalizing statements? For the scale of individual atoms, I can see how these ideas are seductive, but stretching them onto statistical systems is just wrong when you start claiming to see the effects of quantum mechanics at macroscopic biological levels where people actually do not. It's like Tegmark is trying to give Deepak Chopra ammunition!

Ok, just one gripe there. I figure I probably have room for another.

In another series of statements that Tegmark makes in his discussion of quantum mechanics, I think he probably knows better, but by adopting the framing he has, he risks misinforming the audience. After a short discussion of the origins of Quantum Mechanics, he introduces the Schrodinger Equation as the end-all, be-all of the field (despite speaking briefly of Lagrangian path integral formalism elsewhere). One of the main theses of his book is that “the universe is mathematical” and therefore the whole of reality is deterministic based on the predictions of equations like Schrodinger’s equation. If you can write the wave equation of the whole universe, he says, Schrodinger’s equation governs how all of it works.

This is wrong.

And I find this to miss most of the point of what physics is and what it actually does. Math is valuable to physics, but one must always be careful that the math not break free of its observational justification. Most of what physics is about is making measurements of the world around us and fitting those measurements to mathematical models, the "theories" (small-t) provided to us by the Einsteins and the Sheldon Coopers… if the fit is close enough, the regularity of a given equation will sometimes make predictions about further observations that have not yet been made. Good theoretical equations have good provenance in that they predict observations that are later made, but the opposite can be said for bad theory, and the field of physics is littered with a thick layer of mathematical theories which failed to account for the observations in one way or another. The process of physics is a big selection algorithm where smart theorists write every possible theory they can come up with and experimentalists take those theories and see if the data fit; if a theory does accommodate observation, it is promoted to a Theory (big-T) and is explored to see where its limits lie. On the other hand, small-t "theories" are discarded if they don't accommodate observation, at which point they are replaced by a wave of new attempts that try to accomplish what the failure didn't. As a result, new theories fit over old theories and push back predictive limits as time goes on.

For the specific example of Schrodinger's equation, the mathematical model that it offers fits over the Bohr model by incorporating deBroglie's matter wave. Bohr's model itself fit over a previous model, and the previous models fit over still earlier ideas held by the ancient Greeks. Each later iteration extends the accuracy of the model, and whether the development settles depends on whether or not a new model has validated predictive power –this is literally survival of the fittest applied to mathematical models. Schrodinger's equation itself has a limit where its predictive power fails: it cannot handle Relativity except as a perturbation… meaning that it can't exactly predict outcomes that occur at high speeds. The deficiencies of the Schrodinger equation are addressed by the Klein-Gordon equation and by the Dirac equation, and the deficiencies of those in turn are addressed by the path integral formalisms of Quantum Field Theory. If you knew the state equation for the whole universe, Schrodinger's equation would not accurately predict how time unfolds, because it fails to work under certain physically relevant conditions. The modern Quantum Field Theories fail at gravity, meaning that even with the modern quantum, there is no assured way of predicting the evolution of the "state equation of the universe" even if you knew it. There are a host of follow-on theories –String Theory, Loop Quantum Gravity and so on and so forth– that vie for being The Theory That Fills The Holes but, given history, will probably only extend our understanding without fully answering all the remaining questions. That String Theory has not made a single prediction that we can actually observe right now should be lost on no one –there is a grave risk that it never will. We cannot at the moment pretend that the Schrodinger equation perfectly satisfies what we actually know about the universe from other sources.

It would be most accurate to say that reality seems to be quantum mechanical at its foundation, but that we have yet to derive the true "fully correct" quantum theory. Tegmark makes a big fuss about how "wave function collapse" doesn't fit within the premise of Schrodinger's equation, but argues that the equation could still hold as good quantum mechanics if a "level three multiverse" is real. The opposite is also true: we've known Schrodinger's equation is incomplete since the 1930s, so "collapse" may simply be another place where it's incomplete in a way we don't yet understand. A multiverse does not necessarily follow from this. Maybe pilot wave theory is the correct quantum, for all I know.

It might be possible to masturbate over the incredible mathematical regularity of physics in the universe, but beware the fact that it wasn't particularly mathematical or regular until we picked out those theories that fit the universe's behavior very closely. Those theories have predictive power because that is the nature of the selection criteria we used to find them; if they lacked that power, they would be discarded and replaced until a theory emerged meeting the selection criteria. To be clear, mathematical models can be written to describe anything you want, including the color of your bong haze, but they only have power because of their self-consistency. If the universe does something to deviate from what the math says it should, the math is simply wrong, not the universe. The moment you find neutrino mass, God help your massless-neutrino Standard Model!

Wonderful how the math works… until it doesn’t.

Edit 12-19-17:

We’re still listening to this book during our car trips and I wanted to point out that Tegmark uses an argument very similar to my argument above to suggest why the human brain can’t be a quantum computer. He approaches the matter from a slightly different angle. He says instead that a coherent superposition of all the ions either inside or outside the cell membrane is impossible to maintain for more than a very very short period of time because eventually something outside of the superposition would rapidly bump against some component of the superposition and that since so many ions are involved, the frequency of things bumping on the system from the outside and “making a measurement” becomes high. I do like what he says here because it starts to show the scale that is relevant to the argument.

On the other hand, it still fails to necessitate a multiverse. The simple fact is that human choice is decoupled from the scale of quantum coherence.

Edit 1-10-18:

As I’m trying desperately to recover from stress in the process of thesis writing, I thought I would add a small set of thoughts in this subject in an effort to defocus and defrag a little. My wife and I have continued to listen to this book and I think I have another fairly major objection with Tegmark’s views.

Tegmark lives in a version of quantum mechanics that fetishizes the notion of wave function collapse where he views himself as going against the grain by offering an alternative where collapse does not have to happen.

For a bit of context, "collapse" is a side effect of the Copenhagen interpretation of quantum mechanics. In this way of looking at the subject, the wave function will remain in superposition until something is done to determine what state the wave function is in… at this point, the wave function ceases to be coherent and drops into some allowed eigenstate, after which it remains in that eigenstate. This is a big, dominant part of quantum mechanics, but I would suggest that it misses some of the subtlety of what actually happens by trying to interpret, perhaps wrongly, what the wave function is.

Fact of the matter is that you can never observe a wave function. When you actually look at what you have, you only ever find eigenstates. But there is an added subtlety to this. If you make an observation, you find an object somewhere, doing something. That you found the object is indisputable, and you can be pretty certain of what you know about it at the time slice of the observation. Unfortunately, you only know exactly what you found; from this –directly– you actually have no idea either what the wave function was or even really what the eigenstates are. A particular location is clearly an eigenstate of the position operator, as quantum mechanics operates, but from finding a particle "here" you really don't know what the spectrum of locations it was potentially capable of occupying actually was. In order to learn this, the experiment which is performed is to set up the situation in a second instance, put time in motion and see that you find the new particle ending up "there," then to tabulate the results together. This is repeated a number of times until you get "here," "there" and "everywhere." Binning each trial together, you start to learn a distribution of how the possibilities could have played out. From this distribution, you can suddenly write a wave function, which tells the probability of making some observation across the continuum of the space you're looking at… the wave function says that you have "this chance of finding the object 'here' or 'there'."

The wave function, however you try to pack it, is fundamentally dependent on the numerical weight of a statistically significant number of observations. From one observation, you can never know anything about the wave function.

The same thing holds true for coherence. If you make one observation, you find what you found that one time; you know nothing about the spectrum of possibilities. For that one hit, the particle could have been in coherence, or it could have been collapsed to an eigenstate. You don’t know. You have to build up a battery of observations, which gives you the ability to say “there’s a xx% chance this observation and that observation were correlated, meaning that coherence was maintained to yy degree.”

This comes back to Feynman’s old double slit experiment anecdote. For one BB passing through the system and striking the screen, you only know that it did, and not anything about how it did. The wave function written for the circumstances of the double slit provides a forecast of what the possible outcomes of the experiment could be. If you start measuring which slit a BB went through, the system becomes fundamentally different based upon how the observation is made and different things are knowable, giving the chance that the wave function will forecast different statistical outcomes. But, you cannot know this unless you make many observations in order to see the difference. If you measure the location of 1 BB at the slit and the location of 1 BB at the screen, that’s all you know.

In this way, the wave function is a bulk phenomenon, a beast of statistical weight. It can tell you observations that you might find… if you know the setup of the system. An interference pattern at the screen tells that the history was muddy and that there are multiple possible histories that could explain an observation at the screen. This doesn't mean that a BB went through both slits, merely that you don't know what history brought it to the place where it is. "Collapse" can only be known after two situations have been so thoroughly examined that the chances for the different outcomes are well understood. In a way, it is as if the phenomenon of collapse is written into the outcome of the system by the set-up of the experiment, and the types of observations that are possible are ordained before the experiment is carried out. In that way, the wave function really is basically just a forecast of possible outcomes based on what is known about a system… sampling for the BB at the slit or not, different information is present about the system, creating different possible outcomes and requiring the wave function to make a different forecast that accounts for what is known. The wave function is something that never actually exists at all except to tell you the envelope of what you can know at any given time, based upon how the system differs from one instance to the next.
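To make the statistical-weight point concrete, here's a toy simulation with an idealized two-slit screen intensity; the envelope and fringe spacing are arbitrary choices of mine. Draw single hits from the "slits unobserved" forecast or from the "which-slit measured" forecast: one draw tells you nothing, and only the binned weight of many draws distinguishes the two.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 2001)  # screen coordinate, arbitrary units

# Idealized screen intensities: a shared single-slit envelope which is,
# for the unobserved case, modulated by an interference term
envelope = np.sinc(x / 4.0) ** 2
interference = envelope * np.cos(np.pi * x) ** 2  # slits unobserved
no_interference = envelope                        # which-slit measured

def sample_hits(intensity, n):
    """Draw n screen positions with probability proportional to intensity."""
    p = intensity / intensity.sum()
    return rng.choice(x, size=n, p=p)

one_hit = sample_hits(interference, 1)
print(one_hit)  # a single number: it says nothing about which forecast made it

# Only statistical weight reveals the wave function's forecast
many_a = sample_hits(interference, 100_000)
many_b = sample_hits(no_interference, 100_000)
counts_a, _ = np.histogram(many_a, bins=50, range=(-10, 10))
counts_b, _ = np.histogram(many_b, bins=50, range=(-10, 10))
print(counts_a[20:30])  # fringes appear in the binned counts...
print(counts_b[20:30])  # ...and are absent when the slit was watched
```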

This view directly contradicts the notion in Tegmark's book that individual quantum mechanical observations at "collapse" allow two universes to be created based upon whether the wave function went one way or another. On a statistical weight of one, it cannot be known whether the observed outcome was drawn from a collection of different possibilities or not. The possible histories or futures are unknown on a data point of one; that one is what it is, and you can't know that there were other choices without a large conspiracy of observations establishing what other choices could have happened. What that conspiracy gives you is the ability to say "there's a sixty percent chance this observation matches this eigenstate and a forty percent chance it matches that one," which is fundamentally not the same as the decisiveness required for a collapse of one data point to claim "we're definitely in the universe where it went through the right slit."

I guess I would say this: Tegmark’s level 3 multiverse is strongly contradicted by the Uncertainty Principle. Quantum mechanics is structurally based on indecisiveness, while Tegmark’s multiverse is based on a clockwork decisiveness. Tegmark is saying that the history of every particle is always known.

This is part of the issue with quantum computers: the quantum computer must run its processing experiment repeatedly, multiple times, in order to establish knowledge about coherence in the system. On a sampling of one, the wave function simply does not exist.

Tegmark does this a lot. He routinely puts the cart ahead of the horse, saying that math implies the universe rather than that math describes the universe (Tegmark: Math, therefore Universe. Me: Universe, therefore Math). The universe is not math; math is simply so flexible that you can pick out descriptions that accurately tell what's going on in the universe (until they don't). For all his cherry-picking of the "mathematical regularity of the universe," Tegmark quite completely turns a blind eye to where math fails to work: most problems in quantum mechanics are not exactly solvable, and most quantum advancement is based strongly on perturbation… that is, approximations and infinite expansions that are cranked through computers to churn out compact numbers that are close to what we see. In this, the math that "works" is so overloaded with bells and whistles to make it approach the actual observational curve that one can only ever say that the math is adopting the form of the universe, not that the universe arises from the math.

Edit 1-17-18:

Still listening to this book. We listened through a section where Tegmark admits that he’s putting the cart ahead of the horse by putting math ahead of reality. He simply refers to it as a “stronger assertion” which I think is code for “where I know everyone will disagree with me.”

Tegmark slipped gently out of reality again when he started into a weird observer-observation duality argument about how time "flows" for a self-aware being. You know he's lost it when his description fails to even once use the word "entropy." Tegmark is under the impression that the quantum mechanical choice of every distinct ion in your brain is somehow significant to the functioning of thought. This shows an unbelievable lack of understanding of biology, where mass structures and mass action form behavior. Fact of the matter is that biological thought (the awareness of a thinking being) is not predictable from the quantum mechanical behavior of its discrete underpinning parts. In reality, quantum mechanics supplies the bulk steady state from which a mass effect like biological self-awareness is formed. Because of the difference in scale between the biological level and the quantum mechanical level, biology depends only on the prevailing quantum mechanical average… fluctuations away from that average, the weirdness of quantum, are almost entirely swamped out by simple statistical weight. A series of quantum mechanical arguments designed to connect the macroscale of thought to the quantum scale is fundamentally broken without taking this into account.

Consider this: the engine of your gas-fueled car is dependent on a quantum mechanical behavior. Molecules of gasoline are mixed with molecules of oxygen in the cylinder and are triggered by a pulse of heat to undergo a chemical reaction where the atoms of the gas and oxygen reconfigure the quantum mechanical states of their electrons in order to organize into molecules of CO2 and CO. After the reorganization, the collected atoms in these new molecules of CO2 and CO are at a different average state of quantum mechanical excitation than they were prior to the reconfiguration –you could say that they end up further from their quantum mechanical zero point for their final structure as compared to prior to the reorganization. In "human baggage" we call this differential "heat" or "release of heat." The quantum mechanics describes everything about how the reorganization proceeds, right down to the direction a CO2 molecule wants to speed off after it has been formed. What the quantum mechanics does not directly tell you is that 10^23 of these reactions happen, and for all the different directions that CO2 molecules are moving after they are formed, the average distribution of their expansion is all that is needed to drive the piston… that this molecule speeds right or that one speeds left is immaterial: if it didn't, another would, and if that one didn't, still another would, and so on and so forth until you achieve a bulk behavior of expansion in the CO2 atmosphere that can push the piston. The statistics are important here. That the gasoline is 87 octane versus 91 octane, two quantum mechanically different approaches to the same thing, does not change that both drive the piston… you could use ethanol or kerosene or RP-1 to perform the same action, and the specifics of the quantum mechanics result in an almost indistinguishable state where an expanding gas pushes back the piston to produce torque on the crankshaft and drive the wheels around. The quantum mechanics is watered out to a simple average where the quantum mechanical differences between one firing of the piston and the next are indistinguishable. But, to be sure, every firing of the piston is not quantum mechanically exactly the same as the one before it. In reality, the piston moves despite these differences. There is literally an unthinkably huge ensemble of quantum mechanical states that result in the piston moving, and you cannot distinguish any of them from any other. There is literally no choice but to group them all together by what they hold in common and to treat them as if they are the same thing, even though at the basement layer of reality, they aren't. Without what Tegmark refers to as "human baggage" there would be no way to connect the quantum level to the one we can actually observe in this case. That this particular molecule of fuel reacted or not based on fluctuations of the quantum mechanics is pretty much immaterial.

The brain is no different. If you consider "thought" to be a quantum mechanical action, the specific differences between one thought and the next are themselves huge ensembles of different quantum mechanical configurations… even the same thought twice is not the same quantum mechanical configuration twice. The "units" of thought are in this way decoupled from the fundamental level, since two versions of the "same thing" are actually so statistically removed from their quantum mechanical foundation as to be completely unpredictable from it.

This is a big part of the problem with Tegmark's approach; he basically says "quantum underlies everything, therefore everything should be predictable from quantum." This is a fool's errand. The machinery of thought in a biological person is simply at a scale where the quantum mechanics has salted out into Tegmark's "human baggage"… named conceptual entities, like neuroanatomy, free energy and entropy, that are not mathematically irreducible. He gets to ignore the actual mechanisms of "thought" and "self-awareness" in order to focus on things he's more interested in, like what he calls the foundation structure of the universe. Unfortunately, he's trying to attach levels of reality that are not naturally associated… thought and awareness are by no means associated with fundamental reality –time passage as experienced by a human being, for instance, has much more in common with entropy and statistical mechanics than with anything else, and Tegmark totally ignores it in favor of a rather ridiculous version of the observer paradox.

One thing that continues to bother me about this book is something that Tegmark says late in it. The man is clearly very skilled and very capable at what he does, but he dedicates the last part of his book to all the things he will not publish on for fear of destroying his career. He feels the ideas deserve to be out (and, as an arrogant theorist, he feels that even the dross in his theories is gold), but by publishing a book about them, he gets to circumvent peer review and scientific discussion and bring these ideas straight to an audience that may not be able to sort the parts of what he says that are crap from those few trinkets which are good. I don't mean that he should be muzzled –he has freedom of speech– but if his objective is to favor the dissemination of scientific education, he should be a model of what he professes. If Tegmark truly believes these ideas are useful, he should damned well be publishing them directly into the scientific literature so that they can be subjected to real peer review. Like all people, this one should face his hubris, the first piece of which is his incredible weakness at stat mech and biology.