Magnets, how do they work? (part 2)

I will talk about the origin of the magnetic dipole construct here. Consider a loop of wire…

You may have noticed that I posted an entry entitled “NMR and Spin flipping (part 2)” which has since disappeared. It turns out that WordPress doesn’t sync so well between its mobile app and its main page: I had an incomplete version of the NMR post on a mobile phone which I accidentally pushed to publish, and it overwrote the completed post that I had finished several days before. Thank you WordPress for not syncing properly! The incomplete version had none of the intended content. As I don’t feel like reconstructing a 5,000 word post right now, I thought I would scale back a bit and bite off a tiny chunk of the big subject of how magnets work. In part, I figure I can use some of what is derived here in the next version of the NMR post, which I intend to rewrite.

So, this will be the continuation of my series about magnets.

Reading through the initial magnets post, you will see that I did a rather spectacular amount of math, some of it unquestionably uncalled for. But, hey, the basic point of a blog is excess. One windfall of all that math is an important theoretical construct which turns out to be one of the biggest contributors to the explanation of how magnets work.

What this has to do with a loop of wire, I’ll come back to…

When an exact answer is not available to a physics question, one of the go-to strategies used by physicists is series approximation. Often, the low orders of a series tend to contribute to solutions more strongly than the high orders, meaning that the first couple terms in an expansion can be good approximations. One such expansion is used in magnetism.
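As a toy illustration (my own sketch, not part of the magnetism math): truncating the geometric series for 1/(1−x) after a couple of terms already lands close to the exact answer when x is small, which is exactly why keeping only the low orders of an expansion can be a good approximation.

```python
def geometric_truncated(x, n_terms):
    """Partial sum of the series 1/(1 - x) = 1 + x + x^2 + ... (valid for |x| < 1)."""
    return sum(x**k for k in range(n_terms))

exact = 1.0 / (1.0 - 0.1)                  # 1.1111...
two_terms = geometric_truncated(0.1, 2)    # 1 + 0.1 = 1.1
three_terms = geometric_truncated(0.1, 3)  # 1 + 0.1 + 0.01 = 1.11
```

Each additional order shrinks the error by another factor of x, so for x = 0.1 two terms are already good to about one percent.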

Recall the relation between the magnetic field and the magnetic vector potential:

$$\mathbf{B}(\mathbf{r}) = \nabla\times\mathbf{A}(\mathbf{r}),\qquad \mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int\frac{\mathbf{J}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^{3}r'$$

This expression is useful because the crazy vector junk is moved outside the integral. The magnetic potential is easier to work with than the magnetic field as a result. The expansion of interest is usually directed at the vector potential and is called the “multipole expansion.” There are many ways to run the multipole expansion, but maybe the easiest (for me) is to come back to our old friends the spherical harmonics Ylm.


$$\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|} = 4\pi\sum_{l=0}^{\infty}\sum_{m=-l}^{l}\frac{1}{2l+1}\,\frac{r_{<}^{\,l}}{r_{>}^{\,l+1}}\,Y_{lm}^{*}(\theta',\phi')\,Y_{lm}(\theta,\phi)$$

In the vector potential of the magnetic field, that |r−r′| factor in the denominator is really hard to work with. By itself, it is usually too complicated to integrate over. The multipole expansion lets us replace it with something that can be calculated. In this expansion, r is the location where we’re looking for the field, while r′ is where the current which sources the field is located. The expansion converts the difference between these (the propagator which pushes influence from the location of the current to the location of the field) into an infinite series of terms: in the sum, r< is whichever of the two distances is lesser, while r> is whichever is greater. If you’re looking at a location inside the current distribution, r′ is bigger than r… but if you’re looking at a location outside of the current distribution, r is bigger than r′. The Ylms appear because space has a spherical polar geometry.

The substitution changes the form of the vector potential:

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \mathbf{J}(\mathbf{r}')\left[4\pi\sum_{l=0}^{\infty}\sum_{m=-l}^{l}\frac{1}{2l+1}\,\frac{r_{<}^{\,l}}{r_{>}^{\,l+1}}\,Y_{lm}^{*}(\theta',\phi')\,Y_{lm}(\theta,\phi)\right]d^{3}r'$$


The vector potential is now a sum of an infinite number of terms inside the integral. You still can’t just compute that, because the sequence converges to 1/|r−r′|, which you couldn’t calculate directly anyway. What you can do is introduce a cut-off. This is literally where the multipole terms all come from: instead of calculating the entire series all at once, you only calculate one term (or one level of terms, as the case may be). If you take l=0, you get the monopole term; if you take l=1, you get the dipole term; and so on and so forth for higher orders of l.

Since I’m interested in magnetic dipoles right now, this is the crux: I’ve simply called the l=1 term “the dipole” by definition. Further, I care only about locations where I’m looking for the magnetic field well outside of the dipole, since I’m not going to look directly inside of the bar magnet to start with, so that r>r’. For the dipole, l=1 and I only care about m=-1,0 and 1 of the Ylms. This collapses the sum to just three terms.

$$\mathbf{A}_{l=1}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{4\pi}{3}\int \mathbf{J}(\mathbf{r}')\,\frac{r'}{r^{2}}\sum_{m=-1}^{1}Y_{1m}^{*}(\theta',\phi')\,Y_{1m}(\theta,\phi)\,d^{3}r'$$

If you’ve spent any time messing around with either E&M or quantum, you may remember those three Ylms off the top of your head. They’re basically just sines and cosines.

$$Y_{1,0} = \sqrt{\frac{3}{4\pi}}\,\cos\theta,\qquad Y_{1,\pm 1} = \mp\sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{\pm i\phi}$$

I will note, this whole expansion can be done in terms of Legendre polynomials too, but I remember the Ylms better. For some expedience, I will focus on the Ylm part of the integral in order to help bring it into a more manageable form before moving on.

$$\sum_{m=-1}^{1}Y_{1m}^{*}(\theta',\phi')\,Y_{1m}(\theta,\phi) = \frac{3}{4\pi}\left[\cos\theta\cos\theta' + \sin\theta\sin\theta'\cos(\phi-\phi')\right]$$

There’s a lot of trig in here, but the final form is much more manageable than where I started. If you squint really hard at this, you’ll realize that it’s a dot product of the cartesian forms of the hatted unit vectors r̂ and r̂′. So, it’s just a dot product of cartesian unit vectors…

$$\sum_{m=-1}^{1}Y_{1m}^{*}Y_{1m} = \frac{3}{4\pi}\,\hat{r}\cdot\hat{r}',\qquad \mathbf{A}_{l=1}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{1}{r^{2}}\int \mathbf{J}(\mathbf{r}')\,r'\,\hat{r}(\theta,\phi)\cdot\hat{r}'(\theta',\phi')\,d^{3}r'$$

This dials down to just a dot product of two unit vectors pointing in the directions toward either where the current is located or where the field is. I’ve installed it in the vector potential in the last line. I note explicitly that both of these are functions of the spherical polar angles since this will be important when I start working integrals.

If all things were equal, I could start doing calculus right now. Unfortunately, I don’t know the form of the current vector. That could be any distribution of currents imaginable, and not all of them have pure dipole contributions. Working the problem as is, the set-up will respond to the dipole moment of whatever J-current I choose to install. You could imagine a case with a non-zero current where this particular integral goes to zero: if I did a line of current going in some constant direction, that would probably kill this integral. But, I do know of one current distribution in particular that has a very high dipole contribution… you might recognize this as post hoc reasoning, but I’m doing this to try to focus our attention on how one particular term in the multipole expansion behaves. The current distribution which is most interesting here is a loop of wire with an electric current circling it.

7 dipole current

I’ve sketched out the current vector here as well as a set of axes showing the relationship between the spherical polar and cartesian coordinates where the unit vectors are all labeled. This vector current is just a current ‘I’ constrained to the X-Y plane, maintaining a loop around the origin at a radius of R. The current runs in a direction phi, which is tangential to the loop in a counterclockwise sense, and presumably has a positive current definition. The delta functions do the constraining to the X-Y plane. The factor of sine and radius in the denominator is a correction for use of the delta function in a spherical polar measure. The factor 2 is included to avoid a double-counting problem with a loop which shows up more explicitly, for example, in Jackson E&M, where the definition of the magnetic dipole moment is directly written with respect to the current vector. You’ll be happy to know that pretty much none of my work here actually follows Jackson, though the set-up is based strongly on the methods used in Jackson (I hated how Jackson set up his delta functions because I found them opaque as hell! But, that’s Jackson for you…)

The measure of integration is the typical spherical polar measure. You may remember my defining this in my post on the radial solution of the hydrogen atom. I’ll just quote it here. If you’ve done any vector calculus, it should be familiar anyway.

$$d^{3}r' = r'^{\,2}\sin\theta'\,dr'\,d\theta'\,d\phi'$$

I can then put these all together in the vector potential, collect the terms and begin solving it.

9 beginning integration

In the third line, I pulled everything out front that I don’t need inside the integral. The radial portion of the integral collapses on the delta function. The angular portion is somewhat harder because it involves a couple unit vectors that vary with the angles; one of the unit vectors, the unprimed r, could actually be pulled outside the integral, but I left it in to help display a useful construct that will help me simplify the integral again. I will again focus on the vector portion inside this integral:

10 unit vector adjustment

This use of the BAC-CAB rule allows me to change the unit vectors around into a cross product and flip the direction slightly. In the next step, by converting the theta unit vector into a cartesian form, the integral becomes trivial.

11 solving integral

This solves the integral. Use of the delta function guts the theta coordinate and no remaining dependence exists for phi. After the hatted unit vectors are decoupled from the integration coordinates, the cross product gets pulled out front in an uncomplicated form. You can then collect and cancel in what remains:

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{\mathbf{m}\times\hat{r}}{r^{2}},\qquad \mathbf{m} = I\,\pi R^{2}\,\hat{z}$$

Here, I’ve collected a particular quantity dependent on electric current running around in a loop, which I have called a “magnetic dipole moment.” I conspired pretty strongly to get all the variable terms to pop out in a form that people will find familiar. A magnetic dipole is simply a current loop which, it turns out, can be of arbitrary shape. The moment is always right-hand defined, as above: “current × area,” pointed in the direction normal to the area. This object could simply be a wire loop. At this point, you should be having images of stereotypical electromagnets, which are many wire loops wrapped around some solid core. This electrical current configuration is very special because of the magnetic field that it tends to produce.

As an aside, I’ve seen the dipole moment derived in a much simpler fashion than presented here, but my purpose was to be a bit more complete without actually duplicating Jackson… which I’ve mostly avoided, believe it or not… and to produce the form which can generate the whole dipole magnetic field, which can’t be done in the E&M 102 variety derivation. The simple derivation tends to operate on the axis of the magnetic dipole only, and does not calculate the shape of the field elsewhere in space. To get the whole field, you need to be a bit more sophisticated.

Magnetic field is produced by taking the curl of the vector potential, as I wrote far above. The fastest way I’ve found to take this curl is using the spherical polar definition of the curl, found here. You can derive this form of the curl in a manner very similar to what I did in my hydrogen atom radial equation post, but I’m going to hold off deriving it here: I’m somewhat short on time and I had hoped that this post wouldn’t get too long.

$$\mathbf{m}\times\hat{r} = m\sin\theta\,\hat{\phi}\quad\Longrightarrow\quad \mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{m\sin\theta}{r^{2}}\,\hat{\phi}$$

My starting point here is to figure out how much of the curl I actually need. If you massage the terms inside the vector potential, you rapidly discover that only one of the three vector components is present, thus simplifying the curl. And, of course, to get to the magnetic field from here, I just need to take a curl…

$$\mathbf{B} = \nabla\times\mathbf{A} = \frac{\hat{r}}{r\sin\theta}\,\frac{\partial}{\partial\theta}\left(\sin\theta\,A_{\phi}\right) - \frac{\hat{\theta}}{r}\,\frac{\partial}{\partial r}\left(r\,A_{\phi}\right) = \frac{\mu_0\,m}{4\pi r^{3}}\left(2\cos\theta\,\hat{r} + \sin\theta\,\hat{\theta}\right)$$

The last thing I end up with here is an accepted form for the dipolar magnetic field:

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{3(\mathbf{m}\cdot\hat{r})\,\hat{r} - \mathbf{m}}{r^{3}}$$

This is an exact solution for the magnetic field from a current dipole. This particular solution depends on the assumption that the distance to where you’re examining the field is large compared to the size of the loop; for real physical dipoles of appreciable size, there can be other non-zero terms in the multipole expansion, meaning that the field will be predominantly what’s written here, with some small deviations.
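This far-field claim is easy to sanity-check numerically. The sketch below is my own illustrative code (the function names are made up): it sums Biot-Savart contributions around a small circular loop and compares the on-axis field against the dipole formula with m = IπR². Far from the loop, the two agree to a fraction of a percent.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability in SI units (T*m/A)

def loop_bz_on_axis(I, R, z, n=1000):
    """Numerically sum Biot-Savart contributions for a circular loop of radius R
    carrying current I in the x-y plane, evaluated on the z axis at height z."""
    bz = 0.0
    dphi = 2.0 * math.pi / n
    for k in range(n):
        phi = (k + 0.5) * dphi
        # position of the current element and its direction (tangent to the loop)
        px, py = R * math.cos(phi), R * math.sin(phi)
        dlx, dly = -R * math.sin(phi) * dphi, R * math.cos(phi) * dphi
        # separation vector s = r - r' from the element to the field point (0, 0, z)
        sx, sy, sz = -px, -py, z
        s3 = (sx * sx + sy * sy + sz * sz) ** 1.5
        # dB_z = (mu0 I / 4 pi) * (dl x s)_z / |s|^3
        bz += MU0 * I / (4.0 * math.pi) * (dlx * sy - dly * sx) / s3
    return bz

def dipole_bz_on_axis(I, R, z):
    """On-axis field of the ideal dipole formula, with moment m = I * pi * R^2."""
    m = I * math.pi * R * R
    return MU0 * 2.0 * m / (4.0 * math.pi * z**3)

# far from a small loop, the two agree closely
b_loop = loop_bz_on_axis(1.0, 0.01, 1.0)
b_dip = dipole_bz_on_axis(1.0, 0.01, 1.0)
```

Moving the field point in close to the loop makes the two answers split apart, which is the higher-multipole contamination of a physically sized loop.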

Admittedly, this mathematical equation doesn’t have a very intuitive form. Why in the world do I care about deriving this particular equation? To understand, we need some choice pictures…

edit 10-25-17: It seemed kind of ridiculous that I worked through all that math to find the dipole field and then stole other people’s diagrams of it. For completeness, here’s a vector plot of mine in Mathematica of the field equation written above:

magnetic dipole.jpg

Another magnetic dipole picture with the location of the dipole explicitly drawn in:


This image, where the dipole is rotated by 90 degrees from how I plotted it, is taken from Wikipedia.

My interest in this field becomes more obvious when compared side-by-side with the magnetic field produced by a bar magnet…

The field produced by a bar magnet is very similar in shape to the field produced by the loop of wire. Going further out, here is a diagram of the magnetic field produced by the Earth:


Notice some similarity? You’ll notice the Earth’s field lines are assigned to point oppositely from my diagram above, but that has to do with how compass needles orient rather than from any actual fundamental difference in the field!

Physical ferromagnets frequently have dipolar magnetic fields. As such, the magnetic dipole moment is a quantity of huge physical importance. Granted, the field of the Earth isn’t perfectly dipolar, but it has an overwhelming dipole contribution. Other planets also have fields that are dipolar in shape.

Understanding how magnets (compass needles, bar magnets, and most sorts of permanent magnets) work requires dipolar behavior as the underlying structure. Even the NMR post that got ruined was about a quantum mechanical phenomenon which revolves around magnetic dipoles.

This is a large step forward. I haven’t explained much, but I will write another post later showing why it is that magnets, particularly dipoles, respond to magnetic fields, as well as what the source of magnetism is in ferromagnets (no off-switch on the current for God’s sake!) Stay tuned for part 3!

edit 11-5-17

Playing around with matplotlib, I constructed a streamplot of the magnetic field produced by three dipoles, all flattened into the same plane and oriented facing different directions in that plane. This is all just superpositions using the field determined above. Kind of pretty…
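For anyone who wants to reproduce that figure, here’s a minimal sketch (my own code, with made-up names) of the superposition: evaluate the dipole field formula derived above for each dipole, restricted to the shared plane, and sum the contributions. Feeding the resulting field components over a grid to matplotlib’s streamplot produces plots like the one described.

```python
import math

MU0_OVER_4PI = 1e-7  # mu0 / (4 pi) in SI units

def dipole_field_2d(m, pos, src=(0.0, 0.0)):
    """In-plane field of a point dipole with moment m (a 2-vector) located at src,
    evaluated at pos, using B = (mu0/4pi) (3(m.rhat)rhat - m) / r^3."""
    dx, dy = pos[0] - src[0], pos[1] - src[1]
    r = math.hypot(dx, dy)
    rx, ry = dx / r, dy / r
    mdotr = m[0] * rx + m[1] * ry
    f = MU0_OVER_4PI / r**3
    return (f * (3.0 * mdotr * rx - m[0]), f * (3.0 * mdotr * ry - m[1]))

def total_field(dipoles, pos):
    """Superpose several (moment, position) dipoles; sampling this over a grid
    gives the (bx, by) arrays a streamplot would render."""
    bx = sum(dipole_field_2d(m, pos, s)[0] for m, s in dipoles)
    by = sum(dipole_field_2d(m, pos, s)[1] for m, s in dipoles)
    return bx, by
```

Note that at a given distance the on-axis field is exactly twice the equatorial field, which is what gives the stream plot its elongated lobes.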



Magnets, how do they work? (part 1)

Subtitle: Basic derivation of Ampere’s Law from the Biot-Savart equation.

Know your meme.

It’s been a while since this became a thing, but I think it’s actually a really good question. Truly, the original meme exploded from an unlikely source that wanted to relish the magic of these things without really appreciating how mind-bending and thought-expanding the explanation to this seemingly earnest question actually is.

As I got on in this writing, I realized that the scope of the topic is bigger than can be tackled in a single post. What is presented here will only be the first part (though I haven’t yet had a chance to write later parts!) The succeeding posts may end up being as mathematical as this, but perhaps less so. Moreover, as I got to writing, I realized that I haven’t posted a good bit of math here in a while: what good is the mathematical poetry of physics if nobody sees it?

Magnets do not get less magical when you understand how they work: they get more compelling.


This image, taken from a website that sells quackery, highlights the intriguing properties of magnets. A solid object with apparently no moving parts has this manner of influencing the world around it. How can that not be magical? Lodestones have been magic forever and they do not get less magical with the explanation.

Truthfully, I’ve been thinking about the question of how they work for a couple days now. When I started out, I realized that I couldn’t just answer this out of hand, even though I would like to think that I’ve got a working understanding of magnetic fields –this is actually significant to me because the typical response to the Insane Clown Posse’s somewhat vacuous pondering is not really as simple as “Well, duh, magnetic fields you dope!” Someone really can explain how magnets work, but the explanation is really not trivial. That I got to a level in asking how they work where I said, “Well, um, I don’t really know this,” got my attention. How the details fit together gets deep in a hurry. What makes a bar magnet like the one in the picture above special? You don’t put batteries in it. You don’t flick a switch. It just works.

For most every person, that pattern above is the depth of how it works. How does it work? Well, it has a magnetic field. And, everybody has played with magnets at some point, so we sort of all know what they do, if not how they do it.


In this picture from penguin labs, these magnets are exerting sufficient force on one another that many of them apparently defy gravity. Here, the rod simply keeps the magnets confined so that they can’t change orientations with respect to one another and they exert sufficient repulsive force to climb up the rod as if they have no weight.

It’s definitely cool, no denying. There is definitely a quality to this that is magical and awe inspiring.

But, is it better knowing how they work, or just blindly appreciating them because it’s too hard to fill in the blank?

The central feature of how magnets work is quite effortlessly explained by the physics of Electromagnetism. Or, maybe it’s better to say that the details are laboriously and completely explained. People rebel against how hard it is to understand the details, but no true explanation is required to be easily explicable.

The forces which hold those little pieces of metal apart are relatively understandable.

$$\mathbf{F} = q\mathbf{E} + q\,\mathbf{v}\times\mathbf{B}$$

Here’s the Lorentz force law. It says that the force (F) on an object with a charge is equal to the sum of the electric force on the object (qE) and the magnetic force (qv×B). Magnets interact solely by the magnetic force, the second term.


In this picture from Wikipedia, if a charge (q) moving with speed (v) passes into a region containing this thing we call a “magnetic field,” it will tend to curve in its trajectory depending on whether the charge is negative or positive. We can ‘see’ this magnetic field thing in the image above with the bar magnet and iron filings. What is it, how is it produced?

The fundamental observation of magnetic fields is tied up into a phenomenological equation called the Biot-Savart law.


This equation is immediately intimidating. I’ve written it in all of its horrifying Jacksonian glory. You can read this equation like a sentence. It says that all the magnetic field (B) you can find at a location in space (r) is proportional to a sum of all the electric currents (J) at all possible locations where you can find any current (r’), and inversely proportional to the square of the distance between where you’re looking for the magnetic field and where all the electrical currents are. It may say ‘inverse cube’ in the equation, but it’s actually an inverse square, since there’s a full power of length in the numerator. Yikes, what a sentence!

Additionally, the equation says that the direction of the magnetic field is at right angles to both the direction that the current is traveling and the direction given by the line between where you’re looking for magnetic field and where the current is located. These directions are all wrapped up in the arrow scripts on every quantity in the equation and are determined by the cross product, denoted by the ‘×’. The difference between the two ‘r’ vectors in the numerator creates a pure direction between the location of a particular current element and where you’re looking for magnetic field. The ‘d’ at the end is the differential volume that confines the electric currents and simply means that you’re adding up locations in 3D space.

The scaling constants outside the integral sign are geometrical and control strength: the 4 and Pi relate to the dimensionality of the field source radiated out into a full solid angle (they cover a singularity in the field due to the location of the field source), and the ‘μ’ essentially tells how space broadcasts magnetic field, where the constant ‘μ’ is closely tied to the speed of light. This equation has the structure of a propagator: it takes an electric current located at r’ and propagates it into a field at r.

It may also be confusing to you that I’m calling current ‘J’ when nearly every basic physics class calls it ‘I’… well, get used to it. The ‘current vector’ (really a current density, current per unit area, pointing along the flow) is a subtle variation of current.

I looked for some diagrams to help depict Biot-Savart’s components, but I wasn’t satisfied with what Google coughed up. Here’s a rendering of my own with all the important vectors labeled.

biotsavart diagram

Now, I showed the crazy Biot-Savart equation, but I can tell you right now that it is a pain in the ass to work with. Very few people wake up in the morning and say “Boy oh boy, Biot-Savart for me today!” For most physics students this equation comes with a note of dread. Directly using it to analytically calculate magnetic fields is not easy. That cross product and all the crazy vectors pointing in every which direction make this equation a monster. There are some basic features here which are common to many fields: particularly the inverse square, which you can find in the Newtonian gravity formula or Coulomb’s law for electrostatics, and the field being proportional to some source, in this case an electric current, where gravity has mass and electrostatics has charge.

Magnetic field becomes extraordinary because of that flipping (God damned, effing…) cross product, which means that it points in counter-intuitive directions. With electrostatics and gravity, the field is usually going toward or away from the source, while magnetism has the field going ‘around’ the source. Moreover, unlike electrostatics and gravity, the source isn’t exactly a something, like a charge or a mass; it’s dynamic… as in a change in state: electric charges are present in a current, but if you have those charges sitting stationary, even though they are still present, they can’t produce a magnetic field.

Moreover, if you neutralize the charge, a magnetic field can still be present if those now invisible charges are moving to produce a current: current flowing in a copper wire is electric charges moving along the wire, and this produces a magnetic field around the wire, but the presence of positive charges fixed to the metal atoms of the wire neutralizes the negative charges of the moving electrons, resulting in a state of otherwise net neutral charge. So, no electrostatic field, even though you have a magnetic field. It might surprise you to know that neutron stars have powerful magnetic fields, even though essentially no free electrons or protons are present to give any ordinary electric currents at all. The requirement for moving charges to produce a magnetic field is not inconsistent with the moving charge required to feel force from a magnetic field. Admittedly, there’s more to it than just ‘currents,’ but I’ll get to that in another post.

With a little bit of algebraic shenanigans, Biot-Savart can be twisted around into a slightly more tractable form called Ampere’s Law, which is one of the four Maxwell’s equations that define electromagnetism. I had originally not intended to show this derivation, but I had a change of heart when I realized that I’d forgotten the details myself. So, I worked through them again just to see that I could. Keep in mind that this is really just a speed bump along the direction toward learning how magnets work.

For your viewing pleasure, the derivation of the Maxwell-Ampere law from the Biot-Savart equation.

In starting to set up for this, there are a couple fairly useful vector identities.

$$\text{(I1)}\quad \nabla\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right) = -\frac{\mathbf{r}-\mathbf{r}'}{\left|\mathbf{r}-\mathbf{r}'\right|^{3}}
\qquad
\text{(I2)}\quad \nabla^{2}\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right) = -4\pi\,\delta^{3}(\mathbf{r}-\mathbf{r}')
\qquad
\text{(I3)}\quad \nabla\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right) = -\nabla'\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right)$$

This trio of basic differential identities can be very useful in this particular derivation. Here, the variables r are actually vectors in three dimensions. For those of you who don’t know these things, all it means is this:
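In symbols, the position vectors are just:

```latex
\mathbf{r} = x\,\hat{x} + y\,\hat{y} + z\,\hat{z},
\qquad
\mathbf{r}' = x'\,\hat{x} + y'\,\hat{y} + z'\,\hat{z}
```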


These can be diagrammed like this:

vector example

This little diagram just treats the origin like the corner of a 3D box and each distance is a length along one of the three edges emanating from the corner.

I’ll try not to get too far afield with this quick vector tutorial, but it helps to understand that this is just a way to wrap up a 3D representation inside a simple symbol. The hatted symbols of x,y and z are all unit vectors that point in the relevant three dimensional directions where the un-hatted symbols just mean a variable distance along x or y or z. The prime (r’) means that the coordinate is used to tell where the electric current is located while the unprime (r) means that this is the coordinate for the magnetic field. The upside down triangle is an operator called ‘del’… you may know it from my hydrogen wave function post. What I’m doing here is quite similar to what I did over there before. For the uninitiated, here are gradient, divergence and curl:
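Written out in cartesian components, these three operations are:

```latex
\nabla f = \frac{\partial f}{\partial x}\,\hat{x} + \frac{\partial f}{\partial y}\,\hat{y} + \frac{\partial f}{\partial z}\,\hat{z},
\qquad
\nabla\cdot\mathbf{v} = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z}

\nabla\times\mathbf{v} = \left(\frac{\partial v_z}{\partial y}-\frac{\partial v_y}{\partial z}\right)\hat{x}
+ \left(\frac{\partial v_x}{\partial z}-\frac{\partial v_z}{\partial x}\right)\hat{y}
+ \left(\frac{\partial v_y}{\partial x}-\frac{\partial v_x}{\partial y}\right)\hat{z}
```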


Gradient works on a scalar function to produce a vector, divergence works on a vector to produce a scalar function and curl works on a vector to produce a vector. I will assume that the reader can take derivatives and not go any further back than this. The operations on the right of the equal sign are wrapped up inside the symbols on the left.

One final useful bit of notation here is the length operation. The length operation just finds the length of a vector and is denoted by vertical bars, like an absolute value. Everywhere I’ve used it, I’ve applied it to the vector obtained by taking the difference between where two different vectors point:
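Explicitly:

```latex
\left|\mathbf{r}-\mathbf{r}'\right| = \sqrt{(x-x')^{2}+(y-y')^{2}+(z-z')^{2}}
```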


As you can see, notation is all about compressing operations away until they are very compact. The equations I’ve used to this point all contain a great deal of math lying underneath what is written, but you can muddle through by the examples here.

Getting back to my identity trio:

$$\text{(I1)}\quad \nabla\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right) = -\frac{\mathbf{r}-\mathbf{r}'}{\left|\mathbf{r}-\mathbf{r}'\right|^{3}}
\qquad
\text{(I2)}\quad \nabla^{2}\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right) = -4\pi\,\delta^{3}(\mathbf{r}-\mathbf{r}')
\qquad
\text{(I3)}\quad \nabla\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right) = -\nabla'\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right)$$

The first identity here (I1) takes the vector object written on the left and produces a gradient from it… the denominator of that function is the length of the difference between those two vectors, which is simply a scalar number without a direction, as shown in the length operation written above.

The second identity (I2) here takes the divergence of the gradient and reveals that it’s the same thing as a Dirac delta (incredibly easy way to kill an integral!). I’ve not written the operation as divergence on a gradient, but instead wrapped it up in the ‘square’ on the del… you can know it’s a divergence of a gradient because the function inside the parenthesis is a scalar, meaning that the first operation has to be a gradient, which produces a vector, which automatically necessitates the second operation to be a divergence, since that only works on vectors to produce scalars.
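The delta “kills” an integral through its sifting property, which collapses the whole integral to the value of the integrand at a single point:

```latex
\int f(\mathbf{r}')\,\delta^{3}(\mathbf{r}-\mathbf{r}')\,d^{3}r' = f(\mathbf{r})
```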

The third identity (I3) shows that the gradient with respect to the unprimed vector coordinate system is actually equal to a negative sign times the primed coordinate system… which is a very easy way to switch from a derivative with respect to the first r and the same form of derivative with respect to the second r’.

To be clear, these identities are tailor-made to this problem (and similar electrodynamics problems) and you probably will never ever see them anywhere but the *cough cough* Jackson book. The first identity can be proven by working the gradient operation and taking derivatives. The second identity can be proven by using the vector divergence theorem in a spherical polar coordinate system and is the source of the 4*Pi that you see everywhere in electromagnetism. The third identity can also be proven by the same method as the first.

There are two additional helpful vector identities that I used which I produced in the process of working this derivation. I will create them here because, why not! If the math scares you, you’re on the wrong blog. To produce these identities, I used the component decomposition of the cross product and a useful Levi-Civita/Kronecker delta identity. I’m really bad at remembering vector identities, so I put a great deal of effort into learning how to construct them myself: my Levi-Civita technique is crude, but it works well enough. For those of you who don’t know the ol’ Levi-Civita symbol, it’s a pretty nice tool for constructing things in a component-wise fashion: εijk. To make this work, you just have to remember it as I just wrote it… if any two indices are equal, the symbol is zero; if they are all different, it is 1 or -1. If you take the indices in the order ijk, as I wrote them, it equals 1, and it becomes -1 if you swap two of the indices: ijk=1, jik=-1, jki=1, kji=-1 and so on and so forth. Here are the useful Levi-Civita identities as they relate to the cross product:
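In component form, the two standard relations are the component expansion of the cross product and the contraction of two Levi-Civita symbols into Kronecker deltas:

```latex
(\mathbf{a}\times\mathbf{b})_{i} = \epsilon_{ijk}\,a_{j}\,b_{k},
\qquad
\epsilon_{ijk}\,\epsilon_{ilm} = \delta_{jl}\,\delta_{km} - \delta_{jm}\,\delta_{kl}
```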


Using these small tools, the first vector identity that I need is a curl of a curl. I derive it here:

$$\left[\nabla\times(\nabla\times\mathbf{A})\right]_{i} = \epsilon_{ijk}\,\partial_{j}\left(\epsilon_{klm}\,\partial_{l}A_{m}\right) = \left(\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}\right)\partial_{j}\partial_{l}A_{m} = \partial_{i}\left(\partial_{m}A_{m}\right) - \partial_{j}\partial_{j}A_{i}$$

Let’s see how this works. If you follow the math, you’ll note that the Kronecker deltas have the intriguing property of trading out indices in these sums. The Kronecker delta works on a finite sum the same way a Dirac delta works on an integral, which is nothing more than an infinite sum. Also, the summation convention says that if you see duplicated indices without an explicit sum on that index, you associate a sum with that index… this is how I located the divergences in that last step. This identity is a soft stopping point for the double curl: I could have used the derivative product rule to expand it further, but that isn’t needed (if you want to see it get really complex, go ahead and try it! It’s do-able.) One will note that I have the double del applied to a vector here… I said above that it only applies to scalars… in this form, it acts on the scalar portion of each vector component, meaning that you end up with a sum of three terms multiplied by unit vectors. The double del only ever acts on scalars, but you actually don’t need to know that in the derivation below.

This first vector identity I’ve produced I’ll call I4:

$$\text{(I4)}\quad \nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^{2}\mathbf{A}$$

Here’s a second useful identity that I’ll need to develop:

useful vector id 2

This identity I’ll call I5:

vector id 2

*Pant Pant* I’ve collected all the identities I need to make this work. If you don’t immediately know something off the top of your head, you can develop the pieces you need. I will use I1, I2, I3, I4 and I5 together to derive the Maxwell-Ampere Law from Biot-Savart. Most of the following derivation comes from Jackson Electrodynamics, with a few small embellishments of my own.

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \mathbf{J}(\mathbf{r}')\times\frac{\mathbf{r}-\mathbf{r}'}{\left|\mathbf{r}-\mathbf{r}'\right|^{3}}\,d^{3}r'$$

In this first line of the derivation, I’ve rewritten Biot-Savart with the constants outside the integral and everything variable inside. Inside the integral, I’ve split the meat so that the different vector and scalar elements are clear. In what follows, it’s very important to remember that unprimed del operators are in a different space from the primed del operators: a value (like J) that is dependent on the primed position variable is essentially a constant with respect to the unprimed operator and will render a zero in a derivative by the unprimed del. Moreover, the unprimed del can be moved into or out of the integral, which is with respect to the primed position coordinates. This observation is profoundly important to this derivation.

$$\mathbf{B}(\mathbf{r}) = -\frac{\mu_0}{4\pi}\int \mathbf{J}(\mathbf{r}')\times\nabla\!\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}'\right|}\right)d^{3}r' = \frac{\mu_0}{4\pi}\,\nabla\times\int\frac{\mathbf{J}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^{3}r'$$

The usage of the first two identities here manages to extract the cross product from the midst of the function and puts it into a manipulable position where the del is unprimed while the integral is primed, letting me move it out of the integrand if I want.

$$\mathbf{B}(\mathbf{r}) = \nabla\times\left[\frac{\mu_0}{4\pi}\int\frac{\mathbf{J}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,d^{3}r'\right] = \nabla\times\mathbf{A}(\mathbf{r})$$

This intermediate contains another very important magnetic quantity in the form of the vector potential (A), not to be confused with the alphabetical placeholder I used while deriving my vector identities. I may come back to the vector potential later, but this is simply an interesting stop-over for now. From here, we press on toward the Maxwell-Ampere law by acting in from the left with a curl onto the magnetic field…

$$\nabla\times\mathbf{B} \;=\; \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A} \;=\; \frac{\mu_0}{4\pi}\,\nabla\int \mathbf{J}(\mathbf{r}')\cdot\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' \;+\; \mu_0\int \mathbf{J}(\mathbf{r}')\,\delta^3(\mathbf{r}-\mathbf{r}')\,d^3r'$$

The Dirac delta I end with in the final term allows me to collapse r’ into r at the expense of that last integral. At this point, I’ve actually produced the magnetostatic Ampere’s law if I feel like claiming that the current has no divergence, but I will talk about this later…
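The delta-function behavior can be checked symbolically away from the singular point: the Laplacian of 1/r vanishes everywhere except the origin, so all the delta's weight sits at r = r′. A minimal sketch with sympy (my own check, not from the post):

```python
# The Laplacian of 1/|r| is zero away from the origin, which is why
# del^2 (1/|r - r'|) behaves as a Dirac delta centered at r = r'.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
dist = sp.sqrt(x**2 + y**2 + z**2)

# Sum of second derivatives = Laplacian in Cartesian coordinates.
lap = sum(sp.diff(1 / dist, v, 2) for v in (x, y, z))
print(sp.simplify(lap))  # 0 (away from the origin)
```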

$$\nabla\times\mathbf{B} \;=\; -\frac{\mu_0}{4\pi}\,\nabla\int \mathbf{J}(\mathbf{r}')\cdot\nabla'\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' \;+\; \mu_0\,\mathbf{J}(\mathbf{r})$$

This substitution switches the del from unprimed to primed, putting it in the same coordinates as the current density J. Next, I use integration by parts to switch which element of the first term the primed del acts on.

$$\nabla\times\mathbf{B} \;=\; -\frac{\mu_0}{4\pi}\,\nabla\left[\Big[\frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\Big]_{\text{boundary}} \;-\; \int \frac{\nabla'\cdot\mathbf{J}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r'\right] \;+\; \mu_0\,\mathbf{J}(\mathbf{r})$$

Were I being really careful about how I depicted the integration by parts, there would be a unit vector dotted into the J in order to turn it into a scalar sum in that first term ahead of the integral… this is a little sloppy on my part, but nobody ever cares about that term anyway because it’s presupposed to vanish at the limits where it’s being evaluated. This is a physicist trick similar to pulling a rug over a mess on the floor –I’ve seen it performed in many contexts.

$$\nabla\times\mathbf{B} \;=\; -\frac{\mu_0}{4\pi}\,\nabla\int \frac{1}{|\mathbf{r}-\mathbf{r}'|}\,\frac{\partial\rho(\mathbf{r}',t)}{\partial t}\,d^3r' \;+\; \mu_0\,\mathbf{J}(\mathbf{r})$$

This substitution is not one of the mathematical identities I created above; this is purely physics. Here I've used conservation of charge –the continuity equation– to connect the divergence of the current density to the rate of change of the charge density. If you don't recognize the epic nature of this particular substitution, take my word for it… I've essentially promoted magnetostatics into electrodynamics by affirming that a 'current' is really just moving charge.

$$\nabla\times\mathbf{B} \;=\; -\frac{\mu_0}{4\pi}\,\frac{\partial}{\partial t}\int \rho(\mathbf{r}',t)\,\nabla\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' \;+\; \mu_0\,\mathbf{J}(\mathbf{r})$$

In this line, I’ve switched the order of the derivatives again. Nothing in the integral is dependent on time except the charge density, so almost everything can pass through the derivative with respect to time. On the other hand, only the distance is dependent on the unprimed r, meaning that the unprimed del can pass inward through everything in the opposite direction.

$$\nabla\times\mathbf{B} \;=\; \mu_0\varepsilon_0\,\frac{\partial}{\partial t}\underbrace{\left[\frac{1}{4\pi\varepsilon_0}\int \rho(\mathbf{r}',t)\,\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3}\,d^3r'\right]}_{\mathbf{E}(\mathbf{r},t)} \;+\; \mu_0\,\mathbf{J}(\mathbf{r})$$

At this point, something amazing has emerged from the math. Pardon the pun; I'm feeling punchy. The quantity I've highlighted is a form of Coulomb's law! If that name doesn't tickle you at the base of your spine, what you're looking at is the electrostatic cousin of the Biot-Savart law, the law that makes electric fields from electric charges. This is one of the reasons I like this derivation and why I decided to go ahead and detail the whole thing: it shows explicitly a connection between magnetism and electrostatics where such a connection was not previously clear.

$$\nabla\times\mathbf{B} \;=\; \mu_0\,\mathbf{J} \;+\; \mu_0\varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t}$$

And thus ends the derivation. In this casting, the curl of the magnetic field depends both on the time variation of the electric field and on currents. If there is no time-varying electric field, the first term vanishes and you get the plain old magnetostatic Ampere's law:

$$\nabla\times\mathbf{B} \;=\; \mu_0\,\mathbf{J}$$

This says simply that the curl of the magnetic field is proportional to the current density. There are some interesting qualities to this equation because the derivation leaves only a single positional dependence. As you can see, there is no separate position coordinate describing the magnetic field independently from its source. And, really, it isn't saying that the magnetic field is 'generated' by the current, but rather that a twist in the magnetic field is due to the presence of a current at that same location… which is an interesting way to relate the two.

This relationship tends to make magnetic field lines orbit around the current vector.


This image from hyperphysics sums up the whole situation –I realize I've been saying something similar all along, but this equation is the proof. If you have current passing along a wire, the magnetic field will tend to wrap around the wire in a right-handed sense. For all intents and purposes, this is everything Ampere's law says, neglecting that you can manipulate the geometry of the situation to make the field do some interesting things. But, this is all.
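This wrapping can be checked numerically: chopping a long straight wire into small segments and summing their Biot-Savart contributions reproduces the textbook infinite-wire field μ0 I/(2πs), pointing purely in the circulating direction. A rough numerical sketch (all parameter values are my own choices for illustration):

```python
# Numerical Biot-Savart for a long straight wire on the z-axis, field point at (s, 0, 0).
import numpy as np

mu0 = 4e-7 * np.pi   # vacuum permeability (SI)
I = 1.0              # current, amperes
s = 0.1              # distance from the wire, meters

# Wire segments dl = dz ez over a length much longer than s.
zs = np.linspace(-500.0, 500.0, 200001)
dz = zs[1] - zs[0]

# Displacement from a segment at (0, 0, z) to the field point is (s, 0, -z),
# and ez x (s, 0, -z) = (0, s, 0): every segment contributes along +ey,
# i.e. the field wraps around the wire in a right-handed sense.
r = np.sqrt(s**2 + zs**2)
B_y = (mu0 * I / (4 * np.pi)) * (s * dz / r**3).sum()

exact = mu0 * I / (2 * np.pi * s)  # infinite-wire result
print(B_y, exact)  # both are about 2e-06 tesla
```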

Well, so what? I did a lot of math. What, if anything, have I gained from it? How does this help me along the path to understanding magnets?

Ampere's law is useful for generating very simple magnetic field configurations that can be fed into the Lorentz force law, ultimately showing a direct dynamical connection between moving currents and magnetic fields. I have it in mind to show a freshman-level example of how this is done in the next part of this series. Given the length of this post, I'll save that math for a separate post.

This is a big step in the direction of learning how magnets work, but it should leave you feeling a little unsatisfied. How exactly do the forces work? In physics, it is widely known that magnetic fields do no work, so why is it that bar magnets can drag each other across the counter? That sure looks like work to me! And if electric currents are necessary to drive magnets, why is it that bar magnets and horseshoe magnets don’t require batteries? Where are the electric currents that animate a bar magnet and how is it that they seem to be unlimited or unpowered? These questions remain to be addressed.

Until the next post…