# Parity symmetry in Quantum Mechanics

I haven’t written about my problem play for a while. Since I last wrote about rotational problems, I’ve gone through the whole of Sakurai chapter 4, which is an introduction to symmetry. At the moment, I’m reading chapter 5 while still thinking about some of the last few problems in chapter 4.

I admit that I had a great deal of trouble getting motivated to attack the chapter 4 problems. When I first saw symmetry in class, I did not particularly understand it, and coming back to it on my own was not much better. Abstract symmetry is not easy to understand.

In chapter 4, Sakurai delves into a few different symmetries that are important to quantum mechanics, and pretty much all of them are difficult to see at first. As it turns out, some of these symmetries are very powerful tools. For example, applying the reflection symmetry operation to a chiral molecule (at the C-alpha carbon of an amino acid, say, or one of the hydroxyl-bearing carbons of a sugar) reveals a neighboring degenerate ground state which can be reached by racemization, where an atomic substituent of the molecule tunnels through the plane of the molecule and reverses the chirality of the state at some infrequent rate. Another example is the translation symmetry operation: a lattice of identical attractive potentials hides a nearly infinite number of identical states, and a bound particle can hop from one minimum to the next and traverse the lattice. This behavior is essentially a simplified model of the passage of electrons through a crystalline semiconductor.
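The lattice-hopping picture can be sketched numerically with a toy tight-binding model (my own illustration, not something from Sakurai; the function name and parameters are invented for the sketch): a ring of N identical sites with tunneling amplitude t between neighbors, whose eigenvalues spread into a band of delocalized states.

```python
import numpy as np

def tight_binding_ring(n_sites, t=1.0):
    """Hamiltonian for a particle hopping on a ring of identical sites:
    each site is one potential minimum, and -t is the amplitude to
    tunnel to a neighboring minimum."""
    h = np.zeros((n_sites, n_sites))
    for i in range(n_sites):
        nxt = (i + 1) % n_sites     # periodic boundary: last site wraps to first
        h[i, nxt] = h[nxt, i] = -t
    return h

# The single bound level broadens into a band E_k = -2 t cos(2 pi k / N)
# of delocalized states -- a cartoon of a band in a crystalline solid.
n = 8
energies = np.sort(np.linalg.eigvalsh(tight_binding_ring(n)))
band = np.sort(-2.0 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(energies, band)
```

The eigenvectors are spread over every site at once, which is the "hopping hides a huge family of identical states" idea in miniature.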

One of the harder symmetries was time reversal symmetry. I shouldn’t say “one of the harder”; for me, time reversal was the hardest to understand, and I would be hesitant to say that I completely understand it yet. The time reversal operator runs time backward, reversing momenta and angular momenta. Time reversal is really hard because the operator is anti-unitary, meaning that it complex-conjugates whatever it operates on (every i gets flipped to -i). Nevertheless, time reversal has some interesting outcomes. For instance, if a spinless particle is bound to a fixed center and the state in question is not degenerate (only one state at the given energy), time reversal says that the state can have no average angular momentum (it can’t be rotating or orbiting). On the other hand, if the particle has half-integer spin, the bound state must be degenerate because the particle can’t have zero angular momentum! (This is the famous Kramers degeneracy.)

A quick digression here for the layman: in quantum mechanics, the word “degenerate” refers to situations where multiple distinct states share the same energy and cannot be told apart by an energy measurement. Degeneracy is very important in quantum mechanics because certain situations contain only enough information to pin down an incomplete picture of the model, where more information would be needed to distinguish alternative answers… the coexisting alternatives subsist in superposition, meaning that a wave function is in a superposition of its degenerate alternative outcomes if there is no way to distinguish among them. This is part of how entanglement arises: you can generate entanglement by creating a situation where discrete parts of the system jointly occupy degenerate states encompassing the whole system. The discrete parts become entangled.

Symmetry is important because it provides a powerful tool for breaking apart degeneracy. A set of degenerate states can often be distinguished from one another by exploiting the symmetries present in the system. L- and R-enantiomers of a molecule are related by a reflection symmetry at a stereocenter, meaning that there are two states of indistinguishable energy that are reflections of one another. People don’t often notice it, but chemists are masters of quantum mechanics even though they typically don’t know as much of the math: how you build molecules is totally governed by quantum mechanics, and chemists must understand the qualitative results of the physical models. I’ve seen chemists speak competently of symmetry transformations in places where physicists sometimes have problems.

Another place where symmetry is important is in the search for new physics. The way to discover new physical phenomena is to look for observational results that break the expected symmetries of a given mathematical model. The LHC was built to explore symmetries. Currently known models are said to hold CPT symmetry, referring to Charge, Parity and Time Reversal symmetry… I admit that I don’t understand all the implications of this, but simply put, if you make an observation that violates CPT, you have discovered physics not accounted for by current models.

I held back talking about Parity in all this because I wanted to speak of it in greater detail. Of the symmetries covered in Sakurai chapter 4, I feel that I made the greatest jump in understanding on Parity.

Parity is symmetry under space inversion.

What?

Just saying that sounds diabolical. Space inversion. It sounds like that situation in Harry Potter where somebody screws up trying to disapparate and manages to get splinched… like they space invert themselves and can’t undo it.

The parity operation carries all the Cartesian variables in a function to their negative values:

Φ f(x, y, z) = f(-x, -y, -z)

Here Φ just stands in for the parity operator. By performing the parity operation, all the variables in the function which denote spatial position are turned inside out and sent to their negative values. Things get splinched.

You might note here that applying parity twice gets you back to where you started, unsplinching the splinched. This shows that the parity operator has the special property of being its own inverse. You might appreciate how special this is by noting that none of us can literally be our own brother, but the parity operator basically is.

Applying parity twice is like multiplying by 1, which is how you know parity is its own inverse. Parity is also a unitary operator, since it doesn’t affect the absolute value of the function it acts on. The parity operation times its inverse is one, so unitary:

Φ†Φ = 1

or

Φ† = Φ⁻¹

Here, the daggered superscript means “Hermitian adjoint” (the complex conjugate transpose), which is automatically the inverse operation if you’re a unitary operator. Hello, linear algebra. Be assured I’m not about to break out the matrices, so have no fear. We will stay in a representation free zone. In this regard, the parity operation is very much like a rotation: the inverse operation is the adjoint of the operation, never mind the detail that here the inverse operation is the operation itself.
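If you want to see the self-inverse and norm-preserving properties without any formalism, here is a minimal numerical sketch (my own, with hypothetical names): on a grid of sample points symmetric about the origin, parity is nothing but a reversal of the sample order.

```python
import numpy as np

x = np.linspace(-5, 5, 201)              # grid symmetric about the origin
f = np.exp(-(x - 1.0) ** 2) + 1j * x     # an arbitrary complex test function

def parity(samples):
    """(Phi f)(x) = f(-x): on a symmetric grid, just reverse the samples."""
    return samples[::-1]

assert np.allclose(parity(parity(f)), f)      # Phi applied twice = identity
assert np.isclose(np.linalg.norm(parity(f)),  # Phi preserves the norm,
                  np.linalg.norm(f))          # which is what unitarity demands
```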

Parity symmetry is “symmetry under the parity operation.” Many states are not symmetric under parity, but we are particularly interested in parity eigenstates, which are states that the parity operator transforms into themselves times some constant eigenvalue. As it turns out, because applying parity twice must give back exactly the original state, the parity operator can only ever have two eigenvalues, +1 and -1. A parity eigenstate is a state that at most changes its sign when acted on by the parity operator. The parity eigenvalue equations are therefore:

Φ ψ(x) = ψ(-x) = +ψ(x)  or  Φ ψ(x) = ψ(-x) = -ψ(x)

All this says is that under space inversion, a parity eigenstate is either unaffected by the transformation or sent to the negative of its original value. If the sign doesn’t change, the state is symmetric under space inversion (called even). If the sign does change, the state is antisymmetric under space inversion (called odd). As an example, in a space of one dimension (defined by ‘x’), the function sine is antisymmetric (odd) while the function cosine is symmetric (even).
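These eigenvalue equations are easy to check numerically, along with the standard fact that any function splits into an even piece plus an odd piece, each a legitimate parity eigenstate (the helper name is my own):

```python
import numpy as np

def even_odd_parts(func, x):
    """Split a function into parity eigenstates: func = even + odd,
    with even(-x) = +even(x) and odd(-x) = -odd(x)."""
    return (func(x) + func(-x)) / 2, (func(x) - func(-x)) / 2

x = np.linspace(-3, 3, 301)              # symmetric grid, so x -> -x is a flip
g = lambda t: np.exp(t) * np.cos(2 * t)  # no definite parity by itself
even, odd = even_odd_parts(g, x)

assert np.allclose(even + odd, g(x))     # the decomposition is exact
assert np.allclose(even, even[::-1])     # even part: parity eigenvalue +1
assert np.allclose(odd, -odd[::-1])      # odd part: parity eigenvalue -1
```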

In this image, taken from a graphing app on my smartphone, the white curve is plain old sine while the blue curve is the parity transformed sine. As mentioned, cosine does not change under parity.

As you may be aware, sines and cosines are energy eigenstates of the particle-in-a-box problem (with the box centered on the origin), and so they constitute one example of legit parity eigenstates with physical significance.

Operators can also be transformed by parity. To see the significance, just note that the definition of parity is that position is reversed. So, the parity transformation of the position operator is this:

Φ† x Φ = -x

Kind of what should be expected. Position under parity turns negative.

As expressed, all of this is really academic. What’s the point?

Parity can give some insights that have deep significance. The deepest result that I understood is that matrix elements and expectation values are preserved under a parity transformation. Matrix elements are a generalization of the expectation value where the bra and ket are not necessarily the same eigenfunction. The proof of the statement is one line:

At the end, the squiggles all denote parity-transformed values, ‘m’ and ‘n’ are generic eigenstates with arbitrary parity eigenvalues, and V is some miscellaneous operator. First, the conjugation that turns a ket into a bra does not affect the parity eigenvalue equation: parity is its own inverse and its eigenvalues of 1 and -1 are real, so the bra above has just the same eigenvalue as if it were a ket. The matrix element therefore does not change under the parity transformation –the combined parity transformations of all these parts act as if you had just multiplied by the identity a couple of times, which should do nothing but return the original value.

What makes this important is that it sets a requirement on how many -1 eigenvalues can appear within the parity-transformed matrix element (which is equal to the original matrix element): the count must be even, either zero or two. For the element to exist (that is, to have a non-zero value), if the initial and final states connected by the potential are both parity odd or both parity even, the potential connecting them must be parity even. Conversely, if the potential is parity odd, either the initial or the final state must be odd while the other is even. To sum up: a parity-odd operator has non-zero matrix elements only when connecting states of differing parity, while a parity-even operator must connect states of the same parity. This restriction follows simply from noting that the sign can’t change between a matrix element and the parity-transformed matrix element.

Now, since an expectation value (average position, for example) is always a matrix element connecting an eigenket to itself, expectation values can only be non-zero for operators of even parity. For example, in a system defined across all space, average position ends up being zero, because the position operator is odd while the eigenbra and eigenket are the same function and therefore have the same parity. For average position to be non-zero, the wavefunction would need to be a superposition of eigenkets of opposite parity (and therefore not an eigenstate of parity at all!)
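The selection rule can be checked numerically with particle-in-a-box states on a box centered at the origin (a sketch under my own naming; the tolerances are just loose numerical cutoffs):

```python
import numpy as np

L = 1.0                                 # box runs from -L to +L, centered on 0
x = np.linspace(-L, L, 2001)

def box_state(n):
    """Particle-in-a-box eigenstate n on [-L, L]; parity alternates with n
    (n = 1 is even like cosine, n = 2 is odd like sine, ...)."""
    return np.sqrt(1.0 / L) * np.sin(n * np.pi * (x + L) / (2 * L))

def matrix_element(m, op, n):
    """<m| op |n> by direct numerical integration (simple Riemann sum)."""
    return np.sum(box_state(m) * op * box_state(n)) * (x[1] - x[0])

# x is parity odd, so it cannot connect two states of the same parity...
assert abs(matrix_element(1, x, 1)) < 1e-8   # <x> vanishes in an eigenstate
assert abs(matrix_element(2, x, 2)) < 1e-8
# ...but it happily connects an even state to an odd one.
assert abs(matrix_element(1, x, 2)) > 1e-3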

A tangible, far-reaching result of this symmetry, tied particularly to the position operator, is that no non-degenerate energy eigenstate can have an electric dipole moment. The dipole moment operator is built around the position operator, so a situation where the position expectation value goes to zero requires the dipole moment to be zero also. Any observed electric dipole moment must come from a mixture of states.

If you stop and think about that, it’s really pretty amazing. Whether an observable averages to zero can be read off from which eigenkets are present and from how the operator for that observable behaves under inversion.

Hopefully I got that all correct. If anybody more sophisticated than me sees holes in my statement, please speak up!

Welcome to symmetry.

(For the few people who may have noticed, I still have it in mind to write more about the magnets puzzle, but I really haven’t had time recently. Magnets are difficult.)

# A Spherical Tensor Problem

Since I last wrote about it, my continued sojourn through Sakurai has brought me back to spherical tensors, a topic I didn’t understand well when I last saw it. The problem in question is Sakurai 3.21. We will get to this problem shortly…

I’ve been thinking about how best to include math on this blog. The fact of the matter is that it isn’t easy to do very fast. It looks awful if I photograph pages from my notebook, but it takes forever if I use a word processor to make it nice and neat and presentable. I’ve tried a stylus in OneNote before, but I don’t very much like the feeling compared to working on paper.

After my tirade the other day about the Smith siblings, I’ve been thinking again about everything I wanted this blog to be. It isn’t hard to find superficial level explanations of most of physics, but I also don’t want this to read like a textbook. If Willow Smith hosts ‘underground quantum mechanics teachings,’ I actually honestly envisioned this effort on my part as a sort of underground teaching –regardless of the nonexistent audience. What better way to put it. I didn’t want to put in pure math, at least not quite; I wanted to present here what happens in my head while I’m working with the math. How exactly do you do that?

Here’s an image of the mythological notebook where all my practicing and playing takes place:

I’ve never been neat and pretty while working with problems, but all that scratching doesn’t look like scratching to me while I’m working with it. It’s almost indescribable. I could shovel metaphors on top of it or take pictures of beautiful things and call that other thing ‘what I see.’ But there isn’t anything like it. If you’ve spent time on it yourself, maybe you know. It’s addictive. It’s conceptual tourism in the purest form, standing on the edge of the Grand Canyon looking out, then climbing down inside, feeling the crags of stone on my fingertips as I pass down toward where the river flows. It’s tourism in a way, going to a place that isn’t a place, not necessarily pushing back the frontiers since people have been there before, but climbing to the top of a mountain that nobody ever just visits in daily life. You can’t simply read it and you don’t just walk there.

The pages pictured above are of my efforts to derive a formula from Schwinger’s harmonic oscillator representation to produce the rotation matrices for any value of angular momentum. Writing the words will mean nothing to practically anybody who reads this. But what do I do to make it genuine? How do you create a travelogue for a landscape of mathematical ideas?

For the moment, at least, I hope you will forgive me. I’m going to use images of my notebook in all its messy glory.

Where we started in this post was mentioning Spherical Tensors. I hit this topic again while considering Sakurai problem 3.21. ‘Tensor’ is admittedly a very cool word. In Brandon Sanderson’s “Steelheart,” Tensors are a special tool that lets people use magic power to scissor through solid material.

For all the coolness of the word, what are Tensors really?

In the most general sense, a tensor is a sort of container. Here is a very simple tensor:

A_i

This construct holds things. Computer programmers sometimes call them arrays, but here it’s just a very simple container. The subscript ‘i’ could stand for anything. If you let ‘i’ be 1, 2, or 3, this tensor can contain three things. I could make it a vector in 3 dimensions, describing something as simple as position.

In the problem I’m going to present, you have to think twice about what ‘tensor’ means in order to drag out the idea of a ‘spherical’ tensor.

Here is Sakurai 3.21 as written in my notebook:

Omitting the |j,m> ket at the bottom, Sakurai 3.21 is innocuous enough. You’re just asked to evaluate two sums between parts a.) and b.). No problem, right? Just count some stuff and you’re done! The trick is: what the hell are you trying to count?

Contrary to my using the symbol ‘A_i’ above to sneak in the meaning of ‘love,’ the d^j here do not play in dance clubs, even if they are spinning like a turntable! These ‘d’s are symbols for a rotation operation which can transform a state as if rotating it by an angle (here, the angle β). Each ‘d’ transforms a state with a particular z-axis angular momentum, labeled by ‘m,’ into a second state with a different label, where the two states are related by a rotation of β around the y-axis. Get all that? You’ve got a spinning object and you want to tilt the axis of the spin by an angle β. Literally, you’re spinning a spin! That’s a headache, I know.

Within quantum mechanics, you can know only certain things about the rotation of an object, but not really know others. This is captured in the label ‘j’. ‘j’ describes the total angular momentum contained in an object; literally how much it’s spinning. This is distinct from ‘m’ which describes the rotation around a particular axis. Together, ‘m’ and ‘j’ encapsulate all of the knowable rotational qualities of our quantum mechanical object, where you can know it’s rotating a certain amount and that some of that rotation is around a particular axis. The rest of the rotation is in some unknowable combination not along the axis of choice. This whole set of statements is good for both an object spinning and for an object revolving around another object, like a planet in orbit.

The weird trick that quantum mechanics plays is that only a certain number of rotational states are allowed for a particular total angular momentum; the more total angular momentum you have, the larger the library of rotational states you can select from. In the sum in the problem, you’re including all the possible states of z-axis angular momentum allowed by the particular total angular momentum. Simultaneous rotation around the x- and y-axes is knowable only to an extent that depends on the rotation about the z-axis (so says the Heisenberg uncertainty principle, in this case –but the problem doesn’t require that…).

Here is an example of how you ‘rotate a state’ in quantum mechanics. I expect that only readers familiar with the math will truly be able to follow, but it’s a straightforward application of an operator to carry out an operation at a symbolic level:

All this shows is that a rotation operator ‘R’ works on one state to produce another. By the end of the derivation, operator R has been converted into a ‘d^j’ like what I mentioned above. Each d^j is a function of m and m” in a set of elements which can be written as a 2-dimensional matrix… d^j literally maps the probability amplitude at m onto m”, which you can think of as routing one element of a 2-dimensional matrix into another based upon the operation of rotating the state. In this case, the example starts out without a representation, but ultimately shifts over to a representation in a space of ‘j’ and ‘m.’ The final state can be regarded as a superposition of all the states in the set, as defined by the sum. In all of this, d^j can be regarded as a tensor with three indices, j, m and m”, making it a 3-dimensional entity which contains a variable number of elements depending on the level of j: d^j is only the face-plate of that tensor, coughing up whatever is stored at the element indexed by a particular j, m and m”.
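For readers who’d like to poke at the ‘d’s without the Schwinger machinery, here is a sketch that builds d^j by numerically exponentiating Jy (my own construction, assuming the common convention d^j(β) = exp(-iβJy) in the |j, m⟩ basis) and checks it against the textbook j = 1/2 matrix:

```python
import numpy as np

def wigner_d(j, beta):
    """d^j_{m'm}(beta) = <j m'| exp(-i beta Jy) |j m>, with m running
    j, j-1, ..., -j down the rows and columns (hbar = 1)."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim))                 # raising operator J+
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jy = (jp - jp.T) / (2 * 1j)               # Jy = (J+ - J-)/(2i)
    w, v = np.linalg.eigh(jy)                 # Jy is Hermitian, so diagonalize
    return (v * np.exp(-1j * beta * w)) @ v.conj().T   # ...and exponentiate

beta = 0.7
# Textbook j = 1/2 result, rows/cols ordered (m' = +1/2, -1/2):
expected = np.array([[np.cos(beta / 2), -np.sin(beta / 2)],
                     [np.sin(beta / 2),  np.cos(beta / 2)]])
assert np.allclose(wigner_d(0.5, beta), expected)

# The "closed set" statement: each column of |d|^2 sums to 1, because a
# rotation only shuffles probability among the 2j+1 states of a given j.
assert np.allclose(np.sum(np.abs(wigner_d(2.0, beta)) ** 2, axis=0), 1.0)
```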

In problem 3.21, what you’re counting up is a series of objects that transform other objects, each paired with whatever z-axis angular momentum it represents within the total angular momentum contained by the system. This collection of objects is closed, meaning that you can only transform among the objects in the set. If there were no weighting factor in the sum, the sum of these squared objects would simply go to ‘1’… the ‘d’ symbols become probability amplitudes when they’re squared and, for a closed set, you must have 100% probability of staying within that set. The headache in evaluating this sum, then, is dealing with the weighting factor, which is different for each element in the sum, particularly for whatever state each element is ultimately supposed to rotate to.

My initial idea looking at this problem was that if I could calculate each ‘d,’ then I could just work the sum directly. Just square each ‘d,’ multiply it by the weighting factor, and voila! There was no thought in my head about spherical tensors, despite the overwhelming weight of that hint following part b.)

Naively, this approach could work. You just need some way of calculating a generalized ‘d.’ This can be done using Schwinger’s simple harmonic oscillator model. All you need to do is rotate the double harmonic oscillator and then pick out the factor that appears in place of ‘d’ in the appropriate sum –an example of which can be seen in the rotation transformation above. Not hard, right?

A month ago, I would have agreed with you. I had spent only a little bit of time learning how the Schwinger model works, and I thought, “Well, solve ‘d’ using the Schwinger method and then boom, we’re golden.” It didn’t seem too bad, except that days converted themselves into weeks before I understood the method well enough to crank out a ‘d.’ You can see one of my pages of work on this near the top of this post… there were factorials and sums everywhere. By the time I had it completely figured out –which I really don’t regret, by the way– I had pretty much forgotten why I went to all that trouble in the first place. My thesis here is that, yes, you can solve for each and every ‘d’ you may ever want using Schwinger’s method. On the other hand, when I came back to look at Sakurai 3.21, I realized that if I tried to horsewhip a version of ‘d’ from the Schwinger method into that sum, I was probably never going to solve the problem. The formula for each ‘d’ is itself a big sum with a large number of working parts, the square of which turns into a _really_ large number of moving parts. I know I’m not a computer, and trying to go that way is begging for trouble.

It was a bit of a letdown when I realized that I was on the wrong track. But that happens to everyone: almost nobody gets it on the first shot. There is an abstract lesson in this for many people: what you think is a truth isn’t always a truth, or at least not the simplest path to a truth. I still expect that if you were a horrific glutton for punishment, you could work the problem the way I started out trying, but you would grow old in the attempt.

I spent some introspective time reading chapter 3 of Sakurai, looking at simple methods for obtaining the necessary ‘d’ matrix elements. Most of these can’t be used in the context of problem 3.21 because they are too specific: with half-integer j or j of 1, you can directly calculate the rotation matrices, but that is not a general solution. I had a feeling that you could pull the weighting factor of ‘m’ back into the square of the ‘d’ and use an eigenvalue equation to change the ‘m’ into the Jz operator, but I wasn’t completely sure what to do with it from there. About a week ago, I started to look more closely at the section outlining operator transformations using the spherical tensor formalism. I had a feeling I could make something work from these new ideas, especially following that heavy-handed hint in part b.)

The spherical tensor formalism is very much like the Heisenberg picture; it enables one to rotate an operator using the same sorts of machinery one might use to rotate a state. This, it turns out, is the logical leap required by the problem. To be honest, I didn’t actually understand this while I was reading the math and trying to work through it; I only really understood very recently. Rotating a state is not the same as rotating an operator. The math posted above is the rotation of a state.

As it turns out, with an operator written in a cartesian form, different parts will rotate differently from one another; you can’t just apply one rotation to the whole thing and expect the same operator back.

This becomes challenging because the angular momentum operators are usually written in a cartesian form and because operator transformations in quantum mechanics are usually handled as unitary transformations. Constructing a unitary transformation requires careful analysis of what can rotate and remain intact.

Here is a derivation which shows rotation converted into a unitary operation:

In this case, the rotation matrix ‘d’ has been replaced by a more general form. The script ‘D’ is generally used to represent a transformation involving all three Euler angles, whereas the original ‘d’ was a rotation only around the y-axis. In principle, this transformation can handle any reorientation. In this derivation, you start with a spherical harmonic and show that, if you create a representation of some other object within that spherical harmonic, you can rotate that object inside the Ylm. Here, the object being rotated is just a vector used to indicate direction, called ‘n.’ The spherical harmonics have an incredible quality: they are ready-made to describe spherical, angle-space objects, and they rotate naturally without distortion. If you want to rotate anything, writing it as an object which transforms like a spherical harmonic is definitely the best way to go.

In the last line of that derivation, the spherical harmonic containing the direction vector has been replaced with a construct labeled simply as ‘T’. T is a spherical tensor. This object contains whatever you put into it and resides in the description space of the spherical harmonics. It rotates like a spherical harmonic.

The last line of algebra contains another ramification that I think is interesting. In this math, for this particular case, the unitary sandwich D†·(object)·D reduces to a simple linear combination of the object’s components weighted by D matrix elements.

This brings me roughly full circle: I’m back at spherical tensors.

A spherical tensor is a multi-dimensional object which sits in a space that uses the spherical harmonics as a descriptive basis set. Each index of the spherical tensor transforms like the Ylm residing at that index location. In some ways, this looks very like a state function in spherical harmonic space, but it’s different, since the object being represented is an operator native to that space rather than a state function. Operators and state functions must be treated differently in quantum mechanics because they are different things: a state function is a nascent form of a probability distribution, while an operator is an entity used to manipulate that distribution in eigenvalue equations.

This may seem a non-sequitur, but I’ve just introduced you to a form of trans-dimensional travel. I’ve just shown you the gap for moving between a space involving the dimensions of length, width and depth into a space which replaces those descriptive commodities with angles. A being living in spherical harmonic space is a being constructed directly out of turns and rotations, containing nothing that we can directly witness as a physical volume. You will never find something so patently crazy in the best science fiction! Quantum mechanics is replete with real expressions for moving from one space to another.

The next great challenge of Sakurai 3.21 is learning how to convert a cartesian operator construct into a spherical one. You can put whatever you want into a spherical tensor, but this means figuring out how to transfer the meaning of the cartesian expression into the spherical expression. As far as I currently understand it, the operator can’t be directly applied while residing within the spherical tensor form –I screwed this problem up a number of times before I understood that. To make the problem work, you have to convert from cartesian objects into the spherical object, perform the rotation, then convert backward into the cartesian object in order to come up with the final expression. The spherical tensor forms of the operators end up being linear combinations of the cartesian forms.

Here is the template for using spherical harmonics to guide conversion of cartesian operators into spherical tensor components:

In this case, I’m converting the angular momentum operators into a spherical tensor, which requires only the rank 1 spherical harmonics. The spherical tensor of rank one is a three-component object with indices 1, 0 and -1, which relate to the cartesian components Jz, Jx and Jy as shown: the 0 component is just Jz, while the ±1 components are combinations of Jx and Jy (the ladder operators, up to normalization). For position, cosine = z/radius, and the x and y conversions follow from that, given the relations above. Angular momentum needs no spatial component because of the normalization in length, so the z-axis angular momentum converts directly into the cosine slot.
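Assuming the usual Condon–Shortley-style convention for the rank-1 components (T_0 = Jz, T_{±1} = ∓(Jx ± iJy)/√2 –signs and normalizations vary between texts), here’s a numerical check that these combinations really do transform as a rank-1 spherical tensor, via the defining commutators:

```python
import numpy as np

# Angular momentum matrices for j = 1 in the |j m> basis, m = 1, 0, -1
sq2 = np.sqrt(2.0)
jp = np.array([[0, sq2, 0],
               [0, 0, sq2],
               [0, 0, 0]], dtype=complex)       # raising operator J+
jm = jp.conj().T                                # lowering operator J-
jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
jx, jy = (jp + jm) / 2, (jp - jm) / (2 * 1j)

# Rank-1 spherical tensor components built from the cartesian operators
T = {+1: -(jx + 1j * jy) / sq2,   # proportional to J+
      0: jz,
     -1:  (jx - 1j * jy) / sq2}   # proportional to J-

def comm(a, b):
    return a @ b - b @ a

# Defining property of a rank-1 spherical tensor: [Jz, T_q] = q T_q ...
for q in (+1, 0, -1):
    assert np.allclose(comm(jz, T[q]), q * T[q])
# ... and the ladder relation between components: [J+, T_0] = sqrt(2) T_{+1}
assert np.allclose(comm(jp, T[0]), sq2 * T[+1])
```

These commutators are the operator analogue of how the Ylm respond to rotations, which is exactly why the components rotate “with no effort.”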

As you can see, all the tensor does here is store things. In this case, the geometry of conversion between the spaces stores these things in such a way that they can be rotated with no effort.

Since I’ve slogged through the grist of the ideas needed to solve Sakurai 3.21, I can turn now to how I solved it. For all the rotation stuff I’ve been talking about, there is one important, very easy technique for rotating spherical harmonics which is relevant to this particular problem: if you are rotating an m=0 state, of which there is exactly one in every rank of total angular momentum, the d^j element is (up to a constant) a spherical harmonic. No crazy Schwinger formulas, just bang, use the spherical harmonic. Further, both sections of problem 3.21 involve converting m into Jz, and Jz converts to the m=0 element of the spherical tensor with nothing but a normalization (to see this, look at the conversion rules I included above). This means that the unitary transform of Jz can be mediated either by rotating from any state into the m=0 state or by rotating m=0 toward any state, which lets the d^j be a spherical harmonic in either direction.

Now, since part a.) is easy, here’s the solution to problem 3.21 part b.)

I apologize that the clarity of the images is not the best; the website downgraded the resolution. I included a restatement of problem 3.21 part b.) in the first line here, then began by expanding the absolute value and pulling the eigenvalue m back into the expression so that I could recast it as the operator Jz, using an eigenvalue equation to give me Jz^2. Jz must then be manipulated to produce the spherical tensor, the process expanded below.

Where I say “three meaningful terms,” I’m looking ahead to an outcome further along in the problem in order to avoid writing six extra terms from the multiplication that I don’t ultimately need. I do usually write my math exhaustively, but in this particular case I know that any term that isn’t J0·J0, J1·J-1 or J-1·J1 will cancel out after the J+ and J- ladder operators have had their way. For anyone versed, J1 is proportional to the ladder operator J+ and J-1 to J-. If the m value doesn’t end up back where it started –which only happens in the J+J- or J-J+ combinations– then when you take the resulting expectation value, anything like <m|m+1> is zero. Knowing this a page in advance, I simply omitted writing all that math. I then worked out the two unique coefficients that show up in the sum of only three elements…

In the middle of this last page, I converted the operators Jx and Jy into a combination of J^2 and Jz. The ladder operators composed of Jx and Jy served to strain out two thirds of the mathematical extras, and I more or less omitted writing all of that from the middle of the second page. Once you’re back in cartesian form after making the rotation –which occurs once the sum has been expanded– there is no need to stay in terms of Jx and Jy, because the system can’t simultaneously be expressed in eigenfunctions of Jx, Jy and Jz. You can have simultaneous eigenfunctions of only the total angular momentum and one axis, typically chosen to be the z-axis. By converting to J^2 and Jz only, I get the option of using eigenvalues instead of operators, which is almost always where you want to end up in a quantum problem. This is why I started writing |m> as |j,m>… most of the time in this problem I only care about tracking the m values, but I understand from the very beginning that I have a j value hiding in there that I can use when I choose.
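As a final sanity check, here is a numerical test of the closed forms I believe this problem is after: Σ_m m |d^j_{m m'}|² = m′ cos β for part a.) and Σ_m m² |d^j_{m m'}|² = ½ j(j+1) sin²β + ½ m′² (3cos²β − 1) for part b.). The d-matrix is built by exponentiating Jy numerically (my own convention assumption: d^j(β) = exp(−iβJy)); the names are mine.

```python
import numpy as np

def wigner_d(j, beta):
    """d^j_{m'm}(beta) = <j m'| exp(-i beta Jy) |j m>, m = j, j-1, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim))
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jy = (jp - jp.T) / (2 * 1j)
    w, v = np.linalg.eigh(jy)
    return (v * np.exp(-1j * beta * w)) @ v.conj().T

def weighted_sums(j, mp, beta):
    """Sum_m m |d^j_{m mp}|^2 and Sum_m m^2 |d^j_{m mp}|^2 at fixed mp."""
    m = j - np.arange(int(round(2 * j)) + 1)
    col = np.abs(wigner_d(j, beta)[:, int(round(j - mp))]) ** 2
    return np.sum(m * col), np.sum(m ** 2 * col)

j, mp, beta = 2.5, 1.5, 0.9
s1, s2 = weighted_sums(j, mp, beta)
assert np.isclose(s1, mp * np.cos(beta))                      # part a.)
assert np.isclose(s2, 0.5 * j * (j + 1) * np.sin(beta) ** 2   # part b.)
                      + 0.5 * mp ** 2 * (3 * np.cos(beta) ** 2 - 1))
```

At β = 0 the part b.) expression collapses to m′², as it must, since d^j becomes the identity.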

One thing that eases your burden considerably in this problem is understanding how j compartmentalizes m values. As I mentioned before, each rank of j contains a small collection of m-value eigenfunctions which transform only amongst themselves. Even though the problem asks for a solution general to every j, the angular momentum operator is a rank 1 operator, so I needed only the j=1 spherical harmonics to represent it. This allows me to work in a small space while staying general across all values of j. This is part of what makes the Schwinger approach to this problem so unwieldy: by trying to represent d for every j, I basically swelled the number of terms I was working with toward infinity. You can work with situations like this, but it just gets too big too quickly in this case –I’m just not that smart.

It’s also possible to work while omitting the normalization coefficients of the spherical harmonics, but do this with caution. It can be hard to tell which part of a coefficient is dedicated to flattening multiplicity and which is canceling out of the solid angle. In cases where terms are getting mixed, I hold onto the normalization so that I know down the line whether or not all my 2s and -1s are going to turn out. I always screw things like this up, so I do my best to give myself whatever tools I can for figuring out where I’ve messed up the arithmetic. I found an answer to this problem online which leaves cartesian indices on the transformations throughout the problem and completely omits the normalization… technically, this sort of solution is wrong and bypasses the mechanics. You can’t transform a cartesian tensor like a spherical tensor; getting yourself screwed up by missing the proper indices misses the math. How the guy hammered out the right answer from doing it so poorly makes no sense to me.