Effective molecular Hamiltonian and Hund cases

In summary, the effective Hamiltonian is built by solving the Schrödinger equation at fixed internuclear distance for the electrostatic potential, then adding perturbative terms that are off-diagonal in the electronic wavefunctions. These perturbative expansions produce an effective Hamiltonian for each electronic level, hiding the off-diagonal interactions in effective constants and lifting the degeneracy of the rotational levels within a given electronic level. The basis for rotational levels is usually a Hund case basis, and when you fit data to the effective Hamiltonian, you use the energy differences between the rotational levels to extract ##B## and ##\gamma##.
  • #71
amoforum said:
I think I can answer the second question for now. Eqn. 6.333 I believe has some sloppy notation: the second integral should use a different symbol than ##R_\alpha## for the electronic part. It's meant to be at a single internuclear distance, usually the equilibrium distance, so you don't integrate over it. Some other texts might call this the "crude" BO approximation, and Eqn. 6.330 would be the usual BO approximation. Then there's also the Condon approximation, which assumes there's no dependence on the nuclear coordinates at all.
Thank you for your reply. I will look at the sections you suggested for question 1. For the second one, I agree that if ##R_\alpha## is a constant we can take the electronic integral out of the vibrational integral, but I am not totally sure why we can do this. In the BO approximation the electronic wavefunction is a function of ##R##, so that electronic integral should be a function of ##R##, too. Why would we assume it is constant? I understand the idea behind the BO approximation, that the electrons follow the nuclear motion almost instantaneously, but I don't get it here. It is as if the nuclei oscillate so fast that the electrons don't have time to catch up and just see the average internuclear distance, which is kind of the opposite of the BO approximation. Could you help me understand this assumption that the electronic integral is constant? Thank you!
 
  • #72
BillKet said:
It is as if the nuclei oscillate so fast that the electrons don't have time to catch up and just see the average internuclear distance.
Can you elaborate on how you arrived at this interpretation? Why does it imply that the electrons can't catch up? The "crude" BO approximation gives you a dipole moment result at a specific ##R##. If you on average only observe a specific ##R_{eq}## (equilibrium distance), then the electronic integral at ##R_{eq}## will be your observed dipole moment.
 
  • #73
BillKet said:
@amoforum I looked in more detail at the derivation in B&C section 6.11.6 and I am actually a bit confused. Using spherical tensors for now, the transition between 2 electronic levels would be:

$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el}+d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{el})|r>|\nu>|\eta> + E_z\sum_q<\eta'|<\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu>|\eta> = $$
$$E_z\sum_q<\nu'|<\eta'|T_q^1(d_{el})|\eta>|\nu><r'|\mathcal{D}_{0q}|r> + E_z\sum_q<\eta'|\eta><\nu'|<r'|\mathcal{D}_{0q}T_q^1(d_{nucl})|r>|\nu> $$

Okay, here's my stab at the first question:

The derivation is completely written out in Lefebvre-Brion/Field in Section 6.1.2.1, and it looks like yours is consistent.

Now as to why there's only ##\cos\theta## in B&C's version. I suspect this is all because of Eqn. 6.330 in B&C. Notice that the rotational wavefunctions are spherical harmonics for both the initial and final states. The symmetric top wavefunctions reduce to spherical harmonics if ##\Omega = 0##. (See the text above Eqn. 5.52 and reconcile that with Eqn. 5.145.) This is a very constraining assumption, because that means both states must be ##\Omega = 0##, like ##^1\Sigma## states. (I guess we can constrain ourselves to ##M = 0## states too?) And if that's the case, then the 3j-symbol has ##\Omega = 0## and ##\Omega' = 0## in its bottom row, meaning ##q## must equal zero for it to not vanish.

So then the only thing I can't reconcile is the sentence after Eqn. 6.331 that says ##\Delta J = 0## is allowed. To me that's only true if you have symmetric top wavefunctions, because then you can have a change in both ##\Omega## and ##J## that adds to zero.
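These selection rules are easy to check numerically. A quick sketch using sympy (assuming `sympy.physics.wigner` is available; the quantum numbers are just illustrative):

```python
# Selection rules of the 3j symbol when both states have Omega = 0,
# i.e. the rotational wavefunctions reduce to spherical harmonics.
from sympy.physics.wigner import wigner_3j

J = 1

# Bottom row (0, q, 0): the symbol vanishes unless q = 0
for q in (-1, 0, 1):
    print(q, wigner_3j(J, 1, J + 1, 0, q, 0))

# And even for q = 0, Delta J = 0 vanishes, because J + 1 + J' must be
# even when the bottom row is all zeros:
print(wigner_3j(J, 1, J, 0, 0, 0))      # 0: Delta J = 0 forbidden
print(wigner_3j(J, 1, J + 1, 0, 0, 0))  # nonzero: Delta J = +1 allowed
```

So for two genuine ##\Omega = 0## states only ##q = 0## and ##\Delta J = \pm 1## survive, which is consistent with the ##\Delta J = 0## puzzle above: you need symmetric top wavefunctions (##\Omega \neq 0##) for ##\Delta J = 0## to appear.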

I wouldn't be surprised that this detail was glossed over, considering that the main point they wanted to get across in that section was the electronic-vibrational stuff like Franck-Condon factors and allowed electronic transitions in homonuclears.
 
  • #74
amoforum said:
Can you elaborate on how you arrived at this interpretation? Why does it imply that the electrons can't catch up? The "crude" BO approximation gives you a dipole moment result at a specific ##R##. If you on average only observe a specific ##R_{eq}## (equilibrium distance), then the electronic integral at ##R_{eq}## will be your observed dipole moment.
I guess I don't understand what mathematical approximation allows you to treat the electronic integral, which is a function of ##R##, as constant. In the BO approximation you would use the adiabatic approximation, but I am not sure, formally, what allows you to do that here. Intuitively, if you have, say, the function ##\cos^2##, but your response time to this oscillation is too slow, what you see is the average over many periods, which is ##1/2##. Given that the electronic integral sees just the average internuclear distance, I assumed it is something similar, i.e. the electrons see just an average of the internuclear distance.
 
  • #75
amoforum said:
Okay, here's my stab at the first question:

The derivation is completely written out in Lefebvre-Brion/Field in Section 6.1.2.1, and it looks like yours is consistent.

Now as to why there's only ##\cos\theta## in B&C's version. I suspect this is all because of Eqn. 6.330 in B&C. Notice that the rotational wavefunctions are spherical harmonics for both the initial and final states. The symmetric top wavefunctions reduce to spherical harmonics if ##\Omega = 0##. (See the text above Eqn. 5.52 and reconcile that with Eqn. 5.145.) This is a very constraining assumption, because that means both states must be ##\Omega = 0##, like ##^1\Sigma## states. (I guess we can constrain ourselves to ##M = 0## states too?) And if that's the case, then the 3j-symbol has ##\Omega = 0## and ##\Omega' = 0## in its bottom row, meaning ##q## must equal zero for it to not vanish.

So then the only thing I can't reconcile is the sentence after Eqn. 6.331 that says ##\Delta J = 0## is allowed. To me that's only true if you have symmetric top wavefunctions, because then you can have a change in both ##\Omega## and ##J## that adds to zero.

I wouldn't be surprised that this detail was glossed over, considering that the main point they wanted to get across in that section was the electronic-vibrational stuff like Franck-Condon factors and allowed electronic transitions in homonuclears.
Oh I see, that makes sense. Thanks a lot! I still have a quick question about the electronic integral. In order to have transitions between different electronic states, as you mentioned, terms of the form $$<\eta'|T_{\pm 1}^1(r)|\eta>$$ should not be zero. But this is equivalent to $$<\eta'|x|\eta>$$ not being zero (and same for ##y##). However, the electronic wavefunctions have cylindrical symmetry, so they should be even functions of x and y (here all the coordinates are in the intrinsic molecular (rotating) frame). Wouldn't in this case $$<\eta'|x|\eta>$$ be zero?
 
  • #76
BillKet said:
I guess I don't understand what mathematical approximation allows you to treat the electronic integral, which is a function of ##R##, as constant. In the BO approximation you would use the adiabatic approximation, but I am not sure, formally, what allows you to do that here. Intuitively, if you have, say, the function ##\cos^2##, but your response time to this oscillation is too slow, what you see is the average over many periods, which is ##1/2##. Given that the electronic integral sees just the average internuclear distance, I assumed it is something similar, i.e. the electrons see just an average of the internuclear distance.
I'd say it's more of a physical approximation than a mathematical one. For low vibrational states (internuclear distances near equilibrium), the relevant region of the dipole moment function is relatively flat. So just picking the equilibrium distance actually approximates it pretty well. At high vibrational states, where you'd sample large internuclear distances, the curve starts to get wobbly on the outskirts and the approximation breaks down. This makes sense because you'd expect BO breakdown at higher vibrational energies.
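That flatness argument can be made quantitative with a toy model (all numbers below are invented for illustration, not a real dipole curve): averaging a model ##\mu(R)## over a narrow harmonic-oscillator ground state reproduces ##\mu(R_{eq})## up to a correction second order in the vibrational spread.

```python
import numpy as np

# Toy dipole-moment function about equilibrium (made-up coefficients):
# mu(R) ~ mu0 + mu1*(R - Re) + mu2*(R - Re)**2
mu0, mu1, mu2 = 1.0, 0.3, 0.05
Re = 2.0

def mu(R):
    return mu0 + mu1 * (R - Re) + mu2 * (R - Re) ** 2

# Harmonic-oscillator ground-state probability density, rms spread sigma:
sigma = 0.05  # narrow, as for a low vibrational level
R = np.linspace(Re - 1.0, Re + 1.0, 20001)
dR = R[1] - R[0]
rho = np.exp(-((R - Re) ** 2) / (2 * sigma ** 2))
rho /= rho.sum() * dR  # normalize numerically

avg = (rho * mu(R)).sum() * dR  # <v|mu(R)|v> in this toy model

# The linear term averages to zero by symmetry; the leftover correction
# is mu2*sigma**2 ~ 1e-4, so <mu> ~ mu(Re) to very good accuracy.
print(avg, mu(Re), avg - mu(Re))
```

At larger spread (higher ##\nu##) the quadratic and higher terms of the dipole curve are sampled and the single-##R_{eq}## replacement degrades, matching the physical picture above.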
 
  • #77
BillKet said:
Oh I see, that makes sense. Thanks a lot! I still have a quick question about the electronic integral. In order to have transitions between different electronic states, as you mentioned, terms of the form $$<\eta'|T_{\pm 1}^1(r)|\eta>$$ should not be zero. But this is equivalent to $$<\eta'|x|\eta>$$ not being zero (and same for ##y##). However, the electronic wavefunctions have cylindrical symmetry, so they should be even functions of x and y (here all the coordinates are in the intrinsic molecular (rotating) frame). Wouldn't in this case $$<\eta'|x|\eta>$$ be zero?

Time to look at some molecular orbitals! Only ##\Sigma## states have cylindrical symmetry, which, as you've pointed out, means the ##q = \pm 1## (i.e. ##x## and ##y##) matrix elements vanish between two ##\Sigma## states: ##\Sigma## to ##\Sigma## transitions only go through ##q = 0## (and even then, ##\Sigma^+## to ##\Sigma^-## is forbidden, since ##\Sigma^-## is antisymmetric under reflection through a plane containing the internuclear axis).

Take a look at some ##\Pi## or ##\Delta## orbitals. They are absolutely not cylindrically symmetric, because they look like linear combinations of ##p## and ##d## orbitals.
 
  • #78
amoforum said:
Time to look at some molecular orbitals! Only ##\Sigma## states have cylindrical symmetry, which, as you've pointed out, means the ##q = \pm 1## (i.e. ##x## and ##y##) matrix elements vanish between two ##\Sigma## states: ##\Sigma## to ##\Sigma## transitions only go through ##q = 0## (and even then, ##\Sigma^+## to ##\Sigma^-## is forbidden, since ##\Sigma^-## is antisymmetric under reflection through a plane containing the internuclear axis).

Take a look at some ##\Pi## or ##\Delta## orbitals. They are absolutely not cylindrically symmetric, because they look like linear combinations of ##p## and ##d## orbitals.
Thanks for the vibrational explanation! I understand what you mean now.

I should check molecular orbitals indeed, I kinda looked at the rotational part only. But if that is the case, then it makes sense. Thank you for that, too!
 
  • #79
I have a feeling that mathematically the approximation of taking ##\frac{\partial \mu_e}{\partial R} \approx 0## could be obtained from the BO approximation with the adiabatic theorem, taking the accumulated dynamical and Berry phases to be negligibly small since the nuclei barely move over a transition lifetime. I could just be crazy though. I never put a lot of thought into it before.
 
  • #80
amoforum said:
Time to look at some molecular orbitals! Only ##\Sigma## states have cylindrical symmetry, which, as you've pointed out, means the ##q = \pm 1## (i.e. ##x## and ##y##) matrix elements vanish between two ##\Sigma## states: ##\Sigma## to ##\Sigma## transitions only go through ##q = 0## (and even then, ##\Sigma^+## to ##\Sigma^-## is forbidden, since ##\Sigma^-## is antisymmetric under reflection through a plane containing the internuclear axis).

Take a look at some ##\Pi## or ##\Delta## orbitals. They are absolutely not cylindrically symmetric, because they look like linear combinations of ##p## and ##d## orbitals.
I came across this reading, which I found very useful in understanding the actual form of the Hund cases (not sure if this is derived in B&C, too), mainly equations 6.7 and 6.12. I was wondering how this would be expanded to the case of nuclear spin (call it ##I##). Given that in most cases the hyperfine interaction is very weak, we can assume that the basis we build including ##I## would be something similar to the Hund case b) coupling of ##N## and ##S## in 6.7 i.e. we would need to use Clebsch–Gordan coefficients.

So in a Hund case a, the total basis wavefunction after adding the nuclear spin, with ##F## the total angular momentum, would be:

$$|\Sigma, \Lambda, \Omega, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Sigma, \Lambda, \Omega, S, J, M_J>|I, M_I>$$

where ##|\Sigma, \Lambda, \Omega, S, J, M_J>## is a Hund case a function in the absence of nuclear spin. For Hund case b we would have something similar, but with different quantum numbers:

$$|\Lambda, N, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Lambda, N, S, J, M_J>|I, M_I>$$

with ## |\Lambda, N, S, J, M_J>## being a Hund case b basis function in the absence of nuclear spin. Is this right? Thank you!
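As a quick sanity check of this construction, sympy's Clebsch-Gordan coefficients can verify that the coupled ##|J I F M_F>## states come out orthonormal. A sketch for ##J = 1## coupled to ##I = 1/2## (illustrative values):

```python
from sympy import Rational, simplify
from sympy.physics.quantum.cg import CG

J, I = 1, Rational(1, 2)  # e.g. J = 1 coupled to a spin-1/2 nucleus

def overlap(F1, MF1, F2, MF2):
    """<F1 MF1|F2 MF2> built from the CG expansion over |J MJ>|I MI>."""
    s = 0
    for twice_MJ in range(-2 * J, 2 * J + 1, 2):
        MJ = Rational(twice_MJ, 2)
        for MI in (Rational(-1, 2), Rational(1, 2)):
            s += (CG(J, MJ, I, MI, F1, MF1).doit()
                  * CG(J, MJ, I, MI, F2, MF2).doit())
    return simplify(s)

half, threehalf = Rational(1, 2), Rational(3, 2)
print(overlap(threehalf, half, threehalf, half))  # 1: normalized
print(overlap(threehalf, half, half, half))       # 0: orthogonal
```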
 
  • #81
Yep, that's right. Nuclear angular momentum is just tacked on at the end of the hierarchy (though it need not be the smallest spectral splitting) with another addition of angular momenta.
 
  • #82
BillKet said:
I came across this reading, which I found very useful in understanding the actual form of the Hund cases (not sure if this is derived in B&C, too), mainly equations 6.7 and 6.12. I was wondering how this would be expanded to the case of nuclear spin (call it ##I##). Given that in most cases the hyperfine interaction is very weak, we can assume that the basis we build including ##I## would be something similar to the Hund case b) coupling of ##N## and ##S## in 6.7 i.e. we would need to use Clebsch–Gordan coefficients.

So in a Hund case a, the total basis wavefunction after adding the nuclear spin, with ##F## the total angular momentum, would be:

$$|\Sigma, \Lambda, \Omega, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Sigma, \Lambda, \Omega, S, J, M_J>|I, M_I>$$

where ##|\Sigma, \Lambda, \Omega, S, J, M_J>## is a Hund case a function in the absence of nuclear spin. For Hund case b we would have something similar, but with different quantum numbers:

$$|\Lambda, N, S, J, I, F, M_F>=\sum_{M_J=-J}^{J}\sum_{M_I=-I}^I <J,M_J;I,M_I|F,M_F> |\Lambda, N, S, J, M_J>|I, M_I>$$

with ## |\Lambda, N, S, J, M_J>## being a Hund case b basis function in the absence of nuclear spin. Is this right? Thank you!

There's also a nice discussion in B&C Section 6.7.8 about the different ways ##I## couples in Hund's cases (a) and (b). For example, if ##I## couples to ##J## in Hund's case (b), that's called Hund's case (b##_{\beta J}##), which is one of several ways it can couple.
 
  • #83
Twigg said:
Thanks for the clarification, @amoforum! And @BillKet, I'd actually be curious to see what you come up with for the stark shift, if you find the time. I tried to spend some time learning this once but my coworkers weren't having it and sent me back to mixing laser dye :doh: No pressure, of course!

As far as the BO approximation, when we did spectroscopy we didn't really keep a detailed effective Hamiltonian, we would just re-measure the lifetimes and rotational constants in other vibrational states if there was a need to do so. I think in molecules where the BO violation is weak, you can take this kind of pragmatic approach. Then again, we only thought about molecules with highly diagonal Franck-Condon factors so we never really ventured above ##\nu = 2## or so.
@Twigg, here is my attempt at deriving the Stark shift for a Hund case a state. In principle it is for the case of 2 very close ##\Lambda##-doubled levels (e.g. in a ##\Delta## state, as in the ACME experiment) in a field pointing in the z-direction, ##E_z##. Please let me know if there is something wrong with my derivation.

$$H_{eff} = <n\nu J M \Omega \Sigma \Lambda S|-dE|n\nu J' M' \Omega' \Sigma' \Lambda' S'>=$$
$$<n\nu J M \Omega \Sigma \Lambda S|-E_z\sum_q\mathcal{D}_{0q}^1T_q^1(d)|n\nu J' M' \Omega' \Sigma' \Lambda' S'>=$$
$$-E_z\sum_q<n\nu J M \Omega \Sigma \Lambda S|\mathcal{D}_{0q}^1T_q^1(d)|n\nu J' M' \Omega' \Sigma' \Lambda' S'>=$$
$$-E_z\sum_q<n\nu|T_q^1(d)|n\nu><J M \Omega \Sigma \Lambda S|\mathcal{D}_{0q}^1| J' M' \Omega' \Sigma' \Lambda' S'>$$

For ##<n\nu|T_q^1(d)|n\nu>##, since both states belong to the same electronic level, the possible change in ##\Lambda## is only 0, 2, 4, 6, ... (for a ##\Delta## state, going between the ##\pm\Lambda## components means a change of 4), so the terms with ##q=\pm 1## give zero. So we are left with

$$-E_z<n\nu|T_0^1(d)|n\nu><J M \Omega \Sigma \Lambda S|\mathcal{D}_{00}^1| J' M' \Omega' \Sigma' \Lambda' S'>$$

If we use the variable ##D## for ##<n\nu|T_0^1(d)|n\nu>##, which is usually measured experimentally as the intrinsic electric dipole moment of the molecule (I might have missed a complex conjugate in the Wigner matrix, as it is easier to type without it :D) we have:

$$-E_zD<\Sigma S|\Sigma' S'><\Lambda|\Lambda'><J M \Omega |\mathcal{D}_{00}^1| J' M' \Omega' >$$

From here we get that ##S=S'##, ##\Sigma=\Sigma'## and ##\Lambda = \Lambda'##, which also implies that ##\Omega = \Omega'##. By calculating that Wigner matrix expectation value we get:

$$-E_zD(-1)^{M-\Omega}\sqrt{(2J+1)(2J'+1)}
\begin{pmatrix}
J & 1 & J' \\
-\Omega & 0 & \Omega' \\
\end{pmatrix}
\begin{pmatrix}
J & 1 & J' \\
-M & 0 & M' \\
\end{pmatrix}
$$

This gives us ##M=M'## and ##\Delta J = 0, \pm 1##. The ##\Delta J = \pm 1## terms connect different rotational levels, which are much further apart than the ##\Lambda##-doubling splitting, so I keep only ##\Delta J = 0##. The expression above becomes:

$$-E_zD\frac{M\Omega}{J(J+1)}$$
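As a numerical sanity check (a sketch using sympy, not from B&C), the two-3j-symbol formula for the ##\mathcal{D}_{00}^1## matrix element does reduce to ##\frac{M\Omega}{J(J+1)}## for ##\Delta J = 0##:

```python
# Check <J M Omega| D^1_{00} |J M Omega> = M*Omega/(J(J+1)) for Delta J = 0.
from sympy import Integer, Rational, simplify
from sympy.physics.wigner import wigner_3j

def d00_element(J, M, Omega):
    """Diagonal matrix element of D^1_{00} via the two-3j-symbol formula."""
    return simplify(
        Integer(-1) ** (M - Omega) * (2 * J + 1)
        * wigner_3j(J, 1, J, -M, 0, M)
        * wigner_3j(J, 1, J, -Omega, 0, Omega)
    )

for (J, M, Om) in [(1, 1, 1), (2, 1, 1), (2, -2, 1)]:
    assert d00_element(J, M, Om) == Rational(M * Om, J * (J + 1))
    print(J, M, Om, d00_element(J, M, Om))
```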

Now, the parity eigenstates are linear combinations of Hund case a states:

$$|\pm> = \frac{|J M S \Sigma \Lambda \Omega>\pm|J M S -\Sigma -\Lambda -\Omega>}{\sqrt{2}}$$

If we build the 2x2 Hamiltonian in the space spanned by ##|\pm>## with the Stark shift included it will then look like this (I will assume the ACME case, with ##J=1## and ##\Omega = 1##):

$$
\begin{pmatrix}
\Delta & -E_zD M\\
-E_zD M & -\Delta \\
\end{pmatrix}
$$

Assuming the 2 levels are very close, we have ##\Delta \ll E_zD## and, diagonalizing the matrix, we get to a very good approximation the energies ##E_{\pm} = \pm E_zD M## and eigenstates ##\frac{|+>\pm|->}{\sqrt{2}}##. Hence the opposite parities are fully mixed, so the system is fully polarized.
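The diagonalization in the ##\Delta \ll E_z D## limit can also be checked numerically; a sketch with illustrative numbers (numpy assumed):

```python
import numpy as np

Delta = 1e-3  # Lambda-doubling splitting (illustrative units)
x = 1.0       # E_z * D * M, chosen so that Delta << E_z * D * M

# Stark + Lambda-doubling Hamiltonian in the |+>, |-> parity basis
H = np.array([[Delta, -x],
              [-x, -Delta]])
evals, evecs = np.linalg.eigh(H)

print(evals)  # ~ +/- sqrt(Delta**2 + x**2), i.e. ~ +/- E_z*D*M
# Each eigenvector is a nearly equal mix of |+> and |->:
print(np.abs(evecs) ** 2)  # all entries ~ 0.5: parities fully mixed
```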
 
  • #84
Thank you! I really appreciate it! Your derivation helped put a lot of puzzle pieces together for me.

I was able to get the polarizability out of your 2x2 Hamiltonian. It has eigenvalues $$E_{\Lambda,M} = \frac{\Lambda}{|\Lambda|} \sqrt{\Delta^2 + (E_z DM)^2} \approx \frac{\Lambda}{|\Lambda|} \left(\Delta + \frac{1}{2} \frac{E_z^2 D^2 M^2}{\Delta} + O\left(\frac{(E_z DM)^4}{\Delta^3}\right)\right)$$
From this, polarizability is $$\alpha = \frac{\Lambda}{|\Lambda|}\frac{D^2 M^2}{2\Delta}$$, since the polarizability is associated with the energy shift that is quadratic in electric field. This seems to be in full agreement with what that review paper was saying (one of these days, I'll find that paper again).
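This expansion can be reproduced symbolically with sympy, which makes the order of the neglected term explicit (a sketch; ##x## stands for ##E_z D M##):

```python
from sympy import symbols, sqrt

Delta, x = symbols('Delta x', positive=True)  # x stands for E_z*D*M

E = sqrt(Delta**2 + x**2)
expansion = E.series(x, 0, 4)
print(expansion)  # Delta + x**2/(2*Delta) + O(x**4)
```

The quadratic-in-field term is the one that carries the polarizability; the next correction enters at fourth order in the field.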
BillKet said:
I might have missed a complex conjugate in the Wigner matrix, as it is easier to type without it :D
I can never reproduce something I derived using Wigner matrices because of all the little mistakes here and there. They're just cursed. I'd sell my soul for a simpler formalism :oldbiggrin:

By the way, I found a thesis from the HfF+ eEDM group that derives the Stark shift, and it exactly agrees with your expression for no hyperfine coupling (##F=J## and ##I=0##). Nice work!
 
  • #85
Twigg said:
Thank you! I really appreciate it! Your derivation helped put a lot of puzzle pieces together for me.

I was able to get the polarizability out of your 2x2 Hamiltonian. It has eigenvalues $$E_{\Lambda,M} = \frac{\Lambda}{|\Lambda|} \sqrt{\Delta^2 + (E_z DM)^2} \approx \frac{\Lambda}{|\Lambda|} \left(\Delta + \frac{1}{2} \frac{E_z^2 D^2 M^2}{\Delta} + O\left(\frac{(E_z DM)^4}{\Delta^3}\right)\right)$$
From this, polarizability is $$\alpha = \frac{\Lambda}{|\Lambda|}\frac{D^2 M^2}{2\Delta}$$, since the polarizability is associated with the energy shift that is quadratic in electric field. This seems to be in full agreement with what that review paper was saying (one of these days, I'll find that paper again).
I can never reproduce something I derived using Wigner matrices because of all the little mistakes here and there. They're just cursed. I'd sell my soul for a simpler formalism :oldbiggrin:

By the way, I found a thesis from the HfF+ eEDM group that derives the Stark shift, and it exactly agrees with your expression for no hyperfine coupling (##F=J## and ##I=0##). Nice work!
I am glad it's right! :D Please send me the link to that paper when you have some time. About the polarization, I am a bit confused: based on that expression it looks like it can go to infinity, but shouldn't it be between 0 and 1? (I assumed that if you bring the 2 levels to degeneracy you would get a polarization of 1.)

Side note, unrelated to EDM calculations: I am trying to derive different expressions in my free time, just to make sure I understood all the details of the diatomic molecule formalism. One of them is the Hamiltonian term due to parity violation. For example, in this paper equation 1 (I just chose this one because I read it recently, but it is basically the same formula in all papers about parity violation) gets turned into equation 3 after applying the effective Hamiltonian formalism. I didn't get a chance to look closely into it, but if you have any suggestions about going from 1 to 3, or any paper that derives it (their references don't help much), please let me know. I guess the cross product comes from the Dirac spinors somehow, but it doesn't look obvious to me.
 
  • #86
Here's that thesis. I was looking at equation 6.11 on page 103. Also, I used ##\Lambda## instead of ##\Omega## in my last post, just a careless error.

I don't have APS access right now, so I can't see the Victor Flambaum paper that is cited for that Hamiltonian. Just looking at the form of that Hamiltonian, the derivation might have little to do with the content of Brown and Carrington because it's talking about spin perpendicular to the molecular axis.

If you're reading papers on parity violation, this one on Schiff's theorem is excellent if you can get access. I used to have a copy but lost it. Also, talk about a crazy author list :oldlaugh: What is this, a crossover episode?
 
  • #87
Just noticed I missed your question about polarizability. I'm not sure why it would be limited between 0 and 1. Are you thinking of spin polarization? What I mean here is electrostatic polarizability ##\vec{d}_{induced} = \alpha \vec{E}##. It only appears to go to infinity as ##\Delta \rightarrow 0## because the series expansion I did assumed ##\Delta \gg E_z DM##. The reason for this inequality is that polarizability is usually quoted for ##E_z \rightarrow 0## by convention.
 
  • #89
So I tried to derive the Zeeman effect for a Hund case b, with the nuclear spin included in the wavefunction. The final result seems a bit too simple, though. I will look only at the ##S\cdot B## term and ignore the ##g\mu_B## prefactor. For a Hund case b, the wavefunction with nuclear spin is:

$$|NS\Lambda J I F M_F> = \sum_{M_J}\sum_{M_I}<JM_JIM_I|FM_F>|NS\Lambda J M_J>|IM_I>$$

And we also have:

$$|NS\Lambda J M_J> = \sum_{M_N}\sum_{M_S}<NM_NSM_S|JM_J>|NM_N\Lambda>|SM_S>$$

where ##<JM_JIM_I|FM_F>## and ##<NM_NSM_S|JM_J>## are Clebsch-Gordan coefficients. Now, calculating the matrix element, we have:

##<NS\Lambda J I F M_F|S\cdot B|N'S'\Lambda' J' I' F' M_F'>##

I will assume the magnetic field is in the z direction. Also, given that we are in Hund case b, the spin is quantized in the lab frame, so we don't need Wigner rotation matrices, and we get ##S\cdot B = T_{p=0}^1(S)T_{p=0}^1(B) = B_zS_z##, where both ##B_z## and ##S_z## are defined in the lab frame, with ##S_z## an operator such that ##S_z|SM_S> = M_S|SM_S>##. So we have:

$$<NS\Lambda J I F M_F|B_zS_z|N'S'\Lambda' J' I' F' M_F'>=$$

$$B_z (\sum_{M_J}\sum_{M_I}<JM_JIM_I|FM_F><NS\Lambda J M_J|<IM_I|)S_z(\sum_{M_J'}\sum_{M_I'}<J'M_J'I'M_I'|F'M_F'>|N'S'\Lambda' J' M_J'>|I'M_I'>)$$

As ##S_z## doesn't act on the nuclear spin we get:

$$B_z \sum_{M_J}\sum_{M_I}\sum_{M_J'}\sum_{M_I'}<JM_JIM_I|FM_F><J'M_J'I'M_I'|F'M_F'><IM_I|I'M_I'><NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'> = $$

$$B_z \sum_{M_J}\sum_{M_J'}\sum_{M_I}<JM_JIM_I|FM_F><J'M_J'IM_I|F'M_F'><NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'> = $$

(basically we got ##I=I'## and ##M_I = M_I'##). For the term ##<NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'>## we get:

$$(\sum_{M_N}\sum_{M_S}<NM_NSM_S|JM_J><NM_N\Lambda|<SM_S|)S_z(\sum_{M_N'}\sum_{M_S'}<N'M_N'S'M_S'|J'M_J'>|N'M_N'\Lambda'>|S'M_S'>)$$

As ##S_z## doesn't act on the ##|NM_N\Lambda>## part we have:

$$\sum_{M_N}\sum_{M_S}\sum_{M_N'}\sum_{M_S'}<NM_NSM_S|JM_J><N'M_N'S'M_S'|J'M_J'><NM_N\Lambda|N'M_N'\Lambda'><SM_S|S_z|S'M_S'>$$

From which we get ##N=N'##, ##M_N=M_N'## and ##\Lambda=\Lambda'##. So we have:

$$\sum_{M_N}\sum_{M_S}\sum_{M_S'}<NM_NSM_S|JM_J><NM_NS'M_S'|J'M_J'><SM_S|S_z|S'M_S'> = $$

$$\sum_{M_N}\sum_{M_S}\sum_{M_S'}<NM_NSM_S|JM_J><NM_NS'M_S'|J'M_J'>M_S'<SM_S|S'M_S'> = $$

And now we get that ##S=S'## and ##M_S=M_S'## so we have:

$$\sum_{M_N}\sum_{M_S}<NM_NSM_S|JM_J><NM_NSM_S|J'M_J'>M_S = $$

$$\delta_{JJ'}\delta_{M_JM_J'}M_S$$

So we also have ##J=J'## and ##M_J=M_J'##. Plugging this back into the original equation, which was left at:

$$B_z \sum_{M_J}\sum_{M_J'}\sum_{M_I}<JM_JIM_I|FM_F><J'M_J'IM_I|F'M_F'><NS\Lambda J M_J| S_z |N'S'\Lambda' J' M_J'> = $$

$$B_z \sum_{M_J}\sum_{M_I}<JM_JIM_I|FM_F><JM_JIM_I|F'M_F'>M_S = $$

$$B_z M_S \delta_{FF'}\delta_{M_FM_F'}$$

So in the end we get ##F=F'## and ##M_F=M_F'##, so basically all quantum numbers need to be equal and the matrix element is ##B_zM_S##. It looks a bit too simple and too intuitive. I've seen it mentioned in B&C and many other readings that Hund case b calculations are more complicated than Hund case a. This was indeed quite tedious, but the result looks like what I would have expected without doing these calculations (for example, for the EDM calculation before, I wouldn't have seen that ##\frac{1}{J(J+1)}## scaling as obvious). Also, is there a way to get this result more easily than what I did, i.e. figure out that ##B_zM_S## should be the answer without doing all the math? Thank you!
 
  • #90
I haven't gone through your derivation yet, but yes, there's a way easier method, which is how B&C derive all their matrix elements.

Look at equation 11.3. Its derivation is literally three steps, by invoking only two equations (5.123 first and 5.136 twice, once for ##F## and once for ##J##). The whole point of using Wigner symbols is to avoid the Clebsch-Gordan coefficient suffering.

By the way, almost every known case is in the B&C later chapters for you to look up. Every once in a while it's not. It happened to me actually, but I was able to derive what I needed using the process above.
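As a concrete illustration of how much the sums collapse: for the diagonal ##S_z## matrix element in case (b), the brute-force Clebsch-Gordan sum agrees with the standard projection-theorem result ##M_J\,\frac{J(J+1)+S(S+1)-N(N+1)}{2J(J+1)}## (a sketch, not B&C's Eq. 11.3 itself; the quantum numbers are illustrative):

```python
from sympy import Rational, simplify
from sympy.physics.quantum.cg import CG

def Sz_diagonal(N, S, J, MJ):
    """<N S J MJ| S_z |N S J MJ> by expanding |J MJ> over |N MN>|S MS>."""
    total = 0
    for twice_MS in range(-int(2 * S), int(2 * S) + 1, 2):
        MS = Rational(twice_MS, 2)
        MN = MJ - MS  # fixed by the CG coefficient
        if abs(MN) <= N:
            c = CG(N, MN, S, MS, J, MJ).doit()
            total += c**2 * MS
    return simplify(total)

def projection(N, S, J, MJ):
    """Projection-theorem form of the same matrix element."""
    return MJ * (J*(J+1) + S*(S+1) - N*(N+1)) / (2*J*(J+1))

N, S = 1, Rational(1, 2)
for J in (Rational(1, 2), Rational(3, 2)):
    MJ = Rational(1, 2)
    print(J, Sz_diagonal(N, S, J, MJ), simplify(projection(N, S, J, MJ)))
```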
 
  • #91
amoforum said:
I haven't gone through your derivation yet, but yes, there's a way easier method, which is how B&C derive all their matrix elements.

Look at equation 11.3. Its derivation is literally three steps, by invoking only two equations (5.123 first and 5.136 twice, once for ##F## and once for ##J##). The whole point of using Wigner symbols is to avoid the Clebsch-Gordan coefficient suffering.

By the way, almost every known case is in the B&C later chapters for you to look up. Every once in a while it's not. It happened to me actually, but I was able to derive what I needed using the process above.
Thanks a lot! This really makes things a lot easier!

I have a few questions about the electronic and vibrational energy upon isotopic substitution. For now I am interested in the changes in mass, as I understand there can also be changes in nuclear size that add to the isotope effects.

We obtain the electronic energy (here I am referring mainly to equation 7.183 in B&C) by solving the electrostatic Schrödinger equation with fixed nuclei. Once we obtain these energies, their value doesn't change anymore, regardless of the order of perturbation theory we go to in the effective Hamiltonian. The vibrational and spin-rotational energies will change, but this baseline energy of the electronic state stays the same. When getting this energy, as far as I can tell, all we care about is the distance between the electrons and nuclei, as well as their charges. We also care about the electron mass, but not the nuclear one. This means that the electronic energy shouldn't change under isotopic substitution, which is reflected in equation 7.199. However, in equation 7.207 we have a dependence on the mass of the nuclei. From the paragraphs before, the main reason for this is the breaking of the BO approximation. But this breaking of the BO approximation, and hence the mixing of electronic levels, shows up only in the effective Hamiltonian; as I mentioned above, the electronic energy should always stay at its zeroth-order value. Where does the mass dependence of the electronic energy ##Y_{00}## in equation 7.207 come from?

For the vibrational energy, we have equation 7.184. I assume that the ##G^{(0)}_{\eta\nu}## term has the isotopic dependence given by 7.199. Do the corrections in 7.207 come from the other 2 terms, ##V^{ad}_{\eta\nu}## and ##V^{spin}_{\eta\nu}##? And if so, is this because these terms can also be expanded as in equation 7.180? For example, from ##V^{ad}_{\eta\nu}## we might get a term of the form ##x_{ad}(\nu+1/2)##, so overall the first term in the vibrational expansion becomes ##(\omega_{e}+x_{ad})(\nu+1/2)##, which no longer has the nice expansion in 7.199 but the more complicated one in 7.207. Is this right? Also, do you have any recommendations for readings that go into a bit more detail about these isotopic substitution effects? Thank you!
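For reference, the leading-order mass scaling of the Dunham coefficients (the 7.199-type behavior, ##Y_{ij} \propto \mu^{-(i/2+j)}##) is easy to play with numerically; the coefficients and reduced masses below are invented for illustration, not data for a real molecule:

```python
import math

# Leading-order isotope scaling of Dunham coefficients:
# Y_ij -> Y_ij * rho**(i + 2j), rho = sqrt(mu/mu'), i.e. Y_ij ~ mu**-(i/2 + j)
def scale_dunham(Y, mu, mu_prime):
    """Scale a {(i, j): Y_ij} dict from reduced mass mu to mu_prime."""
    rho = math.sqrt(mu / mu_prime)
    return {(i, j): Yij * rho ** (i + 2 * j) for (i, j), Yij in Y.items()}

# Invented coefficients (cm^-1): Y10 ~ omega_e, Y01 ~ B_e, Y20 ~ -omega_e*x_e
Y = {(1, 0): 500.0, (0, 1): 0.25, (2, 0): -2.0}
Y_heavy = scale_dunham(Y, 10.0, 11.0)
print(Y_heavy)
# omega_e scales as mu**-1/2, B_e as mu**-1, omega_e*x_e as mu**-1
```

The higher-order, mass-dependent corrections of 7.207 are exactly the deviations from this simple scaling.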
 
  • #92
I'm much less familiar with vibrational corrections. And as you've probably noticed, it's not the main focus of B&C either. A couple places to start would be:

1. Dunham's original paper: http://jupiter.chem.uoa.gr/thanost/papers/papers4/PR_41(1932)721.pdf
It shows the higher order corrections that are typically ignored in all those ##Y_{ij}## coefficients.

2. In that section B&C refer to Watson's paper: https://doi.org/10.1016/0022-2852(80)90152-6
I don't have access to it, but it seems highly relevant to this discussion.
 
  • #93
amoforum said:
I'm much less familiar with vibrational corrections. And as you've probably noticed, it's not the main focus of B&C either. A couple places to start would be:

1. Dunham's original paper: http://jupiter.chem.uoa.gr/thanost/papers/papers4/PR_41(1932)721.pdf
It shows the higher order corrections that are typically ignored in all those ##Y_{ij}## coefficients.

2. In that section B&C refer to Watson's paper: https://doi.org/10.1016/0022-2852(80)90152-6
I don't have access to it, but it seems highly relevant to this discussion.
Thanks for the references, they helped a lot. I was wondering if you know of any papers that extend this isotope shift analysis to molecules that are not closed shell, for example the isotope dependence of the spin-orbit, spin-rotation, or lambda-doubling parameters. I see that B&C mention this hasn't been done, but the book was written in 2003 and perhaps someone has done the calculations since.
 
  • #94
I looked a bit at some actual molecular systems and I have some questions.

1. In some cases, a given electronic state, say a ##^2\Pi## state is far from other electronic states except for one, which is very close (sometimes even in between the 2 spin-orbit states i.e. ##^2\Pi_{1/2}## and ##^2\Pi_{3/2}##) and the rotational energy is very small. Would that be more of a Hund case a or c?

2. I noticed that for some ##^2\Pi## states, some molecules have the electronic energy difference between this state and the other state bigger than the spin-orbit coupling and the rotational energy, which would make them quite confidently a Hund case a. However, the spin orbit coupling is bigger than the vibrational energy splitting of both ##^2\Pi_{1/2}## and ##^2\Pi_{3/2}##. How would I do the vibrational averaging in this case? Wouldn't the higher order perturbative corrections to the spin-orbit coupling diverge? Would I need to add the SO Hamiltonian to the zeroth order hamiltonian, together with the electronic energy?

3. In the Hund case c, will my zeroth order Hamiltonian (and I mean how it is usually done in literature) be ##H_{SO}##, instead of the electronic one, ##H_e## or do I include both of them ##H_e+H_{SO}##? And in this case, if the spin orbit coupling would be hidden in the new effective ##V(R)##, how can I extract the spin-orbit constant, won't it be mixed with the electronic energy?
 
  • #95
BillKet said:
... One question I have is: is this Hamiltonian (with the centrifugal corrections) correct for any ##J## in a given vibrational level? I have seen in several papers mentioned that this is correct for low values of ##J## and I am not sure why would this not hold for any ##J##. I understand that for higher ##J## the best Hund case might change, but why would the Hamiltonian itself change? ...
Greetings,

I am late to this party and forgive me please if I have missed some of the discussion given a rather quick read of a complex topic.

I have not seen any explicit comments regarding Rydberg-Rydberg or Rydberg-valence perturbations (interactions). Such interactions certainly influence observed rotationally resolved spectra, often in very subtle and unexpected ways. Lefebvre-Brion and Field is the most comprehensive discussion of such perturbations of which I am aware.

Just another detail to keep you up at night. ES
 
  • #96
I've not heard of these perturbations. Are we talking Rydberg as in electrons that are excited to >>10th electronic state? I knew Rydberg molecules are a thing, but I always assumed that stuff was limited to alkali-alkali dimers.
 
  • #97
Twigg said:
I've not heard of these perturbations. Are we talking Rydberg as in electrons that are excited to >>10th electronic state? I knew Rydberg molecules are a thing, but I always assumed that stuff was limited to alkali-alkali dimers.
Greetings,

If you have an unpaired outer electron, for example as in ##\textup{NO}##, there is an associated set of Rydberg states corresponding to excitations of that unpaired outer electron. The valence states correspond to excitations of an inner, core electron. Thus doublet states ##(S= 1/2)## would have a set of Rydberg states.

The perturbations occur, for example, when two rotational transitions associated with different electronic states are fortuitously nearly degenerate. A Fortrat diagram, ##E= f\left ( J \right )##, will show small discontinuities resulting from mixing of the nearly degenerate rotational states. Figuring out the details can be a challenge! ES
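A toy numerical illustration (all constants invented, not any real molecule): two rotational progressions ##E_i(J) = T_i + B_i J(J+1)## from different electronic states cross near ##J=10##, and a small constant coupling ##W## between the nearly degenerate levels produces avoided-crossing shifts, the small "discontinuities" one sees in a Fortrat-type plot.

```python
import numpy as np

# Two toy rotational progressions that cross near J = 10
# (T = band origin, B = rotational constant; invented values).
B1, T1 = 1.00, 0.0
B2, T2 = 0.80, 22.0
W = 0.5  # J-independent coupling between the two states (toy value)

shifts = {}
for J in range(5, 16):
    E1 = T1 + B1 * J * (J + 1)
    E2 = T2 + B2 * J * (J + 1)
    # diagonalize the 2x2 interaction matrix of the two levels
    lo = np.linalg.eigvalsh(np.array([[E1, W], [W, E2]]))[0]
    # shift of the lower level away from its unperturbed position
    shifts[J] = lo - min(E1, E2)

for J, s in shifts.items():
    print(J, round(s, 4))
```

The shift is largest (##-W##) right at the crossing and falls off roughly as ##-W^2/\Delta E## away from it, so only a few ##J## values near the crossing are visibly perturbed.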
 
  • #98
Hello again. I have read more molecular papers in the meantime, including cases where perturbation theory wouldn't work, and I want to clarify a few things. I would really appreciate your input @Twigg @amoforum. For simplicity, assume we have only 2 electronic states, ##\Sigma## and ##\Pi##, and each of them has only 1 vibrational level (this is just to be able to write down full equations). The Hamiltonian (full, not effective) in the electronic space is:

$$
\begin{pmatrix}
a(R) & c(R) \\
c(R) & b(R)
\end{pmatrix}
$$

where, for example ##a(R) = <\Sigma |a(R)|\Sigma >## and it contains stuff like ##V_{\Sigma}(R)##, while the off diagonal contains stuff like ##<\Sigma |L_-|\Pi >##. If we diagonalize this explicitly, we get, say, for the ##\Sigma## state eigenvalue:

$$\frac{1}{2}[a+b+\sqrt{(a-b)^2+4c^2}]$$

Assuming that ##c<<a,b## we can do a first order Taylor expansion and we get:

$$\frac{1}{2}[a+b+(a-b)\sqrt{1+\frac{4c^2}{(a-b)^2}}] = $$

$$\frac{1}{2}[a+b+(a-b)(1+\frac{2c^2}{(a-b)^2})] = $$

$$\frac{1}{2}[2a+\frac{2c^2}{(a-b)}] = $$

$$a+\frac{c^2}{(a-b)} $$

Here by ##c^2## I actually mean the product of the 2 off-diagonal terms, i.e. ##<\Sigma|c(R)|\Pi><\Pi|c(R)|\Sigma>##. This is basically the second-order PT correction presented in B&C. So I have a few questions:
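As a quick sanity check on the expansion above, with toy numbers (nothing molecule-specific), the exact upper eigenvalue of the 2x2 matrix agrees with ##a + c^2/(a-b)## up to terms of order ##c^4/(a-b)^3##:

```python
import numpy as np

# Toy values with c << a - b (invented for illustration).
a, b, c = 10.0, 4.0, 0.3

H = np.array([[a, c],
              [c, b]])

# Exact eigenvalue correlating with the Sigma state (here the larger one;
# eigvalsh returns eigenvalues in ascending order).
exact = np.linalg.eigvalsh(H)[-1]

# Second-order-PT / Taylor-expansion result derived above.
approx = a + c**2 / (a - b)

print(exact, approx)
# The residual is of order the next term in the expansion, -c^4/(a-b)^3.
print(exact - approx)
```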

1. Is this effective Hamiltonian in practice a diagonalization + Taylor expansion in the electronic space, or does this happen to be true just in the 2x2 case above?

2. I am a bit confused how to proceed in a derivation similar to the one above if I account for the vibrational states, too. If I continue from the result above and average over the vibrational states, I would get, for the ##\Sigma## state:

$$<0_\Sigma|(a(R)+\frac{c(R)^2}{(a(R)-b(R))})|0_\Sigma> = $$

$$<0_\Sigma|a(R)|0_\Sigma>+<0_\Sigma|\frac{c(R)^2}{(a(R)-b(R))}|0_\Sigma> $$

where ##|0_\Sigma> ## is the vibrational level of the ##\Sigma## state (again I assume just one vibrational level per electronic state). This would be similar to the situation in B&C for the rotational constant in equation 7.87. However, if I include the vibrational averaging before diagonalizing, I would have this Hamiltonian:

$$
\begin{pmatrix}
<0_\Sigma|a(R)|0_\Sigma> & <0_\Sigma|c(R)|0_\Pi> \\
<0_\Pi|c(R)|0_\Sigma> & <0_\Pi|b(R)|0_\Pi>
\end{pmatrix}
$$

If I do the diagonalization and Taylor expansion as before, I end up with this:

$$<0_\Sigma|a(R)|0_\Sigma>+\frac{<0_\Sigma|c(R)|0_\Pi><0_\Pi|c(R)|0_\Sigma>}{(<0_\Sigma|a(R)|0_\Sigma>-<0_\Pi|b(R)|0_\Pi>)} $$

But this is not the same as above. For the term ##<0_\Sigma|c(R)|0_\Pi><0_\Pi|c(R)|0_\Sigma>##, I can assume that ##|0_\Pi><0_\Pi|## is the identity (with many vibrational states this would be a sum over them, spanning the whole vibrational manifold of the ##\Pi## state), so I get ##<0_\Sigma|c(R)^2|0_\Sigma>##, but in order for the 2 expressions to be equal I would need:

$$\frac{<0_\Sigma|c(R)^2|0_\Sigma>}{(<0_\Sigma|a(R)|0_\Sigma>-<0_\Pi|b(R)|0_\Pi>)} =
<0_\Sigma|\frac{c(R)^2}{(a(R)-b(R))}|0_\Sigma>
$$

Which doesn't seem to be true in general (the first expression involves the vibrational states of the ##\Pi## state in its denominator, while the second one doesn't). Again, just to be clear: by, for example, ##<0_\Sigma|a(R)|0_\Sigma>##
I mean ##<0_\Sigma|<\Sigma|a(R)|\Sigma>|0_\Sigma>##, i.e. electronic + vibrational averaging.

What am I doing wrong? Shouldn't the 2 approaches i.e. vibrational averaging before or after the diagonalization + Taylor expansion give exactly the same results?
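To convince myself the mismatch is real and not an algebra slip, here is a toy numerical check (invented ##R##-dependences and a Gaussian "vibrational" density, nothing molecule-specific) comparing the average of the ratio against the ratio of the averages:

```python
import numpy as np

# Grid of internuclear distances and a normalized toy vibrational density.
R = np.linspace(0.8, 1.6, 2001)
w = np.exp(-((R - 1.2) / 0.1) ** 2)
w /= w.sum()  # discrete weights |psi(R)|^2 dR, normalized to 1

# Invented R-dependent diagonal elements and coupling; a - b > 0 everywhere.
a = 10.0 + 5.0 * (R - 1.2) ** 2
b = 4.0 - 3.0 * (R - 1.2)
c = 0.3 + 0.5 * (R - 1.2)

def avg(f):
    """Vibrational expectation value over the toy density."""
    return (w * f).sum()

average_of_ratio = avg(c**2 / (a - b))             # diagonalize, then average
ratio_of_averages = avg(c**2) / (avg(a) - avg(b))  # average, then diagonalize

print(average_of_ratio, ratio_of_averages)  # close, but not equal
```

The two differ whenever ##c(R)## and ##a(R)-b(R)## vary appreciably across the vibrational wavepacket; they only coincide to leading order when the electronic matrix elements are effectively constant over the vibrational state.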
 
  • #99
BillKet said:
Thank you for your reply. I will look at the sections you suggested for questions 1. For the second one, I agree that if that ##R_\alpha## is a constant, we can take the electronic integral out of the vibrational integral, but I am not totally sure why can we do this. If we are in the BO approximation, the electronic wavefunction should be a function of ##R##, for ##R## not constant, and that electronic integral would be a function of ##R##, too. But why would we assume it is constant? I understand the idea behind BO approximation, that the electrons follow the nuclear motion almost instantaneously, but I don't get it here. It is as if the nuclei oscillate so fast that the electrons don't have time to catch up and they just the see the average inter-nuclear distance, which is kinda the opposite of BO approximation. Could you help me a bit understand this assumption that the electronic integral is constant? Thank you!
You should read the original Born-Oppenheimer paper some day.
The point is that the electronic wavefunction changes on a length scale of ##O(1)##, while the nuclear wavefunctions are localized within ##O(\sqrt{m_\mathrm{e}/M_\mathrm{nuc}})## of the equilibrium distance. So you can expand the electronic matrix elements in a power series in ##R-R_0##. The vibrational matrix elements of ##(R-R_0)^n## are ##O((m_\mathrm{e}/M_\mathrm{nuc})^{n/2})##, so usually all terms but ##n=0## are negligible.
I think this expansion of the electronic dipole moment is called Herzberg-Teller coupling.
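Spelled out for the transition moment in B&C's Eqn. 6.332 (a sketch, writing ##R_0## for the equilibrium separation and ##d_{\eta'\eta}(R)## for the electronic matrix element ##<\eta'|T^1_q(d_{el})|\eta>##):

```latex
% Expand the R-dependent electronic matrix element about R_0:
d_{\eta'\eta}(R) \;=\; d_{\eta'\eta}(R_0)
  \;+\; \left.\frac{\partial d_{\eta'\eta}}{\partial R}\right|_{R_0}(R-R_0)
  \;+\; \mathcal{O}\big((R-R_0)^2\big)

% The vibrational integral then becomes, term by term,
\langle v'|\, d_{\eta'\eta}(R) \,|v\rangle
  \;\simeq\; d_{\eta'\eta}(R_0)\,\langle v'|v\rangle
  \;+\; d'_{\eta'\eta}(R_0)\,\langle v'|(R-R_0)|v\rangle \;+\; \dots
```

Keeping only the ##n=0## term is exactly the step in 6.332: the electronic factor comes out of the vibrational integral, leaving the overlap ##\langle v'|v\rangle##.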
 
  • #100
DrDu said:
You should read some day the original Born-Oppenheimer paper.
The point is that the electronic wavefunction changes on a length scale of ##O(1)##, while the nuclear wavefunctions are localized within ##O(\sqrt{m_\mathrm{e}/M_\mathrm{nuc}})## of the equilibrium distance. So you can expand the electronic matrix elements in a power series in ##R-R_0##. The vibrational matrix elements of ##(R-R_0)^n## are ##O((m_\mathrm{e}/M_\mathrm{nuc})^{n/2})##, so usually all terms but ##n=0## are negligible.
I think this expansion of the electronic dipole moment is called Herzberg-Teller coupling.
I am not sure how this answers my question. I agree with what you said about the perturbative expansion; this is basically what I used in my derivation via the Taylor series. My question was why the 2 methods I used (the 2 different perturbative expansions) don't give the same result. I also think that Herzberg-Teller coupling doesn't apply to diatomic molecules, no?
 
  • #101
BillKet said:
@amoforum Also the expectation value ##<\eta'|T_q^1(d_{el})|\eta>## is a function of R (the electronic wavefunctions have a dependence on R), so we can't just take them out of the vibrational integral like B&C do in 6.332. What am I missing?
I was trying to answer this question.
 
  • #102
I'm also going to invite @EigenState137 to chime in (see questions in post #98), as they seem to know more about diatomic spectroscopy than I do.

Neat trick, @BillKet! As far as question 1 goes, this feature (Taylor series = perturbation theory results) is not unique to the effective Hamiltonian. It's a mathematical fact that Taylor expanding the exact spectrum will give you the same results as perturbation theory of the same order, for any Hamiltonian. That's just the way perturbation theory works (if you want to convince yourself, review the derivation of PT in an undergrad level textbook; the grad level stuff is too stuffy and notational for a chump like me o0)). Alternatively, you can just try to directly compute the first and second order perturbation terms of the toy Hamiltonian ##\begin{pmatrix} a(R) & c(R) \\ c(R) & b(R) \end{pmatrix}##. To sum up, perturbation theory is just a shortcut to the terms in the Taylor series when you don't have a closed formula for the spectrum to begin with.
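If you want to see it symbolically for the 2x2 toy matrix, here's a sympy sketch (I substitute ##b = a - d## with ##d > 0## so the square root simplifies cleanly):

```python
import sympy as sp

a, d, c = sp.symbols('a d c', positive=True)
b = a - d  # so a - b = d > 0 by construction

# Exact upper eigenvalue of [[a, c], [c, b]].
lam = (a + b + sp.sqrt((a - b) ** 2 + 4 * c ** 2)) / 2

# Taylor expand the exact spectrum in the coupling c about c = 0.
taylor = sp.series(lam, c, 0, 4).removeO()

# Second-order perturbation theory for the same matrix: a + c^2/(a - b).
pt2 = a + c ** 2 / d

print(sp.simplify(taylor - pt2))  # -> 0
```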

Gimme some time to think about #2. I mostly responded just to get ES137 in on this. To my eyes, Eqn 7.85 looks more like your second expression, since ##\eta \neq 0##. Am I missing something?
 
  • #103
Twigg said:
I'm also going to invite @EigenState137 to chime in (see questions in post #98), as they seem to know more about diatomic spectroscopy than I do.

Neat trick, @BillKet! As far as question 1 goes, this feature (Taylor series = perturbation theory results) is not unique to the effective Hamiltonian. It's a mathematical fact that Taylor expanding the exact spectrum will give you the same results as perturbation theory of the same order, for any Hamiltonian. That's just the way perturbation theory works (if you want to convince yourself, review the derivation of PT in an undergrad level textbook; the grad level stuff is too stuffy and notational for a chump like me o0)). Alternatively, you can just try to directly compute the first and second order perturbation terms of the toy Hamiltonian ##\begin{pmatrix} a(R) & c(R) \\ c(R) & b(R) \end{pmatrix}##. To sum up, perturbation theory is just a shortcut to the terms in the Taylor series when you don't have a closed formula for the spectrum to begin with.

Gimme some time to think about #2. I mostly responded just to get ES137 in on this. To my eyes, Eqn 7.85 looks more like your second expression, since ##\eta \neq 0##. Am I missing something?
Thank you! I guess what confused me and made me ask the first question was that the B&C derivation of the effective Hamiltonian is long, and they basically give 2 (or 3) derivations for it, when all it is in the end is a Taylor series expansion. I was afraid I was missing something.

For the second question, I agree with you: 7.85 looks like my second expression, i.e. first diagonalize, then take the vibrational averaging. However, I am not sure why doing it the other way around, i.e. vibrational averaging of the matrix elements and then diagonalization, doesn't give the same result (or maybe it does and the 2 expressions are equivalent?). Actually, equation 7.69 in B&C confuses me even more. In that equation they seem to take the vibrational average only of the numerator, i.e. ##<i|H'|k>##, but not the denominator, so the denominator would still have an R dependence. But in 7.85, they imply that the vibrational averaging should include the denominator, too. That kinda makes me believe that the 2 approaches are equivalent (or they made a mistake?), but I am not sure why I don't get that in my derivation.
 
  • #104
Twigg said:
It's a mathematical fact that Taylor expanding the exact spectrum will give you the same results as perturbation theory of the same order, for any hamiltonian. That's just the way that perturbation theory works (if you want to convince yourself, review the derivation of PT in an undergrad level textbook-- the grad level stuff is too stuffy and notational for a chump like me o0)).
I fear that's not true in general. Most perturbation series in physics are singular perturbation series. Take the Born-Oppenheimer theory expanding the Hamiltonian in the ratio of nuclear to electron mass. The nuclear mass premultiplies the highest derivative (the second derivative with respect to R in diatomic molecules). Hence the zeroth order Hamiltonian would be qualitatively different from the case with finite nuclear mass.
 
  • #105
Greetings,

@Twigg , thank you for your confidence in me. I hope it is not totally misplaced.

I have rather quickly read over this thread and a few questions come immediately to mind that I would like to address to @BillKet. You opted to label this discussion at an "I" level. That surprises me because I would consider it a rather esoteric topic. Thus my questions:

Is this just an exercise in understanding how to construct a Hamiltonian? If so, I think you have already received substantive responses.

Is this part of a research program with which you are associated? If so, what is the research objective? To develop a Hamiltonian to be used for the analysis of experimental spectra?

If the objective is the analysis of experimental spectra, then you need to consider the experimental data in detail. Do you know what the diatomic molecule is? What is the spectroscopic resolution: is it sub-Doppler, for example? What angular momenta are relevant? Perhaps most importantly, why approach the analysis of a spectrum by creating a Hamiltonian rather than beginning with one of the numerous Hamiltonians already in the literature? Why begin by attempting to reinvent the wheel?

I would make two additional general comments.

First, keep it as simple as possible. Why on Earth even think about the quagmire that is the Stark Effect (DC and/or AC) unless absolutely necessary? Same for the Zeeman Effect.

Second, if this is indeed part of a formal research program, then I will not intrude on your research. Your research is for you to do in collaboration with your immediate colleagues and your mentor. ES
 
