B.1 Classical mechanics - University of Illinois,...

24
B-1 Appendix B: Classical, semiclassical and quantum tools for Chem 542 This appendix provides a brief summary of some tools used in classical, semiclassical, and quantum mechanics. For a more in-depth reference on classical mechanics, consult Landau & Lifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics and quantum mechanics are discussed in standard texts such as Sakurai's, Schiff's, Cohen-Tannoudji's, Bohm's or many others. B.1 Classical mechanics This section assumes that the reader is already familiar with the brief review in part 2 of the notes, or has had a similar introduction to classical mechanics, as often provided in Chem 540. 1.1 Least-action principle In part 2 of the notes, we motivated, using Cartesian coordinates, that energy conservation implies the following equation of motion 2-3 for the Lagrangian: d dt ( ˙ x i L ) x i L = 0 B.1.1.1 Comparing this to equation 94 from appendix A, we immediately see that the problem of trajectories in classical mechanics is one of the calculus of variations: the actual trajectory taken by the system is the one which minimizes the following functional of L, S[q i q i , t ] = L(q i (t ), ˙ q (t ), t)dt q 0 q final ; δS = 0. B.1.1.2 This functional is known as the action, and the fact that the actual path taken is the one which minimizes eq. 2, leading to eq. 1, is known as the Principle of Least Action. Because the derivations in appendix A.9 are independent of coordinate system, it is clear that eqs. 1 and 2 must hold for any coordinate system. Whatever coordinates we transform to from cartesian coordinates, the action is always minimized and the Lagrangian always obeys eq. 1. 1.2 Canonical transformations When we transform from one coordinate system to another in the Lagrangian formulation, we just define a mapping {q i } {q' i }. In Hamiltonian dynamics, we must also transform the {p i } but we cannot pick the transformation of the {q i } and {p i } independently; doing so would result in a Lagrangian which does not satisfy eq. 1. One way to do it right would be to transform back from Hamiltonian to Lagrangian dynamics, do the transformation, and then transform back to Hamiltonian dynamics. A more direct way is to stay within Hamiltonian dynamics. This is particularly convenient if one is dealing also with a quantum version of the problem because quantum dynamics are usually formulated in the Hamiltonian framework. (The Lagrangian formulation is known as 'Path Integrals,' and is covered in Chem 550 when offered.) Correct coordinate-momentum transformations are known as canonical transformations. They are effected by a generating function, which is defined as follows. Let L be the initial Lagrangian, and L' be the final one (which we seek to avoid in the end). Then, for a conservative system,

Transcript of B.1 Classical mechanics - University of Illinois,...

Page 1: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-1

Appendix B: Classical, semiclassical and quantum tools for Chem 542 This appendix provides a brief summary of some tools used in classical, semiclassical, and quantum mechanics. For a more in-depth reference on classical mechanics, consult Landau & Lifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics and quantum mechanics are discussed in standard texts such as Sakurai's, Schiff's, Cohen-Tannoudji's, Bohm's or many others. B.1 Classical mechanics This section assumes that the reader is already familiar with the brief review in part 2 of the notes, or has had a similar introduction to classical mechanics, as often provided in Chem 540. 1.1 Least-action principle In part 2 of the notes, we motivated, using Cartesian coordinates, that energy conservation implies the following equation of motion 2-3 for the Lagrangian:

ddt

( ∂∂˙ x i

L ) − ∂∂xi

L = 0 B.1.1.1

Comparing this to equation 94 from appendix A, we immediately see that the problem of trajectories in classical mechanics is one of the calculus of variations: the actual trajectory taken by the system is the one which minimizes the following functional of L,

S[qi , ˙ q i ,t] = L(qi(t), ˙ q (t), t)dt

q0

q final

∫ ; δS = 0. B.1.1.2

This functional is known as the action, and the fact that the actual path taken is the one which minimizes eq. 2, leading to eq. 1, is known as the Principle of Least Action. Because the derivations in appendix A.9 are independent of coordinate system, it is clear that eqs. 1 and 2 must hold for any coordinate system. Whatever coordinates we transform to from cartesian coordinates, the action is always minimized and the Lagrangian always obeys eq. 1. 1.2 Canonical transformations When we transform from one coordinate system to another in the Lagrangian formulation, we just define a mapping {qi} →{q'i}. In Hamiltonian dynamics, we must also transform the {pi} but we cannot pick the transformation of the {qi} and {pi} independently; doing so would result in a Lagrangian which does not satisfy eq. 1. One way to do it right would be to transform back from Hamiltonian to Lagrangian dynamics, do the transformation, and then transform back to Hamiltonian dynamics. A more direct way is to stay within Hamiltonian dynamics. This is particularly convenient if one is dealing also with a quantum version of the problem because quantum dynamics are usually formulated in the Hamiltonian framework. (The Lagrangian formulation is known as 'Path Integrals,' and is covered in Chem 550 when offered.) Correct coordinate-momentum transformations are known as canonical transformations. They are effected by a generating function, which is defined as follows. Let L be the initial Lagrangian, and L' be the final one (which we seek to avoid in the end). Then, for a conservative system,

Page 2: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-2

δS = δ L(q, ˙ q )dt∫ = δ L' (Q, ˙ Q )dt∫

= δ [ ˙ q p − H(q, p)]dt∫ = δ [ ˙ Q P − H' (Q, P)]dt∫= 0.

B1.2.1

according to eq. 2-5 (see part 2 of the notes). These equations can only be simultaneously valid if the middle and rightmost sides differ at most by a constant factor and a perfect differential with the same value at the endpoints:

δ afdt∫ = aδ fdt∫ ,

δ a dF(q(t), p(t),Q(t),P(t))dt

dtta

tb∫ = δ[F(ta ) − F(tb )] = 0. B.1.2.2

Therefore

c[ ˙ q p − H ] = ˙ Q P − H' + dFdt

. B.1.2.3

The generating function can be a function of q,p,Q,P, but must at least depend on one of the old and one of the new momenta/coordinates (the other is then implicit in Hamilton's equations. For example, consider a generating function of the 'first kind' F(q,Q). Taking the time derivative of F but also inserting into eq. 3 we have

dFdt

=∂F∂q

∂q∂t

+∂F∂Q

∂Q∂t

+∂F∂t

but also

=p ∂q∂t

− P ∂Q∂t

+ (H '− H ). B.1.2.4

Therefore p = ∂F/∂q and P = -∂F/∂Q defines the momenta, once the coordinate transformation F(q,Q) has been chosen. Of course, other Fs, such as the generating function of the second kind F2(q,P) are also possible. As an example, consider the action-angle variables for the harmonic oscillator, the simplest version of the problems discussed in section 2.3.1. In cartesian coordinates, the Hamiltonian is

H =p2

2µ+k2x 2 . B.1.2.5

This looks very similar to the equation of a circle (x2+y2=1), for which a simpler equation in angular/radial coordinates is r=1. Thus, one should be able to bring the Hamiltonian into a form H' = f (I) B1.2.6 using a transformation of the type

x = 2 f (I )

ksinθ

p = 2µ f (I ) cosθ. B.1.2.7

Here, I will be our new momentum and θ will be its new canonically conjugate coordinate. In order to satisfy eq. B.1.1.2, not just any function f will do, and we can determine the correct f using a generating function of the first kind. Taking the ratio of the equations in 7, f(I) drops out and we have

Page 3: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-3

p = µk x cot θ = ∂F / ∂x B.1.2.8 The solution of this simple differential equation is

F =µk2

x2 cot θ + F0 , B.1.2.9(11)

where F0 is a constant. We now have our generating function of the first kind F(q,Q). Our new momentum is therefore

I = −∂F∂θ

=µk2

x2

sin2θ B.1.2.10

from which we can solve for x and p using eq. 8:

x = 2I

µksinθ

p = 2I µk cosθ. B.1.2.11

Finally, the transformed Hamiltonian becomes

H ' = k

µI =ω I = ωn B.1.2.12

where we used the definition of frequency in terms of k and µ, and defined n ≡ I / . Because H' does not depend on θ, I is a constant of the motion. The correspondence principle between quantum numbers and action variables for the harmonic oscillator is also clear from eq. 12. From Hamilton's equations,

˙ θ =∂H∂I

= ω or θ = ωt + θ0 . B.1.2.13

This is why we call this coordinate system 'action-angle' variables. Whenever the coordinate has been reduced to the form eq. 13 and H is independent of θ, the conjugate momentum I automatically satisfies the following for a full cycle of the motion (θ from 0 to 2π): 1

2π I(E)dθcycle∫ = I(E) 1

2π dθcycle∫ = I(E) . B.1.2.14

But in one dimension, the integral on the left is in general invariant to canonical transformation; keeping in mind that F=F(q,Q) we have

12π PdQ

cycle∫ = 1

2π − ∂F∂Q

dQcycle∫

= 12π [∂F

∂q∂q∂Q

− dFdQ]dQ

cycle∫

= 12π [∂F

∂q∂q∂Q]dQ

cycle∫

= 12π pdq

cycle∫

B.1.2.15

The chain rule takes us from the first to second line. The cancellation between the second and third lines arises because F has the same value at the endpoints of a closed loop around a full cycle of the motion. Therefore it must always be true that the canonical momentum defined such

Page 4: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-4

Fig. B-1 On a bound potentialsurface, a quantum particle cannot sit at the zero point withoutviolating the uncertainty princi-ple. In the p=0 plane, E=V(q).

E

p

q

Area ~2h

Area ~h Zero point particle-not allowed

that the Hamiltonian is coordinate-independent and the coordinate has the form θ = θ0+ωt, is the action-angle variable given by P = I = 1

2πp(q, E)dq = I(E)∫ B.1.2.16

thus proving the generality of eq. 2-17.

B.2 Semiclassical mechanics According to the uncertainty principle, a quantum particle moving on a potential surface and plotted in phase space cannot be a point (x,p). Rather, it has to be extended over an area ≈h (volume hn in n dimensions). As shown in figure B-1, this implies that on a bound potential surface, a quantum particle must have a zero point energy, and that energy levels must have a spacing so the change in area from one level to the next is ~h. It should be possible to derive a theory in which h is a variable. As h→0, we recover classical mechanics, and as h approaches its true value, we obtain quantum mechanics. Such a theory indeed exists in one dimension. In optics, it is called the eikonal theory (which connects classical ray optics with wave optics), and in mechanics, we call it the Jordan-Wentzel-Kramers-Brillouin theory. (In many dimensions, the existence of classical chaos, which is not possible in quantum mechanics because phase space locations smaller than hn cannot be defined, makes the theory much more complicated. See the books by Gutzwiller or de Almeida for details.) 2.1 WKB theory At constant energy and in one dimension, eq. 1.1.2 and the connection between L and H in eq. 2-5 imply that the classical action is given by

Sclass(x,t) = pdx'x 0

x

∫ − Et . B.2.1.1(19)

An eigenstate Ψ(x) of the Hamiltonian H = −2 / 2m ∂2 / ∂x2 +V (x) satisfies the Schrödinger equation

Page 5: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-5

[−

2

2m∂2

∂x2+V (x)]Ψ(x) = EΨ(x) . B.2.1.2

For the present discussion we will restrict ourselves to continuous real-valued potentials. The time-dependent part of such an eigenstate is exp[-iEt/]. The x-dependent part is different for different potentials V(x). We can therefore consider the following Ansatz for this eigenstate:

Ψ(x,t) = eiS( x ) / h where S(x,t) = y(x' )dx '−Et

x 0

x

∫ . B.2.1.3

Of course, there is no obvious reason why S or y in eq. 3 should equal Sclass or p in eq. 1. Nonetheless, this replaces the function Ψ by the new function y. To see what differential equation y has to obey, let us insert eq. 3 into the Schrödinger equation 2. After taking the derivatives (keeping in mind that ∂/∂x [yΨ] = Ψ∂y/∂x + y∂Ψ/∂x) and letting y(x0)=0 (an arbitrary choice of overall phase), we obtain the following equation for y:

y2

2m+V (x) = E +

i2m

dydx

B.2.1.4

Except for the rightmost term, this looks just like the definition of the classical Hamiltonian. Unfortunately the rightmost term makes eq. 4, known as the quantum Hamilton-Jacobi equation, a differential equation. Unlike the Schrödinger equation it is only first order, but nonlinear. (If the rightmost term is truncated, eq. 4 becomes the classical Hamilton-Jacobi equation, which formally solves Hamilton’s equations of motion in terms of the generating function S. Thus quantum mechanics formally reduces to classical mechanics in the limit → 0 .) The important thing about eq. 4 is that also unlike the Schrödinger equation, it does not give us nonsense as we let → 0 . Eq. 4 obeys the correspondence principle: we recover the definition of the classical Hamiltonian and see that in the classical limit y becomes the momentum p (or at least |y| = |p|). With eq. 4 we have an alternative formulation of quantum mechanics which is amenable to taking the semiclassical limit. Let us now assume that is small, and expand y in a Taylor series in ,

y = y0 +

iy1 +

i

⎛⎝⎜

⎞⎠⎟2

y2 + B.2.1.5

Rewriting the qHJ equation as

y2 − 2m(E −V ) = i dy

dx B.2.1.6

and inserting eq. 5 we obtain

y02 +2iy0y1 +− 2m(E −V ) = i dy0

dx+ 2

dy1dx

+. B.2.1.7

In order for this to consistently go to the classical limit, all terms of a given order of on the left and right hand sides must converge together, and so we have

y02 = 2m(E −V ) ⇒ y0 = ± 2m(E −V ),

2y0y1 = −dy0 / dx⇒ y1 =dV / dx

4(E −V ),

y12 + 2y0y2 = −dy1 / dx⇒ y2 =

5V '2+ 4V ''(E −V )32 2m (E −V )5/2 , etc.

B.2.1.8

In the classical limit, the even terms (y0, y2, ...) depend on an odd power of p and can have ± signs, while the odd terms always are given as in eq. 8. We can therefore write y as

Page 6: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-6

y± = ±y0 +

iy1

2y2 + , B.2.1.9

depending on whether the momentum is positive or negative. The most general form for the wavefunction is thus

Ψ(x,t) = c+e

i /h y +dx'x 0

x

∫+ c−e

i /h y− dx'x0

x

∫⎧ ⎨ ⎪

⎩ ⎪ ⎫ ⎬ ⎪

⎭ ⎪ e−iEt /h , B.2.1.10

where c+ and c- are constants in front of the positive and negative momentum terms. For example, c- = 0 would correspond to a wave packet of well-defined energy propagating in the positive x direction. The eigenfunction Ψ(x) will of course have both positive and negative momentum contributions. As a matter of fact, for an eigenstate of a bound potential well, the net momentum expectation value must be 0 (or else the quantum particle would drift out of the potential well to ±∞!), and the contributions from positive and negative momentum must be equal. Therefore c+ = c-. This ensures that the eigenfunction Ψ goes to 0 as x goes to ±∞. Another condition one would like to enforce is that the coordinate part of the eigenfunctions be purely real. Any continuous bounded non-degenerate 1-D Hamiltonian can have a purely real set of eigenfunctions. This is easy to see: we can calculate the eigenfunctions for any such Hamiltonian H using a harmonic oscillator basis, which is real and has real matrix elements; the Hamiltonian matrix is therefore real-symmetric, and the eigenvectors are real. The eigenfunctions of H are thus given by real linear combinations of real functions. We can therefore take c+ = c- to be real at the point x0 where the exponential phase terms in 10 vanish. For Ψ to remain real not just at x0 but anywhere on the x axis, the integrals in equation 10 must therefore satisfy

i

y+dx 'x0

x∫ =r+nπ i

i

y−dx 'x0

x∫ =r'+n'π i

B.2.1.11

where r and r' are real numbers and n and n' are integers; that way the exponentials in eq. 10 can only be equal to ±er or ±er' at x, and Ψ remains real. As it turns out, both eqs. in 11 above end up enforcing the same condition, and we can look in more detail at just one of them. The function Ψ has essential singularities of the square-root type at the two turning points where V(x) = E (see fig. B-2a). This makes it difficult to evaluate the integrals in eq. 11 on the real axis, and so we extend Ψ(x) into the complex x-plane by analytic continuation. The singularities are connected to branch cuts, which we can choose as shown in fig. B-2b. The function y has two leaves connected by these branch cuts, one corresponding to the positive branch of the root in eq. 9, and one to the negative branch. (See also fig. A-2, which depicts a similar situation.) Above the cuts, we are on the leaf where y=y+, and below the cuts, we move to the leaf y=y-. First, we rewrite the upper integral in eq. 11 as

y+dx '

C1∫ =-ir+ nh

2 B.2.1.12

The contour C1 goes from x0 (below the inner turning point) to x (above the outer turning point), as shown in fig. B-2b. Adding the complex conjugate to equation 12, one obtains

y+dx'C1∫ + y+dx'

C1∫

⎣ ⎢ ⎢

⎦ ⎥ ⎥

*

= nh . B.2.1.13

Page 7: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-7

Fig. B-2. (a) shows the potential surface V(x) with two turning points. If Ψ(x) is analytically continued into the complex plane, the situation is as shown in (b). Contour C2 is the complexconjugate of contour C1, and if we go aroundC2 in reverse the two make a closed contourwith integral nh. The circles of the closed con-tour make no contribution because their phasecancels, so the integral around C is equal tothe integrals along the straigh lines, or twicethe integral of y between the turning points.

E

x

V(x)

Re(x)

Im(x)

C1

C2x0 x

(a)

(b)Branchcut

But y+* = -y-, and the complex conjugate of contour C1 is C2, so y+dx'

C1∫ - y−dx'

C2∫ = nh . B.2.1.14

If we traverse C2 in reverse (changing the sign of the second integral), it and C1 form a closed loop C. Furthermore, y+ and y- switch as we go across branch cuts just as required by analytic continuation, so they can be replaced by y. This yields the final integral

ydx '

C∫ = y0dx '

C∫ +

iy1dx '

C∫ +=nh . B.2.1.15

In eq. 8, y1 is a simple singularity, so its contribution to eq. 15 can be evaluated explicitly by the residue theorem (see appendix A):

i

y1C∫ dx =

i1

4(E − V)dVdxC

∫ dx

=i

dV4(E − V )C∫

=i

2πi Res{ 14(E − V)

}turning points∑

= h{− 14−

14

} = − h2

B.2.1.16

In eq. 8, y0 has an essential singularity of the square-root type, but near the singular points |y0| ~|E-V|1/2, which vanishes at the singular point. The integrals around the circles in fig. B-2b are thus zero. Furthermore, the two integrals along the straight line segments add up: they go in

Page 8: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-8

opposite directions (one - sign), but the bottom one is in the lower branch of eq. 9 (second - sign which cancels the first). Thus we can write eq. 15 as

2 2m(E −V (x))xmin

xmax

∫ dx + = h(n + 12 ) B.2.1.17

where integration is between the turning points. Eq. 17 is the first order semiclassical expression relating the energy level E to the quantum number n, and can be used to solve for E(n) after evaluating the integral. The extra factor of 1/2 in 17 provides the zero point energy; it comes from eq. 16 and is known as the Maslov index. Depending on the number of turning points in the potential function, the Maslov index can vary. Although the situation can get more complicated, generally each turning point at energy E contributes 1/4 to the Maslov index. B.3 Time dependent quantum mechanics 3.1 The Schrödinger and Heisenberg time-dependent pictures

One way to view the principal difference between classical mechanics and quantum mechanics is in the way they treat conjugate variables. Conjugate variables are pairs whose products have units of Planck’s constant, ≈ 6.62.10-34 J.s (or kg-m/s2 . m). The main example encountered in time-independent quantum mechanics is x and p, so we review them briefly.

Position and momentum are independent variables in classical mechanics; both are needed to specify the initial condition of a trajectory, after which the trajectory can be computed in principle for all future times. (In practice, the extreme sensitivity of trajectories to initial conditions, called classical chaos, may make that difficult.) Rewriting Newton’s equation F = ma = m x as two first-order differential equations, we obtain

x = v = p / m, p = F = −

∂H∂x

B.3.1.1

Making further use of x = p / m and H = p2/2m + V(x) in Cartesian coordinates, we obtain Newton’s 3d law in the form of Hamilton’s equations

x = ∂H

∂ p, p = −

∂H∂x

B.3.1.2

One can prove that these equations hold in any canonical coordinate system, not just in Cartesian coordinates. To integrate them to yield x(t) and p(t), x0 and p0 must be specified.

In quantum mechanics, x and p are not independent variables. They are Fourier-conjugate, which means that the operator p is given in terms of position by (see Appendix A)

p = −i∂ / ∂x B.3.1.3 In the position representation, the system is entirely specified by the wavefunction Ψ(x), and the expectation value of the momentum is obtained as <p> = -i∫dx Ψ∗(x)∂Ψ(x)/∂x = <x|p|x>. Vice-versa, the position operator and position expectation value are given in the momentum representation by analogous equations. The Fourier-conjugate relationship leads to the Heisenberg principle ΔpΔx ≥ / 2 B.3.1.4 When the equal sign holds, this principle is not indicative of any ‘uncertainty’ in x and p (Heisenberg himself called it the ‘Unschärfeprinzip,’ not the ‘Unsicherheitsprinzip’). It is far more radical: it states that x and p are not independent variables, and it makes no sense to attempt to specify them independently.

Page 9: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-9

Fig. B-3. Inverse relationship between awavein the time domain and in the frequency do-main, connected by Fourier tranform. Thecomplex Fourier tranform is actually twotransforms, a sine transform yielding AS anda cosine transform yielding AC, such that I =AS

2+AC2, and = tan-1(AS/AC).A(t)

t

A(t)

t

I( )

I( )

0

+ phase information

+ phase information

=2 / Fourier

Transform

narrow wide

wide narrow

This is normal for Fourier-conjugate variables, and shows up classically in the property of

waves. The best-known classical Fourier principle is ω = i∂ / ∂t B.3.1.5 ΔωΔt ≥ 1 / 2 B.3.1.6

where t is time and ω is the angular frequency 2πν. Time and frequency are not independent variables, and figure B-3 illustrates how waveforms can be represented equivalently as a function of time or as a function of frequency via the Fourier transform. Time derivatives in the time domain correspond to multiplication by –iω in the frequency domain. The key is that waveforms narrow in time will be broad in frequency, and vice-versa. This is again not an uncertainty, but a property of waves: a very short chunk of a waveform simply does not have a single frequency, but must be synthesized out of a very large number of frequency components sinωit and cosωit.

Using Planck’s relationship E = hν between the frequency of electromagnetic radiation and the corresponding energy difference, eqs. 5 and 6 can be rewritten as

E = i∂ / ∂t B.3.1.7 ΔEΔt ≥ / 2 B.3.1.8

Schrödinger realized that the full quantum theory of motion would involve replacing p and E by the corresponding Fourier-conjugate operators in terms of x and t (eqs. 3 and 8), and applying the resulting operators to a function Ψ(x,t) to obtain a differential equation describing the motion of the system:

H = E⇒ [ p2

2m+V (x)]Ψ(x,t) = i ∂

∂tΨ(x,t)

⇒ [− 2

2m∂2

∂x2+V (x)]Ψ = i ∂

∂tΨ

B.3.1.9

The time dependent Schrödinger equation of course cannot be derived from classical dynamics, but it is very similar in form to Hamilton’s equations of motion. The Schrödinger equation is actually two coupled differential equations: it contains a factor i, so the wave function Ψ(x,t) = Ψρ(x,t) + iΨι(x,t) is a complex function, which is really two functions. Splitting the Schrödinger equation into its independent real and imaginary components, it can be rewritten as

Ψ i = −HΨr / Ψ r = +HΨ i /

B.3.1.10

to be compared with eq. 2. Such pairs of coupled differential equations for two functions, one having a (-) sign, the other a (+) sign, are called symplectic, or ‘area preserving.’ Classically this means an area ΔxΔp is mapped into an equal area in phase space at a later time. Quantum

Page 10: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-10

mechanically it means that the normalization of the wave function is preserved, and that the Heisenberg principle of eq. 4, once satisfied, will always be satisfied.

The solutions to eq. 9 can be very complex (pun intended) functions of position and time. The time-dependent Schrödinger equation has one particularly simple kind of solution when the wave function can be split into position and time-dependent parts of the form

Ψ(x,t) = Ψ(x)e− iEt / B.3.1.11 Inserting this Ansatz into eq. 9, we obtain the time-independent Schrödinger equation, which solves for the stationary position-dependent part only. Thus the states we usually think of as time-independent eigenstates actually have a simple time-dependence, given by the phase factor exp[-itEi/] from equation 11. Their phase oscillates, and after a period τ = π / E , Ψ(x,τ ) = −Ψ(x,0) .

Quantum systems are not generally in an eigenstate, but evolve in time just as classical systems do. However, if we know the complete set of eigenstates, it is easy to compute the dynamics for any initial state Ψ(x,t=0). Switching to Dirac notation |t> for the time evolving state and |i> for the eigenstates at energy Ei, we insert a complete set of states:

| 0 >= Σ | i >< i | 0 >= Σci0 | i > B.3.1.12 At t>0, the stationary states in eq. 12 simply evolve with the phase factor of equation 11, so the state |t> evolves as | t >= Σ ci0e

− iEi t / | i > B.3.1.13 This formally reduces the problem of time-dependent quantum mechanics to finding the overlap of the initial state with all eigenstates. Of course, in practice finding all eigenstates and their eigenenergies is not a trivial task.

There is another formal solution of the time-dependent Schrödinger equation useful for derivations, although again it is difficult to use for direct computation. Rewriting equation 9 as

HΨ = i∂Ψ / ∂t ⇒ iHdt = dΨ /Ψ , B.3.1.14 we can integrate both sides to obtain the formal solution Ψ(t) = e

− iHt /Ψ(0) = U(t)Ψ(0) . B.3.1.15 This is an operator equation, which requires exponentiation of an operator, the Hamiltonian, to obtain the time evolution operator U, also known as the propagator. If the Hamiltonian were represented in terms of a matrix

H (via its matrix elements Hij in some basis {|j>}), this would correspond to computing the exponential of a matrix, or in a Taylor approximation, exp(i

Ht / ) =I + iHt / − (

Ht / )2 + , B.2.1.16

where I is the identity matrix. This involves a large number of matrix multiplications even

when a truncated approximation is used, a computationally very expensive task. Only if the basis {|j>} for H is the eigenbasis is the matrix multiplication easy, but then we are back to solving the full eigenvalue problem. At very short times, only the linear term is significant, but note that the truncated operator does not preserve the norm of the wave function (after all,

IΨ by itself is already normalized). The linear approximation by itself will blow up at long times.

Figure B-4 considers two examples of time evolution. In one case, a quantum particle is moving in a double well potential. The ground state |0> and first excited state |1> wave functions are shown. A system in one of these eigenstates is delocalized over both sides of the well, i.e. a classical statement such as ‘ammonia in its lowest energy state has all three protons below the nitrogen nucleus’ is meaningless. But now consider the system in the initial state |t = 0> = |0>+|1>. This state is localized on the left side of the well. It time-evolves as | t >=| 0 > +e− iΔEt / | 1 > B.3.1.17 When t = h/2ΔE, the wave function has evolved to |t = h/2ΔE> = |0> - |1>. The system is now localized on the right side of the well. It has tunneled through the barrier separating the two

Page 11: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-11

Fig. B-4. Eigenstates and resulting wave packets in a double well tunneling problem, and for a harmonic oscillator.

<x|0><x|1>E

E

x

E

x

<x|t=0> ~ <x|0>+<x|1>

E

x

<x|t= / E> ~ <x|0>-<x|1>

tunneling

x

E

v

v+1

v+2

v+3

x

E

(t=0) = v+ v+1+ v+2+ v+3

x

E

(t= / ) ~ v v+1+ v+2 v+3

Time evolution

Time evolution

Wave packetis localized,closer to aclassicalparticle.

wells, even though both eigenstates have energies below the barrier, a classically forbidden process.

In the second example, consider exciting a wave packet consisting of several successive harmonic oscillator eigenstates. They will tend to add up on the left, and cancel on the right (because odd/even quantum number eigenstates switch ‘lobes’ from positive to negative on the right side). The resulting wave packet is localized on the left side. As it time-evolves, the phases of the higher energy states advance more rapidly, until eventually the lobes on the right add up and the ones on the left cancel. The wave packet now has moved to the right side, in a time t = 1/2ω, where ω is the harmonic oscillator vibrational frequency. This is very similar to what a classical particle would do: move through half a vibrational cycle.

In the Schrödinger picture, operators such as p and x are time-independent, and the wave function depends on time. In classical mechanics, p(t) and x(t) depend on time directly. Quantum mechanics can also be formulated in this way, and indeed, was originally formulated in this way by Werner Heisenberg. To see how this comes about, remember that wave functions are not observable; quantum mechanics instead computes expectation values of observables as matrix elements of operators. For example, <p(t)> = < Ψ(t) | p |Ψ(t) > tells us what the average expected value of p is at time t, and we could compute higher moments <p(t)n> to determine the full distribution of values of p.

Rewriting this explicitly in terms of the wave function at t = 0, an observable A's expectation value as a function of time is given by

< Ψ(t)*ΨS

| AOS |Ψ(t)

ΨS

>= < Ψ(0)*ΨH

| +iHte A −

iHte

OH

|Ψ(0)ΨH

>=< Ψ(0) | A(t) |Ψ(0) > B.3.1.18

where U = − iHt /he is the time-evolution operator. By inserting eq. 13 twice on the left hand side to express Ψ(t) in terms of eigenfunctions, we obtain an expression for the expectation value of

Page 12: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-12

any operator in terms of its diagonal and off diagonal matrix elements, and phase factors exp(–i[Ei-Ej]/ ).

As shown by the right hand side of eq. 18., instead of taking the wave functions as time-dependent and operators as time-independent, we can take the operator as time-dependent and the wave functions as time-independent:

AH(t) = e+ iHtA(0)e

− iHt

B.3.1.19 Taking the derivative of eq. 19, one obtains the equation of motion (compare with the Liouville equation in chapter 2):

dAdt

=1i[A(t),H]+ ∂A

∂t B.3.1.20

The second term is usually zero since Schrödinger operators do not usually depend explicitly on time, so we have

dAdt

=1i[A(t),H] . B.3.1.21

This is the Heisenberg equation of motion. It is a more natural equation to use with operators in matrix form because the commutator of two matrices is relatively easy to compute. It does not simplify the computational problem because the time derivative still means that formally exponentiations are involved in the solution, just like in eq. 15. However, eq. 21 makes the following statement transparent: operators that commute with the Hamiltonian do not evolve in time, and share eigenfunctions with the Hamiltonian.

Eq. 21 is very close to Hamilton’s equations of motion in form. Inserting x and p and evaluating the commutators,

dxdt

=1i[x,H] = 1

2im[x, p2 ] = p / m =

∂H∂p

dpdt

=1i[p,H] = 1

i[p,V(x)] = −∂V (x) / ∂x = −

∂H∂x

B.3.1.22

Taking the expectation values on both sides of eq. 22, we obtain Ehrenfest’s theorem for the expectation values as a function of time.

3.2 Time dependent perturbation theory: the goal Time-independent perturbation theory is based on the idea that if H = H0 + V , and the matrix elements or eigenfunctions of H0 are known, we can derive corrected energy levels based on the smallness of the perturbation V compared to H0 . Time dependent perturbation theory follows a very similar idea. Let us say we have a Hamiltonian H(t) = H0 + V(t) , B.3.2.1

where V(t) is a small time dependent perturbation (of course the special case where V(t) is time-independent is also allowed). We know how to propagate Ψ(0) to get Ψ(t) for H0 alone, but we do not know how to propagate Ψ(0) for the full Hamiltonian. We want to derive an approximate propagator that is based on the smallness of V(t) compared to H0 . A typical case where this is useful would be a molecule interacting with a weak time-dependent radiation field, a molecule interacting with a surface or with another molecule which is treated only via a

Page 13: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-13

potential V(t) instead of fully quantum mechanically, or the time-dependent decay of an energy level due to coupling to other levels as in a dissociation reaction or spontaneous emission. 3.3 TDPT: Interaction representation Time-dependent perturbation theory attempts to replace the full propagator

U(t) = Ote−i

dt 'H (t ')0

t

∫ B.3.3.1

by a simpler propagator valid only for short times or small V(t). The problem with eq. 1 is twofold: 1) U(t) is an exponential function, which is very difficult computationally; 2) if H(t) is time-dependent, it may not commute with itself at a different time, and certainly not with H0. For example, an expression of the type U† (t)ρ0U (t) B.3.3.2 is unambiguous when H is time-independent, but leads to commutation problems when V(t) ≠ 0. Hence a time-ordering operator Ot has to be included in eq. 1. To make the propagator simpler, we would like to replace eq. 1 by a power series expansion. At a first glance, it seems reasonable to try a Taylor expansion of the type

Ψ(t) =U(t)Ψ(0) = [1− i

dt 'H (t ') +]

0

t

∫ Ψ(0) . B.3.3.3

The problem is that while V(t) may be very small, H0 usually is not. By expanding it also, eq. 3 can be valid only for the very shortest of times. Furthermore, the eigenstates of H0 , or its action on Ψ(0) are often known. Therefore it would be better to use an expansion which leaves the H0 part of the propagator in exponential form, and expands only the V(t) part in a power series. We achieve this by introducing a new propagator defined as

UI (t) ≡ e+iH0 tU(t) , B.3.3.4

the so-called propagator in the interaction representation. Consider how this propagator acts on a wavefunction Ψ(0): first U(t) propagates the wavefunction properly to time t; then the exponential undoes (+ sign in exponent) only the part of the propagation done by H0 . This leads to a much 'slower' propagation of the wavefunction by the new operator UI (t) . In fact, if V(t) = 0 then U(t) = exp[-iH0t/] andUI (t) is simply the identity operator, not propagating the wavefunction at all. The propagator obeys the Schrödinger equation just like the wavefunction (but as an operator differential equation). For example, if U(t) = exp[-iH0t/] , we have

HΨ(t) = i ∂Ψ(t)∂t

⇒ HU(t)Ψ(0) = i ∂U(t)∂t

Ψ(0)

⇒ HU(t) = i ∂U(t)∂t

B.3.3.5

Solving 4 for U(t) and inserting into the Schrödinger equation with Hamiltonian B.3.2.1,

H0e

−iH0 tUI (t) +V (t)e

−iH0 tUI (t) = H0e

−iH0 tUI (t) + ie

−iH0 t ∂UI (t)

∂t. B.3.3.6

Canceling identical terms on both sides and multiplying by exp[iH0t/] on both sides yields

VI (t)UI (t) = i

∂UI (t)∂t

, B.3.3.7

where VI (t) ≡ exp[iH0t / ]V (t)exp[−iH0t / ] . The interaction propagator thus satisfies a much more slowly evolving Schrödinger equation, from which all time evolution due to H0 has

Page 14: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-14

disappeared. Integrating both sides, we have the fundamental equation of time-dependent perturbation theory

UI (t) = I −

i

dt 'VI (t)UI (t)0

t

∫ . B.3.3.8

The identity operator is the integration constant because when t=0 the integral vanishes and we must have UI (t)=I to satisfy the boundary conditions. When the propagator in the interaction representation propagates Ψ(0), it does not yield Ψ(t) of course. Rather,

Ψ I (t) ≡UI (t)Ψ(0) = e+iH0 tU(t)Ψ(0) = e

+iH0 tΨ(t) . B.3.3.9

The interaction wavefunction is just the 'slower propagated' wavefunction discussed below eq. B.3.3.4: it consists of the full wavefunction Ψ(t) with the propagation by H0 undone. As long as we can easily propagate with H0 , we can move back and forth between the interaction representation and the usual Schrödinger representation. 3.4 Formal time-dependent perturbation theory We can use eq. B.3.3.8 as the basis for a perturbation series expansion. Because H0 no longer shows up, we would only be expanding the 'small' VI(t) part and avoid the pitfalls of eq. B.3.3.3. (Actually, H0 does show up in eq. B.3.3.8 in the definition of VI, but only in exponential form, which is preserved even if we expand UI in series form.) The idea is thus the following: propagate Ψ(0) using an approximation derived from eq. B.3.3.9; this yields ΨI. The latter is easy to convert back to Ψ(t) using the zero-order propagator, since we presumably know how to deal with H0 . We generate different orders of perturbation by starting with the zero-order solution of B3.3.8, valid only when t=0: UI (t) = I . We then insert this back into the right hand side of B3.3.8 and keep doing so iteratively to generate better solutions:

Zero order: UI (t) = I

First order: UI (t) = I −i

dt 'VI (t ')I0

t

Second order: UI (t) = I −i

dt 'VI (t ') I − i

dt ''VI (t '')I0

t '

∫⎧⎨⎪

⎩⎪

⎫⎬⎪

⎭⎪0

t

= I − i

dt 'VI (t)I0

t

∫ −12 dt 'VI (t ') dt ''VI (t '')

0

t '

∫0

t

. B.3.4.1

If we apply the initial state to both sides of eq. 1 we obtain the same series in terms of the interaction wavefunction:

Page 15: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-15

Zero order: Ψ I (t) = Ψ(0)

First order: Ψ I (t) = Ψ(0) − i

dt 'VI (t ')Ψ(0)0

t

Second order: Ψ I (t) = Ψ(0) − i

dt 'VI (t ') I − i

dt ''VI (t '')Ψ(0)0

t '

∫⎧⎨⎪

⎩⎪

⎫⎬⎪

⎭⎪0

t

= Ψ(0) − i

dt 'VI (t)Ψ(0)0

t

∫ −12 dt 'VI (t ') dt ''VI (t '')Ψ(0)

0

t '

∫0

t

B.3.4.2

The prescription of first order TDPT is thus: take the known initial wavefunction; calculate VI from the known V and H0 ; integrate the product of Ψ(0) and VI to time t; subtract that result from the initial wavefunction to get the approximate interaction wavefunction; finally, use the known H0 propagator to go from ΨI (t) to Ψ(t) . 3.5 TPT when the eigenstates of H0 are known In many applications, the eigenenergies Ej of H0 are known and the matrix elements Vji = <j|V(t)|i> can be calculated, leading to explicit expressions for B.3.4.2. Consider the example of first order TDPT. Let |Ψ(0)>=|i> and expand

|Ψ(t) >= cji (t) | j(t) >

j∑ = cji (t)e

− iE j t / | j >j∑ B.3.5.1

from which follows

|ΨI( t) >= e

iH0 t /h cji(t)e−iE ft /h | j >

j∑ = cji(t) | j >

f∑ . B.3.5.2

The first order term in eq. B.3.4.2 then becomes

cji (t) | j >j∑ =| i > −

i

dt 'e+ iH0 t '/V (t ')0

t

∫ e− iH0 t '/ | i > . B.3.5.3

Taking the matrix element with a final state <f| and using the fact that |f> and |i> are eigenfunctions of H0 we have

cfi (t) =< f | i > −i

dt 'e+ iE f t '/ < f |V (t ') | i >0

t

∫ e− iEi t '/

= δ fi −i

dt 'ei(ω f −ω i )tVfi (t ')0

t

∫. B.3.5.4

Evidently, the time-dependent correction is proportional to the Fourier transform of the time dependent potential evaluated at ω f −ω i if we allow that V=0 for t'<0 and t'>t. Similarly, the higher order perturbation corrections are simply nested Fourier transforms where series of matrix elements such as VfkVki ... appear instead of just Vfi. The sum of all |cfi|2 should remain unity and cii should start out at 1 and decrease in magnitude, while the others start out at zero and increase in magnitude. 3.6 The Golden Rule Consider a state |0> which is initially the only one populated. It is coupled by time independent matrix elements Vj0 to a manifold of states {|j>}. Population will leak out of the state |0>, and the question is: how fast is the rate of that population leakage? We are given

Page 16: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-16

H = H0 + V, H0 | 0 >= E0 | 0 >, H0 | j >= Ei | j >, Vj0 =< j |V | 0 > . B.3.6.1 A typical example of this problem might be a vibronic state which is undergoing predissociation into a continuum, or a highly excited vibrational state which is decaying into a manifold of other vibrational states after excitation. According to first order TDPT,

cj0 = −

i

dt 'eiω j 0 tVj00

t

∫ = −Vj0eiω j 0 t −1ω j0

B.3.6.2

The population going to state |j> is given by the square of that amplitude,

Pj0 =| cj0 |2=|< t | 0 >|2=

|Vj0 |2

2ω j02 {2 − e

iω j 0 t − e− iω j 0 t}

=2 |Vj0 |

2

2ω j02 {1− cosω j0t}

=4 |Vj0 |

2

2ω j02 sin2 (ω j0t / 2)

=|Vj0 |

2 t 2

2sinc2 (ω j0t / 2)

B.3.6.3

The total transition probability for leaving state |0> is obtained by summing eq. 3 over all states |j>. The sum can be approximated by an integral, if we assume that the density of states ρ(E) of the manifold {|j>} is large and independent of energy, and if Vj0 is independent of j (and thus of energy):

Ptot = 1− Pj0j≠0∑ ≈ 1− dEρ(E)Pj0∫ (E)

≈ 1− |Vj0 |2 ρ(E0 )

t 2

2dE sinc2 ([E − E0 ]t / 2)

−∞

+∞

≈ 1− 2π

ρ(E0 ) |Vj0 |2 t = 1− kGRt

B.3.6.4

Let us review some of the limitations of eq. 4. 1) It was derived with first order TDPT and is valid only for short times. 2) Because we introduce an integral, it holds only for a true continuum of constant density of states; variations in the density of states are not allowed for; moreover, a finite density of states has to roll off as a sum of cosines, or 1-at2, as one can see by expanding |<0|t>|2 in eigenstates that Ptot at early times. 3) It is only valid if |0> is coupled to every other state by either the same average coupling, or at least by randomly distributed couplings independent of energy. The Golden Rule is a mean field theory. To extend eq. 4 to longer times, we can assume that we have n = t/Δt such steps of short duration Δt that are independent of one another, or

Ptot ≈ limn→∞1 − kGRt

n⎛ ⎝

⎞ ⎠

n

= e−k GRt . B.3.6.5

A succession of infinitely many infinitely short first order PT steps predicts an exponential decay. Most theories that derive exponential decays like eq. 5 make overt or hidden assumptions similar to 1)-3) discussed above. Real quantum systems generally violate 3). This has little effect at short times, where replacing matrix elements by their ensemble averages does not greatly affect the dynamics. At long times, the steps Δt are no longer independent of one another; they are correlated by the fluctuations in the matrix elements V0i. The rate constant kGR

Page 17: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-17

can be expected to be a good predictor of the 1/e decay envelope of Ptot, but one should never expect the decay of a quantum system to be exponential at very long times, and if the quantum system has a finite density of states, one should not expect it at very short times either. The main effect of correlating time steps is that they are no longer independent. Rather than letting n→∞, n should be the finite number of effectively independent steps. To a first approximation, one might therefore expect quantum systems to decay as

Ptot ≈ (1 −σ ) 1+ktD

⎛ ⎝

⎞ ⎠

−D

+σ . B.3.6.6

σ≠0 for a finite density of states ρ(Ε0) because there are only kGRρ(Ε0) states under the spectral envelope of an exponential decay (see 3.1.6). If the quantum system decays statistically at long times such that there is an equal probability of finding it in any one of its states, then even state |0> will be populated with probability σ=1/(kGRρ(Ε0) ). Of course this approaches 0 as {|i>} becomes a true continuum. 3.7 Absorption of a Gaussian electromagnetic pulse Consider a molecule with a dipole moment subjected to a z-polarized Gaussian electromagnetic pulse. The (nonconservative) Hamiltonian becomes H = H0 −

µ • ε(t), where ε(t) = εe− t

2 /Δt2 cos(ωt +ϕ )Z . B.3.7.1 ˆ Z is a unit vector in the space-fixed z direction. To make the example more concrete, let the

molecule be a diatomic AB with rotation-vibration Hamiltonian

H0 = −

2

2µ∂2

∂r2+V (r) + 2

2µr2J 2 B.3.7.2

and dipole moment

µ = µ(r)z . B.3.7.3

In these equations, µ is the reduced mass mAmB/(mA+mB), r is the internuclear distance, V(r) is the vibrational potential energy, ˆ J the rotational angular momentum operator, µ(r) the dipole moment as a function of bond distance, and ˆ z a unit vector pointing along the bond axis, which is the molecule-fixed z axis. We will assume we know the eigenfunctions |vJM> and energy levels EvJM of this Hamiltonian. They depend on three quantum numbers: the vibrational quantum number v, the total angular momentum quantum number J, and the orientational quantum number M. The eigenfunctions are of the form |vJM> = |vJ>|JM> because Hmol commutes with the angular momentum operator ˆ J :

−2

2µ∂2

∂r2+V (r) + 2

2µr2J 2

⎛⎝⎜

⎞⎠⎟| vJ >| JM >

= −2

2µ∂2

∂r2+V (r) + 2

2µr2J(J +1)

⎛⎝⎜

⎞⎠⎟| vJ >| JM >

= EvJM | vJ >| JM >

B.3.7.4

Taking a matrix element with <JM| on both sides we obtain the eigenvalue equation for |vJ>:

−2

2µ∂2

∂r2+V (r) + 2

2µr2J(J +1)

⎛⎝⎜

⎞⎠⎟| vJ >= EvJM | vJ > B.3.7.5

Typically this could be approximately solved by calculating matrix elements in a harmonic oscillator basis |v> and diagonalizing the matrix. If J is small and V(r) is approximately a parabola, eq. 5 approximately becomes the harmonic oscillator equation, so |vJ> ≈ |v> under those conditions and |vJM> ≈ |v>|JM>. The details of how to derive a Hamiltonian such as eq. 5

Page 18: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-18

and its eigenfunctions and energy levels are discussed in chapters 5 through 9. The interaction term in the Hamiltonian explicitly becomes −

µ • ε(t) = µ(r)cosθεe− t

2 /Δt2 cos(ωt +ϕ ) B.3.7.6 where θ is the angle between the ˆ z and ˆ Z axes. Taking the transition matrix element,

Vfi (t) = − < v f J f M f | µ(r)cosθεe− t2 /Δt2 cos(ωt +ϕ ) | vi JiMi >

= − < v f J f | µ(r) | vi Ji >< J f M f | cosθ | JiMi > εe− t2 /Δt2 cos(ωt +ϕ )

= −µ fiεe− t2 /Δt2 cos(ωt +ϕ )

B.3.7.7

As seen in chapter 5, µfi is nonzero only if Jf = Ji±1, corresponding to an R branch or P branch transition. Doing first order time-dependent perturbation theory, we obtain

cfi =

iµ fiε

dt '

−∞

∫ eiω fi te− t2 /Δt2 cos(ωt +ϕ ) . B.3.7.8

Note that the integration is extended to ±∞ instead of 0 to t; we want to know the final transition probability, and we can extend the integration because the pulse has a cutoff Δt, so the integrand is effectively zero for values much outside the range (-Δt, Δt). The integral is simply the Fourier transform of a Gaussian, which yields another Gaussian

cfi =

iµ fiε πΔt2

eiϕe−Δt2 (ω +ω fi )

2 /4 + e− iϕe−Δt2 (ω −ω fi )

2 /4( ) . B.3.7.9

Taking the absolute value squared to obtain the transition probability,

Pfi =

µ fi2 ε 2πΔt 2

42e−Δt

2 (ω +ω fi )2 /2 + e−Δt

2 (ω −ω fi )2 /2 + 2cos2ϕe−Δt

2 [(ω +ω fi )2 +(ω −ω fi )

2 ]/4( ) . B.3.7.10

If the phase of the incoming radiation is random (e.g. the glowbar of a IR spectrometer, or even if a laser, the molecules are at random spatial positions) , the third term in parenthesis averages out to zero and we have our final result

Pfi =

π 3/2

23/2µ fiε

⎝⎜⎞

⎠⎟

2

Δt Δt2π

e−Δt2 (ω +ω fi )

2 /2 +Δt2π

e−Δt2 (ω −ω fi )

2 /2⎛⎝⎜

⎞⎠⎟ . B.3.7.11

Again, the probability increases linearly with time just as for the Golden Rule. The first term in parentheses is called the Rabi frequency (see chapter 11), while the second term in parentheses contains the normalized Gaussian lineshapes. The ω − ω fi term is resonant upon absorption, while the ω +ω fi term is resonant only for negative frequencies, corresponding to stimulated emission. If we took the limit Δt → ∞ in the lineshape, it would approach a delta function. If we had taken a box-shaped pulse (constant from 0 to t), the lineshape would have been a sinc function, as discussed in the previous section. 3.8 Spectra and dipole correlation functions We can make a more direct connection between the dynamics of molecular dipoles and the observed spectrum, one transition of which is given in eq. B.3.7.11. For simplicity, we will assume that the spectrum is scanned with tunable monochromatic light, so the lineshapes in B.3.7.11 can be approximated by delta functions. We will also assume that stimulated emission is negligible compared to absorption (i.e. the temperature is low enough and the laser weak enough so the upper state population is negligible. The average transition rate is Pfi/Δt which becomes

Page 19: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-19

Γ fi =

π2

⎛⎝⎜

⎞⎠⎟3/2

ε 2

2µ fi2δ (ω −ω fi ) . B.3.8.1

In general, there will be many thermally populated states i, and many final states f to which they can go. If Q is the partition function for the system,

Γtot =

π2

⎛ ⎝

⎞ ⎠

3/ 2 ε 2

Q2µ fi2δ(ω −ω fi)

i , f∑ e−Ei / kT . B.3.8.2

If we write out the matrix elements of the dipole operator explicitly and Fourier-transform the delta function, this can be rewritten as

Γ fi = π2

⎛ ⎝

⎞ ⎠

3/ 2 ε2

Q2< f | µ | i > • < i | µ | f > δ(ω −ω fi)

i, f∑ e−Ei / kT

=π2

⎛ ⎝

⎞ ⎠

3/ 2 12π

ε2

Q2dt

−∞

∫ ei(ω −ω fi ) t < f |

µ | i > • < i |

µ | f >i, f∑ e−E i / kT

=12π

dteiωtC(t)−∞

. B.3.8.3

where the dipole correlation function C(t) is given by

C(t) =

π4

ε 2

Q 2ei (Ei −Ef )t / e−Ei / kT < i |

µ | f > • < f |

µ | i >

i, f∑ . B.3.8.4

Eq. 4 can be rearranged to make it clearer why we call it a dipole correlation function. Making use of the fact that |f> and |i> are eigenstates of H0 , eliminating a complete set of states to obtain a trace, and making use of the fact that traces are invariant to operator rotation,

C(t) = π4

ε 2

Q2< i | eiEit / µ | f > • < f | e− iE f t / µe−Ei /kT | i >

i, f∑

=π4

ε 2

Q2< i | eiH0 t /µ | f > • < f | e− iH0 t /µe−H0 /kT | i >

i, f∑

=π4

ε 2

Q2< i | eiH0 t /µ • e− iH0 t /µe−H0 /kT | i >

i∑

=π4

ε 2

Q2Tri e

iH0 t /µe− iH0 t / • µe−H0 /kT{ }

=π4

ε 2

Q2Tri µ(t)• µ(0)e−H0 /kT{ } = π

4ε 2

2µ(t)• µ(0) thermal

. B.3.8.5

This is an autocorrelation function because we compute the overlap of the dipole vector at a later time with itself at time zero. It is quite a remarkable result because it says that the spectrum of a quantum system can be obtained by Fourier-transforming the temporal autocorrelation of its dipole vectors.

Classically, eq. 5 becomes a thermal phase space average over the classical dipole correlation function, so one can use equation 5 to extract spectra even from classical simulations. If you had a single molecule, µ(t) would simply be its dipole moment function, and the Boltzmann factor would disappear. Note that the equilibrium correlation function is independent of time (i.e. <µ(t+Δt)µ(Δt)> still yields the same result as <µ(t)µ(0)>). This property is always true for the survival probability in eq. B.3.6.3, when calculated exactly for a time-independent Hamiltonian:

Page 20: B.1 Classical mechanics - University of Illinois, UCscs.illinois.edu/mgweb/Course_Notes/chem349/notes/notes.appB.pdfLifshitz or Goldstein's Classical Mechanics. Semiclassical mechanics

B-20

shifting Ψ(t) in time simply adds an overall phase factor that goes away when P(t) = |<0|t>|2 is evaluated, that is, |<0|t>|2 = |<Δt|Δt+t>|2 because the Δt parts of the bra and ket propagators cancel out.

The classical autocorrelation function differs from the quantum mechanical one in an important aspect: µ(t) and µ(0) do not commute quantum mechanically, so C(t) can have real and imaginary parts quantum mechanically. As a result, the intensity function I(ω) is not necessarily symmetrical quantum mechanically. The classical C(t) is always real, and so the Fourier transform in equation 3 is symmetrical (the antisymmetric part integrates out to zero). For a harmonic oscillator of frequency ω, the correction factor is (ω / kT ) / (1− exp[−ω / kT ]) , and this can be used to approximately correct the classical intensity function I(ω). Let us consider a simple example of a correlation function. Let the dipole of an ensemble of molecules decay exponentially with time τ. Eq. 3 immediately yields (if we symmetrize the classical exponential decay at negative times)

e− |t |/τ F.T.⎯ →⎯⎯ L(ω ) = 1

πτ −1

(ω −ω fi )2 + τ −2 , B.3.8.6

the famous Lorentzian line shape. Note that propagating Ψ(t) backwards in time also generates a decaying P(t) = |<0|t>|2 since <0|-t> = <-t|0>* = <t|0>. Similarly, a multi-degree-of freedom classical system will decay to a state of higher entropy even when run backwards in time because low-entropy states (e.g. “all gas atoms in one corner of a box) are fragile due to chaos. The finite value of means there can be no true chaos in quantum mechanics, but decays backwards in time occur anyway because of the time-reversal property of bilinear forms like <0|t>.

For our diatomic molecule in section B.3.7, the natural line profile might be further modified by Doppler broadening. The Boltzmann distribution for any of the three velocity components is

Pdvi = e−Ei (vi )/kT dvi = e

−mvi

2

2kT dvi . B.3.8.7

(In 3-D it would be exp(−m[vx2 + vy

2 + vz2 ] / 2kT)dvxdvydvz = 4π exp(−mv

2 / 2kT)v2dv , but a light beam generally passes through a sample only in one dimension.) Inserting the relation for the Doppler shift Δv /ν0 = v / c into eq. 7 one obtains instead of eq. 6 the Doppler profile

P(ν) ~ e−mc 2

2 kTν −ν0ν 0

⎝ ⎜

⎠ ⎟ 2

. B.3.8.8 If both natural and Doppler broadening are in effect, the overall lineshape is called a Voight profile and obtained by convolution V(ω) = L(ω)⊗ G(ω ) (see appendix A). In the Fourier domain, that is for the correlation functions, convolution simply becomes multiplication, which is another reason for working with correlation functions: to obtain the effect of several independent broadening processes, one can just multiply the correlation function together. This generally yields a faster decay than any of the individual functions, and hence a broader lineshape after Fourier transformation. There are exceptions: an oscillatory correlation function (e.g. caused by a particle trapped by collisions) could increase the product at certain times, leading to a “motional narrowing” effect. An example is Dicke narrowing, where collisions over a certain range of pressures actually narrow the lineshape as pressure increases. 3.9 Adiabatic approximation Time dependent perturbation theory works well if most of the Hamiltonian is time-independent, with a small time-dependent correction. A different approximation is called for if the differentiable Hamiltonian H(t) changes by a large amount over time, but at least does so slowly. In that case, we cannot split H into an H0 and a V(t), but we can make use of the adiabaticity (slowness) of the transformation.
3.9 Adiabatic approximation

Time dependent perturbation theory works well if most of the Hamiltonian is time-independent, with a small time-dependent correction. A different approximation is called for if the differentiable Hamiltonian H(t) changes by a large amount over time, but at least does so slowly. In that case, we cannot split H into an H_0 and a V(t), but we can make use of the adiabaticity (slowness) of the transformation.

At any instant t, we can find eigenstates and eigenvalues of H(t) by solving


H(t)\,|\phi_n(t)\rangle = E_n(t)\,|\phi_n(t)\rangle .      B.3.9.1

Of course, these are not solutions of the actual equation we want to solve,

H(t)\,\Psi(t) = i\hbar\,\dot{\Psi}(t) ,      B.3.9.2

but in the limit where H varies infinitely slowly we expect the adiabatic eigenstates to become accurate solutions of the time-dependent Schrödinger equation. They form a complete set, so we can expand the solution as

\Psi(t) = \sum_n c_n(t)\,|\phi_n(t)\rangle\, \exp\!\left[-\frac{i}{\hbar}\int_0^t E_n(t')\,dt'\right] ,      B.3.9.3

where the c_n are coefficients to be determined, starting with the initial condition c_n(0) = <φ_n(0)|Ψ(0)>. (If we start in one of the states at time 0, say φ_0(0), the initial coefficient vector will be zero except for row "0".) The exponential factor is not strictly necessary, but we might as well factor it out of the c_n at this stage because we know that in the limit where H does not depend on t, the solution is Ψ(t) = Σ_n c_n |φ_n> exp[−iE_n t/ℏ] with constant c_n (but see the caveat at the end of the section!). Thus, by factoring out the energy phase factor, the c_n become time independent when H is time independent. If H varies slowly, we therefore expect the c_n to vary only slowly from their initial values (e.g. from unity for the initially populated state).

To obtain an equation for the cn, we insert the Ansatz 3 into the Schrödinger equation 2,

H(t)\sum_n c_n(t)\,|\phi_n(t)\rangle\, e^{-\frac{i}{\hbar}\int_0^t E_n(t')dt'}
= i\hbar\sum_n \dot{c}_n(t)\,|\phi_n(t)\rangle\, e^{-\frac{i}{\hbar}\int_0^t E_n(t')dt'}
+ i\hbar\sum_n c_n(t)\,|\dot{\phi}_n(t)\rangle\, e^{-\frac{i}{\hbar}\int_0^t E_n(t')dt'}
+ \sum_n c_n(t)\,E_n(t)\,|\phi_n(t)\rangle\, e^{-\frac{i}{\hbar}\int_0^t E_n(t')dt'} .      B.3.9.4

Canceling equal terms on both sides using eq. 1, this reduces to

\sum_n \left\{ \dot{c}_n(t)\,|\phi_n(t)\rangle + c_n(t)\,|\dot{\phi}_n(t)\rangle \right\} e^{-\frac{i}{\hbar}\int_0^t E_n(t')dt'} = 0 .      B.3.9.5

Projecting onto the bra <φ_k(t)| (which collapses the first summation to the single term with n = k) and dividing both sides by the phase factor exp[−(i/ℏ)∫_0^t E_k(t')dt'], we obtain the coupled first-order equations

\dot{c}_k(t) = -\sum_n c_n(t)\,\langle \phi_k(t)|\dot{\phi}_n(t)\rangle\, \exp\!\left[\frac{i}{\hbar}\int_0^t \left(E_k(t')-E_n(t')\right)dt'\right] .      B.3.9.6

In matrix form, these equations can be written as

\begin{pmatrix} \dot{c}_1 \\ \dot{c}_2 \\ \vdots \end{pmatrix} = -\begin{pmatrix} \Phi_{11} & \Phi_{12} & \cdots \\ \Phi_{21} & \Phi_{22} & \cdots \\ \vdots & & \ddots \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \end{pmatrix}      B.3.9.7

where Φ_kn = <φ_k(t)|(d/dt)φ_n(t)> exp[(i/ℏ)∫_0^t (E_k(t')−E_n(t'))dt'], and the initial condition is c_n(0) = <φ_n(0)|Ψ(0)> (for a system starting in φ_0, a 1 in row "0" and zeros elsewhere). These equations can be integrated by conventional means, or we can find a solution analogous to first-order time dependent perturbation theory by identifying (i/ℏ)V_I(t) with Φ(t) and obtaining to first order c(t) ≈ c^(0) − ∫_0^t dt' Φ(t')·c^(0), where c^(0) denotes the initial column vector of all 0s except in row "0." Because H(t) is differentiable, we can simplify the overlap matrix element <φ_k(t)|(d/dt)φ_n(t)> somewhat for computations. Taking the derivative of equation 1, we obtain

\dot{H}\,|\phi_n\rangle + H\,|\dot{\phi}_n\rangle = \dot{E}_n\,|\phi_n\rangle + E_n\,|\dot{\phi}_n\rangle .      B.3.9.8

Projecting onto the bra <φ_k|, this reduces to


\langle \phi_k|\dot{\phi}_n\rangle = \frac{\langle \phi_k|\dot{H}|\phi_n\rangle}{E_n - E_k}      B.3.9.9

for the case n ≠ k. (In the case n = k, differentiate the normalization: d/dt <φ_n(t)|φ_n(t)> = d/dt 1 = 0 = <(d/dt)φ_n(t)|φ_n(t)> + <φ_n(t)|(d/dt)φ_n(t)> = <φ_n(t)|(d/dt)φ_n(t)>* + <φ_n(t)|(d/dt)φ_n(t)> = 2 Re <φ_n(t)|(d/dt)φ_n(t)>. The diagonal element is therefore purely imaginary, as it must be for iℏΦ in eq. 7 to be Hermitian, and it vanishes altogether if the eigenfunctions can be chosen real, as they can for a non-singular Hamiltonian; see, however, the discussion of Berry's phase below.)
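The coupled equations 6 and 7 are straightforward to integrate numerically. As a minimal illustration (a hypothetical two-level model with ℏ = 1, not taken from the text), consider H(t) with diagonal elements ∓Δ(t) and off-diagonal coupling V, where Δ(t) = αt is swept slowly through the avoided crossing. For this real symmetric case the instantaneous eigenstates are determined by a single mixing angle θ(t) = atan2(V, −Δ), the only nonvanishing coupling is <φ_+|(d/dt)φ_−> = −θ̇/2, and eq. 6 reduces to two equations that can be integrated with a standard Runge-Kutta step; the lower-state population should stay near 1 if the sweep is slow:

```python
import numpy as np

# Hypothetical two-level model, hbar = 1: H(t) = [[-Delta, V], [V, Delta]],
# with Delta(t) = alpha*t swept slowly from -alpha*T/2 to +alpha*T/2.
V, alpha, T = 1.0, 0.1, 200.0

def gap(t):                       # E_+ - E_- = 2*sqrt(Delta^2 + V^2)
    return 2.0 * np.sqrt((alpha * t)**2 + V**2)

def theta_dot(t):                 # d/dt of the mixing angle theta = atan2(V, -alpha*t)
    return V * alpha / ((alpha * t)**2 + V**2)

# tabulate the accumulated gap integral once, then interpolate; a constant
# offset in the lower integration limit only rephases c_+ and c_-
tgrid = np.linspace(-T / 2, T / 2, 20001)
phase_tab = np.concatenate(
    ([0.0], np.cumsum(0.5 * (gap(tgrid[1:]) + gap(tgrid[:-1])) * np.diff(tgrid))))

def phase(t):
    return np.interp(t, tgrid, phase_tab)

def rhs(t, c):                    # eq. B.3.9.6 for the pair (c_+, c_-)
    cp, cm = c
    ph = np.exp(1j * phase(t))
    return np.array([+0.5 * theta_dot(t) * cm * ph,
                     -0.5 * theta_dot(t) * cp / ph])

# fixed-step fourth-order Runge-Kutta, starting in the lower adiabatic state
c, dt = np.array([0.0 + 0j, 1.0 + 0j]), 0.01
for t in np.arange(-T / 2, T / 2, dt):
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    c += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("final lower-state population |c_-|^2 =", abs(c[1])**2)   # ~1: adiabatic
```

Making α larger (a faster sweep) lets population leak into the upper adiabatic state, as expected when the adiabatic approximation breaks down.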

Adiabatic time evolution plays an important role in NMR and optical spectroscopy, where a Hamiltonian can be tuned (e.g. by a slowly varying magnetic field, or by an electric field causing a Stark effect). Tuning from an initial state at t = 0 back to the same state at time τ can lead to some very interesting behavior. We can think of this as tracing out a path H(ω(t)) in 'Hamiltonian space,' where ω(t) is a periodic parametric path function such that ω(τ) = ω(0) again. At first glance, one might think that very slow time evolution, starting in state Ψ(0) = φ_0(0), would yield from eq. 3

\Psi(t) = |\phi_0(t)\rangle\, \exp\!\left[-\frac{i}{\hbar}\int_0^t E_0(t')\,dt'\right]      B.3.9.10

(i.e. c_0 remains 1 and all other c_n remain 0). In that case, the initial state φ_0 would evolve back to φ_0 and accrue only the phase factor exp[−(i/ℏ)∫_0^τ E_0(t')dt'] in the process. This is not so! To see why, take the somewhat more general solution guess

\Psi(t) = |\phi_0(t)\rangle\, \exp\!\left[-\frac{i}{\hbar}\int_0^t E_0(t')\,dt'\right] e^{i\gamma(t)}      B.3.9.11

and insert it into the time dependent Schrödinger equation 2. After canceling identical terms on both sides and projecting onto the bra <φ0(t)|, we obtain for γ

\dot{\gamma}(t) = i\,\langle \phi_0(t)|\dot{\phi}_0(t)\rangle \quad\text{or}\quad \gamma(t) = i\int_0^t dt'\,\langle \phi_0(t')|\dot{\phi}_0(t')\rangle \neq 0 .      B.3.9.12

This additional phase is not zero in general. Writing φ_0 explicitly in terms of the path ω(t) as φ_0(ω(t)) and using the chain rule ∂/∂t y(x(t)) = (∂y/∂x)(∂x/∂t), eq. 12 integrated to t = τ becomes

\gamma(\tau) = i\int_0^\tau dt\, \Big\langle \phi_0(\omega(t))\Big|\frac{\partial\phi_0}{\partial\omega}\Big\rangle \frac{\partial\omega}{\partial t} = i\oint d\omega\, \Big\langle \phi_0(\omega)\Big|\frac{\partial\phi_0}{\partial\omega}\Big\rangle .      B.3.9.13

Thus, the wave function returns to φ_0(0) exp[−(i/ℏ)∫_0^τ E_0(t')dt'] at the end, but with an additional phase γ(τ). This phase is called "Berry's phase" or the "geometrical" phase: for very slow time evolution, it no longer depends on the details of the time evolution, but only on the path ω traced out by the Hamiltonian. If the potential energy is singular inside the closed loop ω, the contour integral in 13 does not vanish. Berry's phase plays a role in nonadiabatic transitions of wave packets at conical intersections, where a wave packet can trace out a closed circuit around the conical intersection (see chapter 6).
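A minimal numerical illustration of the geometric phase (a hypothetical spin-1/2 model, not the molecular example above): let H = B·σ follow a field of unit magnitude that traces a cone of polar angle θ_0 once around the z axis. A gauge-invariant discrete version of the loop integral in eq. 13, γ = −Im ln Π_j <φ(ω_j)|φ(ω_{j+1})>, should approach −Ω/2 = −π(1 − cos θ_0), half the solid angle enclosed by the path:

```python
import numpy as np

# Hypothetical spin-1/2 model: H = B.sigma, |B| = 1, field on a cone of polar
# angle theta0, azimuth swept once from 0 to 2*pi in N discrete steps.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta0, N = 0.7, 2000
phis = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

states = []
for phi in phis:
    n = np.array([np.sin(theta0) * np.cos(phi),
                  np.sin(theta0) * np.sin(phi),
                  np.cos(theta0)])
    H = n[0] * sx + n[1] * sy + n[2] * sz
    evals, evecs = np.linalg.eigh(H)
    states.append(evecs[:, 1])        # eigenstate aligned with the field (E = +1)

states.append(states[0])              # close the loop
prod = 1.0 + 0.0j
for a, b in zip(states[:-1], states[1:]):
    prod *= np.vdot(a, b)             # <phi_j | phi_j+1>; arbitrary eigh phases cancel

gamma = -np.angle(prod)               # discrete -Im ln of the overlap product
print("numerical Berry phase    :", gamma)
print("-pi*(1 - cos(theta0))    :", -np.pi * (1 - np.cos(theta0)))
```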

3.10 Computational solution methods

Many methods suitable for computational implementation of quantum time-propagation have been developed. Here we briefly illustrate two simple yet powerful schemes to show the range of approaches that have been tried.

The first is called the "Shifted Update Rotation" method or SUR. Consider the symplectic form of the Schrödinger equation 3.1.10 again. As the simplest illustration, we consider a "one-level" system. For a one-level system at energy E with initial amplitude c_0 (|c_0| = 1 so Ψ is normalized), the time evolution is trivial: c(t) = c_r(t) + i c_i(t) = c_0 exp[−iEt/ℏ]. Inserting c(t) into eq. 3.1.10 yields


\dot{c}_r = +E\,c_i/\hbar , \qquad \dot{c}_i = -E\,c_r/\hbar .      B.3.10.1

Integrating from t to t+Δt yields

c_r(t+\Delta t) = \cos(\omega\Delta t)\,c_r(t) + \sin(\omega\Delta t)\,c_i(t) , \qquad c_i(t+\Delta t) = \cos(\omega\Delta t)\,c_i(t) - \sin(\omega\Delta t)\,c_r(t) ,      B.3.10.2

where ω = E/ℏ. As expected, time evolution is simply a phase rotation. For a small time step, one might expand the trigonometric functions in a Taylor series and use the approximate rule

c_r(t+\Delta t) \approx c_r(t) + \omega\Delta t\,c_i(t) , \qquad c_i(t+\Delta t) \approx c_i(t) - \omega\Delta t\,c_r(t)      B.3.10.3

for time propagation. This would not work, no matter how small a time step is chosen. To see why, write the equation in matrix form

\begin{pmatrix} c_r(t+\Delta t) \\ c_i(t+\Delta t) \end{pmatrix} \approx \begin{pmatrix} 1 & \omega\Delta t \\ -\omega\Delta t & 1 \end{pmatrix} \begin{pmatrix} c_r(t) \\ c_i(t) \end{pmatrix} \quad\text{or}\quad \mathbf{c}(t+\Delta t) = U(\Delta t)\,\mathbf{c}(t) .      B.3.10.4

In this notation, c(t + Δt) is the coefficient vector one time step Δt later than c(t). Here det(U) = 1 + ω²Δt² > 1. Thus the norm of the state is not preserved, and repeated application of algorithm 4 will blow up the wave function. A simple modification of eq. 3 fixes this problem: update c_r before the second equation is evaluated:

c_r(t+\Delta t) \approx c_r(t) + \omega\Delta t\,c_i(t) , \qquad c_i(t+\Delta t) \approx c_i(t) - \omega\Delta t\,c_r(t+\Delta t)      B.3.10.5

Inserting the first equation into the second so both equations are at time t on the right hand side, and writing in matrix form, yields

\begin{pmatrix} c_r(t+\Delta t) \\ c_i(t+\Delta t) \end{pmatrix} \approx \begin{pmatrix} 1 & \omega\Delta t \\ -\omega\Delta t & 1-\omega^2\Delta t^2 \end{pmatrix} \begin{pmatrix} c_r(t) \\ c_i(t) \end{pmatrix} .      B.3.10.6

Now det(U) = 1. Unimodularity of the time propagation is satisfied, and the norm is preserved on average (it may still oscillate about 1). For many coupled basis states |j>, the SUR update of eq. 5 becomes

c_j^r(t+\Delta t) = c_j^r(t) + \frac{\Delta t}{\hbar}\sum_k H_{jk}\,c_k^i(t)
c_j^i(t+\Delta t) = c_j^i(t) - \frac{\Delta t}{\hbar}\sum_k H_{jk}\,c_k^r(t+\Delta t) .      B.3.10.7

This algorithm converges with a quadratic phase error in the time step Δt. It is the simplest of a class of propagators called "symplectic" because they have the symplectic structure of Hamilton's equations of motion and are thus area- (or norm-) preserving. Eq. 7 produces results completely equivalent to the "symplectic leapfrog" algorithm in common use in molecular dynamics packages, but with half the memory requirement (c_r does not need to be stored separately and can be overwritten immediately at each time step) and with several fewer add-multiply operations.
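A minimal sketch of the SUR update of eq. 7 (ℏ = 1, with a small random real symmetric matrix standing in for the Hamiltonian; all parameters are illustrative). Updating c_r first and reusing the new value immediately in the c_i update keeps the norm near 1 for a sufficiently small time step, whereas the naive simultaneous update of eq. 3 lets the norm grow without bound:

```python
import numpy as np

# Random real symmetric model Hamiltonian (hbar = 1, illustrative only).
rng = np.random.default_rng(0)
n = 8
H = rng.normal(size=(n, n))
H = 0.5 * (H + H.T)

dt, nsteps = 0.01, 20000
cr = np.zeros(n); cr[0] = 1.0          # start in basis state |0>
ci = np.zeros(n)
cr_e, ci_e = cr.copy(), ci.copy()      # copies for the naive update of eq. 3

for _ in range(nsteps):
    # SUR, eq. 7: update cr, then reuse the *new* cr in the ci update
    cr = cr + dt * (H @ ci)
    ci = ci - dt * (H @ cr)
    # naive simultaneous update, eq. 3: both right-hand sides use old values
    cr_e, ci_e = cr_e + dt * (H @ ci_e), ci_e - dt * (H @ cr_e)

print("SUR   norm after", nsteps, "steps:", np.sqrt(np.sum(cr**2 + ci**2)))
print("naive norm after", nsteps, "steps:", np.sqrt(np.sum(cr_e**2 + ci_e**2)))
```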

The second algorithm is called the “Feit-Fleck” propagator, and it is based on a very different starting point. Consider eq. 3.1.15 again. If the propagator is left in exponential form, no norm-preservation problems arise. Splitting the Hamiltonian into kinetic and potential energy terms, we have

U(t) = e^{-iHt/\hbar} = e^{-i[K(p)+V(x)]t/\hbar} \approx e^{-iK(p)t/\hbar}\, e^{-iV(x)t/\hbar} .      B.3.10.8

The rightmost expression (known as the Trotter formula) is only approximately correct because K and V do not commute. In fact,


e^{\lambda(A+B)} = 1 + \lambda(A+B) + \frac{\lambda^2}{2}(A+B)^2 + \cdots = 1 + \lambda(A+B) + \frac{\lambda^2}{2}(A^2+B^2) + \frac{\lambda^2}{2}[A,B]_+ + \cdots ,      B.3.10.9

whereas expanding the Trotter formula yields

e^{\lambda A}\,e^{\lambda B} = \left(1 + \lambda A + \frac{\lambda^2}{2}A^2 + \cdots\right)\left(1 + \lambda B + \frac{\lambda^2}{2}B^2 + \cdots\right) = 1 + \lambda(A+B) + \frac{\lambda^2}{2}(A^2+B^2) + \lambda^2 AB + \cdots ,      B.3.10.10

which is already incorrect in second order: it contains λ²AB instead of the symmetrized anticommutator term (λ²/2)[A,B]_+. Extremely small time steps would have to be taken to use the approximation in eq. 8. Fortunately, the propagator U can be split in many other ways that are more accurate. The simplest is

U(t) \approx e^{-iK(p)t/2\hbar}\, e^{-iV(x)t/\hbar}\, e^{-iK(p)t/2\hbar} .      B.3.10.11

Expanding this formula in a Taylor series, we obtain

e^{\lambda A/2}\,e^{\lambda B}\,e^{\lambda A/2} = \left(1 + \frac{\lambda}{2}A + \frac{\lambda^2}{8}A^2 + \cdots\right)\left(1 + \lambda B + \frac{\lambda^2}{2}B^2 + \cdots\right)\left(1 + \frac{\lambda}{2}A + \frac{\lambda^2}{8}A^2 + \cdots\right)
= 1 + \lambda(A+B) + \frac{\lambda^2}{2}(A^2+B^2) + \frac{\lambda^2}{2}[A,B]_+ + \cdots .      B.3.10.12

This is correct to quadratic order in the phase, and yields good results even with large time steps. There is still a problem: starting with Ψ(x, t=0), how does one propagate the kinetic energy part, which depends on the differential operator p = −iℏ∂/∂x? The answer is simple: use the Fourier transform to switch back and forth between Ψ(x) and Ψ(p):

\Psi(p) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} dx\, e^{-ixp/\hbar}\,\Psi(x) \quad\text{and}\quad \Psi(x) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} dp\, e^{ixp/\hbar}\,\Psi(p) .      B.3.10.13

That way both K and V become multiplicative operators in their respective spaces, so the wavefunction only needs to be multiplied by a simple phase factor. In practice, the wave function is discretized onto a coordinate grid of 2^n points per coordinate, and the Fourier transform is done by the discrete fast Fourier transform (FFT), a very fast computational procedure which requires only cN lg N operations (N is the number of points being transformed, lg the logarithm base 2, and c a constant of order unity).
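A minimal sketch of the split propagator of eq. 11 (ℏ = m = 1, illustrative parameters): a displaced Gaussian in a harmonic well is propagated for one vibrational period, with exp[−iK Δt/2ℏ] applied on the momentum grid via the FFT and exp[−iV Δt/ℏ] applied on the coordinate grid. The norm is conserved to machine precision, and <x> returns close to its starting value after one period:

```python
import numpy as np

# 1-D harmonic oscillator, hbar = m = 1, frequency w = 1 (illustrative).
N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx                 # coordinate grid
p = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)        # conjugate momentum grid

w = 1.0
V = 0.5 * w**2 * x**2
dt = 0.02
nsteps = int(round(2.0 * np.pi / w / dt))        # one classical period

psi = np.exp(-0.5 * (x - 2.0)**2).astype(complex)    # displaced ground state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

expK_half = np.exp(-0.5j * (p**2 / 2.0) * dt)    # exp(-i K dt / 2)
expV = np.exp(-1.0j * V * dt)                    # exp(-i V dt)

for _ in range(nsteps):
    psi = np.fft.ifft(expK_half * np.fft.fft(psi))   # half kinetic step
    psi = expV * psi                                 # full potential step
    psi = np.fft.ifft(expK_half * np.fft.fft(psi))   # half kinetic step

norm = np.sum(np.abs(psi)**2) * dx
xavg = np.sum(x * np.abs(psi)**2) * dx
print("norm after one period:", norm)                # conserved by construction
print("<x>  after one period:", xavg, "(started at +2)")
```

Merging the two adjacent half kinetic steps of successive iterations would save one FFT pair per step; it is kept explicit here for clarity.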

Many other schemes exist, such as Lanczos' method, Kosloff's Chebyshev algorithm, and the Mandelshtam-Taylor algorithm. Their implementation is mathematically much more involved and requires a thorough reading of the original literature; it cannot be summarized in a few paragraphs. Each method has particular advantages and disadvantages that make it best suited to certain situations.