
Crash Course: The Math of Quantum Mechanics
March 8, 2013

Quantum mechanics uses a wide range of mathematics, including complex numbers, linear algebra, calculus, and differential equations. (See also: http://simple.wikipedia.org/wiki/Quantum_mechanics.) In this handout, we give a crash course on some of the basic mathematics used.

    Complex Numbers

Complex numbers give us a way of saying that the square of a number can be negative. (For another simple explanation, see also: http://simple.wikipedia.org/wiki/Complex_numbers.) To that end, we have to introduce the imaginary number i such that

    i = \sqrt{-1}.    (1)

A complex number then has the form z = a + ib, where a and b are the usual real numbers we use every day. The number z can be treated just as any number with a variable i; you just have to remember that i^2 = -1.

Every complex number also has a complex conjugate. This number is made by going through a complex number and replacing i with -i, and is represented in physics by z^* = a - ib. One important property of this conjugation is

    z^* z = (a - ib)(a + ib) = a^2 + iab - iab - i^2 b^2 = a^2 + b^2.    (2)

Since a and b are the usual real numbers, z^* z is not only real, but positive.
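As a quick numerical check of Eq. (2), here is a minimal sketch using Python's built-in complex type (the values of a and b are arbitrary):

    # Check that z* z = a^2 + b^2 is real and non-negative.
    a, b = 3.0, -4.0
    z = complex(a, b)          # z = a + ib
    zc = z.conjugate()         # z* = a - ib
    print(zc * z)              # (25+0j)
    print(a**2 + b**2)         # 25.0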

    Exponentiation

A very important operation for complex numbers and many physical applications is the exponential function (see http://simple.wikipedia.org/wiki/Exponential_function). We define it with an infinite sum

    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1} + \frac{x^2}{2 \cdot 1} + \frac{x^3}{3 \cdot 2 \cdot 1} + \cdots .    (3)

It has the important property that if you take its derivative you get back the same function,

    \frac{d}{dx} e^x = e^x.    (4)

We will see this a lot in the form \frac{d}{dt} e^{i\omega t} = i\omega \, e^{i\omega t} or \frac{d}{dx} e^{ipx} = ip \, e^{ipx}.

Now, this function has the particular property that if instead of x you use an imaginary number, you obtain

    e^{i\theta} = \cos\theta + i \sin\theta.    (5)

This property gives the famous e^{i\pi} = -1. And in fact, any complex number can be represented by z = r e^{i\theta}, where r and \theta are both real numbers. We can see from this that z^* z = r^2 (remember to change i to -i, so z^* = r e^{-i\theta}, and also remember that e^x e^y = e^{x+y}), so if we look at Eq. (2), we can see that r = \sqrt{a^2 + b^2}.
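A quick numerical illustration of Eq. (5) and the polar form, sketched with Python's standard cmath module (the angle and the sample complex number are arbitrary):

    import cmath, math

    theta = 0.7
    print(cmath.exp(1j * theta))                      # e^{i theta}
    print(complex(math.cos(theta), math.sin(theta)))  # cos(theta) + i sin(theta): same value

    z = 3 - 4j
    r, phi = abs(z), cmath.phase(z)        # polar form z = r e^{i phi}
    print(r * cmath.exp(1j * phi))         # recovers (3-4j)
    print((z.conjugate() * z).real, r**2)  # z* z = r^2 = 25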


    Linear algebra

Linear algebra is at the heart of quantum mechanics. It deals with vectors and matrices in many dimensions.

Vectors

A column vector, denoted |v\rangle, is just an ordered set of numbers in a column (the |\cdot\rangle is called a ket in quantum mechanics):

    |v\rangle = \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{pmatrix}.    (6)

The number N is called the dimension. We can define the corresponding row vector by \langle v|, and it looks like (the \langle\cdot| is called a bra in quantum mechanics)

    \langle v| = \begin{pmatrix} z_1^* & z_2^* & \cdots & z_N^* \end{pmatrix}.    (7)

Notice how in making the column vector into a row vector we took the complex conjugate of each entry. This is very important, since now we will define the inner product of two vectors |v\rangle and |w\rangle as \langle v|w\rangle (just replace z with w to obtain the entries of |w\rangle), where for the above we get

    \langle v|w\rangle = z_1^* w_1 + z_2^* w_2 + \cdots + z_N^* w_N.    (8)
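In code, a ket is naturally a complex array and the bra is its conjugate transpose. A minimal sketch of Eq. (8) with NumPy (assumed available; the vectors are arbitrary examples):

    import numpy as np

    v = np.array([1 + 2j, 3j, -1.0])    # entries of |v>
    w = np.array([0.5, 1 - 1j, 2j])     # entries of |w>

    inner = np.vdot(v, w)               # <v|w>: vdot conjugates its first argument
    print(inner)
    print(np.vdot(v, v).real)           # <v|v> is real and non-negative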

    Matrices

Now, what makes linear algebra linear is that we act on vectors with matrices that have the property that if M is a matrix, |v\rangle and |w\rangle are vectors, and a, b are just numbers, then (when we write a |v\rangle, it means multiply each entry of |v\rangle by the number a)

    M(a |v\rangle + b |w\rangle) = a M |v\rangle + b M |w\rangle.

A matrix looks like (strictly speaking, this is a square matrix; in general, the numbers of rows and columns need not be the same, but most of our use will be with square matrices)

    M = \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1N} \\ m_{21} & m_{22} & \cdots & m_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ m_{N1} & m_{N2} & \cdots & m_{NN} \end{pmatrix}.    (9)

Matrices can be multiplied, but the procedure for this is similar to that of the inner product and not how one might guess to multiply matrices. To develop this, consider two ways of thinking of a matrix: as a row vector of column vectors, or as a column vector of row vectors.


Here we have defined, for example,

    |m_1\rangle = \begin{pmatrix} m_{11} \\ m_{21} \\ \vdots \\ m_{N1} \end{pmatrix}
    \quad\text{and}\quad
    \langle m_1| = \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1N} \end{pmatrix},

so that

    M = \begin{pmatrix} |m_1\rangle & |m_2\rangle & \cdots & |m_N\rangle \end{pmatrix}
      = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix}.    (10)

And like this we can define matrix multiplication:

    MN = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix}
         \begin{pmatrix} |n_1\rangle & |n_2\rangle & \cdots & |n_N\rangle \end{pmatrix}
       = \begin{pmatrix}
           \langle m_1|n_1\rangle & \langle m_1|n_2\rangle & \cdots & \langle m_1|n_N\rangle \\
           \langle m_2|n_1\rangle & \langle m_2|n_2\rangle & \cdots & \langle m_2|n_N\rangle \\
           \vdots & \vdots & \ddots & \vdots \\
           \langle m_N|n_1\rangle & \langle m_N|n_2\rangle & \cdots & \langle m_N|n_N\rangle
         \end{pmatrix}.    (11)
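A small NumPy sketch of Eq. (11), building the product entry by entry from rows of M and columns of N and comparing with the built-in matrix product (the matrices are arbitrary 2 x 2 examples):

    import numpy as np

    M = np.array([[1, 2], [3, 4]], dtype=complex)
    N = np.array([[0, 1j], [1, 0]], dtype=complex)

    MN = np.empty((2, 2), dtype=complex)
    for i in range(2):
        for j in range(2):
            MN[i, j] = M[i, :] @ N[:, j]   # <m_i|n_j>: i-th row of M times j-th column of N

    print(np.allclose(MN, M @ N))          # True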

For a real example of this, see http://simple.wikipedia.org/wiki/Matrix_(mathematics).

Multiplication by a vector is similar. In fact, we can define it for column and row vectors:

    M |v\rangle = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix} |v\rangle
                = \begin{pmatrix} \langle m_1|v\rangle \\ \langle m_2|v\rangle \\ \vdots \\ \langle m_N|v\rangle \end{pmatrix},    (12)

and

    \langle v| M = \langle v| \begin{pmatrix} |m_1\rangle & |m_2\rangle & \cdots & |m_N\rangle \end{pmatrix}
                 = \begin{pmatrix} \langle v|m_1\rangle & \langle v|m_2\rangle & \cdots & \langle v|m_N\rangle \end{pmatrix}.    (13)
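The same row-by-row picture gives Eqs. (12)-(13) in code; a brief NumPy sketch (the matrix and vector are arbitrary examples):

    import numpy as np

    M = np.array([[1, 2], [3, 4]], dtype=complex)
    v = np.array([1j, 2.0])

    Mv = np.array([M[i, :] @ v for i in range(2)])          # entries <m_i|v>, Eq. (12)
    vM = np.array([np.vdot(v, M[:, j]) for j in range(2)])  # entries <v|m_j>, Eq. (13)
    print(np.allclose(Mv, M @ v), np.allclose(vM, v.conj() @ M))   # True True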

Now, we can also construct the hermitian conjugate of a matrix, M^\dagger, by just writing

    M^\dagger = \begin{pmatrix}
                  m_{11}^* & m_{21}^* & \cdots & m_{N1}^* \\
                  m_{12}^* & m_{22}^* & \cdots & m_{N2}^* \\
                  \vdots & \vdots & \ddots & \vdots \\
                  m_{1N}^* & m_{2N}^* & \cdots & m_{NN}^*
                \end{pmatrix}.    (14)

Notice how every entry is complex conjugated and flipped across the diagonal (this is also sometimes called the conjugate transpose). A matrix is hermitian if M^\dagger = M. Note that a hermitian matrix doesn't necessarily have all real entries, just that the entries above the diagonal are conjugate to those below the diagonal, as defined by m_{ij} = m_{ji}^*.


There is also a special matrix called the identity matrix, which has 1 on the diagonal and zero off the diagonal:

    I = \begin{pmatrix}
          1 & 0 & \cdots & 0 \\
          0 & 1 & \cdots & 0 \\
          \vdots & \vdots & \ddots & \vdots \\
          0 & 0 & \cdots & 1
        \end{pmatrix}.    (15)

This has the property that MI = M and IM = M for any matrix M (check this as an exercise).
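A small NumPy sketch of the hermitian conjugate and the identity (the 2 x 2 matrix is an arbitrary example that happens to be hermitian):

    import numpy as np

    M = np.array([[1.0, 2 - 1j],
                  [2 + 1j, 3.0]])

    M_dag = M.conj().T                    # hermitian conjugate, Eq. (14)
    print(np.allclose(M_dag, M))          # True: this M is hermitian

    I = np.eye(2)
    print(np.allclose(M @ I, M), np.allclose(I @ M, M))   # MI = M and IM = M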

    Eigenvalues and Eigenvectors

Matrices can have special vectors called eigenvectors, which have the property that if |E\rangle is an eigenvector of M,

    M |E\rangle = E |E\rangle.    (16)

We sometimes identify eigenvectors by their eigenvalues, calling |E\rangle the vector with eigenvalue E. (Note that if |E\rangle is an eigenvector, so is a |E\rangle. This ambiguity oftentimes lets us normalize the vectors by using the eigenvector with \langle E|E\rangle = 1. This condition is also related to probabilities in quantum mechanics.) If M is hermitian, as it usually is in our applications, then the eigenvalues E are real numbers (not imaginary or complex).

And if M is an N \times N matrix, there are in fact N eigenvectors associated with it. One of the astounding properties of these vectors is that any vector can be written as a sum of them, i.e. they are a basis. For example, if we have a three-dimensional matrix M with eigenvectors |E_1\rangle, |E_2\rangle, and |E_3\rangle, then any vector can be written

    |v\rangle = v_1 |E_1\rangle + v_2 |E_2\rangle + v_3 |E_3\rangle.    (17)

In arbitrary dimension N this takes the form

    |v\rangle = \sum_{i=1}^{N} v_i |E_i\rangle.    (18)

Lastly, these vectors are orthogonal, which means \langle E_i|E_j\rangle = 0 if i and j are different. The proof of this is simple. There are two ways to evaluate \langle E_i|M|E_j\rangle: acting with M on the ket to the right or on the bra to the left (using that the eigenvalues are real),

    \langle E_i|M|E_j\rangle = E_i \langle E_i|E_j\rangle = E_j \langle E_i|E_j\rangle.    (19)

Thus, rewriting the expressions,

    (E_i - E_j) \langle E_i|E_j\rangle = 0,    (20)

so either E_i = E_j or \langle E_i|E_j\rangle = 0, and since we have assumed E_i \neq E_j, we get that \langle E_i|E_j\rangle = 0 necessarily. This proves they are orthogonal. (Sometimes E_i = E_j for distinct eigenvectors; this is called a "degeneracy". A modified version of this property holds in that case.)


    Functions and Fourier transforms

Sometimes instead of the usual vectors and matrices we are working with functions and differentiation. (More generally, functions and linear operators: in fact, all of linear algebra can be applied to functions and linear operators defined by differentials and integrals. This section is a taste of that.) For many purposes functions actually behave like vectors, and differentiation like a matrix. For instance, take the differential equation

    -i \frac{d}{dx} f(x) = k f(x);    (21)

this looks very much like the eigenvalue equation seen in Eq. (16) if we change M to -i \frac{d}{dx}, |E\rangle to f(x), and E to k. The solution to it (which you can check using Eq. (4)) is

    f(x) = e^{ikx}.    (22)
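A symbolic check of Eqs. (21)-(22), sketched with SymPy (assumed available):

    import sympy as sp

    x, k = sp.symbols('x k', real=True)
    f = sp.exp(sp.I * k * x)

    lhs = -sp.I * sp.diff(f, x)       # -i d/dx f(x)
    print(sp.simplify(lhs - k * f))   # 0, so f(x) = e^{ikx} solves Eq. (21)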

The relation to eigenvectors goes even further. Any function can be written as an integral of these functions. (When going from vectors to functions, sums are sometimes replaced by integrals.) Then any g(x) can be written as

    g(x) = \int_{-\infty}^{\infty} g(k) \, e^{ikx} \, \frac{dk}{2\pi}.    (23)

This decomposition into basis functions is called a Fourier decomposition or an inverse Fourier transform. (The introduction of the 2\pi is necessary and is sometimes defined elsewhere; in much of quantum mechanics, a 1/\sqrt{2\pi} is placed with the inverse Fourier transform.) The reason it is an inverse Fourier transform is because we can actually find the function g(k) (called the Fourier transform) by taking

    g(k) = \int_{-\infty}^{\infty} g(x) \, e^{-ikx} \, dx.    (24)

To check this is the case, we can substitute this into Eq. (23), changing the dummy variable x to y, to see

    g(x) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y) \, e^{ik(x-y)} \, dy \, \frac{dk}{2\pi};    (25)

now the k-integral can be done, and it gives a new function we call the \delta-function:

    \delta(x - y) = \int_{-\infty}^{\infty} e^{ik(x-y)} \, \frac{dk}{2\pi}.    (26)

This function has the property that if we do the k-integral in Eq. (25) we see

    g(x) = \int_{-\infty}^{\infty} g(y) \, \delta(x - y) \, dy.    (27)

The equation Eq. (27) is really what defines the \delta-function, but you can think of \delta(x - y) as a function that is zero everywhere except when x - y = 0, where it is infinitely large. (For some pictures and a more in-depth discussion, see: http://en.wikipedia.org/wiki/Dirac_delta_function.)
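A numerical sketch of the transform pair, Eqs. (23)-(24), on a finite grid using a plain Riemann sum (the grid and the Gaussian test function are arbitrary choices):

    import numpy as np

    x = np.linspace(-10, 10, 1001)
    k = np.linspace(-10, 10, 1001)
    dx, dk = x[1] - x[0], k[1] - k[0]
    g = np.exp(-x**2 / 2)                                          # test function g(x)

    # Eq. (24): g(k) = integral of g(x) e^{-ikx} dx
    gk = np.array([(g * np.exp(-1j * kk * x)).sum() * dx for kk in k])

    # Eq. (23): g(x) = integral of g(k) e^{+ikx} dk / (2 pi)
    g_rec = np.array([(gk * np.exp(1j * k * xx)).sum() * dk for xx in x]) / (2 * np.pi)

    print(np.max(np.abs(g_rec - g)))   # small: the round trip recovers g(x)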

There are many more relations between functions and linear algebra. In fact, the two are intimately connected. Throughout the course, we will find different basis functions by solving eigenvalue equations of functions. Much of the machinery here will hold over to those problems as well.


    Inner product of functions

We defined inner products for vectors, but just as we discussed in the last section that linear algebra works for functions, we briefly define the inner product of functions. If we have functions f(x) and g(x), then their inner product is defined by

    \langle f|g\rangle = \int_{-\infty}^{\infty} f^*(x) \, g(x) \, dx.    (28)

Depending on the problem, the integral might not be from -\infty to +\infty; the bounds of the integral can change. We can also write

    \left\langle f \left| -i \frac{d}{dx} \right| g \right\rangle = \int_{-\infty}^{\infty} f^*(x) \left( -i \frac{dg(x)}{dx} \right) dx.    (29)

As an exercise, use the Fourier transforms and the \delta-function from the last section to show that

    \left\langle f \left| -i \frac{d}{dx} \right| g \right\rangle = \int_{-\infty}^{\infty} k \, f^*(k) \, g(k) \, dk.    (30)
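A grid-based sketch of Eqs. (28)-(29), again with a plain Riemann sum (the test functions are arbitrary Gaussian wave packets):

    import numpy as np

    x = np.linspace(-10, 10, 2001)
    dx = x[1] - x[0]

    f = np.exp(-x**2) * np.exp(2j * x)      # f(x), an arbitrary wave packet
    g = np.exp(-(x - 1)**2)                 # g(x)

    inner = (np.conj(f) * g).sum() * dx                  # Eq. (28): <f|g>
    dg = np.gradient(g, dx)                              # dg/dx on the grid
    matrix_elem = (np.conj(f) * (-1j) * dg).sum() * dx   # Eq. (29): <f| -i d/dx |g>
    print(inner, matrix_elem)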

    Gaussian integrals

An important integral for physics is the Gaussian integral. Written out in full glory, it is (for those interested, this can be both proven and generalized as seen in http://en.wikipedia.org/wiki/Gaussian_integral)

    \int_{-\infty}^{\infty} e^{-ax^2 + bx} \, dx = \sqrt{\frac{\pi}{a}} \, e^{b^2/(4a)}.    (31)

With this identity we can actually see that the Fourier transform of a Gaussian is a Gaussian. Take the function g(x) = e^{-ax^2}; then we can write

    g(k) = \int_{-\infty}^{\infty} e^{-ax^2} e^{-ikx} \, dx.    (32)

Thus, if we let b = -ik, the right-hand side of Eq. (32) can be evaluated to be

    g(k) = \sqrt{\frac{\pi}{a}} \, e^{-k^2/(4a)}.    (33)

As an exercise, perform the inverse Fourier transform to recover g(x) = e^{-ax^2}.