Vectors, Tensors and Matrices


Appendix 1  Vectors, Tensors and Matrices

Cartesian co-ordinates in three dimensions

In our study of dynamics we have come across three types of physical quantity. The first type requires only a single number for its definition; this is a scalar. The second requires three numbers and is a vector. The third form needs nine numbers for a complete definition. All three can be considered to be tensors of different rank or order. A tensor of the zeroth rank is the scalar. A tensor of the first rank is a vector and may be written in several ways. A three-dimensional Cartesian vector is

V = x i + y j + z k    (A1.1)

where i, j and k are the respective unit vectors. In matrix form we have

V = (i  j  k)(x  y  z)^T = (e)^T(V)    (A1.2)

It is common practice to refer to a vector simply by its components (V), where it is understood that all vectors in an equation are referred to the same basis (e).

It is convenient to replace (x y z) with (x_1 x_2 x_3) so that we may write

V = Σ x_i e_i,  summing i from 1 to 3    (A1.3)

This tensor is said to be of rank 1 because only one index is needed.

A dyad is defined by the following expression

AB·C = A(B·C) = E    (A1.4)

where AB is the dyad and A, B, C and E are vectors. In three dimensions a dyad may be written

AB = A_1B_1 e_1e_1 + A_1B_2 e_1e_2 + A_1B_3 e_1e_3
   + A_2B_1 e_2e_1 + A_2B_2 e_2e_2 + A_2B_3 e_2e_3
   + A_3B_1 e_3e_1 + A_3B_2 e_3e_2 + A_3B_3 e_3e_3    (A1.5)


or

AB = (e)^T [D] (e),  where  [D] = [ A_1B_1  A_1B_2  A_1B_3 ]
                                  [ A_2B_1  A_2B_2  A_2B_3 ]    (A1.6)
                                  [ A_3B_1  A_3B_2  A_3B_3 ]

The square matrix is the matrix representation of the dyad and can be written

D_ij = A_i B_j    (A1.7)

Thus the dyad is a tensor of rank 2 as it requires two indices to define its elements. The sum of two or more dyads is termed a dyadic.

The majority of rank 2 tensors encountered in physics are either symmetric or anti-symmetric. For a symmetric tensor D_ij = D_ji and thus there are only six independent elements. For an anti-symmetric tensor D_ij = -D_ji and, because this implies that D_ii = 0, there are only three independent elements; this is similar to a vector.

The process of outer multiplication of two tensors is defined typically by

A_ij B_klm = C_ijklm    (A1.8)

where C is a tensor of rank 5. If both tensors A and B are of rank 2 then the element

C_ijkl = A_ij B_kl    (A1.9)

Thus, if the indices range from 1 to 3 then C will have 3^4 elements. We now make j = k and sum over all values of j (or k) to obtain

C_il = Σ_j A_ij B_jl    (A1.10)

Further, we could omit the summation sign if it is assumed that summation is over the repeated index. This is known as Einstein's summation convention. Thus in compact form

C_il = A_ij B_jl    (A1.11)

The process of making two suffices the same is known as contraction, and outer multiplication followed by a contraction is called inner multiplication. In the case of two rank 2 tensors the process is identical to that of matrix multiplication of two square matrices.

If we consider two tensors of the first rank (vectors) then outer multiplication is

C_ij = A_i B_j    (A1.12)

and these can be thought of as the components of a square matrix. In matrix notation,

[C] = (A)(B)^T    (A1.13)

If we now perform a contraction

c = A_i B_i  ( = Σ A_i B_i )    (A1.14)

we have inner multiplication, which in matrix notation is

c = (A)^T(B)    (A1.15)

    and this is the scalar product.
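These index operations are easy to check numerically. The following is a minimal sketch (NumPy is my choice here, not something the text uses); np.einsum expresses the summation convention directly, so the outer products, the contraction and the inner product of equations (A1.9) to (A1.15) can each be written out and compared with the ordinary matrix and scalar products.

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# Dyad (outer product) of two vectors, equation (A1.12): C_ij = A_i B_j
dyad = np.einsum('i,j->ij', A, B)          # same as np.outer(A, B)

# Outer product of two rank-2 tensors, equation (A1.9): C_ijkl = A_ij B_kl
A2 = np.arange(9.0).reshape(3, 3)
B2 = np.arange(9.0, 18.0).reshape(3, 3)
C4 = np.einsum('ij,kl->ijkl', A2, B2)      # 3**4 = 81 elements

# Contraction j = k, equation (A1.11): C_il = A_ij B_jl  (matrix multiplication)
C2 = np.einsum('ij,jl->il', A2, B2)
assert np.allclose(C2, A2 @ B2)

# Inner multiplication of two vectors, equations (A1.14)-(A1.15): c = A_i B_i
c = np.einsum('i,i->', A, B)
assert np.isclose(c, A @ B)                # the scalar product
print(dyad, C4.shape, c, sep='\n')
```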


Alternatively, because (e)·(e)^T = [1], the identity matrix, we may write

c = A·B = (A)^T(e)·(e)^T(B) = (A)^T(B)    (A1.16)

The vector product of two vectors is written

C = A × B    (A1.17)

and is defined as

C = A B sin α e    (A1.18)

where α is the smallest angle between A and B and e is a unit vector normal to both A and B in a sense given by the right hand rule. In matrix notation it can be demonstrated that

C = (-A_3B_2 + A_2B_3) i + (A_3B_1 - A_1B_3) j + (-A_2B_1 + A_1B_2) k

or

C = (e)^T(C) = (e)^T [  0    -A_3    A_2 ] (B)    (A1.19)
                     [  A_3    0    -A_1 ]
                     [ -A_2   A_1     0  ]

The square matrix, in this book, is denoted by [A]^x so that equation (A1.19) may be written

C = (e)^T[A]^x(B)    (A1.20)

or, since (e)·(e)^T = [1], the unit matrix,

C = (e)^T[A]^x(e)·(e)^T(B)

  = A^x·B    (A1.21)

where A^x = (e)^T[A]^x(e) is a tensor operator of rank 2.

In tensor notation it can be shown that the vector product is given by

C_i = ε_ijk A_j B_k    (A1.22)

where ε_ijk is the alternating tensor, defined as

ε_ijk = +1  if ijk is a cyclic permutation of (1 2 3)
      = -1  if ijk is an anti-cyclic permutation of (1 2 3)    (A1.23)
      =  0  otherwise

Equation (A1.22) may be written

C_i = (ε_ijk A_j) B_k    (A1.24)

Now let us define the tensor

T_ik = ε_ijk A_j    (A1.25)

If we change the order of i and k then, because of the definition of the alternating tensor, T_ik = -T_ki; therefore T is anti-symmetric.


The elements are then

T_12 = ε_112 A_1 + ε_122 A_2 + ε_132 A_3 = -A_3 = -T_21
T_13 = ε_113 A_1 + ε_123 A_2 + ε_133 A_3 = +A_2 = -T_31
T_23 = ε_213 A_1 + ε_223 A_2 + ε_233 A_3 = -A_1 = -T_32

    and the diagonal terms are all zero. These three equations may be written in matrix form as

[T] = [  0    -A_3    A_2 ]
      [  A_3    0    -A_1 ]    (A1.26)
      [ -A_2   A_1     0  ]

    which is the expected result.

In summary, the vector product of two vectors A and B may be written

C = A × B
(e)^T(C) = (e)^T[A]^x(e)·(e)^T(B)

or

(C) = [A]^x(B)

and

C_i = ε_ijk A_j B_k    (summing over j and k)
    = T_ik B_k         (summing over k)
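As a numerical check on this summary, the sketch below (again NumPy; the helper names alternating_tensor and cross_matrix are my own) builds ε_ijk, forms T_ik = ε_ijk A_j and confirms that both ε_ijk A_j B_k and [A]^x(B) reproduce the ordinary cross product.

```python
import numpy as np

def alternating_tensor():
    """epsilon_ijk: +1 cyclic, -1 anti-cyclic, 0 otherwise (A1.23)."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0
        eps[i, k, j] = -1.0
    return eps

def cross_matrix(A):
    """[A]^x of equations (A1.19) and (A1.26)."""
    return np.array([[0.0, -A[2], A[1]],
                     [A[2], 0.0, -A[0]],
                     [-A[1], A[0], 0.0]])

A = np.array([1.0, -2.0, 0.5])
B = np.array([0.3, 4.0, 2.0])
eps = alternating_tensor()

C_tensor = np.einsum('ijk,j,k->i', eps, A, B)   # C_i = eps_ijk A_j B_k   (A1.22)
T = np.einsum('ijk,j->ik', eps, A)              # T_ik = eps_ijk A_j      (A1.25)
C_matrix = cross_matrix(A) @ B                  # (C) = [A]^x (B)

assert np.allclose(C_tensor, np.cross(A, B))
assert np.allclose(T, cross_matrix(A))          # T is the matrix of (A1.26)
assert np.allclose(C_matrix, np.cross(A, B))
```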

Transformation of co-ordinates

We shall consider the transformation of three-dimensional Cartesian co-ordinates due to a rotation of the axes about the origin. In fact, mathematical texts define tensors by the way in which they transform. For example, a second-order tensor A is defined as a multi-directional quantity which transforms from one set of co-ordinate axes to another according to the rule

A'_mn = l_mi l_nj A_ij    (A1.27)

The original set of coordinates will be designated x_1, x_2, x_3 and the associated unit vectors will be e_1, e_2, e_3. In these terms a position vector V will be

V = x_1 e_1 + x_2 e_2 + x_3 e_3 = x_i e_i

Using a primed set of coordinates the same vector will be

V = x'_1 e'_1 + x'_2 e'_2 + x'_3 e'_3 = x'_i e'_i    (A1.28)

The primed unit vectors are related to the original unit vectors by

e'_1 = l e_1 + m e_2 + n e_3    (A1.29)

where l, m and n are the direction cosines between the primed unit vector in the x'_1 direction and those in the original set. We shall now adopt the following notation


e'_1 = a_11 e_1 + a_12 e_2 + a_13 e_3    (A1.30)

with similar expressions for the other two unit vectors. Using the summation convention,

e'_i = a_ij e_j    (A1.31)

In matrix form

[ e'_1 ]   [ a_11  a_12  a_13 ] [ e_1 ]
[ e'_2 ] = [ a_21  a_22  a_23 ] [ e_2 ]    (A1.32)
[ e'_3 ]   [ a_31  a_32  a_33 ] [ e_3 ]

and the inverse transform, b_ij, is such that

[ e_1 ]   [ b_11  b_12  b_13 ] [ e'_1 ]
[ e_2 ] = [ b_21  b_22  b_23 ] [ e'_2 ]    (A1.33)
[ e_3 ]   [ b_31  b_32  b_33 ] [ e'_3 ]

It is seen that a_13 is the direction cosine of the angle between e'_1 and e_3 whilst b_31 is the direction cosine of the angle between e_3 and e'_1; thus a_13 = b_31. Therefore b_ij is the transpose of a_ij, that is b_ij = a_ji.

The transformation tensor a_ij is such that its inverse is its transpose, in matrix form [a][a]^T = [1]. Such a transformation is said to be orthogonal. Now

V = x_j e_j = x'_i e'_i    (A1.34)

so premultiplying both sides (as a scalar product) by e'_i gives

e'_i·e_j x_j = x'_i    (A1.35)

or

x'_i = a_ij x_j    (A1.36)

It should be noted that x'_j = a_ji x_i is equivalent to the previous equation, as only the arrangement of indices is significant. In matrix notation

V = (e)^T(x) = (e')^T(x')    (A1.37)

but (e') = [a](e), and therefore

V = (e)^T(x) = (e)^T[a]^T(x')

Premultiplying each side by (e) gives

(x) = [a]^T(x')

and inverting we obtain

(x') = [a](x)    (A1.38)
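A short numerical illustration of the last few results (NumPy; the rotation angle and the test components are arbitrary choices of mine): a rotation about the x_3 axis gives a transformation matrix [a] whose inverse is its transpose, and the components transform as (x') = [a](x) and back as (x) = [a]^T(x').

```python
import numpy as np

theta = np.radians(30.0)        # arbitrary rotation of the axes about x_3
c, s = np.cos(theta), np.sin(theta)
a = np.array([[ c,  s, 0.0],    # rows are the primed unit vectors expressed
              [-s,  c, 0.0],    # in the original basis, e'_i = a_ij e_j
              [0.0, 0.0, 1.0]])

assert np.allclose(a @ a.T, np.eye(3))      # orthogonal: [a][a]^T = [1]

x = np.array([1.0, 2.0, 3.0])               # components in the original axes
x_prime = a @ x                             # (x') = [a](x)
x_back = a.T @ x_prime                      # (x) = [a]^T(x')
assert np.allclose(x_back, x)
assert np.isclose(x @ x, x_prime @ x_prime) # the magnitude is unchanged
```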

The square of the magnitude of a vector is


(x)^T(x) = (x')^T(x') = (x)^T[a]^T[a](x)    (A1.39)

and because (x) is arbitrary it follows that

[a]^T[a] = [1] = [a][b]    (A1.40)

where

[b] = [a]^T = [a]^(-1)

In tensor notation this equation is

b_ij a_jl = a_ji a_jl = δ_il    (A1.41)

where δ_il is the Kronecker delta, defined to be 1 when i = l and 0 otherwise.

Because a_ji a_jl = a_jl a_ji, equation (A1.41) yields six relationships between the nine elements a_ij, and this implies that only three independent constants are required to define the transformation. These three constants are not arbitrary if they are to relate to proper rotations; for example, they must all lie between -1 and +1. Another condition which has to be met is that the triple scalar product of the unit vectors must be unity, as this represents the volume of a unit cube. So

e_1·(e_2 × e_3) = e'_1·(e'_2 × e'_3) = 1    (A1.42)

since

e'_1 = a_11 e_1 + a_12 e_2 + a_13 e_3, etc.

We can use the well-known determinant form for the triple product and write

| a_11  a_12  a_13 |
| a_21  a_22  a_23 | = 1    (A1.43)
| a_31  a_32  a_33 |

or

Det [a] = 1

The above argument only holds if the original set of axes and the transformed set are both right handed (or both left handed). If the handedness is changed by, for example, the direction of the z' axis being reversed, then the bottom row of the determinant would all be of opposite sign, so the value of the determinant would be -1. It is interesting to note that no way of formally defining a left- or right-handed system has been devised; it is only the difference that is recognized.

In general, vectors which require the use of the right hand rule to define their sense transform differently when changing from right- to left-handed systems. Such vectors are called axial vectors or pseudo vectors, in contrast to polar vectors. Examples of polar vectors are position, displacement, velocity, acceleration and force. Examples of axial vectors are angular velocity and moment of force. It can be demonstrated that the vector product of a polar vector and an axial vector is a polar vector. Another interesting point is that the vector of a 3 × 3 anti-symmetric tensor is an axial vector. This point does not affect any of the arguments in this book because we are always dealing with right-handed systems and pure rotation does not change the handedness of the axes. However, if


the reader wishes to delve deeper into relativistic mechanics this distinction is of some importance.
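The determinant condition, and the remark about handedness, can also be checked numerically. In the sketch below (NumPy, with the same arbitrary rotation matrix as before) Det [a] = +1 for a proper rotation, and reversing the x'_3 direction changes the sign of the bottom row and hence of the determinant.

```python
import numpy as np

theta = np.radians(30.0)
c, s = np.cos(theta), np.sin(theta)
a = np.array([[ c,  s, 0.0],
              [-s,  c, 0.0],
              [0.0, 0.0, 1.0]])

print(np.linalg.det(a))           # +1.0: proper rotation, handedness preserved

a_reflect = a.copy()
a_reflect[2, :] *= -1.0           # reverse the x'_3 axis: bottom row changes sign
print(np.linalg.det(a_reflect))   # -1.0: the transformation changes handedness

# a_reflect is still orthogonal, but it is not a pure rotation
assert np.allclose(a_reflect @ a_reflect.T, np.eye(3))
```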

Diagonalization of a second-order tensor

We shall consider a 3 × 3 second-order symmetric Cartesian tensor which may represent moment of inertia, stress or strain. Let this tensor be T = T_ij and the matrix of its elements be [T]. The transformation tensor is A = A_ij and its matrix is [A]. The transformed tensor is

[T'] = [A]^T[T][A]    (A1.44)

Let us now assume that the transformed matrix is diagonal, so

[T'] = [ λ_1   0    0  ]
       [  0   λ_2   0  ]    (A1.45)
       [  0    0   λ_3 ]

If this dyad acts on a vector (C) the result is

C'_1 = λ_1 C_1
C'_2 = λ_2 C_2    (A1.46)
C'_3 = λ_3 C_3

Thus if the vector is wholly in the x'_1 direction the transformed vector would still be in the x'_1 direction, but multiplied by λ_1. Therefore the vectors C'_1 i', C'_2 j' and C'_3 k' form a unique set of orthogonal axes which are known as the principal axes. From the point of view of the original set of axes, if a vector lies along any one of the principal axes then its direction will remain unaltered. Such a vector is called an eigenvector. In symbol form

T_ij C_j = λ C_i    (A1.47)

or

[T](C) = λ(C)    (A1.48)

Rearranging equation (A1.48) gives

([T] - λ[1])(C) = (0)

where [1] is the unit matrix. In detail

[ T_11 - λ    T_12       T_13     ] [ C_1 ]   [ 0 ]
[ T_21        T_22 - λ   T_23     ] [ C_2 ] = [ 0 ]    (A1.49)
[ T_31        T_32       T_33 - λ ] [ C_3 ]   [ 0 ]

This expands to three homogeneous equations which have the trivial solution of (C) = (0). The theory of linear equations states that for a non-trivial solution the determinant of the square matrix has to be zero. That is,


| T_11 - λ    T_12       T_13     |
| T_21        T_22 - λ   T_23     | = 0    (A1.50)
| T_31        T_32       T_33 - λ |

This leads to a cubic in λ, thus yielding the three roots which are known as the eigenvalues. Associated with each eigenvalue is an eigenvector, all of which can be shown to be mutually orthogonal. The eigenvectors only define a direction because their magnitudes are arbitrary.

Let us consider a special case for which T_12 = T_21 = 0 and T_13 = T_31 = 0. In this case, for a vector (C) = (1 0 0)^T the product [T](C) yields a vector (T_11 0 0)^T, which is in the same direction as (C). Therefore the x_1 direction is a principal axis and the x_2, x_3 plane is a plane of symmetry. Equation (A1.50) now becomes

(T_11 - λ)[(T_22 - λ)(T_33 - λ) - T_23^2] = 0    (A1.51)
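The eigenvalue problem of equations (A1.44) to (A1.51) is conveniently solved numerically. The sketch below (NumPy; the particular symmetric tensor is an arbitrary example of mine) confirms that each eigenvalue makes the determinant of (A1.50) vanish and that the matrix of eigenvectors diagonalizes [T] as in equation (A1.44).

```python
import numpy as np

# An arbitrary symmetric second-order tensor (e.g. an inertia or stress tensor)
T = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

# Eigenvalues and eigenvectors: [T](C) = lambda (C), equation (A1.48)
lam, A = np.linalg.eigh(T)      # columns of A are the (unit) eigenvectors

# Each eigenvalue makes det([T] - lambda [1]) vanish, equation (A1.50)
for l in lam:
    assert abs(np.linalg.det(T - l * np.eye(3))) < 1e-9

# The eigenvector matrix diagonalizes [T]: [T'] = [A]^T [T] [A], equation (A1.44)
T_prime = A.T @ T @ A
assert np.allclose(T_prime, np.diag(lam))
```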

In general a symmetric tensor when referred to its principal co-ordinates takes the form

[T] = [ λ_1   0    0  ]
      [  0   λ_2   0  ]    (A1.52)
      [  0    0   λ_3 ]

    and when it operates on an arbitrary vector (C) the result is

[T](C) = (λ_1 C_1   λ_2 C_2   λ_3 C_3)^T    (A1.53)

Let us now consider the case of degeneracy with λ_3 = λ_2. It is easily seen that if (C) lies in the x_2, x_3 plane, that is (C) = (0  C_2  C_3)^T, then

[T](C) = λ_2 (0  C_2  C_3)^T    (A1.54)

from which we see that the vector remains in the x_2, x_3 plane and is in the same direction. This also implies that the directions of the x_2 and x_3 axes can lie anywhere in the plane normal to the x_1 axis. This would be true if the x_1 axis is an axis of symmetry.

If the eigenvalues are triply degenerate, that is they are all equal, then any arbitrary vector will have its direction unaltered, from which it follows that all axes are principal axes.
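The degenerate case can be illustrated in the same way (NumPy; the tensor with λ_2 = λ_3 is my own example, chosen so that the x_1 axis is an axis of symmetry): any vector in the x_2, x_3 plane is simply scaled by λ_2, as in equation (A1.54), and with triple degeneracy every vector is an eigenvector.

```python
import numpy as np

# Principal-axis form with a doubly degenerate eigenvalue, lambda_2 = lambda_3
lam1, lam2 = 5.0, 2.0
T = np.diag([lam1, lam2, lam2])

# Any vector in the x2-x3 plane is an eigenvector with eigenvalue lambda_2
C = np.array([0.0, 0.7, -1.3])          # arbitrary components in that plane
assert np.allclose(T @ C, lam2 * C)     # equation (A1.54)

# With triple degeneracy every vector has its direction unaltered
T_iso = np.diag([lam1, lam1, lam1])
V = np.array([1.0, -2.0, 0.4])
assert np.allclose(T_iso @ V, lam1 * V)
```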

The orthogonality of the eigenvectors is readily proved by reference to equation (A1.48). Each eigenvector will satisfy this equation with the appropriate eigenvalue, thus

[T](C)_1 = λ_1 (C)_1    (A1.55)

and

[T](C)_2 = λ_2 (C)_2    (A1.56)

We premultiply equation (A1.55) by (C)_2^T and equation (A1.56) by (C)_1^T to obtain the scalars

(C)_2^T [T](C)_1 = λ_1 (C)_2^T (C)_1    (A1.57)


and

(C)_1^T [T](C)_2 = λ_2 (C)_1^T (C)_2    (A1.58)

Transposing both sides of the last equation, remembering that [T] is symmetrical, gives

(C)_2^T [T](C)_1 = λ_2 (C)_2^T (C)_1    (A1.59)

and subtracting equation (A1.59) from (A1.57) gives

0 = (λ_1 - λ_2) (C)_2^T (C)_1    (A1.60)

so when λ_1 ≠ λ_2 we have that (C)_2^T (C)_1 = 0; that is, the vectors are orthogonal.
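Finally, the orthogonality argument of equations (A1.55) to (A1.60) can be verified numerically; in the sketch below (NumPy, another arbitrary symmetric tensor with distinct eigenvalues) the eigenvectors belonging to distinct eigenvalues have zero scalar product.

```python
import numpy as np

T = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])          # symmetric, with distinct eigenvalues

lam, C = np.linalg.eigh(T)
C1, C2, C3 = C[:, 0], C[:, 1], C[:, 2]   # the three eigenvectors

# (C)_2^T (C)_1 = 0 when lambda_1 != lambda_2, equation (A1.60)
assert abs(C2 @ C1) < 1e-12
assert abs(C3 @ C1) < 1e-12
assert abs(C3 @ C2) < 1e-12
```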