UNIT 1

Groups

STRUCTURE PAGE NO.

1.1 Introduction 01

1.2 Objective 01

1.3 Normal and Subnormal Series 2-5

1.4 Composition Series 5-7

1.5 Jordan–Hölder theorem 7-11

1.6 Solvable Groups 11-13

1.7 Nilpotent Groups 13-15

1.8 Unit Summary/ Things to Remember 16

1.9 Assignments/ Activities 17

1.10 Check Your Progress 17

1.11 Points for Discussion/ Clarification 18

1.12 References 18


1.1 Introduction

The theory of groups began to be unified around 1880. Since then the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, to representation theory, and to many more influential spin-off domains. The classification of the finite simple groups is a vast body of work, carried out mainly in the second half of the 20th century, which describes all finite simple groups. These groups can be seen as the basic building blocks of all finite groups, in much the same way as the prime numbers are the basic building blocks of the natural numbers. The Jordan–Hölder theorem is a precise way of stating this fact about finite groups: Jordan defined isomorphism of permutation groups and proved the theorem for permutation groups, and Hölder later proved it in general. The classification of the finite simple groups was a huge collaborative effort; its completion was announced by Daniel Gorenstein in 1983.

1.2 Objective

After the completion of this unit one should be able to:

define normal, subnormal and composition series;

prove the Jordan–Hölder theorem;

prove the fundamental theorem of arithmetic as an application of the Jordan–Hölder theorem;

relate solvable and nilpotent groups;

show the solvability of finite p-groups;

analyse abstract and physical systems in which symmetry is present.

1.3 Normal and Subnormal series


An important part in group theory is played by subnormal, normal and central series. Such a series of subgroups of a group G gives insight into the structure of G. The results below hold for both abelian and non-abelian groups.

Subgroup series: A subgroup series of a group G is a finite sequence of subgroups of G contained in each other:

{e} = G0 ⊆ G1 ⊆ G2 ⊆ G3 ⊆ ··· ⊆ Gn = G

or G = G0 ⊇ G1 ⊇ G2 ⊇ G3 ⊇ ··· ⊇ Gn = {e} ……………(1)

One also considers infinite chains of embedded subgroups (increasing and decreasing), which may be indexed by a sequence of numbers or even by elements of an ordered set. Here n is called the length of the series.

A subgroup series (1) in which each subgroup is normal in the term immediately containing it is called a subnormal series or subinvariant series.

If, in addition, each subgroup Gi, i = 0, 1, 2, …, n, is normal in G, the subgroup series (1) is called a normal series or invariant series in G.

Note that for abelian groups the notions of subnormal and normal series coincide, since every subgroup is normal. Every normal series is a subnormal series, but the converse is not true in general. Sometimes the phrase "normal series" is used when the distinction between a normal series and a subnormal series is not important; many authors use the term normal series instead of subnormal series.

Examples:

1. {0} < 8ℤ < 4ℤ < ℤ : Normal series

2. {0} < 9ℤ < 3ℤ < ℤ : Normal series

3. {e} < ⟨s⟩ < ⟨r², s⟩ < D4 : Subnormal series, but not a normal series, since ⟨s⟩ is not normal in D4. Here D4 is the group of symmetries of the square, r denotes the rotation through 90° and s denotes a mirror image (reflection).


Refinement of a normal series

A subnormal (normal) series {1} = K0 ⊴ K1 ⊴ ··· ⊴ Km = G is a refinement of a subnormal (normal) series {1} = H0 ⊴ H1 ⊴ ··· ⊴ Hn = G if {Hi} ⊆ {Kj}, i.e. every term of the first series occurs among the terms of the second. A refinement of a normal series is thus another normal series obtained by adding extra terms.

Examples:

1. Sn > An > 1 is a normal series for G = Sn for any n ≥ 1. The associated subquotients are Sn/An ≅ ℤ2 (for n ≥ 2) and An/1 = An. For n = 4 this normal series has a refinement S4 > A4 > K > 1, where K = {e, (12)(34), (13)(24), (14)(23)} is the Klein four-group.

2. The series {0} < 72ℤ < 24ℤ < 8ℤ < 4ℤ < ℤ is a refinement of the series {0} < 72ℤ < 8ℤ < ℤ.

Schreier Refinement Theorem:

Any two subnormal series of a group have equivalent refinements.

Proof Consider two subnormal series of a group G:

G = G0 ⊇ G1 ⊇ G2 ⊇ ··· ⊇ Gs = (e)   …(1)

G = H0 ⊇ H1 ⊇ H2 ⊇ ··· ⊇ Ht = (e)   …(2)

Since for any i and k, Gi+1 is a normal subgroup of Gi and Hk+1 is a normal subgroup of Hk, we get that

Gi,j = Gi+1(Gi ∩ Hj)   (i = 0, 1, …, s−1; j = 0, 1, …, t)   …(3)

Hk,l = Hk+1(Hk ∩ Gl)   (k = 0, 1, …, t−1; l = 0, 1, …, s)   …(4)

are subgroups of G.

Now Hj+1 normal in Hj implies that Gi,j+1 is a normal subgroup of Gi,j. Similarly Hk,l+1 is a normal subgroup of Hk,l.

Since Ht = (e) and H0 = G, we have Gi,t = Gi+1(Gi ∩ (e)) = Gi+1 and Gi,0 = Gi+1(Gi ∩ G) = Gi+1Gi = Gi.

Thus Gi,t = Gi+1 = Gi+1,0 for i = 0, 1, …, s−1   …(5)

Similarly Hk,s = Hk+1 = Hk+1,0 for k = 0, 1, …, t−1   …(6)

Consider the two series

G = G0,0 ⊇ G0,1 ⊇ ··· ⊇ G0,t (= G1,0) ⊇ G1,1 ⊇ ··· ⊇ G1,t (= G2,0) ⊇ ··· ⊇ Gs−1,t = (e)   …(7)

G = H0,0 ⊇ H0,1 ⊇ ··· ⊇ H0,s (= H1,0) ⊇ H1,1 ⊇ ··· ⊇ H1,s (= H2,0) ⊇ ··· ⊇ Ht−1,s = (e)   …(8)

Both (7) and (8) have the same number (st) of factors. Clearly G0 occurs in (7), and by (5) each Gm, m = 1, 2, …, s, occurs in (7) as Gm−1,t. Thus (7) is a refinement of (1). Similarly (8) is a refinement of (2).

Now by the Zassenhaus lemma (let A and B be any two subgroups of a group G, and A*, B* normal subgroups of A and B respectively; then A*(A ∩ B)/A*(A ∩ B*) ≅ B*(A ∩ B)/B*(A* ∩ B)), applied with A = Gi, A* = Gi+1, B = Hj, B* = Hj+1, we have

Gi,j / Gi,j+1 ≅ Hj,i / Hj,i+1

for all i = 0, 1, …, s−1 and all j = 0, 1, …, t−1.

Thus (7) and (8) are equivalent. This proves the theorem.

Lemma If G is a commutative group having a composition series then G is finite.

Proof We first show that a simple abelian group must be a cyclic group of prime order. Indeed, in an abelian group every subgroup is normal, so a simple abelian group has no subgroups other than the trivial one and itself; the cyclic subgroup generated by any non-identity element must therefore be the whole group, and a cyclic group with no proper non-trivial subgroups has prime order.

Now let G = G0 ⊇ G1 ⊇ G2 ⊇ ··· ⊇ Gs = (e) be a composition series of G.

Since Gs−1/Gs = Gs−1/(e) ≅ Gs−1 is simple abelian, o(Gs−1) = ps−1 where ps−1 is a prime number. Further, Gs−2/Gs−1 is simple abelian, so o(Gs−2/Gs−1) = ps−2 for some prime number ps−2. Thus o(Gs−2) = o(Gs−2/Gs−1) · o(Gs−1) = ps−2 ps−1.

Proceeding in this manner we get that G has p0 p1 p2 ⋯ ps−1 elements, where pi = o(Gi/Gi+1) for i = 0, 1, 2, …, s−1. In particular G is finite.

1.4 Composition series

If a group G has a normal subgroup N which is neither the trivial subgroup nor G itself, then the

factor group G/N may be formed, and some aspects of the study of the structure of G may be

broken down by studying the "smaller" groups G/N and N. If G has no such normal subgroup, then

G is a simple group.

A composition series of a group G is a subnormal series

{1} = H0 ⊴ H1 ⊴ H2 ⊴ ··· ⊴ Hn = G

with strict inclusions, such that each Hi is a maximal normal subgroup of Hi+1. Equivalently, a

composition series is a subnormal series such that each factor group Hi+1 / Hi is simple. The factor

groups are called composition factors.

A subnormal series is a composition series if and only if it is of maximal length. That is, there are

no additional subgroups which can be "inserted" into a composition series. Thus, a subnormal

series {1} = H0 ⊴ H1 ⊴ H2 ⊴ ··· ⊴ Hn = G is a composition series if and only if

(1) H0 < H1 < H2 < · · · < Hn = G.

(2) For each i = 0, . . . , n - 1, we have that there is no normal subgroup of Hi+1 that lies strictly

between Hi and Hi+1.

The length n (the number of subgroups in the chain, not including the identity) of the series is called

the composition length.

If a composition series exists for a group G, then any subnormal series of G can be refined to a

composition series, informally, by inserting subgroups into the series up to maximality. Every finite

group has a composition series, but not every infinite group has one. For example, the infinite

cyclic group has no composition series.

Example: Let G = ℤ6 = {[0], [1], [2], [3], [4], [5]} be the cyclic group of order 6 and consider the subgroups H = ⟨[2]⟩ = {[0], [2], [4]} and K = ⟨[3]⟩ = {[0], [3]}. This gives rise to the two subnormal series

{[0]} ⊂ H ⊂ G,

{[0]} ⊂ K ⊂ G.

In fact these are both composition series.

The first has composition factors H/{[0]} ≅ ℤ3 and G/H ≅ ℤ2, whereas the latter series has composition factors K/{[0]} ≅ ℤ2 and G/K ≅ ℤ3. Notice that the composition factors turn out to be the same (up to order).
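The same phenomenon can be checked by a direct computation. The small Python sketch below (an illustration, not part of the unit) realises the two series above inside the integers mod 6 and compares the orders of their factor groups.

def subgroup(gen, n):
    """Cyclic subgroup of Z_n generated by gen."""
    return sorted({(k * gen) % n for k in range(n)})

n = 6
H = subgroup(2, n)                  # {0, 2, 4}
K = subgroup(3, n)                  # {0, 3}
G = subgroup(1, n)                  # all of Z_6

# Orders of the factor groups along {0} < H < G and {0} < K < G.
factors_via_H = [len(H), len(G) // len(H)]   # [3, 2]
factors_via_K = [len(K), len(G) // len(K)]   # [2, 3]
print(factors_via_H, factors_via_K)          # same multiset {2, 3}, different order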

Note: For infinite groups existence of a composition series is some kind of finiteness condition. If G

has a composition series, then any subnormal series has a refinement which is a composition series.

Isomorphic normal series

Two normal series

G = G0 ⊇ G1 ⊇ G2 ⊇ ··· ⊇ Gs = (e)   and   G = H0 ⊇ H1 ⊇ H2 ⊇ ··· ⊇ Ht = (e)

are called isomorphic or equivalent if there exists a 1–1 correspondence between the factors of the two series (thus s = t) such that the corresponding factors are isomorphic.

Theorem Every finite group has a composition series.

Proof Take a subnormal series of G of the greatest possible length (such a series exists because G is finite). Then its subquotients are all simple: if some subquotient Hi+1/Hi were not simple, it would have a non-trivial proper normal subgroup which, by the correspondence theorem, gives a group K with Hi ⊴ K ⊴ Hi+1 and Hi ≠ K ≠ Hi+1, extending the series and contradicting maximality.

(A composition factor is a special case of a subquotient of G, which is defined to be a quotient of a subgroup of G, i.e. H/N where N ⊴ H ≤ G.)

Thus a finite group has at least one composition series.

1.5 Jordan-Hölder Theorem

A group may have more than one composition series. However, the Jordan–Hölder theorem (named

after Camille Jordan and Otto Hölder) states that any two composition series of a given group are

equivalent. That is, they have the same composition length and the same composition factors, up to

permutation and isomorphism. The Jordan–Hölder theorem is also true for transfinite ascending

composition series, but not transfinite descending composition series .

Theorem

(1) Every finite group G having at least two elements has a composition series.

(2) If (e) = G0 ⊂ G1 ⊂ G2 ⊂ ··· ⊂ Gk = G and (e) = H0 ⊂ H1 ⊂ H2 ⊂ ··· ⊂ Hl = G are two composition series for a group G, then k = l and

Gi+1/Gi ≅ Hσ(i)+1/Hσ(i),   i = 0, 1, 2, …, k−1,

for some permutation σ of the set {0, 1, 2, …, k−1} (i.e. any two composition series for G are equivalent).

Proof (1) We apply induction on o(G). Let n = o(G). In case n = 2, (e) = G0 ⊂ G1 = G is the only composition series for G, as G1/G0 ≅ G and G, being a cyclic group of prime order, is simple. So the theorem holds for n = 2.

Let every group of order less than n (n > 2) have a composition series and let G be a group of order n.

If G is simple, then G has no proper non-trivial normal subgroup. Consequently (e) = G0 ⊂ G1 = G is a composition series for G.

If G is not simple, let N be a proper non-trivial normal subgroup of G. Since G is finite, there exist only finitely many proper normal subgroups of G containing N; let M be one such subgroup having the largest number of elements. Then M is a maximal normal subgroup of G. Clearly G/M is a simple group and M ≠ G, so that o(M) < n. Hence by the induction hypothesis M has a composition series,

(e) = M0 ⊂ M1 ⊂ M2 ⊂ ··· ⊂ Mt = M.

Now consider the series

(e) = M0 ⊂ M1 ⊂ M2 ⊂ ··· ⊂ Mt = M ⊂ G   …(1)

By definition of a composition series, each Mi+1/Mi is a simple group for all i = 0, 1, 2, …, t−1. Also G/Mt = G/M is simple. Consequently (1) is a composition series for G.

(2) Suppose

(e) = G0 ⊂ G1 ⊂ G2 ⊂ ··· ⊂ Gk−1 ⊂ Gk = G   …(2)

and (e) = H0 ⊂ H1 ⊂ H2 ⊂ ··· ⊂ Hl−1 ⊂ Hl = G   …(3)

are two composition series for the finite group G. Again we apply induction on o(G). Let o(G) = n. If n = 2 then, as seen before, G has only one composition series, so the second part of the theorem holds for groups of order 2.

Let us suppose that o(G) = n > 2 and let the result be true for all groups of order < n. We consider two cases: (I) Gk−1 = Hl−1 and (II) Gk−1 ≠ Hl−1.

Case I: Gk−1 = Hl−1.

Evidently (e) = G0 ⊂ G1 ⊂ ··· ⊂ Gk−1   …(4)

and (e) = H0 ⊂ H1 ⊂ ··· ⊂ Hl−1 = Gk−1   …(5)

are two composition series for the group Gk−1, whose order is less than n. So by the induction hypothesis series (4) and (5) are equivalent. This gives k − 1 = l − 1, i.e. k = l, and further series (2) and (3) are equivalent, as Gk/Gk−1 = G/Gk−1 = G/Hl−1 = Hl/Hl−1. This settles Case I.

Case II: Gk−1 ≠ Hl−1.

As Gk−1 and Hl−1 are maximal normal subgroups of G, Q = Gk−1 ∩ Hl−1 is a normal subgroup of Gk−1 as well as of Hl−1. Further Gk−1Hl−1 is a normal subgroup of G containing Gk−1 properly (because Gk−1 ≠ Hl−1), and since Gk−1 is a maximal normal subgroup of G, we get Gk−1Hl−1 = G.

Let (e) = Q0 ⊂ Q1 ⊂ Q2 ⊂ ··· ⊂ Qm = Q be a composition series for the group Q. Then we claim that

(e) = Q0 ⊂ Q1 ⊂ ··· ⊂ Qm (= Q) ⊂ Gk−1 ⊂ G   …(6)

and (e) = Q0 ⊂ Q1 ⊂ ··· ⊂ Qm (= Q) ⊂ Hl−1 ⊂ G   …(7)

are both composition series for G. For this purpose it is sufficient to prove that Gk−1/Q and Hl−1/Q are simple. Now, by the second isomorphism theorem,

Gk−1/Q = Gk−1/(Gk−1 ∩ Hl−1) ≅ Gk−1Hl−1/Hl−1 = G/Hl−1;

but G/Hl−1 is simple. Hence Gk−1/Q is simple. Similarly Hl−1/Q ≅ G/Gk−1 is simple. Again (6) and (7) are equivalent, since Gk−1/Q ≅ G/Hl−1 and Hl−1/Q ≅ G/Gk−1 (as seen above), and each of (6) and (7) is of length m + 2.

Now by Case I, series (2) and (6) are equivalent, so k = m + 2. Again by Case I, series (3) and (7) are equivalent, therefore l = m + 2. Hence k = l. Also, as series (6) and (7) are equivalent, the series (2) and (3) are equivalent. This completes the proof for Case II.

Note: This theorem can also be proved by Schreier refinement theorem.

Application of Jordan-Hölder Theorem

We use the Jordan-Hölder Theorem to prove the uniqueness part of the Fundamental Theorem of Arithmetic. The Fundamental Theorem of Arithmetic states that every positive integer greater than 1 can be factored uniquely (up to order) into a product of primes.

First, we claim that such a factorization exists. Indeed, suppose n is composite (i.e., n > 1 and n is not a prime). Then an easy induction shows that n has a prime divisor p, and we can write n = p·n1, where n1 is an integer satisfying n1 < n. If n1 is prime, the claim holds. Otherwise, n1 has a prime factor p1, and n1 = p1·n2, where n2 < n1 is an integer. Continuing in this fashion, we must come to an equation nj−1 = pj−1·nj in which nj is itself a prime pj, since the sequence of decreasing positive integers

n > n1 > n2 > n3 > ⋯

cannot continue indefinitely. We now have that n = p·p1·p2·⋯·pj−1·pj is a product of primes. This proves the existence claim.
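The existence argument above is effectively an algorithm: repeatedly split off a prime divisor until the remaining cofactor is itself prime. The following small Python sketch (an illustration, not part of the unit) renders it directly.

def smallest_prime_divisor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m                      # no divisor up to sqrt(m): m is prime

def prime_factorisation(n):
    """Return primes p, p1, p2, ... with product n (for n > 1)."""
    primes = []
    while n > 1:
        p = smallest_prime_divisor(n)
        primes.append(p)          # n = p * n1 ...
        n //= p                   # ... continue with the smaller cofactor n1
    return primes

print(prime_factorisation(360))   # [2, 2, 2, 3, 3, 5]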

On the basis of the Jordan-Hölder Theorem, we can easily show the other part of the Fundamental

Theorem of Arithmetic, i.e., apart from order of the factors, the representation of n as product of

primes is unique. To do this suppose that n = p1 p2 p3 ⋯ ps and n = q1 q2 q3 ⋯ qt, where the pi and qj are primes. Then, denoting as usual by Ck the cyclic group of order k, we have

{e} ⊂ Cp1 ⊂ Cp1p2 ⊂ Cp1p2p3 ⊂ ⋯ ⊂ Cp1p2⋯ps = Cn

and

{e} ⊂ Cq1 ⊂ Cq1q2 ⊂ Cq1q2q3 ⊂ ⋯ ⊂ Cq1q2⋯qt = Cn

as two composition series for Cn (here Cp1p2⋯pi denotes the unique subgroup of Cn of order p1p2⋯pi, so the successive factor groups are cyclic of the prime orders p1, p2, …, ps and q1, q2, …, qt respectively). But the Jordan-Hölder Theorem implies these must be equivalent; hence we must have s = t and, after suitably rearranging, pi = qi, 1 ≤ i ≤ s. Thus we have established the unique factorization theorem for positive integers as an application of the Jordan-Hölder Theorem.

1.6 Solvable Groups

The group G is said to be solvable or soluble if there exists a finite chain of subgroups

G = N0 ⊇ N1 ⊇ ··· ⊇ Nn such that (i) Ni is a normal subgroup of Ni−1 for i = 1, 2, …, n,

(ii) Ni−1/Ni is abelian for i = 1, 2, …, n, and

(iii) Nn = {e}.

Examples: Any abelian group is solvable. A simple group is solvable if and only if it is cyclic of prime order. A4 is solvable: indeed, the subgroup {e, (12)(34), (13)(24), (14)(23)}, isomorphic to ℤ2 × ℤ2, is normal in A4, with quotient isomorphic to ℤ3. Since ℤ2 × ℤ2 and ℤ3 are abelian, A4 is solvable; its composition factors are ℤ2 and ℤ3. As a corollary, one deduces that S4 is also solvable, since S4/A4 is isomorphic to ℤ2. An is not solvable for n > 4, because it is simple and not abelian; consequently Sn is not solvable for n > 4 either.

Any abelian group is solvable even if it is infinite. Another instructive example is the symmetric group S4, which has the solvable series S4 ⊳ A4 ⊳ K ⊳ 1 with quotients S4/A4 ≅ ℤ2, A4/K ≅ ℤ3 and K/1 = K ≅ ℤ2 × ℤ2, where K is the Klein four-group above.
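These examples can also be confirmed computationally. The sketch below (assuming SymPy is available; it is not part of the unit) uses SymPy's permutation-group machinery to test solvability directly.

from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

print(SymmetricGroup(4).is_solvable)      # True:  S4 is solvable
print(AlternatingGroup(4).is_solvable)    # True:  A4 is solvable
print(AlternatingGroup(5).is_solvable)    # False: A5 is simple and non-abelian
print(SymmetricGroup(5).is_solvable)      # False: S5 contains A5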

Properties of solvable groups

If G is solvable and H is a subgroup of G, then H is solvable.

If G is solvable and H is a normal subgroup of G, then G/H is solvable.

If G is solvable and there is a surjective homomorphism φ : G → H, then H is solvable.

If H is a normal subgroup of G such that both H and G/H are solvable, then so is G.

Theorem Let p be a prime number. Any finite p-group is solvable.

Proof Let G be a finite p-group (p a prime number); then o(G) = p^n for some n ≥ 0. If n ≤ 1, G is abelian and so it is solvable. So let n > 1; we apply induction on n and suppose the result holds for groups of order p^m with m < n. Now Z(G), the centre of G, is not equal to (e). Consequently o(Z(G)) = p^k for some k ≥ 1. This yields that G/Z(G) is a group of order p^(n−k) with n − k < n. Hence by the induction hypothesis G/Z(G) is solvable. Also Z(G), being abelian, is solvable. Hence, taking H = Z(G), we see from the last property above (H and G/H solvable imply G solvable) that G is solvable.

Supersolvable groups

As a strengthening of solvability, a group G is called supersolvable (or supersoluble) if it has an

invariant normal series whose factors are all cyclic. Since a normal series has finite length by

definition, uncountable groups are not supersolvable. The alternating group A4 is an example of a

finite solvable group that is not supersolvable.

We now establish a criterion for the solvability of a group by means of commutator subgroups.

Commutator Subgroup

Let G be a group. An element g ∈ G is called a commutator if g = aba⁻¹b⁻¹ for some elements a, b ∈ G. The smallest subgroup that contains all commutators of G is called the commutator subgroup (or derived subgroup) of G and is denoted G′.

The subgroup (G′)′ is called the second derived subgroup of G. We define G^(k) inductively as (G^(k−1))′, and call it the k-th derived subgroup.

Proposition Let G be a group with commutator subgroup G′. Then

(a) The subgroup G′ is normal in G, and the factor group G/G′ is abelian.

(b) If N is any normal subgroup of G, then the factor group G/N is abelian if and only if G′ ⊆ N.

Theorem A group G is solvable if and only if G^(n) = {e} for some positive integer n.

Proof Let G be solvable and let G = G0 ⊇ G1 ⊇ G2 ⊇ ··· ⊇ Gn = (e) be a solvable series. We prove inductively that G^(k) ⊆ Gk for all k. Clearly G^(0) = G = G0. Suppose G^(k) ⊆ Gk for some k; this gives G^(k+1) = [G^(k)]′ ⊆ Gk′. As Gk/Gk+1 is abelian, we have Gk′ ⊆ Gk+1. Consequently G^(k+1) ⊆ Gk+1. Hence by induction G^(k) ⊆ Gk for k = 0, 1, 2, …, n. In particular G^(n) ⊆ Gn. Hence G^(n) = (e), since Gn = (e).

Conversely, let G^(n) = (e) for some n. Then G = G^(0) ⊇ G^(1) ⊇ G^(2) ⊇ ··· ⊇ G^(n) = (e) is a subnormal series for G such that each factor G^(i)/G^(i+1) = G^(i)/[G^(i)]′ is abelian (since for any group H, H/H′ is always abelian); hence G is solvable.

Corollary Sn is not solvable for n ≥ 5.

Proof For n ≥ 5, An is simple and non-commutative, so An′ ≠ (e). However An is simple, so its only normal subgroups are An and (e). Consequently An′ = An, and An^(2) = (An′)′ = An′ = An. In general An^(k) = An for all positive integers k; thus An^(k) ≠ (e) for every k. Hence An is not solvable. As An is a subgroup of Sn, Sn is also not solvable for n ≥ 5, since every subgroup of a solvable group

is solvable.
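The derived-series criterion just proved can also be checked numerically. The following sketch (assuming SymPy; not part of the unit) computes the derived series G ⊇ G′ ⊇ G″ ⊇ ⋯ and watches whether it reaches the trivial group.

from sympy.combinatorics.named_groups import SymmetricGroup

def derived_series_orders(G):
    return [H.order() for H in G.derived_series()]

print(derived_series_orders(SymmetricGroup(4)))   # ends in 1 (24, 12, 4, 1): S4 is solvable
print(derived_series_orders(SymmetricGroup(5)))   # stabilises at 60 (= A5): S5 is not solvable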

1.7 Nilpotent Groups

A nilpotent group is a group that is "almost abelian". This idea is motivated by the fact that nilpotent groups are solvable, and that in a finite nilpotent group two elements of relatively prime orders must commute. It is also true that finite nilpotent groups are supersolvable.

Nilpotent groups arise in Galois theory, as well as in the classification of groups. They also appear

prominently in the classification of Lie groups. Analogous terms are used for Lie algebras including

nilpotent, lower central series, and upper central series.

The following are equivalent definitions for a nilpotent group:

A nilpotent group is one that has a central series of finite length.

A nilpotent group is one whose lower central series terminates in the trivial subgroup after finitely

many steps.

A nilpotent group is one whose upper central series terminates in the whole group after finitely

many steps.

For a nilpotent group, the smallest n such that G has a central series of length n is called the

nilpotency class of G and G is said to be nilpotent of class n. Equivalently, the nilpotency class of G

equals the length of the lower central series or upper central series (the minimum n such that the nth

term is the trivial subgroup, respectively whole group). If a group has nilpotency class at most m,

then it is sometimes called a nil-m group.

The trivial group is the unique group of nilpotency class 0, and groups of nilpotency class 1 are

exactly non-trivial abelian groups.

Properties of nilpotent groups

Since each successive factor group Zi+1/Zi in the upper central series is abelian, and the series is finite, every nilpotent group is a solvable group with a relatively simple structure.

Every subgroup of a nilpotent group of class n is nilpotent of class at most n; in addition, if f is a homomorphism of a nilpotent group of class n, then the image of f is nilpotent of class at most n.

If G is a nilpotent group and H is a proper subgroup of G, then H is a proper normal subgroup of NG(H) (the normalizer of H in G).

Every maximal proper subgroup of a nilpotent group is normal.

A finite nilpotent group is the direct product of its Sylow subgroups.

The last statement can be extended to infinite groups: if G is a nilpotent group, then every Sylow subgroup Gp of G is normal, and the direct sum of these Sylow subgroups is the subgroup of all elements of finite order in G.

Note that:

Given a prime number p, a p-group (also called a p-primary group) is a group such that for each element g of the group there exists a nonnegative integer n such that g^(p^n) is equal to the identity element (in other words, the order of g is a power of p).

A subgroup of a finite group is termed a Sylow subgroup if it is a p-Sylow subgroup for some prime number p.

Nilpotent is Solvable

Given a central series for G, build a corresponding normal series, in reverse order, as follows.

The identity homomorphism maps G onto G, with kernel {e}. Thus C0 = G, and N0 = {e}.

Moving to C1, a homomorphism maps G onto C1, and its kernel is the center of G. Let N1 be this

kernel, the center of G.

A second homomorphism maps C1 onto C2. Combine this with the first homomorphism to get a

map from G onto C2. Let N2 be the kernel of this map. Continue all the way to Ck = e, whence

Nk = G.

Since each Ni is the kernel of a homomorphism, each Ni is normal in G. Conjugate Ni by the

elements in Ni+1, which all belong to G, and the result is always Ni. Thus Ni is normal in Ni+1, and

we have a normal series for G, running in reverse order.

The factor group Ni+1/Ni is the center of Ci, which is abelian. All factor groups are abelian, the

series is solvable, and the group is solvable. Every nilpotent group is solvable.

However, there are plenty of solvable groups that are not nilpotent. S3 has trivial centre, hence no central series reaching the whole group, even though it has an abelian normal subgroup A3 ≅ ℤ3 with abelian factor group S3/A3 ≅ ℤ2.
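This contrast between nilpotent and solvable can be verified directly. The sketch below (assuming SymPy; not part of the unit) checks a 2-group, which must be nilpotent, against S3, which is solvable but not nilpotent.

from sympy.combinatorics.named_groups import DihedralGroup, SymmetricGroup

D4 = DihedralGroup(4)          # symmetries of the square, order 8: a 2-group
S3 = SymmetricGroup(3)

print(D4.order(), D4.is_nilpotent, D4.is_solvable)   # 8 True True
print(S3.order(), S3.is_nilpotent, S3.is_solvable)   # 6 False True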

Theorem If G is a group of order pn, with p prime, and H is a proper subgroup of G, then the

normalizer NG(H) strictly contains H.

Proof The proof is by induction on the order of G. If |G| = 1 there is nothing to prove. Suppose |G| > 1. Let Z denote the centre of G. We know from the class equation that |Z| > 1. Note also that Z is contained in the normalizer of H. So, if Z is not a subset of H, then we are done. Suppose Z is a subset of H. Then, since Z is normal in G, we consider the subgroup H/Z of the quotient group G/Z. Since |G/Z| is strictly smaller than |G|, and is still a power of p, we can apply the inductive hypothesis: the normalizer of H/Z in G/Z strictly contains H/Z. Pulling back along the quotient map G → G/Z, we conclude that NG(H) strictly contains H.


This result also holds in arbitrary nilpotent groups. Using a lemma called the Frattini argument

(Let G be a finite group, and let H be a normal subgroup of G. If P is any Sylow subgroup of H,

then G=HN(P), and [G:H] is a divisor of |N(P)|.), one can show that a finite group is nilpotent if and

only if all of its Sylow subgroups are normal. Hence G is isomorphic to the direct product of its

Sylow subgroups. So, a general finite nilpotent group is isomorphic to a direct product of p-groups

for some collection of primes p.

1.8 Unit Summary/ Things to Remember

1 A subnormal series of a group G is a sequence of subgroups, each a normal subgroup of the

next one.

In a standard notation: {e} = G0 ⊴ G1 ⊴ G2 ⊴ G3 ⊴ ··· ⊴ Gn = G.

There is no requirement made that Gi be a normal subgroup of G, only a normal subgroup

of Gi+1. The quotient groups Gi+1/ Gi are called the factor groups of the series.

If in addition each Gi is normal in G, then the series is called a normal series.

2 Composition series is a subnormal series such that each factor group Gi+1 / Gi is simple. The

factor groups are called composition factors.

3 Two subnormal series are said to be equivalent or isomorphic if there is a bijection between

the sets of their factor groups such that the corresponding factor groups are isomorphic.

4 Jordan–Hölder theorem states that any two composition series of a given group are

equivalent. That is, they have the same composition length and the same composition

factors, up to permutation and isomorphism.

5 A solvable group, or soluble group, is one with a subnormal series whose factor groups are

all abelian.

6 The derived series of a group G is the normal series G > G1 > ... > Gn > ... where

G1 = [G,G], G2=[G1,G1], ... , Gn=[Gn-1,Gn-1]. Note that Gn/Gn+1 is the largest abelian quotient

of Gn. G is solvable if and only if its derived series eventually reaches the identity - that is,

Gn=1 for some n.

7 A nilpotent series is a subnormal series such that successive quotients are nilpotent.

8 A nilpotent series exists if and only if the group is solvable.

9 Nilpotent groups do not have the "hereditary property" that solvable groups have. That is,

nilpotency of a normal subgroup N and of the quotient G/N do not imply nilpotence of G, although the converse does hold. The issue is that one needs a normal, rather than subnormal, series, and a normal subgroup of N need not be normal in G.

1.9 Assignment/Activities

1 Prove that a finite p-group where p is a prime number is cyclic if and only if it has only one

composition series.

2 Let a group G be direct product of two subgroups H and K. Show that G has a composition

series if and only if each of H and K has a composition series.

3 Show that if N is a normal subgroup of G and G has a composition series than N has a

composition series.

4 Prove that if G is a group that has a normal subgroup N such that both N and G/N are

solvable, then G must be solvable.

5 Prove that every group of order p²q, where p and q are primes, is solvable.

6 A group G is said to be nilpotent if it has a normal series G = G0 ⊇ G1 ⊇ G2 ⊇ ··· ⊇ Gs = (e) such that Gi/Gi+1 ⊆ Z(G/Gi+1) for i = 0, 1, 2, …, s−1. Prove that every nilpotent group is solvable, but that the converse is not true.

1.10 Check Your Progress

2 Let G be a group having a composition series and H, a normal subgroup of G. Prove that G

has a composition series, one of whose terms is H.

3 Determine a composition series of An (n≠4).

4 A finite group G is solvable if and only if it has a composition series, each of whose factor

group is cyclic of prime order.

5 A subgroup and a quotient group of a nilpotent group are nilpotent.

6 Show that every nilpotent group is solvable.

7 Prove that every group of order p²q, where p and q are primes, is solvable.

8 Show that a finite direct product of solvable groups is solvable.


1.11 Points for discussion / Clarification

At the end of the unit you may like to discuss or seek clarification on some points. If so,

mention the same.

1. Points for discussion

_______________________________________________________________________

_______________________________________________________________________

_______________________________________________________________________

2. Points for clarification

________________________________________________________________________

________________________________________________________________________


________________________________________________________________________

_________________________________________________________________________

1.12 References

1. D.J.S. Robinson, A Course in the Theory of Groups, 2nd Edition, New York: Springer-Verlag, 1995.

2. J.S. Lomont, Applications of Finite Groups, New York: Dover, 1993.

3. John S. Rose, A Course on Group Theory, New York: Dover, 1994.

4. P.B. Bhattacharya, S.K. Jain and S.R. Nagpaul, Basic Abstract Algebra, 2nd Edition, Cambridge University Press, Indian Edition, 1997.

5. John B. Fraleigh, A First Course in Abstract Algebra, 7th Edition, Pearson Education, 2004.


UNIT 2

Canonical Forms

STRUCTURE PAGE NO.

2.1 Introduction 19

2.2 Objective 19

2.3 Similarity of linear transformations 20-21

2.4 Invariant Subspaces 22-24

2.5 Reduction to triangular forms 24-25

2.6 Nilpotent transformation and Index of nilpotency 25

2.7 Invariants of a nilpotent transformation 26-27

2.8 The Primary decomposition theorem 27-28

2.9 Jordan block and Jordan forms 29-42

2.10 Cyclic modules 42-44

2.11 Simple modules 44

2.12 Semi- Simple modules 44

2.13 Schur’s Lemma 45

2.14 Free modules 46

2.15 Unit Summary/ Things to Remember 46-48

2.16 Assignments/ Activities 48

2.17 Check Your Progress 49

2.18 Points for Discussion/ Clarification 50

2.19 References 50


2.1 Introduction

Cayley in 1858 published Memoir on the theory of matrices which is remarkable for containing the

first abstract definition of a matrix. He shows that the coefficient arrays studied earlier for quadratic

forms and for linear transformations are special cases of his general concept. Cayley gave a matrix

algebra defining addition, multiplication, scalar multiplication and inverses. He gave an explicit

construction of the inverse of a matrix in terms of the determinant of the matrix.

In 1870 the Jordan canonical form appeared in Treatise on substitutions and algebraic equations by

Jordan. It appears in the context of a canonical form for linear substitutions over a finite field of prime order.

Frobenius, in 1878, wrote an important work on matrices On linear substitutions and bilinear forms

although he seemed unaware of Cayley's work. Frobenius in this paper deals with coefficients of

forms and does not use the term matrix. However he proved important results on canonical matrices

as representatives of equivalence classes of matrices. He cites Kronecker and Weierstrass as having

considered special cases of his results in 1874 and 1868 respectively. Frobenius also proved the

general result that a matrix satisfies its characteristic equation. This 1878 paper by Frobenius also

contains the definition of the rank of a matrix which he used in his work on canonical forms and the

definition of orthogonal matrices.

The nullity of a square matrix was defined by Sylvester in 1884. He defined the nullity of A, n(A),

to be the largest i such that every minor of A of order n-i+1 is zero. Sylvester was interested in

invariants of matrices, that is properties which are not changed by certain transformations.

2.2 Objective

After the completion of this unit students should be able to:

make use of matrices in various domains.

decide when two linear transformations are similar.

define nilpotent transformation.

prove Primary Decomposition Theorem.

classify cyclic, simple, semi-simple and free modules.

prove Schur's lemma.

represent matrix in Jordan Forms.


2.3 Similarity of linear transformations

In mathematics, a linear transformation (also called a linear map, linear function or linear operator)

is a function between two vector spaces that preserves the operations of vector addition and scalar

multiplication. The expression "linear operator" is commonly used for linear maps from a vector

space to itself (endomorphism). In the language of abstract algebra, a linear map is a

homomorphism of vector spaces.

Recall that if V and W are vector spaces over the same field F, a linear transformation is a function T : V → W such that:

T(v + w) = T(v) + T(w) for all v, w ∈ V,

T(αv) = αT(v) for all v ∈ V and α ∈ F.

The set of all linear maps V → W is denoted by HomF(V, W) or Hom(V, W). If V = W, then Hom(V, V) is an algebra over F. For convenience of notation we shall write Hom(V, V) as A(V).

T ∈ A(V) is called right (left) invertible if there exists S ∈ A(V) such that TS = I (ST = I), where I is the identity transformation on V; such an S is called a right (left) inverse of T.

T ∈ A(V) is called invertible or regular if T is both right and left invertible; if T is invertible then there exists S ∈ A(V) such that ST = TS = I.

T ∈ A(V) is said to be equivalent to S ∈ A(V) if there exist invertible transformations P, Q ∈ A(V) such that T = PSQ.

T ∈ A(V) is said to be similar to S ∈ A(V) if there exists an invertible transformation P ∈ A(V) such that T = PSP⁻¹.

A linear transformation T on a vector space V is said to be cyclic if there exists v ∈ V such that V = F[T]v; in such a situation V is said to be a cyclic space relative to T.

Now we need to know some more terms:

Minimal polynomial: The monic polynomial of least degree which divides the characteristic polynomial of a matrix and has the same roots; equivalently, the monic polynomial of least degree satisfied by the matrix.

Linear polynomial: A polynomial of degree one.

Coprime polynomials: Two polynomials p(t) and q(t) are coprime if gcd(p(t), q(t)) = 1 (or any nonzero constant). Specifically this means that if c(t) divides both p(t) and q(t), then c(t) is a nonzero constant.

Direct sum: The direct sum of a family of objects Ai, with i ∈ I, is denoted by A = ⊕i∈I Ai, and each Ai is called a direct summand of A.

Cyclic linear transformation: A linear transformation T on a vector space V is said to be cyclic if there exists v ∈ V such that V = F[T]v; in such a situation V is said to be a cyclic space relative to T.

Theorem: A linear transformation T : V → V is invertible if and only if its minimal polynomial has non-zero constant term.

Proof Let m(x) = x^k + a1 x^(k−1) + a2 x^(k−2) + ⋯ + ak be the minimal polynomial of T. Then

T^k + a1 T^(k−1) + ⋯ + a_(k−1) T + ak I = 0,

which yields

T (T^(k−1) + a1 T^(k−2) + ⋯ + a_(k−1) I) = (T^(k−1) + a1 T^(k−2) + ⋯ + a_(k−1) I) T = −ak I   …(1)

Let T be regular. If ak = 0, then since T⁻¹ exists, (1) implies that T^(k−1) + a1 T^(k−2) + ⋯ + a_(k−1) I = 0. This contradicts the fact that deg m(x) = k and T cannot be a root of any non-zero polynomial over F of degree < k. Hence ak ≠ 0.

Conversely, if ak ≠ 0, then (1) yields that T T′ = T′ T = I, where

T′ = −(1/ak)(T^(k−1) + a1 T^(k−2) + ⋯ + a_(k−1) I).

Hence T is invertible.
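Equation (1) gives an explicit formula for the inverse whenever the constant term is non-zero. The small numerical sketch below (not part of the unit) checks it for a concrete 2×2 matrix.

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
# Minimal polynomial of A: m(x) = x^2 - 5x + 6, so a1 = -5 and a2 = 6 (non-zero).
a1, a2 = -5.0, 6.0
A_inv = -(1.0 / a2) * (A + a1 * np.eye(2))   # T' = -(1/a2)(T + a1 I)
print(np.allclose(A_inv @ A, np.eye(2)))     # True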


Lemma A vector space V is cyclic relative to a linear transformation T on V if and only if dimF V equals the degree of the minimal polynomial of T.

Proof If V is cyclic relative to T, then there exists v ∈ V such that V = F[T]v. Then dimF F[T]v = deg mv(x), where mv(x) is the minimal polynomial of v relative to T; since V = F[T]v, mv(x) is also the minimal polynomial of T, so dim V equals the degree of the minimal polynomial of T.

Conversely, let the degree of m(x), the minimal polynomial of T, be n, where n = dim V. Now V has a vector v whose minimal polynomial relative to T is m(x), and hence dim F[T]v = n. Consequently V = F[T]v. This proves that V is cyclic relative to T.

2.4 Invariant Subspaces

Let T be a linear transformation on a vector space V. A subspace W of V is said to be a T-invariant subspace of V if T(x) ∈ W for every x ∈ W [i.e. T(W) ⊆ W].

Remark: For any T ∈ A(V), the subspaces (0), V and T(V) are always T-invariant subspaces of V.

Given a T-invariant subspace W of V, we can define a linear transformation T′ : W → W such that T′(x) = T(x) for every x ∈ W. T′ is said to be induced by T.

Now we have the following:

Lemma: Let V = V1 ⊕ V2 and let T be a linear transformation from V to V such that V1 and V2 are both T-invariant subspaces. If Ti (i = 1, 2) is the linear transformation on Vi induced by T, then the minimal polynomial f(x) of T is the LCM of the minimal polynomials of T1 and T2.

Proof: Let fi(x) be the minimal polynomial of Ti. Since f(T) = 0 and Ti is the restriction of T to Vi, we also have f(Ti) = 0. Hence fi(x) | f(x). Therefore the LCM g(x) of the fi(x) also divides f(x).

Now, for any x1 ∈ V1, g(T)(x1) = g(T1)(x1) = 0, as g(T1) = 0 (because f1(x) | g(x)). Similarly g(T)(x2) = 0 for every x2 ∈ V2. Hence g(T)(V1) = (0) and g(T)(V2) = (0). This yields that g(T)(V) = (0), as V = V1 ⊕ V2. Thus g(T) = 0. Consequently f(x) | g(x). Hence f(x) is an associate of g(x); both being monic, f(x) is the LCM of f1(x) and f2(x).

Theorem Let T be a cyclic linear transformation on a vector space V and let f(x) be its minimal polynomial. If f(x) = [f1(x)]^a1 [f2(x)]^a2 ⋯ [ft(x)]^at is the decomposition of f(x) into a product of powers of distinct irreducible monic polynomials fi(x) over F, then V = V1 ⊕ V2 ⊕ ⋯ ⊕ Vt for some T-invariant subspaces Vi of V such that the restriction Ti of T to Vi is a cyclic linear transformation of Vi with [fi(x)]^ai as its minimal polynomial.

Proof Since T is a cyclic transformation on V, V = F[T]v for some v ∈ V, and f(x) is the minimal polynomial of v relative to T.

Define

v1 = [f2(T)]^a2 [f3(T)]^a3 ⋯ [ft(T)]^at v,
v2 = [f1(T)]^a1 [f3(T)]^a3 ⋯ [ft(T)]^at v,
…,
vt = [f1(T)]^a1 [f2(T)]^a2 ⋯ [f(t−1)(T)]^a(t−1) v,

i.e. vi is obtained by applying to v the product of all the factors [fj(T)]^aj with j ≠ i. Then the minimal polynomial of v1 relative to T is [f1(x)]^a1; in general the minimal polynomial of vi is [fi(x)]^ai. Thus the minimal polynomial of

w = v1 + v2 + ⋯ + vt   …(1)

is [f1(x)]^a1 [f2(x)]^a2 ⋯ [ft(x)]^at = f(x). Hence dim V = deg f(x) = dim F[T]w, which yields

V = F[T]w   …(2)

Let Vi = F[T]vi (1 ≤ i ≤ t). Since T F[T]vi ⊆ F[T]vi, we get that Vi is a T-invariant subspace. So if Ti is the restriction of T to Vi, we get Vi = F[Ti]vi, a cyclic space relative to Ti. Since [fi(x)]^ai is the minimal polynomial of vi relative to T, it is also the minimal polynomial of vi relative to Ti. Hence Ti has [fi(x)]^ai as its minimal polynomial.

Since w = v1 + v2 + ⋯ + vt, we have V = F[T]w ⊆ F[T]v1 + F[T]v2 + ⋯ + F[T]vt, and so

V = V1 + V2 + ⋯ + Vt   …(3)

We claim that this sum is direct. Let

g1(T)v1 + g2(T)v2 + ⋯ + gt(T)vt = 0 for some gi(T)vi ∈ Vi   …(4)

Applying [f2(T)]^a2 ⋯ [ft(T)]^at to (4), and using [fi(T)]^ai vi = 0 for i ≥ 2, we obtain

[f2(T)]^a2 ⋯ [ft(T)]^at g1(T) v1 = 0   …(5)

Consequently [f1(x)]^a1 divides g1(x)[f2(x)]^a2 ⋯ [ft(x)]^at, and since [f1(x)]^a1 is relatively prime to each [fi(x)]^ai (i ≥ 2), it divides g1(x). Hence g1(T)v1 = 0. Similarly g2(T)v2 = 0, …, gt(T)vt = 0. Thus in (4) every term on the left is zero. Hence V = V1 ⊕ V2 ⊕ ⋯ ⊕ Vt. This proves the theorem.

2.5 Reduction to triangular forms

Theorem: Let V be a finite-dimensional vector space over F and let T be a linear operator on V. Then T is triangulable if and only if the minimal polynomial mT is a product of linear polynomials over F.

To prove this theorem we use the following lemma.

Lemma: Let T : V → V with V finite-dimensional, suppose mT is a product of linear factors, and let W be a proper T-invariant subspace of V. Then there exists a vector β ∉ W such that (T − cI)β ∈ W for some eigenvalue c of T (that is, the T-conductor of β into W is linear).

Start by applying the lemma to W = {0} to obtain β1 with Tβ1 = a11 β1 for some a11 ∈ F. Then apply the lemma to W1 = span{β1} and obtain β2 ∉ W1 such that Tβ2 = a12 β1 + a22 β2 for some a12, a22 ∈ F. Let W2 = span{β1, β2}. Then W2 is T-invariant and dim W2 = 2. Continue in this way to obtain β1, …, βi and Wi = span{β1, …, βi} such that

1. βi ∉ Wi−1;

2. Tβi = a1i β1 + a2i β2 + ⋯ + aii βi;

3. Wi is T-invariant, and dim Wi = i.

Thus we can continue until i = n. With respect to B = {β1, …, βn},

[T]B =
[ a11 a12 a13 … a1n ]
[ 0   a22 a23 … a2n ]
[ 0   0   a33 … a3n ]
[ …                 ]
[ 0   0   0   … ann ]

which is (upper) triangular.

Conversely, if T is triangulable, then the characteristic polynomial has the form

χT = (x − c1)^d1 ⋯ (x − ck)^dk,

a product of linear factors. Since mT | χT, mT is also a product of linear factors.
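Over the complex numbers every polynomial is a product of linear factors, so every matrix is triangulable; numerically this is the (complex) Schur decomposition. The sketch below (assuming SciPy; not part of the unit) triangularises a real matrix whose minimal polynomial x² + 1 does not split over ℝ, which is exactly why the complex form is needed.

import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])              # minimal polynomial x^2 + 1
T, Q = schur(A, output='complex')        # A = Q T Q*, with T upper triangular over C
print(np.round(T, 3))                    # eigenvalues +i and -i on the diagonal
print(np.allclose(Q @ T @ Q.conj().T, A))   # True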

2.6 Nilpotent transformation and Index of Nilpotency

In linear algebra, a nilpotent matrix is a square matrix N such that N^k = 0 for some positive integer k. The smallest such k is sometimes called the degree (or index) of N. More generally, a nilpotent transformation is a linear transformation T of a vector space such that T^k = 0 for some positive integer k.

A linear transformation T on a vector space V is said to be nilpotent of index k ≥ 1 if T^k = 0 but T^(k−1) ≠ 0.

Analogously, the definition of a nilpotent matrix of index k can be given, and a linear transformation T is nilpotent of index k if and only if its matrix relative to any basis is nilpotent of index k. For example, any n×n matrix whose entries on and below the diagonal are zeros is nilpotent of index at most n.
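The index can be found by direct computation. A small sketch (not part of the unit): the strictly upper triangular matrix below, with non-zero entries along the superdiagonal, has index exactly 4.

import numpy as np

N = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

def nilpotency_index(M):
    """Smallest k with M^k = 0, or None if M is not nilpotent."""
    n = M.shape[0]
    P = np.eye(n)
    for k in range(1, n + 1):       # a nilpotent n x n matrix has index <= n
        P = P @ M
        if np.allclose(P, 0):
            return k
    return None

print(nilpotency_index(N))   # 4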


2.7 Invariants of a nilpotent transformation

The following theorem is helpful to know about the invariants of a nilpotent transformation

Theorem If T ∈ A(V) is nilpotent of index k, then there exists an ordered basis B of V such that the matrix of T relative to B has the block-diagonal form

[ Mk1  O    …  O   ]
[ O    Mk2  …  O   ]
[ …                ]
[ O    O    …  Mkr ]

where k = k1 ≥ k2 ≥ ⋯ ≥ kr and, for any positive integer t, Mt is the t×t matrix with 1's immediately below the diagonal and 0's elsewhere:

Mt =
[ 0 0 … 0 0 ]
[ 1 0 … 0 0 ]
[ 0 1 … 0 0 ]
[ …         ]
[ 0 0 … 1 0 ]

Proof Since T^k = 0, the minimal polynomial m(x) of T divides x^k; hence m(x) = x^l for some l ≤ k. As T^(k−1) ≠ 0, l > k − 1, hence l = k, so that m(x) = x^k. Let q1(x), q2(x), …, qr(x) be the elementary divisors of T. Since each of them divides the minimal polynomial m(x) = x^k, qi(x) = x^ki for some ki ≤ k, and we can re-arrange the qi(x) in such a way that k1 ≥ k2 ≥ ⋯ ≥ kr. As the LCM of the qi(x) is x^k, we get k1 = k. Now the matrix corresponding to qi(x) = x^ki (its companion matrix) is the ki × ki matrix Mki displayed above.

Thus V has an ordered basis B such that the matrix of T relative to B is the block-diagonal matrix diag(Mk1, Mk2, …, Mkr).

Remark Since the elementary divisors of T above are uniquely determined, the integers k1, k2, …, kr in the above theorem are uniquely determined by T. These integers are called the invariants of the nilpotent linear transformation T.

If T is nilpotent, a subspace M of V, of dimension m, which is invariant under T, is called cyclic with respect to T if:

MT^m = (0) and MT^(m−1) ≠ (0);

there is an element z ∈ M such that z, zT, …, zT^(m−1) form a basis of M.

2.8 The Primary decomposition theorem

The primary decomposition theorem states that decomposition is determined by the minimal

polynomial of α :

Let α:V→V be a linear operator whose minimal polynomial factorises into monic, coprime

polynomials-

mα(t)=p1(t)p2(t)

Then,

V= W1⊕W2

Where the Wi are α-invariant subspaces such that pi is the minimum polynomial of α|Wi

Repeated application of this result (i.e., fully factorising mα into pairwise coprime factors) gives a

more general version: if mα(t)=p1(t)...pk(t) as described, then

V=W1⊕...⊕Wk

With α-invariant Wi with corresponding minimal polynomial pi.


It may now be apparent that diagonalisation is a special case in which each Wi has a minimal

polynomial which consists of a single factor, (t-λi); i.e. if mα=(t-λ1)...(t-λk) for distinct λi then α is

diagonalisable with the λi as the diagonal entries.

Proof of the Primary Decomposition Theorem

The theorem makes two assertions; that we can construct α-invariant subspaces Wi based on the pi;

and that the direct sum of these Wi constructs V.

For the first, a result about invariant subspaces is needed:

Lemma: If α, β:V→V are linear maps such that αβ = βα, then ker β is α-invariant.

Proof: Take w ∈ ker β; we need to show that α(w) is also in ker β. Now β(α(w)) = α(β(w)) by assumption, and β(w) = 0 since w ∈ ker β, so β(α(w)) = α(0) = 0 because α is a linear map. Thus α(w) ∈ ker β, hence ker β is α-invariant.

Given this result, we now take Wi = ker pi(α). Then since pi(α)α = αpi(α), it follows that ker pi(α) = Wi is α-invariant.

We now seek to show that (i) V = W1 + W2, and (ii) W1∩W2 = {0} (that is, V decomposes as a

direct sum of the Wi's.)

Using Euclid's Algorithm for polynomials, since the pi are coprime there are polynomials qi such

that p1(t)q1(t) + p2(t)q2(t) =1.

So for any v ∈V, consider w1=p2(α)q2(α)v and w2=p1(α)q1(α)v. Then v= w1 + w2 by the above

identity. We can confirm that w1∈W1: p1(α)w1=mα(α)q2(α)v = 0. Similarly, w2∈W2.

So we have (i). For (ii), let v ∈ W1∩W2. Then

v = q1(α)p1(α)v + q2(α)p2(α)v = 0. So W1∩W2 = {0}.

Finally, for the claimed minimum polynomials, let mi be the minimal polynomial of α|Wi . We have

that pi(α|Wi)=0, so the degree of pi is at least that of mi. This holds for each i.

However, p1(t)p2(t)=mα(t)=lcm{m1(t), m2(t)} so we obtain

deg p1 + deg p2 = deg mα ≤ deg m1 + deg m2.

It follows that deg pi = deg mi for each i, and given monic pi it must be that mi=pi.

The proof is complete.
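A concrete instance of the theorem (an illustrative sketch, not part of the unit): the matrix A below has minimal polynomial (t − 1)(t − 2), a product of two coprime monic linear factors, so V = ker(A − I) ⊕ ker(A − 2I) and A is diagonalisable with 1 and 2 on the diagonal.

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
W1 = null_space(A - 1 * np.eye(2))     # ker p1(A), p1(t) = t - 1
W2 = null_space(A - 2 * np.eye(2))     # ker p2(A), p2(t) = t - 2

basis = np.hstack([W1, W2])            # columns: a basis adapted to W1 (+) W2
print(np.linalg.matrix_rank(basis))                    # 2: the two subspaces span V
print(np.round(np.linalg.inv(basis) @ A @ basis, 6))   # diag(1, 2)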


2.9 Jordan block and Jordan forms

Let f(x) = x^k + a_(k−1) x^(k−1) + ⋯ + a1 x + a0 be any monic polynomial over a field F. Then the k×k matrix

[ 0 0 0 … 0 −a0      ]
[ 1 0 0 … 0 −a1      ]
[ 0 1 0 … 0 −a2      ]
[ …                  ]
[ 0 0 0 … 1 −a_(k−1) ]

(with 1's immediately below the diagonal, the negatives of the coefficients in the last column, and 0's elsewhere) is called the companion matrix of f(x) and is denoted by C(f(x)).

Example 1. The companion matrix of f(x) = x³ + 3x² + 2x + 1 is

[ 0 0 −1 ]
[ 1 0 −2 ]
[ 0 1 −3 ]

Example 2. Consider the polynomial f(x) = x⁴ + 5x² + 1. We can write it as f(x) = x⁴ + 0·x³ + 5x² + 0·x + 1. Hence its companion matrix is

[ 0 0 0 −1 ]
[ 1 0 0  0 ]
[ 0 1 0 −5 ]
[ 0 0 1  0 ]
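The construction is easy to automate, and the characteristic polynomial of the resulting matrix recovers f(x). A small sketch (not part of the unit), using the same convention as above:

import numpy as np

def companion(coeffs):
    """Companion matrix of x^k + a_{k-1}x^{k-1} + ... + a_0, with coeffs = [a_0, ..., a_{k-1}]."""
    k = len(coeffs)
    C = np.zeros((k, k))
    C[1:, :-1] = np.eye(k - 1)           # 1's immediately below the diagonal
    C[:, -1] = [-a for a in coeffs]      # last column: -a_0, ..., -a_{k-1}
    return C

C = companion([1, 2, 3])                 # f(x) = x^3 + 3x^2 + 2x + 1
print(C)
print(np.poly(C))                        # [1. 3. 2. 1.]: coefficients of f(x)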

Theorem Let f(x) = x^m + a_(m−1) x^(m−1) + ⋯ + a1 x + a0 be any monic polynomial over a field F. Then there exists a linear transformation T on a vector space W of dimension m = deg f(x) such that the minimal polynomial of T is f(x).

Proof Let (v1, v2, …, vm) be an ordered basis of W. Define a linear transformation T on W by

T(v1) = v2, T(v2) = v3, …, T(v(m−1)) = vm, T(vm) = −a0 v1 − a1 v2 − ⋯ − a_(m−1) vm.

Now {v1, T(v1), T²(v1), …, T^(m−1)(v1)} = {v1, v2, …, vm} is a linearly independent set, so the degree of the minimal polynomial of v1 relative to T is at least m, and hence exactly m, as dim W = m (the degree of the minimal polynomial of a vector, or of a linear transformation, cannot exceed the dimension of the underlying vector space). Now the defining relations of T give

T^m(v1) = T(vm) = −a0 v1 − a1 T(v1) − a2 T²(v1) − ⋯ − a_(m−1) T^(m−1)(v1),

i.e.

(T^m + a_(m−1) T^(m−1) + ⋯ + a1 T + a0 I) v1 = 0.

Hence f(x) = x^m + a_(m−1) x^(m−1) + ⋯ + a1 x + a0 is the minimal polynomial of v1 relative to T, and thus f(x) is the minimal polynomial of T.

Theorem If the minimal polynomial f(x) of a linear transformation T on V is of degree equal to the dimension of V, then V has an ordered basis such that the matrix of T relative to that basis is C(f(x)).

Proof Let f(x) = x^k + a_(k−1) x^(k−1) + ⋯ + a1 x + a0 be the minimal polynomial of T ∈ A(V), where dim V = k. There exists a vector v1 (≠ 0) ∈ V such that f(x) is the minimal polynomial of v1 relative to T. Hence {v1, T(v1), T²(v1), …, T^(k−1)(v1)} is a basis of V.

Put v2 = T(v1), v3 = T(v2) = T²(v1), …, vk = T(v(k−1)) = T^(k−1)(v1).

Now (v1, v2, …, vk) is an ordered basis of V such that T(v1) = v2, T(v2) = v3, …, T(v(k−1)) = vk and

T(vk) = T^k(v1) = −a0 v1 − a1 T(v1) − ⋯ − a_(k−1) T^(k−1)(v1),   as f(T)(v1) = 0.

This gives T(vk) = −a0 v1 − a1 v2 − ⋯ − a_(k−1) vk. The matrix of T relative to the ordered basis (v1, v2, …, vk) is therefore

[ 0 0 … 0 −a0      ]
[ 1 0 … 0 −a1      ]
[ 0 1 … 0 −a2      ]
[ …                ]
[ 0 0 … 1 −a_(k−1) ]

= C(f(x)).

Corollary Let A be an n×n matrix over a field F. If the minimal polynomial f(x) of A is of degree n, then there exists a non-singular matrix P over F such that P⁻¹AP = C(f(x)).

Proof Let V be an n-dimensional vector space over F and let B = (v1, v2, …, vn) be an ordered basis of V. Let T be a linear transformation on V whose matrix relative to B is A. Then the minimal polynomial of T is also f(x); since deg f(x) = n, there exists an ordered basis B′ = (w1, w2, …, wn) of V such that the matrix of T relative to B′ is A′ = C(f(x)). If P is the matrix of (w1, …, wn) relative to (v1, …, vn), then A′ = P⁻¹AP. Hence P⁻¹AP = C(f(x)).

Let T be a cyclic linear transformation on V and let f(x) = x^n + a_(n−1) x^(n−1) + ⋯ + a1 x + a0 be its minimal polynomial. Then the companion matrix of f(x), viz.

[ 0 0 … 0 −a0      ]
[ 1 0 … 0 −a1      ]
[ …                ]
[ 0 0 … 1 −a_(n−1) ]

is called the Jordan matrix of T.

Theorem Let f(x) = (q(x))^k be a monic polynomial over F of degree n, with q(x) a monic polynomial of degree m (i.e. n = mk). If T is a cyclic linear transformation of V such that f(x) is the minimal polynomial of T, then, relative to some ordered basis of V, the matrix of T has the block form

[ C  O  O  …  O  O ]
[ N  C  O  …  O  O ]
[ O  N  C  …  O  O ]
[ …                ]
[ O  O  O  …  N  C ]

where C = C(q(x)) is the m×m companion matrix of q(x) and N is the m×m matrix whose only non-zero entry is a 1 in the first row and last column.

Proof Since T is cyclic, V = F[T]v for some v ∈ V, and f(x) is the minimal polynomial of v relative to T. Write q(x) = x^m + a_(m−1) x^(m−1) + ⋯ + a1 x + a0, and define, for 0 ≤ i ≤ k−1 and 1 ≤ j ≤ m,

v(im+j) = T^(j−1) [q(T)]^i v   …(1)

Each v(im+j) is of the form g(T)v with g(x) = x^(j−1)[q(x)]^i a non-zero polynomial of degree im + j − 1 < n, and these degrees are all different. Hence a non-trivial linear relation among the v's would produce a non-zero polynomial g(x) of degree < n with g(T)v = 0, contradicting the fact that the minimal polynomial of v relative to T is f(x), of degree n. So the n vectors v1, v2, …, v(km) are linearly independent and form an ordered basis of V.

Using T^m = q(T) − a_(m−1) T^(m−1) − ⋯ − a1 T − a0 I and [q(T)]^k v = f(T)v = 0, the definitions in (1) yield, for each i = 0, 1, …, k−1,

T(v(im+j)) = v(im+j+1)   for j = 1, 2, …, m−1,

T(v(im+m)) = T^m [q(T)]^i v = −a0 v(im+1) − a1 v(im+2) − ⋯ − a_(m−1) v(im+m) + v((i+1)m+1),

where the final term v((i+1)m+1) = [q(T)]^(i+1) v is to be omitted when i = k−1 (since [q(T)]^k v = 0).

Recalling that if T(vi) = Σj bji vj then the i-th column of the matrix of T records the coefficients bji, these relations show that the matrix of T relative to (v1, v2, …, v(km)) consists of the companion matrix C = C(q(x)) repeated k times down the diagonal, together with the single extra coefficient 1 of v((i+1)m+1) in T(v(im+m)), which produces the block N (a 1 in its first row and last column) immediately below each diagonal block except the last. This is precisely the displayed block matrix, and the theorem is proved.

Jordan Matrix

Let q(x) be a monic polynomial of degree m over a field F. For any positive integer k, the km×km matrix

[ C  O  …  O  O ]
[ N  C  …  O  O ]
[ O  N  …  O  O ]
[ …             ]
[ O  O  …  N  C ]

where C is the companion matrix of q(x) and N is the m×m matrix whose only non-zero entry is a 1 in the first row and last column, is called the Jordan matrix of f(x) = (q(x))^k relative to q(x).

Example Consider f(x) = (x² + 1)³, which is of degree 6 over ℚ. Here q(x) = x² + 1. The companion matrix of q(x) is

C = [ 0 −1 ]
    [ 1  0 ]

If T is a linear transformation whose minimal polynomial is f(x) = (x² + 1)³, then there exists an ordered basis B = (u1, u2, u3, u4, u5, u6) of ℚ⁶ such that the matrix of T relative to this basis is the companion matrix of f(x) = (x² + 1)³ = x⁶ + 3x⁴ + 3x² + 1, i.e.

A = [ 0 0 0 0 0 −1 ]
    [ 1 0 0 0 0  0 ]
    [ 0 1 0 0 0 −3 ]
    [ 0 0 1 0 0  0 ]
    [ 0 0 0 1 0 −3 ]
    [ 0 0 0 0 1  0 ]

At the same time, there exists an ordered basis B′ = (v1, v2, v3, v4, v5, v6) of ℚ⁶ such that the matrix of T relative to B′ is

A′ = [ C O O ]        where N = [ 0 1 ]
     [ N C O ]                  [ 0 0 ]
     [ O N C ]

that is,

A′ = [ 0 −1 0  0 0  0 ]
     [ 1  0 0  0 0  0 ]
     [ 0  1 0 −1 0  0 ]
     [ 0  0 1  0 0  0 ]
     [ 0  0 0  1 0 −1 ]
     [ 0  0 0  0 1  0 ]

So if P is the matrix of B′ relative to B, then A′ = P⁻¹AP. Hence A′ and A are similar matrices; A′ is the Jordan matrix of f(x) relative to x² + 1.

Example Let a ∈ F and f(x) = (x − a)^n. Expanding,

f(x) = x^n − nC1 a x^(n−1) + nC2 a² x^(n−2) − ⋯ + (−1)^n a^n,   where nCr = n(n−1)⋯(n−r+1)/r!,

so the companion matrix of f(x) is the n×n matrix

A = [ 0 0 … 0 −(−a)^n           ]
    [ 1 0 … 0 −nC1 (−a)^(n−1)   ]
    [ 0 1 … 0 −nC2 (−a)^(n−2)   ]
    [ …                         ]
    [ 0 0 … 1 −nC(n−1) (−a)     ]

At the same time the companion matrix of q(x) = x − a is the 1×1 matrix (a). If N = (1), then the n×n Jordan matrix of (x − a)^n relative to x − a is

A′ = [ a 0 0 … 0 0 ]
     [ 1 a 0 … 0 0 ]
     [ 0 1 a … 0 0 ]
     [ …           ]
     [ 0 0 0 … 1 a ]

Since A and A′ are matrices of the same linear transformation on F^(n), A and A′ are similar, i.e. there exists a non-singular matrix P ∈ Mn(F) such that A′ = P⁻¹AP.

Remark Given a ∈ F, the n×n matrix

A = [ a 0 … 0 0 ]
    [ 1 a … 0 0 ]
    [ …         ]
    [ 0 0 … 1 a ]

has f(x) = (x − a)^n as its minimal polynomial. Since every eigenvalue of A is a root of its minimal polynomial and f(x) has a as its only root, we conclude that a is the only eigenvalue of A. This matrix A is called a Jordan Block.

Thus we can immediately say that the matrices

[ 2 0 ]      [ −1  0  0 ]      [ 5 0 0 0 ]
[ 1 2 ]      [  1 −1  0 ]      [ 1 5 0 0 ]
             [  0  1 −1 ]      [ 0 1 5 0 ]
                               [ 0 0 1 5 ]

have respectively 2, −1, 5 as their only eigenvalues. So if we can show that a given matrix is similar to a Jordan Block, it is very easy to determine its eigenvalues.
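Computer algebra systems compute Jordan canonical forms directly. The sketch below (assuming SymPy; not part of the unit) does so for a small matrix; note that SymPy places the 1's of a Jordan block above the diagonal, the transpose of the convention used in this unit, which changes nothing about the eigenvalues read off the diagonal.

from sympy import Matrix

A = Matrix([[3, 1],
            [-1, 1]])            # characteristic polynomial (x - 2)^2
P, J = A.jordan_form()           # A = P * J * P^{-1}
print(J)                         # Matrix([[2, 1], [0, 2]]): one block, eigenvalue 2
print(P * J * P.inv() == A)      # True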

Theorem Let T be a cyclic linear transformation on a vector space V and let f(x) = [f1(x)]^a1 [f2(x)]^a2 ⋯ [ft(x)]^at be the minimal polynomial of T, where the fi(x) are distinct irreducible monic polynomials over F and ai ≥ 1. Then there exists an ordered basis B of V such that the matrix of T relative to B is the block-diagonal matrix

[ J1          ]
[    J2       ]
[       ⋱     ]
[          Jt ]

where each Ji (i = 1, 2, …, t) is the Jordan matrix of [fi(x)]^ai relative to fi(x).

Proof By the earlier theorem on cyclic linear transformations we can write

V = V1 ⊕ V2 ⊕ ⋯ ⊕ Vt   (1)

for some T-invariant subspaces Vi of V such that Vi is cyclic relative to Ti, the restriction of T to Vi, and [fi(x)]^ai is the minimal polynomial of Ti. We can find an ordered basis Bi = (w1^i, w2^i, …, w(ni)^i) of Vi such that the matrix of Ti relative to Bi is the Jordan matrix Ji of [fi(x)]^ai relative to fi(x). Now

B = (w1^1, …, w(n1)^1, w1^2, …, w(n2)^2, …, w1^t, …, w(nt)^t)

is an ordered basis of V. Let Ji = (b(lm)^i). Then

T(wm^i) = Ti(wm^i) = Σ(l=1 to ni) b(lm)^i wl^i   (2)

Observe that T(wm^i) is expressible as a linear combination of w1^i, w2^i, …, w(ni)^i alone, and the coefficients b(lm)^i come from Ji. Keeping this in mind, one can easily convince oneself that the matrix of T relative to B is

diag(J1, J2, …, Jt).

The following corollary is immediate.

Corollary Let A ∈ Mn(F) be such that its minimal polynomial f(x) is of degree n. If f(x) = [f1(x)]^a1 ⋯ [ft(x)]^at, where the fi(x) are distinct irreducible monic polynomials over F and ai ≥ 1, then there exists a non-singular matrix P such that

P⁻¹AP = diag(J1, J2, …, Jt),

where Ji is the Jordan matrix of [fi(x)]^ai relative to fi(x). In particular, if every fi(x) = x − λi is linear, then Ji is the ai×ai Jordan Block

[ λi 0  …  0  0  ]
[ 1  λi …  0  0  ]
[ …              ]
[ 0  0  …  1  λi ]

Let T be a cyclic linear transformation on a vector space V over F and let f(x) = [f1(x)]^a1 ⋯ [ft(x)]^at, where the fi(x) are distinct irreducible monic polynomials over F and ai ≥ 1, be the minimal polynomial of T. Then the matrix

diag(J1, J2, …, Jt),

where Ji is the Jordan matrix of [fi(x)]^ai relative to fi(x), is called the Classical Canonical matrix or Jordan Canonical form of T; the above matrix is also called the Classical Canonical matrix of the polynomial f(x).

Definition Let T be a linear transformation on a vector space V and let V1, V2, …, Vt be T-invariant subspaces of V such that V = V1 ⊕ V2 ⊕ ⋯ ⊕ Vt. If Ti is the restriction of T to Vi, then T is called the direct sum of T1, T2, …, Tt and we write T = T1 ⊕ T2 ⊕ ⋯ ⊕ Tt.

If Ai is the matrix of Ti relative to some ordered basis Bi of Vi and B is the ordered basis of V which is the union of all the Bi (ordered so that the members of B1 come first, those of B2 next, those of B3 next, and so on), then it is easily seen that the matrix A of T relative to B is

diag(A1, A2, …, At).

We say that A is the direct sum of A1, A2, …, At and write A = A1 ⊕ A2 ⊕ ⋯ ⊕ At.

Example Let T be a linear transformation on ℚ³ whose minimal polynomial is f(x) = (x² + 1)(x − 1). Here x² + 1 and x − 1 are the distinct monic irreducible factors of f(x). The respective companion matrices of these polynomials are

J1 = [ 0 −1 ]     and     J2 = (1).
     [ 1  0 ]

So the Classical Canonical matrix of T is

[ J1 O  ]   =   [ 0 −1 0 ]
[ O  J2 ]       [ 1  0 0 ]
                [ 0  0 1 ]

Example Let T be a linear transformation on C^5 whose minimal polynomial is f(x) = (x − 1)^2 (x − 2)(x − 3)^2. Since deg f(x) = 5 = dim C^5, T is a cyclic linear transformation. The respective Jordan matrices of (x − 1)^2, (x − 2) and (x − 3)^2 are

J_1 = [ 1 0 ],   J_2 = ( 2 ),   J_3 = [ 3 0 ].
      [ 1 1 ]                         [ 1 3 ]

So the Classical Canonical matrix of T is

[ J_1  O   O  ]       [ 1 0 0 0 0 ]
[ O   J_2  O  ]   =   [ 1 1 0 0 0 ]
[ O    O  J_3 ]       [ 0 0 2 0 0 ]
                      [ 0 0 0 3 0 ]
                      [ 0 0 0 1 3 ]

Notice that 1, 2, 3, appearing on the diagonal, are the eigenvalues of T.
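The following small script (not part of the original text; sympy assumed available) verifies the claim for the 5 × 5 matrix above: its eigenvalues are 1, 2, 3, and the stated minimal polynomial annihilates it.

    from sympy import Matrix, eye

    A = Matrix([[1, 0, 0, 0, 0],
                [1, 1, 0, 0, 0],
                [0, 0, 2, 0, 0],
                [0, 0, 0, 3, 0],
                [0, 0, 0, 1, 3]])

    print(A.eigenvals())   # {1: 2, 2: 1, 3: 2}
    # (x - 1)^2 (x - 2) (x - 3)^2 evaluated at A gives the zero matrix:
    print((A - eye(5))**2 * (A - 2*eye(5)) * (A - 3*eye(5))**2 == Matrix.zeros(5, 5))   # True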

Example Consider the linear transformation T on Q^2 defined by T(e_1) = e_2, T(e_2) = 3e_1 + 2e_2, where (e_1, e_2) is the standard basis of Q^2. Since {e_1, T(e_1)} = {e_1, e_2} is linearly independent, the minimal polynomial of e_1 relative to T is at least of degree 2; hence it is exactly 2, as it cannot exceed 2 (= dim Q^2). So T is a cyclic linear transformation. Now the matrix of T relative to (e_1, e_2) is

A = [ 0 3 ]          ...(1)
    [ 1 2 ]

Since A^2 − 2A − 3I′ = 0 and T is cyclic, we get that f(x) = x^2 − 2x − 3 = (x + 1)(x − 3) is the minimal polynomial of T. Now x + 1 and x − 3 are the distinct monic irreducible factors of f(x); their respective Jordan matrices (here just the companion matrices) are (−1) and (3). Hence the Classical Canonical matrix of T is

[ −1 0 ]
[  0 3 ]

Let us determine an ordered basis of Q^2 relative to which the matrix of T is the above classical canonical matrix. Define

u_1 = (T − 3I)(e_1) = T(e_1) − 3e_1 = e_2 − 3e_1     ...(2)

u_2 = (T + I)(e_1) = T(e_1) + e_1 = e_2 + e_1     ...(3)

u_1, u_2 are linearly independent, so B = (u_1, u_2) is an ordered basis of Q^2.

Now T(u_1) = T(e_2) − 3T(e_1) = (3e_1 + 2e_2) − 3e_2 = 3e_1 − e_2 = −(e_2 − 3e_1), i.e., T(u_1) = −u_1.

Similarly T(u_2) = 3u_2. Hence the matrix of T relative to (u_1, u_2) is

A′ = [ −1 0 ]
     [  0 3 ]

From (2) and (3) it is clear that the matrix of (u_1, u_2) relative to (e_1, e_2) is

P = [ −3 1 ]
    [  1 1 ]

It can be easily verified that A′ = P^{-1}AP.
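The change of basis in this example can be checked mechanically; the following sketch (not in the original text, sympy assumed) verifies the minimal polynomial and that P^{-1}AP is the classical canonical matrix.

    from sympy import Matrix

    A = Matrix([[0, 3],
                [1, 2]])              # matrix of T in the standard basis (e1, e2)
    P = Matrix([[-3, 1],
                [ 1, 1]])             # columns are u1 = -3e1 + e2 and u2 = e1 + e2

    print(A**2 - 2*A - 3*Matrix.eye(2) == Matrix.zeros(2, 2))   # True: f(x) = x^2 - 2x - 3
    print(P.inv() * A * P)            # Matrix([[-1, 0], [0, 3]]) -- the canonical form A'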

2.10 Cyclic modules

Let R be a ring, and let M be an abelian group. Then M is called a left R-module if there exists a scalar multiplication

μ : R × M → M, denoted by μ(r, m) = rm, for all r ∈ R and all m ∈ M,

such that for all r, r_1, r_2 ∈ R and all m, m_1, m_2 ∈ M,

(i) r(m_1 + m_2) = rm_1 + rm_2

(ii) (r_1 + r_2)m = r_1m + r_2m

(iii) r_1(r_2m) = (r_1r_2)m

(iv) 1m = m.

Example 1 (Vector spaces over F are F-modules) If V is a vector space over a field F, then it is an

abelian group under addition of vectors. The familiar rules for scalar multiplication are precisely

those needed to show that V is a module over the ring F.

Example 2 (Abelian groups are Z-modules) If A is an abelian group with its operation denoted additively, then for any element x ∈ A and any positive integer n, we have defined nx to be the sum of x with itself n times. This is extended to negative integers by taking sums of −x. With this familiar multiplication, it is easy to check that A becomes a Z-module.

Another way to show that A is a Z-module is to define a ring homomorphism φ : Z → End(A) by letting φ(n) = n·1, for all n ∈ Z. This is the familiar mapping that is used to determine the characteristic of the ring End(A). The action of Z on A determined by this mapping is the same one used in the previous paragraph.

If M is a left R-module, then there is an obvious definition of a submodule of M: any subset of M

that is a left R-module under the operations induced from M. The subset {0} is called the trivial

submodule, and is denoted by (0). The module M is a submodule of itself, an improper

submodule. It can be shown that if M is a left R-module, then a subset N ⊆ M is a submodule if and only if it is nonempty, closed under sums, and closed under multiplication by elements of R.

If N is a submodule of RM, then we can form the factor group M/N. There is a natural multiplication defined on the cosets of N: for any r ∈ R and any x ∈ M, let r(x+N) = rx+N. If x+N = y+N, then x−y ∈ N, and so rx−ry = r(x−y) ∈ N, and this shows that scalar multiplication is well-defined. It follows that M/N is a left R-module.

Any submodule of R, regarded as a left module over itself, is called a left ideal of R; a submodule of R regarded as a right module over itself is called a right ideal of R. It is clear that a subset of R is an ideal if and only if it is both a left ideal and a right ideal of R.

For any element m of the module M, we can construct the submodule

Rm = { x ∈ M | x = rm for some r ∈ R }.

This is the smallest submodule of M that contains m, so it is called the cyclic submodule generated by m. More generally, if X is any subset of M, then the intersection of all submodules of M which contain X is the smallest submodule of M which contains X. We will use the notation <X> for this submodule, and call it the submodule generated by X. We must have Rx ⊆ <X> for all x ∈ X, and then it is not difficult to show that

<X> = Σ_{x ∈ X} Rx.

The left R-module M is said to be finitely generated if there exist m_1, m_2, ..., m_n ∈ M such that

M = Σ_{i=1}^{n} Rm_i.

In this case, we say that { m_1, m_2, ..., m_n } is a set of generators for M. The module M is called cyclic if there exists m ∈ M such that M = Rm.
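As a tiny concrete instance (not part of the original text), the Z-module Z/6Z is cyclic: it is generated by the class of 1, and also by the class of 5, since gcd(5, 6) = 1.

    n = 6
    generated_by_1 = {(r * 1) % n for r in range(n)}   # Z·1 inside Z/6Z
    generated_by_5 = {(r * 5) % n for r in range(n)}   # Z·5 inside Z/6Z
    print(generated_by_1 == set(range(n)), generated_by_5 == set(range(n)))   # True True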

Let M and N be left R-modules. A function f : M → N is called an R-homomorphism if

f(m_1 + m_2) = f(m_1) + f(m_2) and f(rm) = rf(m)

for all r ∈ R and all m, m_1, m_2 ∈ M. The set of all R-homomorphisms from M into N is denoted by HomR(M,N) or Hom(RM,RN).

For an R-homomorphism f ∈ HomR(M,N) we define its kernel as ker(f) = { m ∈ M | f(m) = 0 }.

We say that f is an isomorphism if it is both one-to-one and onto. Elements of HomR(M,M) are

called endomorphisms, and isomorphisms in HomR(M,M) are called automorphisms. The set of

endomorphisms of RM will be denoted by EndR(M).

A submodule N of the left R-module M is called a maximal submodule if N ≠ M and for any submodule K with N ⊆ K ⊆ M, either N = K or K = M. Consistent with this terminology, a left ideal A of R is called a maximal left ideal if A ≠ R and for any left ideal B with A ⊆ B ⊆ R, either A = B or B = R. Thus A is maximal precisely when it is a maximal element in the set of proper left ideals of R, ordered by inclusion. It is an immediate consequence (obtained by applying the proposition to the set X = {1}) that every proper left ideal of the ring R is contained in a maximal left ideal. Furthermore, any left ideal maximal with respect to not including 1 is in fact a maximal left ideal.

2.11 Simple modules

Let R be a ring, and let M be a left R-module. For any element m ∈ M, the left ideal

Ann(m) = { r ∈ R | rm = 0 } is called the annihilator of m.

The ideal Ann(M) = { r ∈ R | rm = 0 for all m ∈ M } is called the annihilator of M.

The module M is called faithful if Ann(M) = (0).

A nonzero module RM is called simple (or irreducible) if its only submodules are (0) and M.

We first note that a submodule N ⊆ M is maximal if and only if M/N is a simple module. A submodule N ⊆ M is called a minimal submodule if N ≠ (0) and for any submodule K with (0) ⊆ K ⊆ N, either N = K or K = (0). With this terminology, a submodule N is minimal if and only if it is simple when considered as a module in its own right.

The following conditions hold for a left R-module M:

(a) The module M is simple if and only if Rm = M, for each nonzero m ∈ M.

(b) If M is simple, then Ann(m) is a maximal left ideal, for each nonzero m ∈ M.

(c) If M is simple, then it has the structure of a left vector space over a division ring.
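For example (an illustration added here, not in the original text), the Z-module Z/5Z is simple: by condition (a), the cyclic submodule generated by any nonzero element is the whole module.

    p = 5
    for m in range(1, p):
        submodule = {(r * m) % p for r in range(p)}   # the cyclic submodule Zm in Z/5Z
        assert submodule == set(range(p))             # Rm = M for every nonzero m
    print("Z/5Z is a simple Z-module")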

2.12 Semi- Simple modules

Let M be a left R-module. The sum of all minimal submodules of M is called the socle of M, and is

denoted by Soc(M). The module M is called semi-simple if it can be expressed as a sum of minimal

submodules.

A semisimple module RM behaves like a vector space in that any submodule splits off, or equivalently, any submodule N has a complement N′ such that N + N′ = M and N ∩ N′ = 0. Any submodule of a semisimple module has a complement that is a direct sum of minimal submodules.

The following conditions are equivalent for a module RM:

(1) M is semisimple;

(2) Soc(M) = M;

(3) M is completely reducible;

(4) M is isomorphic to a direct sum of simple modules.

2.13 Schur’s Lemma

Schur's lemma is an elementary but extremely useful statement in representation theory; it is an elementary observation about irreducible modules which is nonetheless noteworthy because of its profound applications. In the group case it says that if M and N are two finite-dimensional irreducible representations of a group G and φ is a linear map from M to N that commutes with the action of the group, then either φ is invertible or φ = 0. An important special case occurs when M = N and φ is a self-map. The lemma is named after Issai Schur, who used it to prove the Schur orthogonality relations and develop the basics of the representation theory of finite groups.

Lemma:

1. Suppose M and N are simple R-modules, and φ : M → N is a homomorphism. Then φ is either the zero homomorphism or an isomorphism.

2. Suppose M is a simple R-module. Then EndR(M) is a division ring.

Proof.

1. Suppose φ is non-zero. Then we have to show that φ is an isomorphism, i.e. φ is both injective and surjective. We know that ker(φ) is a submodule of M. It can't be the whole of M, because φ is non-zero. So (since M is simple) ker(φ) must be 0; in other words, φ is injective. Similarly, im(φ) is a submodule of N, and it is non-zero because φ is non-zero; so (since N is simple) im(φ) = N, i.e., φ is surjective. Hence φ is an isomorphism.

2. We know that EndR(M) is a ring, so it suffices to show that every non-zero element of EndR(M) has an inverse. But if φ is a non-zero element of EndR(M), then φ is a non-zero homomorphism from M to M, so by the first part of Schur's Lemma φ is an isomorphism; hence φ has an inverse φ^{-1}, which is again an isomorphism from M to M and thus an element of EndR(M).
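A concrete instance (not part of the original text) of part 2 of the lemma: for the simple Z-module M = Z/5Z, every endomorphism is multiplication by some residue c, and every nonzero such map is invertible, so EndZ(M) is a division ring (in fact the field Z/5Z).

    p = 5
    for c in range(1, p):                        # the nonzero endomorphisms x -> c*x (mod p)
        image = {(c * x) % p for x in range(p)}
        assert image == set(range(p))            # each is bijective, hence an isomorphism
        d = next(d for d in range(1, p) if (c * d) % p == 1)
        # d gives the inverse endomorphism x -> d*x (mod p)
    print("every nonzero endomorphism of Z/5Z is invertible")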

One of the most important consequences of Schur's lemma is the following:

Corollary Let V be a finite-dimensional, irreducible G-module over an algebraically closed field. Then every G-module homomorphism f : V → V is a scalar multiplication.

Proof Since the ground field is algebraically closed, the linear transformation f : V → V has an eigenvalue; call it λ. By definition, f − λ·1 is not invertible, and hence it is equal to zero by Schur's lemma. In other words, f = λ·1, a scalar.

2.14 Free modules

The module M is called a free module if there exists a subset X ⊆ M such that each element m ∈ M can be expressed uniquely as a finite sum m = Σ a_i x_i, with a_1, ..., a_n ∈ R and x_1, ..., x_n ∈ X.

We note that if N is a submodule of M such that N and M/N are finitely generated, then M is

finitely generated. In fact, if x1, x2, . . . , xn generate N and y1+N, y2+N, . . . , ym+N generate M/N,

then x1, . . . , xn, y1, . . . , ym generate M.

The module RR is the prototype of a free module, with generating set {1}. If RM is a module, and X ⊆ M, we say that the set X is linearly independent if Σ a_i x_i = 0 implies a_i = 0 for i = 1, ..., n, for any distinct x_1, x_2, ..., x_n ∈ X and any a_1, a_2, ..., a_n ∈ R. Then a linearly independent generating set for M is called a basis for M, and so M is a free module if and only if it has a basis.

Examples:

1. For any positive integer n, R^n is a free R-module.

2. The set M_{m×n}(R) of m × n matrices over R is a free R-module with basis {E_ij : i = 1, ..., m, j = 1, ..., n}.

3. The polynomial ring R[X] is a free R-module with basis 1, X, X^2, ....

4. The zero module is free with the empty set as basis.

2.15 Unit Summary/ Things to Remember

1. If V and W are finite-dimensional, and one has chosen bases in those spaces, then every linear map from V to W can be represented as a matrix; this is useful because it allows concrete calculations. Conversely, matrices yield examples of linear maps: if A is a real m-by-n matrix, then the rule f(x) = Ax describes a linear map R^n → R^m.

2. Similarity of Linear Transformations: A mapping that associates with each linear transformation T on a vector space the linear transformation R^{-1}TR that results when the coordinates of the space are subjected to a nonsingular linear transformation R.

3. An invariant subspace of a linear mapping T : V → V from some vector space V to itself is a

subspace W of V such that T(W) is contained in W.

4. In linear algebra, a nilpotent matrix is a square matrix N such that N^k = 0 for some positive integer k. The smallest such k is sometimes called the degree of N.

5. A linear transformation T on a vector space V is said to be nilpotent of index k ≥ 1 if T^k = 0 but T^{k−1} ≠ 0.

6. A Jordan block over a ring R (whose identities are the zero 0 and the one 1) is a matrix which is composed of 0 elements everywhere except for the diagonal, which is filled with a fixed element λ, and the entries immediately adjacent to the main diagonal, which are the unity of the ring.

7. Primary Decomposition Theorem: Let m(x) be the minimal polynomial of T : V → V, dim V < ∞, such that m(x) = m_1(x)m_2(x), where gcd(m_1, m_2) = 1. Then there exist T-invariant subspaces V_1, V_2 such that V = V_1 ⊕ V_2.

8. A cyclic module is a module over a ring which is generated by one element. A left R-module M is called cyclic if M can be generated by a single element, i.e. M = (x) = Rx for some x in M. Similarly, a right R-module N is cyclic if N = yR for some y in N.

9. The simple modules over a ring R are the (left or right) modules over R which have no non-zero proper submodules. Equivalently, a module M is simple if and only if every cyclic submodule generated by a non-zero element of M equals M. The simple modules are precisely the modules of length 1; this is a reformulation of the definition. They are also called irreducible modules.

10. A module over a (not necessarily commutative) ring with unity is said to be semisimple (or completely reducible) if it is the direct sum of simple (irreducible) submodules.

11. Schur's Lemma: If M and N are two simple modules over a ring R, then any homomorphism f : M → N of R-modules is either invertible or zero. In particular, the endomorphism ring of a simple module is a division ring.

12. A free module is a module with a free basis: a linearly independent generating set.

2.16 Assignments/ Activities

1. Find all possible Jordan canonical forms of a matrix with characteristic polynomial p(x) over C in each of the following cases:

(a) p(x) = (x − 1)^2 (x + 1).

(b) p(x) = (x − 2)^3 (x − 5)^2.

(c) p(x) = (x − 1)(x − 2)(x^2 + 1).

2. Show that every sub module of the quotient module M/N can be expressed as (L+N)/N for some

sub module L of M.

3. In the matrix ring M_n(R), let M be the submodule generated by E_11, the matrix with 1 in row 1, column 1, and 0's elsewhere. Thus M = { AE_11 : A ∈ M_n(R) }. Show that M consists of all matrices whose entries are zero except perhaps in column 1.

4. Continuing Problem 3, show that the annihilator of E_11 consists of all matrices whose first column is zero, but the annihilator of M is {0}.

5. If I is an ideal of the ring R, show that R/I is a cyclic R-module.

6. Let M be an R-module, and let I be an ideal of R. We wish to make M into an R/I-module via (r + I)m = rm, r ∈ R, m ∈ M. When will this be legal?

7. Assuming legality in Problem 6, let M_1 be the resulting R/I-module, and note that as sets, M_1 = M. Let N be a subset of M and consider the following two statements:

(a) N is an R-submodule of M;

(b) N is an R/I-submodule of M_1.

Can one of these statements be true and the other false?

2.17 Check Your Progress

1. Show that two transformations in A(V) are similar if and only if they have same families of

elementary divisors.

2. Let T ∈ A(V) have f(x) = [p_1(x)]^{m_1} [p_2(x)]^{m_2} ... [p_t(x)]^{m_t} as its minimal polynomial, where the p_i(x)'s are distinct monic irreducible polynomials and m_i ≥ 1. Prove that the matrix of T in the Jordan Normal Form can be put in the form

A = [ R_1             ]
    [      R_2        ]
    [           ...   ]
    [              R_t ]

3. If T ∈ A(V) has all its characteristic roots in F, show that the matrix of T in Jordan Normal Form is triangular in the sense that all its entries below the diagonal are zero.

4. Give a proof, using matrix computation, that if A is a triangular n × n matrix with entries λ_1, λ_2, ..., λ_n on the diagonal, then (A − λ_1I′)(A − λ_2I′) ... (A − λ_nI′) = 0, where I′ is the n × n identity matrix.

5. Find the Jordan Normal Form of the matrix representing T in each of the following:

(i) T ∈ A(Q^2) having elementary divisors x − 2, x − 3

(ii) T ∈ A(Q^2) having elementary divisor x^2 − 2x + 1

(iii) T ∈ A(Q^3) having minimal polynomial x^3 + x + 1

(iv) T ∈ A(R^4) having elementary divisors x^2 − x, x^2 + 1

(v) T ∈ A(C^6) having minimal polynomial x^6 − 1

2.18 Points for discussion / clarification

At the end of the unit you may like to discuss or seek clarification on some points. If so,

mention the same.

1. Points for discussion

________________________________________________________________________

________________________________________________________________________

-

________________________________________________________________________

2. Points for clarification

________________________________________________________________________

________________________________________________________________________

-

________________________________________________________________________

2.19 References

6. I. N. Herstein, Topics in Algebra, Second Edition, John Wiley and Sons, 2006.

7. David C. Lay, Linear Algebra and Its Applications (3rd ed.), Addison Wesley, 2005.

8. John Rose, A Course on Group Theory, New York: Dover, 1994.

9. P. B. Bhattacharya, S. K. Jain and S. R. Nagpaul, Basic Abstract Algebra (2nd Edition), Cambridge University Press, Indian Edition, 1997.

10. John B. Fraleigh, A First Course in Abstract Algebra, 7th Edition, Pearson Education, 2004.

11. Steven J. Leon, Linear Algebra with Applications (7th ed.), Pearson Prentice Hall, 2006.

UNIT-3

Field Theory

STRUCTURE PAGE NO.

3.1 Introduction 01

3.2 Objective 01

3.3 Extension Field 2-4

3.4 Algebraic and Transcendental Extension 4-5

3.5 Inseparable and Separable Extension 5-6

3.6 Normal Extension 06

3.7 Perfect & Finite Field 6-8

3.8 Primitive Element 09

3.9 Algebraic Closed Field 10-15

3.10 Automorphism of Field 10-15

3.11 Galois theory 15-16

3.12 Fundamental theorem of Galois theory 17-19

3.13 Solvability and Insolvability of polynomials by radicals 9-26

3.14 Unit Summary/ Things to Remember 26-27

3.15 Assignments/ Activities 27-28

3.16 Check Your Progress 28

3.17 Points for Discussion/ Clarification 29

3.18 References 29

3.1 Introduction

Fields are important objects of study in algebra since they provide a useful generalization of many

number systems, such as the rational numbers, real numbers, and complex numbers. In particular,

the usual rules of associativity, commutativity and distributivity hold.

When abstract algebra was first being developed, the definition of a field usually did not include

commutativity of multiplication, and what we today call a field would have been called either a

commutative field or a rational domain. In contemporary usage, a field is always commutative. In

1893, Heinrich M. Weber gave the first clear definition of an abstract field.

In 1910 Ernst Steinitz published the very influential paper Algebraische Theorie der Körper

(German: Algebraic Theory of Fields). In this paper he axiomatically studies the properties of fields

and defines many important field-theoretic concepts like prime field, perfect field and the transcendence degree of a field extension.

Galois is honoured as the first mathematician to link group theory and field theory, and Galois theory is named after him. However, it was Emil Artin who first developed the relationship between groups and fields in great detail during 1928–1942.

The concept of field was used implicitly by Niels Henrik Abel and Évariste Galois in their work on the solvability of equations, in which they showed that there is no general formula expressing in terms of radicals the roots of a polynomial with rational coefficients of degree 5 or higher.

3.2 Objective

After the completion of this unit students should be able to:

get the knowledge of finite fields that are used in number theory, Galois theory and coding

theory

construct many codes as subspaces of vector spaces over finite fields.

know various types of fields and their extensions.

define primitive elements.

define automorphisms of a field extension.

prove the Fundamental Theorem of Galois Theory.

know about rational polynomials and their roots.

3.3 Extension Fields

A field extension of a field F is a pair (σ, k), where k is a field and σ is a monomorphism of F into k.

Suppose E is a field and F is a subfield of E. The injection map i : F → E defined by i(x) = x for all x ∈ F is a monomorphism. Consequently (i, E) is a field extension of F. In such a situation, when i is a trivial type of mapping, we do not mention i and simply say that E is a field extension of F. In the general case of a field extension (σ, K), K has a subfield σ(F) isomorphic to F. We can identify any two isomorphic algebraic systems; here also we shall normally identify F with σ(F) and thus treat F as a subfield of K, and then we shall simply say that K is a field extension of F. Henceforth F will be a field and K will be a field extension of F.

Now, as ax ∈ K for any a ∈ F, x ∈ K, we get an external composition F × K → K given by (a, x) ↦ ax. One can immediately see that the additive group (K, +) becomes a vector space over F relative to the external composition in K with respect to F defined above. Thus K must have a basis and dimension over F.

The dimension of K as a vector space over F is called the degree of K over F. In general [K : F] will denote the degree of K over F.

Example 1: Q[√2] = { a + b√2 : a, b ∈ Q } is a field extension of Q; the subset {1, √2} forms a basis of Q[√2] over Q. Consequently [Q[√2] : Q] = 2.

Example 2: Consider an indeterminate x over a field F. Let K be the field of quotients of F[x]. Then, since for any a_0, a_1, a_2, ..., a_n ∈ F, a_0·1 + a_1x + a_2x^2 + ... + a_nx^n = 0 implies a_i = 0 for all i, it follows that {1, x, x^2, x^3, ..., x^n, ...} is an infinite subset of K which is linearly independent over F. Consequently [K : F] is infinite.

K is said to be a finite or infinite extension of F according as the degree of K over F is finite or infinite.

Thus, in Example 1, Q[√2] is a finite extension of Q and in Example 2, K is an infinite extension of F. The following theorem about degrees of field extensions is of great significance.

Theorem If K is a finite extension of F and L is a finite field extension of K, then L is a finite extension of F and [L : F] = [L : K][K : F].

Proof Let [K : F] = n and [L : K] = m. Then we can find a basis x_1, x_2, ..., x_n of K over F and a basis y_1, y_2, ..., y_m of L over K. Consider the mn elements x_iy_j, i = 1, 2, ..., n; j = 1, 2, ..., m. If we can show that these mn elements of L are linearly independent over F and that they generate L as a vector space over F, it will follow that [L : F] = mn = [L : K][K : F], and the theorem will be proved.

For their linear independence, let Σ_{i,j} a_{ij} x_iy_j = 0 for some a_{ij} ∈ F. That gives

Σ_{j=1}^{m} ( Σ_{i=1}^{n} a_{ij}x_i ) y_j = 0.

Since y_1, y_2, ..., y_m are linearly independent over K and Σ_i a_{ij}x_i ∈ K for every j, we get

Σ_{i=1}^{n} a_{ij}x_i = 0,  j = 1, 2, ..., m.

This gives a_{ij} = 0 for all i, j, as x_1, x_2, ..., x_n are linearly independent over F. Hence the elements x_iy_j are linearly independent over F.

Consider any x ∈ L. Then x = Σ_{j=1}^{m} a_jy_j for some a_j ∈ K, j = 1, 2, ..., m. Again, for each j, a_j = Σ_{i=1}^{n} a_{ij}x_i for some a_{ij} ∈ F, as x_1, x_2, ..., x_n is a basis of K over F. Consequently

x = Σ_{j=1}^{m} Σ_{i=1}^{n} a_{ij} x_iy_j, with a_{ij} ∈ F for all i, j.

Hence the mn elements x_iy_j, i = 1, 2, ..., n, j = 1, 2, ..., m, form a basis of L over F.
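As a worked illustration of this theorem (added here; it is not part of the original text), take F = Q, K = Q(√2) and L = Q(√2, √3). Since [Q(√2) : Q] = 2 and [Q(√2, √3) : Q(√2)] = 2 (as √3 satisfies x^2 − 3, which has no root in Q(√2)), the theorem gives [Q(√2, √3) : Q] = 2 · 2 = 4, and the products of the two bases, namely {1, √2, √3, √6}, form a basis of L over Q, exactly as in the proof above.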

Subfield generated by a Subset

Let S be a subset of a field K. Then a subfield K′ of K is said to be generated by S if

(i) S ⊆ K′;

(ii) for any subfield L of K, S ⊆ L implies K′ ⊆ L.

Notation The subfield generated by a subset S will be denoted by <S>.

Essentially the subfield generated by S is the intersection of all subfields of K which contain S. Now let K be a field extension of F and S be any subset of K; then the subfield of K generated by F ∪ S is said to be the subfield of K generated by S over F, and this subfield is denoted by F(S). However, if S is a finite set and its members are a_1, a_2, ..., a_n, then we write F(S) = F(a_1, a_2, ..., a_n).

A field K is said to be finitely generated over F if there exists a finite number of elements a_1, a_2, ..., a_n in K such that K = F(a_1, a_2, ..., a_n).

In particular, if K is generated by a single element over F, then K is called a simple extension of F. Thus Q[√2] = Q(√2) is a simple extension of Q. Consider any a ∈ K and let F(a) be the subfield of K generated by a over F. Then for any a_0, a_1, a_2, ..., a_n ∈ F, a_0 + a_1a + a_2a^2 + ... + a_na^n ∈ F(a). This means that F[a] ⊆ F(a). Consequently the field of quotients T of F[a] is also contained in F(a). However F ⊆ T and a ∈ T, therefore F(a) ⊆ T and we get T = F(a), i.e., F(a) is the field of quotients of the subring of K generated by F ∪ {a}. This discussion can be extended to any set S, and we can say <F ∪ S> = F(S). Let us remark that for any f(x) = a_0 + a_1x + ... + a_nx^n in F[x] and for any a ∈ K, f(a) denotes a_0 + a_1a + ... + a_na^n. It can also be verified that the mapping f(x) ↦ f(a) is a ring homomorphism of F[x] into K.

3.4 Algebraic and Transcendental Extension

The extension K of F is called an Algebraic Extension of F if every element in K is algebraic over F.

Field extensions which are not algebraic, i.e. which contain transcendental elements, are called transcendental.

For example, the field extension R/Q, that is the field of real numbers as an extension of the field of

rational numbers, is transcendental, while the field extensions C/R and Q(√2)/Q are algebraic,

where C is the field of complex numbers.

All transcendental extensions are of infinite degree. This in turn implies that all finite extensions are

algebraic. The converse is not true however: there are infinite extensions which are algebraic. For

instance, the field of all algebraic numbers is an infinite algebraic extension of the rational numbers.

If a is algebraic over K, then K[a], the set of all polynomials in a with coefficients in K, is not only a ring but a field: an algebraic extension of K which has finite degree over K. In the special case where K = Q is the field of rational numbers, Q[a] is an example of an algebraic number field.

Theorem Every finite extension of a field is an algebraic extension.

Proof Let K be a finite extension of a field F and let [K : F] = n. For any a ∈ K, F(a) is a subfield of K. But [F(a) : F] divides n, so it is finite. Hence a is algebraic over F. Consequently K is an algebraic extension of F.

3.5 Inseparable and Separable Extensions

If an element a of a field extension K of F is algebraic over F, then a is said to be separable

(inseparable) over F, if the minimal polynomial of a over F is separable (inseparable).

An algebraic extension K of a field F is a said to be a separable extension, if every element of K is

separable over F, otherwise K is said to be an inseparable extension.

As observed before, every polynomial over a field of characteristic zero is separable; we see that

every algebraic extension of a field of characteristic zero is a separable extension.

However, if we take F = Z_2(t) and let K be the splitting field of x^2 − t over F, then K is a finite extension of F as [K : F] = 2. But K is an inseparable extension, as x^2 − t has a repeated root in K. Notice that F is an infinite field of finite characteristic. We show that any algebraic extension of a finite field is separable.

Theorem Any algebraic extension of a finite field F is a separable extension.

Proof Let f(x) be any irreducible polynomial over F. Suppose f(x) is inseparable over F, so that f(x) = f_1(x^p) (Lemma 13.44), say

f(x) = β_0 + β_1x^p + β_2x^{2p} + ... + β_mx^{mp}

for some β_i ∈ F (0 ≤ i ≤ m). As a ↦ a^p is an automorphism of the finite field F, we can find a_i ∈ F such that a_i^p = β_i. Consequently

f(x) = a_0^p + a_1^p x^p + a_2^p x^{2p} + ... + a_m^p x^{mp} = (a_0 + a_1x + a_2x^2 + ... + a_mx^m)^p.

This implies f(x) is not irreducible. This is a contradiction. Hence f(x) is separable. Thus, in particular, if K is any algebraic extension of F, the minimal polynomial of each element of K over F is separable. Hence by definition K is a separable extension of F.

3.6 Normal Extensions

An algebraic field extension L/K is said to be normal if L is the splitting field of a family of

polynomials in K[X].

The normality of L/K is equivalent to each of the following properties:

Let K^a be an algebraic closure of K containing L. Every embedding σ of L in K^a which restricts to the identity on K satisfies σ(L) = L.

Every irreducible polynomial in K[X] which has a root in L factors into linear factors in L[X].

Other Properties :

Let L be an extension of a field K, then

If L is a normal extension of K and if E is an intermediate extension (i.e., L ⊃ E ⊃ K), then L is

also a normal extension of E.

If E and F are normal extensions of K contained in L, then the compositum EF and E ∩ F are also

normal extensions of K.

If K is a field and L is an algebraic extension of K, then there is some algebraic extension M of L

such that M is a normal extension of K.

3.7 Perfect and Finite Field

A field F is called Perfect if all finite extensions of F are separable.

There exists a simple criterion for perfectness:

A field F is perfect if and only if F has characteristic 0, or F has a nonzero characteristic p, and

every element of F has a p-th root in F.

Fields having only a finite number of elements are called Finite Fields. Such fields do exist; for example, the ring of integers modulo any prime p is a finite field.

Theorem Let F be an infinite field and E be some field extension of F. Let a, b ∈ E be algebraic over F and separable over F. Then there exists c ∈ E such that F(c) = F(a, b); in fact c = a + λb for some λ ∈ F.

Proof Let f(x) and g(x) be the minimal polynomials over F of a and b respectively, and let m, n be their respective degrees. Let K be the splitting field of f(x)g(x) over E. Then a, b ∈ K. Clearly every root of f(x) is a root of f(x)g(x), so K contains a splitting field of f(x); similar is the case for g(x). Since a, b are separable over F, f(x) has m distinct roots a = a_1, a_2, a_3, ..., a_m in K and g(x) has n distinct roots b = b_1, b_2, ..., b_n in K.

For 2 ≤ i ≤ m, 2 ≤ j ≤ n define λ_{ij} = (a_i − a)/(b − b_j) ∈ K. These λ_{ij}'s are finite in number. As F has an infinite number of elements, clearly we can find a λ (≠ 0) ∈ F such that λ ≠ λ_{ij} for all i, j. Then a + λb ≠ a_i + λb_j, i.e., a + λb − λb_j ≠ a_i, for 2 ≤ i ≤ m, 2 ≤ j ≤ n.

Now put c = a + λb ∈ F(a, b), so F(c) ⊆ F(a, b). We show that F(c) = F(a, b).

Since c ∈ F(c) and every coefficient of f(x) is in F, and so also in F(c), we get that the polynomial h(x) = f(c − λx) ∈ F(c)[x]. Further deg h(x) = deg f(x) = m (why?). Now h(b) = f(c − λb) = f(a) = 0. Suppose that for some j ≥ 2, h(b_j) = 0; then f(c − λb_j) = 0, so that c − λb_j = a_i for some i, since a_1, ..., a_m are the only roots of f(x). If i = 1, then a_i = a gives c − λb_j = a, i.e., a + λb − λb_j = a, i.e., b = b_j. This is a contradiction. So i ≥ 2. In that case we would get a + λb − λb_j = a_i, contradicting the choice of λ. Hence x − b_j does not divide h(x) for j ≥ 2. Now x − b is a factor of h(x) over K. As b is also a root of g(x) in K, x − b is a common factor of g(x) and h(x).

We claim that x − b is their HCF. As g(x) has no multiple root, (x − b)^2 does not divide g(x). Since g(x) = (x − b)(x − b_2)...(x − b_n) and each (x − b_j) (j ≥ 2) does not divide h(x), it follows that x − b is the HCF of g(x) and h(x). Now h(x) ∈ F(c)[x], as c ∈ F(c) and λ ∈ F. Also g(x) ∈ F(c)[x]. Let g_1(x) be the minimal polynomial of b over F(c). Then g_1(x) | g(x) and g_1(x) | h(x) over F(c), and hence over K also, as F(c) ⊆ K.

So g_1(x) | (x − b) over K, as x − b is the HCF of g(x) and h(x) over K. However, g_1(x) is of positive degree and is monic, so we must have x − b = g_1(x) ∈ F(c)[x]. This implies b ∈ F(c). Therefore a = c − λb ∈ F(c). Thus F(a, b) ⊆ F(c). Hence F(c) = F(a, b).

Theorem Any finite separable extension of an infinite field is a simple extension.

Proof Let F be an infinite field and K a finite separable field extension of F. There exists a finite number of elements a_1, a_2, ..., a_n in K such that K = F(a_1, a_2, ..., a_n). We prove the result by induction on n. If n = 1, then K is already simple. Suppose n > 1 and the theorem holds for all finite separable extensions of F generated by fewer than n elements. Let K_1 = F(a_1, a_2, ..., a_{n−1}) = F(b) for some b ∈ K_1. Thus K = F(b, a_n), and by the previous theorem K = F(c) for some c ∈ F(b, a_n). This proves the theorem.

Theorem Finite fields having the same number of elements are isomorphic.

Proof Let K_1 and K_2 be two fields each having q elements. By Theorem 13.52, q = p^n for some n ∈ N and prime number p; further, the characteristic of each of K_1 and K_2 is p. Let P_1 and P_2 be the prime subfields of K_1 and K_2 respectively. As P_1 ≅ Z/<p> and P_2 ≅ Z/<p>, we get P_1 ≅ P_2. Now by Lemma 13.53, K_1 is the splitting field of x^q − x over P_1 and K_2 is the splitting field of y^q − y over P_2. Hence by Theorem 13.35, K_1 ≅ K_2.

Notation A finite field of q elements is denoted by GF(q).

3.8 Primitive Element

A field extension L/K is called a simple extension if there exists an element θ in L with L = K(θ).

The element θ is called a primitive element, or generating element, for the extension; we also say

that L is generated over K by θ.

A primitive element of a finite field is a generator of the field's multiplicative group. Stated at greater length: in the realm of finite fields, a stricter definition of primitive element is used. The multiplicative group of a finite field is cyclic, and an element is called a primitive element if and only if it is a generator of the multiplicative group. The distinction is that the earlier definition requires that every element of the field be a quotient of polynomials in the primitive element, whereas within the realm of finite fields the requirement is that every nonzero element be a pure power of it.
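The following small search (not part of the original text) illustrates this stricter definition for the finite field Z/7Z: an element is primitive exactly when its powers exhaust all nonzero residues.

    p = 7                                  # Z/7Z; its multiplicative group has order 6

    def is_primitive(g, p):
        return len({pow(g, k, p) for k in range(1, p)}) == p - 1

    print([g for g in range(1, p) if is_primitive(g, p)])   # [3, 5]: the primitive elements mod 7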

The primitive element theorem provides a characterization of the finite field extensions which are

simple and thus can be generated by the adjunction of a single primitive element.

A field extension L / K is finite and has a primitive element if and only if there are only finitely

many intermediate fields F with K ⊆ F ⊆ L.

In this form, the theorem is somewhat unwieldy and rarely used. An important corollary states:

Every finite separable extension L / K has a primitive element.

In more concrete language, every separable extension L / K of finite degree n is generated by a single element x satisfying a polynomial equation of degree n, x^n + c_1x^{n−1} + ... + c_n = 0, with coefficients in K. The primitive element x provides a basis [1, x, x^2, ..., x^{n−1}] for L as a vector space over K.

This corollary applies to algebraic number fields, which are finite extensions of the rational

numbers Q, since Q has characteristic 0 and therefore every extension over Q is separable.

For non-separable extensions, one can at least state the following:

If the degree [L : K] is a prime number, then L / K has a primitive element.

If the degree is not a prime number and the extension is not separable, one can give

counterexamples. For example if K is Fp(T, U), the field of rational functions in two indeterminates

T and U over the finite field with p elements, and L is obtained from K by adjoining a p-th root of

T, and of U, then there is no primitive element for L over K. In fact one can see that for any α in L, the element α^p lies in K. Therefore we have [L : K] = p^2, but there is no element of L with degree p^2 over K, as a primitive element must have.

3.9 Algebraic Closed Field

A field K is algebraically closed if every non-constant polynomial in K[X] has a root in K.

An extension field L of K is an algebraic closure of K if L is algebraically closed and every element

of L is algebraic over K .Using the axiom of choice, one can show that any field has an algebraic

closure. Moreover, any two algebraic closures of a field are isomorphic as fields, but not necessarily

canonically isomorphic.

Theorem A finite field cannot be algebraically closed.

Proof The proof proceeds by the method of contradiction. Assume that a field F is both finite and algebraically closed. Consider the polynomial p(x) = x^2 − x as a function from F to F. There are two elements which any field (in particular F) must have: the additive identity 0 and the multiplicative identity 1. The polynomial p maps both of these elements to 0. Since F is finite and the function p : F → F is not one-to-one, the function cannot map onto F either, so there must exist an element a of F such that x^2 − x ≠ a for all x ∈ F. In other words, the polynomial x^2 − x − a has no root in F, so F could not be algebraically closed.

Proposition Every algebraically closed field is perfect.

Proof Let K be an algebraically closed field of prime characteristic p. Take a ∈ K. Then the polynomial x^p − a admits a zero in K. It follows that a admits a p-th root in K. Since a is arbitrary, we have proved that the field K is perfect.

3.10 Automorphism of the Field

Let F be any field and K be any field extension of F. Then an automorphism σ of K is said to be an F-automorphism if σ(x) = x for every x ∈ F, i.e., σ leaves every element of F fixed.

Clearly, the identity automorphism of K is an F-automorphism.

Let σ_1, σ_2 be any two F-automorphisms of K. Then σ_1σ_2^{-1} ∈ Aut(K), as Aut(K) is a group. At the same time, for any x ∈ F, σ_1(x) = x and σ_2(x) = x. This implies σ_2^{-1}(x) = x and hence σ_1σ_2^{-1}(x) = σ_1(x) = x. Hence σ_1σ_2^{-1} is also an F-automorphism of K. Thus the set of all F-automorphisms of K is a subgroup of the group of all automorphisms of K.

Notation G(K, F) will denote the group of all F-automorphisms of K. G(K, F) is called the Galois group of K over F, which we will discuss in the next section.

Corollary Any set of distinct automorphisms of K is linearly independent over K.

Lemma The set of all automorphisms of a field forms a group under composition of mappings.

Proof Let Aut(K) be the set of all automorphisms of K.

(i) Closure: Let σ_1, σ_2 ∈ Aut(K). Then σ_1 and σ_2 are both 1-1 and onto mappings, so σ_1σ_2 is also a 1-1, onto mapping. Further, for any x, y ∈ K,

σ_1σ_2(x + y) = σ_1[σ_2(x + y)] = σ_1[σ_2(x) + σ_2(y)] = σ_1[σ_2(x)] + σ_1[σ_2(y)] = σ_1σ_2(x) + σ_1σ_2(y),

σ_1σ_2(xy) = σ_1[σ_2(xy)] = σ_1[σ_2(x)σ_2(y)] = {σ_1[σ_2(x)]}{σ_1[σ_2(y)]} = [σ_1σ_2(x)][σ_1σ_2(y)].

This shows that σ_1σ_2 is an automorphism of K. Hence σ_1σ_2 ∈ Aut(K).

(ii) Associativity: It follows from the fact that composition of mappings is, in general, associative.

(iii) Existence of Identity: The identity map I on K is the identity of Aut(K).

(iv) Existence of Inverse: Consider any σ ∈ Aut(K). Since σ is 1-1 and onto, σ^{-1} (the inverse of the mapping σ) exists, and for any x ∈ K, σ^{-1}(x) = y if and only if σ(y) = x. Now consider x_1, x_2 ∈ K. Let σ^{-1}(x_1) = y_1, σ^{-1}(x_2) = y_2. Then σ(y_1) = x_1, σ(y_2) = x_2, so that σ(y_1 + y_2) = σ(y_1) + σ(y_2) = x_1 + x_2 and σ(y_1y_2) = σ(y_1)σ(y_2) = x_1x_2. Hence σ^{-1}(x_1 + x_2) = y_1 + y_2 = σ^{-1}(x_1) + σ^{-1}(x_2) and σ^{-1}(x_1x_2) = y_1y_2 = σ^{-1}(x_1)σ^{-1}(x_2). Consequently σ^{-1} is an automorphism of K, i.e., σ^{-1} ∈ Aut(K).

This proves that Aut(K) is a group.

Lemma Let K be any field extension of F and let a ∈ K be algebraic over F. Then for every F-automorphism σ ∈ G(K, F), σ(a) is a conjugate of a over F.

Proof Let f(x) = x^n + a_{n−1}x^{n−1} + ... + a_1x + a_0 be the minimal polynomial of a over F.

Then a^n + a_{n−1}a^{n−1} + ... + a_1a + a_0 = 0.

So 0 = σ(0) = σ(a^n + a_{n−1}a^{n−1} + ... + a_1a + a_0)
= [σ(a)]^n + σ(a_{n−1})[σ(a)]^{n−1} + ... + σ(a_1)σ(a) + σ(a_0)
= [σ(a)]^n + a_{n−1}[σ(a)]^{n−1} + ... + a_1σ(a) + a_0,

since σ(a_i) = a_i, i = 0, 1, 2, ..., n − 1.

This shows that σ(a) is also a root of f(x). Hence σ(a) is a conjugate of a over F.

Remark Let K = F(a_1, ..., a_n) be a finite algebraic extension with {a_1, a_2, ..., a_n} as a basis of K over F. Then each x ∈ K is expressible as x = α_1a_1 + α_2a_2 + ... + α_na_n for some α_i ∈ F. Suppose σ is any F-automorphism of K. Then

σ(x) = σ(α_1)σ(a_1) + σ(α_2)σ(a_2) + ... + σ(α_n)σ(a_n) = α_1σ(a_1) + α_2σ(a_2) + ... + α_nσ(a_n).

So σ(x) is known if we know σ(a_1), ..., σ(a_n), i.e., σ is completely determined by the images of the basis elements of K. In fact, more generally, if K is finitely generated over F and a_1, ..., a_n is a set of generators of K over F, then σ is determined by σ(a_1), ..., σ(a_n).

Theorem (Artin) Let G be a finite group of automorphisms of a field K, and let F_0 be the fixed field under G. Then the degree of K over F_0 is equal to the order of the group G.

Proof Let o(G) = n. We show that

(i) if [K : F_0] = m, then m ≥ n, and

(ii) if [K : F_0] = m, then m ≤ n.

(i) Suppose m < n. Let σ_1 = 1, σ_2, ..., σ_n be all the members of G, and let {x_1, x_2, ..., x_m} be a basis of K over F_0. Consider the system of m linear homogeneous equations

σ_1(x_j)u_1 + σ_2(x_j)u_2 + ... + σ_n(x_j)u_n = 0,  j = 1, 2, ..., m.     ...(1)

As the number of equations is less than the number of variables, the system (1) has a non-trivial solution, say (y_1, y_2, ..., y_n), over K. Then

σ_1(x_j)y_1 + σ_2(x_j)y_2 + ... + σ_n(x_j)y_n = 0,  j = 1, 2, ..., m.     ...(2)

Consider any x ∈ K. Then x = a_1x_1 + a_2x_2 + ... + a_mx_m for some a_i ∈ F_0, as {x_1, x_2, ..., x_m} is a basis of K over F_0. In (2), multiplying the j-th equation by a_j, adding the resulting equations and using the fact that σ_i(a_j) = a_j for all i, j, we get

σ_1(x)y_1 + σ_2(x)y_2 + ... + σ_n(x)y_n = 0 for all x ∈ K.

So y_1σ_1 + y_2σ_2 + ... + y_nσ_n = 0 (the zero map) with at least one y_j ≠ 0. This is not possible, since distinct automorphisms of K are linearly independent over K. Hence m < n is impossible, so m ≥ n.

(ii) Suppose there exist n + 1 linearly independent elements, say x_1, x_2, ..., x_{n+1}, of K over F_0. Consider the system of n linear homogeneous equations in n + 1 unknowns

σ_j(x_1)u_1 + σ_j(x_2)u_2 + ... + σ_j(x_{n+1})u_{n+1} = 0, for j = 1, 2, ..., n.     ...(3)

As the number of variables is greater than the number of equations, these homogeneous equations have a non-trivial solution. Let (z_1, z_2, ..., z_{n+1}) be a non-trivial solution of equations (3) with the smallest number, say r, of non-zero components. We can renumber them and suppose that z_j ≠ 0 for 1 ≤ j ≤ r. Then (3) gives

σ_j(x_1)z_1 + σ_j(x_2)z_2 + ... + σ_j(x_r)z_r = 0,  j = 1, 2, ..., n.     ...(4)

Dividing the equations by z_r and setting z_i′ = z_i / z_r, we get

σ_j(x_1)z_1′ + σ_j(x_2)z_2′ + ... + σ_j(x_{r−1})z_{r−1}′ + σ_j(x_r) = 0.     ...(5)

Now for σ_j = 1 (the identity), σ_j(x_i) = x_i, so we get from (5)

x_1z_1′ + x_2z_2′ + ... + x_{r−1}z_{r−1}′ + x_r = 0.     ...(6)

If all of z_1′, z_2′, ..., z_{r−1}′ were in F_0, then (6) would give that x_1, x_2, ..., x_r are linearly dependent over F_0. This is not possible. Hence at least one of these, say z_1′, is not in F_0. Notice further that r ≠ 1, for otherwise (4) would give σ_j(x_1)z_1 = 0 and hence x_1 = 0, which is not possible. As z_1′ ∉ F_0, there exists some σ_i ∈ G such that σ_i(z_1′) ≠ z_1′. Applying σ_i to (5) we get

σ_iσ_j(x_1)[σ_i(z_1′)] + σ_iσ_j(x_2)[σ_i(z_2′)] + ... + σ_iσ_j(x_{r−1})[σ_i(z_{r−1}′)] + σ_iσ_j(x_r) = 0,  j = 1, 2, ..., n.

However σ_iG = G, so that every σ_k ∈ G is of the form σ_iσ_j; we get

σ_j(x_1)σ_i(z_1′) + σ_j(x_2)σ_i(z_2′) + ... + σ_j(x_{r−1})σ_i(z_{r−1}′) + σ_j(x_r) = 0     ...(7)

for all j = 1, 2, ..., n. Subtracting (7) from (5) we get

σ_j(x_1)[z_1′ − σ_i(z_1′)] + σ_j(x_2)[z_2′ − σ_i(z_2′)] + ... + σ_j(x_{r−1})[z_{r−1}′ − σ_i(z_{r−1}′)] = 0,  j = 1, 2, ..., n.     ...(8)

Now we put t_k = z_k′ − σ_i(z_k′) for k = 1, 2, ..., r − 1, and t_k = 0 for k = r, r + 1, ..., n + 1. Then (8) gives

σ_j(x_1)t_1 + σ_j(x_2)t_2 + ... + σ_j(x_{r−1})t_{r−1} + σ_j(x_r)t_r + ... + σ_j(x_{n+1})t_{n+1} = 0,  j = 1, 2, ..., n.     ...(9)

Further, t_1 ≠ 0 as σ_i(z_1′) ≠ z_1′, so that (t_1, t_2, ..., t_{r−1}, 0, 0, ..., 0) is a non-trivial solution of the system of equations (3). It has fewer than r non-zero components. But by our choice (z_1, z_2, ..., z_r, 0, ..., 0) is a non-trivial solution of (3) having the smallest number of non-zero components, and these are r in number. Hence we get a contradiction. This proves that m ≤ n.

Hence m = n, i.e., [K : F_0] = o(G). This completes the proof.

Remark 1 Let G be any group (not necessarily finite) of automorphisms of a field K such that [K : F_0] = m is finite, where F_0 is the fixed field under G. Then G is necessarily finite and o(G) = m.

Remark 2 Let K be a finite extension of F and G be the group of all F-automorphisms of K. Then the fixed field F_0 under G contains F, so [K : F_0] ≤ [K : F]. Hence o(G) ≤ [K : F].

3.11 GALOIS THEORY

One of the most elegant theories in Abstract Algebra is the “Galois theory of fields”. This theory is

an excellent composite of the theory of groups and the theory of algebraic field extensions. It has

many applications to the Theory of Equations and Geometry. Its fundamental concepts and its

applications are as under:

A finite extension K of a field F is said to be a Galois Extension of F if F is the fixed subfield of K under the group G(K, F) of all F-automorphisms of K.
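As a simple illustration (not part of the original text): take K = Q(√2) and F = Q. The only F-automorphisms of K are the identity and the map σ : a + b√2 ↦ a − b√2, so G(K, F) has order 2. An element a + b√2 is fixed by σ exactly when b = 0, so the fixed subfield of K under G(K, F) is Q itself. Hence Q(√2) is a Galois extension of Q, and [K : F] = o(G(K, F)) = 2.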

Corollary Let K = F(a) be a simple finite separable extension of F. Then K is a splitting field of the minimal polynomial of a over F if and only if F is the fixed field under the group of all F-automorphisms of K.

Proof Let f(x) be the minimal polynomial of a over F and let deg f(x) = m. Then [K : F] = m. Let a = a_1, a_2, a_3, ..., a_r be the distinct conjugates of a in K. Then K = F(a_i), i = 1, 2, ..., r. For each i, there exists an F-automorphism σ_i of K such that σ_i(a_1) = a_i. Since a_1 generates K over F, each σ_i is uniquely determined. Further, for any F-automorphism σ of K, as σ(a_1) is a conjugate of a_1 (Lemma 14.5), σ(a_1) = a_i for some a_i. From this it follows that σ = σ_i. Hence the group G(K, F) consists of σ_1, σ_2, ..., σ_r. Let F_0 be the fixed field under G(K, F). Then by the above theorem [K : F_0] = o(G(K, F)) = r. So F = F_0 if and only if r = m. Hence F is the fixed field under G if and only if f(x) has all its m roots in K, i.e., if and only if K is the splitting field of f(x) over F.


3.12 FUNDAMENTAL THEOREM OF GALOIS THEORY

Fundamental Theorem of Galois Theory Let K be a finite, normal, separable field extension of a field F and let G(K, F) be the Galois group of K over F. Then the correspondence E ↦ G(K, E), where E is a subfield of K containing F, is 1-1 between the family of subfields of K containing F and the family of all subgroups of G(K, F), and it satisfies the following conditions. Given any subfield E of K containing F and any subgroup H of G(K, F):

(i) E = K_{G(K,E)}, the fixed field of G(K, E);

(ii) H = G(K, K_H);

(iii) [K : E] = o(G(K, E)) and [E : F] = index of G(K, E) in G(K, F);

(iv) E is a normal extension of F if and only if G(K, E) is a normal subgroup of G(K, F);

(v) when E is a normal extension of F, then G(E, F) is isomorphic to G(K, F)/G(K, E).

Proof Since K is a finite extension of F and F ⊆ E ⊆ K, we get that K is a finite normal separable extension of E. So E is the same as the fixed field K_{G(K,E)}. Thus (i) follows.

By definition K_H = { x ∈ K | σ(x) = x for all σ ∈ H }, and each σ ∈ H is a K_H-automorphism of K, so that H ⊆ G(K, K_H). However o(H) = [K : K_H] (Theorem 14.8). At the same time, as K is a normal extension of K_H, Theorem 14.15 gives that K_H is the fixed field under G(K, K_H). So [K : K_H] = o(G(K, K_H)). Thus o(H) = o(G(K, K_H)), and consequently H = G(K, K_H). This proves (ii).

Now, as K is a normal separable extension of E, [K : E] = o(G(K, E)). Thus

o(G(K, F)) = [K : F] = [K : E][E : F] = o(G(K, E))[E : F]

gives

[E : F] = o(G(K, F)) / o(G(K, E)) = index of G(K, E) in G(K, F).

This proves (iii).

Let E be a normal extension of F. Consider any a ∈ E; then the splitting field of the minimal polynomial of a over F is contained in E. That gives that every conjugate of a over F in K is again in E. Since for any σ ∈ G(K, F), σ(a) is a conjugate of a, we have σ(a) ∈ E. Thus for any τ ∈ G(K, E), τ[σ(a)] = σ(a), and hence (σ^{-1}τσ)(a) = a. This proves σ^{-1}τσ ∈ G(K, E) for every τ ∈ G(K, E) and σ ∈ G(K, F). Consequently G(K, E) is a normal subgroup of G(K, F). Conversely, let G(K, E) be a normal subgroup of G(K, F). Consider a ∈ E. As K is a normal extension of F, K contains a splitting field, say L, of the minimal polynomial p(x) of a over F. Consider any root b of p(x) in L. Then b is a conjugate of a over F, so there exists an F-automorphism σ of K such that σ(a) = b. For any τ ∈ G(K, E), σ^{-1}τσ ∈ G(K, E), so (σ^{-1}τσ)(a) = a. Thus τ[σ(a)] = σ(a) for every τ ∈ G(K, E). However E is the fixed field under G(K, E). This gives that b = σ(a) ∈ E. Hence L ⊆ E. This proves that E is a normal extension of F. Hence (iv) is proved.

Let E be a normal extension of F; E = F(a) for some a ∈ E. For any σ ∈ G(K, F), let σ_E denote the restriction of σ to E. Since σ(a) ∈ E, we get σ(E) ⊆ E. As [σ(E) : F] = [E : F], we get σ(E) = E. Hence σ_E is an F-automorphism of E and so σ_E ∈ G(E, F). Define a mapping ψ : G(K, F) → G(E, F) by ψ(σ) = σ_E, σ ∈ G(K, F). Clearly for any σ, τ ∈ G(K, F), (στ)_E = σ_E τ_E. Hence ψ is a group homomorphism. Consider any y ∈ G(E, F). Now y(a) is a conjugate of a over F. Thus there exists an F-automorphism σ of K such that σ(a) = y(a). Further, as σ and y are both the identity on F and E is generated by a over F, we get σ(x) = y(x) for all x ∈ E, i.e., y = σ_E = ψ(σ). This proves that ψ is an onto mapping. Hence G(E, F) ≅ G(K, F)/Ker ψ. Now σ ∈ Ker ψ if and only if σ_E is the identity on E, i.e., if and only if σ(x) = x for all x ∈ E, i.e., if and only if σ ∈ G(K, E). Hence Ker ψ = G(K, E), and we obtain

G(E, F) ≅ G(K, F) / G(K, E).

This proves (v). Hence the theorem is proved.
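As a standard illustration of this correspondence (added here; not part of the original text), let K = Q(√2, √3) and F = Q. Then G(K, F) = {1, σ, τ, στ}, where σ(√2) = −√2, σ(√3) = √3 and τ(√2) = √2, τ(√3) = −√3, so G(K, F) ≅ Z/2 × Z/2. Its three subgroups of order 2, namely {1, σ}, {1, τ} and {1, στ}, correspond respectively to the three intermediate fields Q(√3), Q(√2) and Q(√6), each of degree 2 over Q. All of these subgroups are normal, and correspondingly each intermediate field is a normal extension of Q, in accordance with (iv).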

3.13 Solvability and Insolvability of Polynomials by radicals

Historically, the problem of solving polynomial equations by radicals has motivated a great deal of

study in algebra. The quadratic formula gives a solution of the equation ax2 + bx + c = 0, where „a‟

is nonzero, expressed in terms of its coefficients, and using a square root. More generally, we say

that an equation anxn + ... + a1x + a0 = 0 , an ≠ 0 is solvable by radicals if its solutions can be given

in a form that involves sums, differences, products, or quotients of the coefficients an , ..., a1 , a0 ,

together with square roots, cube roots, etc., of such combinations of the coefficients.

Abel attacked the general problem of when a polynomial equation could be solved by radicals. His

papers inspired Galois to formulate a theory of solvability of equations involving the structures we

now know as groups and fields. Galois worked with fields to which all roots of the given equation

had been adjoined. He then considered the set of all permutations of these roots that leave the

coefficient field unchanged. The permutations form a group, called the Galois group of the

equation. From the modern point of view, the permutations of the roots can be extended to

automorphisms of the field, and form a group under composition of functions. Then an equation is solvable by radicals if and only if its Galois group is "solvable".

Polynomials are functions of the type p(x) = a_nx^n + a_{n−1}x^{n−1} + ... + a_1x + a_0, a_n ≠ 0.

The root(s) of a polynomial are the value(s) of x which satisfy p(x) = 0.

Being able to solve for polynomial roots using radicals is not about finding a root as such, since the fundamental theorem of algebra already guarantees that any polynomial of degree n has n complex roots, which need not be distinct. Solving a polynomial by radicals is the expression of all roots of a polynomial using only the four basic operations (addition, subtraction, multiplication and division), as well as the taking of radicals, on the arithmetical combinations of the coefficients of any given polynomial.

Solving for polynomial roots by radicals involves finding the general solution to the general form of a polynomial of some specific degree. The general solution to a polynomial of degree two, or a quadratic equation, for example, is given by the quadratic formula.

While the quadratic formula for solving the quadratic equations has been known for as long as two

thousand years, cubic or quartic formulas were only known in the last five hundred years, and this

problem has been worked on by many mathematicians, including Gauss.

This raises the question of how to solve polynomials of higher degree. Methods for solving the roots of polynomials of degrees three and four exist, but they are far less well known and more complex than the quadratic formula. All three known formulas will be mentioned in this unit.

However, there is no such formula for polynomials of degree five and above, and it will be proven that such formulas do not exist.

While it may be impossible to find a general formula for these polynomials, specific cases of polynomials which can be expressed by radicals will be investigated.

The purpose of this discussion is thus to find out whether all polynomials can be solved by radicals and to prove the resulting findings about the solvability of polynomials.

RESULTS

Constant Polynomials

Constant polynomials of the form p(x) = c have no roots, unless c = 0, in which case there are infinitely many roots.

Linear Functions

These are functions of the form p(x) = mx + b, with m nonzero, and they have exactly one root, namely x = −b/m.

Quadratic Functions

Quadratic equations can be solved using the Quadratic Formula: for any quadratic equation of the form

ax^2 + bx + c = 0, a ≠ 0,     ...(1)

completing the square gives

(x + b/(2a))^2 = (b^2 − 4ac)/(4a^2).     ...(2)

Rearranging the terms of (2) we get the quadratic formula

x = (−b ± √(b^2 − 4ac)) / (2a).

Cubic Functions

Solving cubic equations can be done using Cardano's method, which transforms the general cubic equation into a depressed cubic without an x^2 term.

This method is believed to have originated with Scipione del Ferro, was later adapted by Niccolò Tartaglia, and was published in Cardano's 1545 work.

The method is as follows. We begin with the general form of a polynomial of degree three,

ax^3 + bx^2 + cx + d = 0, a ≠ 0.     ...(3)

Since it is easier to work with a polynomial of leading coefficient one, we can divide a out of the entire equation:

x^3 + (b/a)x^2 + (c/a)x + (d/a) = 0.     ...(4)

Substitute the following equation into (4):

x = y − b/(3a).

The polynomial becomes the depressed cubic

y^3 + py + q = 0,     ...(5)

where p = c/a − b^2/(3a^2) and q = 2b^3/(27a^3) − bc/(3a^2) + d/a.

For a cubic polynomial of the form (5), observe that, for any u and v,

(u + v)^3 − 3uv(u + v) − (u^3 + v^3) = 0.     ...(6)

Equation (6) corresponds to equation (5) since we can let

y = u + v,  p = −3uv,  q = −(u^3 + v^3).

Thus we can solve for y:

y_i = ω^i · ∛( −q/2 + √(q^2/4 + p^3/27) ) + ω^{−i} · ∛( −q/2 − √(q^2/4 + p^3/27) ),

where i ∈ {1, 2, 3} and ω is one of the 3rd roots of unity.

Thus the general solutions for the equation (4) are x_i = y_i − b/(3a).
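A quick numerical sanity check of Cardano's formula (not part of the original text) on the depressed cubic y^3 − 6y − 9 = 0, whose real root is y = 3:

    p, q = -6.0, -9.0
    disc = (q / 2) ** 2 + (p / 3) ** 3        # = 12.25 > 0, so there is one real root
    u = (-q / 2 + disc ** 0.5) ** (1 / 3)     # cube root of 8  -> 2.0
    v = (-q / 2 - disc ** 0.5) ** (1 / 3)     # cube root of 1  -> 1.0
    y = u + v
    print(y, y**3 + p*y + q)                  # 3.0  0.0 (up to rounding)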

Quartic Functions

Solving quartic polynomials can be done using Ferrari's method, which transforms a quartic polynomial into a depressed quartic which has no x^3 term.

We begin with the general form of a quartic equation. We can reduce all quartic polynomials to monic polynomials by dividing throughout by the leading coefficient, and replacing the coefficients of the other terms with a, b, c, d:

x^4 + ax^3 + bx^2 + cx + d = 0.     ...(7)

Substitute the following equation into (7):

x = y − a/4,

to get an equation of the form

y^4 + py^2 + qy + r = 0,  i.e.  y^4 = −py^2 − qy − r.     ...(8)

We can add 2zy^2 + z^2 to both sides of the above equation, to obtain

(y^2 + z)^2 = (2z − p)y^2 − qy + (z^2 − r).     ...(9)

Since we want the right-hand side to be a perfect square in y as well, we can let the discriminant of the quadratic on the RHS be 0. Therefore,

q^2 − 4(2z − p)(z^2 − r) = 0.

Rearranging the terms, we get a cubic in z:

8z^3 − 4pz^2 − 8rz + (4pr − q^2) = 0.

We can thus find a root z of this cubic. Substituting that value of z into (9) makes both sides perfect squares; taking square roots gives two quadratic equations in y, and solving these quadratics gives the roots of the depressed quartic, from which we can derive x = y − a/4.

Theorem There exists a polynomial of degree 5 with rational coefficients that is not solvable by

radicals.

Proof Let p(x) = x^5 − 2x^3 − 8x − 2. It is easy to check that the derivative of p(x) has two real roots, corresponding to one relative maximum and one relative minimum. Since the values of p(x) change sign between −2 and −1, between −1 and 0, and between 2 and 3, the polynomial has precisely three real roots.

There exists a splitting field F for p(x) that is contained in the field C of complex numbers. The

polynomial p(x) is irreducible over the rational numbers by Eisenstein's criterion, and so adjoining a

root of p(x) gives an extension of degree 5.

It follows from theorems in Galois theory and group theory that the Galois group of p(x) over the

rationals must contain an element of order 5. Every element of the Galois group of p(x) gives a

permutation of the roots, and so the Galois group turns out to be isomorphic to a subgroup of the

symmetric group on 5 elements. This subgroup must contain an element of order 5, and it must also contain the transposition that corresponds to the element of the Galois group defined by complex conjugation. It can be shown that any subgroup that contains both a transposition and a cycle of length 5 must be equal to the whole symmetric group. Therefore the Galois group of p(x) over the rationals

must be isomorphic to the symmetric group on 5 elements.

The proof is completed by showing that this group is not a solvable group. With some hard work it

can be shown that in the group the only candidate for a chain of normal subgroups as required in the

definition of a solvable group is S5 > A5 > {e} . Since the alternating group on 5 elements is not

abelian, it follows that the symmetric group on 5 elements is not a solvable group. Thus p(x) cannot

be solved by radicals. This we had already discussed in block 1 section 1.6.

A general polynomial of degree five or greater cannot be solved by radicals, and this fact can be proved using Galois theory. However, many types of (non-general) polynomials of degree five or greater can be solved by radicals, and solutions by radicals have been found for polynomials of these types. For quadratic, cubic, and quartic equations, general solutions have been found; they are, respectively, the quadratic formula, Cardano's method and Ferrari's method.

Quintic Functions

A general quintic polynomial is insolvable by radicals. The proof below makes use of group theory and Galois theory, and is unlike Abel's 1819 paper. We will use the result below:

Theorem A polynomial f(x) ∈ F[x] is solvable by radicals if and only if the Galois group of the splitting field of f(x) is solvable.

Let y_1, y_2, y_3, y_4, y_5 be independent transcendental elements over Q. Consider

f(x) = (x − y_1)(x − y_2)(x − y_3)(x − y_4)(x − y_5) = x^5 − s_1x^4 + s_2x^3 − s_3x^2 + s_4x − s_5.

By Vieta's formulas, we know that the coefficients s_1, s_2, ..., s_5 are the elementary symmetric functions in the y_i.

Set F = Q(s_1, ..., s_5) and E = Q(y_1, ..., y_5). Then the polynomial f(x) in F[x] has E as its splitting field, and Gal(E/F) is the symmetric group S_5.

The proof of the insolvability of f(x) by radicals is as follows.

Suppose on the contrary that f(x) is solvable by radicals, so that Gal(E/F) is solvable. Consider a composition series of subgroups from G_0 = Gal(E/F) to G_r = (1):

G_0 ⊵ G_1 ⊵ G_2 ⊵ ... ⊵ G_r.

This corresponds to a tower of extension fields

F = F_0 ⊂ F_1 ⊂ F_2 ⊂ ... ⊂ F_r = E,

in which each extension is cyclic and Galois.

However, the only composition series of S_5 is S_5 ⊳ A_5 ⊳ (1), and A_5 is a non-abelian simple group, so not all of its factors are abelian. Thus Gal(E/F) is not solvable. Hence f(x) is not solvable by radicals.

Special Solvable Cases

By the proof above, we know that it is impossible to solve all quintics by radicals, and thus no

general solution can be found. However, there are many cases of quintics which are solvable by

radicals.

Consider the general quintic polynomial f(x) = ax^5 + bx^4 + cx^3 + dx^2 + ex + g.

If a = 0, then the quintic becomes a quartic polynomial, and is thus solvable by radicals using the aforementioned Ferrari's method.
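For illustration, once the degree drops to four, explicit radical roots exist and can be produced mechanically. The quartic below is a hypothetical example (not from the text), and the sketch assumes sympy; sympy's solver plays the role of Ferrari's method here.

```python
# Sketch: any quartic over Q has roots expressible by radicals.
from sympy import symbols, solve, Abs

x = symbols('x')
q = x**4 + x + 1          # hypothetical quartic, purely for illustration
roots = solve(q, x)       # four closed-form radical expressions
print(len(roots))         # -> 4
# numerical sanity check that each radical expression really is a root
print([Abs(q.subs(x, r).evalf()) < 1e-9 for r in roots])   # -> [True, True, True, True]
```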

By the theorem above, we know that a polynomial is solvable by radicals if and only if its Galois group is solvable.

Consider the cyclotomic equation x^5 − 1 = 0.

This equation is solvable in radicals: its splitting field is generated by the 5th roots of unity, so the resulting Galois group is abelian and hence solvable.

The roots of this equation are simply the 5th roots of unity e^{2πik/5}, where k ∈ {0, 1, −1, −2, 2}.

These roots of unity can be expressed by radicals.

Similarly, all equations of the form x^5 = m, where m is a constant, are solvable by radicals, since the roots are simply m^{1/5} e^{2πik/5}.
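A small numerical sketch (assuming sympy, with m = 2 chosen arbitrarily for illustration) confirms that the roots of x^5 = m are exactly the radical expressions m^{1/5} e^{2πik/5}:

```python
from sympy import symbols, solve, Rational, exp, I, pi, Abs

x = symbols('x')
m = 2                                   # arbitrary illustrative constant
quintic_roots = solve(x**5 - m, x)      # sympy returns these in radical form
radical_form = [m**Rational(1, 5) * exp(2*pi*I*k/5) for k in range(5)]

# every claimed radical expression is numerically one of the actual roots
for r in radical_form:
    assert min(Abs((r - s).evalf()) for s in quintic_roots) < 1e-9
print("all five roots match")
```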

Eisenstein's Irreducibility Criterion

Consider a polynomial of the form f(x) = a_n x^n + a_{n−1} x^{n−1} + … + a_1 x + a_0 with a_i ∈ Z.

If for some prime number p the coefficients a_0, a_1, …, a_{n−1} are divisible by p, the leading coefficient a_n is not divisible by p, and a_0 is not divisible by p^2, then the polynomial is irreducible over Q; that is, it cannot be decomposed non-trivially into two polynomials of lesser degree with rational coefficients. In particular, it does not have a root in Q.
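The criterion is easy to apply mechanically. The following small helper is hypothetical code (not from the text); it checks the hypotheses for a list of integer coefficients a_0, …, a_n and a prime p, and confirms that p = 2 works for the polynomial p(x) = x^5 − 2x^3 − 8x − 2 used in the proof above.

```python
def eisenstein(coeffs, p):
    """coeffs = [a0, a1, ..., an]; return True if Eisenstein's criterion holds at the prime p."""
    a0, an = coeffs[0], coeffs[-1]
    return (all(a % p == 0 for a in coeffs[:-1])   # p divides a0, ..., a_{n-1}
            and an % p != 0                        # p does not divide the leading coefficient
            and a0 % (p * p) != 0)                 # p^2 does not divide a0

# x^5 - 2x^3 - 8x - 2 has coefficients (a0, ..., a5) = (-2, -8, 0, -2, 0, 1)
print(eisenstein([-2, -8, 0, -2, 0, 1], 2))   # -> True, so it is irreducible over Q
```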

If a quintic polynomial is reducible, then its roots can be expressed by radicals.

Also, any polynomial which can be decomposed into rational factors of degree at most 4 defined over Q can be solved, since we can express the roots of each of these factors in radicals.

Thus in general, polynomials of degree 5 or greater cannot be solved using radicals, while polynomials of degree 1, 2, 3 or 4 can always be solved by radicals.

Although polynomials of degree five or larger cannot be solved by radicals in general, there are many more specific types of polynomials f(x) that can. For instance, if f(x) = f_1(x) f_2(x) ⋯ f_k(x), where each f_i(x) is defined over Q and has degree at most 4, then f(x) can be solved by radicals.

3.14 Unit Summary/ Things to Remember

1. A field extension of a field F is a pair (σ, K), where K is a field and σ is a monomorphism of F into K.

2. The extension K of F is called an Algebraic Extension of F if every element in K is

algebraic over F

3. Field extensions which are not algebraic, i.e. which contain transcendental elements, are

called transcendental.

4. Every finite extension of a field is an algebraic extension.

5. An algebraic extension K of a field F is a said to be a separable extension, if every element

of K is separable over F, otherwise K is said to be an inseparable extension.

6. Every polynomial over a field of characteristic zero is separable.

7. Any algebraic extension of a field F of characteristic zero is a separable extension.

8. An algebraic field extension L/K is said to be normal if L is the splitting field of a family of

polynomials in K[X].

9. A field F is called Perfect if all finite extensions of F are separable.

10. Fields having only a finite number of elements are called Finite Fields.

11. Any finite separable extension of an infinite field is a simple extension.

12. Finite fields having the same number of elements are isomorphic.

13. A field extension L/K is called a simple extension if there exists an element θ in L with L =

K(θ) where the element θ is called a primitive element, or generating element.

14. A field K is algebraically closed if every non-constant polynomial in K[X] has a root in K.

15. Every algebraically closed field is perfect. Also A finite field cannot be algebraically

closed.

16. An automorphism σ of K is said to be an F-automorphism if σ(x) = x for every x ∈ F, where F is any field and K is any field extension of F.

17. A finite extension K of a field F is said to be a Galois Extension of F if F is the fixed subfield of K under the group G(K, F) of all F-automorphisms of K.

18. An equation is solvable by radicals if and only if its Galois group is solvable.

19. In general, polynomials of degree 5 or greater cannot be solved using radicals. However, there are many cases of degree-5 polynomials which are solvable by radicals.


3.15 Assignments/ Activities

1. Prove that if F is a field extension of K and K = FG for a finite group G of

automorphisms of F, then there are only finitely many subfields between F and K.

2. Let F be the splitting field over K of a separable polynomial. Prove that if Gal (F/K) is cyclic, then for each divisor d of [F:K] there is exactly one field E with K ⊆ E ⊆ F and [E:K] = d.

3. Let F be a finite, normal extension of Q for which | Gal (F/Q) | = 8 and each element of Gal (F/Q) has order 2. Find the number of subfields of F that have degree 4 over Q.

4. Let F be a finite, normal, separable extension of the field K. Suppose that the Galois group Gal (F/K) is isomorphic to D7. Find the number of distinct subfields between F and K. How many of these are normal extensions of K?

5. Show that F = Q( , i) is normal over Q; find its Galois group over Q, and find all intermediate fields between Q and F.

6. Let K be a field of characteristic zero, and let E be a radical extension of K. Then there

exists an extension F of E that is a normal radical extension of K.

7. Let p be a prime number, let K be a field that contains all pth roots of unity, and let F be

an extension of K. If [F:K] = |Gal(F/K)| = p, then F = K(u) for some u in F such that up

is in K.

8. Let K be a field of characteristic zero that contains all nth roots of unity, let a be an

element of K, and let F be the splitting field of xn-a over K. Then Gal(F/K) is a cyclic

group whose order is a divisor of n.

9. Let F be the splitting field of xn - 1 over a field K of characteristic zero. Then Gal(F/K)

is an abelian group.

10. Let f(x) be a polynomial over a field K of characteristic zero. The equation f(x) = 0 is

solvable by radicals if and only if the Galois group of f(x) over K is solvable.

3.16 Check Your Progress

1. A polynomial of degree n over a field can have at most n roots in any extension field.

2. Let K be a finite, separable field extension of a field F. Then K is a normal extension of F if and only if the fixed field under the Galois group G(K, F) is F itself; in case K is a normal extension of F, [K : F] = o(G(K, F)).

3. Prove that any subgroup of S5 that contains both a transposition and a cycle of length 5

must be equal to S5 itself.

4. If F is the splitting field over K of a separable polynomial and G = Gal(F/K), then

FG = K.


5. Let K be a finite field and let F be an extension of K with [F:K] = m. Then Gal(F/K) is a

cyclic group of order m.

6. State and Prove Fundamental theorem of Galois Theory.

7. Let f(x) ∈ K[x] be a polynomial with no repeated roots and let F be a splitting field for f(x) over K. If θ : K → L is a field isomorphism that maps f(x) to g(x) ∈ L[x] and E is a splitting field for g(x) over L, then there exist exactly [F:K] isomorphisms φ : F → E such that φ(a) = θ(a) for all a in K.

3.17 Points for discussion / Clarification

At the end of the unit you may like to discuss or seek clarification on some points. If so,

mention the same.

1. Points for discussion

_______________________________________________________________________

_______________________________________________________________________

________________________________________________________________________

2. Points for clarification

________________________________________________________________________

________________________________________________________________________

-

________________________________________________________________________

3.18 References

12. I. N. Herstein, Topics in Algebra, Second Edition, John Wiley and Sons, 2006.

13. John S. Rose, A Course on Group Theory, New York: Dover, 1994.

14. P.B. Bhattacharya, S.K. Jain and S.R. Nagpaul, Basic Abstract Algebra, 2nd Edition, Cambridge University Press, Indian Edition, 1997.


In a vector space, the set of scalars forms a field and acts on the vectors by scalar multiplication, subject to

certain formal laws such as the distributive law. In a module, the scalars need only be a ring, so the module

concept represents a significant generalization. In commutative algebra, it is important that both ideals and

quotient rings are modules, so that many arguments about ideals or quotient rings can be combined into a

single argument about modules. In non-commutative algebra the distinction between left ideals, ideals, and

modules becomes more pronounced, though some important ring theoretic conditions can be expressed either

about left ideals or left modules.

Noetherian module is a module that satisfies the ascending chain condition on its submodules,

where the submodules are partially ordered by inclusion.

Historically, Hilbert was the first mathematician to work with the properties of finitely generated

submodules. He proved an important theorem known as Hilbert's basis theorem which says that any

ideal in the multivariate polynomial ring of an arbitrary field is finitely generated. However, the

property is named after Emmy Noether who was the first one to discover the true importance of the

property.

Polynomial rings over fields have many special properties; properties that follow from the fact that

polynomial rings are not, in some sense, "too large". Emmy Noether first discovered that the key property of

polynomial rings is the ascending chain condition on ideals. Noetherian rings are named after her.

In mathematics, Hilbert's basis theorem states that every ideal in the ring of multivariate

polynomials over a field is finitely generated. This can be translated into algebraic geometry as

follows: every algebraic set over a field can be described as the set of common roots of finitely

many polynomial equations. The theorem is named for the German mathematician David Hilbert

who first proved it in 1888.

Hilbert produced an innovative proof by contradiction using mathematical induction; his method

does not give an algorithm to produce the finitely many basis polynomials for a given ideal: it only

shows that they must exist. One can determine basis polynomials using the method of Gröbner

bases.
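To make the last remark concrete, here is a small sketch (assuming sympy; the ideal chosen is arbitrary) that computes a Gröbner basis, thereby exhibiting an explicit finite generating set of the kind Hilbert's theorem guarantees exists.

```python
# Sketch: a Groebner basis gives explicit finite generators for an ideal in Q[x, y].
from sympy import symbols, groebner

x, y = symbols('x y')
# an arbitrary ideal I = (x**2 + y**2 - 1, x*y - 2) in Q[x, y]
G = groebner([x**2 + y**2 - 1, x*y - 2], x, y, order='lex')
print(G)   # finitely many basis polynomials generating I
```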

Proof

The following more general statement will be proved: if R is a left (respectively right) Noetherian

ring then the polynomial ring R[X] is also a left (respectively right) Noetherian ring.


Let I be an ideal in R[X] and assume for a contradiction that I is not finitely generated. Inductively

construct a sequence f1, f2, ... of elements of I such that fi+1 has minimal degree in I \ Ji, where Ji is

the ideal generated by f1, ..., fi. Let ai be the leading coefficient of fi and let J be the ideal of R

generated by a1, a2, ... Since R is Noetherian there exists an integer N such that J is generated by a1,

..., a_N, so in particular a_{N+1} = u_1a_1 + ... + u_Na_N for some u_1, ..., u_N in R. Now consider g = u_1f_1x^{n_1} + ... + u_Nf_Nx^{n_N}, where n_i = deg f_{N+1} − deg f_i. Because deg g = deg f_{N+1} and the leading coefficients of g and

fN+1 agree, the difference fN+1 − g has degree strictly less than the degree of fN+1, contradicting the

choice of fN+1. Therefore I is finitely generated, and the proof is complete.

A constructive proof also exists: Given an ideal I of R[X], let L be the set of leading coefficients of the elements of I. Then L is an ideal in R, so it is finitely generated by a_1, ..., a_n in L; pick f_1, ..., f_n in I such that the leading coefficient of f_i is a_i. Let d_i be the degree of f_i and let N be the maximum of the d_i. Now for each k = 0, ..., N − 1 let L_k be the set of leading coefficients of elements of I with degree at most k. Then again, L_k is an ideal in R, so it is finitely generated, say by a_{k,1}, ..., a_{k,m_k}. As before, let f_{k,i} in I have leading coefficient a_{k,i}. Let H be the ideal in R[X] generated by the f_i and the f_{k,i}. Then surely H is contained in I; assume there is an element f in I not belonging to H, of least degree d, and leading coefficient a. If d is larger than or equal to N then a is in L, so a = r_1a_1 + ... + r_na_n, and g = r_1X^{d−d_1}f_1 + ... + r_nX^{d−d_n}f_n is of the same degree as f and has the same leading coefficient. Since g is in H, f − g is not, which contradicts the minimality of f. If on the other hand d is strictly smaller than N, then a is in L_d, so a = r_1a_{d,1} + ... + r_{m_d}a_{d,m_d}. A similar construction as above gives the same contradiction. Thus I = H, which is finitely generated, and this finishes the proof.


In linear algebra, Jordan normal form (often called Jordan canonical form) shows that a given

square matrix M over a field K containing the eigenvalues of M can be transformed into a certain

normal form by changing the basis. This normal form is almost diagonal in the sense that its only

non-zero entries lie on the diagonal and the superdiagonal. This is made more precise in the Jordan–

Chevalley decomposition. One can compare this result with the spectral theorem for normal

matrices, which is a special case of the Jordan normal form.

It is named after Camille Jordan.

In abstract algebra, the Artin–Wedderburn theorem is a classification theorem for semisimple

rings. The theorem states that a semisimple ring R is isomorphic to a product of ni-by-ni matrix

rings over division rings Di, for some integers ni, both of which are uniquely determined up to

permutation of the index i. In particular, any simple left or right Artinian ring is isomorphic to an n-

by-n matrix ring over a division ring D, where both n and D are uniquely determined.


As a direct corollary, the Artin–Wedderburn theorem implies that every simple ring that is finite-

dimensional over a division ring (a simple algebra) is a matrix ring. This is Joseph Wedderburn's

original result. Emil Artin later generalized it to the case of Artinian rings.

The Artin–Wedderburn theorem reduces classifying simple rings over a division ring to classifying

division rings that contain a given division ring. This in turn can be simplified: The center of D

must be a field K. Therefore R is a K-algebra, and itself has K as its center. A finite-dimensional

simple algebra R is thus a central simple algebra over K. Thus the Artin–Wedderburn theorem

reduces the problem of classifying finite-dimensional central simple algebras to the problem of

classifying division rings with given center.


UNIT-4

Noetherian and artinian modules and rings

STRUCTURE PAGE No.

4.1 Introduction 30

4.2 Objective 30

4.3 Noetherian and Artinian modules and rings 31-38

4.4 Hilbert Basis Theorem 38-40

4.5 Wedderburn Artin theorem 40-44

4.6 Uniform and Primary modules 44-45

4.7 Noether–Lasker theorem 46-47

4.8 Smith Normal Form over a PID and rank 47-51

4.9 Unit Summary/ Things to Remember 52

4.10 Assignments/ Activities 53

4.11 Check Your Progress 54-55

4.12 Points for Discussion/ Clarification 55-56

4.13 References 56


4.1 Introduction

Commutative algebra is the branch of abstract algebra that studies commutative rings, their

modules, and modules over such rings. Module theory began with Richard Dedekind's work on

modules, itself based on the earlier work of Ernst Kummer and Leopold Kronecker. Later, David

Hilbert introduced the term ring to generalize the earlier term number ring. Hilbert introduced a

more abstract approach to replace the more concrete and computationally oriented methods

grounded in such things as complex analysis and classical invariant theory. In turn, Hilbert strongly

influenced Emmy Noether, to whom we owe much of the abstract and axiomatic approach to the

subject. Another important milestone was the work of Hilbert's student Emanuel Lasker, who

introduced primary modules and proved the first version of the Lasker–Noether theorem.

In particular, the ring of integers is a principal ideal domain, so one can always calculate the Smith normal form of an integer matrix. The Smith normal form is very useful for working with finitely

generated modules over a principal ideal domain, and in particular for deducing the structure of a

quotient of a free module.

4.2 Objective

After the completion of this unit one should be able to:

understand the central notions of commutative algebra.

describe uniform and primary modules.

prove the well-known theorems: the Hilbert Basis Theorem, the Noether–Lasker theorem and the Wedderburn–Artin theorem.

understand representation theory and the theory of Banach algebras.

state and prove that every matrix over a principal ideal domain is equivalent to a diagonal matrix (Smith normal form).

4.3 Noetherian and Artinian modules and rings

A ring in which every strictly ascending chain of right (left) modules is finite is called a right (left) noetherian ring.

Equivalently, one can define a right (left) noetherian ring to be a ring in which for every infinite ascending chain M_1 ⊆ M_2 ⊆ M_3 ⊆ … of right (left) modules there exists a positive integer n such that M_m = M_n for all m ≥ n.

Remark A right (left) noetherian ring is also known as a ring with ACC (i.e., Ascending

Chain Condition) on right (left) modules.

A ring in which ACC holds for right as well as left modules is called a noetherian ring.

Example 1 Every finite ring is clearly both right and left noetherian.

Example 2 Consider a division ring D. Since the only right modules of D are (0) and D itself, D is right noetherian. For similar reasons D is also left noetherian.

If in a ring R every non-empty set of right (left) modules, partially ordered by the inclusion relation, has a maximal element, we say that the maximum condition holds for right (left) modules of R.

Remark As the results for left noetherian rings are in complete analogy with those for right

Noetherian rings, we shall confine ourselves to right Noetherian rings only.

A right module I of a ring R is said to be finitely generated if it is generated by a finite subset of R.

Theorem For any ring R the following statements are equivalent:-

a. R is right noetherian.


b. Maximum condition holds for right modules of R.

c. Every right module of R is finitely generated.

Lemma A homomorphic image of a right noetherian ring is right noetherian.

Proof Let S be a homomorphic image of a right noetherian ring R. Then S ≅ R/M for some module M of R. So it is sufficient to prove that R/M is right noetherian.

Let J_1 ⊆ J_2 ⊆ J_3 ⊆ … be an ascending chain of right modules of R/M. Now each J_i is of the form K_i/M, where K_i is a right module of R containing M. Also J_i ⊆ J_{i+1} implies K_i ⊆ K_{i+1}.

So the above ascending chain gives rise to the ascending chain K_1 ⊆ K_2 ⊆ K_3 ⊆ … of right modules of R. But as R is right noetherian there exists a positive integer n such that K_m = K_n for all m ≥ n. This implies that J_m = J_n for all m ≥ n. Hence R/M is right noetherian.

Theorem Let M be a module of a ring R. Then R is right noetherian if and only if both M and R/M are right noetherian.

Proof If R is right noetherian then M is right noetherian and R/M is also right noetherian.

Conversely, let both M and R/M be right noetherian and let J_1 ⊆ J_2 ⊆ J_3 ⊆ … be an ascending chain of right modules of R. Then J_1 ∩ M ⊆ J_2 ∩ M ⊆ J_3 ∩ M ⊆ … is an ascending chain of right modules of R contained in M. Since M is right noetherian there exists a positive integer n such that J_m ∩ M = J_n ∩ M for all m ≥ n.

Again (J_1 + M)/M ⊆ (J_2 + M)/M ⊆ (J_3 + M)/M ⊆ … is an ascending chain of right modules of R/M. As R/M is right noetherian there exists a positive integer t such that (J_m + M)/M = (J_t + M)/M for all m ≥ t. If r = max(n, t), then J_m ∩ M = J_r ∩ M and (J_m + M)/M = (J_r + M)/M for all m ≥ r. But then J_m + M = J_r + M for all m ≥ r. We claim that J_m = J_r for all m ≥ r. Now for all m ≥ r,

J_m = J_m ∩ (J_m + M)
    = J_m ∩ (J_r + M), as J_m + M = J_r + M,
    = J_r + (J_m ∩ M), by the modular law (since J_r ⊆ J_m),
    = J_r + (J_r ∩ M), since J_m ∩ M = J_r ∩ M,
    = J_r, as J_r ∩ M ⊆ J_r.

Hence J_m = J_r for all m ≥ r. As a consequence R is right noetherian.

A ring in which every strictly descending chain of right (left) modules is finite is called a right (left) artinian ring.

Alternatively, one can define a right (left) artinian ring to be a ring in which for every infinite descending chain M_1 ⊇ M_2 ⊇ M_3 ⊇ … of right (left) modules there exists a positive integer n such that M_m = M_n for all m ≥ n.

Remark A right (left) artinian ring is also known as a ring with DCC (Descending Chain

Condition) on right (left) modules.

A ring in which DCC holds for right as well as left modules is called an artinian ring.

Example 1 Every division ring D is right artinian as its only right modules are (0) and D

itself. Because of similar reason D is also left artinian.

Example 2 Every finite ring is both right and left Artinian.

Example 3 Consider the set Z(p^∞) of all rational numbers of the form m/p^n such that 0 ≤ m/p^n < 1, where p is a fixed prime number, m is an arbitrary non-negative integer and n runs through all non-negative integers. Then Z(p^∞) is an abelian group under addition modulo 1. We make Z(p^∞) a ring by defining ab = 0 for all a, b ∈ Z(p^∞). Note that each subgroup of Z(p^∞) is a module of Z(p^∞). Now let M be any proper module of Z(p^∞) and let k be the least non-negative integer such that a/p^k ∉ M for some positive integer a. Notice that k is positive and p does not divide a, for otherwise we should have b/p^{k−1} ∉ M for some positive integer b, which is against the choice of k.

Now 0, 1/p^{k−1}, 2/p^{k−1}, …, (p^{k−1} − 1)/p^{k−1} ∈ M. We contend that M consists of precisely these elements. If our contention is not correct, there exists an element c/p^i ∈ M, where c is a positive integer, i ≥ k and p does not divide c. Since (c, p) = 1, we can find integers r and s such that cr + ps = 1.

Now c/p^i ∈ M and i ≥ k give c/p^k = p^{i−k}(c/p^i) ∈ M, hence rc/p^k ∈ M; also s/p^{k−1} = sp/p^k ∈ M. It follows that rc/p^k + sp/p^k = (cr + sp)/p^k = 1/p^k, reduced modulo 1, lies in M, and hence a/p^k ∈ M for every positive integer a, a contradiction to the choice of k.

Hence M = {0, 1/p^{k−1}, 2/p^{k−1}, …, (p^{k−1} − 1)/p^{k−1}}. We denote this module by M_{k−1}. Thus we conclude that the only proper modules of Z(p^∞) are of the form M_{j−1} for each positive integer j.

Since each proper module of Z(p^∞) has a finite number of elements, DCC holds. Hence Z(p^∞) is an artinian ring.

This ring is not noetherian, as it contains the infinite properly ascending chain of modules

M_1 ⊂ M_2 ⊂ M_3 ⊂ … ⊂ M_n ⊂ …

If in a ring R, every non-empty collection of all right (left) modules of R, partially ordered by

inclusion relation, has a minimal element we say that minimum condition holds for right (left)

modules of R.

Note that by a minimal element of a partially ordered non-empty set F under the relation ⊆, we mean an element A ∈ F such that there exists no B ∈ F satisfying B ⊂ A. In other words, A is a minimal element of F if and only if B ∈ F and B ⊆ A imply B = A. As in the case of a maximal element, a set may possess more than one minimal element.

As the results for left artinian rings are completely analogous to those of right Aritinian rings we

shall henceforth discuss only right artinian rings.

Theorem For a ring R, the following statements are equivalent.

a. R is right artinian.

b. The minimum condition holds for right modules of R.

Proof (a) ⟹ (b): Let F be a non-empty set of right modules of R. Let I_1 be an element of F. If I_1 is not a minimal element, we can find another right module I_2 in F such that I_2 ⊂ I_1. If F has no minimal element this process can be repeated indefinitely, giving rise to an infinite strictly descending chain I_1 ⊃ I_2 ⊃ I_3 ⊃ … of right modules of R in F. This is a contradiction to the fact that R is right artinian. Hence F has a minimal element.

(b) ⟹ (a): Let I_1 ⊇ I_2 ⊇ I_3 ⊇ … be a descending chain of right modules of R. Consider F = {I_i | i = 1, 2, 3, …}. Then F is non-empty as I_1 ∈ F. By (b), F has a minimal element, say I_n, for some positive integer n. Now I_m ⊆ I_n for all m ≥ n. Since I_n is a minimal element of F, this forces I_m = I_n for all m ≥ n. Consequently R is right artinian.

Now we give examples of ring which are right noetherian.(artinian) but not left

Noetherian.(Artinian).

Example Let

R = { [ a  b ; 0  c ] : a ∈ Z; b, c ∈ Q }.

Then R is a ring under matrix addition and multiplication. We claim that R is right noetherian but not left noetherian.

For any non-negative integer k, consider the set

A_k = { [ 0  m/2^k ; 0  0 ] : m ∈ Z }.

Then A_k is a left module of R. Further A_k ⊂ A_{k+1}, as m/2^k = 2m/2^{k+1} and [ 0  1/2^{k+1} ; 0  0 ] ∉ A_k.

Thus we get a non-terminating strictly ascending chain A_0 ⊂ A_1 ⊂ A_2 ⊂ A_3 ⊂ … of left modules of R. This shows that R is not left noetherian.

To prove that R is right noetherian it is sufficient to establish that each non-zero right module of R is finitely generated.


Let )]0([A be a right module of R and let Ayeeae 221211 where ije denotes the matrix with

„1‟ in thji ),( position and zero elsewhere, and QyZa ,, .

Note also that ijklij eee if j = k and 0klijee if .kj Two cases arise.

Case 1 .0a Let be the least positive integer such that Acebee 221211 for some

., Qcb Then we claim that either A is generated by matrices 221211 ,, eee or by 11e and .12e Now

AeAcebee 12221211 )( .)(1

221212122212 AbeebeAeee

Also .)( 1111221211221211 AeAecebeeAcebee

Thus we get .22 Ace In case Oc for all and b,A is generated by ., 1211 ee In other case

,2

1)( 222222 Aecee

and A is generated by ,, 1212 ese and .22e

Case 2 .0a If there exists Acebe 2212 such that all elements is A are of the type )( 2212 cebe

for some ,Q then A is generated by single matrix .2212 cebe Otherwise there exist Qcb 11,

such that Aeceb 221121 but .11 bccb Then .22122212 AceAbeAbee

Either c = 0 or .22 Ae Thus is either generated by 12e and 22e or by 12e alone. Hence A is finitely

generated. Consequently R is right Noetherian.

Example Let

R = { [ a  b ; 0  c ] : a ∈ Q; b, c ∈ R }.

Clearly R is a ring under matrix addition and multiplication. We claim that R is right artinian but not left artinian. Since R, as a vector space over Q, is of infinite dimension, there exist real numbers a_1, a_2, a_3, …, a_n, … which are linearly independent over Q. For each positive integer k, put

A_k = { [ 0  v ; 0  0 ] : v in the Q-subspace of R generated by a_k, a_{k+1}, a_{k+2}, … }.

It can be easily checked that A_k is a left module of R. Further A_{k+1} ⊂ A_k, as a_k e_{12} ∉ A_{k+1}. Since the set {a_1, a_2, a_3, …} is infinite, we get an infinite strictly descending chain of left modules A_1 ⊃ A_2 ⊃ A_3 ⊃ … of R. This establishes the fact that R is not left artinian.

Let 21 AA be two modules of R and 2221211 Ayeeae where RyQa ,, and seij ' as

defined in previous example. seij '( are called matrix units). If 0a then as proved in previous

example either 222212211 ,, AeAeAae or 212211 , AeAae and .0y In the former situation

21121111 .,.)/1)(( AeeiAeaae

and so ,2 RA which is absurd as .21 AA In the next situation 2A is generated by 11e and 12e

Consider now 321 AAA , where 3A is a non-zero right module of R.

Now let

.0 31211 Abeae If 0a then 11123113111211 )/1)(( eeAeAeabeae

,32312 AAAe a contradiction. Hence .0a Hence 0b

and .3221212

1)( Ae

bbee

Consequently 3A is generated .12e This at once prove that 3A is

minimal right module of R. Hence the strictly descending chain of right modules

)0(321 AAA is finite.

Finally if 0a then .22212 Ayee If all other elements of 2A are of the type )( 2212 ee for

some R then 2A is minimal right module and )0(21 AA is the strictly descending chain of

right modules. Otherwise as seen in previous example 2A is generated by either 12e and 22e or by

12e only.

In case 2A is generated by 12e only; it is minimal right module of R and we are done. In the case

2A is generated by 12e and ,22e a similar argument will show that if 321 AAA for some right


module 3A of R, then either 3A is generated by 12e or by 2212 fede for some ., Rfd In each

case 3A is a minimal right module of R.

Consequently R does not admit any infinite strictly descending chain of right modules. This

establishes the fact R is right Artinian.

Example Z is a noetherian ring but not artinian. In fact for any positive integer n, the strictly descending chain (n) ⊃ (2n) ⊃ (4n) ⊃ … of modules of Z is infinite.

We now give a proof of a famous theorem of Hilbert.

4.4 Hilbert’s Basis Theorem

Hilbert's basis theorem states that every ideal in the ring of multivariate polynomials over a field

is finitely generated. This can be translated into algebraic geometry as follows: every algebraic set

over a field can be described as the set of common roots of finitely many polynomial equations. The

theorem is named for the German mathematician David Hilbert who first proved it in 1888.

Statement: If R is a right noetherian ring with unity then ],[xR the ring of polynomials over R, is

right noetherian.

Proof It is sufficient to prove that every non-zero right module of R[x] is finitely generated.

Let I be a non-zero right module of R[x]. For each integer k ≥ 0, define I_k = { a ∈ R | a ≠ 0 and there exists a polynomial a_0 + a_1x + … + a_{k−1}x^{k−1} + ax^k ∈ I } ∪ {0}.

Then I_k is a right module of R and I_k ⊆ I_{k+1} for all integers k ≥ 0.

Since R is right noetherian there exists a positive integer n such that I_m = I_n for all m ≥ n. Also each I_i, being a right module of the right noetherian ring R, is finitely generated. Let I_i = ⟨a_{i1}, a_{i2}, a_{i3}, …, a_{im_i}⟩ for all i = 0, 1, 2, …, n, where a_{ij} is the leading coefficient of a polynomial f_{ij} ∈ I of degree i. We claim that I is generated by the m_0 + m_1 + m_2 + … + m_n polynomials

f_{01}, f_{02}, …, f_{0m_0}, f_{11}, f_{12}, …, f_{1m_1}, …, f_{n1}, …, f_{nm_n}.

Let J = ⟨f_{01}, f_{02}, …, f_{nm_n}⟩. As each f_{ij} ∈ I, J ⊆ I.

Let f ∈ R[x] be such that f ∈ I, say f = c_0 + c_1x + c_2x^2 + … + c_sx^s, c_s ≠ 0. We shall apply induction on s. For s = 0, f = c_0 ∈ I_0; by the definition of J the elements a_{01}, a_{02}, …, a_{0m_0} generate I_0 and the corresponding polynomials f_{01}, …, f_{0m_0} lie in J, so c_0 ∈ J. This gives f ∈ J.

Suppose now that all non-zero polynomials in I of degree < s belong to J and let deg f = s > 0.

Let s ≥ n. The leading coefficient c_s of f belongs to I_s. But s ≥ n implies that c_s ∈ I_n, i.e.,

c_s = a_{n1}b_1 + a_{n2}b_2 + a_{n3}b_3 + … + a_{nm_n}b_{m_n} for some b_1, b_2, b_3, …, b_{m_n} ∈ R.   …(1)

The polynomial g = f − (f_{n1}b_1 + f_{n2}b_2 + … + f_{nm_n}b_{m_n})x^{s−n} is zero or of degree less than s, as the coefficient of x^s in g is equal to c_s − (a_{n1}b_1 + a_{n2}b_2 + … + a_{nm_n}b_{m_n}) = 0 by (1).

If g = 0 then f = (f_{n1}b_1 + … + f_{nm_n}b_{m_n})x^{s−n} ∈ J; if g ≠ 0 then by the induction hypothesis g ∈ J, and so in each case f = g + (f_{n1}b_1 + f_{n2}b_2 + … + f_{nm_n}b_{m_n})x^{s−n} ∈ J.

If s < n then c_s ∈ I_s, so c_s = a_{s1}d_1 + a_{s2}d_2 + … + a_{sm_s}d_{m_s} for some d_1, d_2, …, d_{m_s} ∈ R.

The polynomial h = f − (f_{s1}d_1 + f_{s2}d_2 + … + f_{sm_s}d_{m_s}) is either zero or of degree < s, as the coefficient of x^s in h is equal to c_s − (a_{s1}d_1 + a_{s2}d_2 + … + a_{sm_s}d_{m_s}) = 0. Hence once again h ∈ J, so f ∈ J.

Thus in each case every non-zero polynomial f which is in I is also in J. This gives I ⊆ J, which in turn implies that I = J; consequently I is finitely generated. Hence R[x] is right noetherian.

Example Let F be the ring of all real-valued functions on R. For any real number r > 0, we define M_r = { f ∈ F | f(x) = 0 for −r ≤ x ≤ r }. Then M_r is a module of F.

The strictly descending chain of modules M_1 ⊃ M_2 ⊃ M_3 ⊃ … and the strictly ascending chain of modules M_1 ⊂ M_{1/2} ⊂ M_{1/3} ⊂ … never terminate. Hence F is neither noetherian nor artinian.

Now we show by examples that a subring of a noetherian (artinian) ring need not be noetherian (artinian).

Example Consider Q[x]; as Q is noetherian, Q[x] is noetherian by Hilbert's Basis Theorem.

Let R = { f ∈ Q[x] | the constant term of f is in Z }.

R is a subring of Q[x]. The strictly ascending chain (x) ⊂ (x/2) ⊂ (x/4) ⊂ … of modules of R never terminates. Hence R is not noetherian.

The following example shows that Hilbert's Basis Theorem fails to hold for artinian rings.

Example Let F be a field. Then F is artinian. Consider F[x]; the strictly descending chain (x) ⊃ (x^2) ⊃ (x^3) ⊃ … of modules of F[x] is infinite. Hence F[x] is not artinian.

4.5 Wedderburn Artin theorem

The Wedderburn Artin theorem implies that every simple ring that is finite-dimensional over a

division ring (a simple algebra) is a matrix ring. This is Joseph Wedderburn's original result. Emil

Artin later generalized it to the case of Artinian rings.

Statement : Let R be a left (or right) artinian ring with unity and no nonzero nilpotent modules.

Then R is isomorphic to a finite direct sum of matrix rings over division rings.

Proof We first establish that each nonzero left module in R is of the form Re for some idempotent e. So let A be any nonzero left module. By virtue of the DCC on left modules in R, A contains a minimal left module M. By Lemma 3.1 either M^2 = (0) or M = Re for some idempotent e. If M^2 = (0), then (RM)^2 = (0); so, by hypothesis, RM = (0), which gives M = (0), a contradiction. Hence, M = Re. This yields that each nonzero left module contains a nonzero idempotent. Consider now a family F of left modules, namely,

F = { R(1 − e) ∩ A | e is a nonzero idempotent in A }.

Clearly, F is nonempty. Because R is left artinian, F has a minimal member, say R(1 − e) ∩ A. We claim R(1 − e) ∩ A = (0). Otherwise, there exists a nonzero idempotent e_1 ∈ R(1 − e) ∩ A. Clearly, e_1e = 0. Set e' = e + e_1 − ee_1. It is easy to verify that e'e' = e' and e_1e' = e_1 ≠ 0. It is also obvious that R(1 − e') ∩ A ⊆ R(1 − e) ∩ A. But e_1e' = e_1 ≠ 0 gives e_1 ∉ R(1 − e') ∩ A, while e_1 ∈ R(1 − e) ∩ A. Hence, R(1 − e') ∩ A ⊊ R(1 − e) ∩ A, a contradiction to the minimality of R(1 − e) ∩ A in F. This establishes our claim that R(1 − e) ∩ A = (0). Next, let a ∈ A. Then a − ae ∈ R(1 − e) ∩ A = (0). Thus, a = ae. Then A = Ae ⊆ Re ⊆ A proves that A = Re, as asserted.

Let S be the sum of all minimal left modules in R. Then S = Re for some idempotent e. If R(1 − e) ≠ (0), then R(1 − e) contains a minimal left module A; but then A ⊆ S = Re, so A ⊆ Re ∩ R(1 − e) = (0), a contradiction. Hence, R(1 − e) = (0), which proves that R = S = Σ_i A_i, where (A_i), i ∈ Λ, is the family of all minimal left modules in R. Then there exists a subfamily (A_i), i ∈ Λ', of the family of minimal left modules such that R = ⊕_{i∈Λ'} A_i. Let 1 = e_{i_1} + … + e_{i_n}, 0 ≠ e_{i_j} ∈ A_{i_j}, i_j ∈ Λ'. Then R = Re_{i_1} ⊕ … ⊕ Re_{i_n}. After re-indexing if necessary, we may write R = Re_1 ⊕ … ⊕ Re_n, a direct sum of minimal left modules.

In the family of minimal left modules Re_1, …, Re_n, choose a largest subfamily consisting of minimal left modules that are not isomorphic to each other as left R-modules. After renumbering if necessary, let this subfamily be Re_1, …, Re_k.

Suppose the number of left modules in the family (Re_i), 1 ≤ i ≤ n, that are isomorphic to Re_i is n_i. Then

R = (Re_1 ⊕ … ) ⊕ (Re_2 ⊕ … ) ⊕ … ⊕ (Re_k ⊕ … ),

with n_1, n_2, …, n_k summands in the successive brackets, where each set of brackets contains pairwise isomorphic minimal left modules, and no minimal left module in any pair of brackets is isomorphic to a minimal left module in another pair. By observing that Hom_R(Re_i, Re_j) = 0 for i ≠ j, 1 ≤ i, j ≤ k, and recalling from Schur's lemma in Unit 2 that Hom_R(Re_i, Re_i) = D_i, a division ring, we get

Hom_R(R, R) ≅ (D_1)_{n_1} ⊕ (D_2)_{n_2} ⊕ … ⊕ (D_k)_{n_k},

written in block-diagonal form, where (D_i)_{n_i} denotes the ring of n_i × n_i matrices over D_i. But since Hom_R(R, R) ≅ R^op as rings and the opposite ring of a division ring is a division ring, R is a finite direct sum of matrix rings over division rings.

Because matrix rings over division rings are both right and left noetherian and artinian, and a finite direct sum of noetherian and artinian rings is again noetherian and artinian, we get the result that if R is a left (or right) artinian ring with unity and no nonzero nilpotent ideals, then R is both right and left artinian (and noetherian). This is notable because not every ring that is artinian on one side is artinian on the other side.

The following example gives a ring that is left artinian but not right artinian.

Let R = [ Q  Q ; 0  0 ]. If A = [ 0  Q ; 0  0 ], then A is a module of R, and as a left R-module it is simple. Thus, A is left artinian. Also, the quotient ring R/A ≅ [ Q  0 ; 0  0 ], which is a field. Therefore R/A as an R/A-module (or as an R-module) is artinian. Because A is an artinian left R-module, we obtain that R as a left R-module is artinian; that is, R is a left artinian ring. But R is not right artinian, for there exists a strictly descending chain

[ 0  2Z ; 0  0 ] ⊃ [ 0  2^2 Z ; 0  0 ] ⊃ [ 0  2^3 Z ; 0  0 ] ⊃ …

of right modules of R.

As an application of the Wedderburn-Artin theorem, we prove the following useful result for a

certain class of group algebras.

Maschke's theorem – The theorem is named after Heinrich Maschke. It states that:

If F is the field of complex numbers and G is a finite group, then F(G) ≅ F_{n_1} ⊕ … ⊕ F_{n_k} for some positive integers n_1, …, n_k, where F_{n_i} denotes the ring of n_i × n_i matrices over F.

Proof We first prove that F(G) has no nonzero nilpotent modules. Let G = {g_1 = e, g_2, …, g_n} and x = Σ_i a_ig_i ∈ F(G). Set x* = Σ_i ā_i g_i^{−1}, where ā_i denotes the complex conjugate of a_i. Then

xx* = (Σ_{i=1}^{n} |a_i|^2) e + Σ_{g ≠ e} λ_g g

for some λ_g ∈ F. Hence, xx* = 0 implies Σ_{i=1}^{n} |a_i|^2 = 0, so each a_i = 0; that is, x = 0. Thus, xx* = 0 implies x = 0. Let A be a nilpotent module in F(G). Let a ∈ A. Then aa* ∈ A, so aa* is nilpotent, say (aa*)^r = 0. (We may assume r is even.) Set b = (aa*)^{r/2}. Then b* = b. Thus, bb* = b^2 = (aa*)^r = 0, which gives b = (aa*)^{r/2} = 0. Proceeding like this, we get aa* = 0. Hence, a = 0, which proves that A = (0). Hence, F(G) has no nonzero nilpotent modules.

Further, F(G) is a finite-dimensional algebra with unity over the field F. Therefore, F(G) is an artinian ring. Then by the Wedderburn–Artin theorem,

F(G) ≅ (D_1)_{n_1} ⊕ … ⊕ (D_k)_{n_k},

where the D_i, 1 ≤ i ≤ k, are division rings. Now each D_i is a finite-dimensional algebra over F. Let [D_i : F] = n, and let a ∈ D_i. Then 1, a, a^2, …, a^n are linearly dependent over F. Thus, there exist a_0, a_1, …, a_n (not all zero) in F such that a_0 + a_1a + … + a_na^n = 0. But since F is algebraically closed, the polynomial a_0 + a_1x + … + a_nx^n ∈ F[x] has all its roots in F. Hence, a ∈ F, which shows that D_i = F and completes the proof.

4.6 Uniform and Primary Modules

A nonzero module M is called uniform module if any two nonzero sub modules of M have

nonzero intersection. If U and V are uniform modules, we say U is sub isomorphic to V and write

VU provided U and V contain nonzero isomorphic sub modules.

A module M is called primary module if each nonzero sub module of M has uniform sub module

and any two uniform sub modules of M are sub isomorphic.

We note that Z as a Z-module is uniform and primary. Indeed, any uniform module must be

primary. Another example of a uniform module is a commutative integral domain regarded as a

module over itself.


Theorem Let M be a noetherian module or any module over a noetherian ring. Then each nonzero

sub module contains a uniform module.

Proof Let 0 ≠ x ∈ M. It is enough to show that xR contains a uniform sub module. If M is noetherian the sub module xR is also noetherian. But if R is noetherian then xR, being a homomorphic image of R, is noetherian.

For convenience, call, as usual, a nonzero sub module N of M large if N ∩ K ≠ 0 for all nonzero sub modules K of M.

Consider now the family F of all sub modules of xR which are not large. Clearly 0 ∈ F. Since xR is noetherian, F has a maximal member K, say. Because K is not large, K ∩ U = 0 for some nonzero sub module U of xR. We claim U is uniform. Otherwise, there exist nonzero sub modules A, B of U such that A ∩ B = (0). But then (K + A) ∩ B = (0). For if x' ∈ (K + A) ∩ B, then x' = k + a = b for some k ∈ K, a ∈ A, b ∈ B. This gives k = b − a ∈ K ∩ U = (0). Hence k = 0 and b − a = 0, which further yields a = b ∈ A ∩ B = (0). Thus b = 0 = a, proving x' = 0; that is, (K + A) ∩ B = (0). However, since K + A properly contains K and is not large, this yields a contradiction to the maximality of K. This shows U is uniform, completing the proof.

If R is a commutative noetherian ring and P is a prime module of R, then P is said to be associated with the module M if R/P embeds in M, or equivalently P = r(x) for some x ∈ M, where r(x) = { a ∈ R | xa = 0 } denotes the annihilator of x.

A module M is called P-primary for some prime module P if P is the only prime module associated with M.

Remark If R is a commutative noetherian ring and P is a prime module of R, then an R-module M is P-primary if and only if each nonzero sub module of M is sub isomorphic to R/P.

4.7 Noether-Lasker Theorem

The Lasker–Noether theorem states that every Noetherian ring is a Lasker ring, which means

that every ideal can be written as an intersection of finitely many primary ideals (which are related

to, but not quite the same as, powers of prime ideals). The theorem was first proven by the world

chess champion Emanuel Lasker (1905) for the special case of polynomial rings and convergent

power series rings, and was proven in its full generality in a brilliant paper by Emmy

Noether (1921).

The Lasker-Noether theorem is an extension of the fundamental theorem of arithmetic, and more

generally the fundamental theorem of finitely generated abelian groups to all Noetherian rings.

Statement

Let M be a finitely generated module over a commutative noetherian ring R. Then there exists a finite family N_1, …, N_l of sub modules of M such that

(a) ∩_{i=1}^{l} N_i = 0 and ∩_{i ≠ i_0} N_i ≠ 0 for all 1 ≤ i_0 ≤ l.

(b) Each quotient M/N_i is a P_i-primary module for some prime module P_i.

(c) All the P_i are distinct, 1 ≤ i ≤ l.

(d) The primary component N_i is unique if and only if P_i does not contain P_j for any j ≠ i.

We can understand Noether lasker theorem with the help of following theorem:

Let M be a nonzero finitely generated module over a commutative noetherian. ring R. Then there

are only a finite number of primes associated with M.

Proof Consider the family F consisting of the direct sums of cyclic uniform sub modules of M. F is

not empty and partial order F by RyRx jjjiIi if and only if JI and RyRx ii for


.Ii By Zorn‟s Lemma, F has a maximal member ,RxK jjj say. Since M is noetherian, K

is finitely generated and let

t

j j RxK1

. there exist Rxax jjj such that ,)( jjj Paxr the

prime ideal associated with jx R Set jjj axx ' and

t

j j RxK1

.'' Let )(xrQ be an

associated prime ideal of M. We shall show jPQ for some .1, ljj

Since K is a maximal member of F, K as well as K’ has the property that each intersects nontrivially

with any nonzero sub module L of M. Now let '.0 KxRy

Write

l

j jj xbbxy1

.' We claim )'()'( jjj xrbxr whenever .0' jj bx

Clearly ).'()'( jjj bxrxr Let .0' cbx jj This implies jjj Pxrcb )'( and so jPc

since .jj Pb Hence, ).'( jxrc

Furthermore, we note ,)'()()( 1 jjj

l

j PbxryrxrQ omitting those arising from

,0' jj bx

jPQ for some j.

Now the proof of Noether – Lasker theorem is a consequence of the above results.

Let liUi 1},{ be uniform sub modules obtained as in the proof of above theorem.

Choose iN to be a maximal member in the family KMK |{ contains no sub module sub

isomorphic to }iU with the choice if '1,...., lNN (a), (b), and (c) follow directly.

4.8 Smith Normal Form over a PID and rank

The Smith normal form is a normal form that can be defined for any matrix (not necessarily square)

with entries in a principal ideal domain (PID). The Smith normal form of a matrix is diagonal, and

can be obtained from the original matrix by multiplying on the left and right by invertible square

matrices.


We now set up some basic machinery to be used in connection with the Smith normal form and its

applications. Assume that M is a free Z-module of rank n, with basis x_1, …, x_n, and that K is a sub module of M with finitely many generators u_1, …, u_m. (We say that K is finitely generated.) We change to a new basis y_1, …, y_n via Y = PX, where X [respectively Y] is a column vector with components x_i [respectively y_i]. Since X and Y are bases, the n × n matrix P must be invertible, and we need to be very clear on what this means. If the determinant of P is nonzero, we can construct P^{−1}, for example by the "adjoint divided by determinant" formula given in Cramer's rule. But the underlying ring is Z, not Q, so we require that the coefficients of P^{−1} be integers. Similarly, we are going to change generators of K via V = QU, where Q is an invertible m × m matrix and U is a column vector with components u_i.

The generators of K are linear combinations of basis elements, so we have an equation of the form U = AX, where A is an m × n matrix called the relations matrix. Thus

V = QU = QAX = QAP^{−1}Y.

So the new relations matrix is B = QAP^{−1}.

Thus B is obtained from A by pre- and post-multiplying by invertible matrices, and we say that A and B are equivalent. We will see that two matrices are equivalent if and only if they have the same Smith normal form. The point we wish to emphasize now is that if we know the matrix P, we can compute the

new basis Y, and if we know the matrix Q, we can compute the new system of generators V. In our

applications, P and Q will be constructed by elementary row and column operations.

Now we are going to describe a procedure that is very similar to reduction of a matrix to echelon

form. The result is that every matrix over a principal ideal domain is equivalent to a matrix in Smith

normal form. Explicitly, the Smith matrix has nonzero entries only on the main diagonal. The main diagonal entries are, from the top, a_1, …, a_r (possibly followed by zeros), where the a_i are nonzero and a_i divides a_{i+1} for all i.

Let‟s start with the following matrix:

[ 0   0   22   0 ]
[ -2  2   -6  -4 ]
[ 2   2    6   8 ]

Now we assume a free Z-module M with basis x_1, x_2, x_3, x_4, and a sub module K generated by u_1, u_2, u_3, where u_1 = 22x_3, u_2 = -2x_1 + 2x_2 - 6x_3 - 4x_4, u_3 = 2x_1 + 2x_2 + 6x_3 + 8x_4. The first step

is to bring the smallest positive integer to the 1-1 position. Thus interchange rows 1 and 3 to obtain

[ 2   2    6   8 ]
[ -2  2   -6  -4 ]
[ 0   0   22   0 ]

Since all entries in column 1, and similarly in row 1, are divisible by 2, we can pivot about the 1-1

position; in other words, use the 1-1 entry to produce zeros. Thus add row 1 to row 2 to get

[ 2   2    6   8 ]
[ 0   4    0   4 ]
[ 0   0   22   0 ]

Add - 1 times column 1 to column 2, then add -3 times column 1 to column 3, and add -4 times

column 1 to column 4. The result is

[ 2   0    0   0 ]
[ 0   4    0   4 ]
[ 0   0   22   0 ]

Now we have “peeled off” the first row and column, and we bring the smallest positive integer to

the 2-2 position. It‟s already there, so no action is required. Furthermore, the

2-2 element is a multiple of the 1-1 element, so again no action is required. Pivoting about the 2-2

position, we add -1 times column 2 to column 4, and we have

[ 2   0    0   0 ]
[ 0   4    0   0 ]
[ 0   0   22   0 ]

Now we have peeled off the two rows and columns, and we bring the smallest positive integer to

the 3-3 position.. But 22 is not a multiple of 4, so we have more work to do. Add row 3 to row 2 to

get

[ 2   0    0   0 ]
[ 0   4   22   0 ]
[ 0   0   22   0 ]

Again we pivot about the 2-2 position; 4 does not divide 22, but if we add -5 times column 2 to

column 3, we have

[ 2   0    0   0 ]
[ 0   4    2   0 ]
[ 0   0   22   0 ]

Interchange columns 2 and 3 to get

[ 2    0   0   0 ]
[ 0    2   4   0 ]
[ 0   22   0   0 ]

Add -11 times row 2 to row 3 to obtain

[ 2   0     0   0 ]
[ 0   2     4   0 ]
[ 0   0   -44   0 ]

Finally, add -2 times column 2 to column 3, and then (as a convenience to get rid of the minus sign)

multiply row (or column) 3 by – 1; the result is

[ 2   0    0   0 ]
[ 0   2    0   0 ]
[ 0   0   44   0 ]

which is the Smith normal form of the original matrix. Although we had to backtrack to produce a

new pivot element in the 2-2 position, the new element is smaller than the old one (since it is a


remainder after division by the original number). Thus we cannot go into an infinite loop, and the

algorithm will indeed terminate in a finite number of steps. Now we have the following

interpretation.

We have a new basis y_1, y_2, y_3, y_4 for M, and new generators v_1, v_2, v_3 for K, where v_1 = 2y_1, v_2 = 2y_2 and v_3 = 44y_3. In fact, since the v_j are nonzero multiples of the corresponding y_j, they are linearly independent, and consequently form a basis of K.
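The computation above can be cross-checked mechanically. The sketch below assumes a reasonably recent sympy, which provides smith_normal_form over the integers, and reproduces the invariant factors 2, 2, 44 for the same relations matrix.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[0,  0, 22,  0],
            [-2, 2, -6, -4],
            [2,  2,  6,  8]])
print(smith_normal_form(A, domain=ZZ))
# expected: a diagonal matrix with invariant factors 2, 2, 44 on the main diagonal
```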

The above discussion indicates that the Euclidean algorithm guarantees that the Smith normal form

can be computed in finitely many steps. Therefore the Smith procedure can be carried out in any

Euclidean domain. In fact we can generalize to a principal ideal domain. Suppose that at a particular

stage of the computation, the element „a‟ occupies the 1-1 position of the Smith matrix S, and the

element b is in row 1, column 2. To use a as a pivot to eliminate b, let d be the greatest common

divisor of a and b, and let r and s be elements if R such that dbsar . We post multiply the Smith

matrix by a matrix T of the following form

10000

01000

00100

000/

000/

das

dbr

The 22 matrix in the upper left hand corner has determinant -1, and is therefore invertible over R.

The element in the 1-1 position of ST is ,dbsar and the element in the 1-2 position is

,0// dbadab as desired. We have replaced the pivot element a by a divisor d, and this will

decrease the number if prime factors, guaranteeing the finite termination of the algorithm.

Similarly, if b were in the 2-1 position, we would premultiply S by the transpose of T; thus in the

upper left hand corner we would have

dadb

sr

//
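A minimal sketch of this pivot step over Z (hypothetical helper functions, not from the text): the extended Euclidean algorithm produces r and s with ar + bs = d, from which the 2 × 2 block of T can be written down directly.

```python
def bezout(a, b):
    """Return (d, r, s) with d = gcd(a, b) = a*r + b*s (extended Euclidean algorithm)."""
    if b == 0:
        return (a, 1, 0)
    d, r1, s1 = bezout(b, a % b)
    return (d, s1, r1 - (a // b) * s1)

def pivot_block(a, b):
    """2x2 block of T used to replace the pivot a by d = gcd(a, b) and clear b."""
    d, r, s = bezout(a, b)
    return [[r,  b // d],
            [s, -a // d]]

a, b = 4, 22                      # compare the 4-and-22 step in the worked example above
d, r, s = bezout(a, b)
print(d, a * r + b * s)           # -> 2 2
print(pivot_block(a, b))          # -> [[-5, 11], [1, -2]], determinant -1
```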


4.9 Unit Summary/ Things to Remember

1 A ring in which every strictly ascending chain of right (left) modules is finite, is called a

Right (Left) noetherian ring.

2 A ring in which every strictly descending chain of right (left) modules is finite is called a

right (left) artinian ring.

3 Hilbert’s Basis Theorem: If R is a right noetherian ring with unity then ],[xR the ring of

polynomials over R, is right noetherian.

4 Wedderburn Artin theorem: Let R be a left (or right) artinian ring with unity and no

nonzero nilpotent modules. Then R is isomorphic to a finite direct sum of matrix rings over

division rings.

5 A nonzero module M is called uniform module if any two nonzero sub modules of M have

nonzero intersection.

6 A module M is called primary module if each nonzero sub module of M has uniform sub

module and any two uniform sub modules of M are sub isomorphic.

7 Let M be a nonzero finitely generated module over a commutative noetherian. ring R. Then

there are only a finite number of primes associated with M.

8 Lasker-Noether Decomposition Theorem: Let R be a commutative, Noetherian ring, and

let I be an ideal of R. There exist primary ideals Q_1, …, Q_n with I = Q_1 ∩ … ∩ Q_n, such that no Q_i contains the intersection of the other primary ideals, and the ideals Q_i have distinct associated primes. Furthermore, in any such representation of I as an intersection of primary ideals, there must be n ideals, and the set of their associated prime ideals must be the same.

9 Every matrix over a principal ideal domain is equivalent to a matrix in Smith normal form.


4.10 Assignments/ Activities

1. Prove that every commutative Artinian ring possesses a finite number of proper prime

modules.

2. Let R be the direct sum of rings .,..., 2 nRRR Show that R is right noetherian.(artinian) if and

only if each iR is right noetherian.(artinian).

3. If in a noetherian. ring R every module generated by two elements is principal show that R

is a principal Module Ring.

Let A be the matrix

61212

033

032

over the integers. Find the Smith normal form of A.

4 Show that every principal left ideal ring is noetherian.

5 The nonzero components ia of the Smith normal form S of A are called the invariant factors

of A. Show that the invariant factors of A are unique (up to associates).

6 Prove that the intersection of all prime ideals in a noetherian ring is nilpotent.

7 Show that a square matrix P over the integers has an inverse with integer entries if and only

if P is unimodular, that is, the determinant of P is .1

8 Let V be the direct sum of the R-modules ,,....,1 nVV and let W be the direct sum of R-

modules .,....,1 mWW Indicate how a module homomorphism V to W can be represented by a

matrix. (The entries of the matrix need not be elements of R.)


9 Show that if nV is the direct sum of n copies of the R-module V, then we have a ring

isomorphism EndR n

n MV )( (EndR )).(V

4.11 Check Your Progress

1. Prove that if an integral domain with unity, has finite number of modules, and then it

is a field.

2. If R is right noetherian.(artinian) prove that ,nR the ring of nn matrices over R is

right noetherian (artinian).

3. If in a noetherian. ring R every module generated by two elements is principal show

that R is a principal Module Ring.

4. Show that two nm matrices are equivalent if and only if they have the same

invariant factors, i.e. if and only if they have the same Smith normal form.

5. State and Prove Hilbert‟s Basis Theorem.

6. Recall that when a matrix over a field is reduced to row-echelon form (only row

operations are involved), a pivot column is followed by non-pivot columns whose

entries are zero in all rows below the pivot element. When a similar computation is

carried out over the integers, or more generally over a Euclidean domain, the

resulting matrix is said to be in Hermite normal form. Let

121812

7069

51346

A

Carry out the following sequence of steps:


1. Add -1 times row 1 to row 2

2. Interchange rows 1 and 2

3. Add -2 times row 1 to row 2, and then add -4 times row 1 to row 3

4. Add -1 times row 2 to row 3

5. Interchange rows 2 and 3

6. Add -3 times row 3 to row 3

7. Interchange rows 2 and 3

8. Add -4 times row 2 to row 3

9. Add 5 times tow 2 to row 1

10. Add 2 times row 3 to row 1, and then add row 3 to row 2

7. Continuing Problem 6, consider the simultaneous equations

12812,769,51346 zyxyxzyx (mod m)

For which values of 2m will the equations be consistent?

i. Show that if R is regarded as an R-module, then End_R(R) is isomorphic to the opposite ring R^o.

ii. Let R be a ring, and let f ∈ End_R(R). Show that for some r ∈ R we have f(x) = xr for all x ∈ R.

iii. Let M be a free R-module of rank n. Show that End_R(M) ≅ M_n(R^o), a ring isomorphism.

4.12 Points for discussion / Clarification

At the end of the unit you may like to discuss or seek clarification on some points. If so,

mention the same.


1. Points for discussion

_______________________________________________________________________

_______________________________________________________________________

_______________________________________________________________________

________________________________________________________________________

2. Points for clarification

__________________________________________________________________________

__________________________________________________________________________

-

__________________________________________________________________________

__________________________________________________________________________

4.13 References

15. D.J.S Robinson, A Course in the Theory of Groups, 2nd

Edition, New York: Springer-Verlag,

1995.

16. J. S. Lomont, Applications of Finite Groups, New York: Dover, 1993.

17. John S. Rose, A Course on Group Theory. New York: Dover, 1994.


18. P.B. Bhattacharya, S.K. Jain and S.R. Nagpaul, Basic Abstract Algebra,

(2nd

Edition),Cambridge University Press, Indian Edition, 1997.

19. John B. Fraleigh, A first Course in Abstract Algebra, 7th

Edition, Pearson Education, 2004.


UNIT-5

Fundamental Structure theorem for finitely generated modules

STRUCTURE PAGE NO.

5.1 Introduction 57

5.2 Objective 57

5.3 Structure theorem for finitely generated modules over PID 57-60

5.4 Application to finitely generated abelian groups 60-61

5.5 Rational Canonical Form 61-64

5.6 Generalized Jordan form over any field 64-65

5.7 Unit Summary/ Things to Remember 65-66

5.8 Assignments/ Activities 67

5.9 Check Your Progress 67-68

5.10 Points for Discussion/ Clarification 68-69

5.11 References 69


5.1 Introduction

The structure theorem for finitely generated modules over a principal ideal domain is a

generalization of the fundamental theorem of finitely generated abelian groups and roughly states

that finitely generated modules can be uniquely decomposed in much the same way that integers

have a prime factorization. The result provides a simple framework to understand various canonical

form results for square matrices over fields.

5.2 Objective

After the completion of this unit one should be able to:

generalize the fundamental theorem of finitely generated abelian groups.

decompose finitely generated modules into a direct sum of cyclic modules.

describe the applications of Structure theorem for finitely generated modules over a

principal ideal domain

generalize Jordan Form over any field.

find all possible Jordan Canonical Forms of a matrix.

5.3 Structure theorem for finitely generated modules over a PID (principal ideal domain)

To prove the theorem one must know the following definitions:

An element x of an R-module M is called a torsion element if there exists a nonzero element r ∈ R such that xr = 0.

A nonzero element x of an R-module M is called a torsion-free element if xr = 0, r ∈ R, implies r = 0.

If R is a principal ideal domain and M is an R-module, then Tor M = { x ∈ M | x is torsion } is a sub module of M.

Theorem Let R be a principal ideal domain, and let M be any finitely generated R-module. Then

M ≅ R/Ra_1 ⊕ … ⊕ R/Ra_r ⊕ R^s,

a direct sum of cyclic modules, where the a_i are nonzero nonunits and a_i | a_{i+1}, i = 1, …, r − 1.

Proof Because M is a finitely generated R-module, M ≅ R^n/K for some positive integer n and some sub module K of R^n. Further, K ≅ R^m for some m ≤ n. Let λ be this isomorphism from R^m to K; thus K = λ(R^m). Let (e_1, …, e_m) be a basis of R^m, and write

λ(e_1) = (a_11, a_12, …, a_1n) ∈ R^n,
⋮
λ(e_m) = (a_m1, a_m2, …, a_mn) ∈ R^n.

Then λ(R^m) = R^m A, where A = (a_ij) is an m × n matrix.

Choose invertible matrices P and Q of orders n × n and m × m, respectively, such that QAP = diag(a_1, a_2, …, a_k, 0, …, 0), where a_1 | a_2 | … | a_k. Then

M ≅ R^n/K = R^n/R^mA ≅ R^nP/R^mAP = R^n/R^m(QAP)
  = R^n/(a_1R ⊕ a_2R ⊕ … ⊕ a_kR ⊕ 0 ⊕ … ⊕ 0)
  ≅ R/a_1R ⊕ R/a_2R ⊕ … ⊕ R/a_kR ⊕ R ⊕ … ⊕ R   (n − k copies of R)
  ≅ R/a_1R ⊕ … ⊕ R/a_kR ⊕ R^{n−k}
  ≅ R/Ra_1 ⊕ … ⊕ R/Ra_r ⊕ R^s

[by deleting the zero terms R/a_iR corresponding to those a_i that are units, and renumbering if necessary].

Because for any ideal I (including (0)), R/I is a cyclic R-module, M is a direct sum of cyclic modules.

Hence proved.

Theorem Let M be a finitely generated module over a principal ideal domain R. Then FM T

or M, where (i) sRF for some nonnegative integer s, and (ii) T or M .../1aRR ,/

raRR

where ia are nonzero nonunit elements in R such that .|...|| 21 raaa

Proof By the structure theorem for finitely generated modules over a PID,

sRM ,/.../ 1 rRaRRaR where ia are nonzero nonunit elements in R such that

.|...|| 21 raaa It then follows that

,TFM

where ,sRF and ./.../ 1 rRaRRaRT Clearly, .0Tar Thus, T T or M.

121

Next, let x T or M. Then TxFxxxx 2121 ,, . So 21 xxx T or M, and, hence, 01 rx

for some .0 Rr If s

s Ryy )...,( 1 is the image of 1x under the isomorphism ,sRF then

.0),...,( 1 syyr Hence, .1,0 siryi Therefore, each ,0iy because .0r this yields

01 x and proves that ,Tx and, thus, T= T or M. This completes the proof.

5.4 Application to finitely generated abelian groups

Let A be a finitely generated abelian group. Then

ZaZZaZZA r

s /.../ 1 (1)

where s is a nonnegative integer and ia are nonzero non units in Z, such that

.|...|| 21 raaa (2)

Further, the decomposition (1) of A subject to the condition (2) is unique. In particular, if A is

generated by )...,( 1 nxx subject to ,1,01

mixan

j

jij

then )( rn copies

,/.../... 1 ZaZZaZZZA r

where raa ,....,1 are the invariant factors of the nm matrix

).( ijaA

5.5 Rational Canonical Form

The fundamental theorem of finitely generated modules over a PID has interesting applications in

obtaining important canonical form for square matrices over a field. Given a nn matrix A over a

122

field F, our effort will be to choose an invertible matrix P such that APP 1 is in a desired canonical

form. This problem is equivalent to finding a suitable basis of a vector space nF such that the

matrix of the linear mapping Axx with respect to the new basis is in the required canonical

form.

Not every matrix is similar to a diagonal matrix. For example, let ,0,10

1

a

aA be a matrix

over R. Suppose there exists an in invertible matrix P such that

.0

0

2

11

d

dAPP

Clearly, Det ).det()( 1 xIAxIAPP

Thus,

.10

1

0 2

1

x

ax

xd

oxd

This implies 121 dd and ,221 dd so .1 21 dd

Hence, ;11 APP that is, A = I, a contradiction.

However, simple canonical forms are highly important in linear algebra because they provide an

adequate foundation for the study of deeper properties of matrices. As an application of the

fundamental theorem for finitely generated modules over a PID, we show that every matrix A over a

field is similar to a matrix

123

sB

B

B

.

.

.

2

1

Where iB matrices of the form

.

*1...00

....

....

....

*0...10

*0...01

*0...00

Let V be a vector space over a field F, and let VVT : be a linear mapping, We can make V an

][xF -module by defining the action of any polynomial ...)( 10 xaaxf m

m xa on any vector

V as ),(...)()( 10 m

m TaTaaxf where iT stands for ).(iT Clearly, this action of

][xF on V is the extension of the action of F such that . Tx First we note the following simple

fact.

If V is a finite- dimensional vector space over F, then V is a finitely generated F[x]-module.

For if },...{ 1 m is a basis of V over F, then },...{ 1 m is a set of generators of V as ][xF -module.

Theorem Let T HomF (V,V). Then there exists a basis of V with respect to which the matrix of

T is

124

rB

B

B

A

.0

.

.

0

2

1

,

where iB is the companion matrix of a certain unique polynomial ,,...1),( rixf i such that

).(|...|)(|)( 21 xfxfxf r

This form of A is called the rational canonical form of the matrix of T. The uniqueness of the

decomposition of V subject to )(|...|)(|)( 21 xfxfxf r shows that T has a unique representation by a

matrix in a rational canonical form.

The polynomials )(),...,(1 xfxf r are the invariant factors of ,xIA but they are also called the in

invariant factors of A.

Example

The rational canonical form of the 66 matrix A whose invariant factors are

2)1)(3(),1)(3(),3( xxxxx is

510

701

300

41

30

3

5.6 Generalized Jordan form over any field

125

This is the another application of the fundamental theorem of finitely generated modules over a

PID. Every matrix is similar to an “almost diagonal” matrix of the form

,

.

.

.

5

1

J

J

.

...00

...

...

...

10

0...1

0...0

i

i

i

But first we obtain a canonical form over any field, of which the Jordan canonical form is a special

case.

Let V be a vector space of dimension n over F. Let T HomF (V, V). Because HomF (V, V) is an

2n -dimensional vector space over F, the list 1, 22 ,...,, nTTT is linearly dependent, and, hence, there

exist Faaan2,..., 10 (not all zero) such that 0... 2

10 2 n

nTaTaa .

Therefore, T satisfies a nonzero polynomial over F.

Example 1

For a 33 matrix with invariant factors )2( x and ),4( 2 x the elementary divisors are

),2(),2( xx and ),2( x so the Jordan canonical form is-

2

2

2

Example 2

126

For a 66 matrix with invariant factors 222 )2()3(,)2( xxx the elementary divisors are

,)2(,)2( 22 xx and ,)3( 2x so the Jordan canonical form is

31

03

21

02

21

02

5.7 Unit Summary/ Things to Remember

1. An element x of an R-module M is called a torsion element if there exists a nonzero element

Rr such that .0rx

2. A nonzero element x of an R-module M is called a torsion-free element if ,,0 Rrrx

implies 0r .

3. Let M be a finitely generated module over a principal ideal domain R. Then FM T or

M,

where (i) sRF for some nonnegative integer s, and (ii) T or M .../1aRR ,/

raRR

where ia are nonzero non unit elements in R such that .|...|| 21 raaa

4. Rational canonical form of a square matrix A is a canonical form for matrices that reflects

the structure of the minimal polynomial of A and provides a means of detecting whether

another matrix B is similar to A without extending the base field F.

5. Rational canonical form is generally calculated by means of a similarity transformation.

6. The Jordan form is determined by the set of elementary divisors.

7. If a polynomial )(xq is a power of a monic irreducible polynomial ),(xp then the Jordan

matrix of )(xq relative to )(xp will be simply called the Jordan Matrix of )(xq .

127

8. For any ),(CMA n Jordan Normal Form of the matrix A is of the type

mJOOO

OJlO

OOJO

OOOJ

...

...

...

...

...

3

2

1

where each ij is of the form

i

i

i

i

a

a

a

a

a

0...000

1...000

..................

...........00

00...10

00...011

Further each ia is an eigen value of A.

5.8 Assignments/ Activities

1. Let A be an nn matrix. Then there exists an invertible matrix P such that 1P AP is a

direct sum of generalized Jordan blocks ,iJ is unique except for ordering.

2. Find the abelian group generated by ),,{ 321 xxx subject to

,0595 321 xxx ,0242 321 xxx .03 321 xxx

3. The abelian group generated by 1x and 2x subject to 03,02 21 xx is isomorphic to

)6/(Z .

128

4. Consider the polynomial 222 )(,)(,)1(),1( ixixxx over C. Find their respective Jordan

matrices (Jordan Blocks).

5. Find rational canonical forms of the following matrices over Q:

100

340

751

)(a

383

141

042

)(b .

0000

3100

2210

4211

)(

c

6. Reduce the following matrix A to rational canonical form:

231

101

023

A

5.9 Check Your Progress

1. State and Prove Structure theorem for finitely generated modules over PID .

2. Compute the invariants and write down the structures of the Abe liar group with generators

nxx ,....,1 subject to the following relations:

(a) .0;2 21 xxn

(b) .023,0,023;3 3213121 xxxxxxxn

(c) 0383,02;3 32132 xxxxxn and .042 321 xxx

3. Find invariant factors, elementary divisors, and the Jordan canonical form of the matrices

284

383

240

)(a . .

4000

3500

4450

422

15

)(

b

129

4. Find all possible Jordan canonical forms of a matrix with characteristic polynomial

)(xp over C in each of the following cases:

(a) ).1()1()( 2 xxxp

(b) .)5()2()( 23 xxxp

(c) ).1()2)(1()( 22 xxxxp

5.10 Points for Discussion/ Clarification

At the end of the unit you may like to discuss or seek clarification on some points. If so,

mention the same.

1. Points for discussion

________________________________________________________________________

________________________________________________________________________

-

________________________________________________________________________

2. Points for clarification

________________________________________________________________________

________________________________________________________________________

-

________________________________________________________________________

_________________________________________________________________________

5.11 References

20. I . N. Herstein, Topics in Algebra, Second Edition, John Wiley and Sons, 2006.

21. Lay, David C. , Linear Algebra and Its Applications (3rd ed.), Addison Wesley, 2005.

130

22. Rose John, A Course on Group Theory. New York: Dover, 1994.

23. P.B. Bhattacharya, S.K. Jain and S.R. Nagpaul, Basic Abstract

Algebra,(2nd

Edition),Cambridge University Press, Indian Edition, 1997.

24. Fraleigh B John, A first Course in Abstract Algebra, 7th

Edition, Pearson Education, 2004.

25. Steven J. Leon, Linear Algebra With Applications (7th ed.), Pearson Prentice Hall, 2006.

131

JORDAN NORMAL FORM

Let )( ijaA be an nn matrix over a field F and V be an n-dimensional vector space over F with

),...,,( 21 nbbbB as its ordered basis. Then the linear transformation T on V defined by

njbabT i

n

i

ijj

1,)(1

admits A as its matrix relative to B. Let ),(1 xq

)(),...,(2 xqxq m be the elementary divisors of T. Then 121 ,... TTWWWV m

mTT ...2 for some cyclic T-invariant subspaces )1( miWi such that the restriction iT of

T to iW has )(xqi as its minimal polynomial (Theorem 16.54). Since each iW is cylic iT -space, by

Theorem 16.36 iW has basis iB such that matrix of iT relative to iB is the Jordan Matrix iJ of

)(xqi relative to the irreducible polynomial of which )(xqi is a power.

mTTTT ...21 Yields that V has a basis ),...,,(' ''

2

'

1 nbbbB such that the matrix of

T relative to 'B is .21 ... mJJJJ Then A is similar to J (Corollary 12.40). We call J, a

Jordan Normal Form of matrix A. Since the elementary divisors of T are uniquely determined

(Theorem 16.54), the Jordan Normal Form of A is uniquely determined except for the orders in

which sJ i

' are written.

Theorem 16.57

Proof Let )()...( xqxq mi be the elementary divisors of A. Since over C only irreducible polynomial

are of degree one, and each )(xqi is a power of an irreducible polynomial, we have

ik

ii axxq )()( for some Cai and some positive integer .ik Then the Jordan matrix of

)(xqi is the ii kk matrix.

132

i

i

i

i

i

a

a

a

a

J

0...000

1...000

..................

00...10

00...01

Hence the Jordan normal form of A is

m

i

m

JOO

OJ

OJ

JJJ

...

............

...0

...0

...2

21

The next part follows form Corollary 16.56

Example 23

i

iJJJ

0

1,

10

11),1( 321 and

i

iJ

0

14

Then 77 matrix.

4

3

2

1

JOOO

OJOO

OOJO

OOOJ

=

i

i

i

i

000000

100000

000000

001000

0000100

0000110

0000001

is a Jordan Normal Form and its elementary divisors are the above given polynomial . Further, -1, i, -

i are its distinct eigen values.

Definition.

2.2

Problem

133

Rational canonical form

Problem

Some authors further decompose each iB as the direct sum of matrices, each of which is a companion

matrix of a polynomial that is a power of an irreducible polynomial, and call the resulting form rational

canonical form. The powers of the irreducible polynomials corresponding to each block in the form thus

obtained are then called the elementary divisors of A

5 Generalized Jordan form over any field

Problems

1.