
Chapter 3

Solution of System of Linear Equations

3.1 Introduction

Systems of linear equations occur in a variety of applications in fields like elasticity, electrical engineering and statistical analysis. The techniques and methods for solving systems of linear equations fall into two categories: direct and iterative methods. Some of the direct methods are the Gauss elimination method, the matrix inverse method, LU factorization and the Cholesky method. The elimination approach reduces the given system of equations to a form from which the solution can be obtained by simple substitution. Since calculators and computers carry only a limited number of digits, round-off errors arise and may produce poor results. It will be assumed that readers are familiar with some of the direct methods suitable for small systems. Handling large systems directly is also time consuming. Iterative methods provide an alternative to the direct methods for solving systems of linear equations. These methods start from assumed initial values, which are then refined repeatedly until they reach some accepted range of accuracy. In this chapter we shall consider Gauss elimination, matrix factorisation and iterative methods suitable for numerical calculations.

3.2 Method of Elimination

Equivalent Systems: Two systems of equations are called equivalent if and only if they have the same solution set.

Elementary Transformations: A system of equations is transformed into an equivalent system if the following elementary operations are applied on the system:

(1) two equations are interchanged;
(2) an equation is multiplied by a non-zero constant;
(3) an equation is replaced by the sum of that equation and a multiple of any other equation.

Gaussian Elimination

The process which eliminates an unknown from succeeding equations using elementary operations is known as Gaussian elimination. The equation used to eliminate an unknown from the succeeding equations is known as the pivotal equation. The coefficient of the eliminated variable in the pivotal equation is known as the pivot. If the pivot is zero, it cannot be used to eliminate the variable from the other equations. However, we can continue the elimination process by interchanging with an equation having a nonzero pivot.

Solution of a Linear System

A systematic procedure for solving a linear system is to reduce it to an equivalent system that is easier to solve. One such form is the upper triangular form. Back substitution is then used to solve the equations in reverse order.

3.3 Pivotal Elimination Method

Computers and calculators use a fixed number of digits in their calculations, so numbers may need to be rounded. This introduces error into the calculations. Also, when two nearly equal numbers are subtracted, accuracy is lost. Consider the following example to study the difficulty faced in the elimination method.
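The loss of accuracy described above can be imitated in code. The sketch below is an illustration of ours, not from the text: it rounds values to four significant digits, as the worked example does, and shows how subtracting two nearly equal numbers leaves few correct digits.

```python
from math import floor, log10

def round_sig(x, digits=4):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

# Two values known to 5 significant digits...
a, b = 1.2346, 1.2344
# ...rounded to 4 digits and subtracted: the difference 0.001 retains
# only one significant digit, so most of the precision has cancelled.
diff = round_sig(a) - round_sig(b)
print(diff)
```

This cancellation is exactly why the choice of pivot matters in the elimination examples that follow.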

Example 3.1: The linear system

has the exact solution and . Solve the above system using four-digit arithmetic by

(a) Gaussian elimination without changing the order.
(b) Gaussian elimination by interchanging the order of the equations.

Solution: Using four-digit arithmetic, the system can be written as

(E1)
(E2)

(a) Here the multiplier is . Multiplying (E1) by 1410 and retaining 4 digits with rounding, we have

Subtracting we get

To 4 digits this can be written as

This gives

Substituting the value of y in (E1), we get

This produces a wrong result.

(b) Interchanging the rows, we can write

(E3)
(E4)

For this system the multiplier is


The replacement of (E4) by reduces the system to

The 4-digit arithmetic gives the solution

and , which is the exact solution of the system.

This example shows that the order in which we treat the equations for elimination affects the accuracy of the elimination process. Note that . Thus, in eliminating x, we should use the equation with the numerically largest coefficient of x.

3.3.1 Gaussian Elimination with Partial Pivoting

(a) Select the pivotal equation for the variable to be eliminated (the equation with the numerically largest coefficient of that variable in the system).

(b) Eliminate the chosen variable from the remaining equations with respect to the pivotal equation.

(c) Repeat the process for the resulting subsystem.
(d) Using back-substitution, find the solution from the pivotal equations.

For convenience of calculation, we make the coefficient of the eliminated variable unity before elimination. This is illustrated through an example below.
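The steps (a)-(d) above can be sketched in code. The following is an illustrative sketch of ours, not the book's program; it uses the common multiplier form of the elimination rather than the unit-coefficient normalisation used in the worked example, but the pivot selection is the same. The 3x3 system in the usage lines is the one tabulated in Example 3.2.

```python
def solve_partial_pivot(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix so row swaps carry b along.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting: pick the row with the numerically largest
        # coefficient of x_k among the remaining rows and swap it in.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # multiplier
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]     # eliminate x_k from row i
    # Back substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = solve_partial_pivot([[16, 5, 7], [5, 12, 9], [8, 11, 20]], [29, 5, 35])
print([round(v, 3) for v in x])   # → [1.426, -1.808, 2.174]
```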

Example 3.2: Solve the following linear system by Gaussian elimination with partial pivoting, giving your answers to 3 decimal places.

Taking the first equation as the pivotal equation (it has the numerically largest coefficient of x), we can write the system

16x + 5y + 7z  = 29
5x + 12y + 9z  = 5
8x + 11y + 20z = 35

as follows:

Operation      Coefficient of                   R. H. S.    Eq. #
               x        y         z
               16       5         7             29          Eq1*
               5        12        9             5           Eq2
               8        11        20            35          Eq3
Eq1/16         1        0.3125    0.4375        1.8125      Eq4*
Eq2/5          1        2.4000    1.8000        1.0000      Eq5
Eq3/8          1        1.3750    2.5000        4.3750      Eq6
Eq5 - Eq4               2.0875    1.3625        -0.8125     Eq7**
Eq6 - Eq4               1.0625    2.0625        2.5625      Eq8
Eq7/2.0875              1         0.6527        -0.3892     Eq9**
Eq8/1.0625              1         1.9412        2.4118      Eq10
Eq10 - Eq9                        1.2885        2.8010      Eq11
Eq11/1.2885                       1             2.1738      Eq12

Solution of the system is obtained by back substitution as follows:

From (Eq12): z = 2.1738
From (Eq9):  y = -0.3892 - 0.6527(2.1738) = -1.8080
From (Eq4):  x = 1.8125 - 0.3125(-1.8080) - 0.4375(2.1738) = 1.4265


Solutions correct to 3 d.p. are x = 1.426, y = -1.808, z = 2.174.

Partial pivoting is adequate for most of the simultaneous equations which arise in practice. But we may encounter sets of equations where wrong or inaccurate solutions result. To improve the calculation in such cases, total pivoting is used. In total pivoting, the coefficient of maximum magnitude among all the remaining coefficients is used as the pivot at each stage.

3.4 Solution by Triangular Factorization

In Gaussian elimination process, a linear system is reduced to an upper-triangular system and then solved by backward substitution. The linear system A X = B can effectively be solved by expressing the coefficient matrix A as the product of a lower-triangular matrix L and an upper-triangular matrix U :

A = L U

When this is possible we say that A has an LU-decomposition. In this case the equation can be written as

L U X = B

and the solution can be obtained by defining Y = U X and then solving the two systems

(i) L Y = B for Y, and
(ii) U X = Y for X.

To check for the possibility of LU-decomposition, we may use the idea of principal minors of the matrix.

Principal Minor: The r th principal minor of a square matrix A is the determinant of the sub-matrix Ar formed by the first r rows and r columns of A.

Theorem: Let A be a non-singular n×n matrix. Then A has an LU-factorization if and only if all the principal minors are non-zero.

In matrix A = , the principal minors are

, ,

and hence A has LU-decomposition.

In matrix B = , the principal minors are

,

are not all non-zero, and hence B has no LU-decomposition.
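The principal-minor test above can be carried out mechanically. The following pure-Python sketch is our illustration (the matrices shown are hypothetical, not the ones printed in the text); determinants are computed by cofactor expansion, which is adequate for the small matrices of this chapter.

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def has_lu(A):
    """True if every leading principal minor of A is non-zero."""
    n = len(A)
    return all(det([row[:r] for row in A[:r]]) != 0 for r in range(1, n + 1))

# Hypothetical examples:
print(has_lu([[2, 1], [1, 2]]))   # minors 2 and 3: an LU-decomposition exists
print(has_lu([[0, 1], [1, 0]]))   # first minor is 0: no LU-decomposition
```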

3.4.1 Solution by LU Factorization

LU-decomposition of a non-singular matrix (when it exists) is not unique. In practice we use one of the following alternatives to get unique factorization:

(a) Crout’s formalism: all the diagonal elements of U are chosen to be 1.
(b) Doolittle’s formalism: all the diagonal elements of L are chosen to be 1.
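A Doolittle factorization (unit diagonal in L), together with the forward and back substitutions of steps (i) and (ii) above, may be sketched as follows. This is an illustration of ours; the small 2x2 system in the usage lines is hypothetical, not one of the book's examples.

```python
def lu_doolittle(A):
    """Factor A = L U with L unit lower triangular (Doolittle)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0                       # Doolittle: unit diagonal in L
        for j in range(i, n):               # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):           # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve L U x = b: forward substitution for y, then back for x."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # L y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_doolittle([[4.0, 3.0], [6.0, 3.0]])
x = lu_solve(L, U, [10.0, 12.0])   # solves 4x+3y=10, 6x+3y=12
print(x)                           # → [1.0, 2.0]
```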

Example 3.3: Given that A = , B = , X

=


(i) Determine a lower triangular matrix L and an upper triangular matrix U such that L U = A.

(ii) Use the above factorization to solve the equation A X = B.

Solution: (i) Let A = = L U =

=

Equating the corresponding elements of the two matrices, we have

or

or or or

or

or Thus

L = U =

(ii) The equation can be written as

AX = LUX = LY = B

where

UX = Y and Y =

Consider the solution of

LY = B

or =

Using forward substitution, we have

Now consider the solution of

UX = Y

or =

Using back substitution, we have

and hence

X =

3.4.2 Solution by Cholesky Factorization


Positive definite: A matrix A is positive definite if its quadratic form is greater than zero for every non-zero vector x, i.e. . It is hard to check positive definiteness using this definition directly. The transpose of a matrix is the matrix that results from interchanging its rows and columns; the transpose of a matrix A is written AT. A matrix A is called symmetric if AT = A. The following theorem may be used to check positive definiteness for a symmetric matrix.

Theorem: A symmetric matrix A is positive definite if and only if all its principal minors are strictly positive.

A symmetric positive definite matrix A may be decomposed into

This is the Cholesky decomposition. The solution of a linear system AX = B with A symmetric and positive definite can be obtained by first computing the Cholesky decomposition , then solving LY = B for Y and finally solving for X. In this case the inverse can be obtained as follows:

Recall that the inverse of a lower triangular matrix is also a lower triangular matrix. This property may be used to find . For a third order lower triangular matrix L, we may write the relation as

.

Comparing the two sides, we may then find the unknowns . Then use the relation to find .
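The Cholesky factorization A = L LT can be computed row by row from the comparison described above. The sketch below is our illustration, and the 2x2 matrix used is hypothetical, not one from the text.

```python
import math

def cholesky(A):
    """Factor a symmetric positive definite A as L L^T; return L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

L = cholesky([[4.0, 2.0], [2.0, 3.0]])
# L has rows [2, 0] and [1, sqrt(2)], so that L L^T reproduces A.
```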

3.5 Solution of Linear System by Iterative Method

Iterative methods for linear systems are similar to fixed-point iteration for an equation in one variable. To solve a linear system by iteration, we solve each equation for one of the variables, in turn, in terms of the other variables. Starting from an approximation to the solution, we derive, if the process converges, a new sequence of approximations, and repeat the calculation until the required accuracy is obtained. An iterative method converges, for any choice of starting values, if in every equation the magnitude of the coefficient of the solved-for variable is greater than the sum of the absolute values of the coefficients of the other variables. A system satisfying this condition is called diagonally dominant. A linear system can often be reduced to diagonally dominant form by elementary operations.
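The diagonal-dominance condition just stated is easy to test in code. The helper below is our illustration (the matrices shown are hypothetical), checking each row's diagonal entry against the sum of the other entries' magnitudes.

```python
def is_diagonally_dominant(A):
    """Strict row diagonal dominance: |a_ii| > sum of |a_ij|, j != i."""
    return all(
        abs(row[i]) > sum(abs(a) for j, a in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )

print(is_diagonally_dominant([[5, 1, 2], [1, 6, 2], [1, 2, 7]]))  # True
print(is_diagonally_dominant([[1, 3], [2, 1]]))                   # False
```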

For example in the following system, we have(E1)(E2)(E3)

and is not diagonally dominant.


Rearranging the system as (E1), (E3), (E2), we have

The system reduces to diagonally dominant form.

Two commonly used iterative processes are discussed below:

3.5.1 Jacobi Iterative Method

In this method, each new approximation to every variable is computed from the values obtained in the previous iteration; the whole set of new values is then used for the next iteration. The Jacobi iterative formulas for the above system are

Starting with initial values

we get

Second approximation is

and so on. Results are summarized below.

Successive iterates of solution (Jacobi Method)

n     0    1       2       3       . . .   10      11      12
xn    0    1.667   1.816   2.481   . . .   2.285   2.285   2.285
yn    0    2.714   2.104   1.777   . . .   1.614   1.615   1.615
zn    0    0.727   1.113   0.889   . . .   0.839   0.838   0.838

Solutions correct to 2 d.p. are

3.5.2 Gauss-Seidel Iterative Method

In this method, the value of each variable is calculated using the most recent approximations to the values of the other variables. The Gauss-Seidel iterative formulas for the above system are


Starting with initial values

we get the solutions as follows:

First approximation:

Second approximation:

Third approximation:

Fourth approximation:

Fifth approximation:

which gives the results correct to 2 d.p.

Solutions correct to 2 d.p. are

It can be observed that the Gauss-Seidel method converges about twice as fast as the Jacobi method.
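Both iterations can be sketched as follows. The system used here is a hypothetical diagonally dominant one of our own, not the chapter's; it simply illustrates that Gauss-Seidel, which reuses the freshest values within each sweep, settles faster than Jacobi, whose sweep uses only the previous iterate.

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every x_i uses only the previous iterate."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each x_i uses the freshest values."""
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # strictly diagonally dominant (hypothetical)
b = [6.0, 12.0]                # exact solution: x = 1, y = 2
xj = [0.0, 0.0]
xs = [0.0, 0.0]
for _ in range(30):
    xj = jacobi_step(A, b, xj)
    xs = gauss_seidel_step(A, b, xs)
print(xj, xs)                  # both iterates approach (1, 2)
```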

Exercises 3

1. Solve the following system of linear equations:


(a) By Gaussian elimination carrying just 3 significant figures. Do not interchange the rows.

(b) By Gaussian elimination with partial pivoting carrying just 3 significant figures.
(c) Substitute each set of answers into the original equations and observe that the left- and right-hand sides match better with the answers to part (b).

2. Solve the following system of linear equations by the Gaussian elimination method with partial pivoting, giving your answers to 2 decimal places.

3. Solve the following system of equations by the Gauss elimination method with partial/total pivoting, giving your answers to 2 decimal places.

(a)(b)(c)

4. Consider a matrix A = , determine a lower triangular matrix L and an upper triangular matrix U such that L U = A. Hence obtain the solution X of the equation A X = (13 85 130)T.

5. Solve the following system of equations by the LU factorization method.
(a) (b) (c)

6. Solve the following system of equations by Gaussian elimination with partial pivoting and also by LU factorization: (a) (b)

7. Given A = , B = , X =

(i) Determine a lower triangular matrix L such that L LT = A.
(ii) Solve the equation A X = B by the Cholesky factorization.
(iii) Find the inverse of the matrix A by the Cholesky factorization.

8. Solve the following system of equations by Cholesky factorization:

9. Consider a symmetric matrix A = , determine a lower triangular matrix L such that L LT = A. Find the inverse of the matrix A by the Cholesky factorization.

10. Use three-digit rounding arithmetic to solve the following system by

(a) Jacobi’s iteration method, (b) Gauss-Seidel iteration method.


(i) Without changing the order of the equations.
(ii) By rearranging the system to diagonally dominant form.

11. The linear system

is not diagonally dominant. Find an equivalent system which is diagonally dominant. Use an iterative method to find the solution correct to 2 decimal places.

12. Use the Gauss-Seidel iterative method to find the solution correct to 2 decimal places of the following systems by reducing them to diagonally dominant form:
(a) (b) (c)

13. Consider the following system of linear equations.

Reduce the system to an equivalent system which is diagonally dominant. Use an iterative method to find the solution correct to 2 decimal places, starting with

.

14. Use an iterative method to estimate the solution of the following system, correct to 2 decimal places with the given starting values.

(a) with

(b) with

(c) with

(d) with