
MACHINE PROBLEM 1
ES 204

Submitted by: Timothy John S. Acosta
Submitted on: March 18, 2015

Executive Summary: Minimum Residual Norm Iteration

A. Define the Problem
A sparse linear matrix A with a corresponding right-hand-side vector b was given for this problem. To solve the system, the minimum residual (minres) norm iteration was used. A program in C for the steepest descent algorithm was provided by Dr. Cortes. The task was to compare the performance of the minres method when using three different norms (L1, L2, L∞) in the stopping criterion. The Gaussian elimination solution serves as the reference for the accuracy of the iterative solutions. (60 pts)

The Minimum Residual Method uses the following algorithm:

Until convergence
    r = b - A*x
    p = A*r
    alpha = (p, r) / (p, p)
    x = x + alpha*r
End

B. Problems Encountered
The first problem encountered was understanding the program written by Dr. Cortes. Once the code was understood, parts of it were altered to implement the minres algorithm. The latter part of the program was also modified to use the different residual norms, since the stopping criterion of the original code used the L2 norm.

C. References
Dr. Cortes' program for Steepest Descent.

D. Results

Minimum Residual Norm Iteration

              L2 Norm     L1 Norm     L∞ Norm     Gaussian Elimination
Iterations    42535       57341       29189       n/a
Values
x[190]        0.872954    0.872858    0.873881    0.8729
x[191]        0.883926    0.883839    0.884769    0.8838
x[192]        0.89501     0.894931    0.895769    0.8949
x[193]        0.906207    0.906137    0.906882    0.9061
x[194]        0.917518    0.917457    0.91811     0.9175
x[195]        0.928946    0.928893    0.929453    0.9289

Table 1. Summary of Results for Minres

Executive Summary: Successive Over-Relaxation (SOR) Method

A. Define the Problem
For the same given linear system of equations, the successive over-relaxation algorithm was to be used. The optimum omega (relaxation parameter) was to be determined for the system. Again, the results were to be compared using the three different norms as stopping criteria. (60 pts)

The SOR method is presented below:

x[i](new) = (1 - w)*x[i](old) + (w/a[i][i]) * ( b[i] - sum_{j<i} a[i][j]*x[j](new) - sum_{j>i} a[i][j]*x[j](old) )

B. Problems Encountered
Unlike the previous problem, there was no base code to work from. For this problem, instead of storing the whole matrix A for the computation, only the tri-diagonal band was stored as three separate arrays.

The program then used the following algorithm, with the three stored bands l, d, and u holding the sub-diagonal, diagonal, and super-diagonal:

Do until convergence
    For i = 2, 3, ..., n-1
        x[i] = (1 - w)*x[i] + (w/d[i]) * (b[i] - l[i]*x[i-1] - u[i]*x[i+1])
    End For
End Do

The relaxation parameter was varied until the smallest iteration count was attained. One of the problems was that the solution did not converge for relaxation parameters greater than 1.48.
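The banded SOR sweep described above can be sketched in C as follows. This is an illustration, not the submitted program: the band arrays l, d, u, the 5-point test system, and the update-based L-infinity stopping test are assumptions made for the example.

```c
#include <stdio.h>
#include <math.h>

#define N 5   /* small illustrative size */

/* SOR for a tridiagonal system stored as three bands:
   l = sub-diagonal, d = diagonal, u = super-diagonal.
   Stops when the L-infinity norm of the update falls below tol.
   Returns the iteration count. */
static int sor_tridiag(const double l[N], const double d[N], const double u[N],
                       const double b[N], double x[N],
                       double w, double tol, int maxit) {
    int it;
    for (it = 0; it < maxit; it++) {
        double max_dx = 0.0;
        for (int i = 0; i < N; i++) {
            double s = b[i];
            if (i > 0)     s -= l[i] * x[i-1];   /* uses already-updated x[i-1] */
            if (i < N - 1) s -= u[i] * x[i+1];
            double x_new = (1.0 - w) * x[i] + w * s / d[i];
            double dx = fabs(x_new - x[i]);
            if (dx > max_dx) max_dx = dx;
            x[i] = x_new;
        }
        if (max_dx < tol) { it++; break; }
    }
    return it;
}
```

Sweeping `w` over a grid of values and recording the returned iteration count is the procedure used to locate the optimal relaxation parameter reported in Table 2.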

C. References
Gerald, C.F. and Wheatley, P.O. (2004). Applied Numerical Analysis, 7th Ed. USA: Pearson Education, Inc.

D. Results

Figure 1. Iterations vs. Relaxation Parameter using the L∞ Norm Stopping Criterion

SOR Algorithm

              L2          L1          L∞
Alpha         1.477       1.476       1.477
Iterations    9357        12002       6972
Values
x[190]        0.872867    0.872852    0.873019
x[191]        0.883846    0.883833    0.883985
x[192]        0.894938    0.894926    0.895063
x[193]        0.906143    0.906132    0.906254
x[194]        0.917463    0.917453    0.91756
x[195]        0.928898    0.928889    0.928981

Table 2. Summary of Optimal Relaxation Parameters for different Norms

The optimal relaxation parameter was estimated by running the program at different values. As shown in Figure 1, the lowest iteration counts occur at values around 1.4. This gives the general behavior of the algorithm for the given system of equations. For values higher than 1.5, the solution does not converge under this algorithm. Table 2 shows the optimal relaxation parameter to an accuracy of 1e-3.

Executive Summary: Conjugate-Gradient-Squared (CGS) Method

A. Define the Problem
A sparse linear matrix B with a corresponding right-hand-side vector b was given for this problem. Using a tolerance of 1e-6, the linear system of equations is to be solved using the CGS method.

The algorithm for the method is shown below:

1. Compute r0 = b - B*x0; r0* is arbitrary (here r0* = r0)
2. Set p0 = u0 = r0
3. For j = 0, 1, ..., until convergence:
       alpha_j = (r_j, r0*) / (B*p_j, r0*)
       q_j     = u_j - alpha_j * B*p_j
       x_(j+1) = x_j + alpha_j * (u_j + q_j)
       r_(j+1) = r_j - alpha_j * B*(u_j + q_j)
       beta_j  = (r_(j+1), r0*) / (r_j, r0*)
       u_(j+1) = r_(j+1) + beta_j * q_j
       p_(j+1) = u_(j+1) + beta_j * (q_j + beta_j * p_j)
4. End

B. Problems Encountered
For this problem, the base code used was taken from the book Numerical Recipes, 2nd Edition. The first challenge encountered was to write out and understand the code in the book. The code was then altered accordingly for the given problem. The program makes use of a function that converts the matrix B into two vectors, sa and ija, which store only the non-zero values of the sparse matrix. This was very confusing at first.
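To illustrate the idea behind such a conversion, the sketch below uses the standard compressed sparse row (CSR) layout, which also stores only the non-zeros. This is analogous in spirit to, but not identical to, the sa/ija row-indexed scheme of Numerical Recipes; the layout and names here are assumptions for the example.

```c
#include <stdio.h>

/* Compressed sparse row (CSR) storage: val[k] holds the k-th non-zero,
   col[k] its column index, and row_ptr[i] .. row_ptr[i+1]-1 index the
   non-zeros of row i (row_ptr has n+1 entries).  Matrix-vector product
   y = A*x touches only the stored non-zeros. */
static void csr_matvec(int n, const double *val, const int *col,
                       const int *row_ptr, const double *x, double *y) {
    for (int i = 0; i < n; i++) {
        y[i] = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i+1]; k++)
            y[i] += val[k] * x[col[k]];
    }
}
```

For a banded 200x200 system this reduces the storage from 40000 entries to a few hundred, which is why the iterative solvers in this report work with the compressed form.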

C. References
Press, W., Teukolsky, S., et al. (1992). Numerical Recipes in C, 2nd Edition. Cambridge University Press.
Saad, Y. (2000). Iterative Methods for Sparse Linear Systems, 2nd Edition.

D. Results

CGS Method
Iterations    200
Est. Error    5.31e-12
Values
x[190]        1.008166
x[191]        1.005189
x[192]        1.002212
x[193]        0.999234
x[194]        0.996256
x[195]        0.993278

Table 3. Summary of Results for CGS Method

Gaussian Elimination
Values
x[190]        1.008166
x[191]        1.005189
x[192]        1.002212
x[193]        0.999234
x[194]        0.996256
x[195]        0.993278

Table 4. Solution attained by Gaussian Elimination

Table 3 above shows the partial results for the solution vector x of the system of equations given by the matrix B and right-hand vector b, while Table 4 gives the partial results from Gaussian elimination. As observed, the values are the same, so we can conclude that the solutions agree to within the 1e-6 tolerance. Although the program computed an estimated error of 5.31e-12, checking agreement beyond six decimal places is beyond the scope of this machine problem.

Executive Summary: Bi-Conjugate Gradient (BiCG) Method

A. Define the Problem
The same sparse matrix B with corresponding vector b was given for this problem. Instead of the CGS method, the BiCG method is to be used. It is a variant of the conjugate gradient method for non-symmetric systems that builds a second, dual sequence of residual and direction vectors using the transpose of the matrix.

The algorithm is shown below:

1. Compute r0 = b - B*x0; choose r0* such that (r0, r0*) != 0 (here r0* = r0)
2. Set p0 = r0 and p0* = r0*
3. For j = 0, 1, ..., until convergence:
       alpha_j  = (r_j, r_j*) / (B*p_j, p_j*)
       x_(j+1)  = x_j + alpha_j * p_j
       r_(j+1)  = r_j - alpha_j * B*p_j
       r_(j+1)* = r_j* - alpha_j * B^T*p_j*
       beta_j   = (r_(j+1), r_(j+1)*) / (r_j, r_j*)
       p_(j+1)  = r_(j+1) + beta_j * p_j
       p_(j+1)* = r_(j+1)* + beta_j * p_j*
4. End

B. Problems Encountered
For the bi-conjugate method, there was not much of a problem since the code in C was given in the book Numerical Recipes, 2nd Edition. However, understanding and encoding the functions was rather tedious.

C. References
Press, W., Teukolsky, S., et al. (1992). Numerical Recipes in C, 2nd Edition. Cambridge University Press.
Saad, Y. (2000). Iterative Methods for Sparse Linear Systems, 2nd Edition.

D. Results

Bi-Conjugate Method
Iterations    200
Est. Error    2.33e-15
Values
x[190]        1.008166
x[191]        1.005189
x[192]        1.002212
x[193]        0.999234
x[194]        0.996256
x[195]        0.993278

Table 6. Summary of Results for BiCG Method

As observed in the table above, BiCG also gives the same values as the Gaussian elimination solution to six decimal places. The iteration count is also the same as for the CGS method. However, the estimated error for BiCG was computed to be 2.33e-15, from which we can infer that it would give a more accurate solution if more significant figures were compared.

Appendix 1. List of Programs

MP1.c

#include <stdio.h>
#include <math.h>
#define size 200

double a[size][size];

int main(void)
{
    double v[size], r[size], t[size];
    double tv[size], x_old[size], x_new[size];
    double b[size];
    double eps, tol, temp, num, den, h, alpha, max, max2;
    int i, j, k, count, n;

    n = size; /* number of interior points or number of equations */
    h = 1.0/(double)(n+1);

    tol = 1.0e-6;
    eps = 1.0;

    /* Routine for generating the A matrix and the b vector for the homework */
    n = 200;
    /* h = spacing between points */
    h = 1.0/(float)(n+1);

    /* initialize the A matrix */
    for(i=1;i