
TOWARDS LARGE SCALE

UNCONSTRAINED OPTIMIZATION

PROFESSOR DR. MALIK HJ. ABU HASSAN
B.Sc. Malaya, M.Sc. Aston, Ph.D. Loughborough

20 April 2007

Dewan Taklimat, Tingkat 1, Bangunan Pentadbiran

Universiti Putra Malaysia


Penerbit Universiti Putra Malaysia
Serdang • 2007

http://www.penerbit.upm.edu.my

© Penerbit Universiti Putra Malaysia 2007
First print 2007

All rights reserved. No part of this book may be reproduced in any form without permission in writing from the publisher, except by a reviewer who wishes to quote brief passages in a review written for inclusion in a magazine or newspaper.

UPM Press is a member of the Malaysian Book Publishers Association (MABOPA)
Membership No.: 9802

Perpustakaan Negara Malaysia Cataloguing-in-Publication Data

Malik Hj. Abu Hassan
Towards large scale unconstrained optimization / Malik Hj. Abu Hassan.
(Inaugural Lecture Series)
ISBN 978-983-3455-90-4
1. Mathematical optimization. 2. Mathematical analysis. 3. Algorithms.
I. Title. II. Series.
519.62

Designed, laid out and printed by:
Penerbit Universiti Putra Malaysia
43400 UPM, Serdang
Selangor Darul Ehsan.

ABSTRACT

A large scale unconstrained optimization problem can be formulated when the dimension n is large. The notion of 'large scale' is machine dependent and hence it could be difficult to state a priori when a problem is of large size. However, today an unconstrained problem with 400 or more variables is usually considered a large scale problem.

The main difficulty in dealing with large scale problems is the fact that effective algorithms for small scale problems do not necessarily translate into efficient algorithms when applied to solve large scale problems. Therefore, in dealing with large scale unconstrained problems with a large number of variables, modifications must be made to the standard implementation of the many existing algorithms for the small scale case.

One of the most effective Newton-type methods for solving large-scale problems is the truncated Newton method. This method computes a Newton-type direction by truncating the conjugate gradient method iterates (inner iterations) whenever a required accuracy is obtained, so that superlinear convergence is guaranteed.

Another effective approach to large-scale unconstrained optimization is the limited memory BFGS method. This method satisfies the requirement to solve large-scale problems because the storage of matrices is avoided by storing a number of vector pairs.

The symmetric rank one (SR1) update is one of the simplest quasi-Newton updates for solving large-scale problems. However, a basic disadvantage is that the SR1 update may not preserve positive definiteness, even when the current approximation is positive definite. A simple restart procedure for the SR1 method, using the standard line search to avoid the loss of positive definiteness, will be implemented.

The matrix-storage free BFGS (MF-BFGS) method is a method that combines a restarting strategy with the BFGS method. We also attempt to construct a new matrix-storage free method which uses the SR1 update (MF-SR1). The MF-SR1 method is superior to the MF-BFGS method on some problems. However, for other problems the MF-BFGS method is more competitive because of its rapid convergence. The matrix-storage free methods can be greatly accelerated by means of a simple scaling. Therefore, by a simple scaling of the SR1 and BFGS methods, we can improve the methods tremendously.

ABSTRAK

A large scale unconstrained optimization problem can be formulated when the dimension n is large. The notion of 'large scale' is machine dependent and hence it is very difficult to state a priori when a problem is of large size. However, today an unconstrained problem with 400 or more variables is usually considered a large scale problem.

The main difficulty in dealing with large scale problems arises from the fact that effective algorithms for small scale problems do not necessarily translate into efficient algorithms when used to solve large scale problems. Therefore, in dealing with large scale unconstrained problems with a large number of variables, modifications must be made to the standard implementation of the existing algorithms for the small scale case.

One of the effective Newton-type methods for solving large scale problems is the truncated Newton method. This method computes a Newton-type direction by truncating the conjugate gradient method iterates (inner iterations) whenever a required accuracy is obtained, thereby guaranteeing superlinear convergence.

Another effective approach for large scale unconstrained problems is the limited memory BFGS method. This method satisfies the requirement for solving large scale problems because matrix storage can be avoided by storing a number of vector pairs.

The symmetric rank one (SR1) update is the simplest of the quasi-Newton updates for solving large scale problems. However, one weakness is that the SR1 update may not preserve positive definiteness of a positive definite approximation. A simple restart procedure for the SR1 method using the standard line search is implemented to avoid the loss of positive definiteness.

The matrix-storage free BFGS (MF-BFGS) method is a method that combines a restarting strategy with the BFGS method. We will also construct a matrix-storage free method that uses the SR1 update (MF-SR1). The MF-SR1 method is better than the MF-BFGS method on some problems. However, for most problems the MF-BFGS method is more competitive because of its fast convergence. The matrix-storage free methods can be accelerated by means of a simple scaling. Therefore, by applying a simple scaling to the SR1 and BFGS methods, we can improve the methods tremendously.

GENERAL INTRODUCTION TO OPTIMIZATION

Optimization is concerned with getting the best from a given situation by a systematic analysis of alternative decisions, hopefully without the need to examine them all.

This involves a performance measure or index for the situation under assessment and the ability to calculate it from the variables at the optimizer's disposal. The variables are then adjusted to give the best possible value of the performance index.

Performance indices differ from situation to situation but generally involve economic considerations, e.g. maximum return on investment, minimum cost per unit yield, etc. They may also involve technical considerations such as minimum time of production, minimum miss-distance of a missile with its target and so on. There may of course be a combination of different indices requiring simultaneous optimization. Such multi-objective problems are difficult and are often dealt with by selecting one of the objectives as primary, fixing suitable values for the others and then regarding them as constraints.

The way in which the performance index is obtained from the variables of the problem can vary widely from one situation to another. At one extreme it may be only possible to give a qualitative indication of the dependence, and at the other a highly sophisticated mathematical model will enable the index to be computed accurately given fixed values of the variables. The important thing is to be able to determine when one set of values of the variables gives a "better" value of the performance than some other sets. The variables can be adjusted systematically until the "best" performance is obtained.

Mathematical models involved in optimization can range from simple algebraic formulae to sets of linked algebraic, ordinary and partial differential equations.


Furthermore, in real-life problems the variables are almost always constrained in some way - sometimes by having simple upper and lower bounds and sometimes by complex functional constraints - so that the best value of the objective is sought for some restricted set of values of the variables of the problem.

Optimization problems arise in a variety of different ways, ranging from the best design of some single component of a process through to the organization of a company (or even a country) to achieve some desired objective.

They can also result from problems concerned with the determination of parameters in models, natural variational principles in science and engineering (e.g. determination of equilibrium compositions by the minimization of Gibbs free energy in chemical systems), the numerical solution of algebraic and differential equations, and so on.

A further difficulty arises if there are stochastic processes involved in the system to be optimized.

General Mathematical Formulation

Find the values of the variables x_i, i = 1, 2, ..., n which will

minimize (or maximize) the objective function

f(x_1, x_2, ..., x_n)   (1)

and satisfy the set of constraints

g_k(x_1, x_2, ..., x_n) = 0 or g_k(x_1, x_2, ..., x_n) >= 0,  k = 1, 2, ..., m.

The nature of an optimization problem depends on the nature of the functions f, g_k and of the variables x_i.


When m = 0 the problem is unconstrained. If f and all the g_k are linear functions the problem is a linear programme, and if f is quadratic and the g_k linear, a quadratic programme arises.

When f is a convex function and the constraints form a convex set then we have a convex programming problem. (An important property of a convex programming problem is that any local solution is also global.)

If the variables x_i can only take on integer values the problem is known as integer programming.

In general, when f is nonlinear and the g_k are all linear we have linearly constrained optimization, and when f and the g_k are all nonlinear we have nonlinear programming.

Optimization in which only a finite number of variables x_i are involved is known as static optimization, but when the performance index is a functional (of one or more functions) we have dynamic optimization and we become concerned with problems in the calculus of variations and optimal control.

In optimal control the performance index is determined from the solutions of a set of differential equations which depend on a set of functions which have to be selected in an optimal way. The differential equations describing the system are regarded as the constraints g_k = 0.

Problems with only equality constraints (g_k = 0 for all k) are easier to deal with in general than those with only inequalities (g_k >= 0 for all k) or mixed problems.

Most of the techniques discussed in the following lectures will be concerned with "local" optimization in the sense that any optimum obtained will not necessarily be the overall or "global" optimum but will be the best value in some neighbourhood.

When functions having multiple maxima or minima are involved the problem is extremely difficult and very much the subject of current research.


Sometimes the only way to proceed in global optimization problems is to start searching from different initial points, selected using a grid or by some form of random sampling, and then to compare the local optima so obtained to find the best. This procedure can be very time consuming and, of course, is by no means foolproof.

Another important area of current research is "non-smooth" optimization, which arises when the performance index is non-differentiable.

METHODS OF OPTIMIZATION

Problem: Given a performance index f(x), x ∈ R^n,

find a (local) minimizer x* ∈ R^n.

General Iteration

x^(i+1) = x^(i) + λ^(i) d^(i),  i = 0, 1, 2, ...   (2)

where x^(0) is the estimate of x*, d^(i) is the direction of search and λ^(i) is the "step length" along this direction.

Different optimization techniques result from different ways of selecting d^(i) and λ^(i).


STEEPEST DESCENT

Algorithm

Let x^(0) be an estimate of x*.

1. Set i = 0

2. Compute g^(i) = ∇f(x^(i))

3. Compute λ^(i) such that

f(x^(i) − λ^(i) g^(i)) = min_{λ ≥ 0} f(x^(i) − λ g^(i))

(this is called a line search)

4. Compute x^(i+1) = x^(i) − λ^(i) g^(i)

5. Has process converged?

YES: then x^(i+1) is the minimizer.
NO: set i = i + 1 and go to 2.

Useful for theoretical reasons but not in practice since convergence is too slow (linear).
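As an illustration, here is a minimal Python sketch of the steepest descent iteration, assuming f and grad are user-supplied callables; the exact line search of step 3 is replaced by a simple backtracking (Armijo) rule, and the Rosenbrock function at the end is only an assumed test example.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10000):
    """Sketch of steepest descent: x(i+1) = x(i) - lambda(i) * g(i)."""
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:            # step 5: convergence test
            return x, i
        lam, fx = 1.0, f(x)
        while f(x - lam * g) > fx - 1e-4 * lam * g.dot(g):
            lam *= 0.5                          # backtracking stand-in for the line search
        x = x - lam * g                         # step 4
    return x, max_iter

# Assumed example: the Rosenbrock function.
f = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2
grad = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                           200.0 * (x[1] - x[0]**2)])
print(steepest_descent(f, grad, [-1.2, 1.0]))
```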

NEWTON'S METHOD

x^(i+1) = x^(i) − G(x^(i))^{-1} ∇f(x^(i))   (3)

where G(x^(i)) is the Hessian (∂²f / ∂x_i ∂x_j) evaluated at x = x^(i)

and ∇f(x^(i)) is the gradient of f(x) at x^(i).


One-step convergence on quadratics. For general f with a minimum at which G is positive definite, convergence is rapid (quadratic) if x^(0) is sufficiently close to x*.

Basic Newton may fail because G is not positive definite, or, even when it is, if f does not decrease in moving to the minimum of the local quadratic approximation.

Other disadvantages are that G must be computed and a set of n linear equations solved.
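For comparison with the steepest descent sketch above, a minimal Python sketch of the basic Newton iteration (3) follows; grad and hess are assumed user-supplied callables, and the n linear equations are solved rather than forming G^{-1} explicitly.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Basic Newton: x(i+1) = x(i) - G(x(i))^{-1} grad f(x(i))."""
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            return x, i
        x = x - np.linalg.solve(hess(x), g)   # solve the n linear equations G d = g
    return x, max_iter
```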

MODIFIED NEWTON METHODS

1. x^(i+1) = x^(i) − λ^(i) G(x^(i))^{-1} ∇f(x^(i))   (4)

with λ^(i) selected so that f(x^(i+1)) is minimized (line search).

2. x^(i+1) = x^(i) − [G(x^(i)) + γ I]^{-1} ∇f(x^(i))   (5)

where γ is selected to make G(x^(i)) + γ I positive definite.

Modifications 1. and 2. can be combined.

Modified Newton methods are regarded as among the best available. This is true even if G(x^(i)) (but not also ∇f(x^(i))) has to be obtained numerically, provided that n is not too large.
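The following sketch combines the two modifications: γ is increased until G + γI is positive definite (checked here by attempting a Cholesky factorisation) and the resulting direction can then be used with a line search. The rule for enlarging γ is an assumption made for illustration, not the lecture's prescription.

```python
import numpy as np

def modified_newton_direction(g, G):
    """Direction d = -(G + gamma*I)^{-1} g with gamma chosen so that
    G + gamma*I is positive definite."""
    n = G.shape[0]
    gamma = 0.0
    while True:
        try:
            L = np.linalg.cholesky(G + gamma * np.eye(n))
            break
        except np.linalg.LinAlgError:
            gamma = max(2.0 * gamma, 1e-3)    # assumed simple rule for enlarging gamma
    y = np.linalg.solve(L, -g)                # forward solve with the Cholesky factor
    return np.linalg.solve(L.T, y)            # back substitution gives d
```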


QUASI-NEWTON METHODS

Algorithm

Let x^(0) be an estimate of x*, H^(0) be a positive definite matrix

and g^(i) = ∇f(x^(i)), i = 0, 1, 2, ...

1. Set i = 0

2. Compute d^(i) = −H^(i) g^(i)

3. Find λ^(i) by minimizing f(x^(i) + λ d^(i)) using a suitable line search.

4. Compute x^(i+1) = x^(i) + λ^(i) d^(i)

p^(i) = x^(i+1) − x^(i) = λ^(i) d^(i)

q^(i) = g^(i+1) − g^(i)

5. Update H^(i) by either

(a) H^(i+1) = H^(i) + p^(i) p^(i)T / (p^(i)T q^(i)) − H^(i) q^(i) q^(i)T H^(i) / (q^(i)T H^(i) q^(i)), or

(b) H^(i+1) = H^(i) + (1 + q^(i)T H^(i) q^(i) / (p^(i)T q^(i))) p^(i) p^(i)T / (p^(i)T q^(i)) − (p^(i) q^(i)T H^(i) + H^(i) q^(i) p^(i)T) / (p^(i)T q^(i))   (6)

6. Set i = i + 1, go to 2.

Method (a) is known as the Davidon-Fletcher-Powell (DFP) method and method (b) as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method.


There are many other quasi-Newton methods depending on how H^(i) is updated. (Some methods approximate G rather than G^{-1}.)

Quasi-Newton methods are some of the best available optimization techniques, both when ∇f is known analytically and with numerical evaluation of ∇f, provided suitable care is taken. The best quasi-Newton method is currently held to be the BFGS method.
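For reference, the two inverse-Hessian updates of step 5 can be written as stand-alone functions; this is the textbook form of the DFP and BFGS formulas in (6), sketched in Python.

```python
import numpy as np

def dfp_update(H, p, q):
    """(a) DFP: H+ = H + p p^T / p^T q - H q q^T H / q^T H q."""
    Hq = H @ q
    return H + np.outer(p, p) / p.dot(q) - np.outer(Hq, Hq) / q.dot(Hq)

def bfgs_update(H, p, q):
    """(b) BFGS: H+ = H + (1 + q^T H q / p^T q) p p^T / p^T q
                     - (p q^T H + H q p^T) / p^T q."""
    pq = p.dot(q)
    Hq = H @ q
    return (H + (1.0 + q.dot(Hq) / pq) * np.outer(p, p) / pq
              - (np.outer(p, Hq) + np.outer(Hq, p)) / pq)
```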

CONJUGATE DIRECTION METHODS

Algorithm for Fletcher Reeves Method

1. Set i = 1, d^(0) = −∇f(x^(0))

2. Compute x^(i) = x^(i−1) + λ^(i−1) d^(i−1)

where λ^(i−1) is found by minimizing f(x^(i)) using a suitable line search.

3. Compute d^(i) = −g^(i) + β^(i−1) d^(i−1)   (8)

where

β^(i−1) = g^(i)T g^(i) / (g^(i−1)T g^(i−1))   (9)

and g^(i) = ∇f(x^(i)).


4. Has process converged?

YES: Stop.
NO: Set i = i + 1. Go to 2.

For quadratic f this algorithm produces conjugate directions in at most n steps and hence x*.

For general f the process is usually restarted with a step along the current gradient direction after each cycle of n steps.

There are a number of other ways of selecting β, for example in the Polak-Ribière method

β^(i−1) = g^(i)T (g^(i) − g^(i−1)) / (g^(i−1)T g^(i−1))   (10)

Notice that some quasi-Newton methods, e.g. the DFP method, are also conjugate gradient methods.

Conjugate direction methods are not generally as efficient as quasi-Newton methods.

Those conjugate direction methods that only use vectors and not matrices can be useful when n is large because of their relatively small storage requirements.
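A minimal Python sketch of the Fletcher-Reeves algorithm above follows, with the Polak-Ribière β of (10) as an option; the backtracking rule again stands in for the line search, and the safeguards are assumptions added for robustness.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-6, max_iter=5000):
    """Conjugate gradient sketch: d(i) = -g(i) + beta(i-1) d(i-1),
    restarted along -g after every cycle of n steps."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    g = grad(x)
    d = -g
    for i in range(max_iter):
        if np.linalg.norm(g) <= tol:
            return x, i
        if g.dot(d) >= 0:                      # safeguard: reset to steepest descent
            d = -g
        lam, fx = 1.0, f(x)
        while f(x + lam * d) > fx + 1e-4 * lam * g.dot(d):
            lam *= 0.5                         # backtracking stand-in for the line search
        x = x + lam * d
        g_new = grad(x)
        if beta_rule == "PR":                  # Polak-Ribiere formula (10)
            beta = g_new.dot(g_new - g) / g.dot(g)
        else:                                  # Fletcher-Reeves formula (9)
            beta = g_new.dot(g_new) / g.dot(g)
        d = -g_new + beta * d
        if (i + 1) % n == 0:                   # restart along the gradient each cycle
            d = -g_new
        g = g_new
    return x, max_iter
```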

SIMPLEX OR POLYTOPE METHODS

General Algorithm

1. Compute f at the vertices of a simplex in n dimensions.

2. Find the "worst" point by comparison. "Worst" means the point at which f has its largest value when looking for a minimum.


3. Replace the worst point by its reflection in the centroid of the other n points.

4. Repeat process with newly formed simplex.

5. Reduce size of simplexes as the minimum is approached.

6. Stop when the standard deviation of the values of f at the vertices of the current simplex is less than a preset value.

Simplex methods are not generally recommended when methods using derivatives are available. They can be useful in problems where f is not very well determined (as in the case of many chemical engineering problems) and is of small dimension.
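A reflection-only polytope sketch in Python is given below, following steps 1 to 6; the initial simplex construction and the shrink factor are assumptions made for illustration.

```python
import numpy as np

def polytope_minimize(f, x0, step=0.5, tol=1e-8, max_iter=5000):
    """Reflection-only polytope sketch: replace the worst vertex by its
    reflection in the centroid of the other n vertices."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    simplex = [x0] + [x0 + step * np.eye(n)[j] for j in range(n)]   # n + 1 vertices
    for _ in range(max_iter):
        values = np.array([f(v) for v in simplex])
        if values.std() < tol:                        # step 6: stop on small spread
            break
        worst = int(values.argmax())                  # step 2: the "worst" point
        centroid = (sum(simplex) - simplex[worst]) / n
        reflected = 2.0 * centroid - simplex[worst]   # step 3: reflection
        if f(reflected) < values[worst]:
            simplex[worst] = reflected
        else:                                         # step 5: shrink towards the best point
            best = simplex[int(values.argmin())]
            simplex = [best + 0.5 * (v - best) for v in simplex]
    values = np.array([f(v) for v in simplex])
    return simplex[int(values.argmin())]
```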

USE OF DIFFERENTIAL EQUATIONS

The equations of the orthogonal trajectories of f(x): R^n → R are

dx/dt = ±∇f(x)   (11)

A generalization of the trapezoidal rule for integrating (11) is

(x^(i+1) − x^(i)) / h = (1 − θ) ∇f(x^(i)) + θ ∇f(x^(i+1))   (12)

Applying Newton's method to solving the nonlinear equations (12) gives

x^(i+1) = x^(i) − [θ G(x^(i)) + h^{-1} I]^{-1} ∇f(x^(i))   (13)

This is a modified-Newton optimization process.


We have devised an algorithm that selects θ and h to give A-stability and an optimization process between steepest descent and Newton. The algorithm has been adapted to optimal control problems including tubular reactors.

GLOBAL OPTIMIZATION

Algorithm (Malik, 1982)

1. Find a local minimizer using a differential method and any minimization method.

2. Transfer the local minimizer to the origin and compute its domain of attraction by numerical use of Zubov's method.

3. Find a point outside this domain of attraction (if any) and, using this as the initial point, go to 1.

4. Repeat until all local minimizers and domains of attraction have been located.

The method seems to work well in 1 or 2 dimensions. There would be considerable difficulty in extending the technique to more than 2 dimensions.

LARGE-SCALE UNCONSTRAINED OPTIMIZATION

A large scale unconstrained optimization problem can be formulated when the dimension n is large. The notion of 'large scale' is machine dependent and hence it could be difficult to state a priori when a problem is of large size. However, today an unconstrained problem with 400 or more variables is usually considered a large scale problem.


Besides its theoretical importance, the growing interest in recent years in solving problems of large size derives from the fact that problems with larger and larger numbers of variables are arising very frequently from the real world as a result of modeling systems of a very complex structure.

The main difficulty in dealing with large scale problems is the fact that effective algorithms for small scale problems do not necessarily translate into efficient algorithms when applied to solve large scale problems. Therefore in dealing with large scale unconstrained problems with a large number of variables, modifications must be made to the standard implementation of the many existing algorithms for the small scale case.

A basic feature of an algorithm for large-scale problems is the low storage overhead needed to make its implementation practicable. As in the small-scale case, most large-scale unconstrained optimization algorithms are iterative methods, which generate a sequence of points according to the scheme

x^(i+1) = x^(i) + λ^(i) d^(i),  i = 0, 1, 2, ...

where x^(0) is the estimate of x*, d^(i) is the direction of search and λ^(i) is the "step length" along this direction. Obviously, in large-scale optimization it is important that an algorithm be able to compute the search direction efficiently and economically. Therefore it is always possible to include modifications in the calculation of the search direction to serve this purpose.

Steepest Descent Method

One good method for solving large-scale unconstrained optimization problems is the steepest descent method. Due to the very low storage required by a standard implementation and the ensured global convergence, it could be attractive in the large-scale setting. However, its convergence rate is only linear and therefore it is too slow to be used. Some particular scaling has been considered and this led to an improvement in the efficiency of the method.

Newton-type Methods

One of the most effective Newton-type methods for solving large-scale problems is the truncated Newton method introduced by Dembo and Steihaug (1983). This method computes a Newton-type direction by truncating the conjugate gradient (CG) method iterates (inner iterations) whenever a required accuracy is obtained, so that superlinear convergence is guaranteed.

Truncated Newton Algorithm

Step 1. Outer iterations

For i = 0, 1, ... compute g^(i) and test for convergence.

Step 2. Inner iterations (computation of the direction). Iterate the CG algorithm until a termination criterion is satisfied.

Step 3. Compute a step-length λ^(i) by a line search procedure. Set x^(i+1) = x^(i) + λ^(i) d^(i); i := i + 1; go to Step 1.
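A sketch of the inner CG iterations of Step 2 is given below. The residual test η||g|| and the finite-difference Hessian-vector product are common choices assumed here for illustration; they are not necessarily the exact rules used by Dembo and Steihaug.

```python
import numpy as np

def truncated_newton_direction(grad, x, g, eta=0.5, eps=1e-7, max_inner=None):
    """Approximately solve G(x) d = -g by CG, truncating when the residual is
    small enough or when non-positive curvature is met.  G v is estimated by a
    forward difference of gradients, so no Hessian is ever formed."""
    n = x.size
    max_inner = max_inner if max_inner is not None else n
    Gv = lambda v: (grad(x + eps * v) - g) / eps       # finite-difference G(x) v
    d = np.zeros(n)
    r = -g.copy()                                      # residual of G d = -g at d = 0
    p = r.copy()
    for _ in range(max_inner):
        Gp = Gv(p)
        curv = p.dot(Gp)
        if curv <= 0.0:                                # non-positive curvature: truncate
            return d if d.any() else -g
        alpha = r.dot(r) / curv
        d = d + alpha * p
        r_new = r - alpha * Gp
        if np.linalg.norm(r_new) <= eta * np.linalg.norm(g):   # required accuracy reached
            return d
        p = r_new + (r_new.dot(r_new) / r.dot(r)) * p
        r = r_new
    return d
```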


Lucidi et al. (1998) used a curvilinear line search in the truncated Newton algorithm to solve large-scale problems; the resulting methods are globally convergent towards points where positive semi-definiteness of the Hessian matrix is satisfied.

Limited Memory BFGS Method (L-BFGS)

Another effective approach to large-scale unconstrained optimization is the limited memory BFGS method (L-BFGS) proposed by Nocedal (1980) and then studied by Nash and Nocedal (1991). This method resembles the BFGS quasi-Newton method and satisfies the requirement to solve large-scale problems because the storage of matrices is avoided by storing a number of vector pairs.

L-BFGS Algorithm

Step 1. Choose x^(0) and the initial matrix H^(0). Set i = 0.

Step 2. Compute d^(i) = −H^(i) g^(i)

and x^(i+1) = x^(i) + λ^(i) d^(i)

where H^(i) is the approximation to the inverse Hessian of f(x) at the i-th iteration and λ^(i) satisfies the Wolfe conditions

f(x^(i) + λ^(i) d^(i)) ≤ f(x^(i)) + β_1 λ^(i) g^(i)T d^(i),  ∇f(x^(i) + λ^(i) d^(i))^T d^(i) ≥ β_2 g^(i)T d^(i)   (14)

(the step-length λ = 1 is tried first, with β_1 = 10^{-4} and β_2 = 0.9).

14

Malik Hj. Abu Hassan

Step 3. Let m̂ = min{i, m − 1}. Update H^(0) m̂ + 1 times by using the pairs {q^(j), p^(j)}, j = i − m̂, ..., i, i.e. let

H^(i+1) = (V^(i)T ··· V^(i−m̂)T) H^(0) (V^(i−m̂) ··· V^(i))
+ ρ^(i−m̂) (V^(i)T ··· V^(i−m̂+1)T) p^(i−m̂) p^(i−m̂)T (V^(i−m̂+1) ··· V^(i))
+ ρ^(i−m̂+1) (V^(i)T ··· V^(i−m̂+2)T) p^(i−m̂+1) p^(i−m̂+1)T (V^(i−m̂+2) ··· V^(i))
+ ···
+ ρ^(i) p^(i) p^(i)T   (15)

where ρ^(j) = 1/(q^(j)T p^(j)) and V^(j) = I − ρ^(j) q^(j) p^(j)T.

Step 4. Set i:= i+1, and go to Step 2.
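In practice the product form (15) is applied to the gradient through the standard two-loop recursion, which never forms a matrix; the sketch below assumes the stored pairs are kept oldest first and that H^(0) is a multiple of the identity.

```python
import numpy as np

def lbfgs_direction(g, pairs, h0=1.0):
    """Two-loop recursion: apply the L-BFGS inverse Hessian approximation,
    defined implicitly by the stored pairs (p(j), q(j)), to the gradient g."""
    alphas, rhos = [], []
    v = g.copy()
    for p, q in reversed(pairs):               # backward pass over the pairs
        rho = 1.0 / q.dot(p)
        a = rho * p.dot(v)
        v = v - a * q
        alphas.append(a)
        rhos.append(rho)
    v = h0 * v                                  # apply the initial matrix H(0) = h0 * I
    for (p, q), a, rho in zip(pairs, reversed(alphas), reversed(rhos)):
        b = rho * q.dot(v)                      # forward pass
        v = v + (a - b) * p
    return -v                                   # search direction d = -H g
```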

Positive-definite Scaled Symmetric Rank One Method

The symmetric rank one (SR1) update is one of the simplest quasi-Newton updates and this simplicity makes the SR1 update a candidate for large-scale problems. However, a basic disadvantage of the SR1 update is that it may not preserve positive definiteness, even when the current approximation is positive definite. Moreover, the SR1 update may also be indefinite. Several researchers have renewed their interest in the study of the SR1 update. Khalfan (1989) used a trust region to avoid the possible loss of positive definiteness. Ip and Todd (1988) suggested sizing the SR1 update to avoid the loss of definiteness, which results in the optimally conditioned sized SR1 updates. A simple treatment of these problems is to restart the update with the initial approximation, mostly the identity matrix, whenever this loss arises. However, our numerical experience shows that restart with the identity matrix is not a good choice. A simple restart procedure for the SR1 method using the standard line search to avoid the loss of positive definiteness will be implemented.

15

Towards Large Scale Unconstrained Optimization

Algorithm - Quasi-Newton SR1 Method with Restart (NS-SR1)

Given an initial point x^(0) and an initial positive definite matrix H^(0) = I, set i = 0.

Step 1. If the convergence criterion ||g^(i)|| ≤ ε max{1, ||x^(i)||} is achieved, then stop.

Step 2. Compute a quasi-Newton direction

d^(i) = −H^(i) g^(i)   (16)

where H^(i) is obtained from the SR1 update

H^(i) = H^(i−1) + (p^(i−1) − H^(i−1) q^(i−1))(p^(i−1) − H^(i−1) q^(i−1))^T / ((p^(i−1) − H^(i−1) q^(i−1))^T q^(i−1))   (17)

Step 3. If d^(i)T g^(i) > 0 (H^(i) is not positive definite), set H^(i) = I and subsequently d^(i) = −g^(i). Else retain (16).

Step 4. Using a line search, find an acceptable step-length λ^(i) such that the Wolfe conditions (14) are satisfied (λ^(i) = 1 is always tried first, with β_1 = 10^{-4} and β_2 = 0.9).

Step 5. Compute x^(i+1) = x^(i) + λ^(i) d^(i), p^(i) = x^(i+1) − x^(i) and q^(i) = g^(i+1) − g^(i).

Step 6. Compute the next inverse Hessian approximation H^(i+1).

Step 7. Set i := i + 1, and go to Step 1.
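A sketch of the SR1 update (17) and the restart test of Step 3 follows; the skip rule for a tiny denominator is an added safeguard, not part of the algorithm as stated.

```python
import numpy as np

def sr1_inverse_update(H, p, q, skip_tol=1e-8):
    """SR1 update (17): H+ = H + (p - Hq)(p - Hq)^T / ((p - Hq)^T q),
    skipped when the denominator is too small (assumed safeguard)."""
    v = p - H @ q
    denom = v.dot(q)
    if abs(denom) < skip_tol * np.linalg.norm(v) * np.linalg.norm(q):
        return H
    return H + np.outer(v, v) / denom

def ns_sr1_direction(H, g):
    """Steps 2-3 of NS-SR1: d = -H g, restarting with H = I whenever
    d^T g > 0 signals that H is not positive definite."""
    d = -H @ g
    if d.dot(g) > 0:
        H = np.eye(g.size)
        d = -g
    return d, H
```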


Numerical results show that restarting with the identity matrix may not be a convenient choice. The SR1 update may not preserve positive definiteness at the next iteration even if the current one is positive definite, i.e. when H^(i) is positive definite and p^(i)T q^(i) > 0. The algorithm will keep on restarting with little or no progress until the maximum number of function evaluations allowed is exceeded. Hence, restart with the identity matrix is clearly not a good choice. Instead, we consider the cheap choice of replacing the identity matrix with a positive multiple of the identity matrix. Malik et al. (2002a, 2002b) used a positive multiple of the identity matrix instead of the identity matrix itself. The positive scaling factor used is the optimal solution of the measure defined by the problem. A replacement in the form of a positive multiple of the identity matrix is necessary for SR1 when it is not positive definite.

We present a description of the scaled SR1 (S-SR1) algorithm that ensures the positive definiteness of the SR1 update.

Algorithm - S-SR1

Given an initial point x^(0) and an initial positive definite matrix H^(0) = I, set i = 0.

Step 1. If the convergence criterion ||g^(i)|| ≤ ε max{1, ||x^(i)||} is achieved, then stop.

Step 2. Compute a quasi-Newton direction d^(i) = −H^(i) g^(i), where H^(i) is given by (17).

17

Towards Large Scale Unconstrained Optimization

Step 3. If d^(i)T g^(i) > 0 (H^(i) is not positive definite) or i = 1, set H^(i) = δ^(i−1) I and subsequently d^(i) = −δ^(i−1) g^(i). Else retain (16).

Step 4. Using a line search, find an acceptable step-length λ^(i) such that the Wolfe conditions (14) are satisfied (λ^(i) = 1 is always tried first, with β_1 = 10^{-4} and β_2 = 0.9).

Step 5. Compute x^(i+1) = x^(i) + λ^(i) d^(i), p^(i) = x^(i+1) − x^(i) and q^(i) = g^(i+1) − g^(i).

Step 6. Compute the next inverse Hessian approximation H^(i+1).

Step 7. Set i:= i+1, and go to Step 1.

A set of test problems has been run and we find that the S-SR1 update requires fewer iterations and function calls than NS-SR1. Moreover, we see that most of the problems can be solved by S-SR1 within a certain number of function calls but not by NS-SR1. Therefore, by a simple scaling of the SR1 method, we can improve the SR1 method tremendously. Also, the S-SR1 method, on average, requires fewer iterations than the BFGS method in a line search algorithm. Under conditions that do not assume uniform linear independence, but do assume positive definiteness and boundedness of the Hessian approximations, the S-SR1 method has superlinear and quadratic convergence.

MATRIX-STORAGE FREE QUASI-NEWTON METHOD

The matrix-storage free BFGS (MF-BFGS) method is a method that combines a restarting strategy with the BFGS method. We also attempt to construct a new matrix-storage free method which uses the SR1 update (MF-SR1). The MF-SR1 method is superior to the MF-BFGS method on some problems. However, for other problems the MF-BFGS method is more competitive because of its rapid convergence. The matrix-storage free methods can be greatly accelerated by means of a simple scaling.

Matrix-storage free quasi-Newton methods (MF methods for short) are intended to solve large-scale problems when the Hessian of the objective function has no particular structure. In their general setting, these methods do not try to take advantage of the possible sparsity of the Hessian. They might help in filling the gap between conjugate gradient (CG) and quasi-Newton methods. The former use few locations in memory, O(n), but converge rather slowly and require expensive line searches. Quasi-Newton methods have a fast rate of convergence (superlinear) and do not need exact line searches, but require large memory, O(n^2) storage locations.

The MF methods are an adaptation of the quasi-Newton method to large-scale problems. The implementation described is almost identical to that of the standard quasi-Newton method. The only difference is in the matrix update. Instead of storing the matrices H^(i), one stores only the most recent pair {q^(i), p^(i)} that defines them implicitly.

19

Towards Large Scale Unconstrained Optimization

Algorithm - MF-BFGS Method

Step 1. Choose x^(0) and starting matrix H^(0) = I. Set i = 0.

Step 2. (Step computation procedure using the BFGS formula.) Let x^(i) be the current iterate. Compute p^(i−1), q^(i−1) and d^(i):

Step a. Compute p^(i−1)T q^(i−1), q^(i−1)T q^(i−1), p^(i−1)T g^(i) and q^(i−1)T g^(i).

Step b. Compute the 2 × 1 vector y = (γ_1, γ_2)^T, where

γ_1 = ρ^(i−1) [(1 + ρ^(i−1) q^(i−1)T q^(i−1)) p^(i−1)T g^(i) − q^(i−1)T g^(i)],
γ_2 = −ρ^(i−1) p^(i−1)T g^(i) and ρ^(i−1) = 1/(p^(i−1)T q^(i−1)).

Step c. Compute H^(i) g^(i) = g^(i) + γ_1 p^(i−1) + γ_2 q^(i−1) and d^(i) = −H^(i) g^(i).

Step 3. Compute x^(i+1) = x^(i) + λ^(i) d^(i)

where λ^(i) satisfies the Wolfe conditions (14) (λ^(i) = 1 is always tried first, with β_1 = 10^{-4} and β_2 = 0.9).

Step 4. If the stopping criterion is achieved, then stop; else set i := i + 1, and go to Step 2.
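A sketch of this step computation in Python is given below: the direction is obtained from the single most recent pair (p, q) with the previous approximation taken as the identity, so only the four inner products of Step a and two n-vectors are needed.

```python
import numpy as np

def mf_bfgs_direction(g, p, q):
    """MF-BFGS step computation: apply the BFGS update of the identity by the
    most recent pair (p, q) to g, without storing any matrix.
    H g = g + gamma1 * p + gamma2 * q."""
    rho = 1.0 / p.dot(q)
    qq, pg, qg = q.dot(q), p.dot(g), q.dot(g)        # inner products of Step a
    gamma1 = rho * ((1.0 + rho * qq) * pg - qg)      # Step b
    gamma2 = -rho * pg                               # Step b
    Hg = g + gamma1 * p + gamma2 * q                 # Step c
    return -Hg                                       # search direction d = -H g
```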


In Step 2 (the step computation procedure), each inner product in Step a requires n multiplications; Step b requires a 2 × 1 vector storage. When this procedure is part of an algorithm using a line search procedure, the scalar p^(i−1)T g^(i−1) is also required for the line search, whereas g^(i)T g^(i) is needed to check the stopping conditions of the algorithm.

Algorithm - MF-SR1 Method

The MF-SR1 method differs from the MF-BFGS method only in Step 2.

Step 2. Let x^(i) be the current iterate. Compute p^(i−1), q^(i−1) and d^(i):

Step a. Compute p^(i−1)T q^(i−1), q^(i−1)T q^(i−1), p^(i−1)T g^(i) and q^(i−1)T g^(i). Then calculate v^(i−1)T q^(i−1) and v^(i−1)T g^(i), where v^(i−1) = p^(i−1) − q^(i−1).

Step b. Compute γ = ω^(i−1) v^(i−1)T g^(i), where ω^(i−1) = 1/(v^(i−1)T q^(i−1)).

Step c. Compute H^(i) g^(i) = g^(i) + γ v^(i−1).
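The corresponding Python sketch is even shorter, since the memoryless SR1 update needs only the single correction vector v = p − q; the fallback to the steepest descent direction when v^T q is tiny is an added assumption.

```python
import numpy as np

def mf_sr1_direction(g, p, q, skip_tol=1e-10):
    """MF-SR1 step computation: v = p - q, omega = 1/(v^T q),
    H g = g + omega * (v^T g) * v, and d = -H g."""
    v = p - q
    vq = v.dot(q)
    if abs(vq) < skip_tol:
        return -g                                    # assumed safeguard, not in the lecture
    gamma = v.dot(g) / vq                            # Step b
    return -(g + gamma * v)                          # Step c, negated
```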


In this procedure we require one n × 1 vector storage for v^(i−1). Each inner product in Step a requires n multiplications. Both v^(i−1)T q^(i−1) and v^(i−1)T g^(i) can be obtained using p^(i−1)T q^(i−1), q^(i−1)T q^(i−1), p^(i−1)T g^(i) and q^(i−1)T g^(i). The cost for ω^(i−1) is free since the inner products have been stored.

The evaluation of optimization methods on large-scale test problems is more difficult than in the small dimensional case. When the number of variables is very large, the computational effort of the iteration dominates the cost of evaluating the function and gradient. The performance of the MF-SR1 method is poor due to the choice of starting matrix H^(0) = I. Malik et al. (2001, 2003, 2004) and Leong and Malik (2005) found a suitable replacement of the identity matrix as the starting/restart matrix. A simple scaling can dramatically reduce the number of iterations of their partitioned quasi-Newton methods.

MF-BFGS with Scaling (MF-BFGS-S)

The steps of the algorithm are similar to those of MF-BFGS; the only difference is in Step 2.

Step 2. Let x^(i) be the current iterate.

Step a. Compute p^(i−1)T q^(i−1), q^(i−1)T q^(i−1), p^(i−1)T g^(i) and q^(i−1)T g^(i).

Step b. Compute δ^(i) = p^(i−1)T q^(i−1) / (q^(i−1)T q^(i−1)).

22

Malik Hj. Abu Hassan

Step c. Compute the 2 × 1 vector y = (γ_1, γ_2)^T, where

γ_1 = ρ^(i−1) [(1 + δ^(i) ρ^(i−1) q^(i−1)T q^(i−1)) p^(i−1)T g^(i) − δ^(i) q^(i−1)T g^(i)],
γ_2 = −ρ^(i−1) p^(i−1)T g^(i) and ρ^(i−1) = 1/(p^(i−1)T q^(i−1)).

Step d. Compute

H^(i) g^(i) = δ^(i) g^(i) + [p^(i−1)  δ^(i) q^(i−1)] y = δ^(i) g^(i) + γ_1 p^(i−1) + γ_2 δ^(i) q^(i−1).

The cost of Step b is free since both inner products have been stored while performing Step a.
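A sketch of the scaled step computation follows, with δ = p^T q / q^T q from Step b; apart from the scaling it mirrors the MF-BFGS sketch given earlier.

```python
import numpy as np

def mf_bfgs_s_direction(g, p, q):
    """MF-BFGS-S step computation: BFGS update of delta*I by the most recent
    pair (p, q), applied directly to g."""
    pq, qq = p.dot(q), q.dot(q)
    delta = pq / qq                                  # scaling factor of Step b
    rho = 1.0 / pq
    pg, qg = p.dot(g), q.dot(g)
    gamma1 = rho * ((1.0 + delta * rho * qq) * pg - delta * qg)   # Step c
    gamma2 = -rho * pg                                            # Step c
    Hg = delta * g + gamma1 * p + gamma2 * delta * q              # Step d
    return -Hg
```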

MF-SR1 with Scaling (MF-SR1-S)

The steps of the algorithm are similar to those of MF-SR1 except for Step 2.

Step 2. Let x^(i) be the current iterate. Compute p^(i−1), q^(i−1) and d^(i):

Step a. Compute p^(i−1)T q^(i−1), q^(i−1)T q^(i−1), p^(i−1)T p^(i−1), p^(i−1)T g^(i) and q^(i−1)T g^(i).

Step b. Compute

δ^(i) = p^(i−1)T p^(i−1) / (q^(i−1)T p^(i−1)) − sqrt[ (p^(i−1)T p^(i−1) / (q^(i−1)T p^(i−1)))^2 − p^(i−1)T p^(i−1) / (q^(i−1)T q^(i−1)) ].

Step c. Compute and store v^(i−1) = p^(i−1) − δ^(i) q^(i−1), then calculate v^(i−1)T q^(i−1) and v^(i−1)T g^(i).

23

Towards Large Scale Unconstrained Optimization

Step d. Compute γ = ω^(i−1) v^(i−1)T g^(i), where ω^(i−1) = 1/(v^(i−1)T q^(i−1)).

Step e. Compute H^(i) g^(i) = δ^(i) g^(i) + γ v^(i−1).

Compared with the step computation procedure for MF-SR1, an additional inner product is required for calculating p^(i−1)T p^(i−1). Both v^(i−1)T q^(i−1) and v^(i−1)T g^(i) can also be obtained from the inner products already stored.

Comparison between the MF-BFGS and MF-SR1 Methods

From the numerical results, the number of function calls and gradient calls is in the range of 1.5 to 1.7 times the number of iterations. Function evaluations are not needed in computing the search directions and hence only the gradient calls are being counted. Computing the new search direction, for example in Step 2 of the truncated Newton algorithm, requires two gradient calls per iteration, that is, we have to estimate q^(i) and p^(i).

Moreover, when an exact line search is used, at least one cubic interpolation is performed at each iteration, requiring a further two gradient calls. Although the iteration number is very competitive, the computational labour is quite satisfactory.

In terms of the number of iterations and function calls, the MF-BFGS-S method is slightly better than the MF-SR1-S method. However, when n is large (≥ 5000) the MF-SR1-S method always performs better. In fact the MF-SR1-S requires less CPU time to converge to the optimal point. This is due to the fact that SR1 updates require less computational effort when compared with the BFGS updates.

Scaling is very important in the MF-SR1-S method, especially when the SR1 updates are not necessarily positive definite. The choice of the scaling factor plays an important role since we have to maintain the positive definiteness of the new update. On the other hand, the influences of the scaling factor for BFGS updates are difficult to interpret. For example, when the scaling factor is large near the optimal solution, the step remains too large during the run.

Comparison between MF-BFGS and L-BFGS methods

We compare the amount of storage required by the limited memory methods (L-BFGS) for various values of m (the number which determines the number of matrix updates that can be stored) and n with the storage required by the MF-BFGS method. The MF-BFGS method requires fewer storage locations compared with the L-BFGS method.

Defining the index of computational labour as ICL = (n + 1) n_f / n_i, where n_f is the total number of function calls and n_i the total number of iterations, we compare the performance of both methods and we find that MF-BFGS is very competitive since the difference in the ICL of the two methods is very small. Sometimes the ICL of MF-BFGS is slightly lower than that of L-BFGS, or vice versa.

To investigate the effect of increasing the storage in both methods, we define 'storage-up' as the ratio of storage locations for MF-BFGS to storage locations for L-BFGS, and 'speed-up' in terms of n_i and n_f as the ratios of total n_i of MF-BFGS to total n_i of L-BFGS and of total n_f of MF-BFGS to total n_f of L-BFGS. Thus if the 'speed-up' factors are less than the 'storage-


up' factors, the L-BFGS methods do not gain much from additional storage, whereas a large number means a substantial improvement. This means that although there is improvement when we switch from the MF-BFGS method to the L-BFGS methods, the gain is not dramatic. Hence we conclude that the MF-BFGS method is efficient if the storage resource is low.

Comparison between MF-BFGS and Conjugate Gradient Methods

We compare the MF-BFGS method with some of the well-known conjugate gradient methods, viz. the conjugate gradient (CG) methods using the Fletcher-Reeves and Polak-Ribière formulae. In terms of function calls and number of iterations, the MF-BFGS method performs better than the conjugate gradient methods. In terms of ICL, the MF-BFGS method is clearly much superior to the conjugate gradient methods.

Solving Extremely Large-Scale problems

The L-BFGS, MF-SR1, MF-BFGS and CG methods can quite easily solve problems with 1000 variables. Some require many function and gradient calls and a large number of iterations. The index of computational labour varies from one method to another. We have tested the performance of all these methods on problems with 10^6 variables. The MF-BFGS method is the only method that solves these extremely large-scale problems successfully and efficiently. The L-BFGS method cannot solve these extremely large scale problems because the memory is insufficient to start the runs and it requires large storage. All CG methods fail to converge when applied to these large-scale problems.


CONCLUSIONS

Besides its theoretical importance, the growing interest in recent years in solving problems of large size derives from the fact that problems with larger and larger numbers of variables are arising very frequently from the real world as a result of modeling systems of a very complex structure.

Solving large scale unconstrained optimization problems involves convergence properties, storage and memory requirements, computational cost, scaling factors, restart strategies and many other issues related to large scale unconstrained optimization. We have shown that scaling is important in MF methods, both in the BFGS and SR1 methods. A positive definite initial matrix is needed at each iteration. Here, a positive multiple of the identity matrix and positive scaling factors are used to obtain the optimal solution.

The numerical experiments show that the MF-SR1-S method gives good results. The MF-SR1-S method uses the SR1 update for the approximate inverse Hessian. The SR1 update has major advantages in that it is simple, requires less computation and has some very strong convergence properties. So the MF-SR1-S method is an ideal method for large scale optimization.

The L-BFGS methods require more storage and are not suitable when the problem has a very large number of variables or the storage resource is very low. We found that the CG methods require only low storage but more function and gradient calls. This is very unsuitable if the function and gradient evaluations are expensive or an inaccurate line search is used. Moreover, there is no guarantee of global convergence for CG methods when applied to large scale problems.

The MF-BFGS method is appealing for several reasons: it is simple to implement; it requires a moderate number of function and gradient calls and has a low storage requirement. In terms of


computational labour, MF-BFGS is competitive with L-BFGS and is clearly superior to the CG methods. Lastly, the MF-BFGS method successfully solves extremely large scale problems with 10^6 variables while the other methods fail to find the optimal solution.


REFERENCES

Dembo, R. and T. Steihaug. 1983. Truncated-Newton algorithms for large scale unconstrained optimization. Mathematical Programming 26: 190-212.

Ip, C.M. and M.J. Todd. 1988. Optimal conditioning and convergence in rank one quasi-Newton updates. SIAM Journal on Numerical Analysis 25: 206-221.

Khalfan, H.F.H. 1989. Topics in quasi-Newton methods for unconstrained optimization. Ph.D. Thesis, University of Colorado.

Leong Wah June and Malik Hj. Abu Hassan. 2005. Modifications of the limited memory BFGS algorithm for large scale nonlinear optimization. Mathematical Journal of Okayama University 47: 175-188.

Lucidi, S., F. Rochetich and M. Roma. 1998. Curvilinear stabilization techniques for truncated Newton methods in large scale unconstrained optimization. SIAM Journal on Optimization 8: 916-939.

Malik Abu Hassan. 1982. Extension of Zubov's method for the determination of domains of attraction. Ph.D. Thesis, Loughborough University of Technology, England.

Malik Hj. Abu Hassan, Mansor b. Monsi and Leong Wah June. 2001. Limited BFGS method for large scale optimization. MATEMATIKA 17: 5-23.

Malik Hj. Abu Hassan, Mansor b. Monsi and Leong Wah June. 2002a. Scaling symmetric rank one update for unconstrained optimization. ANALISIS 9: 63-76.

Malik Hj. Abu Hassan, Mansor b. Monsi and Leong Wah June. 2002b. Convergence of a positive definite scaled SR1 method. MATEMATIKA 18: 117-127.


Malik Hj. Abu Hassan, Mansor b. Monsi and Leong Wah June. 2003. Numerical experiments with matrix-storage free BFGS method for large scale unconstrained optimization. MATEMATIKA 19: 107-119.

Malik Hj. Abu Hassan, Mansor b. Monsi and Leong Wah June. 2004. Limited memory BFGS update with modifications. Journal of Mathematics and Natural Sciences 14: 103-117.

Nash, S. and J. Nocedal. 1991. A numerical study of the limited memory BFGS method and the truncated-Newton method for large scale optimization. SIAM Journal on Optimization 1: 358-372.

Nocedal, J. 1980. Updating quasi-Newton matrices with limited storage. Mathematics of Computation 35: 773-782.


BIOGRAPHY

DR. MALIK HJ. ABU HASSAN was born on 17th July 1948 in Masjid Tanah, Malacca, the second eldest in a family of seven. He received his early education at Masjid Tanah English School, St. David's High School and Malacca High School. He was the best student in Mathematics both in the Lower Certificate of Education (LCE) and SC/FMC examinations while studying at St. David's. After obtaining a full certificate in the Higher School Certificate (HSC) examination in 1968, he gained admission to the University of Malaya in 1969 and obtained his B.Sc. (Hons.) degree in Mathematics in 1973. In the same year he started his career as a tutor in the Mathematics Department of the Basic Science Division in Universiti Pertanian Malaysia. Due to his love for Mathematics he went on to further his studies at the University of Aston in Birmingham, England. There he obtained his Master of Science in Industrial Mathematics degree in 1974 and was also the top student in his M.Sc. class. Due to the shortage of staff at Universiti Pertanian Malaysia at that time he was immediately appointed as a lecturer in 1974. After four years of gaining experience as a lecturer he left again to continue his studies at Loughborough University of Technology in England, where he was conferred his Doctorate of Philosophy in Applied Mathematics in 1982.

Dr. Malik was appointed the Head of the Department of Mathematics in UPM between 1990-1994 and 2002-2004. His positive attitude and support of his colleagues within his department as well as the University at large make him enjoy his work as a lecturer in UPM tremendously. Dr. Malik loves teaching his students both at the undergraduate and graduate levels. He teaches several Mathematics courses ranging from first year courses up to graduate courses. After 34 years of service he has taught thousands of students and supervised several graduate students. Apart from teaching he is also active in research activities. His current research interest is focused on large scale unconstrained optimization problems, domains of attraction of non-autonomous systems, linear quadratic problems for descriptor systems and predator-prey population models including time delay and harvesting. He has won several medals for his work in the Invention and Research Exhibition at UPM. He also received the Excellent Service Award in 1995, 1997 and 2005 and, from 1995 onwards, the Excellent Service Certificate.

During his younger days he was active in sports, especially in gymnastics, sepak takraw, hockey, carom and squash. He participated in the inter-faculty competitions and was champion in sepak takraw for four successive years. In 1977 he represented the Faculty of Science in hockey and emerged as champion. Between 1976-78 he was elected as chairman of the Faculty of Science's Sport, Social and Welfare Club. In 1977-78 he was the chairman of the UPM Sepak Takraw Club and advisor of the UPM students' Sepak Takraw team.

Dr. Malik was the chief examiner in Applied Mathematics and Further Mathematics for the STPM examination from 1989 to 2002. He is also professionally involved in the Matriculation Division of the Ministry of Education, where he has been the chief examiner and chief assessor since 1999.


LIST OF INAUGURAL LECTURES

1. Prof. Dr. Sulaiman M. YassinThe Challenge to Communication Research in Extension22July 1989

2. Pro£ Ir. Abang Abang AliIndigenous Materials and Technologyfor Low Cost Housing30 August1990

3. Pro£ Dr. Abdul Rahman Abdul RazakPlant ParasiticNematodes, Lesser Known Pests of Agricultural Crops30 January 1993

4. Prof. Dr. Mohamed Suleiman
Numerical Solution of Ordinary Differential Equations: A Historical Perspective
11 December 1993

5. Prof. Dr. Mohd. Ariff HusseinChanging Roles ofAgricultural Economics5 March 1994

6. Prof. Dr. Mohd. Ismail AhmadMarketing Management: Prospects and Challengesfor Agricultural6 April 1994

7. Prof. Dr. Mohamed Mahyuddin Mohd. DahanThe Changing Demand for Livestock Products20 April 1994

8. Pro£ Dr. Ruth KiewPlant Taxonomy, Biodiversity and Conservation11 May 1994


9. Pro£ Ir. Dr. Mohd. Zohadie BardaieEngil;eering Technology Developments Propelling Agriculture intothe 21st Century28 May 1994

10. Prof. Dr. Shamsuddin jusopRock, Mineral and Soil18 June 1994

11. Prof. Dr. Abdul Salam AbdullahNatural Toxicants AffictingAnimal Health and Production29 June 1994

12. Prof. Dr. Mohd. Yusof HusseinPest Control: A Challenge in Applied Ecology9 July 1994

13. Prof. Dr. Kapt. Mohd. Ibrahim Haji MohamedManaging Challenges in Fisheries Development through Science andTechnology23 July 1994

14. Prof. Dr. Hj. Amat Juhari Moin
Sejarah Keagungan Bahasa Melayu
6 Ogos 1994

15. Prof. Dr. Law Ah TheemOil Pollution in the Malaysian Seas24 September 1994

16. Prof. Dr. Md. Nordin Hj. LajisFine Chemicals from Biological Resources: The Wealth from Nature21 January 1995


17. Prof. Dr. Sheikh Omar Abdul RahmanHealth, Disease and Death in Creatures Great and Small25 February 1995

18. Prof. Dr. Mohamed Shariff Mohammed Din
Fish Health: An Odyssey through the Asia-Pacific Region
25 March 1995

19. Prof Dr. Tengku Azmi Tengku IbrahimChromosome Distribution and Production Petformance of WaterBuffaloes .6 May 1995

20. Prof. Dr. Abdul Hamid Mahmood
Bahasa Melayu sebagai Bahasa Ilmu - Cabaran dan Harapan
10 Jun 1995

21. Prof. Dr. Rahim Md. SailExtension Education for Industrialising Malqysia: Trends, Prioritiesand Emerging Issues22July 1995

22. Prof. Dr. Nik Muhammad Nik Abd. MajidThe Diminishing Tropical Rain Forest: Causes, Symptoms and Cure19 August 1995

23. Prof. Dr. Ang Kok Jee
The Evolution of an Environmentally Friendly Hatchery Technology for Udang Galah, the King of Freshwater Prawns and a Glimpse into the Future of Aquaculture in the 21st Century
14 October 1995


24. Prof. Dr. Sharifuddin Haji Abdul Hamid
Management of Highly Weathered Acid Soils for Sustainable Crop Production
28 October 1995

25. Prof. Dr. Yu Swee YeanFish Processing and Preservation: Recent Advances and FutureDirections9 December 1995

26. Prof. Dr. Rosli MohamadPesticide Usage: Concern and Options10 February 1996

27. Prof. Dr. Mohamed Ismail Abdul KarimMicrobial Fermentation and Utilization of Agricultural Bioresourcesand Wastes in Malaysia2 March 1996

28. Pro£ Dr. Wan Sulaiman Wan HarunSoil Pbysics: From Glass Beads to Precision Agriculture16 Marc h 1996

29. Prof. Dr. Abdul Aziz Abdul Rahman
Sustained Growth and Sustainable Development: Is there a Trade-Off for Malaysia
13 April 1996

30. Prof. Dr. Chew Tek Ann
Sharecropping in Perfectly Competitive Markets: A Contradiction in Terms
27 April 1996


31. Pro£ Dr. Mohd. Yusuf SulaimanBack to the Future with the Sun18 May 1996

32. Prof Dr. Abu Bakar SallehEnzyme Technology:The Basis for BiotechnologicalDevelopment8 June 1996

33. Prof Dr. Kamel Ariffin Mohd. AtanThe Fascinating Numbers28 June 1996

34. Prof. Dr. Ho Yin WanFungi, Friends or Foes27 July 1996

35. Prof. Dr. Tan Soon GuanGenetic Diversity of Some Southeast Asian Animals: Of Buffaloesand Goats and Fishes Too10 August 1996

36. Prof. Dr. Nazaruddin Mohd. JaliWill Rural SociologyRemain Relevant in the 21st. Century21 September 1996

37. Prof. Dr. Abdul Rani Bahaman
Leptospirosis - A Model for Epidemiology, Diagnosis and Control of Infectious Diseases
16 November 1996

38. Prof. Dr. Marziah MahmoodPlant Biotechnology - Strategiesfor Commercialization21 December 1996


39. Pro£ Dr. Ishakl Hj. OmarMarket Relationships in the Malaysian Fish Trade; Theory andApplication22 March 1997

40. Prof. Dr. Suhaila MohamadFood and Its Healing Power12April1997

41. Prof. Dr. Malay Raj Mukerjee
A Distributed Collaborative Environment for Distance Learning Applications
17 June 1998

42. Pro£ Dr. Wong Kai ChooAdvancing theFruit Industry inMalaysia: A Need to Shift ResearchEmphasis15 May 1999

43. Prof Dr. Aini IderisAvian Respiratory and Immunosuppressive Diseases- A FatalAttraction10 July 1999

44. Prof. Dr. Sariah MeonBiological Control of Plant Pathogens: Harnessing the Rich 14August1999

45. Prof. Dr. Azizah Hashim
The Endomycorrhiza: A Futile Investment?
23 Oktober 1999


46. Prof. Dr. Noraini Abdul Samad
Molecular Plant Virology: The Way Forward
2 February 2000

47. Prof Dr. Muhamad AwangDo We have Enough Clean Air to Breathe?7 April2000

48. ProE Dr. Lee Chnoong KhengGreen Environment, Clean Power24 June 2000

49. Prof Dr. Mohd. Ghazali MohayidinManaging Change in theAgriculture Sector: The Need for InnovativeEducational Initiatives12 January 2002

50. Prof. Dr. Fatimah Mohd. Arshad.Analisis Pemasaran Pertanian di Malaysia: Keperluan AgendaPembaharuan26 Januari 2002

51. ProE Dr. Nik Mustapha R. AbdullahFisheries Co-Management: An Institutional Innovation TowardsSustainable Fisheries Industry28 February 2002

52. Prof. Dr. Gulam Rusul Rahmat Ali
Food Safety: Perspectives and Challenges
23 March 2002


53. Prof. Dr. Zaharah A. RahmanNutrient Management Strategiesfor Sustainable Crop Production inAcid Soils: The Role of Research using Isotopes13 April2002

54. Prof. Dr. Maisom AbdullahProductivity Driven Growth: Problems & Possibilities27 April 2002

55. Prof. Dr. Wan Omar AbdullahImmunodiagnosis and Vaccination for Brugian Filariasis: DirectRewards from Research Investments6June 2002

56. Prof. Dr. Syed Tajuddin Syed HassanAgro-ento Bioinformation: Towards the Edge of Reality22June 2002

57. Prof. Dr. Dahlan IsmailS ustainability of TropicalAnimal-Agricultural Production Systems:Integration of Dynamic Complex Systems27 June 2002

58. Prof. Dr. Ahmad Zubaidi BaharumshahThe Economics of Exchange Rates in the East Asian Countries26 October 2002

59. Prof. Dr ..Shaik Md. Noor Alam S.M. HussainContractual Justice in .Asean: A Comparative View of Coercion31 October 2002


60. Prof. Dr. Wan Md. Zin Wan Yunus
Chemical Modification of Polymers: Current and Future Routes for Synthesizing New Polymeric Compounds
9 November 2002

61. Prof. Dr. Annuar Md. NassirIs the KLSE Efficient: Efficient Market Hypothesis vs BehaviouralFinance23 November 2002

62. Prof. Ir. Dr. Radin Umar Radin Sohadi
Road Safety Interventions in Malaysia: How Effective Are They?
21 February 2003

63. ProE Dr. Shamsher MohamadThe New Shares Market: Regulatory Intervention, Forecast Errorsand Challenges26 April 2003

64. Prof. Dr. Han Chun Kwong
Blueprint for Transformation or Business as Usual? A Structurational Perspective of the Knowledge-Based Economy in Malaysia
31 May 2003

65. ProE Dr. Mawardi RahmaniChemical Diversity of Malt!Jsian Flora: Potential Source of RichTherapeutic Chemicals26 July 2003

66. Prof. Dr. Fatimah Md. Yusoff
An Ecological Approach: A Viable Option for Aquaculture Industry in Malaysia
9 August 2003


67. Prof. Dr. Mohamed Ali Rajion
The Essential Fatty Acids - Revisited
23 August 2003

68. Prof. Dr. Azhar Md. Zain
Psychotherapy for Rural Malays - Does it Work?
23 August 2003

69. Prof Dr. Azhar Md ZinRespiratory Tract Infection: Establishment and Control27 September 2003

70. Prof. Dr. Jinap SelamatCocoa-Wonders for Chocolate Lovers14 February 2004

71. Prof. Dr. Abdul Halim Shaari
High Temperature Superconductivity: Puzzle & Promises
13 March 2004

72. Prof. Dr. Yaakob Che ManOils and Fats .Analysis - Recent Advances and Future prospects27 March 2004

73. Prof. Dr. Kaida KhalidMicrowave Aquametry: A Growing Technology24 April 2004

74. Prof. Dr. Hasanah Mohd. Ghazali
Tapping the Power of Enzymes - Greening the Food Industry
11 May 2004


75. Prof Dr. Yusof IbrahimThe Spider Mite Saga: Quest for Biorational ManagementStrategies22 May 2004

76. Prof. Datin Dr. Sharifah Md. NorThe Education of At-Risk Children: The ChallengesAhead26June 2004

77. Prof. Dr. Ir. Wan Ishak Wan IsmailAgricultural Robot: A New TechnologyDevelopmentfor Agro-BasedIndustry14 August 2004

78. Prof. Dr. Ahmad Said SajapInsect Diseases: Resourcesfor Biopesticide Development28 August 2004

79. Prof. Dr. Aminah Ahmad
The Interface of Work and Family Roles: A Quest for Balanced Lives
11 March 2005

80. Prof. Dr. Abdul Razak Alimon
Challenges in Feeding Livestock: From Wastes to Feed
23 April 2005

81. Prof. Dr. Haji Azimi Hj. Hamzah
Helping Malaysian Youth Move Forward: Unleashing the Prime Enablers
29 April 2005

82. Prof. Dr. Rasedee Abdullah
In Search of an Early Indicator of Kidney Disease
11 June 2005


83. Prof. Dr. Zulkifli Hj. ShamsuddinSmart Partnership Plant-Rhizobacteria Associates17 June 2005

84. Prof. Dr. Mohd. Khanif YusopFrom the Soil to the Table1July 2005

85. Prof. Dr. Annuar KassimMaterials Science and Technology:Past, Present and the Future8July 2005

86. Prof. Dr. Othman Mohamed
Enhancing Career Development Counselling and the Beauty of Career Games
12 August 2005

87. Prof. Dr. Ir. Mohd. Amin Mohd. SoomEngineering Agricultural Water Management Towards PrecisionFarming26August 2005

88. Prof. Dr. Mohd. Arif Syed
Bioremediation - A Hope Yet for the Environment?
9 September 2005

89. Prof. Dr. Abdul Hamid Abdul Rashid
The Wonder of Our Neuromotor System and the Technological Challenges They Pose
23 December 2005


90. Prof. Dr. Norhani AbdullahRumen Microbes and Some of Their Biotechnological Application27 January 2006

91. Prof. Dr. Abdul Aziz Saharee
Haemorrhagic Septicaemia in Cattle and Buffaloes: Are We Ready for Freedom?

92. Prof. Dr. Kamariah Abu Bakar
Activating Teachers' Knowledge and Lifelong Journey in Their Professional Development
3 March 2006

93. Prof. Dr. Borhanuddin Mohd. AliInternet Unwired24 March 2006

94. Prof. Dr. Sundararajan ThilagarDevelopment and Innovation in the Fracture Management of Animals31 March 2005

95. Prof. Dr. Zainal Aznam Md. Jelan
Strategic Feeding for a Sustainable Ruminant Farming
19 May 2006
