WCSMO4 FAIPA SAND Herskovits Mappa Juillen 2001


FAIPA_SAND: An Interior Point Algorithm for Simultaneous ANalysis and Design Optimization

José Herskovits*, Paulo Mappa* and Lionel Juillen**

*COPPE / Federal University of Rio de Janeiro, Mechanical Engineering Program,
Caixa Postal 68503, 21945-970 Rio de Janeiro, Brazil.
e-mail: jose@com.ufrj.br, Web page: http://www.pem.ufrj.br/prof/jose

**RENAULT, Research Service,
1 Av. du Golf, 78288 Guyancourt cedex, France.
e-mail: Lionel.Juillen@renault.fr

1. Abstract

In the classical approach to Engineering Design Optimization, a Mathematical Program that depends on the design variables is solved. That is, the objective function and the constraints depend exclusively on the design variables. Thus, the state equation that represents the system to be designed must be solved at each iteration of the optimization process. The Simultaneous ANalysis and Design technique (SAND) consists of adding the state variables to the design variables and including the state equation as additional equality constraints [6,7,8,9]. In this way, the state equation is solved at the same time as the optimization problem. We present a new algorithm for SAND optimization that solves the enlarged problem in a very efficient way and takes advantage of numerical tools normally included in Engineering Analysis software. It is an extension of the Feasible Arc Interior Point Algorithm, FAIPA.

2. Keywords: Design Optimization, Nonlinear Programming, Numerical Optimization, Engineering Design.

3. Introduction

We consider the Optimal Design of Engineering Systems represented by a State Equation $e(x,u) = 0$, where $e \in R^r$. The equation depends on the parameters $x \in R^n$, which we call design variables, and $u \in R^r$ are the state variables. The classical model for this problem can be represented by the Nonlinear Program

$$\min_{x} \; f(x, u(x)) \quad \text{subject to} \quad g(x, u(x)) \le 0 \;\; \text{and} \;\; h(x, u(x)) = 0, \qquad (1)$$

where $u(x)$ solves the state equation for $x$, $f$ is the Objective Function, and $g \in R^m$ and $h \in R^p$ are the inequality and equality constraints respectively. We assume that $f$, $g$ and $h$ are continuous, as well as their first derivatives.

Problem (1) is solved iteratively and, at each iteration, the state equation must be solved and the sensitivity of the state variables must be computed. If the solution of the state equation is itself iterative, the whole process can be very expensive.

The Simultaneous ANalysis and Design technique (SAND) consists of adding the state variables to the design variables and including the state equation as additional equality constraints. Then, the state equation is solved at the same time as the optimization problem. This is very advantageous in the case of nonlinear systems but, on the other hand, the size of the Mathematical Program is greatly increased. The Nonlinear Program for SAND Optimization is stated as follows:

$$\min_{x,u} \; f(x,u) \quad \text{subject to} \quad g(x,u) \le 0, \;\; h(x,u) = 0 \;\; \text{and} \;\; e(x,u) = 0. \qquad (2)$$

We present a new Nonlinear Programming Algorithm for SAND optimization that solves the enlarged problem in a very efficient way and takes advantage of numerical tools normally included in Engineering Analysis software. It is an extension of the Feasible Arc Interior Point Algorithm, FAIPA [1,2,3].
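As a small illustration of the difference between formulations (1) and (2), the sketch below (a toy problem of our own, not from the paper) solves the same design problem both ways with a general-purpose NLP solver. The SAND version imposes $e(x,u) = 0$ as an equality constraint on the enlarged variables, instead of solving the state equation inside every function evaluation:

```python
# Toy problem (illustrative only): minimize f(x,u) = x^2 + u^2 subject to the
# scalar "state equation" e(x,u) = u - (x - 1)^3 = 0.
from scipy.optimize import minimize

def f(z):                      # objective in the enlarged variables z = (x, u)
    x, u = z
    return x**2 + u**2

def state(z):                  # state equation residual e(x, u)
    x, u = z
    return u - (x - 1.0)**3

# SAND: one Nonlinear Program in (x, u), as in Eq. (2) with no g or h
sand = minimize(f, [0.0, 0.0], method="SLSQP",
                constraints=[{"type": "eq", "fun": state}])

# Classical (nested): here u(x) = (x - 1)^3 is explicit, so Eq. (1) is
# a one-dimensional unconstrained problem in the design variable alone
nested = minimize(lambda x: x[0]**2 + (x[0] - 1.0)**6, [0.0], method="BFGS")

print(abs(sand.x[0] - nested.x[0]) < 1e-2)   # both find the same design
```

In this toy case the nested form is trivial because $u(x)$ is explicit; the SAND form becomes attractive when the state equation itself must be solved iteratively.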


FAIPA iterates in the primal and dual variables of the optimization problem to solve the Karush-Kuhn-Tucker optimality conditions. Given an initial interior point, it defines a sequence of interior points with the objective monotonically reduced. At each point, a feasible descent arc is obtained and an inexact line search is done along it.

At each of these iterations, to compute a feasible arc, FAIPA solves three linear systems with the same matrix. There is a classical quasi-Newton version of FAIPA and also a Limited Memory quasi-Newton algorithm. In the present contribution we describe a technique to reduce the size of the linear systems and of the quasi-Newton matrix to the same magnitudes as in classical design optimization.

The present method can be considered a reduced Newton-like algorithm. In general, reduced algorithms require feasibility of the equality constraints at each iterate. This means that, at each iteration, feasibility must be restored with an iterative procedure. This procedure is avoided in the present method.

4. FAIPA: The Feasible Arc Interior Point Algorithm

Consider now the standard Nonlinear Program:

$$\min_{x} \; f(x) \quad \text{subject to} \quad g(x) \le 0 \;\; \text{and} \;\; h(x) = 0, \qquad (3)$$

where $x \in R^n$, $g \in R^m$ and $h \in R^p$. The Feasible Arc Interior Point Algorithm requires an initial estimate of $x$ in the interior of the feasible region defined by the inequality constraints, and generates a sequence of points also in the interior of this set. At each iteration, FAIPA defines an arc that is feasible with respect to the inequality constraints and descent with respect to the objective or another appropriate function. That is, we can walk along the feasible arc reducing the objective while remaining feasible. When only inequality constraints are considered, FAIPA reduces the objective at each iteration. In the complete problem, an increase of the objective may be necessary in order to have the equalities satisfied. In this contribution we consider a quasi-Newton version of FAIPA.

The Algorithm:

Parameters: $\varphi > 0$, $\xi \in (0,1)$ and $r \in R^p$, $r > 0$.

Data: Initialize $x$, $\lambda > 0$ and $B \in R^{n \times n}$ symmetric and positive definite. $x$ is a feasible point.

Step 1. Computation of the direction $d$.

Compute $(d_0, \lambda_0, \mu_0)$ and $(d_1, \lambda_1, \mu_1)$ by solving the linear systems:

$$\begin{aligned} B d_0 + \nabla g(x)\,\lambda_0 + \nabla h(x)\,\mu_0 &= -\nabla f(x),\\ \Lambda \nabla g^t(x)\, d_0 + G(x)\,\lambda_0 &= 0,\\ \nabla h^t(x)\, d_0 &= -h(x), \end{aligned}$$

and

$$\begin{aligned} B d_1 + \nabla g(x)\,\lambda_1 + \nabla h(x)\,\mu_1 &= 0,\\ \Lambda \nabla g^t(x)\, d_1 + G(x)\,\lambda_1 &= -\lambda,\\ \nabla h^t(x)\, d_1 &= 0, \end{aligned}$$

where $G(x) = \mathrm{diag}[g_i(x)]$ and $\Lambda = \mathrm{diag}[\lambda_i]$.

If $d_0 = 0$, stop. If $r_i \le -\mu_{0i}$, take $r_i > -\mu_{0i}$, $i = 1, 2, \dots, p$.

Take $\phi(x,r) = f(x) + r^t\, \mathrm{sgn}[h(x)]\, h(x)$. If $d_1^t \nabla \phi(x,r) > 0$, take $\rho = \inf\{\varphi \|d_0\|^2;\; (\xi - 1)\, d_0^t \nabla \phi(x,r) / d_1^t \nabla \phi(x,r)\}$; else take $\rho = \varphi \|d_0\|^2$.

$$d = d_0 + \rho\, d_1.$$


Step 2. Computation of the descent feasible arc.

Take $\tilde w_i^I = g_i(x + d) - g_i(x) - d^t \nabla g_i(x)$, $i = 1, \dots, m$, and $\tilde w_i^E = h_i(x + d) - h_i(x) - d^t \nabla h_i(x)$, $i = 1, \dots, p$. Compute $(\tilde d, \tilde\lambda, \tilde\mu)$ by solving the following linear system:

$$\begin{aligned} B \tilde d + \nabla g(x)\,\tilde\lambda + \nabla h(x)\,\tilde\mu &= 0,\\ \Lambda \nabla g^t(x)\, \tilde d + G(x)\,\tilde\lambda &= -\Lambda \tilde w^I,\\ \nabla h^t(x)\, \tilde d &= -\tilde w^E. \end{aligned}$$

Step 3. Curvilinear search.

Find a step $t$ satisfying a given constrained line search criterion on the auxiliary function $\phi(x,r)$ and such that $g_i(x + t d + t^2 \tilde d) < 0$, $i = 1, \dots, m$. Set $x := x + t d + t^2 \tilde d$, update $\lambda > 0$ and $B$ symmetric and positive definite. Go back to Step 1.
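A key point of Step 1 is that both linear systems share one coefficient matrix, so a single factorization serves two right-hand sides. A minimal numerical sketch for the inequality-only case (illustrative dense NumPy data and names of our own, not the paper's implementation):

```python
# Step-1 systems of FAIPA for the inequality-only problem, sharing one matrix:
#   [ B               grad_g ] [ d   ]   [ rhs_d ]
#   [ Lam grad_g^t    G      ] [ lam ] = [ rhs_l ]
# where G = diag(g_i(x)) and Lam = diag(lambda_i).
import numpy as np

def faipa_directions(B, grad_f, g, grad_g, lam):
    """Solve the two Step-1 systems with the same coefficient matrix.

    B      : (n, n) symmetric positive definite quasi-Newton matrix
    grad_f : (n,)   gradient of the objective
    g      : (m,)   inequality constraint values, g < 0 at an interior point
    grad_g : (n, m) columns are the constraint gradients
    lam    : (m,)   current (positive) dual estimates
    """
    n, m = grad_g.shape
    K = np.zeros((n + m, n + m))
    K[:n, :n] = B
    K[:n, n:] = grad_g
    K[n:, :n] = np.diag(lam) @ grad_g.T
    K[n:, n:] = np.diag(g)
    rhs = np.zeros((n + m, 2))
    rhs[:n, 0] = -grad_f            # system for (d0, lambda0)
    rhs[n:, 1] = -lam               # system for (d1, lambda1)
    sol = np.linalg.solve(K, rhs)   # one factorization, two right-hand sides
    return sol[:n, 0], sol[:n, 1]

# Tiny example: minimize f(x) = x1^2 + x2^2 subject to g(x) = 1 - x1 <= 0,
# at the interior point x = (2, 1).
x = np.array([2.0, 1.0])
d0, d1 = faipa_directions(
    B=np.eye(2),
    grad_f=2 * x,
    g=np.array([1.0 - x[0]]),
    grad_g=np.array([[-1.0], [0.0]]),
    lam=np.array([1.0]),
)
print(d0 @ (2 * x))   # negative: d0 is a descent direction of f
```

The deflected direction $d = d_0 + \rho d_1$ then bends $d_0$ toward the interior of the feasible region before the curvilinear correction of Step 2 is added.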

5. FAIPA_SAND Algorithm

To simplify the presentation we consider the SAND Optimization problem with only inequality constraints:

$$\min_{x,u} \; f(x,u) \quad \text{subject to} \quad g(x,u) \le 0 \;\; \text{and} \;\; e(x,u) = 0. \qquad (4)$$

When applied to this problem, FAIPA solves the linear systems:

$$\begin{bmatrix} B_{xx} & B_{xu} & \nabla_x g(x,u) & \nabla_x e(x,u) \\ B_{ux} & B_{uu} & \nabla_u g(x,u) & \nabla_u e(x,u) \\ \Lambda \nabla_x g^t(x,u) & \Lambda \nabla_u g^t(x,u) & G(x,u) & 0 \\ \nabla_x e^t(x,u) & \nabla_u e^t(x,u) & 0 & 0 \end{bmatrix} \begin{bmatrix} d_{x0} & d_{x1} & \tilde d_x \\ d_{u0} & d_{u1} & \tilde d_u \\ \lambda_0 & \lambda_1 & \tilde\lambda \\ \mu_0 & \mu_1 & \tilde\mu \end{bmatrix} = \begin{bmatrix} -\nabla_x f(x,u) & 0 & 0 \\ -\nabla_u f(x,u) & 0 & 0 \\ 0 & -\lambda & -\Lambda \tilde w^I \\ -e(x,u) & 0 & -\tilde w^E \end{bmatrix}, \qquad (5)$$

where

$$B = \begin{bmatrix} B_{xx} & B_{xu} \\ B_{ux} & B_{uu} \end{bmatrix}. \qquad (6)$$

In general, the number of degrees of freedom of the model is much larger than the number of design variables. In consequence, the size of the systems above and of the quasi-Newton matrix is greatly increased in the SAND approach. In this contribution we present a new technique that eliminates the state variables and the state equations from the linear systems and also reduces the quasi-Newton matrix to the size of the design variables.

Now we call

$$\delta u = -[\nabla_u e^t(x,u)]^{-1}\, e(x,u) \quad \text{and} \quad Du = -[\nabla_u e^t(x,u)]^{-1}\, \nabla_x e^t(x,u). \qquad (7)$$

From the first system of Eq. (5), we get

$$d_{u0} = \delta u + [Du]\, d_{x0}, \qquad (8)$$

$$\mu_0 = -[\nabla_u e(x,u)]^{-1}\, [\nabla_u f(x,u) + B_{ux}\, d_{x0} + B_{uu}\, d_{u0} + \nabla_u g(x,u)\, \lambda_0]. \qquad (9)$$
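Equations (7) are where the analysis software's numerical tools enter: $\delta u$ and $Du$ require only solves with the state Jacobian $\nabla_u e^t$, i.e. the same factorized matrix an analysis code already produces. A small sketch (hypothetical names, plain dense linear algebra standing in for a finite element solver):

```python
# One factorization of the state Jacobian K = grad_u e^t serves both the
# state correction du (Eq. 7, left) and the sensitivity operator Du (Eq. 7,
# right), mirroring how an analysis code reuses its factorized matrix.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def reduce_state(K, e, grad_x_e_t):
    """K: (r, r) state Jacobian; e: (r,) residual; grad_x_e_t: (r, n)."""
    lu = lu_factor(K)                 # factor once, as the analysis solver would
    du = -lu_solve(lu, e)             # du = -K^{-1} e(x, u)
    Du = -lu_solve(lu, grad_x_e_t)    # Du = -K^{-1} grad_x e^t, column by column
    return du, Du

# Toy linear state equation e(x, u) = K u - F(x), so du restores K u = F(x):
K = np.array([[4.0, 1.0], [1.0, 3.0]])
u = np.array([0.0, 0.0])
F = np.array([1.0, 2.0])
du, Du = reduce_state(K, K @ u - F, grad_x_e_t=-np.eye(2))
print(np.allclose(K @ (u + du), F))   # corrected state satisfies K u = F
```

For a nonlinear state equation, $\delta u$ is exactly one Newton step of the analysis, which is why restoring equilibrium at each iteration becomes unnecessary.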


Then, we can eliminate the state equation and the corresponding Lagrange multipliers from the first system in Eq. (5). If we define

$$M^t = [\,I \;\; Du^t\,], \qquad (10)$$

$$\bar B = M^t B M, \qquad (11)$$

$$I_x = [\,I \;\; 0\,] \in R^{n \times (n+r)}, \qquad (12)$$

$$I_u = [\,0 \;\; I\,] \in R^{r \times (n+r)}, \qquad (13)$$

$$B_{xu} = I_x B I_u^t \quad \text{and} \quad B_{uu} = I_u B I_u^t, \qquad (14)$$

we can write the first of the systems in Eq. (5) as

$$\begin{bmatrix} \bar B & \nabla_x g + Du^t \nabla_u g \\ \Lambda [\nabla_x g + Du^t \nabla_u g]^t & G \end{bmatrix} \begin{bmatrix} d_{x0} \\ \lambda_0 \end{bmatrix} = \begin{bmatrix} -\bar b \\ -\Lambda \nabla_u g^t\, \delta u \end{bmatrix}, \qquad (15)$$

where

$$\bar b = \nabla_x f(x,u) + Du^t\, \nabla_u f(x,u) + \{I_x + Du^t I_u\}\, B\, I_u^t\, \delta u. \qquad (16)$$

The present approach will be effective if $\bar b$ and $\bar B$ are computed without the need of storing $B$. Existing reduced algorithms restore the state equation at each iteration. Then the third term of the right-hand side of Eq. (16) is null, since $\delta u = 0$. We present a formulation that avoids this procedure, which is equivalent to solving the state equation at each iteration. Our method also avoids the storage of the quasi-Newton matrix to evaluate $\bar B$ in Eq. (15). With this objective, we use the limited memory representation of quasi-Newton matrices.

5.1 Using the limited memory technique [4]

Let

$$s_k = x_{k+1} - x_k \quad \text{and} \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k). \qquad (17)$$

The direct BFGS update formula is given by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^t B_k}{s_k^t B_k s_k} + \frac{y_k y_k^t}{s_k^t y_k}. \qquad (18)$$

Considering the $q$ pairs $\{s_i, y_i\}$, $i = k-q, \dots, k-1$, we define

$$Y_k = [\,y_{k-q}, \dots, y_{k-1}\,] \quad \text{and} \quad S_k = [\,s_{k-q}, \dots, s_{k-1}\,].$$

Thus, we can write the direct BFGS update formula as

$$B_k = I - [\,Y_k \;\; S_k\,] \begin{bmatrix} -D_k^{1/2} & D_k^{-1/2} L_k^t \\ 0 & J_k^t \end{bmatrix}^{-1} \begin{bmatrix} D_k^{1/2} & 0 \\ -L_k D_k^{-1/2} & J_k \end{bmatrix}^{-1} \begin{bmatrix} Y_k^t \\ S_k^t \end{bmatrix}, \qquad (19)$$

where

$$D_k = \mathrm{diag}[\,s_{k-q}^t y_{k-q}, \dots, s_{k-1}^t y_{k-1}\,],$$

$$(L_k)_{ij} = \begin{cases} s_{k-q-1+i}^t\, y_{k-q-1+j} & \text{if } i > j,\\ 0 & \text{otherwise}, \end{cases}$$

$B_{k-q} = I$, and $J_k$ is the lower triangular matrix that satisfies $J_k J_k^t = S_k^t B_{k-q} S_k + L_k D_k^{-1} L_k^t$. We can prove that $J_k$ exists and is nonsingular.


Then, given a vector $v$, we can evaluate the product $B_k v$ without storing $B_k$ from Eq. (19) by means of the following procedure:

i. Update $S_k$, $Y_k$ and compute $L_k$, $S_k^t S_k$ and $D_k$.

ii. Compute the Cholesky factorization of $S_k^t S_k + L_k D_k^{-1} L_k^t$ to obtain $J_k J_k^t$.

iii. Compute $p = \begin{bmatrix} Y_k^t v \\ S_k^t v \end{bmatrix}$.

iv. Perform a forward and then a backward solve to obtain $q$:

$$\begin{bmatrix} D_k^{1/2} & 0 \\ -L_k D_k^{-1/2} & J_k \end{bmatrix} \begin{bmatrix} -D_k^{1/2} & D_k^{-1/2} L_k^t \\ 0 & J_k^t \end{bmatrix} q = p.$$

v. Compute $B_k v = v - [\,Y_k \;\; S_k\,]\, q$.

We use this procedure to evaluate $\bar b$, without the need of restoring the equilibrium at each iteration.
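The procedure above can be checked numerically. The sketch below simplifies it in two ways (our assumptions, not the paper's): $B_{k-q} = I$, and the middle $2q \times 2q$ system is solved directly instead of through the Cholesky factor $J_k$ and the forward/backward solves of step iv. It verifies that the compact representation reproduces $B_k v$ obtained from the explicit recursion (18):

```python
# Compact limited-memory BFGS product B_k v (from the stored pairs only),
# checked against the explicitly updated matrix B_k.
import numpy as np

def bfgs_update(B, s, y):
    # Direct BFGS update, Eq. (18)
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

def compact_Bv(S, Y, v):
    """B_k v from the q stored pairs, with B_{k-q} = I."""
    SY = S.T @ Y
    L = np.tril(SY, k=-1)               # (L)_ij = s_i^t y_j for i > j
    D = np.diag(np.diag(SY))            # D = diag(s_i^t y_i)
    M = np.block([[S.T @ S, L], [L.T, -D]])
    W = np.vstack([S.T, Y.T])           # rows: S^t, then Y^t
    return v - W.T @ np.linalg.solve(M, W @ v)

rng = np.random.default_rng(0)
n, q = 6, 3
S = rng.standard_normal((n, q))
Y = S + 0.1 * rng.standard_normal((n, q))   # keeps s_i^t y_i > 0 (curvature)
B = np.eye(n)
for i in range(q):                          # explicit recursion, oldest first
    B = bfgs_update(B, S[:, i], Y[:, i])
v = rng.standard_normal(n)
print(np.allclose(B @ v, compact_Bv(S, Y, v)))
```

The matrix $B_k$ is never formed: only the $n \times q$ blocks $S_k$, $Y_k$ and a $2q \times 2q$ middle system are used, which is what makes the evaluation of $\bar b$ and $\bar B$ affordable.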

Given vectors $u$ and $v$, we can use Eq. (19) to evaluate the product $u^t B_k v$:

$$u^t B_k v = u^t v - u^t W_k^t \begin{bmatrix} -D_k^{1/2} & D_k^{-1/2} L_k^t \\ 0 & J_k^t \end{bmatrix}^{-1} \begin{bmatrix} D_k^{1/2} & 0 \\ -L_k D_k^{-1/2} & J_k \end{bmatrix}^{-1} W_k v, \qquad (21)$$

where

$$W_k = \begin{bmatrix} Y_k^t \\ S_k^t \end{bmatrix}.$$

Now, let $\bar B_{ij}$ denote each element of the matrix $\bar B$ and $M_i$ the $i$-th row of the matrix $M^t$. Then we write the matrix $\bar B$ from Eq. (11) as

$$\bar B_{ij} = M_i B M_j^t. \qquad (22)$$

Thus, we can evaluate $\bar B$ without storing $B$ by means of Eq. (21). The second and third of the systems (5) produce the same reduced system with the same matrix, and with right-hand sides that we compute in a similar way as $\bar b$.

6. Conclusions

We obtained a new algorithm that avoids restoring the equilibrium at each iteration and that reduces the size of the linear systems and of the quasi-Newton matrix to the same magnitudes as in classical design optimization. This formulation makes SAND optimization of nonlinear engineering systems very efficient when compared with the classical approach. Moreover, the solvers of commercial simulation codes can be employed. These solvers take advantage of the structure of the linear systems and, in general, they are very efficient.

7. References

1. Herskovits J and Santos G. Feasible Arc Interior Point Algorithms for Nonlinear Optimization, Fourth World Congress on Computational Mechanics (in CD-ROM), Buenos Aires, Argentina, June-July, 1998.

2. Herskovits J. A View on Nonlinear Optimization, pg. 71-116, chapter of the book "Advances in Structural Optimization", J. Herskovits Ed., Kluwer Academic Publishers, Holland, June, 1995.

3. Herskovits J. A Feasible Directions Interior Point Technique for Nonlinear Optimization, JOTA - Journal of Optimization Theory and Applications, Vol. 99, N. 1, pg. 121-146, October, 1998.

4. Byrd R H, Nocedal J and Schnabel R B. Representation of Quasi-Newton Matrices and their Use in Limited Memory Methods, Technical Report CU-CS-612-92, University of Colorado, Boulder, CO, 1992.

5. Luenberger D G. Linear and Nonlinear Programming, 2nd edition, Addison-Wesley, 1984.

6. Leontiev A and Herskovits J. Interior Point Techniques for Optimal Control of Variational Inequality, Structural Optimization Research Journal, Vol. 14, N. 2/3, pg. 101-107, October, 1997.

7. Herskovits J, Dias G P, Santos G and Mota Soares C M. Shape Structural Optimization with an Interior Point Mathematical Programming Algorithm, Structural and Multidisciplinary Optimization Journal, pg. 107-115, October, 2000.

8. Dias G, Herskovits J and Rochinha F. Simultaneous Shape Optimization and Nonlinear Analysis of Elastic Solids, Fourth World Congress on Computational Mechanics (in CD-ROM), Buenos Aires, Argentina, June-July, 1998.

9. Herskovits J, Leontiev A, Dias G and Santos G. Contact Shape Optimization: A Mathematical Programming Approach, Applied Mechanics in the Americas, Vol. 6, pg. 385-388, printed for AAM and ABCM, Rio de Janeiro, Brazil, 4-8 January, 1999. Sixth Pan-American Congress of Applied Mechanics and Eighth International Conference on Dynamic Problems in Mechanics.