Gradient Methods in Structural Optimization


Gradient Methods in Structural Optimization

Jegan Mohan C.
Ferienakademie – Sarntal
18th September 2006

Gradient Methods - Contents

Introduction
Types of Algorithms
Gradient Methods
Steepest Descent Method
Conjugate Gradient Method
Method of Feasible Directions
…

Introduction

An optimization problem can in general be posed as a minimization problem; a maximization problem is handled by minimizing the negative of the objective.

Introduction

In structural optimization, the parameters to be optimized can be weight, stress distribution, loads, etc.
The constraint parameters can be displacements, stresses, etc.
A one-step solution is generally not possible.
Iterative methods are therefore best suited.
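As a compact statement, the general constrained minimization problem can be written as follows (standard notation f, g_j, x, assumed here rather than taken from the slides):

    \min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g_j(x) \le 0, \qquad j = 1, \dots, m

Here f(x) could be the structural weight and the g_j(x) the displacement or stress limits.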

Types of Algorithms

One way of classifying optimization algorithms is based on the kind of data they use:

Zero order methods – function values only (direct search)
First order methods – gradients (steepest descent, conjugate gradient)
Second order methods – second derivatives (Newton type)
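For comparison with the first-order methods treated below, the classical second-order (Newton-type) update uses the Hessian; a standard textbook form, assumed here rather than taken from the slides, is:

    x_{k+1} = x_k - \left[ \nabla^2 f(x_k) \right]^{-1} \nabla f(x_k)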

Descent Methods

Start at a point inside the feasible domain.
Choose a descent search direction.
Follow this direction until the function value has decreased sufficiently, while staying inside the feasible domain.
Choose a new search direction and repeat (the generic update is sketched below).
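In formulas, one generic descent step can be summarized as (standard notation, assumed):

    x_{k+1} = x_k + \alpha_k s_k, \qquad \nabla f(x_k)^T s_k < 0

where s_k is the descent search direction and \alpha_k is the step length determined by the line search.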

Quadratic form
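The quadratic model referred to here is presumably the standard one, with Q symmetric positive definite:

    f(x) = \tfrac{1}{2}\, x^T Q x - b^T x + c, \qquad \nabla f(x) = Q x - b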

Descent Method

Steepest Descent

The gradient of the function is the direction of steepest ascent.
The negative of the gradient is therefore the direction of steepest descent.
The length of the search step is determined by a line search algorithm.
Subsequent search directions are orthogonal to each other.

Steepest Descent

Steepest Descent

f(x) is a quadratic function.
The search direction s_k is the negative of the gradient.
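Written out for this quadratic case, the steepest descent iteration with exact line search is (a standard result, given here as a sketch):

    s_k = -\nabla f(x_k) = b - Q x_k, \qquad \alpha_k = \frac{s_k^T s_k}{s_k^T Q s_k}, \qquad x_{k+1} = x_k + \alpha_k s_k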

Steepest Descent

Steepest Descent

A line search is performed to determine the minimum of the function along a line.
Non-quadratic functions require numerical techniques such as equal interval search or polynomial curve fitting for the line search.
For quadratic functions, analytical methods can be used.
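A minimal Python sketch of steepest descent on a quadratic objective, using the analytical step length above; the names steepest_descent, Q, b and x0 are illustrative, not from the slides:

    import numpy as np

    def steepest_descent(Q, b, x0, tol=1e-8, max_iter=5000):
        """Minimize f(x) = 0.5 x^T Q x - b^T x for symmetric positive definite Q."""
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            s = b - Q @ x                       # negative gradient = search direction
            if np.linalg.norm(s) < tol:
                return x, k
            alpha = (s @ s) / (s @ (Q @ s))     # exact (analytical) line search
            x = x + alpha * s
        return x, max_iter

    # Illustrative example (not from the slides): a badly conditioned 2x2 problem
    Q = np.array([[1.0, 0.0], [0.0, 100.0]])
    b = np.array([1.0, 1.0])
    x_min, iters = steepest_descent(Q, b, np.array([5.0, 5.0]))
    print(x_min, iters)

On a badly conditioned problem like this the iteration count grows large, which is exactly the zigzag behavior discussed on the next slide.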

Steepest Descent

For badly conditioned problems, steepest descent exhibits "zigzag" behavior.
The condition number of a matrix (w.r.t. the L2 norm) is the ratio of its largest and smallest eigenvalues.
Bad conditioning = large condition number.
The eigenvalues determine the lengths of the principal axes of the (hyper-)elliptical contours of the function.
Badly conditioned systems have a narrow function valley.
The solution oscillates and takes more iterations to converge.
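The condition number mentioned here, with respect to the L2 norm, is

    \kappa(Q) = \frac{\lambda_{\max}(Q)}{\lambda_{\min}(Q)}

A classical bound (stated for context, not from the slides) says that for steepest descent with exact line search on a quadratic,

    \| x_{k+1} - x^* \|_Q \le \frac{\kappa - 1}{\kappa + 1}\, \| x_k - x^* \|_Q

so for large \kappa the per-step reduction factor is close to 1 and convergence is slow.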

Steepest Descent

Steepest Descent

To avoid zigzagging, one may consider artificially widening the narrow valley of the function by a variable transformation.

Optimization is done for the transformed variables, and the results are then transformed back to the original variables.
The Hessian matrix has to be computed for this, which is time-consuming.
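A sketch of the transformation idea, with T denoting a generic (assumed) transformation matrix:

    x = T y, \qquad \tilde{f}(y) = f(T y), \qquad \nabla_y \tilde{f}(y) = T^T \nabla_x f(T y)

Ideally T is chosen such that T^T Q T \approx I, which is where the (expensive) Hessian information enters.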

Conjugate Gradient

Takes the curvature of the problem into account.
Generates improved search directions and converges faster than steepest descent.
Search directions are conjugate to each other.
For quadratic problems, CG converges in at most n steps (n = number of variables).

Conjugate Gradient

Compared with steepest descent, the search direction is modified by adding a multiple β of the previous search direction.

The criterion for β is that all search directions are conjugate with respect to Q.
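With the Fletcher–Reeves choice of β (one common variant, assumed here), the update reads:

    s_{k+1} = -\nabla f(x_{k+1}) + \beta_k s_k, \qquad \beta_k = \frac{\nabla f(x_{k+1})^T \nabla f(x_{k+1})}{\nabla f(x_k)^T \nabla f(x_k)}, \qquad s_i^T Q s_j = 0 \ \ (i \ne j)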

Conjugate Gradient
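A minimal Python sketch of the conjugate gradient iteration for the quadratic objective above; the names conjugate_gradient, Q, b and x0 are illustrative, not from the slides:

    import numpy as np

    def conjugate_gradient(Q, b, x0, tol=1e-8):
        """Minimize f(x) = 0.5 x^T Q x - b^T x; at most n steps in exact arithmetic."""
        x = np.asarray(x0, dtype=float)
        r = b - Q @ x                  # negative gradient (residual)
        s = r.copy()                   # first direction = steepest descent direction
        for _ in range(len(b)):
            if np.linalg.norm(r) < tol:
                break
            Qs = Q @ s
            alpha = (r @ r) / (s @ Qs)          # exact line search along s
            x = x + alpha * s
            r_new = r - alpha * Qs
            beta = (r_new @ r_new) / (r @ r)    # Fletcher-Reeves update for beta
            s = r_new + beta * s                # new direction, Q-conjugate to the old ones
            r = r_new
        return x

    # Illustrative example (not from the slides): same badly conditioned problem as before
    Q = np.array([[1.0, 0.0], [0.0, 100.0]])
    b = np.array([1.0, 1.0])
    print(conjugate_gradient(Q, b, np.array([5.0, 5.0])))

On this 2x2 example CG reaches the minimizer in at most two steps, illustrating the "at most n steps" property.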

Method of feasible directions

An extension of steepest descent or conjugate gradient.
Widely used in structural optimization.
Quite robust.
The method starts inside the feasible domain.
As long as the iterate stays inside the feasible domain, there is no difference between this method and CG/steepest descent.
If the boundary is hit in one of the iterations, the next search direction should bring the iterate back inside the feasible domain.

Method of feasible directions

The search direction must satisfy the following two criteria (written out as formulas below):

1. Feasible (i.e. it keeps the iterate inside the feasible domain)

2. Usable (i.e. it must reduce the objective)
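In formulas, with g_j denoting the constraints that are active at the current point (standard notation, assumed):

    \text{usable:} \quad s^T \nabla f(x) < 0, \qquad \text{feasible:} \quad s^T \nabla g_j(x) \le 0 \ \ \text{for all active } j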

Method of feasible directions

Other gradient methods

Generalized reduced gradient method (GRG)
Modified feasible direction method

The End

Questions???

Coffee? ☺