Optimization


Transcript of Optimization

Page 1: Optimization

Optimization

Assoc. Prof. Dr. Pelin Gündeş

[email protected]

Page 2: Optimization

2

Optimization

Basic Information

• Instructor: Assoc. Professor Pelin Gundes (http://atlas.cc.itu.edu.tr/~gundes/)
• E-mail: [email protected]
• Office Hours: TBD by email appointment
• Website: http://atlas.cc.itu.edu.tr/~gundes/teaching/Optimization.htm
• Lecture Time: Wednesday 13:00 - 16:00
• Lecture Venue: M 2180

Page 3: Optimization

3

Optimization literature

Textbooks:

1. Nocedal J. and Wright S.J., Numerical Optimization, Springer Series in Operations Research, Springer, 636 pp, 1999.

2. Spall J.C., Introduction to Stochastic Search and Optimization, Estimation, Simulation and Control, Wiley, 595 pp, 2003.

3. Chong E.K.P. and Zak S.H., An Introduction to Optimization, Second Edition, John Wiley & Sons, New York, 476 pp, 2001.

4. Rao S.S., Engineering Optimization - Theory and Practice, John Wiley & Sons, New York, 903 pp, 1996.

5. Gill P.E., Murray W. and Wright M.H., Practical Optimization, Elsevier, 401 pp., 2004.

6. Goldberg D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, Mass., 1989.

7. Boyd S. and Vandenberghe L., Convex Optimization, Cambridge University Press, 2004 (available at http://www.stanford.edu/~boyd/cvxbook/).

Page 4: Optimization

4

Optimization literature

Journals:

1. Engineering Optimization

2. ASME Journal of Mechanical Design

3. AIAA Journal

4. ASCE Journal of Structural Engineering

5. Computers and Structures

6. International Journal for Numerical Methods in Engineering

7. Structural Optimization

8. Journal of Optimization Theory and Applications

9. Computers and Operations Research

10. Operations Research and Management Science

Page 5: Optimization

5

Optimization

Course Schedule:

1. Introduction to Optimization

2. Classical Optimization Techniques

3. Linear programming and the Simplex method

4. Nonlinear programming-One Dimensional Minimization Methods

5. Nonlinear programming-Unconstrained Optimization Techniques

6. Nonlinear programming-Constrained Optimization Techniques

7. Global Optimization Methods-Genetic algorithms

8. Global Optimization Methods-Simulated Annealing

9. Global Optimization Methods- Coupled Local Minimizers

Page 6: Optimization

6

Optimization

Course Prerequisite:

• Familiarity with MATLAB is assumed. If you are not familiar with MATLAB, please visit:

http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/matlab_crashcourse.pdf

http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/official_getting_started.pdf

Page 7: Optimization

7

Optimization

• 70% attendance is required!

• Grading:

Homeworks: 15%

Mid-term projects: 40%

Final Project: 45%

Page 8: Optimization

8

Optimization

• There will also be lab sessions for MATLAB exercises!

Page 9: Optimization

9

1. Introduction

• Optimization is the act of obtaining the best result under given circumstances.

• Optimization can be defined as the process of finding the conditions that give the maximum or minimum of a function.

• The optimum seeking methods are also known as mathematical programming techniques and are generally studied as a part of operations research.

• Operations research is a branch of mathematics concerned with the application of scientific methods and techniques to decision making problems and with establishing the best or optimal solutions.

Page 10: Optimization

10

1. Introduction

• Operations research (in the US) or operational research (OR) (in the UK), or yöneylem araştırması (in Turkish), is an interdisciplinary branch of mathematics which uses methods like:
  – mathematical modeling
  – statistics
  – algorithms
to arrive at optimal or good decisions in complex problems which are concerned with optimizing the maxima (profit, faster assembly line, greater crop yield, higher bandwidth, etc.) or minima (cost, loss, lowering of risk, etc.) of some objective function.

• The eventual intention behind using operations research is to elicit a best possible solution to a problem mathematically, which improves or optimizes the performance of the system.

Page 11: Optimization

11

1. Introduction

Page 12: Optimization

12

1. Introduction

Historical development

• Isaac Newton (1642-1727) (The development of differential calculus methods of optimization)

• Joseph-Louis Lagrange (1736-1813) (Calculus of variations, minimization of functionals, method of optimization for constrained problems)

• Augustin-Louis Cauchy (1789-1857) (Solution by direct substitution, steepest descent method for unconstrained optimization)

Page 13: Optimization

13

1. Introduction

Historical development

• Leonhard Euler (1707-1783) (Calculus of variations, minimization of functionals)

• Gottfried Leibnitz (1646-1716) (Differential calculus methods of optimization)

Page 14: Optimization

14

1. Introduction

Historical development

• George Bernard Dantzig (1914-2005) (Linear programming and the Simplex method (1947))

• Richard Bellman (1920-1984) (Principle of optimality in dynamic programming problems)

• Harold William Kuhn (1925-) (Necessary and sufficient conditions for the optimal solution of programming problems, game theory)

Page 15: Optimization

15

1. Introduction

Historical development

• Albert William Tucker (1905-1995) (Necessary and sufficient conditions for the optimal solution of programming problems, nonlinear programming, game theory: his PhD student was John Nash)

• Von Neumann (1903-1957) (game theory)

Page 16: Optimization

16

1. Introduction

• Mathematical optimization problem:

  minimize f_0(x)
  subject to g_i(x) ≤ b_i,  i = 1, ..., m

• f_0 : R^n → R: objective function
• x = (x_1, ..., x_n): design variables (unknowns of the problem; they must be linearly independent)
• g_i : R^n → R, i = 1, ..., m: inequality constraints
• The problem is a constrained optimization problem.

Page 17: Optimization

17

1. Introduction

• If a point x* corresponds to the minimum value of the function f (x), the same point also corresponds to the maximum value of the negative of the function, -f (x). Thus optimization can be taken to mean minimization since the maximum of a function can be found by seeking the minimum of the negative of the same function.

Page 18: Optimization

18

1. Introduction

Constraints

• Behaviour constraints: Constraints that represent limitations on the behaviour or performance of the system are termed behaviour or functional constraints.

• Side constraints: Constraints that represent physical limitations on design variables such as manufacturing limitations.

Page 19: Optimization

19

1. Introduction

Constraint Surface

• For illustration purposes, consider an optimization problem with only inequality constraints g_j(X) ≤ 0. The set of values of X that satisfy the equation g_j(X) = 0 forms a hypersurface in the design space and is called a constraint surface.

Page 20: Optimization

20

1. Introduction

Constraint Surface

• Note that this is an (n−1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one in which g_j(X) < 0 and the other in which g_j(X) > 0.

Page 21: Optimization

21

1. Introduction

Constraint Surface

• Thus the points lying on the hypersurface will satisfy the constraint g_j(X) critically, whereas the points lying in the region where g_j(X) > 0 are infeasible or unacceptable, and the points lying in the region where g_j(X) < 0 are feasible or acceptable.

Page 22: Optimization

22

1. Introduction

Constraint Surface

• In the figure below, a hypothetical two-dimensional design space is depicted where the infeasible region is indicated by hatched lines. A design point that lies on one or more constraint surfaces is called a bound point, and the associated constraint is called an active constraint.

Page 23: Optimization

23

1. Introduction

Constraint Surface

• Design points that do not lie on any constraint surface are known as free points.

Page 24: Optimization

24

1. Introduction

Constraint Surface

Depending on whether a particular design point belongs to the acceptable or unacceptable regions, it can be identified as one of the following four types:

• Free and acceptable point

• Free and unacceptable point

• Bound and acceptable point

• Bound and unacceptable point

Page 25: Optimization

25

1. Introduction

• The conventional design procedures aim at finding an acceptable or adequate design which merely satisfies the functional and other requirements of the problem.

• In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best one of the many acceptable designs available.

• Thus a criterion has to be chosen for comparing the different alternative acceptable designs and for selecting the best one.

• The criterion with respect to which the design is optimized, when expressed as a function of the design variables, is known as the objective function.

Page 26: Optimization

26

1. Introduction

• In civil engineering, the objective is usually taken as the minimization of the cost.

• In mechanical engineering, the maximization of the mechanical efficiency is the obvious choice of an objective function.

• In aerospace structural design problems, the objective function for minimization is generally taken as weight.

• In some situations, there may be more than one criterion to be satisfied simultaneously. An optimization problem involving multiple objective functions is known as a multiobjective programming problem.

Page 27: Optimization

27

1. Introduction

• With multiple objectives there arises a possibility of conflict, and one simple way to handle the problem is to construct an overall objective function as a linear combination of the conflicting multiple objective functions.

• Thus, if f1 (X) and f2 (X) denote two objective functions, construct a new (overall) objective function for optimization as:

  f(X) = α_1 f_1(X) + α_2 f_2(X)

where α_1 and α_2 are constants whose values indicate the relative importance of one objective function to the other.

Page 28: Optimization

28

1. Introduction

• The locus of all points satisfying f (X) = c = constant forms a hypersurface in the design space, and for each value of c there corresponds a different member of a family of surfaces. These surfaces, called objective function surfaces, are shown in a hypothetical two-dimensional design space in the figure below.

Page 29: Optimization

29

1. Introduction

• Once the objective function surfaces are drawn along with the constraint surfaces, the optimum point can be determined without much difficulty.

• But the main problem is that as the number of design variables exceeds two or three, the constraint and objective function surfaces become complex even for visualization and the problem has to be solved purely as a mathematical problem.

Page 30: Optimization

30

Example

Design a uniform column of tubular section to carry a compressive load P = 2500 kgf for minimum cost. The column is made of a material that has a yield stress (σ_y) of 500 kgf/cm^2, a modulus of elasticity (E) of 0.85e6 kgf/cm^2, and a density (ρ) of 0.0025 kgf/cm^3. The length of the column (l) is 250 cm. The stress induced in this column should be less than the buckling stress as well as the yield stress. The mean diameter of the column is restricted to lie between 2 and 14 cm, and columns with thicknesses outside the range 0.2 to 0.8 cm are not available in the market. The cost of the column includes material and construction costs and can be taken as 5W + 2d, where W is the weight in kilograms force and d is the mean diameter of the column in centimeters.

Page 31: Optimization

31

Example

The design variables are the mean diameter (d) and the tube thickness (t):

  X = (x_1, x_2) = (d, t)

The objective function to be minimized is given by:

  f(X) = 5W + 2d = 5ρlπdt + 2d = 9.82 x_1 x_2 + 2 x_1

Page 32: Optimization

32

Example

• The behaviour constraints can be expressed as:

stress induced ≤ yield stress

stress induced ≤ buckling stress

• The induced stress is given by:

  σ_i = P / (π d t) = 2500 / (π x_1 x_2)

Page 33: Optimization

33

Example

• The buckling stress for a pin-connected column is given by:

  σ_b = (Euler buckling load) / (cross-sectional area) = (π^2 E I / l^2) / (π d t)

where I is the second moment of area of the cross section of the column, given by:

  I = (π/64)(d_o^4 − d_i^4) = (π/64)[(d + t)^4 − (d − t)^4]
    = (π/64)[(d + t)^2 + (d − t)^2][(d + t)^2 − (d − t)^2]
    = (π/8) d t (d^2 + t^2) = (π/8) x_1 x_2 (x_1^2 + x_2^2)

Page 34: Optimization

34

Example

• Thus, the behaviour constraints can be restated as:

  g_1(X) = 2500 / (π x_1 x_2) − 500 ≤ 0

  g_2(X) = 2500 / (π x_1 x_2) − π^2 (0.85×10^6)(x_1^2 + x_2^2) / (8 (250)^2) ≤ 0

• The side constraints are given by:

  2 ≤ d ≤ 14
  0.2 ≤ t ≤ 0.8

Page 35: Optimization

35

Example

• The side constraints can be expressed in standard form as:

  g_3(X) = −x_1 + 2 ≤ 0
  g_4(X) = x_1 − 14 ≤ 0
  g_5(X) = −x_2 + 0.2 ≤ 0
  g_6(X) = x_2 − 0.8 ≤ 0

Page 36: Optimization

36

Example

• For a graphical solution, the constraint surfaces are to be plotted in a two-dimensional design space where the two axes represent the two design variables x_1 and x_2. To plot the first constraint surface, we have:

  g_1(X) = 2500 / (π x_1 x_2) − 500 ≤ 0,  that is,  x_1 x_2 ≥ 2500/(500π) = 1.593

• Thus the curve x_1 x_2 = 1.593 represents the constraint surface g_1(X) = 0. This curve can be plotted by finding several points on the curve. The points on the curve can be found by giving a series of values to x_1 and finding the corresponding values of x_2 that satisfy the relation x_1 x_2 = 1.593, as shown in the table below:

  x_1:  2       4       6       8      10      12      14
  x_2:  0.7965  0.3983  0.2655  0.199  0.1593  0.1328  0.114
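These table values are easy to regenerate. A minimal Python sketch (an illustration; any of the lab tools would do equally well) that recomputes the points on the constraint surface x_1 x_2 = 1.593:

    # Points on the constraint surface g1(X) = 0, i.e. x1*x2 = 1.593
    for x1 in [2, 4, 6, 8, 10, 12, 14]:
        print(f"x1 = {x1:4.1f}   x2 = {1.593 / x1:.4f}")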

Page 37: Optimization

37

Example

• The infeasible region represented by g1(X)>0 or x1x2< 1.593 is shown by hatched lines. These points are plotted and a curve P1Q1 passing through all these points is drawn as shown:

Page 38: Optimization

38

Example

• Similarly, the second constraint g_2(X) ≤ 0 can be expressed as:

  x_1 x_2 (x_1^2 + x_2^2) ≥ 47.3

• The points lying on the constraint surface g_2(X) = 0 can be obtained as follows (these points are plotted as curve P2Q2):

  x_1:  2     4      6      8       10      12      14
  x_2:  2.41  0.716  0.219  0.0926  0.0473  0.0274  0.0172

Page 39: Optimization

39

Example

• The plotting of side constraints is simple since they represent straight lines.

• After plotting all the six constraints, the feasible region is determined as the bounded area ABCDEA


Page 40: Optimization

40

Example

• Next, the contours of the objective function are to be plotted before finding the optimum point. For this, we plot the curves given by:

  f(X) = 9.82 x_1 x_2 + 2 x_1 = c = constant

for a series of values of c. By giving different values to c, the contours of f can be plotted with the help of the following points.

Page 41: Optimization

41

Example

• For f(X) = 9.82 x_1 x_2 + 2 x_1 = 50.0:

  x_2:  0.1    0.2    0.3    0.4   0.5   0.6   0.7
  x_1:  16.77  12.62  10.10  8.44  7.24  6.33  5.64

• For f(X) = 9.82 x_1 x_2 + 2 x_1 = 40.0:

  x_2:  0.1    0.2    0.3   0.4   0.5   0.6   0.7
  x_1:  13.40  10.10  8.08  6.75  5.79  5.06  4.51

• For f(X) = 9.82 x_1 x_2 + 2 x_1 = 31.58 (passing through the corner point C):

  x_2:  0.1    0.2   0.3   0.4   0.5   0.6   0.7
  x_1:  10.57  7.96  6.38  5.33  4.57  4     3.56

• For f(X) = 9.82 x_1 x_2 + 2 x_1 = 26.53 (passing through the corner point B):

  x_2:  0.1   0.2   0.3   0.4   0.5   0.6   0.7
  x_1:  8.88  6.69  5.36  4.48  3.84  3.36  2.99

Page 42: Optimization

42

Example

• These contours are shown in the figure below, and it can be seen that the objective function cannot be reduced below a value of 26.53 (corresponding to point B) without violating some of the constraints. Thus, the optimum solution is given by point B with d* = x_1* = 5.44 cm and t* = x_2* = 0.293 cm, with f_min = 26.53.
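The graphical optimum can be cross-checked numerically. Below is a minimal Python sketch using scipy.optimize.minimize (SLSQP); the objective, constraints, and bounds follow the slides, while the starting point x0 is an arbitrary feasible guess:

    import numpy as np
    from scipy.optimize import minimize

    # f(X) = 9.82*x1*x2 + 2*x1  (cost of the tubular column)
    f = lambda x: 9.82 * x[0] * x[1] + 2 * x[0]

    # g1: yield-stress constraint    ->  x1*x2 >= 1.593
    # g2: buckling-stress constraint ->  x1*x2*(x1^2 + x2^2) >= 47.3
    cons = [
        {"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.593},
        {"type": "ineq", "fun": lambda x: x[0] * x[1] * (x[0]**2 + x[1]**2) - 47.3},
    ]
    bounds = [(2.0, 14.0), (0.2, 0.8)]   # side constraints on d and t

    res = minimize(f, x0=[6.0, 0.4], bounds=bounds, constraints=cons, method="SLSQP")
    print(res.x, res.fun)   # expected: approximately [5.44, 0.293] and 26.53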

Page 43: Optimization

43

Examples

Design of civil engineering structures
• variables: width and height of member cross-sections
• constraints: limit stresses, maximum and minimum dimensions
• objective: minimum cost or minimum weight

Analysis of statistical data and building empirical models from measurements
• variables: model parameters
• constraints: physical upper and lower bounds for model parameters
• objective: prediction error

Page 44: Optimization

44

Classification of optimization problems

Classification based on:

• Constraints
  – Constrained optimization problem
  – Unconstrained optimization problem

• Nature of the design variables
  – Static optimization problems
  – Dynamic optimization problems

Page 45: Optimization

45

Classification of optimization problems

Classification based on:

• Physical structure of the problem
  – Optimal control problems
  – Non-optimal control problems

• Nature of the equations involved
  – Nonlinear programming problem
  – Geometric programming problem
  – Quadratic programming problem
  – Linear programming problem

Page 46: Optimization

46

Classification of optimization problems

Classification based on:

• Permissible values of the design variables
  – Integer programming problems
  – Real-valued programming problems

• Deterministic nature of the variables
  – Stochastic programming problem
  – Deterministic programming problem

Page 47: Optimization

47

Classification of optimization problems

Classification based on:

• Separability of the functions
  – Separable programming problems
  – Non-separable programming problems

• Number of objective functions
  – Single-objective programming problem
  – Multiobjective programming problem

Page 48: Optimization

48

Geometric Programming

• A geometric programming problem (GMP) is one in which the objective function and constraints are expressed as posynomials in X (sums of terms of the form c x_1^{a_1} x_2^{a_2} ... x_n^{a_n} with positive coefficients c).

Page 49: Optimization

49

Page 50: Optimization

50

Quadratic Programming Problem

• A quadratic programming problem is a nonlinear programming problem with a quadratic objective function and linear constraints. It is usually formulated as follows:

  F(X) = c + Σ_{i=1}^{n} q_i x_i + Σ_{i=1}^{n} Σ_{j=1}^{n} Q_ij x_i x_j

subject to

  Σ_{i=1}^{n} a_ij x_i = b_j,  j = 1, 2, ..., m
  x_i ≥ 0,  i = 1, 2, ..., n

where c, q_i, Q_ij, a_ij, and b_j are constants.

Page 51: Optimization

51

Optimal Control Problem

• An optimal control (OC) problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner.

• It is usually described by two types of variables: the control (design) and the state variables. The control variables define the system and govern the evolution of the system from one stage to the next, and the state variables describe the behaviour or status of the system in any stage.

Page 52: Optimization

52

Optimal Control Problem

• The problem is to find a set of control or design variables such that the total objective function (also known as the performance index) over all stages is minimized subject to a set of constraints on the control and state variables.

• An OC problem can be stated as follows:

Find X which minimizes

  f(X) = Σ_{i=1}^{l} f_i(x_i, y_i)

subject to the constraints

  q_i(x_i, y_i) + y_i = y_{i+1},  i = 1, 2, ..., l
  g_j(x_j) ≤ 0,  j = 1, 2, ..., l
  h_k(y_k) ≤ 0,  k = 1, 2, ..., l

where x_i is the ith control variable, y_i is the ith state variable, and f_i is the contribution of the ith stage to the total objective function; g_j, h_k, and q_i are functions of x_j, y_k, and x_i and y_i, respectively, and l is the total number of stages.

Page 53: Optimization

53

Integer Programming Problem

• If some or all of the design variables x1,x2,..,xn of an optimization problem are restricted to take on only integer (or discrete) values, the problem is called an integer programming problem.

• If all the design variables are permitted to take any real value, the optimization problem is called a real-valued programming problem.

Page 54: Optimization

54

Stochastic Programming Problem

• A stochastic programming problem is an optimization problem in which some or all of the parameters (design variables and/or preassigned parameters) are probabilistic (nondeterministic or stochastic).

• In other words, stochastic programming deals with the solution of the optimization problems in which some of the variables are described by probability distributions.

Page 55: Optimization

55

Separable Programming Problem

• A function f(X) is said to be separable if it can be expressed as the sum of n single-variable functions f_1(x_1), f_2(x_2), ..., f_n(x_n), that is,

  f(X) = Σ_{i=1}^{n} f_i(x_i)

• A separable programming problem is one in which the objective function and the constraints are separable and can be expressed in standard form as:

Find X which minimizes

  f(X) = Σ_{i=1}^{n} f_i(x_i)

subject to

  g_j(X) = Σ_{i=1}^{n} g_ij(x_i) ≤ b_j,  j = 1, 2, ..., m

where b_j is a constant.

Page 56: Optimization

56

Multiobjective Programming Problem

• A multiobjective programming problem can be stated as follows:

Find X which minimizes f1 (X), f2 (X),…., fk (X)

subject to

  g_j(X) ≤ 0,  j = 1, 2, ..., m

where f_1, f_2, ..., f_k denote the objective functions to be minimized simultaneously.

Page 57: Optimization

57

Review of mathematics

Concepts from linear algebra: Positive definiteness

• Test 1: A matrix A will be positive definite if all its eigenvalues are positive; that is, all the values of λ that satisfy the determinantal equation

  |A − λI| = 0

should be positive. Similarly, the matrix A will be negative definite if its eigenvalues are negative.

Page 58: Optimization

58

Review of mathematics

Positive definiteness

• Test 2: Another test that can be used to find the positive definiteness of a matrix A of order n involves evaluation of the determinants

  A_1 = |a_11|

  A_2 = | a_11  a_12 |
        | a_21  a_22 |

  A_3 = | a_11  a_12  a_13 |
        | a_21  a_22  a_23 |
        | a_31  a_32  a_33 |

  ...

  A_n = | a_11  a_12  ...  a_1n |
        | a_21  a_22  ...  a_2n |
        | ...                   |
        | a_n1  a_n2  ...  a_nn |

• The matrix A will be positive definite if and only if all the values A_1, A_2, A_3, ..., A_n are positive.
• The matrix A will be negative definite if and only if the sign of A_j is (−1)^j for j = 1, 2, ..., n.
• If some of the A_j are positive and the remaining A_j are zero, the matrix A will be positive semidefinite.

Page 59: Optimization

59

Review of mathematics

Negative definiteness

• Equivalently, a matrix is negative definite if all its eigenvalues are negative.
• It is positive semidefinite if all its eigenvalues are greater than or equal to zero.
• It is negative semidefinite if all its eigenvalues are less than or equal to zero.
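Both tests are straightforward to run numerically. A minimal numpy sketch (the test matrix A here is an arbitrary example, not from the slides):

    import numpy as np

    def definiteness(A):
        """Classify a symmetric matrix via its eigenvalues (Test 1)."""
        eig = np.linalg.eigvalsh(A)
        if np.all(eig > 0):  return "positive definite"
        if np.all(eig < 0):  return "negative definite"
        if np.all(eig >= 0): return "positive semidefinite"
        if np.all(eig <= 0): return "negative semidefinite"
        return "indefinite"

    def leading_minors(A):
        """Determinants A_1, A_2, ..., A_n of Test 2."""
        return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

    A = np.array([[2.0, -1.0], [-1.0, 2.0]])
    print(definiteness(A))      # positive definite
    print(leading_minors(A))    # [2.0, 3.0] -- all positive, consistent with Test 2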

Page 60: Optimization

60

Review of mathematics

Concepts from linear algebra:

Nonsingular matrix: The determinant of the matrix is not zero.

Rank: The rank of a matrix A is the order of the largest nonsingular square submatrix of A, that is, the largest submatrix with a determinant other than zero.

Page 61: Optimization

61

Review of mathematics

Solutions of a linear problem:

  Minimize f(x) = c^T x
  subject to g(x): Ax = b
  side constraints: x ≥ 0

• The existence of a solution to this problem depends on the rows of A.
• If the rows of A are linearly independent, then there is a unique solution to the system of equations.
• If det(A) is zero, that is, matrix A is singular, there are either no solutions or infinitely many solutions.

Page 62: Optimization

62

Review of mathematics

Suppose

  A = |  1   1   1   |    b = | 3   |    A* = |  1   1   1     3   |
      | -1   1   0.5 |        | 1.5 |         | -1   1   0.5   1.5 |

The new matrix A* is called the augmented matrix: the columns of b are added to A. According to the theorems of linear algebra:

• If the augmented matrix A* and the matrix of coefficients A have the same rank r which is less than the number of design variables n (r < n), then there are many solutions.

• If the augmented matrix A* and the matrix of coefficients A do not have the same rank, a solution does not exist.

• If the augmented matrix A* and the matrix of coefficients A have the same rank r = n, where the number of constraints is equal to the number of design variables, then there is a unique solution.

Page 63: Optimization

63

Review of mathematics

In the example

  A = |  1   1   1   |    b = | 3   |    A* = |  1   1   1     3   |
      | -1   1   0.5 |        | 1.5 |         | -1   1   0.5   1.5 |

the largest square submatrix is a 2 x 2 matrix (since m = 2 and m < n). Taking the submatrix which includes the first two columns of A, the determinant has a value of 2 and therefore it is nonsingular. Thus the rank of A is 2 (r = 2). The same columns appear in A*, making its rank also 2. Since r < n, infinitely many solutions exist.

Page 64: Optimization

64

Review of mathematics

In the example, one way to determine the solutions is to assign arbitrary values to (n − r) variables and use them to determine values for the remaining r variables. The value n − r is often identified as the degree of freedom of the system of equations.

In this example, the degree of freedom is 1 (i.e., 3 − 2 = 1). For instance, x_3 can be assigned a value of 1, in which case x_1 = 0.5 and x_2 = 1.5.
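The rank argument can be reproduced numerically. A minimal numpy sketch for this example:

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0], [-1.0, 1.0, 0.5]])
    b = np.array([3.0, 1.5])
    A_aug = np.column_stack([A, b])            # augmented matrix A*

    r, r_aug, n = np.linalg.matrix_rank(A), np.linalg.matrix_rank(A_aug), A.shape[1]
    print(r, r_aug, n)                         # r = r* = 2 < n = 3 -> infinitely many solutions

    # Fix x3 = 1 (one degree of freedom) and solve for x1, x2:
    x3 = 1.0
    x12 = np.linalg.solve(A[:, :2], b - A[:, 2] * x3)
    print(x12)                                 # [0.5, 1.5]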

Page 65: Optimization

65

Homework

What is the solution of the system of three linear equations given below, in the two unknowns x_1 and x_2?

  g_1(x_1, x_2) = 2
  g_2(x_1, x_2) = 1
  g_3(x_1, x_2) = 12

Hint: Determine the rank of the matrix of the coefficients and the rank of the augmented matrix.

Page 66: Optimization

66

2. Classical optimization techniques

Single-variable optimization

• Useful in finding the optimum solutions of continuous and differentiable functions

• These methods are analytical and make use of the techniques of differential calculus in locating the optimum points.

• Since some of the practical problems involve objective functions that are not continuous and/or differentiable, the classical optimization techniques have limited scope in practical applications.

Page 67: Optimization

67

2. Classical optimization techniques

Single-variable optimization

• A function of one variable f (x) has a relative or local minimum at x = x* if f (x*) ≤ f (x*+h) for all sufficiently small positive and negative values of h

• A point x* is called a relative or local maximum if f (x*) ≥ f (x*+h) for all values of h sufficiently close to zero.

(Figure: a function of one variable with local minima, global minima, and a local minimum marked.)

Page 68: Optimization

68

2. Classical optimization techniques

Single-variable optimization

• A function f (x) is said to have a global or absolute minimum at x* if f (x*) ≤ f (x) for all x, and not just for all x close to x*, in the domain over which f (x) is defined.

• Similarly, a point x* will be a global maximum of f (x) if f (x*) ≥ f (x) for all x in the domain.

Page 69: Optimization

69

Necessary condition

• If a function f (x) is defined in the interval a ≤ x ≤ b and has a relative minimum at x = x*, where a < x* < b, and if the derivative df (x) / dx = f’(x) exists as a finite number at x = x*, then f’ (x*)=0

• The theorem does not say that the function necessarily will have a minimum or maximum at every point where the derivative is zero, e.g., f'(x) = 0 at x = 0 for the function shown in the figure. However, this point is neither a minimum nor a maximum. In general, a point x* at which f'(x*) = 0 is called a stationary point.

Page 70: Optimization

70

Necessary condition

• The theorem does not say what happens if a minimum or a maximum occurs at a point x* where the derivative fails to exist. For example, in the figure,

  lim_{h→0} [f(x* + h) − f(x*)] / h = m+ (positive)  or  m− (negative)

depending on whether h approaches zero through positive or negative values, respectively. Unless the numbers m+ and m− are equal, the derivative f'(x*) does not exist. If f'(x*) does not exist, the theorem is not applicable.

(Figure 2.2)

Page 71: Optimization

71

Sufficient condition

• Let f'(x*) = f''(x*) = ... = f^(n−1)(x*) = 0, but f^(n)(x*) ≠ 0. Then f(x*) is:
  – A minimum value of f(x) if f^(n)(x*) > 0 and n is even
  – A maximum value of f(x) if f^(n)(x*) < 0 and n is even
  – Neither a minimum nor a maximum if n is odd

Page 72: Optimization

72

Example

Determine the maximum and minimum values of the function:

  f(x) = 12x^5 − 45x^4 + 40x^3 + 5

Solution: Since f'(x) = 60(x^4 − 3x^3 + 2x^2) = 60x^2(x − 1)(x − 2), f'(x) = 0 at x = 0, x = 1, and x = 2.

The second derivative is:

  f''(x) = 60(4x^3 − 9x^2 + 4x)

At x = 1, f''(x) = −60 and hence x = 1 is a relative maximum. Therefore, f_max = f(x = 1) = 12.

At x = 2, f''(x) = 240 and hence x = 2 is a relative minimum. Therefore, f_min = f(x = 2) = −11.

Page 73: Optimization

73

Example

Solution cont'd:

At x = 0, f''(x) = 0 and hence we must investigate the next derivative:

  f'''(x) = 60(12x^2 − 18x + 4) = 240 at x = 0

Since f'''(x) ≠ 0 at x = 0, x = 0 is neither a maximum nor a minimum; it is an inflection point.
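The whole classification can be automated with symbolic differentiation. A minimal sympy sketch that finds the stationary points of this f and applies the sufficient condition (first nonvanishing derivative):

    import sympy as sp

    x = sp.symbols("x")
    f = 12*x**5 - 45*x**4 + 40*x**3 + 5

    for xs in sorted(sp.solve(sp.diff(f, x), x)):     # stationary points: 0, 1, 2
        # find the order n of the first nonvanishing derivative at xs
        n, g = 1, sp.diff(f, x)
        while g.subs(x, xs) == 0:
            n, g = n + 1, sp.diff(g, x)
        kind = ("neither (inflection point)" if n % 2 else
                "relative minimum" if g.subs(x, xs) > 0 else "relative maximum")
        print(f"x = {xs}: f = {f.subs(x, xs)}, order {n} -> {kind}")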

Page 74: Optimization

74

Multivariable optimization with no constraints

• Definition: rth differential of f

If all partial derivatives of the function f through order r ≥ 1 exist and are continuous at a point X*, the polynomial

  d^r f(X*) = Σ_{i=1}^{n} Σ_{j=1}^{n} ... Σ_{k=1}^{n} h_i h_j ... h_k ∂^r f(X*) / (∂x_i ∂x_j ... ∂x_k)     (r summations)

is called the rth differential of f at X*.

Page 75: Optimization

75

Multivariable optimization with no constraints

• Example: rth differential of f. When r = 2 and n = 3, we have

  d^2 f(X*) = d^2 f(x_1*, x_2*, x_3*) = Σ_{i=1}^{3} Σ_{j=1}^{3} h_i h_j ∂^2 f(X*) / (∂x_i ∂x_j)
            = h_1^2 ∂^2f/∂x_1^2 + h_2^2 ∂^2f/∂x_2^2 + h_3^2 ∂^2f/∂x_3^2
              + 2 h_1 h_2 ∂^2f/(∂x_1 ∂x_2) + 2 h_2 h_3 ∂^2f/(∂x_2 ∂x_3) + 2 h_1 h_3 ∂^2f/(∂x_1 ∂x_3)

with all partial derivatives evaluated at X*.

Page 76: Optimization

76

Multivariable optimization with no constraints

• The Taylor series expansion of a function f(X) about a point X* is given by:

  f(X) = f(X*) + df(X*) + (1/2!) d^2 f(X*) + (1/3!) d^3 f(X*) + ... + (1/N!) d^N f(X*) + R_N(X*, h)

where the last term, called the remainder, is given by:

  R_N(X*, h) = (1/(N+1)!) d^{N+1} f(X* + θh),  where 0 < θ < 1 and h = X − X*

Page 77: Optimization

77

Example

Find the second-order Taylor's series approximation of the function

  f(x_1, x_2, x_3) = x_2^2 x_3 + x_1 e^{x_3}

about the point X* = (1, 0, −2)^T.

Solution: The second-order Taylor's series approximation of the function f about the point X* is given by

  f(X) ≈ f(X*) + df(X*) + (1/2!) d^2 f(X*)

Page 78: Optimization

78

Example cont’d

where

  f(X*) = f(1, 0, −2) = e^{−2}

  df(X*) = h_1 ∂f/∂x_1 + h_2 ∂f/∂x_2 + h_3 ∂f/∂x_3
         = [h_1 e^{x_3} + 2 h_2 x_2 x_3 + h_3 (x_2^2 + x_1 e^{x_3})] at (1, 0, −2)
         = e^{−2} h_1 + e^{−2} h_3

Page 79: Optimization

79

Example cont’d

where

  d^2 f(X*) = Σ_{i=1}^{3} Σ_{j=1}^{3} h_i h_j ∂^2 f(X*) / (∂x_i ∂x_j)
            = h_1^2 ∂^2f/∂x_1^2 + h_2^2 ∂^2f/∂x_2^2 + h_3^2 ∂^2f/∂x_3^2
              + 2 h_1 h_2 ∂^2f/(∂x_1 ∂x_2) + 2 h_2 h_3 ∂^2f/(∂x_2 ∂x_3) + 2 h_1 h_3 ∂^2f/(∂x_1 ∂x_3)
            = [h_2^2 (2x_3) + h_3^2 (x_1 e^{x_3}) + 2 h_1 h_2 (0) + 2 h_2 h_3 (2x_2) + 2 h_1 h_3 (e^{x_3})] at (1, 0, −2)
            = −4 h_2^2 + e^{−2} h_3^2 + 2 e^{−2} h_1 h_3

Page 80: Optimization

80

Example cont’d

Thus, the Taylor's series approximation is given by:

  f(X) ≈ e^{−2} + e^{−2}(h_1 + h_3) + (1/2!)(−4 h_2^2 + e^{−2} h_3^2 + 2 e^{−2} h_1 h_3)

where h_1 = x_1 − 1, h_2 = x_2, and h_3 = x_3 + 2.
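A quick numerical check: evaluating both f and its second-order Taylor expansion at a point near X* = (1, 0, −2), the two values should nearly agree. A minimal Python sketch (the test point is an arbitrary choice):

    import math

    f = lambda x1, x2, x3: x2**2 * x3 + x1 * math.exp(x3)

    def taylor2(x1, x2, x3):
        """Second-order Taylor approximation of f about X* = (1, 0, -2)."""
        e2 = math.exp(-2)
        h1, h2, h3 = x1 - 1, x2, x3 + 2
        return e2 + e2*(h1 + h3) + 0.5*(-4*h2**2 + e2*h3**2 + 2*e2*h1*h3)

    X = (1.1, 0.05, -2.1)          # a point near X*
    print(f(*X), taylor2(*X))      # about 0.12945 vs 0.12966 -- close agreement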

Page 81: Optimization

81

Multivariable optimization with no constraints

• Necessary condition: If f(X) has an extreme point (maximum or minimum) at X = X* and if the first partial derivatives of f(X) exist at X*, then

  ∂f(X*)/∂x_1 = ∂f(X*)/∂x_2 = ... = ∂f(X*)/∂x_n = 0

• Sufficient condition: A sufficient condition for a stationary point X* to be an extreme point is that the matrix of second partial derivatives (Hessian matrix) of f(X) evaluated at X* is
  – Positive definite when X* is a relative minimum point
  – Negative definite when X* is a relative maximum point

Page 82: Optimization

82

Example

The figure shows two frictionless rigid bodies (carts) A and B connected by three linear elastic springs having spring constants k_1, k_2, and k_3. The springs are at their natural positions when the applied force P is zero. Find the displacements x_1 and x_2 under the force P by using the principle of minimum potential energy.

Page 83: Optimization

83

Example

Solution: According to the principle of minimum potential energy, the system will be in equilibrium under the load P if the potential energy is a minimum. The potential energy of the system is given by:

Potential energy (U) = strain energy of the springs − work done by the external force:

  U = (1/2) k_2 x_1^2 + (1/2) k_3 (x_2 − x_1)^2 + (1/2) k_1 x_2^2 − P x_2

The necessary conditions for the minimum of U are:

  ∂U/∂x_1 = k_2 x_1 − k_3 (x_2 − x_1) = 0
  ∂U/∂x_2 = k_1 x_2 + k_3 (x_2 − x_1) − P = 0

which give:

  x_1* = k_3 P / (k_1 k_2 + k_1 k_3 + k_2 k_3)
  x_2* = (k_2 + k_3) P / (k_1 k_2 + k_1 k_3 + k_2 k_3)

Page 84: Optimization

84

Example

Solution cont'd: The sufficiency conditions for the minimum at (x_1*, x_2*) can also be verified by testing the positive definiteness of the Hessian matrix of U. The Hessian matrix of U evaluated at (x_1*, x_2*) is:

  J = | ∂^2U/∂x_1^2        ∂^2U/(∂x_1 ∂x_2) |  =  | k_2 + k_3   −k_3      |
      | ∂^2U/(∂x_1 ∂x_2)   ∂^2U/∂x_2^2      |     | −k_3        k_1 + k_3 |

The determinants of the square submatrices of J are:

  J_1 = k_2 + k_3 > 0
  J_2 = (k_2 + k_3)(k_1 + k_3) − k_3^2 = k_1 k_2 + k_1 k_3 + k_2 k_3 > 0

since the spring constants are always positive. Thus the matrix J is positive definite, and hence (x_1*, x_2*) corresponds to the minimum of potential energy.
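Since the equilibrium conditions ∂U/∂x_i = 0 form a linear system whose coefficient matrix is the Hessian, the displacements can be computed directly. A minimal numpy sketch (the numerical spring constants and load are assumed example values):

    import numpy as np

    k1, k2, k3, P = 100.0, 200.0, 300.0, 50.0     # example values -- assumed

    # dU/dx = 0 is the linear system H x = [0, P]^T, with H the Hessian of U
    H = np.array([[k2 + k3, -k3], [-k3, k1 + k3]])
    x = np.linalg.solve(H, np.array([0.0, P]))

    denom = k1*k2 + k1*k3 + k2*k3
    print(x)                                      # matches the closed-form solution:
    print([k3*P/denom, (k2 + k3)*P/denom])

    print(np.all(np.linalg.eigvalsh(H) > 0))      # Hessian positive definite -> minimum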

Page 85: Optimization

85

Semi-definite case

The sufficient conditions for the case when the Hessian matrix of the given function is semidefinite:

• In case of a function of a single variable, the higher order derivatives in the Taylor’s series expansion are investigated

Page 86: Optimization

86

Semi-definite case

The sufficient conditions for a function of several variables for the case when the Hessian matrix of the given function is semidefinite:

• Let the partial derivatives of f of all orders up to the order k ≥ 2 be continuous in the neighborhood of a stationary point X*, and let

  d^r f |_{X=X*} = 0 for 1 ≤ r ≤ k − 1,  and  d^k f |_{X=X*} ≠ 0

so that d^k f |_{X=X*} is the first nonvanishing higher-order differential of f at X*.

• If k is even:
  – X* is a relative minimum if d^k f |_{X=X*} is positive definite
  – X* is a relative maximum if d^k f |_{X=X*} is negative definite
  – If d^k f |_{X=X*} is semidefinite, no general conclusion can be drawn

• If k is odd, X* is not an extreme point of f(X).

Page 87: Optimization

87

Saddle point

• In the case of a function of two variables f(x, y), the Hessian matrix may be neither positive nor negative definite at a point (x*, y*) at which

  ∂f/∂x = ∂f/∂y = 0

In such a case, the point (x*, y*) is called a saddle point.

• The characteristic of a saddle point is that it corresponds to a relative minimum or maximum of f(x, y) with respect to one variable, say x (the other variable being fixed at y = y*), and a relative maximum or minimum of f(x, y) with respect to the second variable y (the other variable being fixed at x = x*).

Page 88: Optimization

88

Saddle point

Example: Consider the function f(x, y) = x^2 − y^2. For this function:

  ∂f/∂x = 2x  and  ∂f/∂y = −2y

These first derivatives are zero at x* = 0 and y* = 0. The Hessian matrix of f at (x*, y*) is given by:

  J = | 2    0 |
      | 0   −2 |

Since this matrix is neither positive definite nor negative definite, the point (x* = 0, y* = 0) is a saddle point.

Page 89: Optimization

89

Saddle point

Example cont'd: It can be seen from the figure that f(x, y*) = f(x, 0) has a relative minimum and f(x*, y) = f(0, y) has a relative maximum at the saddle point (x*, y*).

Page 90: Optimization

90

Example

Find the extreme points of the function

  f(x_1, x_2) = x_1^3 + x_2^3 + 2x_1^2 + 4x_2^2 + 6

Solution: The necessary conditions for the existence of an extreme point are:

  ∂f/∂x_1 = 3x_1^2 + 4x_1 = x_1(3x_1 + 4) = 0
  ∂f/∂x_2 = 3x_2^2 + 8x_2 = x_2(3x_2 + 8) = 0

These equations are satisfied at the points: (0, 0), (0, −8/3), (−4/3, 0), and (−4/3, −8/3).

Page 91: Optimization

91

Example

Solution cont'd: To find the nature of these extreme points, we have to use the sufficiency conditions. The second-order partial derivatives of f are given by:

  ∂^2f/∂x_1^2 = 6x_1 + 4
  ∂^2f/∂x_2^2 = 6x_2 + 8
  ∂^2f/(∂x_1 ∂x_2) = 0

The Hessian matrix of f is given by:

  J = | 6x_1 + 4   0        |
      | 0          6x_2 + 8 |

Page 92: Optimization

92

Example

Solution cont'd: If

  J_1 = |6x_1 + 4|   and   J_2 = | 6x_1 + 4   0        |
                                 | 0          6x_2 + 8 |

the values of J_1 and J_2 and the nature of the extreme points are as given in the next slide.

Page 93: Optimization

93

Example

  Point X        Value of J_1   Value of J_2   Nature of J         Nature of X        f(X)
  (0, 0)         +4             +32            Positive definite   Relative minimum   6
  (0, −8/3)      +4             −32            Indefinite          Saddle point       418/27
  (−4/3, 0)      −4             −32            Indefinite          Saddle point       194/27
  (−4/3, −8/3)   −4             +32            Negative definite   Relative maximum   50/3
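The table can be reproduced by evaluating the Hessian at each stationary point and inspecting its eigenvalues. A minimal numpy sketch:

    import numpy as np

    f = lambda x1, x2: x1**3 + x2**3 + 2*x1**2 + 4*x2**2 + 6
    hessian = lambda x1, x2: np.array([[6*x1 + 4, 0.0], [0.0, 6*x2 + 8]])

    for p in [(0, 0), (0, -8/3), (-4/3, 0), (-4/3, -8/3)]:
        eig = np.linalg.eigvalsh(hessian(*p))
        nature = ("relative minimum" if np.all(eig > 0) else
                  "relative maximum" if np.all(eig < 0) else "saddle point")
        print(p, nature, f(*p))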

Page 94: Optimization

94

Multivariable optimization with equality constraints

• Problem statement: Minimize f = f(X) subject to g_j(X) = 0, j = 1, 2, ..., m, where

  X = (x_1, x_2, ..., x_n)^T

Here m is less than or equal to n; otherwise the problem becomes overdefined and, in general, there will be no solution.

• Solution methods:
  – Solution by direct substitution
  – Solution by the method of constrained variation
  – Solution by the method of Lagrange multipliers

Page 95: Optimization

95

Solution by direct substitution

For a problem with n variables and m equality constraints:

• Solve the m equality constraints and express any set of m variables in terms of the remaining n − m variables.

• Substitute these expressions into the original objective function; the result is a new objective function involving only n − m variables.

• The new objective function is not subjected to any constraint, and hence its optimum can be found by using the unconstrained optimization techniques.

Page 96: Optimization

96

Solution by direct substitution

• Simple in theory

• Not convenient from a practical point of view as the constraint equations will be nonlinear for most of the problems

• Suitable only for simple problems

Page 97: Optimization

97

Example

Find the dimensions of a box of largest volume that can be inscribed in a sphere of unit radius.

Solution: Let the origin of the Cartesian coordinate system x_1, x_2, x_3 be at the center of the sphere and the sides of the box be 2x_1, 2x_2, and 2x_3. The volume of the box is given by:

  f(x_1, x_2, x_3) = 8 x_1 x_2 x_3

Since the corners of the box lie on the surface of the sphere of unit radius, x_1, x_2, and x_3 have to satisfy the constraint

  x_1^2 + x_2^2 + x_3^2 = 1

Page 98: Optimization

98

Example

This problem has three design variables and one equality constraint. Hence the equality constraint can be used to eliminate any one of the design variables from the objective function. If we choose to eliminate x_3:

  x_3 = (1 − x_1^2 − x_2^2)^{1/2}

Thus, the objective function becomes:

  f(x_1, x_2) = 8 x_1 x_2 (1 − x_1^2 − x_2^2)^{1/2}

which can be maximized as an unconstrained function in two variables.

Page 99: Optimization

99

Example

The necessary conditions for the maximum of f give:

  ∂f/∂x_1 = 8 x_2 [(1 − x_1^2 − x_2^2)^{1/2} − x_1^2 / (1 − x_1^2 − x_2^2)^{1/2}] = 0
  ∂f/∂x_2 = 8 x_1 [(1 − x_1^2 − x_2^2)^{1/2} − x_2^2 / (1 − x_1^2 − x_2^2)^{1/2}] = 0

which can be simplified as:

  1 − 2x_1^2 − x_2^2 = 0
  1 − x_1^2 − 2x_2^2 = 0

from which it follows that x_1* = x_2* = 1/√3 and hence x_3* = 1/√3.

Page 100: Optimization

100

Example

This solution gives the maximum volume of the box as:

  f_max = 8 / (3√3)

To find whether the solution found corresponds to a maximum or a minimum, we apply the sufficiency conditions to f(x_1, x_2) = 8 x_1 x_2 (1 − x_1^2 − x_2^2)^{1/2}. The second-order partial derivatives of f at (x_1*, x_2*) are given by:

  ∂^2f/∂x_1^2 = −32/√3 at (x_1*, x_2*)

Page 101: Optimization

101

Example

The remaining second-order partial derivatives of f at (x_1*, x_2*) are given by:

  ∂^2f/∂x_2^2 = −32/√3 at (x_1*, x_2*)

  ∂^2f/(∂x_1 ∂x_2) = −16/√3 at (x_1*, x_2*)

Page 102: Optimization

102

Example

Since

  ∂^2f/∂x_1^2 < 0   and   (∂^2f/∂x_1^2)(∂^2f/∂x_2^2) − (∂^2f/(∂x_1 ∂x_2))^2 > 0

the Hessian matrix of f is negative definite at (x_1*, x_2*). Hence the point (x_1*, x_2*) corresponds to the maximum of f.
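The same optimum can be confirmed with a constrained numerical solver. A minimal scipy sketch (maximization is done by minimizing the negative volume; the starting point is an arbitrary interior guess):

    import numpy as np
    from scipy.optimize import minimize

    volume = lambda x: 8 * x[0] * x[1] * x[2]
    res = minimize(lambda x: -volume(x),
                   x0=[0.5, 0.5, 0.5],
                   constraints=[{"type": "eq",
                                 "fun": lambda x: x[0]**2 + x[1]**2 + x[2]**2 - 1}],
                   method="SLSQP")
    print(res.x, 1/np.sqrt(3))         # each x_i close to 1/sqrt(3) = 0.5774
    print(-res.fun, 8/(3*np.sqrt(3)))  # maximum volume 8/(3*sqrt(3)) = 1.5396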

Page 103: Optimization

103

Solution by constrained variation

• Minimize f(x_1, x_2) subject to g(x_1, x_2) = 0.

• A necessary condition for f to have a minimum at some point (x_1*, x_2*) is that the total derivative of f(x_1, x_2) with respect to x_1 must be zero at (x_1*, x_2*):

  df = (∂f/∂x_1) dx_1 + (∂f/∂x_2) dx_2 = 0

• Since g(x_1*, x_2*) = 0 at the minimum point, any variations dx_1 and dx_2 taken about the point (x_1*, x_2*) are called admissible variations provided that the new point lies on the constraint:

  g(x_1* + dx_1, x_2* + dx_2) = 0

Page 104: Optimization

104

Solution by constrained variation

• Taylor's series expansion of the function about the point (x_1*, x_2*):

  g(x_1* + dx_1, x_2* + dx_2) ≈ g(x_1*, x_2*) + (∂g/∂x_1)(x_1*, x_2*) dx_1 + (∂g/∂x_2)(x_1*, x_2*) dx_2 = 0

• Since g(x_1*, x_2*) = 0:

  dg = (∂g/∂x_1) dx_1 + (∂g/∂x_2) dx_2 = 0 at (x_1*, x_2*)

• Assuming ∂g/∂x_2 ≠ 0:

  dx_2 = − [(∂g/∂x_1) / (∂g/∂x_2)] (x_1*, x_2*) dx_1

• Substituting this into df = (∂f/∂x_1) dx_1 + (∂f/∂x_2) dx_2 = 0:

  df = [∂f/∂x_1 − (∂g/∂x_1)/(∂g/∂x_2) · ∂f/∂x_2] (x_1*, x_2*) dx_1 = 0

Page 105: Optimization

105

Solution by constrained variation

• The expression on the left-hand side is called the constrained variation of f.

• Since dx_1 can be chosen arbitrarily:

  (∂f/∂x_1 · ∂g/∂x_2 − ∂f/∂x_2 · ∂g/∂x_1) (x_1*, x_2*) = 0

• This equation represents a necessary condition in order to have (x_1*, x_2*) as an extreme point (minimum or maximum).

Page 106: Optimization

106

Example

A beam of uniform rectangular cross section is to be cut from a log having a circular cross section of diameter 2a. The beam has to be used as a cantilever beam (the length is fixed) to carry a concentrated load at the free end. Find the dimensions of the beam that correspond to the maximum tensile (bending) stress carrying capacity.

Page 107: Optimization

107

Example

Solution: From elementary strength of materials, we know that the tensile stress induced in a rectangular beam at any fiber located at a distance y from the neutral axis is given by

  σ / y = M / I

where M is the bending moment acting and I is the moment of inertia of the cross section about the x axis. If the width and the depth of the rectangular beam shown in the figure are 2x and 2y, respectively, the maximum tensile stress induced is given by:

  σ_max = (M / I) y = M y / ((1/12)(2x)(2y)^3) = (3/4) M / (x y^2)

Page 108: Optimization

108

Example

Solution cont'd: Thus for any specified bending moment, the beam is said to have maximum tensile stress carrying capacity if the maximum induced stress (σ_max) is a minimum. Hence we need to minimize k/(x y^2) or maximize K x y^2, where k = 3M/4 and K = 1/k, subject to the constraint

  x^2 + y^2 = a^2

This problem has two variables and one constraint; hence the necessary condition

  (∂f/∂x · ∂g/∂y − ∂f/∂y · ∂g/∂x) (x*, y*) = 0

can be applied for finding the optimum solution.

Page 109: Optimization

109

Example

Solution: Since

  f = k x^{-1} y^{-2}   and   g = x^2 + y^2 − a^2

we have:

  ∂f/∂x = −k x^{-2} y^{-2},   ∂f/∂y = −2k x^{-1} y^{-3}
  ∂g/∂x = 2x,   ∂g/∂y = 2y

The necessary condition gives:

  (−k x^{-2} y^{-2})(2y) − (−2k x^{-1} y^{-3})(2x) = 0 at (x*, y*)

Page 110: Optimization

110

Example

Solution: that is,

  y* = √2 x*

Thus the beam of maximum tensile stress carrying capacity has a depth of √2 times its breadth. The optimum values of x and y can be obtained from the above equation and g = x^2 + y^2 − a^2 = 0 as:

  x* = a/√3   and   y* = √2 a/√3
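The constrained-variation condition for this example is easy to verify symbolically. A minimal sympy sketch that forms f_x g_y − f_y g_x = 0 and solves it for y:

    import sympy as sp

    x, y, a, k = sp.symbols("x y a k", positive=True)
    f = k / (x * y**2)          # stress measure to be minimized
    g = x**2 + y**2 - a**2      # constraint g = 0

    cond = sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x)
    print(sp.solve(sp.Eq(cond, 0), y))   # -> [sqrt(2)*x], i.e. y* = sqrt(2) x*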

Page 111: Optimization

111

Solution by constrained variation

Necessary conditions for a general problem

• The procedure described can be generalized to a problem with n variables and m constraints.

• In this case, each constraint equation g_j(X) = 0, j = 1, 2, ..., m gives rise to a linear equation in the variations dx_i, i = 1, 2, ..., n.

• Thus, there will be in all m linear equations in n variations. Hence any m variations can be expressed in terms of the remaining n − m variations.

• These expressions can be used to express the differential of the objective function, df, in terms of the n − m independent variations.

• By letting the coefficients of the independent variations vanish in the equation df = 0, one obtains the necessary conditions for the constrained optimum of the given function.

Page 112: Optimization

112

Solution by constrained variation

Necessary conditions for a general problem

• These conditions can be expressed as:

  J( f, g_1, g_2, ..., g_m / x_k, x_1, x_2, ..., x_m )

    = | ∂f/∂x_k     ∂f/∂x_1     ∂f/∂x_2     ...   ∂f/∂x_m   |
      | ∂g_1/∂x_k   ∂g_1/∂x_1   ∂g_1/∂x_2   ...   ∂g_1/∂x_m |
      | ∂g_2/∂x_k   ∂g_2/∂x_1   ∂g_2/∂x_2   ...   ∂g_2/∂x_m |
      | ...                                                 |
      | ∂g_m/∂x_k   ∂g_m/∂x_1   ∂g_m/∂x_2   ...   ∂g_m/∂x_m |

    = 0,   where k = m+1, m+2, ..., n

• It is to be noted that the variations of the first m variables (dx_1, dx_2, ..., dx_m) have been expressed in terms of the variations of the remaining n − m variables (dx_{m+1}, dx_{m+2}, ..., dx_n) in deriving the above equation.

Page 113: Optimization

113

Solution by constrained variation

Necessary conditions for a general problem

• This implies that the following relation must be satisfied:

  J( g_1, g_2, ..., g_m / x_1, x_2, ..., x_m ) ≠ 0

• The n − m equations given by the Jacobian condition J(f, g_1, ..., g_m / x_k, x_1, ..., x_m) = 0, k = m+1, ..., n, represent the necessary conditions for the extremum of f(X) under the m equality constraints g_j(X) = 0, j = 1, 2, ..., m.

Page 114: Optimization

114

Example

Minimize

  f(Y) = (1/2)(y_1^2 + y_2^2 + y_3^2 + y_4^2)

subject to

  g_1(Y) = y_1 + 2y_2 + 3y_3 + 5y_4 − 10 = 0
  g_2(Y) = y_1 + 2y_2 + 5y_3 + 6y_4 − 15 = 0

Solution: This problem can be solved by applying the necessary conditions J(f, g_1, g_2 / x_k, x_1, x_2) = 0, k = 3, 4.

Page 115: Optimization

115

Example

Solution cont'd: Since n = 4 and m = 2, we have to select two variables as independent variables. First we show that an arbitrary set of variables cannot always be chosen as independent variables, since the remaining (dependent) variables have to satisfy the condition J(g_1, g_2 / x_1, x_2) ≠ 0.

In terms of the notation of our equations, let us take the independent variables as x_3 = y_3 and x_4 = y_4, so that x_1 = y_1 and x_2 = y_2. Then the Jacobian becomes:

  J( g_1, g_2 / x_1, x_2 ) = | ∂g_1/∂y_1   ∂g_1/∂y_2 |  =  | 1   2 |  =  0
                             | ∂g_2/∂y_1   ∂g_2/∂y_2 |     | 1   2 |

and hence the necessary conditions cannot be applied.

Page 116: Optimization

116

Example

Solution cont'd: Next, let us take the independent variables as x_3 = y_2 and x_4 = y_4, so that x_1 = y_1 and x_2 = y_3. Then the Jacobian becomes:

  J( g_1, g_2 / x_1, x_2 ) = | ∂g_1/∂y_1   ∂g_1/∂y_3 |  =  | 1   3 |  =  2 ≠ 0
                             | ∂g_2/∂y_1   ∂g_2/∂y_3 |     | 1   5 |

and hence the necessary conditions J(f, g_1, g_2 / x_k, x_1, x_2) = 0, k = m+1, ..., n can be applied.

Page 117: Optimization

117

Example

Solution cont'd: The Jacobian condition gives, for k = m + 1 = 3:

  | ∂f/∂y_2     ∂f/∂y_1     ∂f/∂y_3   |     | y_2   y_1   y_3 |
  | ∂g_1/∂y_2   ∂g_1/∂y_1   ∂g_1/∂y_3 |  =  | 2     1     3   |  =  y_2(5 − 3) − y_1(10 − 6) + y_3(2 − 2) = 2y_2 − 4y_1 = 0
  | ∂g_2/∂y_2   ∂g_2/∂y_1   ∂g_2/∂y_3 |     | 2     1     5   |

Page 118: Optimization

118

Example

Solution cont'd: For k = m + 2 = 4:

  | ∂f/∂y_4     ∂f/∂y_1     ∂f/∂y_3   |     | y_4   y_1   y_3 |
  | ∂g_1/∂y_4   ∂g_1/∂y_1   ∂g_1/∂y_3 |  =  | 5     1     3   |  =  y_4(5 − 3) − y_1(25 − 18) + y_3(5 − 6) = 2y_4 − 7y_1 − y_3 = 0
  | ∂g_2/∂y_4   ∂g_2/∂y_1   ∂g_2/∂y_3 |     | 6     1     5   |

From the two previous equations, the necessary conditions for the minimum or the maximum of f are obtained as:

  y_1 = (1/2) y_2
  y_3 = 2y_4 − (7/2) y_2

Page 119: Optimization

119

Example

Solution cont'd: When the expressions

  y_1 = (1/2) y_2   and   y_3 = 2y_4 − (7/2) y_2

are substituted, the constraint equations g_1(Y) = 0 and g_2(Y) = 0 take the form:

  −8y_2 + 11y_4 = 10
  −15y_2 + 16y_4 = 15

Page 120: Optimization

120

Example

Solution cont'd: from which the desired optimum solution can be obtained as:

  y_1* = −5/74
  y_2* = −5/37
  y_3* = 155/74
  y_4* = 30/37
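Because the objective is quadratic and the constraints are linear, the optimum can also be recovered in one linear solve from the stationarity conditions (a preview of the Lagrange multiplier method introduced on later slides): y = A^T μ with (A A^T) μ = b. A minimal numpy sketch confirming the solution above:

    import numpy as np
    from fractions import Fraction

    # minimize (1/2) y^T y  subject to  A y = b
    A = np.array([[1, 2, 3, 5], [1, 2, 5, 6]], dtype=float)
    b = np.array([10.0, 15.0])

    # Stationarity of L = (1/2) y^T y + lam^T (A y - b) gives y = -A^T lam;
    # with mu = -lam, the constraint A y = b becomes (A A^T) mu = b.
    mu = np.linalg.solve(A @ A.T, b)
    y = A.T @ mu
    print([str(Fraction(v).limit_denominator(1000)) for v in y])
    # -> ['-5/74', '-5/37', '155/74', '30/37']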

Page 121: Optimization

121

Solution by constrained variation

Sufficiency conditions for a general problem

• By eliminating the first m variables, using the m equality constraints, the objective function f can be made to depend only on the remaining variables x_{m+1}, x_{m+2}, ..., x_n. Then the Taylor's series expansion of f, in terms of these variables, about the extreme point X* gives:

  f(X* + dX) ≈ f(X*) + Σ_{i=m+1}^{n} (∂f/∂x_i)_g dx_i + (1/2) Σ_{i=m+1}^{n} Σ_{j=m+1}^{n} (∂^2f/(∂x_i ∂x_j))_g dx_i dx_j

where (∂f/∂x_i)_g is used to denote the partial derivative of f with respect to x_i (holding all the other variables x_{m+1}, x_{m+2}, ..., x_{i−1}, x_{i+1}, x_{i+2}, ..., x_n constant) when x_1, x_2, ..., x_m are allowed to change so that the constraints g_j(X* + dX) = 0, j = 1, 2, ..., m are satisfied; the second derivative (∂^2f/(∂x_i ∂x_j))_g is used with a similar meaning.

Page 122: Optimization

122

Example

Solution by constrained variation: Consider the problem of minimizing

  f(X) = f(x_1, x_2, x_3)

subject to the only constraint

  g_1(X) = x_1^2 + x_2^2 + x_3^2 − 8 = 0

Since n = 3 and m = 1 in this problem, one can think of any of the m variables, say x_1, as dependent and the remaining n − m variables, namely x_2 and x_3, as independent.

Here the constrained partial derivative (∂f/∂x_2)_g means the rate of change of f with respect to x_2 (holding the other independent variable x_3 constant) and at the same time allowing x_1 to change about X* so as to satisfy the constraint g_1(X) = 0.

Page 123: Optimization

123

Example

In the present case, this means that dx_1 has to be chosen to satisfy the relation

  dg_1 = g_1(X* + dX) − g_1(X*) ≈ (∂g_1/∂x_1)(X*) dx_1 + (∂g_1/∂x_2)(X*) dx_2 + (∂g_1/∂x_3)(X*) dx_3 = 0

that is,

  2 x_1* dx_1 + 2 x_2* dx_2 = 0

since g_1(X*) = 0 at the optimum point and dx_3 = 0 (x_3 is held constant).

Page 124: Optimization

124

Example

Notice that (∂f/∂x_i)_g has to be zero for i = m+1, m+2, ..., n, since the dx_i appearing in the Taylor expansion of f(X* + dX) are all independent. Thus, the necessary conditions for the existence of a constrained optimum at X* can also be expressed as:

  (∂f/∂x_i)_g = 0,  i = m+1, m+2, ..., n

Page 125: Optimization

125

Example

It can be shown that the equations

  (∂f/∂x_i)_g = 0,  i = m+1, m+2, ..., n

are nothing but the Jacobian conditions

  J( f, g_1, g_2, ..., g_m / x_k, x_1, x_2, ..., x_m ) = 0,  k = m+1, m+2, ..., n

Page 126: Optimization

126

Sufficiency conditions for a general problem

• A sufficient condition for X* to be a constrained relative minimum (maximum) is that the quadratic form Q defined by

  Q = Σ_{i=m+1}^{n} Σ_{j=m+1}^{n} (∂^2f/(∂x_i ∂x_j))_g dx_i dx_j

is positive (negative) for all nonvanishing variations dx_i; that is, the matrix

  | (∂^2f/∂x_{m+1}^2)_g          (∂^2f/(∂x_{m+1} ∂x_{m+2}))_g   ...   (∂^2f/(∂x_{m+1} ∂x_n))_g |
  | ...                                                                                        |
  | (∂^2f/(∂x_n ∂x_{m+1}))_g     (∂^2f/(∂x_n ∂x_{m+2}))_g       ...   (∂^2f/∂x_n^2)_g          |

has to be positive (negative) definite to have Q positive (negative) for all choices of dx_i.

Page 127: Optimization

127

• The computation of the constrained derivatives in the sufficiency condition is difficult and may be prohibitive for problems with more than three constraints

• Simple in theory

• Difficult to apply since the necessary conditions involve evaluation of determinants of order m+1

Solution by constrained variationSolution by constrained variation

Page 128: Optimization

128

Solution by Lagrange multipliers

Problem with two variables and one constraint:

  Minimize f(x_1, x_2)
  subject to g(x_1, x_2) = 0

For this problem, the necessary condition was found to be:

  [∂f/∂x_1 − (∂f/∂x_2)/(∂g/∂x_2) · ∂g/∂x_1] (x_1*, x_2*) = 0

By defining a quantity λ, called the Lagrange multiplier, as:

  λ = − [(∂f/∂x_2) / (∂g/∂x_2)] (x_1*, x_2*)

Page 129: Optimization

129

Solution by Lagrange multipliers

Problem with two variables and one constraint: the necessary conditions for the point (x_1*, x_2*) to be an extreme point can be rewritten as:

  [∂f/∂x_1 + λ ∂g/∂x_1] (x_1*, x_2*) = 0
  [∂f/∂x_2 + λ ∂g/∂x_2] (x_1*, x_2*) = 0

In addition, the constraint equation has to be satisfied at the extreme point:

  g(x_1, x_2) = 0 at (x_1*, x_2*)

Page 130: Optimization

130

Solution by Lagrange multipliers

Problem with two variables and one constraint:

• The derivation of the necessary conditions by the method of Lagrange multipliers requires that at least one of the partial derivatives of g(x_1, x_2) be nonzero at the extreme point.

• The necessary conditions are more commonly generated by constructing a function L, known as the Lagrange function, as:

  L(x_1, x_2, λ) = f(x_1, x_2) + λ g(x_1, x_2)

Page 131: Optimization

131

Solution by Lagrange multipliers

Problem with two variables and one constraint:

• By treating L as a function of the three variables x_1, x_2, and λ, the necessary conditions for its extremum are given by:

  ∂L/∂x_1 (x_1, x_2, λ) = ∂f/∂x_1 (x_1, x_2) + λ ∂g/∂x_1 (x_1, x_2) = 0
  ∂L/∂x_2 (x_1, x_2, λ) = ∂f/∂x_2 (x_1, x_2) + λ ∂g/∂x_2 (x_1, x_2) = 0
  ∂L/∂λ (x_1, x_2, λ) = g(x_1, x_2) = 0

Page 132: Optimization

132

Example

Example: Find the solution of the beam problem

  Minimize f(x, y) = k x^{-1} y^{-2}
  subject to g(x, y) = x^2 + y^2 − a^2 = 0

using the Lagrange multiplier method.

Solution: The Lagrange function is

  L(x, y, λ) = f(x, y) + λ g(x, y) = k x^{-1} y^{-2} + λ (x^2 + y^2 − a^2)

The necessary conditions for the minimum of f(x, y) are:

  ∂L/∂x = −k x^{-2} y^{-2} + 2xλ = 0
  ∂L/∂y = −2k x^{-1} y^{-3} + 2yλ = 0
  ∂L/∂λ = x^2 + y^2 − a^2 = 0

Page 133: Optimization

133

Example

Solution cont'd: which yield:

  λ = k / (2 x^3 y^2) = k / (x y^4)

from which the relation x* = (1/√2) y* can be obtained. This relation, along with ∂L/∂λ = x^2 + y^2 − a^2 = 0, gives the optimum solution as:

  x* = a/√3   and   y* = √2 a/√3

k

Page 134: Optimization

134

Solution by Lagrange multipliers

Necessary conditions for a general problem:

  Minimize f(X)
  subject to g_j(X) = 0, j = 1, 2, ..., m

The Lagrange function, L, in this case is defined by introducing one Lagrange multiplier λ_j for each constraint g_j(X) as:

  L(x_1, x_2, ..., x_n, λ_1, λ_2, ..., λ_m) = f(X) + λ_1 g_1(X) + λ_2 g_2(X) + ... + λ_m g_m(X)

Page 135: Optimization

135

Solution by Lagrange multipliers

By treating L as a function of the n + m unknowns x_1, x_2, ..., x_n, λ_1, λ_2, ..., λ_m, the necessary conditions for the extremum of L, which also correspond to the solution of the original problem, are given by:

  ∂L/∂x_i = ∂f/∂x_i + Σ_{j=1}^{m} λ_j ∂g_j/∂x_i = 0,  i = 1, 2, ..., n
  ∂L/∂λ_j = g_j(X) = 0,  j = 1, 2, ..., m

The above equations represent n + m equations in terms of the n + m unknowns x_i and λ_j.

Page 136: Optimization

136

Solution by Lagrange multipliers

The solution:

  X* = (x_1*, x_2*, ..., x_n*)^T   and   λ* = (λ_1*, λ_2*, ..., λ_m*)^T

The vector X* corresponds to the relative constrained minimum of f(X) (the sufficient conditions are to be verified), while the vector λ* provides the sensitivity information.

Page 137: Optimization

137

Solution by Lagrange multipliers

Sufficient condition

A sufficient condition for f(X) to have a constrained relative minimum at X* is that the quadratic form Q defined by

  Q = Σ_{i=1}^{n} Σ_{j=1}^{n} (∂^2 L / (∂x_i ∂x_j)) dx_i dx_j

evaluated at X = X* must be positive definite for all values of dX for which the constraints are satisfied. If

  Q = Σ_{i=1}^{n} Σ_{j=1}^{n} (∂^2 L / (∂x_i ∂x_j)) (X*, λ*) dx_i dx_j

is negative for all choices of the admissible variations dx_i, X* will be a constrained maximum of f(X).

*X*,

Page 138: Optimization

138

Solution by Lagrange multipliers

A necessary condition for the quadratic form Q to be positive (negative) definite for all admissible variations dX is that each root z_i of the following determinantal equation be positive (negative):

  | L_11 − z   L_12       ...   L_1n       g_11   g_21   ...   g_m1 |
  | L_21       L_22 − z   ...   L_2n       g_12   g_22   ...   g_m2 |
  | ...                                                             |
  | L_n1       L_n2       ...   L_nn − z   g_1n   g_2n   ...   g_mn |  =  0
  | g_11       g_12       ...   g_1n       0      0      ...   0    |
  | ...                                                             |
  | g_m1       g_m2       ...   g_mn       0      0      ...   0    |

where L_ij = ∂^2 L(X*, λ*)/(∂x_i ∂x_j) and g_ij = ∂g_i(X*)/∂x_j.

• The determinantal equation, on expansion, leads to an (n − m)th-order polynomial in z. If some of the roots of this polynomial are positive while the others are negative, the point X* is not an extreme point.

Page 139: Optimization

139

Example 1

Find the dimensions of a cylindrical tin (with top and bottom) made up of sheet metal to maximize its volume such that the total surface area is equal to A0 = 24π.

Solution

If x1 and x2 denote the radius of the base and the length of the tin, respectively, the problem can be stated as:

Maximize f(x1, x2) = πx1²x2

subject to

g(x1, x2) = 2πx1² + 2πx1x2 − A0 = 2πx1² + 2πx1x2 − 24π = 0

Page 140: Optimization

140

Example 1

Solution cont'd:

Maximize f(x1, x2) = πx1²x2

subject to

g(x1, x2) = 2πx1² + 2πx1x2 − A0 = 0,  A0 = 24π

The Lagrange function is:

L(x1, x2, λ) = πx1²x2 + λ(2πx1² + 2πx1x2 − A0)

and the necessary conditions for the maximum of f give:

∂L/∂x1 = 2πx1x2 + λ(4πx1 + 2πx2) = 0

∂L/∂x2 = πx1² + 2πλx1 = 0

∂L/∂λ = 2πx1² + 2πx1x2 − A0 = 0

Page 141: Optimization

141

Example 1

Solution cont'd:

that is,

λ = −(x1x2)/(2x1 + x2) = −x1/2

The above equations give x2* = 2x1*, which, along with the constraint 2πx1² + 2πx1x2 = A0, gives the desired solution as:

x1* = (A0/6π)^(1/2),  x2* = (2A0/3π)^(1/2),  and  λ* = −(A0/24π)^(1/2)

Page 142: Optimization

142

Example 1

Solution cont'd: This gives the maximum value of f as:

f* = (A0³/54π)^(1/2)

If A0 = 24π, the optimum solution becomes:

x1* = 2,  x2* = 4,  λ* = −1,  and  f* = 16π

To see that this solution really corresponds to the maximum of f, we apply the sufficiency condition.

Page 143: Optimization

143

Example 1

Solution cont'd: In this case:

L11 = ∂²L/∂x1² |(X*, λ*) = 2πx2* + 4πλ* = 4π

L12 = L21 = ∂²L/∂x1∂x2 |(X*, λ*) = 2πx1* + 2πλ* = 2π

L22 = ∂²L/∂x2² |(X*, λ*) = 0

g11 = ∂g/∂x1 |X* = 4πx1* + 2πx2* = 16π

g12 = ∂g/∂x2 |X* = 2πx1* = 4π

Page 144: Optimization

144

Example 1

Solution cont'd: Thus the determinantal equation becomes:

| 4π − z    2π     16π |
| 2π       0 − z    4π |  = 0
| 16π      4π        0 |

Page 145: Optimization

145

Example 1

Solution cont'd: that is,

272π²z + 192π³ = 0

This gives

z = −12π/17

Since the value of z is negative, the point (x1*, x2*) corresponds to the maximum of f.

Page 146: Optimization

146

Example 2

Find the maximum of the function f(X) = 2x1 + x2 + 10 subject to g(X) = x1 + 2x2² = 3 using the Lagrange multiplier method. Also find the effect of changing the right-hand side of the constraint on the optimum value of f.

Solution: The Lagrange function is given by:

L(X, λ) = 2x1 + x2 + 10 + λ(3 − x1 − 2x2²)

The necessary conditions for the solution of the problem are:

∂L/∂x1 = 2 − λ = 0

∂L/∂x2 = 1 − 4λx2 = 0

∂L/∂λ = 3 − x1 − 2x2² = 0

Page 147: Optimization

147

Example 2

Solution cont'd: The solution of these equations is:

λ* = 2,  X* = (x1*, x2*) = (2.97, 0.13)

The application of the sufficiency condition yields:

| L11 − z   L12      g11 |
| L21      L22 − z   g12 |  = 0
| g11      g12        0  |

that is,

| 0 − z    0       1    |
| 0      −8 − z   0.52  |  = 0
| 1       0.52     0    |

Page 148: Optimization

148

Example 2

Solution cont'd:

0.2704z + z + 8 = 0,  that is,  z = −6.2972

Hence X* will be a maximum of f with f* = f(X*) = 16.07. As for the effect of the right-hand side, λ* = 2 indicates that a unit change in the right-hand side of the constraint changes the optimum value f* by approximately 2.

Page 149: Optimization

149

Minimize f (X)

subject to

gj (X) ≤ 0, j=1, 2,…,m

The inequality constraints can be transformed to equality constraints by adding nonnegative slack variables yj², as:

gj(X) + yj² = 0,  j = 1, 2, …, m

where the values of the slack variables are yet unknown.

Multivariable optimization with inequality constraints

Page 150: Optimization

150

Minimize f(X) subject to

Gj(X, Y) = gj(X) + yj² = 0,  j = 1, 2, …, m

where Y = (y1, y2, …, ym)ᵀ is the vector of slack variables.

This problem can be solved by the method of Lagrange multipliers. For this, the Lagrange function L is constructed as:

Multivariable optimization with inequality constraints

L(X, Y, λ) = f(X) + Σ(j=1 to m) λj Gj(X, Y)

where λ = (λ1, λ2, …, λm)ᵀ is the vector of Lagrange multipliers.

YX,XY,X,

Page 151: Optimization

151

The stationary points of the Lagrange function can be found by solving the following equations (necessary conditions):

∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σ(j=1 to m) λj ∂gj/∂xi (X) = 0,  i = 1, 2, …, n

∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0,  j = 1, 2, …, m

∂L/∂yj (X, Y, λ) = 2λjyj = 0,  j = 1, 2, …, m

These are (n+2m) equations in the (n+2m) unknowns. The solution gives the optimum solution vector X*, the Lagrange multiplier vector λ*, and the slack variable vector Y*.

Multivariable optimization with inequality constraints

Page 152: Optimization

152

The equations

∂L/∂λj (X, Y, λ) = gj(X) + yj² = 0,  j = 1, 2, …, m

ensure that the constraints

gj(X) ≤ 0,  j = 1, 2, …, m

are satisfied, while the equations

∂L/∂yj (X, Y, λ) = 2λjyj = 0,  j = 1, 2, …, m

imply that either λj = 0 or yj = 0.

Multivariable optimization with inequality constraints

Page 153: Optimization

153

• If λj = 0, it means that the jth constraint is inactive and hence can be ignored.

• On the other hand, if yj = 0, it means that the constraint is active (gj = 0) at the optimum point.

• Consider the division of the constraints into two subsets, J1 and J2, where J1 + J2 represents the total set of constraints.

• Let the set J1 indicate the indices of those constraints that are active at the optimum point and J2 include the indices of all the inactive constraints.

• Those constraints that are satisfied with an equality sign, gj = 0, at the optimum point are called the active constraints, while those that are satisfied with a strict inequality sign, gj < 0, are termed inactive constraints.

Multivariable optimization with inequality constraints

Page 154: Optimization

154

• Thus for j ∈ J1, yj = 0 (constraints are active), and for j ∈ J2, λj = 0 (constraints are inactive), and the equation

∂L/∂xi = ∂f/∂xi (X) + Σ(j=1 to m) λj ∂gj/∂xi (X) = 0,  i = 1, 2, …, n

can be simplified as:

∂f/∂xi + Σ(j ∈ J1) λj ∂gj/∂xi = 0,  i = 1, 2, …, n

Multivariable optimization with inequality constraints

Page 155: Optimization

155

Similarly, the equations

∂L/∂λj = gj(X) + yj² = 0,  j = 1, 2, …, m

can be written as:

gj(X) = 0,  j ∈ J1

gj(X) + yj² = 0,  j ∈ J2    (2)

Together with the simplified necessary conditions

∂f/∂xi + Σ(j ∈ J1) λj ∂gj/∂xi = 0,  i = 1, 2, …, n    (1)

the equations (1) and (2) represent n + p + (m − p) = n + m equations in the n + m unknowns xi (i = 1, 2, …, n), λj (j ∈ J1), and yj (j ∈ J2), where p denotes the number of active constraints.

Multivariable optimization with inequality constraints

Page 156: Optimization

156

Assuming that the first p constraints are active, the equation

∂f/∂xi + Σ(j ∈ J1) λj ∂gj/∂xi = 0,  i = 1, 2, …, n

can be expressed as:

−∂f/∂xi = λ1 ∂g1/∂xi + λ2 ∂g2/∂xi + … + λp ∂gp/∂xi,  i = 1, 2, …, n

These equations can be collectively written as

−∇f = λ1∇g1 + λ2∇g2 + … + λp∇gp

where ∇f and ∇gj are the gradients of the objective function and the jth constraint, respectively.

Multivariable optimization with inequality constraints

Page 157: Optimization

157

The equation

−∇f = λ1∇g1 + λ2∇g2 + … + λp∇gp

indicates that the negative of the gradient of the objective function can be expressed as a linear combination of the gradients of the active constraints at the optimum point. Here

∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn)ᵀ  and  ∇gj = (∂gj/∂x1, ∂gj/∂x2, …, ∂gj/∂xn)ᵀ

Multivariable optimization with inequality constraints

Page 158: Optimization

158

• A vector S is called a feasible direction from a point X if at least a small step can be taken along S that does not immediately leave the feasible region.

• Thus, for problems with sufficiently smooth constraint surfaces, a vector S satisfying the relation

Sᵀ∇gj < 0

can be called a feasible direction.

Multivariable optimization with inequality constraints - Feasible region

Page 159: Optimization

159

• On the other hand, if the constraint is either linear or concave, any vector satisfying the relation

Sᵀ∇gj ≤ 0

can be called a feasible direction.

• The geometric interpretation of a feasible direction is that the vector S makes an obtuse angle with all the constraint normals.

Multivariable optimization with inequality constraints - Feasible region

Page 160: Optimization

160

Multivariable optimization with inequality constraints - Feasible region

Page 161: Optimization

161

• Further, we can show that in the case of a minimization problem, the λj values (j ∈ J1) have to be positive. For simplicity of illustration, suppose that only two constraints (p = 2) are active at the optimum point.

• Then the equation

−∇f = λ1∇g1 + λ2∇g2 + … + λp∇gp

reduces to

−∇f = λ1∇g1 + λ2∇g2

Multivariable optimization with inequality constraints

Page 162: Optimization

162

• Let S be a feasible direction at the optimum point. By premultiplying both sides of the equation

−∇f = λ1∇g1 + λ2∇g2

by Sᵀ, we obtain:

−Sᵀ∇f = λ1 Sᵀ∇g1 + λ2 Sᵀ∇g2

where the superscript T denotes the transpose. Since S is a feasible direction, it should satisfy the relations:

Sᵀ∇g1 ≤ 0

Sᵀ∇g2 ≤ 0

Multivariable optimization with inequality constraints

Page 163: Optimization

163

• Thus if λ1 > 0 and λ2 > 0, the quantity Sᵀ∇f is always nonnegative.

• As ∇f indicates the gradient direction, along which the value of the function increases at the maximum rate, Sᵀ∇f represents the component of the increment of f along the direction S.

• If Sᵀ∇f > 0, the function value increases as we move along the direction S.

• Hence if λ1 and λ2 are positive, we will not be able to find any direction in the feasible domain along which the function value can be decreased further.

Multivariable optimization with inequality constraints

Page 164: Optimization

164

• Since the point at which the equation

−∇f = λ1∇g1 + λ2∇g2

is valid is assumed to be optimum, λ1 and λ2 have to be positive.

• This reasoning can be extended to cases where more than two constraints are active. By proceeding in a similar manner, one can show that the λj values have to be negative for a maximization problem.

Multivariable optimization with inequality constraints

Page 165: Optimization

165

Kuhn-Tucker Conditions

• The conditions to be satisfied at a constrained minimum point, X*, of the problem can be expressed as:

∂f/∂xi + Σ(j ∈ J1) λj ∂gj/∂xi = 0,  i = 1, 2, …, n

λj > 0,  j ∈ J1

• These conditions are in general not sufficient to ensure a relative minimum.

• There is only a class of problems, called convex programming problems, for which the Kuhn-Tucker conditions are necessary and sufficient for a global minimum.

Page 166: Optimization

166

Kuhn-Tucker Conditions

• Those constraints that are satisfied with an equality sign, gj = 0, at the optimum point are called the active constraints. If the set of active constraints is not known, the Kuhn-Tucker conditions can be stated as follows:

∂f/∂xi + Σ(j=1 to m) λj ∂gj/∂xi = 0,  i = 1, 2, …, n

λj gj = 0,  j = 1, 2, …, m

gj ≤ 0,  j = 1, 2, …, m

λj ≥ 0,  j = 1, 2, …, m

Page 167: Optimization

167

Kuhn-Tucker Conditions

• Note that if the problem is one of minimization with constraints of the type gj ≥ 0, the λj have to be nonpositive in the equations below:

∂f/∂xi + Σ(j=1 to m) λj ∂gj/∂xi = 0,  i = 1, 2, …, n

λj gj = 0,  j = 1, 2, …, m

gj ≤ 0,  j = 1, 2, …, m

λj ≥ 0,  j = 1, 2, …, m

• On the other hand, if the problem is one of maximization with constraints in the form gj ≥ 0, the λj have to be nonnegative in the above equations.

Page 168: Optimization

168

Constraint Qualification

• When the optimization problem is stated as:

Minimize f(X)

subject to

gj(X) ≤ 0,  j = 1, 2, …, m

hk(X) = 0,  k = 1, 2, …, p

the Kuhn-Tucker conditions become:

∇f + Σ(j=1 to m) λj∇gj + Σ(k=1 to p) βk∇hk = 0

λj gj = 0,  j = 1, 2, …, m

gj ≤ 0,  j = 1, 2, …, m

hk = 0,  k = 1, 2, …, p

λj ≥ 0,  j = 1, 2, …, m

where λj and βk denote the Lagrange multipliers associated with the constraints gj ≤ 0 and hk = 0, respectively (the βk, attached to equality constraints, are unrestricted in sign).

Page 169: Optimization

169

Constraint Qualification

• Although we found that the Kuhn-Tucker conditions represent the necessary conditions of optimality, the following theorem gives the precise conditions of optimality:

• Theorem: Let X* be a feasible solution to the problem of

Minimize f(X)

subject to

gj(X) ≤ 0,  j = 1, 2, …, m

hk(X) = 0,  k = 1, 2, …, p

If ∇gj(X*), j ∈ J1, and ∇hk(X*), k = 1, 2, …, p, are linearly independent, there exist λ* and β* such that (X*, λ*, β*) satisfy the equations below:

∇f + Σ(j=1 to m) λj∇gj + Σ(k=1 to p) βk∇hk = 0

λj gj = 0,  j = 1, 2, …, m

gj ≤ 0,  j = 1, 2, …, m

hk = 0,  k = 1, 2, …, p

λj ≥ 0,  j = 1, 2, …, m

Page 170: Optimization

170

Example 1

Consider the problem:

Minimize f(x1, x2) = (x1 − 1)² + x2²

subject to

g1(x1, x2) = x1³ − 2x2 ≤ 0

g2(x1, x2) = x1³ + 2x2 ≤ 0

Determine whether the constraint qualification and the Kuhn-Tucker conditions are satisfied at the optimum point.

Page 171: Optimization

171

Example 1

Solution: The feasible region and the contours of the objective function are shown in the figure below. It can be seen that the optimum solution is at (0,0).

Page 172: Optimization

172

Example 1

Solution cont'd: Since g1 and g2 are both active at the optimum point (0,0), their gradients can be computed as:

∇g1(X*) = (3x1², −2)ᵀ |(0,0) = (0, −2)ᵀ  and  ∇g2(X*) = (3x1², 2)ᵀ |(0,0) = (0, 2)ᵀ

It is clear that ∇g1(X*) and ∇g2(X*) are not linearly independent. Hence the constraint qualification is not satisfied at the optimum point.

Page 173: Optimization

173

Example 1

Solution cont'd: Noting that:

∇f(X*) = (2(x1 − 1), 2x2)ᵀ |(0,0) = (−2, 0)ᵀ

the Kuhn-Tucker conditions, ∂f/∂xi + Σ(j ∈ J1) λj ∂gj/∂xi = 0 with λj > 0, j ∈ J1, can be written as:

−2 + λ1(0) + λ2(0) = 0    (E1)

0 + λ1(−2) + λ2(2) = 0    (E2)

λ1 > 0    (E3)

λ2 > 0    (E4)

Since equation (E1) is not satisfied, and equation (E2) can be satisfied for negative values of λ1 = λ2 as well, the Kuhn-Tucker conditions are not satisfied at the optimum point.

Page 174: Optimization

174

Example 2

A manufacturing firm producing small refrigerators has entered into a contract to supply 50 refrigerators at the end of the first month, 50 at the end of the second month, and 50 at the end of the third. The cost of producing x refrigerators in any month is given by $(x2+1000). The firm can produce more refrigerators in any month and carry them to a subsequent month. However, it costs $20 per unit for any refrigerator carried over from one month to the next. Assuming that there is no initial inventory, determine the number of refrigerators to be produced in each month to minimize the total cost.

Page 175: Optimization

175

Example 2

Solution:

Let x1, x2, x3 represent the number of refrigerators produced in the first, second, and third month, respectively. The total cost to be minimized is given by:

total cost = production cost + holding cost

f(x1, x2, x3) = (x1² + 1000) + (x2² + 1000) + (x3² + 1000) + 20(x1 − 50) + 20(x1 + x2 − 100)

= x1² + x2² + x3² + 40x1 + 20x2

Page 176: Optimization

176

Example 2

Solution cont'd:

The constraints can be stated as:

g1(x1, x2, x3) = x1 − 50 ≥ 0

g2(x1, x2, x3) = x1 + x2 − 100 ≥ 0

g3(x1, x2, x3) = x1 + x2 + x3 − 150 ≥ 0

The first Kuhn-Tucker condition is given by:

∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi + λ3 ∂g3/∂xi = 0,  i = 1, 2, 3

that is,

2x1 + 40 + λ1 + λ2 + λ3 = 0    (E1)

2x2 + 20 + λ2 + λ3 = 0    (E2)

2x3 + λ3 = 0    (E3)

Page 177: Optimization

177

Example 2

Solution cont’d:

The second Kuhn-Tucker condition is given by:

λj gj = 0,  j = 1, 2, 3

that is,

λ1(x1 − 50) = 0    (E4)

λ2(x1 + x2 − 100) = 0    (E5)

λ3(x1 + x2 + x3 − 150) = 0    (E6)

Page 178: Optimization

178

Example 2

Solution cont’d:

The third Kuhn-Tucker condition is given by:

gj ≥ 0,  j = 1, 2, 3

that is,

x1 − 50 ≥ 0    (E7)

x1 + x2 − 100 ≥ 0    (E8)

x1 + x2 + x3 − 150 ≥ 0    (E9)

Page 179: Optimization

179

Example 2

Solution cont’d:

The fourth Kuhn-Tucker condition (a minimization with constraints in the form gj ≥ 0) is given by:

λj ≤ 0,  j = 1, 2, 3

that is,

λ1 ≤ 0    (E10)

λ2 ≤ 0    (E11)

λ3 ≤ 0    (E12)

Page 180: Optimization

180

Example 2

Solution cont’d:

The solution of Eqs. (E1) to (E12) can be found in several ways. We proceed to solve these equations by first noting that either λ1 = 0 or x1 = 50 according to (E4).

Using this information, we investigate the following cases to identify the optimum solution of the problem:

• Case I: λ1 = 0

• Case II: x1 = 50

Page 181: Optimization

181

Example 2

Solution cont’d:

• Case I: λ1 = 0

Equations (E1) to (E3) give:

x3 = −λ3/2

x2 = −10 − (λ2 + λ3)/2    (E13)

x1 = −20 − (λ2 + λ3)/2

Page 182: Optimization

182

Example 2

Solution cont’d:

• Case I: λ1 = 0

Substituting Eqs. (E13) into Eqs. (E5) and (E6) gives:

λ2(−130 − λ2 − λ3) = 0

λ3(−180 − λ2 − (3/2)λ3) = 0    (E14)

Page 183: Optimization

183

Example 2

Solution cont’d:

Case I: λ1 = 0

The four possible solutions of Eqs. (E14) are:

1. λ2 = 0, −180 − λ2 − (3/2)λ3 = 0. These equations, along with Eqs. (E13), yield the solution:

λ2 = 0, λ3 = −120, x1 = 40, x2 = 50, x3 = 60

This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7) and (E8) and hence cannot be optimum.

Page 184: Optimization

184

Example 2

Solution cont’d:

Case I: λ1 = 0

The second possible solution of Eqs. (E14) is:

2. λ3 = 0, −130 − λ2 − λ3 = 0. The solution of these equations leads to:

λ2 = −130, λ3 = 0, x1 = 45, x2 = 55, x3 = 0

This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7) and (E9) and hence cannot be optimum.

Page 185: Optimization

185

Example 2

Solution cont’d:

Case I: λ1 = 0

The third possible solution of Eqs. (E14) is:

3. λ2 = 0, λ3 = 0. Equations (E13) give:

x1 = −20, x2 = −10, x3 = 0

This solution satisfies Eqs. (E10) to (E12) but violates the constraints (E7) to (E9) and hence cannot be optimum.

Page 186: Optimization

186

Example 2

Solution cont’d:

Case I: 1=0

The third possible solution of Eqs. (E14) is:

4. -130- 2-3=0, -180- 2-3/2 3=0. The solutions of these equations and Equations (E13) give:

2 =-30, 3 =-100, x1=45, x2=55, x3=50

This solution satisfies Eqs.(E10) to (E12) but violates the constraint Eq.(E7) and hence can not be optimum.

Page 187: Optimization

187

Example 2

Solution cont'd:

Case II: x1 = 50. In this case, Eqs. (E1) to (E3) give:

λ3 = −2x3

λ2 = −20 − 2x2 + 2x3    (E15)

λ1 = −140 − λ2 − λ3 = −120 + 2x2

Substitution of Eqs. (E15) into

λ2(x1 + x2 − 100) = 0    (E5)

λ3(x1 + x2 + x3 − 150) = 0    (E6)

gives:

(−20 − 2x2 + 2x3)(x1 + x2 − 100) = 0

(−2x3)(x1 + x2 + x3 − 150) = 0    (E16)

Page 188: Optimization

188

Example 2

Solution cont’d:

Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16), as indicated below:

1. −20 − 2x2 + 2x3 = 0, x1 + x2 + x3 − 150 = 0: The solution of these equations yields:

x1 = 50, x2 = 45, x3 = 55

This solution can be seen to violate Eq. (E8), which requires:

x1 + x2 − 100 ≥ 0    (E8)

Page 189: Optimization

189

Example 2

Solution cont’d:

Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16), as indicated below:

2. −20 − 2x2 + 2x3 = 0, −2x3 = 0: The solution of these equations yields:

x1 = 50, x2 = −10, x3 = 0

This solution can be seen to violate Eqs. (E8) and (E9), which require:

x1 + x2 − 100 ≥ 0    (E8)

x1 + x2 + x3 − 150 ≥ 0    (E9)

Page 190: Optimization

190

Example 2

Solution cont’d:

Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16), as indicated below:

3. x1 + x2 − 100 = 0, −2x3 = 0: The solution of these equations yields:

x1 = 50, x2 = 50, x3 = 0

This solution can be seen to violate Eq. (E9), which requires:

x1 + x2 + x3 − 150 ≥ 0    (E9)

Page 191: Optimization

191

Example 2

Solution cont’d:

Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16), as indicated below:

4. x1 + x2 − 100 = 0, x1 + x2 + x3 − 150 = 0: The solution of these equations yields:

x1 = 50, x2 = 50, x3 = 50

This solution can be seen to satisfy all the constraints (E7) to (E9):

x1 − 50 ≥ 0    (E7)

x1 + x2 − 100 ≥ 0    (E8)

x1 + x2 + x3 − 150 ≥ 0    (E9)

Page 192: Optimization

192

Example 2

Solution cont’d:

Case II: x1 = 50.

The values of λ1, λ2, and λ3 corresponding to this solution can be obtained from

λ3 = −2x3

λ2 = −20 − 2x2 + 2x3    (E15)

λ1 = −140 − λ2 − λ3 = −120 + 2x2

as:

λ1 = −20, λ2 = −20, λ3 = −100

Page 193: Optimization

193

Example 2

Solution cont'd:

Case II: x1 = 50.

Since these values of λi satisfy the requirements

λ1 ≤ 0    (E10)

λ2 ≤ 0    (E11)

λ3 ≤ 0    (E12)

this solution can be identified as the optimum solution. Thus

x1* = 50, x2* = 50, x3* = 50

Page 194: Optimization

194

Convex functions

• A function f(X) is said to be convex if for any pair of points

X1 = (x1⁽¹⁾, x2⁽¹⁾, …, xn⁽¹⁾)ᵀ  and  X2 = (x1⁽²⁾, x2⁽²⁾, …, xn⁽²⁾)ᵀ

and all λ, 0 ≤ λ ≤ 1,

f[λX2 + (1 − λ)X1] ≤ λf(X2) + (1 − λ)f(X1)

that is, if the segment joining the two points lies entirely above or on the graph of f(X).

• A convex function is always bending upward, and hence it is apparent that the local minimum of a convex function is also a global minimum.

Page 195: Optimization

195

Convex functions

• A function f(x) is convex if for any two points x and y, we have

f(y) ≥ f(x) + ∇f(x)ᵀ(y − x)

• A function f(X) is convex if the Hessian matrix

H(X) = [∂²f(X)/∂xi∂xj]

is positive semidefinite.

• Any local minimum of a convex function f(X) is a global minimum.

Page 196: Optimization

196

Concave function

• A function f(X) is called a concave function if for any two points X1 and X2, and for all 0 ≤ λ ≤ 1,

f[λX2 + (1 − λ)X1] ≥ λf(X2) + (1 − λ)f(X1)

that is, if the line segment joining the two points lies entirely below or on the graph of f(X).

• It can be seen that a concave function bends downward and hence the local maximum will also be its global maximum.

• It can be seen that the negative of a convex function is a concave function.

Page 197: Optimization

197

Concave function

• Convex and concave functions in one variable

Page 198: Optimization

198

Concave function

• Convex and concave functions in two variables

Page 199: Optimization

199

Example

Determine whether the following function is convex or concave.

f(x) = eˣ

Solution:

H(x) = d²f/dx² = eˣ > 0 for all real values of x. Hence f(x) is strictly convex.

Page 200: Optimization

200

Example

Determine whether the following function is convex or concave.

f(x) = −8x²

Solution:

H(x) = d²f/dx² = −16 < 0 for all real values of x. Hence f(x) is strictly concave.

Page 201: Optimization

201

Example

Determine whether the following function is convex or concave.

f(x1, x2) = 2x1³ − 6x2²

Solution:

Here

∂²f/∂x1² = 12x1,  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = 0,  ∂²f/∂x2² = −12

so that

H(X) = | 12x1    0  |
       |  0    −12  |

The principal minors satisfy

12x1 ≤ 0 for x1 ≤ 0  and  |H(X)| = −144x1 ≥ 0 for x1 ≤ 0

Hence H(X) will be negative semidefinite and f(X) is concave for x1 ≤ 0.

Page 202: Optimization

202

Example

Determine whether the following function is convex or concave:

f(x1, x2, x3) = 4x1² + 3x2² + 5x3² + 6x1x2 + x1x3 − 3x1 − 2x2 + 15

Solution:

H(X) = [∂²f/∂xi∂xj] = | 8  6   1 |
                      | 6  6   0 |
                      | 1  0  10 |

Page 203: Optimization

203

Example

Solution cont'd:

Here the principal minors are given by:

|8| = 8 > 0

| 8  6 |
| 6  6 |  = 12 > 0

| 8  6   1 |
| 6  6   0 |  = 114 > 0
| 1  0  10 |

and hence the matrix H(X) is positive definite for all real values of x1, x2, x3. Therefore, f(X) is a strictly convex function.

Page 204: Optimization

204

Convex programming problem

• When the optimization problem is stated as:

Minimize f(X) subject to gj(X) ≤ 0, j = 1, 2, …, m

it is called a convex programming problem if the objective function f(X) and the constraint functions gj(X) are convex.

• Supposing that f(X) and gj(X), j = 1, 2, …, m, are convex functions, the Lagrange function can be written as:

L(X, Y, λ) = f(X) + Σ(j=1 to m) λj [gj(X) + yj²]

Page 205: Optimization

205

Convex programming problem

• If λj ≥ 0, then λj gj(X) is convex, and since λj yj = 0 from

∂L/∂yj (X, Y, λ) = 2λj yj = 0,  j = 1, 2, …, m

L(X, Y, λ) will be a convex function.

• A necessary condition for f(X) to be a relative minimum at X* is that L(X, Y, λ) have a stationary point at X*. However, if L(X, Y, λ) is a convex function, its derivative vanishes only at one point, which must be an absolute minimum of the function f(X). Thus the Kuhn-Tucker conditions are both necessary and sufficient for an absolute minimum of f(X) at X*.

Page 206: Optimization

206

Convex programming problem

• If the given optimization problem is known to be a convex programming problem, there will be no relative minima or saddle points, and hence the extreme point found by applying the Kuhn-Tucker conditions is guaranteed to be an absolute minimum of f(X). However, it is often very difficult to ascertain whether the objective and constraint functions involved in a practical engineering problem are convex.

Page 207: Optimization

207

Linear Programming I: Simplex method

• Linear programming is an optimization method applicable for the solution of problems in which the objective function and the constraints appear as linear functions of the decision variables.

• Simplex method is the most efficient and popular method for solving general linear programming problems.

• At least four Nobel prizes were awarded for contributions related to linear programming (e.g. In 1975, Kantorovich of the former Soviet Union and T.C. Koopmans of USA were awarded for application of LP to the economic problem of allocating resources).

Page 208: Optimization

208

Linear Programming I: Simplex method - Applications

• Petroleum refineries
– choice of buying crude oil from several different sources with differing compositions and at differing prices
– manufacturing different products, such as aviation fuel, diesel fuel, and gasoline, in varying quantities
– constraints due to the restrictions on the quantity of crude oil available from a particular source and the capacity of the refinery to produce a particular product
– a mix of the purchased crude oil and the manufactured products is sought that gives the maximum profit

• Optimal production plan in a manufacturing firm
– pay overtime rates to achieve higher production during periods of higher demand

• The routing of aircraft and ships can also be decided using LP

Page 209: Optimization

209

Standard Form of a Linear Programming Problem

• Scalar form

Minimize f(x1, x2, …, xn) = c1x1 + c2x2 + … + cnxn

subject to the constraints

a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
⋮
am1x1 + am2x2 + … + amnxn = bm

x1 ≥ 0
x2 ≥ 0
⋮
xn ≥ 0

where cj, bi, and aij (i = 1, 2, …, m; j = 1, 2, …, n) are known constants, and the xj are the decision variables.

Page 210: Optimization

210

Standard Form of a Linear Programming Problem

• Matrix form

Minimize f(X) = cᵀX

subject to the constraints

aX = b

X ≥ 0

where

X = (x1, x2, …, xn)ᵀ,  b = (b1, b2, …, bm)ᵀ,  c = (c1, c2, …, cn)ᵀ

      | a11  a12 …  a1n |
a =   | a21  a22 …  a2n |
      |  ⋮              |
      | am1  am2 …  amn |

Page 211: Optimization

211

Characteristic of a Linear Programming Problem

• The objective function is of the minimization type

• All the constraints are of the equality type

• All the decision variables are nonnegative

• The number of the variables in the problem is n. This includes the slack and surplus variables.

• The number of constraints is m (m < n).

Page 212: Optimization

212

Characteristic of a Linear Programming Problem

• The number of basic variables is m (same as the number of constraints).

• The number of nonbasic variables is n-m.

• The elements of the right-hand-side column b are nonnegative.

• The calculations are organized in a table.

• Only the values of the coefficients are necessary for the calculations. The table therefore contains only coefficient values, the matrix A in previous discussions. These are the coefficients in the constraint equations.

Page 213: Optimization

213

Characteristic of a Linear Programming Problem

• The objective function is the last row in the table. The constraint coefficients are written first.

• Row operations consist of adding (subtracting) a definite multiple of the pivot row to (from) other rows of the table.

Page 214: Optimization

214

Transformation of LP Problems into Standard Form

• The maximization of a function f(x1, x2, …, xn) is equivalent to the minimization of the negative of the same function. For example, the objective function

minimize f = c1x1 + c2x2 + … + cnxn

is equivalent to

maximize f′ = −f = −c1x1 − c2x2 − … − cnxn

Consequently, the objective function can be stated in the minimization form in any linear programming problem.

Page 215: Optimization

215

Transformation of LP Problems into Standard Form

• A variable may be unrestricted in sign in some problems. In such cases, an unrestricted variable (which can take a positive, negative, or zero value) can be written as the difference of two nonnegative variables.

• Thus if xj is unrestricted in sign, it can be written as xj = xj′ − xj″, where xj′ ≥ 0 and xj″ ≥ 0.

• It can be seen that xj will be negative, zero, or positive depending on whether xj″ is greater than, equal to, or less than xj′.

Page 216: Optimization

216

Transformation of LP Problems into Standard Form

If a constraint appears in the form of a "less than or equal to" type of inequality as:

ak1x1 + ak2x2 + … + aknxn ≤ bk

it can be converted into the equality form by adding a nonnegative slack variable xn+1 as follows:

ak1x1 + ak2x2 + … + aknxn + xn+1 = bk

Page 217: Optimization

217

Transformation of LP Problems into Standard Form

If a constraint appears in the form of a "greater than or equal to" type of inequality as:

ak1x1 + ak2x2 + … + aknxn ≥ bk

it can be converted into the equality form by subtracting a nonnegative variable as:

ak1x1 + ak2x2 + … + aknxn − xn+1 = bk

where xn+1 is a nonnegative variable known as a surplus variable.

Page 218: Optimization

218

Geometry of LP Problems

Example: A manufacturing firm produces two machine parts using lathes, milling machines, and grinding machines. The different machining times required for each part, the machining times available for different machines, and the profit on each machine part are given in the following table. Determine the number of parts I and II to be manufactured per week to maximize the profit.

Type of machine      Machine time required (min)   Machine time required (min)   Maximum time available
                     Machine part I                Machine part II               per week (min)
Lathes               10                            5                             2500
Milling machines     4                             10                            2000
Grinding machines    1                             1.5                           450
Profit per unit      $50                           $100

Page 219: Optimization

219

Geometry of LP Problems

Solution: Let the numbers of machine parts I and II manufactured per week be denoted by x and y, respectively. The constraints due to the maximum time limitations on the various machines are given by:

10x + 5y ≤ 2500    (E1)

4x + 10y ≤ 2000    (E2)

x + 1.5y ≤ 450    (E3)

Since the variables x and y cannot take negative values, we have

x ≥ 0, y ≥ 0    (E4)

Page 220: Optimization

220

Geometry of LP Problems

Solution: The total profit is given by:

f(x, y) = 50x + 100y    (E5)

Thus the problem is to determine the nonnegative values of x and y that satisfy the constraints stated in Eqs. (E1) to (E3) and maximize the objective function given by Eq. (E5). The inequalities (E1) to (E4) can be plotted in the xy plane and the feasible region identified as shown in the figure. Our objective is to find at least one point out of the infinite points in the shaded region in the figure that maximizes the profit function (E5).

Page 221: Optimization

221

Geometry of LP Problems

Solution: The contours of the objective function, f, are defined by the linear equation:

50x + 100y = k = constant

As k is varied, the objective function line is moved parallel to itself. The maximum value of f is the largest k whose objective function line has at least one point in common with the feasible region. Such a point can be identified as point G in the figure. The optimum solution corresponds to a value of x* = 187.5, y* = 125, and a profit of $21,875.00.

Page 222: Optimization

222

Geometry of LP Problems

Solution: In some cases, the optimum solution may not be unique. For example, if the profit rates for the machine parts I and II are $40 and $100 instead of $50 and $100, respectively, the contours of the profit function will be parallel to side CG of the feasible region as shown in the figure. In this case, line P″Q″, which coincides with the boundary line CG, will correspond to the maximum feasible profit. Thus there is no unique optimal solution to the problem, and any point between C and G on line P″Q″ can be taken as an optimum solution with a profit value of $20,000.

Page 223: Optimization

223

Geometry of LP Problems

Solution: There are three other possibilities. In some problems, the feasible region may not be a closed convex polygon. In such a case, it may happen that the profit level can be increased to an infinitely large value without leaving the feasible region, as shown in the figure. In this case, the solution of the linear programming problem is said to be unbounded.

Page 224: Optimization

224

Geometry of LP problems

• On the other extreme, the constraint set may be empty in some problems. This could be due to the inconsistency of the constraints; or, sometimes, even though the constraints may be consistent, no point satisfying the constraints may also satisfy the nonnegativity restrictions.

• The last possible case is when the feasible region consists of a single point. This can occur only if the number of constraints is at least equal to the number of variables. A problem of this kind is of no interest to us since there is only one feasible point and there is nothing to be optimized.

Page 225: Optimization

225

Solution of LP Problems

A linear programming problem may have

• A unique and finite optimum solution

• An infinite number of optimal solutions

• An unbounded solution

• No solution

• A unique feasible point

Page 226: Optimization

226

Solution of LP Problems

A linear programming problem may have a unique and finite optimum solution:

The condition necessary for this to occur is:

• The objective function and the constraints have dissimilar slopes

• The feasible region is bounded/closed.

Page 227: Optimization

227

Solution of LP Problems

A linear programming problem may have an infinite number of optimal solutions.

The condition necessary for this to occur is:

• The objective function must be parallel to one of the constraints.

Page 228: Optimization

228

Solution of LP Problems

A linear programming problem may have no solution.

The condition necessary for this to occur is:

• There is no point that is feasible. There is no solution to the problem.

Page 229: Optimization

229

Solution of LP Problems

A linear programming problem may have unbounded solution.

• In this case, the feasible region is not bounded.

• The presence of an unbounded solution also suggests that the formulation of the problem may be lacking. Additional meaningful constraints can be accommodated to define the solution.

Page 230: Optimization

230

Geometrical characteristics of the graphical solution of LP Problems

• The feasible region is a convex polygon*

• The optimum value occurs at an extreme point or vertex of the feasible region

*A convex polygon consists of a set of points having the property that the line segment joining any two points in the set is entirely in the convex set. In problems having more than two decision variables, the feasible region is called a convex polyhedron

Page 231: Optimization

231

Definitions

• Point in n-Dimensional Space: A point X in an n-dimensional space is characterized by an ordered set of n values or coordinates (x1, x2, …, xn). The coordinates of X are also called the components of X.

• Line Segment in n Dimensions (L): If the coordinates of two points A and B are given by xj⁽¹⁾ and xj⁽²⁾ (j = 1, 2, …, n), the line segment (L) joining these points is the collection of points X(λ) whose coordinates are given by xj = λxj⁽¹⁾ + (1 − λ)xj⁽²⁾, j = 1, 2, …, n, with 0 ≤ λ ≤ 1. Thus

L = {X | X = λX⁽¹⁾ + (1 − λ)X⁽²⁾}

In one dimension (a segment with A at x⁽¹⁾ and B at x⁽²⁾; see the figure), it is easy to see that the definition is in accordance with our experience:

x(λ) − x⁽²⁾ = λ[x⁽¹⁾ − x⁽²⁾],  0 ≤ λ ≤ 1

that is,

x(λ) = λx⁽¹⁾ + (1 − λ)x⁽²⁾,  0 ≤ λ ≤ 1

Page 232: Optimization

232

Definitions

• Hyperplane: In n-dimensional space, the set of points whose coordinates satisfy a linear equation

a1x1 + … + anxn = aᵀX = b

is called a hyperplane. A hyperplane, H, is represented as H(a, b) = {X | aᵀX = b}.

A hyperplane has n−1 dimensions in an n-dimensional space. For example, in three-dimensional space it is a plane, and in two-dimensional space it is a line.

Page 233: Optimization

233

Definitions

• Hyperplane continued: The set of points whose coordinates satisfy a linear inequality like

a1x1 + … + anxn ≤ b

is called a closed half-space, closed due to the inclusion of an equality sign in the inequality above. A hyperplane partitions the n-dimensional space (Eⁿ) into two closed half-spaces, so that

H⁺ = {X | aᵀX ≥ b}

H⁻ = {X | aᵀX ≤ b}

Page 234: Optimization

234

Definitions

• Convex set: A convex set is a collection of points such that if X⁽¹⁾ and X⁽²⁾ are any two points in the collection, the line segment joining them is also in the collection. A convex set, S, can be defined mathematically as follows:

If X⁽¹⁾, X⁽²⁾ ∈ S, then X ∈ S, where

X = λX⁽¹⁾ + (1 − λ)X⁽²⁾,  0 ≤ λ ≤ 1

A set containing only one point is always considered to be convex. Some examples of convex sets in two dimensions are shown shaded.

Page 235: Optimization

235

Definitions

Nonconvex sets:

• The sets depicted by the shaded region on the figure below are not convex.

• The L-shaped region, for example, is not a convex set because it is possible to find two points a and b in the set such that not all points on the line joining them belong to the set.

Page 236: Optimization

236

Definitions

Convex Polyhedron and Convex Polytope:

• A convex polyhedron is a set of points common to one or more half-spaces.

• A convex polyhedron that is bounded is called a convex polytope.

Page 237: Optimization

237

Definitions

Vertex or Extreme Point:

• This is a point in the convex set that does not lie on a line segment joining two other points of the set, e.g.
– every point on the circumference of a circle
– each corner point of a polygon

Feasible Solution:

• In an LP problem, any solution that satisfies the constraints

aX = b

X ≥ 0

is called a feasible solution.

Page 238: Optimization

238

Definitions

Basic Solution:

• A basic solution is one in which n−m variables are set equal to zero. A basic solution can be obtained by setting n−m variables to zero and solving the constraint equations

aX = b

simultaneously. In general, for n design variables and m constraints, the number of basic solutions is given by the combination

C(n, m) = n! / [m!(n − m)!]

Basis:

• The collection of variables not set equal to zero to obtain the basic solution is called the basis.

Page 239: Optimization

239

Definitions

Basic Feasible Solution:

• This is a basic solution that satisfies the nonnegativity conditions X ≥ 0.

Nondegenerate Feasible Solution:

• This is a basic feasible solution that has exactly m positive xi.

Optimal Solution:

• A feasible solution that optimizes (minimizes) the objective function is called an optimal solution.

Optimal Basic Solution:

• This is a basic feasible solution for which the objective function is optimal.

Page 240: Optimization

240

Theorems

Theorem 1:

• The intersection of any number of convex sets is also convex

Theorem 2:

• The feasible region of a linear programming problem is convex

Page 241: Optimization

241

Theorems

Theorem 3:

• Any local minimum solution is global for a linear programming problem

Page 242: Optimization

242

Theorems

Theorem 4:

• Every basic feasible solution is an extreme point of the convex set of feasible solutions

Theorem 5:

• Let S be a closed, bounded convex polyhedron with Xiᵉ, i = 1 to p, as the set of its extreme points. Then any vector X Є S can be written as:

X = Σ(i=1 to p) λi Xiᵉ,  with λi ≥ 0 and Σ(i=1 to p) λi = 1

Theorem 6:

• Let S be a closed convex polyhedron. Then the minimum of a linear function over S is attained at an extreme point of S.

Page 243: Optimization

243

Solution of a system of linear simultaneous equations

• Consider the following system of n equations in n unknowns:

a11x1 + a12x2 + … + a1nxn = b1    (E1)
a21x1 + a22x2 + … + a2nxn = b2    (E2)
a31x1 + a32x2 + … + a3nxn = b3    (E3)
⋮
an1x1 + an2x2 + … + annxn = bn    (En)

• Assuming that this set of equations possesses a unique solution, a method of solving the system consists of reducing the equations to a form known as canonical form.

Page 244: Optimization

244

Solution of a system of linear simultaneous equations

• The system of equations (E1) to (En) above will not be altered under the following elementary operations:
– Any equation Er is replaced by the equation kEr, where k is a nonzero constant.
– Any equation Er is replaced by the equation Er + kEs, where Es is any other equation of the system.

Page 245: Optimization

245

Solution of a system of linear simultaneous equations

Pivot operations

• By making use of the elementary operations, the system can be reduced to a convenient equivalent form as follows. Let us select some variable xi and try to eliminate it from all the equations except the jth one (for which aji is nonzero). This can be accomplished by dividing the jth equation by aji and subtracting aki times the result from each of the other equations, k = 1, 2, …, j−1, j+1, …, n. The resulting system of equations can be written as:

a′11x1 + a′12x2 + … + a′1,i−1xi−1 + 0xi + a′1,i+1xi+1 + … + a′1nxn = b′1
⋮
a′j−1,1x1 + … + a′j−1,i−1xi−1 + 0xi + a′j−1,i+1xi+1 + … + a′j−1,nxn = b′j−1
a′j1x1 + … + a′j,i−1xi−1 + 1xi + a′j,i+1xi+1 + … + a′jnxn = b′j
a′j+1,1x1 + … + a′j+1,i−1xi−1 + 0xi + a′j+1,i+1xi+1 + … + a′j+1,nxn = b′j+1
⋮
a′n1x1 + … + a′n,i−1xi−1 + 0xi + a′n,i+1xi+1 + … + a′nnxn = b′n

where the primes indicate that the a′ij and b′j are changed from the original system. This procedure of eliminating a particular variable from all but one of the equations is called a pivot operation.

Page 246: Optimization

246

Solution of a system of linear simultaneous equations

• The system of equations produced by the pivot operation has exactly the same solution as the original set (E1) to (En).

Page 247: Optimization

247

Solution of linear simultaneous equations - Canonical form

• Next, if we perform a new pivot operation by eliminating xs, s ≠ i, in all the equations except the tth equation, t ≠ j, the zeros and the 1 in the ith column will not be disturbed. The pivot operations can be repeated by using a different variable and equation each time until the system of equations is reduced to the form:

1x1 + 0x2 + 0x3 + … + 0xn = b1″
0x1 + 1x2 + 0x3 + … + 0xn = b2″
0x1 + 0x2 + 1x3 + … + 0xn = b3″
⋮
0x1 + 0x2 + 0x3 + … + 1xn = bn″

This system of equations is said to be in canonical form and has been obtained after carrying out n pivot operations.

Page 248: Optimization

248

Solution of linear simultaneous equations - Canonical form

• From the canonical form, the solution vector can be directly obtained as:

xi = bi″,  i = 1, 2, …, n

• This is the desired solution of the original system of equations (E1) to (En).

Page 249: Optimization

249

Pivotal reduction of a general system of equations

• Instead of a square system, let us consider a system of m equations in n variables with n ≥ m. This system of equations is assumed to be consistent so that it will have at least one solution:

a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
⋮
am1x1 + am2x2 + … + amnxn = bm

• The solution vectors X that satisfy the above equations are not evident from the equations. However, it is possible to reduce this system to an equivalent canonical system from which at least one solution can readily be deduced.

Page 250: Optimization

250

Pivotal reduction of a general system of equations

• If pivot operations with respect to any set of m variables, say x1, x2, …, xm, are carried out, the resulting set of equations can be written as follows (canonical system with pivotal variables x1, x2, …, xm):

1x1 + 0x2 + … + 0xm + a″1,m+1 xm+1 + … + a″1n xn = b″1
0x1 + 1x2 + … + 0xm + a″2,m+1 xm+1 + … + a″2n xn = b″2
⋮
0x1 + 0x2 + … + 1xm + a″m,m+1 xm+1 + … + a″mn xn = b″m

(pivotal variables | nonpivotal or independent variables | constants)

• One special solution that can always be deduced is:

xi = b″i,  i = 1, 2, …, m
xi = 0,  i = m+1, m+2, …, n

• This solution is called a basic solution, since the solution vector contains no more than m nonzero terms.

251

Pivotal reduction of a general system of equations

• One special solution that can always be deduced from the canonical system is:

xi = b″i,  i = 1, 2, …, m
xi = 0,  i = m+1, m+2, …, n

• The pivotal variables xi, i = 1, 2, …, m, are called the basic variables, and the other variables xi, i = m+1, m+2, …, n, are called the nonbasic variables.

• This is not the only solution, but it is the one most readily deduced from the canonical form.

Page 252: Optimization

252

Pivotal reduction of a general system of equations cont’d

If all b″i, i = 1, 2, …, m, in the solution given by

xi = b″i,  i = 1, 2, …, m
xi = 0,  i = m+1, m+2, …, n

are nonnegative, it satisfies the constraints

aX = b,  X ≥ 0

and hence it can be called a basic feasible solution.

Page 253: Optimization

253

Motivation of the simplex method

• Given a system in canonical form corresponding to a basic solution, we have seen how to move to a neighboring basic solution by a pivot operation

• Thus one way to find the optimal solution of the given LP problem is to generate all the basic solutions and pick the one that is feasible and corresponds to the optimal value of the objective function

• This can be done because the optimal solution, if one exists, always occurs at an extreme point or vertex of the feasible domain.

Page 254: Optimization

254

Motivation of the simplex method

• If there are m equality constraints in n variables with n ≥ m, a basic solution can be obtained by setting any n−m of the variables equal to zero.

• The number of basic solutions to be inspected is thus equal to the number of ways in which m variables can be selected from a set of n variables, that is,

C(n, m) = n! / [(n − m)! m!]

• e.g. If n = 10 and m = 5, we have 252 basic solutions;

if n = 20 and m = 10, we have 184,756 basic solutions.

Page 255: Optimization

255

Motivation of the simplex method (contd)

• Usually we do not have to inspect all these basic solutions since many of them will be infeasible

• However, for large values of n and m , this is still a very large number to inspect one by one

• Hence, what we really need is a computational scheme that examines a sequence of basic feasible solutions, each of which corresponds to a lower value of f, until a minimum is reached.

• The simplex method of Dantzig is a powerful scheme for obtaining a basic feasible solution.

Page 256: Optimization

256

Motivation of the simplex method (contd)

• The simplex method of Dantzig is a powerful scheme for obtaining a basic feasible solution.

• If the solution is not optimal, the method provides for finding a neighboring basic feasible solution that has a lower or equal value of f .

• The process is repeated until, in a finite number of steps, an optimum is found.

Page 257: Optimization

257

Motivation of the simplex method (contd)

• The first step involved in the simplex method is to construct an auxiliary problem by introducing certain variables known as artificial variables into the standard form of the linear programming problem.

• The primary aim of adding artificial variables is to bring the resulting auxiliary problem into a canonical form from which its basic feasible solution can be obtained immediately.

• Starting from this canonical form, the optimal solution of the original linear programming problem is sought in two phases.

Page 258: Optimization

258

Motivation of the simplex method (contd)

• The first phase is intended to find a basic feasible solution to the original linear programming problem.
– It consists of a sequence of pivot operations that produces a succession of different canonical forms from which the optimal solution of the auxiliary problem can be found.

• The second phase is intended to find the optimal solution of the original LP problem.
– It consists of a second sequence of pivot operations that enables us to move from one basic feasible solution to the next of the original LP problem.

• In this process, the optimal solution of the problem, if one exists, will be identified. The sequence of different canonical forms that is necessary in both phases of the simplex method is generated according to the simplex algorithm.

Page 259: Optimization

259

Simplex Algorithm

• The starting point of the simplex algorithm is always a set of equations, which includes the objective function along with the equality constraints of the problem, in canonical form.

• Thus the objective of the simplex algorithm is to find the vector X ≥ 0 that minimizes the function f(X) and satisfies the equations:

1x1 + 0x2 + … + 0xm + a″1,m+1 xm+1 + … + a″1n xn = b″1
0x1 + 1x2 + … + 0xm + a″2,m+1 xm+1 + … + a″2n xn = b″2
⋮
0x1 + 0x2 + … + 1xm + a″m,m+1 xm+1 + … + a″mn xn = b″m
0x1 + 0x2 + … + 0xm − f + c″m+1 xm+1 + … + c″n xn = −f0″

where a″ij, c″j, b″i, and f0″ are constants. Notice that (−f) is treated as a basic variable in the canonical form.

Page 260: Optimization

260

Simplex Algorithm

• The basic solution that can readily be deduced is:

xi = b″i,  i = 1, 2, …, m
f = f0″
xi = 0,  i = m+1, m+2, …, n

• In Phase I of the simplex method, the basic solution corresponding to the canonical form obtained after the introduction of the artificial variables will be feasible for the auxiliary problem.

• Phase II of the simplex method starts with a basic feasible solution of the original linear programming problem. Hence the initial canonical form at the start of the simplex algorithm will always be a basic feasible solution.

Page 261: Optimization

261

Simplex Algorithm

• We know from the theorem 'Let S be a closed convex polyhedron. Then the minimum of a linear function over S is attained at an extreme point of S' that the optimal solution of a linear programming problem lies at one of the basic feasible solutions.

• Since the simplex algorithm is intended to move from one basic feasible solution to another through pivot operations, before moving to the next basic feasible solution we have to make sure that the present basic feasible solution is not the optimal solution.

• By merely glancing at the numbers c″j, j = 1, 2, …, n, we can tell whether or not the present basic feasible solution is optimal.

Page 262: Optimization

262

Simplex Algorithm

Identifying an optimal point:

• Theorem: A basic feasible solution is an optimal solution with a minimum objective function value of f0″ if all the cost coefficients c″j, j = m+1, m+2, …, n, in the canonical form above are nonnegative.

Page 263: Optimization

263

Improving a nonoptimal basic feasible solution

• From the last row of the canonical equations, we can write the objective function as:

f = f0″ + Σ(j=m+1 to n) c″j xj = f0″

for the basic solution given by

xi = b″i,  i = 1, 2, …, m
xi = 0,  i = m+1, m+2, …, n

Page 264: Optimization

264

Improving a nonoptimal basic feasible solution

• If at least one c″j is negative, the value of f can be reduced by making the corresponding xj > 0.

• In other words, the nonbasic variable xj for which the cost coefficient c″j is negative is to be made a basic variable in order to reduce the value of the objective function.

• At the same time, due to the pivot operation, one of the current basic variables will become nonbasic, and hence the values of the new basic variables are to be adjusted in order to bring the value of f to less than f0″.

Page 265: Optimization

265

Improving a nonoptimal basic feasible solution

• If there is more than one c″j < 0, the index s of the nonbasic variable xs which is to be made basic is chosen such that

c″s = minimum(c″j < 0)

• Although this may not lead to the greatest possible decrease in f (since it may not be possible to increase xs very far), this is intuitively at least a good rule for choosing the variable to become basic.

• It is the one generally used in practice because it is simple and it usually leads to fewer iterations than just choosing any c″j < 0.

• If there is a tie in applying the above rule (i.e., if more than one c″j has the same minimum value), we select one of them arbitrarily as c″s.

Page 266: Optimization

266

Improving a nonoptimal basic feasible solution

• Having decided on the variable xs to become basic, we increase it from zero, holding all other nonbasic variables zero, and observe the effect on the current basic variables. From the canonical equations we obtain:

x1 = b″1 − a″1s xs,  b″1 ≥ 0
x2 = b″2 − a″2s xs,  b″2 ≥ 0
⋮
xm = b″m − a″ms xs,  b″m ≥ 0

f = f0″ + c″s xs,  c″s < 0

Page 267: Optimization

267

Improving a nonoptimal basic feasible solution

• Since c″s < 0, the equation

f = f0″ + c″s xs

suggests that the value of xs should be made as large as possible in order to reduce the value of f as much as possible.

• However, in the process of increasing the value of xs, some of the variables xi (i = 1, 2, …, m) in the equations

xi = b″i − a″is xs,  i = 1, 2, …, m

may become negative.

Page 268: Optimization

268

Improving a nonoptimal basic feasible solution

• If all the coefficients a″is ≤ 0, then xs can be made infinitely large without making any xi < 0, i = 1, 2, …, m.

• In such a case, the minimum value of f is minus infinity and the linear programming problem is said to have an unbounded solution.

• If at least one a″is is positive, the maximum value that xs can take without making xi negative is b″i / a″is.

• If there is more than one a″is > 0, the largest value xs* that xs can take is given by the minimum of the ratios b″i / a″is for which a″is > 0. Thus,

xs* = b″r / a″rs = minimum over a″is > 0 of (b″i / a″is)

Page 269: Optimization

269

Improving a nonoptimal basic feasible solution

• The choice of r in the case of a tie, assuming that all b″i > 0, is arbitrary.

• If any b″i for which a″is > 0 is zero in the equations

xi = b″i − a″is xs,  i = 1, 2, …, m

then xs cannot be increased by any amount. Such a solution is called a degenerate solution.

Page 270: Optimization

270

Improving a nonoptimal basic feasible solution

• In the case of a nondegenerate basic feasible solution, a new basic feasible solution can be constructed with a lower value of the objective function, by substituting the value of xs* given by

xs* = b″r / a″rs = minimum over a″is > 0 of (b″i / a″is)

into

xi = b″i − a″is xs,  i = 1, 2, …, m

f = f0″ + c″s xs

Page 271: Optimization

271

Improving a nonoptimal basic feasible solution (contd)

• We obtain

xs = xs*

xi = b″i − a″is xs* ≥ 0,  i = 1, 2, …, m and i ≠ r

xr = 0

xj = 0,  j = m+1, m+2, …, n and j ≠ s

f = f0″ + c″s xs* ≤ f0″

which can readily be seen to be a feasible solution different from the previous one.

Page 272: Optimization

272

Improving a nonoptimal basic feasible solution (contd)

• Since a″rs > 0 in

xs* = b″r / a″rs

a single pivot operation on the element a″rs in the canonical system of equations will lead to a new canonical form from which the new basic feasible solution can easily be deduced.

Page 273: Optimization

273

Improving a nonoptimal basic feasible solution (contd)

• Also, the relation

f = f0″ + c″s xs* ≤ f0″

shows that this basic feasible solution corresponds to a lower objective function value compared to that of the previous basic solution.

• This basic feasible solution can again be tested for optimality by seeing whether all c″j ≥ 0 in the new canonical form.

• If the solution is not optimal, the entire procedure of moving to another basic feasible solution from the present one has to be repeated.

Page 274: Optimization

274

Improving a nonoptimal basic feasible solution (contd)

• In the simplex algorithm, this procedure is repeated iteratively until the algorithm finds either

1. a class of feasible solutions for which f → −∞, or

2. an optimal basic feasible solution with all c″j ≥ 0, j = 1, 2, …, n.

• Since there are only a finite number of ways to choose a set of m basic variables out of n variables, the iterative process of the simplex algorithm will terminate in a finite number of cycles.

Page 275: Optimization

275

Example

Maximize F = x1 + 2x2 + x3

subject to

2x1 + x2 − x3 ≤ 2

−2x1 + x2 − 5x3 ≥ −6

4x1 + x2 + x3 ≤ 6

xi ≥ 0,  i = 1, 2, 3

Page 276: Optimization

276

Example

Solution: We first change the sign of the objective function to convert the problem to a minimization problem, and change the signs of the inequalities where necessary so as to obtain nonnegative values of bi (to see whether an initial basic feasible solution can be obtained readily). The resulting problem can be stated as:

Minimize f = −x1 − 2x2 − x3

subject to

2x1 + x2 − x3 ≤ 2

2x1 − x2 + 5x3 ≤ 6

4x1 + x2 + x3 ≤ 6

xi ≥ 0,  i = 1, 2, 3

Page 277: Optimization

277

Example

Solution cont'd: By introducing the slack variables x4 ≥ 0, x5 ≥ 0, and x6 ≥ 0, the system of equations can be stated in canonical form as:

$$2x_1 + x_2 - x_3 + x_4 = 2$$
$$2x_1 - x_2 + 5x_3 + x_5 = 6 \qquad (E_1)$$
$$4x_1 + x_2 + x_3 + x_6 = 6$$
$$-x_1 - 2x_2 - x_3 - f = 0$$

where $x_4$, $x_5$, $x_6$, and $-f$ can be treated as basic variables.
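For illustration, Eqs. (E1) can be packed into a numeric tableau suitable for the simplex_iterate sketch given earlier (the variable name T and the layout, with the cost row last and no separate -f column, are assumptions):

```matlab
% Tableau for Eqs. (E1): columns x1..x6, last column = RHS.
T = [ 2   1  -1   1   0   0   2 ;    % 2x1 +  x2 -  x3 + x4 = 2   (basic: x4)
      2  -1   5   0   1   0   6 ;    % 2x1 -  x2 + 5x3 + x5 = 6   (basic: x5)
      4   1   1   0   0   1   6 ;    % 4x1 +  x2 +  x3 + x6 = 6   (basic: x6)
     -1  -2  -1   0   0   0   0 ];   % cost row: f = -x1 - 2x2 - x3
```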

Page 278: Optimization

278

Example

Solution cont'd: The basic solution corresponding to Eqs. $(E_1)$ is given by

$$x_4 = 2, \quad x_5 = 6, \quad x_6 = 6 \quad \text{(basic variables)}$$
$$x_1 = x_2 = x_3 = 0 \quad \text{(nonbasic variables)} \qquad (E_2)$$
$$f = 0,$$

which can be seen to be feasible.

Since the cost coefficients corresponding to the nonbasic variables in Eqs. $(E_1)$ are negative ($c_1'' = -1$, $c_2'' = -2$, $c_3'' = -1$), the present solution given by Eqs. $(E_2)$ is not optimum.

Page 279: Optimization

279

Example

Solution cont'd: To improve the present basic feasible solution, we first decide which variable $x_s$ is to be brought into the basis:

$$c_s'' = \min_j(c_j'' < 0) = c_2'' = -2$$

Thus $x_2$ enters the next basic set. To obtain the new canonical form, we select the pivot element $a_{rs}''$ such that

$$\frac{b_r''}{a_{rs}''} = \min_{a_{is}'' > 0}\left(\frac{b_i''}{a_{is}''}\right)$$

Page 280: Optimization

280

Example

Solution cont'd: In the present case, $s = 2$, and $a_{12}''$ and $a_{32}''$ are $> 0$. Since $b_1''/a_{12}'' = 2/1$ and $b_3''/a_{32}'' = 6/1$, the minimum occurs in row $r = 1$, so $x_4$ drops from the basis. By pivoting on $a_{12}''$, the new system of equations is obtained as:

$$2x_1 + x_2 - x_3 + x_4 = 2$$
$$4x_1 + 4x_3 + x_4 + x_5 = 8 \qquad (E_3)$$
$$2x_1 + 2x_3 - x_4 + x_6 = 4$$
$$3x_1 - 3x_3 + 2x_4 - f = 4$$

Page 281: Optimization

281

Example

Solution cont'd: The basic feasible solution corresponding to this canonical system is given by

$$x_2 = 2, \quad x_5 = 8, \quad x_6 = 4 \quad \text{(basic variables)}$$
$$x_1 = x_3 = x_4 = 0 \quad \text{(nonbasic variables)} \qquad (E_4)$$
$$f = -4$$

Since $c_3'' = -3$, the present solution is not optimum. As $c_s'' = \min(c_i'' < 0) = c_3''$, $x_s = x_3$ enters the next basis.

Page 282: Optimization

282

Example

Solution cont'd: To find the pivot element $a_{rs}''$, we find the ratios $b_i''/a_{is}'' > 0$. In the system $(E_3)$

$$2x_1 + x_2 - x_3 + x_4 = 2$$
$$4x_1 + 4x_3 + x_4 + x_5 = 8 \qquad (E_3)$$
$$2x_1 + 2x_3 - x_4 + x_6 = 4$$
$$3x_1 - 3x_3 + 2x_4 - f = 4$$

only $a_{23}''$ and $a_{33}''$ are positive. Hence:

$$\frac{b_2''}{a_{23}''} = \frac{8}{4} = 2 \quad \text{and} \quad \frac{b_3''}{a_{33}''} = \frac{4}{2} = 2$$

Page 283: Optimization

283

Example

Solution cont'd: Since both of these ratios are the same, we arbitrarily select $a_{23}''$ as the pivot element. Pivoting on $a_{23}''$ gives the following canonical system of equations:

$$3x_1 + x_2 + \tfrac{5}{4}x_4 + \tfrac{1}{4}x_5 = 4$$
$$x_1 + x_3 + \tfrac{1}{4}x_4 + \tfrac{1}{4}x_5 = 2 \qquad (E_5)$$
$$-\tfrac{3}{2}x_4 - \tfrac{1}{2}x_5 + x_6 = 0$$
$$6x_1 + \tfrac{11}{4}x_4 + \tfrac{3}{4}x_5 - f = 10$$

The basic feasible solution corresponding to this canonical form is

$$x_2 = 4, \quad x_3 = 2, \quad x_6 = 0 \quad \text{(basic variables)}$$
$$x_1 = x_4 = x_5 = 0 \quad \text{(nonbasic variables)} \qquad (E_6)$$
$$f = -10$$

Page 284: Optimization

284

Example

Basic       x1   x2   x3   x4   x5   x6   -f  | bi'' | bi''/ais'' (ais'' > 0)
x4           2    1   -1    1    0    0    0  |   2  |  2  <- smaller value (x4 drops from the next basis)
x5           2   -1    5    0    1    0    0  |   6  |
x6           4    1    1    0    0    1    0  |   6  |  6
-f          -1   -2   -1    0    0    0    1  |   0  |

Most negative ci'': c2'' = -2, so x2 enters the next basis; the pivot element is a12'' = 1.

Page 285: Optimization

285

Example: Result of pivoting

Basic       x1   x2   x3   x4   x5   x6   -f  | bi'' | bi''/ais'' (ais'' > 0)
x2           2    1   -1    1    0    0    0  |   2  |
x5           4    0    4    1    1    0    0  |   8  |  2  <- selected arbitrarily (x5 drops from the next basis)
x6           2    0    2   -1    0    1    0  |   4  |  2
-f           3    0   -3    2    0    0    1  |   4  |

Most negative ci'': c3'' = -3, so x3 enters the next basis; the tie in the ratios is broken arbitrarily, and the pivot element is a23'' = 4.

Page 286: Optimization

286

Example: Result of pivoting

Basic       x1   x2   x3    x4     x5   x6   -f  | bi''
x2           3    1    0    5/4    1/4   0    0  |   4
x3           1    0    1    1/4    1/4   0    0  |   2
x6           0    0    0   -3/2   -1/2   1    0  |   0
-f           6    0    0   11/4    3/4   0    1  |  10

All ci'' are ≥ 0, and hence the present solution is optimum: x1 = 0, x2 = 4, x3 = 2, with f_min = -10 (i.e., F_max = 10).
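Running the simplex_iterate sketch from earlier on the tableau T built for Eqs. (E1) should reproduce exactly these tableaux; a hedged usage sketch:

```matlab
[T, status] = simplex_iterate(T);
disp(status)          % expected: optimal
disp(T(end, :))       % expected cost row: 6 0 0 11/4 3/4 0 10
disp(-T(end, end))    % f_min = -10, i.e. F_max = 10 at x1 = 0, x2 = 4, x3 = 2
```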

Page 287: Optimization

287

Example 2: Unbounded solution

Minimize $f = -3x_1 - 2x_2$

subject to

$$x_1 - x_2 \le 1$$
$$3x_1 - 2x_2 \le 6$$
$$x_1 \ge 0, \quad x_2 \ge 0$$

Page 288: Optimization

288

Example 2: Unbounded solution

Solution: Introducing the slack variables $x_3 \ge 0$ and $x_4 \ge 0$, the given system of equations can be written in canonical form as:

$$x_1 - x_2 + x_3 = 1$$
$$3x_1 - 2x_2 + x_4 = 6 \qquad (E_1)$$
$$-3x_1 - 2x_2 - f = 0$$

The basic feasible solution corresponding to this canonical form is given by:

$$x_3 = 1, \quad x_4 = 6 \quad \text{(basic variables)}$$
$$x_1 = x_2 = 0 \quad \text{(nonbasic variables)} \qquad (E_2)$$
$$f = 0$$


Page 290: Optimization

290

Example 2: Unbounded solution

Solution: Since the cost coefficients corresponding to the nonbasic variables are negative, the solution given by Eqs. (E2) is not optimum. Hence the simplex procedure is applied to the canonical system of Eqs. (E1), starting from the solution given by Eqs. (E2). The computations are done in tableau form as shown in the next slides.

Page 291: Optimization

291

Example 2: Unbounded solution

Basic       x1   x2   x3   x4   -f  | bi'' | bi''/ais'' (ais'' > 0)
x3           1   -1    1    0    0  |   1  |  1  <- smaller value (x3 leaves the basis)
x4           3   -2    0    1    0  |   6  |  2
-f          -3   -2    0    0    1  |   0  |

Most negative ci'': c1'' = -3, so x1 enters the next basis; the pivot element is a11'' = 1.

Page 292: Optimization

292

Example 2: Results of pivoting

Basic       x1   x2   x3   x4   -f  | bi'' | bi''/ais'' (ais'' > 0)
x1           1   -1    1    0    0  |   1  |
x4           0    1   -3    1    0  |   3  |  3  <- smaller value (x4 leaves the basis)
-f           0   -5    3    0    1  |   3  |

Most negative ci'': c2'' = -5, so x2 enters the next basis; the pivot element is a22'' = 1.

Page 293: Optimization

293

Example 2: Results of pivoting

Basic       x1   x2   x3   x4   -f  | bi''
x1           1    0   -2    1    0  |   4
x2           0    1   -3    1    0  |   3
-f           0    0  -12    5    1  |  18

Most negative ci'': c3'' = -12, so x3 should enter the next basis; however, both ai3'' are negative (i.e., no variable leaves the basis).

Page 294: Optimization

294

Example 2: Unbounded solution

• At this stage, we notice that x3 has the most negative cost coefficient and hence it should be brought into the next basis.

• However, since all the coefficients ai3” are negative, the value of f can be decreased indefinitely without violating any of the constraints if we bring x3 into the basis. Hence the problem has no bounded solution.

• In general, if all the coefficients of the entering variable xs (ais”) have negative or zero values at any iteration, we can conclude that the problem has an unbounded solution.
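This termination test is the all(col <= 0) check in the earlier simplex_iterate sketch. Applied to Example 2 (with the same assumed tableau layout), it flags unboundedness on the third iteration:

```matlab
% Example 2 tableau: columns x1 x2 x3 x4, last column = RHS.
T2 = [ 1  -1   1   0   1 ;     %  x1 -  x2 + x3 = 1   (basic: x3)
       3  -2   0   1   6 ;     % 3x1 - 2x2 + x4 = 6   (basic: x4)
      -3  -2   0   0   0 ];    % cost row: f = -3x1 - 2x2
[~, status] = simplex_iterate(T2);
disp(status)   % expected: unbounded (all a_i3'' <= 0 when x3 tries to enter)
```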

Page 295: Optimization

295

Example 3: Infinite number of solutions

Minimize $f = -40x_1 - 100x_2$

subject to

$$10x_1 + 5x_2 \le 2500$$
$$4x_1 + 10x_2 \le 2000$$
$$2x_1 + 3x_2 \le 900$$
$$x_1 \ge 0, \quad x_2 \ge 0$$

Page 296: Optimization

296

Example 3: Infinite number of solutions

Solution: By adding the slack variables $x_3 \ge 0$, $x_4 \ge 0$, and $x_5 \ge 0$, the equations can be written in canonical form as follows:

$$10x_1 + 5x_2 + x_3 = 2500$$
$$4x_1 + 10x_2 + x_4 = 2000$$
$$2x_1 + 3x_2 + x_5 = 900$$
$$-40x_1 - 100x_2 - f = 0$$

Page 297: Optimization

297

Example 3: Infinite number of solutions

Basic       x1     x2   x3   x4   x5   -f  | bi''  | bi''/ais'' (ais'' > 0)
x3          10      5    1    0    0    0  | 2500  | 500
x4           4     10    0    1    0    0  | 2000  | 200  <- smaller value (x4 leaves the basis)
x5           2      3    0    0    1    0  |  900  | 300
-f         -40   -100    0    0    0    1  |    0  |

Most negative ci'': c2'' = -100, so x2 enters the next basis; the pivot element is a22'' = 10.

Page 298: Optimization

298

Example 3: Infinite number of solutions

Basic       x1     x2   x3    x4     x5   -f  | bi''
x3           8      0    1   -1/2     0    0  | 1500
x2         4/10     1    0    1/10    0    0  |  200
x5         8/10     0    0   -3/10    1    0  |  300
-f           0      0    0   10       0    1  | 20,000

Page 299: Optimization

299

Example 3: Infinite number of solutions

• Since all $c_i'' \ge 0$, the present solution is optimum. The optimum values are given by

$$x_2 = 200, \quad x_3 = 1500, \quad x_5 = 300 \quad \text{(basic variables)}$$
$$x_1 = x_4 = 0 \quad \text{(nonbasic variables)}$$
$$f_{\min} = -20{,}000$$

• It can be observed from the last row of the preceding tableau that the cost coefficient corresponding to the nonbasic variable $x_1$ ($c_1''$) is zero. This is an indication that an alternative optimal solution exists.

• Here $x_1$ can be brought into the basis, and the resulting new solution will also be a basic feasible solution. For example, introducing $x_1$ into the basis in place of $x_3$ (i.e., by pivoting on $a_{13}''$), we obtain the new canonical system of equations shown in the following tableau:

Page 300: Optimization

300

Example 3: Infinite number of solutions

Basic       x1   x2    x3      x4     x5   -f  | bi''
x1           1    0    1/8   -1/16     0    0  | 1500/8
x2           0    1   -1/20    1/8     0    0  | 125
x5           0    0   -1/10   -1/4     1    0  | 150
-f           0    0    0       10      0    1  | 20,000

Page 301: Optimization

301

Example 3: Infinite number of solutions

• The solution corresponding to this canonical form is given by

$$x_1 = \tfrac{1500}{8}, \quad x_2 = 125, \quad x_5 = 150 \quad \text{(basic variables)}$$
$$x_3 = x_4 = 0 \quad \text{(nonbasic variables)}$$
$$f_{\min} = -20{,}000$$

• The value of $f$ has not changed from the preceding value, since $x_1$ had a zero cost coefficient in the last row of the preceding tableau. The two basic (optimal) feasible solutions are therefore

$$\mathbf{X}_1 = \begin{pmatrix} 0 \\ 200 \\ 1500 \\ 0 \\ 300 \end{pmatrix} \quad \text{and} \quad \mathbf{X}_2 = \begin{pmatrix} 1500/8 \\ 125 \\ 0 \\ 0 \\ 150 \end{pmatrix}$$

Page 302: Optimization

302

Example 3: Infinite number of solutions

• An infinite number of nonbasic (optimal) feasible solutions can be obtained by taking any weighted average of the two solutions:

$$\mathbf{X}^* = \lambda \mathbf{X}_1 + (1-\lambda)\mathbf{X}_2, \qquad 0 \le \lambda \le 1,$$

that is,

$$x_1^* = (1-\lambda)\,\tfrac{1500}{8}, \qquad x_2^* = 200\lambda + 125(1-\lambda) = 125 + 75\lambda,$$
$$x_3^* = 1500\lambda, \qquad x_4^* = 0, \qquad x_5^* = 300\lambda + 150(1-\lambda) = 150 + 150\lambda$$

• It can be verified that the solution $\mathbf{X}^*$ always gives the same value of $-20{,}000$ for $f$, for all $0 \le \lambda \le 1$.
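This is easy to check numerically; a short MATLAB sketch, with the cost vector and the two vertices taken from the slides (the names c, X1, X2, lam are illustrative):

```matlab
c  = [-40 -100 0 0 0];             % costs, including zero costs for slacks
X1 = [0; 200; 1500; 0; 300];       % first optimal vertex
X2 = [1500/8; 125; 0; 0; 150];     % second optimal vertex
for lam = 0:0.25:1
    Xstar = lam*X1 + (1 - lam)*X2; % convex combination of the two vertices
    fprintf('lambda = %4.2f   f = %g\n', lam, c*Xstar);  % always -20000
end
```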

Page 303: Optimization

303

Two Phases of the Simplex Method

• The problem is to find nonnegative values for the variables $x_1, x_2, \ldots, x_n$ that satisfy the equations

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$$

and minimize the objective function given by

$$c_1x_1 + c_2x_2 + \cdots + c_nx_n = f$$

Page 304: Optimization

304

Two Phases of the Simplex Method

• Two difficulties are commonly encountered in solving this problem:

– An initial feasible canonical form may not be readily available. This is the case when the linear programming problem does not have slack variables for some of the equations or when the slack variables have negative coefficients.

– The problem may have redundancies and/or inconsistencies, and may not be solvable in nonnegative numbers.

• The two-phase simplex method can be used to solve the problem.

Page 305: Optimization

305

Two Phases of the Simplex Method

• Phase I of the simplex method uses the simplex algorithm itself to find whether the linear programming problem has a feasible solution. If a feasible solution exists, it provides a basic feasible solution in canonical form, ready to initiate Phase II of the method.

• Phase II, in turn, uses the simplex algorithm to find whether the problem has a bounded optimum. If a bounded optimum exists, it finds the basic feasible solution that is optimal.

Page 306: Optimization

306

Two Phases of the Simplex Method

The simplex method is described in the following steps:

1. Arrange the original system of equations

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$$

so that all the constant terms $b_i$ are positive or zero, by changing, where necessary, the signs on both sides of the equations.

Page 307: Optimization

307

Two Phases of the Simplex Method

2. Introduce to this system a set of artificial variables $y_1, y_2, \ldots, y_m$ (which serve as basic variables in Phase I), where each $y_i \ge 0$, so that it becomes

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + y_1 = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n + y_2 = b_2$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + y_m = b_m, \qquad b_i \ge 0$$

The objective function can be written as

$$c_1x_1 + c_2x_2 + \cdots + c_nx_n + (-f) = 0$$

Page 308: Optimization

308

Two Phases of the Simplex Method

3. Phase I of the method: Define a quantity $w$ as the sum of the artificial variables,

$$w = y_1 + y_2 + \cdots + y_m,$$

and use the simplex algorithm to find $x_i \ge 0$ ($i = 1, 2, \ldots, n$) and $y_i \ge 0$ ($i = 1, 2, \ldots, m$) which minimize $w$ and satisfy the equations

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + y_1 = b_1$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + y_m = b_m$$
$$c_1x_1 + c_2x_2 + \cdots + c_nx_n + (-f) = 0$$

Page 309: Optimization

309

Two Phases of the Simplex Method

3. Phase I of the method cont'd: Consequently, consider the array

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + y_1 = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n + y_2 = b_2$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + y_m = b_m$$
$$c_1x_1 + c_2x_2 + \cdots + c_nx_n + (-f) = 0$$
$$y_1 + y_2 + \cdots + y_m + (-w) = 0$$

Page 310: Optimization

310

Two Phases of the Simplex Method

3. Phase I of the method cont'd: This array is not in canonical form; however, it can be rewritten as a canonical system with basic variables $y_1, y_2, \ldots, y_m$, $-f$, and $-w$ by subtracting the sum of the first $m$ equations from the last, to obtain the new system, which provides the initial basic solution necessary for starting Phase I:

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + y_1 = b_1$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + y_m = b_m$$
$$c_1x_1 + c_2x_2 + \cdots + c_nx_n + (-f) = 0$$
$$d_1x_1 + d_2x_2 + \cdots + d_nx_n + (-w) = -w_0$$

where

$$d_i = -(a_{1i} + a_{2i} + \cdots + a_{mi}), \quad i = 1, 2, \ldots, n$$
$$-w_0 = -(b_1 + b_2 + \cdots + b_m)$$

Page 311: Optimization

311

Two Phases of the Simplex Method

4. $w$ is called the infeasibility form, and it has the following property: if, as a result of Phase I, the minimum of $w$ is greater than zero, then no feasible solution exists for the original linear programming problem

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$$
$$c_1x_1 + c_2x_2 + \cdots + c_nx_n + (-f) = 0$$

and the procedure is terminated.

Page 312: Optimization

312

Two Phases of the Simplex Method

On the other hand, if the minimum of $w = 0$, the resulting array will be in canonical form; Phase II is then initiated by eliminating the $w$ equation, as well as the columns corresponding to each of the artificial variables $y_1, y_2, \ldots, y_m$, from the array.

5. Apply the simplex algorithm to the adjusted canonical system obtained at the end of Phase I to find a solution, if a finite one exists, that optimizes the value of $f$.
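A minimal MATLAB sketch of the Phase I setup in steps 2 and 3; the helper name phase1_tableau and the tableau layout are assumptions. Note that Phase I itself needs a simplex variant that selects columns from the -w row and keeps the -f row out of the ratio test, so the earlier simplex_iterate sketch is not used unchanged here:

```matlab
% Build the Phase I array for A*x = b (b >= 0) with cost vector c:
% artificial variables y1..ym, a (-f) row, and a (-w) row with
% d_i = -(a_1i + ... + a_mi) and right-hand side -(b_1 + ... + b_m).
function T = phase1_tableau(A, b, c)
    m = size(A, 1);                          % number of constraints
    T = [ A,          eye(m),      b       ; % constraints with y1..ym appended
          c,          zeros(1, m), 0       ; % (-f) row
          -sum(A, 1), zeros(1, m), -sum(b)]; % (-w) row in canonical form
end
% For the example on the next slides,
%   T = phase1_tableau([3 -3 4 2 -1; 1 1 1 3 1], [0; 2], [2 3 2 -1 1]);
% the last row comes out as [-4 2 -5 -5 0 0 0 -2], matching Eq. (E3).
```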

Page 313: Optimization

313

Example

• Minimize

$$f = 2x_1 + 3x_2 + 2x_3 - x_4 + x_5$$

subject to the constraints

$$3x_1 - 3x_2 + 4x_3 + 2x_4 - x_5 = 0$$
$$x_1 + x_2 + x_3 + 3x_4 + x_5 = 2$$
$$x_i \ge 0, \quad i = 1 \text{ to } 5$$

Page 314: Optimization

314

Example

Solution:

Step I: As the constants on the right-hand side of the constraints are already nonnegative, the application of Step I is unnecessary.

Step II: Introducing the artificial variables $y_1 \ge 0$ and $y_2 \ge 0$, the equations can be written as follows:

$$3x_1 - 3x_2 + 4x_3 + 2x_4 - x_5 + y_1 = 0$$
$$x_1 + x_2 + x_3 + 3x_4 + x_5 + y_2 = 2 \qquad (E_1)$$
$$2x_1 + 3x_2 + 2x_3 - x_4 + x_5 - f = 0$$

Page 315: Optimization

315

Example

Solution:

Step III: By defining the infeasibility form $w$ as

$$w = y_1 + y_2,$$

the complete array of equations can be written as

$$3x_1 - 3x_2 + 4x_3 + 2x_4 - x_5 + y_1 = 0$$
$$x_1 + x_2 + x_3 + 3x_4 + x_5 + y_2 = 2 \qquad (E_2)$$
$$2x_1 + 3x_2 + 2x_3 - x_4 + x_5 - f = 0$$
$$y_1 + y_2 - w = 0$$

Page 316: Optimization

316

Example

Solution:

Step III cont'd: This array can be rewritten as a canonical system with basic variables $y_1$, $y_2$, $-f$, and $-w$ by subtracting the sum of the first two equations of $(E_2)$ from the last equation of $(E_2)$. Thus the last equation of $(E_2)$ becomes

$$-4x_1 + 2x_2 - 5x_3 - 5x_4 + 0x_5 - w = -2 \qquad (E_3)$$

Since this canonical system (the first three equations of $(E_2)$, together with $(E_3)$) provides an initial basic feasible solution, Phase I of the simplex method can be started. The Phase I computations are shown in tableau form on the next slide.

Page 317: Optimization

317

Example

Solution: Step III cont'd: Since this canonical system (the first three equations of $(E_2)$, together with $(E_3)$) provides an initial basic feasible solution, Phase I of the simplex method can be started.

Basic        Admissible variables       Artificial      Value of | bi''/ais''
variables    x1   x2   x3   x4   x5     y1   y2         bi''     | (ais'' > 0)
y1            3   -3    4    2   -1      1    0           0      |  0   <- smaller value (y1 drops from the next basis)
y2            1    1    1    3    1      0    1           2      |  2/3
-f            2    3    2   -1    1      0    0           0      |
-w           -4    2   -5   -5    0      0    0          -2      |

Most negative coefficient in the -w row: the tie between the x3 and x4 columns (-5 each) is broken in favor of x4, which enters the next basis (the ratios shown correspond to the x4 column).