
Page 1: Basic calculus (ii) recap

Basic Calculus (II) Recap

(for MSc & PhD Business, Management & Finance Students)

Lecturer: Farzad Javidanrad

First Draft: Sep. 2013

Revised: Sep. 2014

Multi-Variable Functions

Page 2: Basic calculus (ii) recap

Multi-Variable Functions

• For a one-variable function of the form y = f(x), the variable x is called the "independent variable" and y the "dependent variable".

• There are many examples of y depending on x (e.g. whether water boils depends on the amount of heat applied; consumption expenditure depends on the level of income), but the concept of a function should be understood beyond the concept of dependency. In many cases dependency is not the issue at all: the modern concept of a function is based on the idea of mapping.

Page 3: Basic calculus (ii) recap

Multi-Variable Functions

• When a painter paints a scene on a canvas, (s)he uses a correspondence rule (mapping rule): every point in three-dimensional space (R^3) is corresponded (mapped) to one and only one point in two-dimensional space (R^2).

• Mathematically speaking, the function f: R^3 → R^2 can represent the type of corresponding (mapping) rule that the painter is applying.

Page 4: Basic calculus (ii) recap

The Concept of Function as Mapping

• Transformation of an object is a mapping from R^2 to R^2;

• Mathematical operations describe a function from R^2 to R.

f: R^2 → R^2, (a, b) → (b, −a)   (the transformation shown in the figure, rotating the point (a, b) about the origin o)

g: R^2 → R, (a, b) → a + b

Figure 1-6: Geometrical interpretation of the sum operator as a function. This is a transformation from the space R^2 to R.

Page 5: Basic calculus (ii) recap

Multi-Variable Functions

• All basic mathematical operators, such as summation, subtraction, division and multiplication, introduce a function from two-dimensional space (R^2) to the real number set (one-dimensional space, R), that is:

f: R^2 → R

e.g. for division: (a, b) → a/b (b ≠ 0)

• One important family of multi-variable functions is the "real (scalar) multi-variable function", which can be shown as f: R^n → R or simply y = f(x1, x2, …, xn), where y is the dependent variable and x1, x2, …, xn are the independent variables.

Page 6: Basic calculus (ii) recap

Two-Variable Functions

• A simple form of this function is when we have two independent variables x, y and one dependent variable z, in the form z = f(x, y). This is called a "two-variable function" as there are two independent variables.

• E.g. a Cobb-Douglas production function:

Y = f(K, L) = A·K^α·L^β

where Y is the level of production, K and L are the levels of capital and labour employed for production, respectively, and A, α and β are constants of the function.

Adopted from http://en.citizendium.org/wiki/File:Cobb-Douglas_with_dimishing_returns_to_scale.png

Page 7: Basic calculus (ii) recap

Two-Variable Functions

• z = f(x, y) represents a functional relationship if for every ordered pair (x, y) in the domain of the function there is one and only one value of z in the range of the function.

o Which graph represents a function?

Ellipsoid: x^2/a^2 + y^2/b^2 + z^2/c^2 = 1

Hyperboloid of Two Sheets: −x^2/a^2 − y^2/b^2 + z^2/c^2 = 1

Hyperbolic Paraboloid: x^2/a^2 − y^2/b^2 = z/c

Elliptic Paraboloid: x^2/a^2 + y^2/b^2 = z/c

Adopted from http://tutorial.math.lamar.edu/Classes/CalcIII/QuadricSurfaces.aspx

Page 8: Basic calculus (ii) recap

Derivative of Two-Variable Functions

• Consider the function z = f(x, y); z changes if x or y or both of them change. If we control the change of y and allow just x to change, then the average change of z in terms of x is Δz/Δx. The limiting state of this ratio as Δx → 0 is what is called the "partial derivative of z with respect to x" and is shown by:

∂z/∂x, ∂f(x, y)/∂x, z'_x or f_x

• The cutting plane in the figure shows that the variable y is controlled (fixed) at y = 1 but x can change from −2 to +2, and the movement is on the curve of intersection between the plane and the surface of the function.

Adopted from http://msemac.redwoods.edu/~darnold/math50c/matlab/pderiv/index.xhtml

Page 9: Basic calculus (ii) recap

Partial Differentiation

โ€ข If ๐‘ฅ is controlled (fixed) and ๐‘ฆ is allowed to change the partial derivative of ๐’› in terms of ๐’š can be shown by:

๐œ•๐‘ง

๐œ•๐‘ฆ,

๐œ•๐‘“(๐‘ฅ,๐‘ฆ)

๐œ•๐‘ฆ, ๐‘ง๐‘ฆ

โ€ฒ ,๐‘“๐‘ฆ

โ€ข The cutter plane shows that

๐‘ฅ is controlled (fixed) at

๐‘ฅ = 0 but ๐‘ฆ can change from

-3 to +3 on the curve of intersection

between the plane and the surface

of the function.

z

yx

Adopted from http://www.uwec.edu/math/Calculus/216-Spring2007/assignments.htm

Page 10: Basic calculus (ii) recap

Partial Differentiation

• So, in general, the slope of the function z = f(x, y) on the curve of intersection between the surface of the function and a cutting plane parallel to the x-axis, at any point of the domain, is:

∂z/∂x = f_x = lim_{Δx→0} [f(x + Δx, y) − f(x, y)]/Δx = lim_{h→0} [f(x + h, y) − f(x, y)]/h

This means that when calculating ∂z/∂x the variable y should be treated as a constant. The same rule applies to multi-variable functions.

(Figure: the surface z = 10 − x^2 − y^2.)

Adopted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240

Page 11: Basic calculus (ii) recap

Partial Differentiation

• And the slope of the function z = f(x, y) on the curve of intersection between the surface of the function and a cutting plane parallel to the y-axis, at any point of the domain, is:

∂z/∂y = f_y = lim_{Δy→0} [f(x, y + Δy) − f(x, y)]/Δy = lim_{h→0} [f(x, y + h) − f(x, y)]/h

This means that when calculating ∂z/∂y the variable x should be treated as a constant. The same rule applies to multi-variable functions.

(Figure: the surface z = 10 − x^2 − y^2.)

Adopted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240

Page 12: Basic calculus (ii) recap

Partial Differentiation

• To find the partial derivatives (the slopes of the tangent lines on the surface) at a specific point P(a, b, c) we have:

• ∂f(x, y)/∂x at P(a, b, c) = lim_{h→0} [f(a + h, b) − f(a, b)]/h

• ∂f(x, y)/∂y at P(a, b, c) = lim_{h→0} [f(a, b + h) − f(a, b)]/h

Example:

o Find the partial derivatives of z = 10x^2·y^3.

∂z/∂x = 20x·y^3 ,  ∂z/∂y = 30x^2·y^2

Adopted from http://www.solitaryroad.com/c353.html
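The example above can be checked numerically. Below is a minimal sketch (in Python, not part of the original slides): each partial derivative of z = 10x^2·y^3 is approximated by a central finite difference that holds the other variable fixed, and compared with the analytic answers 20x·y^3 and 30x^2·y^2.

```python
# Numerical check of the partial derivatives of z = 10x^2 y^3.
# A partial derivative holds one variable fixed and differentiates
# with respect to the other, so a one-dimensional finite difference works.

def f(x, y):
    return 10 * x**2 * y**3

def partial_x(f, x, y, h=1e-6):
    # y is held constant; only x varies
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # x is held constant; only y varies
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x, y = 2.0, 3.0
print(partial_x(f, x, y), 20 * x * y**3)     # both ~ 1080
print(partial_y(f, x, y), 30 * x**2 * y**2)  # both ~ 1080
```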

Page 13: Basic calculus (ii) recap

Rules of Partial Differentiation

• If f(x, y) and g(x, y) are two functions differentiable with respect to x and y:

z = f(x, y) ± g(x, y) → ∂z/∂x = ∂f/∂x ± ∂g/∂x = f_x ± g_x ,  ∂z/∂y = ∂f/∂y ± ∂g/∂y = f_y ± g_y

z = f(x, y) × g(x, y) → ∂z/∂x = f_x·g + g_x·f ,  ∂z/∂y = f_y·g + g_y·f

z = f(x, y)/g(x, y) → ∂z/∂x = (f_x·g − g_x·f)/g^2 ,  ∂z/∂y = (f_y·g − g_y·f)/g^2

Page 14: Basic calculus (ii) recap

Some Examples

o Find the partial derivatives of the function z = x^2 − x·y^3 − 5y^2.

∂z/∂x = 2x − y^3 ,  ∂z/∂y = −3x·y^2 − 10y

o Find the partial derivatives of z = xy·√(x^2 + y^2).

∂z/∂x = y·√(x^2 + y^2) + [2x/(2√(x^2 + y^2))]·xy = y·√(x^2 + y^2) + x^2·y/√(x^2 + y^2)

∂z/∂y = x·√(x^2 + y^2) + [2y/(2√(x^2 + y^2))]·xy = x·√(x^2 + y^2) + y^2·x/√(x^2 + y^2)

o Find the partial derivatives of z = 3x^2·y^2/(x^4 + y^4).

∂z/∂x = [6x·y^2·(x^4 + y^4) − 4x^3·3x^2·y^2]/(x^4 + y^4)^2

∂z/∂y = [6y·x^2·(x^4 + y^4) − 4y^3·3x^2·y^2]/(x^4 + y^4)^2

Page 15: Basic calculus (ii) recap

Chain Rule (Different Cases)

• Case 1: If z = f(u) and u = g(x, y), then z = f(g(x, y)) and:

∂z/∂x = f'(u)·(∂g/∂x) = (∂z/∂u)·(∂u/∂x)   and   ∂z/∂y = f'(u)·(∂g/∂y) = (∂z/∂u)·(∂u/∂y)

Examples:

o Find the partial derivatives of z = e^(xy^2).

Suppose u = x·y^2, so z = e^u and:

∂z/∂x = (∂z/∂u)·(∂u/∂x) = e^u·u_x = e^(xy^2)·y^2

and ∂z/∂y = (∂z/∂u)·(∂u/∂y) = e^u·u_y = e^(xy^2)·2xy

Page 16: Basic calculus (ii) recap

Chain Rule (Different Cases)

o Find the partial derivatives of the function z = e^(x/y) + cos(xy).

∂z/∂x = (1/y)·e^(x/y) − y·sin(xy) ,  ∂z/∂y = (−x/y^2)·e^(x/y) − x·sin(xy)

• Case 2: If z = f(x, y) is a differentiable function of x and y, and these two variables are differentiable functions of r, such that x = g(r) and y = h(r), then:

dz/dr = (∂z/∂x)·(dx/dr) + (∂z/∂y)·(dy/dr)

o Find the derivative of z = x − ln y when x = √r and y = r^2 − 1:

dz/dr = 1·[1/(2√r)] − (1/y)·2r = 1/(2√r) − 2r/(r^2 − 1)

• Can you suggest another way?

The same rules apply to multi-variable functions.
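Since x and y are both functions of the single variable r, the composite z(r) can be differentiated directly. The sketch below (assuming the example reads z = x − ln y with x = √r and y = r^2 − 1, the version consistent with the stated answer; not part of the slides) compares the chain-rule formula with a finite difference.

```python
import math

# Case-2 chain rule: z = x - ln(y), with x = sqrt(r) and y = r^2 - 1.
# Composing gives a one-variable function of r, so dz/dr can be
# checked against the chain-rule answer 1/(2 sqrt(r)) - 2r/(r^2 - 1).

def z_of_r(r):
    x = math.sqrt(r)
    y = r**2 - 1
    return x - math.log(y)

def dz_dr(r):
    return 1 / (2 * math.sqrt(r)) - 2 * r / (r**2 - 1)

r, h = 2.0, 1e-6
numeric = (z_of_r(r + h) - z_of_r(r - h)) / (2 * h)
print(numeric, dz_dr(r))  # both ~ -0.9798
```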

Page 17: Basic calculus (ii) recap

Chain Rule (Different Cases)

• Case 3: If z = f(x, y) is a differentiable function of x and y, and these two variables are differentiable functions of r and s, such that x = g(r, s) and y = h(r, s), and r and s are independent of each other (dr/ds = ds/dr = 0), then:

∂z/∂r = (∂z/∂x)·(∂x/∂r) + (∂z/∂y)·(∂y/∂r)   and   ∂z/∂s = (∂z/∂x)·(∂x/∂s) + (∂z/∂y)·(∂y/∂s)

• These derivatives are called the "total derivatives of z with respect to r and s".

o Find the partial derivatives of z = ∛(x^2 − y), where x = r^2 + s^2 and y = r/s.

Page 18: Basic calculus (ii) recap

Implicit Differentiation

• The Chain Rule can be used for implicit differentiation, even for one-variable functions: F(x, y) = 0.

Using the chain rule we have:

dF/dx = (∂F/∂x)·(dx/dx) + (∂F/∂y)·(dy/dx) = 0

As dx/dx = 1, so:

dy/dx = −(∂F/∂x)/(∂F/∂y) = −F_x/F_y

• The same rule can be used for implicit two- or multi-variable functions. For example, for an implicit function F(x, y, z) = 0, we have:

∂z/∂x = −(∂F/∂x)/(∂F/∂z) = −F_x/F_z   and   ∂z/∂y = −(∂F/∂y)/(∂F/∂z) = −F_y/F_z

Page 19: Basic calculus (ii) recap

Examples of Implicit Functions

o Find the slope of the tangent line on the curve of intersection between the surface x^2 + y^2 + z^2 = 9 and the plane y = 2 at the point A(1, 2, 2).

As y is fixed at 2, we are looking for ∂z/∂x at point A:

2x + 0 + 2z·(∂z/∂x) = 0 → ∂z/∂x = −x/z = −1/2

Or, using implicit differentiation:

∂z/∂x = −F_x/F_z = −2x/2z = −x/z

o Find ∂z/∂y for e^(x+y+z) = x^2 − 2y^2 + z^2.

(0 + 1 + ∂z/∂y)·e^(x+y+z) = 0 − 4y + 2z·(∂z/∂y) → ∂z/∂y = (e^(x+y+z) + 4y)/(2z − e^(x+y+z))

Try using the implicit differentiation formula for this question as well.
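For the sphere example, solving explicitly for the upper branch z = √(9 − x^2 − y^2) with y = 2 lets us confirm ∂z/∂x = −x/z = −1/2 at A(1, 2, 2) numerically (a sketch, not part of the slides):

```python
import math

# The sphere x^2 + y^2 + z^2 = 9 with y fixed at 2 defines z implicitly.
# The upper branch can be solved explicitly, so the implicit result
# dz/dx = -x/z = -1/2 at A(1, 2, 2) can be checked directly.

def z_upper(x, y=2.0):
    return math.sqrt(9 - x**2 - y**2)

h = 1e-6
numeric = (z_upper(1 + h) - z_upper(1 - h)) / (2 * h)
implicit = -1 / z_upper(1)   # -x/z with x = 1, z = 2
print(numeric, implicit)     # both ~ -0.5
```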

Page 20: Basic calculus (ii) recap

Higher-Order Partial Derivatives

• For the function z = f(x, y) the partial derivatives ∂z/∂x and ∂z/∂y are, in general, themselves functions of x and y. So we can think of second partial derivatives of z, but in this case there are three different second derivatives:

z_xx = f_xx = ∂(∂z/∂x)/∂x = ∂^2z/∂x^2   (second-order direct partial derivative)

z_yy = f_yy = ∂(∂z/∂y)/∂y = ∂^2z/∂y^2   (second-order direct partial derivative)

z_xy = f_xy = ∂(∂z/∂x)/∂y = ∂^2z/∂y∂x   (second-order cross partial derivative)

Page 21: Basic calculus (ii) recap

The Equality of Mixed (Cross) Partial Derivatives

• The other second-order cross partial derivative is:

z_yx = f_yx = ∂(∂z/∂y)/∂x = ∂^2z/∂x∂y

• If the cross (mixed) partial derivatives f_xy and f_yx are continuous and finite in their domain, then they are equal to one another; i.e.

f_xy = f_yx , or ∂^2z/∂y∂x = ∂^2z/∂x∂y

• Diagram of the derivatives: z = f(x, y) has first partials ∂z/∂x = f_x and ∂z/∂y = f_y, and from these the second partials f_xx, f_xy = f_yx and f_yy.
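The equality f_xy = f_yx can be illustrated numerically. Starting from the analytic first partials of a sample function f(x, y) = x^3·y^2 + sin(xy) (a hypothetical example chosen here, not from the slides), differentiating f_x with respect to y and f_y with respect to x gives the same number:

```python
import math

# Equality of mixed partials for f(x, y) = x^3 y^2 + sin(xy):
# f_x = 3x^2 y^2 + y cos(xy) and f_y = 2x^3 y + x cos(xy).

def fx(x, y):
    return 3 * x**2 * y**2 + y * math.cos(x * y)

def fy(x, y):
    return 2 * x**3 * y + x * math.cos(x * y)

x, y, h = 1.2, 0.7, 1e-6
fxy = (fx(x, y + h) - fx(x, y - h)) / (2 * h)  # differentiate f_x w.r.t. y
fyx = (fy(x + h, y) - fy(x - h, y)) / (2 * h)  # differentiate f_y w.r.t. x
print(fxy, fyx)  # equal up to rounding
```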

Page 22: Basic calculus (ii) recap

Total Differential

• The meaning of the differential for a multi-variable scalar function is no different from that for a one-variable function. The only difference is that the source of change in the dependent variable is the change in all the independent variables; that is:

z + Δz = f(x + Δx, y + Δy)

or Δz = f(x + Δx, y + Δy) − f(x, y)

But dz, which is called the "total differential", is defined as:

dz = (∂z/∂x)·dx + (∂z/∂y)·dy

or dz = f_x·dx + f_y·dy

Adopted from Calculus Early Transcendentals, James Stewart, p. 897

Page 23: Basic calculus (ii) recap

Total Differential

• For a multi-variable scalar function the same rule applies:

z = f(x1, x2, …, xn)

dz = (∂z/∂x1)·dx1 + (∂z/∂x2)·dx2 + ⋯ + (∂z/∂xn)·dxn

• In the case of the two-variable function z = f(x, y) we assumed x and y are independent, but if they depend on other variables, the differential of each of them can be treated as the total differential of a dependent variable; that is:

z = f(x, y) → dz = (∂z/∂x)·dx + (∂z/∂y)·dy   (A)

x = h(r, s) → dx = (∂x/∂r)·dr + (∂x/∂s)·ds   (B)

y = k(r, s) → dy = (∂y/∂r)·dr + (∂y/∂s)·ds   (C)

Substituting B and C into A:

dz = (∂z/∂x)·[(∂x/∂r)·dr + (∂x/∂s)·ds] + (∂z/∂y)·[(∂y/∂r)·dr + (∂y/∂s)·ds]

Page 24: Basic calculus (ii) recap

Total Differential

If we are looking for the total derivatives of z with respect to r and s, which were introduced before as the chain rule (case 3), we need to suppose that r and s are independent variables, not associated with each other (ds/dr = dr/ds = 0); then:

dz = [(∂z/∂x)·(∂x/∂r) + (∂z/∂y)·(∂y/∂r)]·dr + [(∂z/∂x)·(∂x/∂s) + (∂z/∂y)·(∂y/∂s)]·ds

so that:

∂z/∂r = (∂z/∂x)·(∂x/∂r) + (∂z/∂y)·(∂y/∂r)

and

∂z/∂s = (∂z/∂x)·(∂x/∂s) + (∂z/∂y)·(∂y/∂s)

Page 25: Basic calculus (ii) recap

Second-Order Total Differential

• The sign of the second-order total differential d^2z shows the convexity or concavity of the surface with respect to the xOy plane.

• Considering the total differential dz, the second-order total differential d^2z can be obtained by applying the differential rules:

d^2z = d(dz) = d[(∂z/∂x)·dx + (∂z/∂y)·dy] = d(f_x·dx + f_y·dy) = df_x·dx + f_x·d(dx) + df_y·dy + f_y·d(dy)

As d(dx) = d^2x = 0 and d(dy) = d^2y = 0, and:

df_x = f_xx·dx + f_xy·dy
df_y = f_yx·dx + f_yy·dy

therefore:

d^2z = f_xx·dx^2 + 2f_xy·dx·dy + f_yy·dy^2

• Factorising dy^2 from the right-hand side, we have:

d^2z = dy^2·[f_xx·(dx/dy)^2 + 2f_xy·(dx/dy) + f_yy]

Page 26: Basic calculus (ii) recap

Second-Order Differential

• dy^2 > 0 (why?); so the sign of d^2z depends on the sign of the expression in the bracket.

• From elementary algebra we know that the quadratic form aX^2 + bX + c has the same sign as the parameter a when Δ = b^2 − 4ac < 0.

• If we let X = dx/dy and a = f_xx, b = 2f_xy, c = f_yy, then d^2z = dy^2·(aX^2 + bX + c) has the same sign as a = f_xx if:

(2f_xy)^2 − 4f_xx·f_yy < 0 → f_xx·f_yy > (f_xy)^2

So:

1. d^2z > 0 if f_xx > 0 and f_xx·f_yy > (f_xy)^2.

2. d^2z < 0 if f_xx < 0 and f_xx·f_yy > (f_xy)^2.

Adopted from Calculus Early Transcendentals, James Stewart (different pages)

Page 27: Basic calculus (ii) recap

Optimising Two-Variable Functions

• The two-variable function z = f(x, y) has a relative maximum (relative minimum) at a point in its domain if at that point:

i. f_x = 0 and f_y = 0, simultaneously.   (Necessary conditions for differentiable functions)

ii. f_xx < 0 (f_xx > 0)   (Sufficient conditions, together with iii)

iii. f_xx·f_yy − (f_xy)^2 > 0

Note 1: If f_xx·f_yy − (f_xy)^2 < 0, the critical point is not a maximum or a minimum but a saddle point (it looks like a maximum from one axis but a minimum from another axis).

(Figure: the saddle surface z = x^2 − y^2.)

Adopted from http://commons.wikimedia.org/wiki/File:Saddle_point.png

Page 28: Basic calculus (ii) recap

Optimising Two-Variable Functions

• Note 2: If f_xx·f_yy − (f_xy)^2 = 0 at the critical point, further investigation is needed to find out the nature of the point.

• Example:

o Find the local extrema of the function f(x, y) = 2x^3 − 6xy + 8y^3, if any.

f_x = 0 and f_y = 0 → 6x^2 − 6y = 0 and −6x + 24y^2 = 0 → x^2 = y and −x + 4y^2 = 0

After solving these simultaneous equations two critical points emerge:

A(0, 0, 0) and B((1/4)^(1/3), (1/16)^(1/3), −1/2).

Page 29: Basic calculus (ii) recap

Optimising Two-Variable Functions

Now, f_xx = 12x, f_yy = 48y and f_xy = f_yx = −6.

So, f_xx·f_yy − (f_xy)^2 = 12x·48y − (−6)^2 = 576xy − 36.

At the point A(0, 0, 0): f_xx·f_yy − (f_xy)^2 = −36 < 0 → A is a saddle point.

At the point B((1/4)^(1/3), (1/16)^(1/3)): f_xx·f_yy − (f_xy)^2 = 144 − 36 = 108 > 0 and f_xx > 0, so this point is a local minimum.

Page 30: Basic calculus (ii) recap

The Jacobian & Hessian Determinants

• From matrix algebra we know that for any square matrix A, if:

|A| = 0 ⟹ A is a singular matrix,

which means there exists linear dependence between at least two rows or two columns of the matrix.

And if:

|A| ≠ 0 ⟹ A is a non-singular matrix,

which means all rows and all columns are linearly independent.

• So, to test for linear dependence between the equations in a simultaneous system, the determinant of the coefficient matrix can be used.

Page 31: Basic calculus (ii) recap

The Jacobian & Hessian Determinants

• To test for functional dependence (both linear and non-linear) between different functions we use the Jacobian determinant, shown by |J|.

• The Jacobian matrix is the matrix of all first-order partial derivatives of a vector function F: R^n → R^m, which maps a vector in n-dimensional space (real n-tuples) to a vector in m-dimensional space (real m-tuples):

y1 = F1(x1, x2, …, xn)
y2 = F2(x1, x2, …, xn)
⋮
ym = Fm(x1, x2, …, xn)

So the Jacobian matrix of F is:

J = | ∂F1/∂x1  ⋯  ∂F1/∂xn |
    |   ⋮      ⋱    ⋮     |
    | ∂Fm/∂x1  ⋯  ∂Fm/∂xn |

Each row consists of the partial derivatives of one of the functions (e.g. F1) with respect to all the independent variables x1, x2, …, xn.

Page 32: Basic calculus (ii) recap

The Jacobian & Hessian Determinants

• If m = n, the Jacobian matrix is a square matrix and its determinant shows whether there is functional dependence or independence between the functions:

|J| = 0 ⟹ the equations are functionally dependent (there is a linear or non-linear association between the functions).

|J| ≠ 0 ⟹ the equations are functionally independent (there is no linear or non-linear association between the functions).

Example: Use the Jacobian determinant to test the functional dependency of the following equations:

y1 = 2x1 − 3x2
y2 = 4x1^2 − 12x1·x2 + 9x2^2

Page 33: Basic calculus (ii) recap

The Jacobian & Hessian Determinants

• The Jacobian determinant is:

|J| = | ∂y1/∂x1  ∂y1/∂x2 |  =  | 2           −3           |
      | ∂y2/∂x1  ∂y2/∂x2 |     | 8x1 − 12x2  −12x1 + 18x2 |

= 2(−12x1 + 18x2) − (−3)(8x1 − 12x2) = 0

• So, the functions are not independent.

• We expected such a result, as we know there is a quadratic functional relationship between y1 and y2: y1^2 = y2.

Page 34: Basic calculus (ii) recap

The Jacobian & Hessian Determinants

• The Hessian matrix is a square matrix composed of the second-order partial derivatives of a real (scalar) multi-variable function f: R^n → R. For a function y = f(x1, x2, …, xn), the Hessian determinant is defined as:

|H| = | ∂^2f/∂x1^2    ∂^2f/∂x1∂x2  ⋯  ∂^2f/∂x1∂xn |     | f11  f12  …  f1n |
      | ∂^2f/∂x2∂x1   ∂^2f/∂x2^2   ⋯  ∂^2f/∂x2∂xn |  =  | f21  f22  …  f2n |
      |   ⋮             ⋮          ⋱    ⋮          |     |  ⋮    ⋮   ⋱   ⋮  |
      | ∂^2f/∂xn∂x1   ∂^2f/∂xn∂x2  …  ∂^2f/∂xn^2  |     | fn1  fn2  …  fnn |

• In the optimisation of a two-variable function, if the first-order (necessary) conditions f_x = f_y = 0 are met, the second-order (sufficient) conditions are:

f_xx, f_yy > 0 for a minimum and f_xx, f_yy < 0 for a maximum

f_xx·f_yy − (f_xy)^2 > 0

Page 35: Basic calculus (ii) recap

The Jacobian & Hessian Determinants

• Using the Hessian determinant, we can simply state the sufficient conditions as follows.

The optimal point is a minimum if |H1| > 0 and |H2| > 0, because:

o |H1| = f_xx > 0

o |H2| = | f_xx  f_xy |  = f_xx·f_yy − (f_xy)^2 > 0
         | f_yx  f_yy |

And the optimal point is a maximum if |H1| < 0 and |H2| > 0.

• The same story holds for a multi-variable function y = f(x1, x2, …, xn):

If |H1|, |H2|, |H3|, …, |Hn| > 0, the critical point is a local minimum.

If the principal minors change their signs consecutively, the critical point is a local maximum (e.g. in the case of y = f(x1, x2, x3): |H1| < 0, |H2| > 0 and |H3| < 0).

Page 36: Basic calculus (ii) recap

Optimisation with a Constraint

• In reality, the independent variables in a function are not fully independent of each other. They might be in a linear or even non-linear relationship with one another, which forms a constraint in the process of optimisation and changes its result.

g(x, y) = c   (the constraint may be linear or non-linear)

Adopted from http://staff.www.ltu.se/~larserik/applmath/chap7en/part7.html
Adopted & altered from http://en.wikipedia.org/wiki/Lagrange_multiplier

Page 37: Basic calculus (ii) recap

Optimisation with a Constraint

• In each case, the function z = f(x, y) is the target function for optimisation, subject to a constraint g(x, y) = c (where c is a constant). So:

Max or Min: z = f(x, y)
Subject to: g(x, y) = c

• If the constraint function g(x, y) = c is linear (e.g. x − 2y = −1), one way to include the constraint in the optimisation process is to express one variable in terms of the other from the constraint function (here x = 2y − 1), substitute it into the target function to make it a function of one independent variable, z = F(y), and then follow the optimisation process for a one-variable function.

Page 38: Basic calculus (ii) recap

Example

• Example: Find the maximum of the function z = xy subject to the constraint x + y = 1.

From the constraint function we have y = −x + 1, and if we substitute this for y in the target function, we will have z = −x^2 + x.

dz/dx = 0 → −2x + 1 = 0 → x = 0.5

Putting this into the constraint equation to find y, and both into the target function to find z, the maximum point will be A(0.5, 0.5, 0.25).

How do we know the point is a maximum?
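The substitution method is easy to replay numerically; the sketch below (not from the slides) scans z = x(1 − x) on a grid, and the second derivative d^2z/dx^2 = −2 < 0 is what confirms the point is a maximum.

```python
# Substituting the constraint y = 1 - x into z = xy gives the
# one-variable function z = x - x^2; a grid scan locates its maximum.
# Since d2z/dx2 = -2 < 0 everywhere, the critical point is a maximum.

def z(x):
    return x * (1 - x)

xs = [i / 1000 for i in range(1001)]
best = max(xs, key=z)
print(best, z(best))  # 0.5 0.25
```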

Page 39: Basic calculus (ii) recap

The Lagrange Method

• If the constraint function is non-linear, the previous method might become very complicated. Another method, called the "Lagrange Method" or the "Method of Lagrange Multipliers", can help us find local extremum points.

• In the Lagrange method the constraint enters the optimisation process through a new variable λ (the Lagrange multiplier, or Lagrange coefficient), used to form the Lagrange function L:

L(x, y, λ) = f(x, y) + λ·[c − g(x, y)]

• By changing x and y, a point moves on the surface of the function, but the movement is limited to the constraint g(x, y) = c.

• This means c − g(x, y) = 0 and L(x, y, λ) = f(x, y). So the optimisation of L is equivalent to the optimisation of f.

Page 40: Basic calculus (ii) recap

The Lagrange Method

• To find the extremum values we need to take the derivative of the Lagrange function with respect to each of its variables and solve the following simultaneous equations (the necessary conditions for having extrema):

∂L/∂x = 0 → ∂f/∂x − λ·(∂g/∂x) = 0
∂L/∂y = 0 → ∂f/∂y − λ·(∂g/∂y) = 0
∂L/∂λ = 0 → c − g(x, y) = 0

• Solving these simultaneous equations gives us the critical values of x and y and a value for λ.

• λ shows the sensitivity of the target (objective) function to a change in the constraint function.

Page 41: Basic calculus (ii) recap

Sufficient Condition

• To make sure that the critical point(s) found by solving the simultaneous equations are extremum(s), we need sufficient evidence, which is the sign of the second-order differential of the Lagrange function, d^2L, at the critical point(s).

• If L = f(x, y) + λ·[c − g(x, y)], then

dL = df + (c − g)·dλ − λ·dg

and d^2L = d^2f − dg·dλ + (c − g)·d^2λ − dλ·dg − λ·d^2g

Since on the constraint c − g = 0, and:

d^2λ = 0
d^2f = f_xx·dx^2 + 2f_xy·dx·dy + f_yy·dy^2
dg = g_x·dx + g_y·dy
d^2g = g_xx·dx^2 + 2g_xy·dx·dy + g_yy·dy^2

therefore:

Page 42: Basic calculus (ii) recap

๐‘‘2๐ฟ = ๐‘“๐‘ฅ๐‘ฅ โˆ’ ๐œ†. ๐‘”๐‘ฅ๐‘ฅ . ๐‘‘๐‘ฅ2 + 2 ๐‘“๐‘ฅ๐‘ฆ โˆ’ ๐œ†. ๐‘”๐‘ฅ๐‘ฆ . ๐‘‘๐‘ฅ. ๐‘‘๐‘ฆ + ๐‘“๐‘ฆ๐‘ฆ โˆ’ ๐œ†. ๐‘”๐‘ฆ๐‘ฆ . ๐‘‘๐‘ฆ2 โˆ’

2๐‘”๐‘ฅ๐‘‘๐‘ฅ. ๐‘‘๐œ† โˆ’ 2๐‘”๐‘ฆ๐‘‘๐‘ฆ. ๐‘‘๐œ†

= ๐ฟ๐‘ฅ๐‘ฅ . ๐‘‘๐‘ฅ2 + 2๐ฟ๐‘ฅ๐‘ฆ . ๐‘‘๐‘ฅ. ๐‘‘๐‘ฆ + ๐ฟ๐‘ฆ๐‘ฆ. ๐‘‘๐‘ฆ2 โˆ’ 2๐‘”๐‘ฅ๐‘‘๐‘ฅ. ๐‘‘๐œ† โˆ’ 2๐‘”๐‘ฆ๐‘‘๐‘ฆ. ๐‘‘๐œ†

โ€ข In the matrix form we can use the bordered Hessian Matrix to represent the above quadratic form:

๐‘‘2๐ฟ = ๐‘‘๐œ† ๐‘‘๐‘ฅ ๐‘‘๐‘ฆ

0 โˆ’๐‘”๐‘ฅ โˆ’๐‘”๐‘ฆ

โˆ’๐‘”๐‘ฅ ๐ฟ๐‘ฅ๐‘ฅ ๐ฟ๐‘ฅ๐‘ฆ

โˆ’๐‘”๐‘ฆ ๐ฟ๐‘ฆ๐‘ฅ ๐ฟ๐‘ฆ๐‘ฆ

๐‘‘๐œ†๐‘‘๐‘ฅ๐‘‘๐‘ฆ

โ€ข Where the bordered Hessian matrix is:

๐ป3 =

0 โˆ’๐‘”๐‘ฅ โˆ’๐‘”๐‘ฆ

โˆ’๐‘”๐‘ฅ ๐ฟ๐‘ฅ๐‘ฅ ๐ฟ๐‘ฅ๐‘ฆ

โˆ’๐‘”๐‘ฆ ๐ฟ๐‘ฆ๐‘ฅ ๐ฟ๐‘ฆ๐‘ฆ

or sometimes ๐ป3 =

๐ฟ๐‘ฅ๐‘ฅ ๐ฟ๐‘ฅ๐‘ฆ โˆ’๐‘”๐‘ฅ

๐ฟ๐‘ฆ๐‘ฅ ๐ฟ๐‘ฆ๐‘ฆ โˆ’๐‘”๐‘ฆ

โˆ’๐‘”๐‘ฅ โˆ’๐‘”๐‘ฆ 0

Sufficient Condition

Page 43: Basic calculus (ii) recap

โ€ข In the second form, the components of vectors of the first differentials of the variables, need to be re-arranged, i.e.:

๐‘‘๐‘ฅ ๐‘‘๐‘ฆ ๐‘‘๐œ†

๐ฟ๐‘ฅ๐‘ฅ ๐ฟ๐‘ฅ๐‘ฆ โˆ’๐‘”๐‘ฅ

๐ฟ๐‘ฆ๐‘ฅ ๐ฟ๐‘ฆ๐‘ฆ โˆ’๐‘”๐‘ฆ

โˆ’๐‘”๐‘ฅ โˆ’๐‘”๐‘ฆ 0

๐‘‘๐‘ฅ๐‘‘๐‘ฆ๐‘‘๐œ†

โ€ข Note: In some books the constraint function ๐‘” enters in the Lagrange function with a positive sign, so, the signs of the first derivatives of ๐‘”in the bordered Hessian matrix are positive, but there is no difference between their determinants. (Based on the properties of determinant, if just a row or just a column of a matrix multiplied by ๐‘˜, the determinant of the matrix is multiplied by ๐‘˜. In this case, the first row and the first column multiplied by -1, so, the determinant is multiplied by -1x(-1)=1)


Page 44: Basic calculus (ii) recap

So, we have a minimum if:

1. d²L > 0 (i.e. the bordered principal minors are all negative: |H2|, |H3| < 0)

And a maximum if:

2. d²L < 0 (i.e. the bordered principal minors alternate in sign; here |H3| > 0)

• For a multi-variable function y = f(x1, x2, …, xn) with one constraint, the bordered Hessian matrix is (n+1)×(n+1) but the rule is the same:

• For a minimum: |H2|, |H3|, …, |Hn+1| < 0.

• For a maximum: the signs of the bordered principal minors alternate consecutively from |H3| onwards: |H3| > 0, |H4| < 0, ….
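For the two-variable, one-constraint case these sign rules reduce to checking the sign of |H3|, which can be sketched as a small helper (the function name `classify` is illustrative, not standard):

```python
# Classify a critical point of L(x, y, λ) from its bordered Hessian:
# |H3| < 0 implies d²L > 0 (minimum); |H3| > 0 implies d²L < 0 (maximum).
import numpy as np

def classify(Lxx, Lxy, Lyy, gx, gy):
    H3 = np.array([[0.0, -gx, -gy],
                   [-gx, Lxx, Lxy],
                   [-gy, Lxy, Lyy]])
    d = np.linalg.det(H3)
    if d < 0:
        return 'minimum'
    if d > 0:
        return 'maximum'
    return 'inconclusive'
```

For example, the critical points of the next slide's example (f = x − y on the circle x² + y² = 100) are classified as a minimum and a maximum respectively by this rule.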


Page 45: Basic calculus (ii) recap

Example

• Find the extremums of the function f(x, y) = x − y subject to the constraint x² + y² = 100, if any.

๐ฟ ๐‘ฅ, ๐‘ฆ, ๐œ† = ๐‘ฅ โˆ’ ๐‘ฆ + ๐œ†[100 โˆ’ ๐‘ฅ2 โˆ’ ๐‘ฆ2]

๐ฟ๐‘ฅ = 1 โˆ’ 2๐œ†๐‘ฅ = 0๐ฟ๐‘ฆ = โˆ’1 โˆ’ 2๐œ†๐‘ฆ = 0

๐ฟ๐œ† = 100 โˆ’ ๐‘ฅ2 โˆ’ ๐‘ฆ2 = 0

1

โˆ’1=

2๐œ†๐‘ฅ

2๐œ†๐‘ฆ

From the first two equations ๐œ† can be omitted and we have ๐‘ฅ = โˆ’๐‘ฆ. Substituting this new equation into the third equation we will have:

100 โˆ’๐‘ฆ2 โˆ’ ๐‘ฆ2 = 0 โ†’ ๐‘ฆ = ยฑ5 2

So, the critical points are A โˆ’5 2, 5 2, โˆ’10 2 and ๐ต(5 2, โˆ’5 2, 10 2) and

๐œ† = โˆ“2

20.

Without any further investigation it can be said that point A is minimum and point๐ต is maximum. (Why?)

Page 46: Basic calculus (ii) recap

Example

• Using the bordered Hessian determinant method we have:

|H3| = det
[ 0    −2x  −2y ]
[ −2x  −2λ   0  ]
[ −2y   0   −2λ ]
= 8λ(x² + y²)

Obviously, the sign of this determinant depends on the sign of λ.

At point A(−5√2, 5√2, −10√2), λ = −√2/20, so |H3| < 0 and the point is a minimum. (|H2| is also negative.)

At point B(5√2, −5√2, 10√2), λ = +√2/20, so |H3| > 0 and the point is a maximum.
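The whole example can be re-checked symbolically (a sketch assuming sympy is available): solve the first-order conditions, then test the sign of |H3| = 8λ(x² + y²) at each critical point.

```python
# Solve the FOCs of L = (x - y) + λ(100 - x² - y²) and classify each point.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
L = (x - y) + lam*(100 - x**2 - y**2)
foc = [sp.diff(L, v) for v in (x, y, lam)]
sols = sp.solve(foc, (x, y, lam), dict=True)

for s in sols:
    detH3 = 8*s[lam]*(s[x]**2 + s[y]**2)   # bordered Hessian determinant
    kind = 'minimum' if detH3 < 0 else 'maximum'
    print(s[x], s[y], s[lam], '->', kind)
```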

• If there is more than one constraint, the optimisation process is the same, but there will be one Lagrange multiplier per constraint.

• This case is a generalisation of the previous one and will not be discussed here.

Page 47: Basic calculus (ii) recap

Interpretation of the Lagrange Multiplier ๐œ†

• The first-order conditions, in the form of the simultaneous equations (slide 40), provide the critical (and perhaps optimal) values of the independent variables (x∗, y∗) and the corresponding value(s) of the Lagrange multiplier (λ∗).

• The Lagrange multiplier shows the sensitivity of the optimal value of the target (objective) function (f∗) to a change in the constant of the constraint function (c). It is the partial derivative:

λ∗ = ∂f∗(x∗, y∗)/∂c

This means that if λ∗ = 2 and c increases by one unit, the value of the target function (calculated at the optimal values x∗ and y∗) increases by approximately 2 units.


Page 48: Basic calculus (ii) recap

Duality in Optimisation Analysis

• Consider the process of maximisation of the target (objective) function z = f(x, y), subject to the constraint g = g(x, y).

• As we know, the solution is the point of tangency between the two functions, so the optimisation can be carried out through different approaches. The primal approach is what we have discussed so far; in the dual approach the constraint function g = g(x, y) becomes the new target function and z = f(x, y) the new constraint.

• The initial idea comes from the mathematical fact that if f reaches its maximum at the point x = x∗, the function −f has a minimum at that point.

• Therefore, instead of finding the maximum of z = f(x, y) subject to the constraint g = g(x, y), we can find the minimum of g = g(x, y) subject to the constraint z = f(x, y); i.e. if we know that z cannot be bigger than z∗, what is the minimum value of g(x, y) which satisfies this constraint?

Page 49: Basic calculus (ii) recap

Duality in Optimisation Analysis

โ€ข Let ๐‘ˆ = ๐‘ˆ(๐‘ฅ, ๐‘ฆ) is the utility function subject to the budget constraint ๐‘ฅ. ๐‘ƒ๐‘ฅ + ๐‘ฆ. ๐‘ƒ๐‘ฆ = ๐‘š.

• The Lagrange function is:

L(x, y, λ) = U(x, y) + λ(m − x·Px − y·Py)

The first-order conditions are:

Lx = Ux − λPx = 0
Ly = Uy − λPy = 0
Lλ = m − x·Px − y·Py = 0

• The optimal values of x and y, which give the Marshallian demand (consumption) functions for x and y, and the optimal value of λ are:

xM = xM(Px, Py, m)
yM = yM(Px, Py, m)
λM = λM(Px, Py, m)


Page 50: Basic calculus (ii) recap

Duality in Optimisation Analysis

• Substituting these solutions into the target function gives the maximum utility that can be achieved under the constraint:

U∗ = U∗(xM(Px, Py, m), yM(Px, Py, m))

We call this the indirect utility function: it is the maximum value of utility, obtained at the optimal values of x and y, but it is an indirect function because its value now depends on the parameters Px, Py and m.

• Now, the dual problem is to minimise the expenditure on x and y subject to maintaining a given level of utility U∗. So, the new Lagrange function is:

L(x, y, λ) = x·Px + y·Py + λ[U∗ − U(x, y)]

The first-order conditions provide the optimal solutions for x, y and λ.

Page 51: Basic calculus (ii) recap

Duality in Optimisation Analysis

Lx = Px − λUx = 0
Ly = Py − λUy = 0
Lλ = U∗ − U(x, y) = 0

The optimal solutions represent the compensated demand functions for x and y:

xH = xH(Px, Py, U∗)
yH = yH(Px, Py, U∗)
λH = λH(Px, Py, U∗)

• The first two equations are called the Hicksian demand functions.

• Both systems of first-order conditions give us the same result:

Ux/Px = Uy/Py   or   Ux/Uy = Px/Py

So, primal and dual analysis lead us to the same conclusion. The only difference is that:

λH = 1/λM
