Basic Calculus (II) Recap
(for MSc & PhD Business, Management & Finance Students)
Lecturer: Farzad Javidanrad
First Draft: Sep. 2013
Revised: Sep. 2014
Multi-Variable Functions
• In the case of a one-variable function, written y = f(x), the variable x is called the "independent variable" and y the "dependent variable".
• There are many examples of the dependency of y on x (e.g. whether water boils depends on the amount of heat applied; or consumption expenditure depends on the level of income), but the concept of function should be understood beyond the concept of dependency. In most cases, dependency is not the issue at all. The modern concept of function is based on the idea of mapping.
Multi-Variable Functions
• When a painter paints a scene on a canvas, s(he) uses a correspondence rule (mapping rule): every point in three-dimensional space (R³) is corresponded (mapped) to one and only one point in two-dimensional space (R²).
• Mathematically speaking, the function f: R³ → R² can represent the type of correspondence (mapping) rule that the painter is applying.
The Concept of Function as Mapping
• Transformation of an object is a mapping from R² to R²; e.g. reflection about the y-axis:
f: R² → R², (a, b) → (−a, b)
• Mathematical operations describe a function from R² to R; e.g. the sum operator:
f: R² → R, (a, b) → a + b
Figure 1-6: Geometrical interpretation of the sum operator as a function. This is a transformation from the space R² to R.
Multi-Variable Functions
• All basic mathematical operators such as summation, subtraction, division and multiplication introduce a function from two-dimensional space (R²) to the real number set (one-dimensional space, R), that is:
f: R² → R
For example, for division: (a, b) → a/b (b ≠ 0)
• One important family of multi-variable functions is the "real (scalar) multi-variable function", which can be shown as f: Rⁿ → R or simply y = f(x₁, x₂, …, xₙ), where y is the dependent variable and x₁, x₂, …, xₙ are independent variables.
Two-Variable Functions
• A simple form of this function is when we have two independent variables x, y and one dependent variable z, in the form z = f(x, y). This is called a "two-variable function" as there are two independent variables.
• E.g. a Cobb-Douglas production function:
Q = f(K, L) = AK^α L^β
where Q is the level of production, K and L are the levels of capital and labour employed for production, respectively, and A, α and β are constants of the function.
Adopted from http://en.citizendium.org/wiki/File:Cobb-Douglas_with_dimishing_returns_to_scale.png
Two-Variable Functions
• z = f(x, y) represents a functional relationship if for every ordered pair (x, y) in the domain of the function there is one and only one value of z in the range of the function.
o Which graph represents a function?
x²/a² + y²/b² + z²/c² = 1 — Ellipsoid
−x²/a² − y²/b² + z²/c² = 1 — Hyperboloid of Two Sheets
x²/a² − y²/b² = z/c — Hyperbolic Paraboloid
x²/a² + y²/b² = z/c — Elliptic Paraboloid
Adopted from http://tutorial.math.lamar.edu/Classes/CalcIII/QuadricSurfaces.aspx
Derivative of Two-Variable Functions
• Consider the function z = f(x, y); z changes if x or y or both of them change. If we control the change of y and allow just x to change, then the average change of z in terms of x is Δz/Δx. The limiting state of this ratio as Δx → 0 is what is called the "partial derivative of z in terms of x", shown by:
∂z/∂x, ∂f(x,y)/∂x, z′_x, f_x
• This cutting plane shows that the variable y is controlled (fixed) at y = 1 but x can change from −2 to +2, and the movement is on the curve of intersection between the plane and the surface of the function.
Adopted from http://msemac.redwoods.edu/~darnold/math50c/matlab/pderiv/index.xhtml
Partial Differentiation
• If x is controlled (fixed) and y is allowed to change, the partial derivative of z in terms of y can be shown by:
∂z/∂y, ∂f(x,y)/∂y, z′_y, f_y
• The cutting plane shows that x is controlled (fixed) at x = 0 but y can change from −3 to +3 on the curve of intersection between the plane and the surface of the function.
Adopted from http://www.uwec.edu/math/Calculus/216-Spring2007/assignments.htm
Partial Differentiation
• So, in general, the slope of the function z = f(x, y) on the curve of intersection between the surface of the function and a cutting plane parallel to the x-axis, at any point of the domain, is:
∂z/∂x = f_x = lim_{Δx→0} [f(x + Δx, y) − f(x, y)]/Δx = lim_{h→0} [f(x + h, y) − f(x, y)]/h
It means that when calculating ∂z/∂x the variable y should be treated as a constant. The same rule applies for multi-variable functions.
Adopted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240
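The limit definition above can be checked numerically: hold y fixed and apply a one-variable difference quotient in x (and symmetrically in y). A minimal sketch; the function f below is an illustrative choice, not one from the slides:

```python
def partial_x(f, x, y, h=1e-6):
    # Difference quotient in x with y held fixed, mirroring the limit definition
    return (f(x + h, y) - f(x, y)) / h

def partial_y(f, x, y, h=1e-6):
    # Difference quotient in y with x held fixed
    return (f(x, y + h) - f(x, y)) / h

# Illustrative function (an assumption, not from the slides): f(x, y) = x**2 * y
f = lambda x, y: x**2 * y

# Analytically f_x = 2xy and f_y = x**2, so at (2, 3) we expect roughly 12 and 4
print(partial_x(f, 2.0, 3.0))
print(partial_y(f, 2.0, 3.0))
```

Shrinking h moves the quotient toward the true partial derivative, exactly as Δx → 0 in the definition.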
Partial Differentiation
• And the slope of the function z = f(x, y) on the curve of intersection between the surface of the function and a cutting plane parallel to the y-axis, at any point of the domain, is:
∂z/∂y = f_y = lim_{Δy→0} [f(x, y + Δy) − f(x, y)]/Δy = lim_{h→0} [f(x, y + h) − f(x, y)]/h
It means that when calculating ∂z/∂y the variable x should be treated as a constant. The same rule applies for multi-variable functions.
Adopted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240
Partial Differentiation
• To find the partial derivatives (slopes of the tangent lines on the surface) at a specific point P(a, b, c) we have:
• ∂f(x,y)/∂x |_{P(a,b,c)} = lim_{h→0} [f(a + h, b) − f(a, b)]/h
• ∂f(x,y)/∂y |_{P(a,b,c)} = lim_{h→0} [f(a, b + h) − f(a, b)]/h
Example:
o Find the partial derivatives of z = 10x²y³.
∂z/∂x = 20xy³, ∂z/∂y = 30x²y²
Adopted from http://www.solitaryroad.com/c353.html
Rules of Partial Differentiation
• If f(x, y) and g(x, y) are two differentiable functions with respect to x and y:
z = f(x, y) ± g(x, y) ⟹ ∂z/∂x = ∂f/∂x ± ∂g/∂x = f_x ± g_x and ∂z/∂y = ∂f/∂y ± ∂g/∂y = f_y ± g_y
z = f(x, y) × g(x, y) ⟹ ∂z/∂x = f_x·g + g_x·f and ∂z/∂y = f_y·g + g_y·f
z = f(x, y)/g(x, y) ⟹ ∂z/∂x = (f_x·g − g_x·f)/g² and ∂z/∂y = (f_y·g − g_y·f)/g²
Some Examples
o Find the partial derivatives of the function z = x² − xy³ − 5y².
∂z/∂x = 2x − y³, ∂z/∂y = −3xy² − 10y
o Find the partial derivatives of z = xy·√(x² + y²).
∂z/∂x = y·√(x² + y²) + [2x/(2√(x² + y²))]·xy = y·√(x² + y²) + x²y/√(x² + y²)
∂z/∂y = x·√(x² + y²) + [2y/(2√(x² + y²))]·xy = x·√(x² + y²) + xy²/√(x² + y²)
o Find the partial derivatives of z = 3x²y²/(x⁴ + y⁴).
∂z/∂x = [6xy²(x⁴ + y⁴) − 4x³·3x²y²]/(x⁴ + y⁴)²
∂z/∂y = [6yx²(x⁴ + y⁴) − 4y³·3x²y²]/(x⁴ + y⁴)²
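The quotient-rule results in the last example can be sanity-checked numerically: a central difference of z at a sample point should agree with the analytic partials. A minimal sketch (the test point is an arbitrary choice):

```python
def z(x, y):
    return 3 * x**2 * y**2 / (x**4 + y**4)

# Quotient-rule results from the example above
def zx(x, y):
    return (6*x*y**2 * (x**4 + y**4) - 4*x**3 * (3 * x**2 * y**2)) / (x**4 + y**4)**2

def zy(x, y):
    return (6*y*x**2 * (x**4 + y**4) - 4*y**3 * (3 * x**2 * y**2)) / (x**4 + y**4)**2

# Central differences should agree with the analytic partials
h, x0, y0 = 1e-6, 1.2, 0.7
num_zx = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)
num_zy = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)
print(abs(num_zx - zx(x0, y0)) < 1e-5, abs(num_zy - zy(x0, y0)) < 1e-5)
```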
Chain Rule (Different Cases)
Case 1: If z = f(u) and u = g(x, y), then z = f(g(x, y)) and:
∂z/∂x = f′·∂u/∂x = (dz/du)·(∂u/∂x) and ∂z/∂y = f′·∂u/∂y = (dz/du)·(∂u/∂y)
Examples:
o Find the partial derivatives of z = e^(xy²).
Suppose u = xy²; so z = e^u and
∂z/∂x = (dz/du)·(∂u/∂x) = (e^u)·u_x = e^(xy²)·y²
and ∂z/∂y = (dz/du)·(∂u/∂y) = (e^u)·u_y = e^(xy²)·2xy
Chain Rule (Different Cases)
o Find the partial derivatives of the function z = e^(x/y) + cos(xy).
∂z/∂x = (1/y)·e^(x/y) − y·sin(xy), ∂z/∂y = (−x/y²)·e^(x/y) − x·sin(xy)
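The Case 1 chain rule can be verified on the e^(xy²) example: differentiate the composite numerically and compare with e^u·u_x and e^u·u_y. A minimal sketch (the evaluation point is an arbitrary choice):

```python
import math

u = lambda x, y: x * y**2           # inner function u = g(x, y)
z = lambda x, y: math.exp(u(x, y))  # outer function z = f(u) = e**u

# Chain rule: z_x = e**u * u_x = y**2 * e**(x*y**2); z_y = e**u * u_y = 2*x*y * e**(x*y**2)
zx = lambda x, y: math.exp(x * y**2) * y**2
zy = lambda x, y: math.exp(x * y**2) * 2 * x * y

h, x0, y0 = 1e-6, 0.5, 1.1
num_zx = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)
num_zy = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)
print(abs(num_zx - zx(x0, y0)) < 1e-5, abs(num_zy - zy(x0, y0)) < 1e-5)
```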
• Case 2: If z = f(x, y) is a differentiable function of x and y, and these two variables are differentiable functions of t, such that x = g(t) and y = h(t), then:
dz/dt = (∂z/∂x)·(dx/dt) + (∂z/∂y)·(dy/dt)
o Find the derivative of z = √x − ln y when x = t and y = t² − 1.
dz/dt = [1/(2√x)]·1 − (1/y)·2t = 1/(2√t) − 2t/(t² − 1)
• Can you suggest another way?
The same rules apply for multi-variable functions.
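The Case 2 formula can be checked on the worked example: build z directly as a function of t and compare its numerical derivative with the chain-rule expression. A minimal sketch (t = 2 is an arbitrary admissible point):

```python
import math

def z_of_t(t):
    x, y = t, t**2 - 1              # x = g(t), y = h(t) from the example
    return math.sqrt(x) - math.log(y)

def dz_dt(t):
    # Chain rule: dz/dt = z_x * dx/dt + z_y * dy/dt = 1/(2*sqrt(t)) - 2*t/(t**2 - 1)
    return 1 / (2 * math.sqrt(t)) - 2 * t / (t**2 - 1)

t0, h = 2.0, 1e-6                   # t must exceed 1 so that ln(t**2 - 1) is defined
numeric = (z_of_t(t0 + h) - z_of_t(t0 - h)) / (2 * h)
print(abs(numeric - dz_dt(t0)) < 1e-6)
```

This is the "another way" hinted at above: substituting x(t) and y(t) first gives the same derivative as applying the chain rule.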
Chain Rule (Different Cases)
• Case 3: If z = f(x, y) is a differentiable function of x and y, and these two variables are differentiable functions of s and t, such that x = g(s, t) and y = h(s, t), and s and t are independent from each other (∂s/∂t = ∂t/∂s = 0), then:
∂z/∂s = (∂z/∂x)·(∂x/∂s) + (∂z/∂y)·(∂y/∂s) and ∂z/∂t = (∂z/∂x)·(∂x/∂t) + (∂z/∂y)·(∂y/∂t)
• These derivatives are called "total derivatives of z with respect to s and t".
o Find the partial derivatives of z = ∛(x² − y) where x = s² + t² and y = s/t.
Implicit Differentiation
• The Chain Rule can be used for implicit differentiation, even for one-variable functions: F(x, y) = 0.
Using the chain rule we have:
dF/dx = (∂F/∂x)·(dx/dx) + (∂F/∂y)·(dy/dx) = 0
So, as dx/dx = 1:
dy/dx = −(∂F/∂x)/(∂F/∂y) = −F_x/F_y
• The same rule can be used for implicit two- or multi-variable functions. For example, for an implicit function F(x, y, z) = 0, we have:
∂z/∂x = −(∂F/∂x)/(∂F/∂z) = −F_x/F_z and ∂z/∂y = −(∂F/∂y)/(∂F/∂z) = −F_y/F_z
Examples of Implicit Functions
o Find the slope of the tangent line on the curve of intersection between the surface x² + y² + z² = 9 and the plane y = 2 at the point A(1, 2, 2).
As y is fixed at 2, we are looking for ∂z/∂x at point A:
2x + 0 + 2z·(∂z/∂x) = 0 ⟹ ∂z/∂x = −x/z = −1/2
Or, using implicit differentiation:
∂z/∂x = −F_x/F_z = −2x/2z = −x/z
o Find ∂z/∂y for e^(x+y+z) = x² − 2y² + z².
(0 + 1 + ∂z/∂y)·e^(x+y+z) = 0 − 4y + 2z·(∂z/∂y) ⟹ ∂z/∂y = (e^(x+y+z) + 4y)/(2z − e^(x+y+z))
Use the implicit differentiation formula for this question as an exercise.
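The sphere example can be verified in two independent ways: evaluate −F_x/F_z at A, and also solve the sphere explicitly for z near A and difference numerically. A minimal sketch:

```python
import math

# F(x, y, z) = x**2 + y**2 + z**2 - 9, the sphere from the example
Fx = lambda x, y, z: 2 * x
Fz = lambda x, y, z: 2 * z

def dz_dx(x, y, z):
    # Implicit-differentiation formula: dz/dx = -Fx/Fz = -x/z
    return -Fx(x, y, z) / Fz(x, y, z)

print(dz_dx(1, 2, 2))   # slope at A(1, 2, 2): -x/z = -1/2

# Cross-check: solve the sphere for z (upper branch through A) and difference numerically
z_of = lambda x, y: math.sqrt(9 - x**2 - y**2)
h = 1e-6
numeric = (z_of(1 + h, 2) - z_of(1 - h, 2)) / (2 * h)
print(abs(numeric - (-0.5)) < 1e-5)
```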
Higher-Order Partial Derivatives
• For the function z = f(x, y) the partial derivatives ∂z/∂x and ∂z/∂y are, in general, themselves functions of x and y. So we can think of second partial derivatives of z, but in this case there are three different second derivatives:
z_xx = f_xx = ∂/∂x (∂z/∂x) = ∂²z/∂x²  (second-order direct partial derivative)
z_yy = f_yy = ∂/∂y (∂z/∂y) = ∂²z/∂y²  (second-order direct partial derivative)
z_xy = f_xy = ∂/∂y (∂z/∂x) = ∂²z/∂y∂x  (second-order cross partial derivative)
The Equality of Mixed (Cross) Partial Derivatives
z_yx = f_yx = ∂/∂x (∂z/∂y) = ∂²z/∂x∂y  (second-order cross partial derivative)
• If the cross (mixed) partial derivatives f_xy and f_yx are continuous and finite in their domain, then they are equal to one another; i.e.
f_xy = f_yx or ∂²z/∂y∂x = ∂²z/∂x∂y
Schematically: z = f(x, y) → ∂z/∂x = f_x and ∂z/∂y = f_y → f_xx, f_xy = f_yx, f_yy
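The equality f_xy = f_yx can be illustrated numerically with nested central differences. A minimal sketch; the function below is an illustrative smooth choice, not one from the slides:

```python
def f(x, y):
    # Illustrative smooth function (an assumption, not from the slides)
    return x**3 * y**2 + x * y

def f_xy(x, y, h=1e-4):
    # d/dy of (df/dx): nested central differences
    fx = lambda a, b: (f(a + h, b) - f(a - h, b)) / (2 * h)
    return (fx(x, y + h) - fx(x, y - h)) / (2 * h)

def f_yx(x, y, h=1e-4):
    # d/dx of (df/dy)
    fy = lambda a, b: (f(a, b + h) - f(a, b - h)) / (2 * h)
    return (fy(x + h, y) - fy(x - h, y)) / (2 * h)

# Analytically f_xy = f_yx = 6*x**2*y + 1; at (1.5, 2.0) that is 28
print(f_xy(1.5, 2.0), f_yx(1.5, 2.0))
```

Because this f is a polynomial (so its mixed partials are continuous everywhere), the two orders of differentiation agree, as the theorem requires.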
Total Differential
• The meaning of differential in a multi-variable scalar function is no different from that in a one-variable function. The only difference is that the source of change in the dependent variable is the change of all independent variables; that is:
z + Δz = f(x + Δx, y + Δy)
Or Δz = f(x + Δx, y + Δy) − f(x, y)
But dz, which is called the "total differential", is defined as:
dz = (∂z/∂x)·dx + (∂z/∂y)·dy
Or dz = f_x·dx + f_y·dy
Adopted from Calculus: Early Transcendentals, James Stewart, p. 897
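The total differential approximates the exact change Δz for small dx and dy. A minimal numeric sketch; the function and the increments below are illustrative assumptions:

```python
def f(x, y):
    # Illustrative function (an assumption): f(x, y) = x**2 * y + y**3
    return x**2 * y + y**3

f_x = lambda x, y: 2 * x * y         # partial with respect to x
f_y = lambda x, y: x**2 + 3 * y**2   # partial with respect to y

x0, y0 = 1.0, 2.0
dx, dy = 0.01, -0.02
delta_z = f(x0 + dx, y0 + dy) - f(x0, y0)      # exact change
dz = f_x(x0, y0) * dx + f_y(x0, y0) * dy       # total differential f_x*dx + f_y*dy
print(dz)                                      # 4*0.01 + 13*(-0.02) = -0.22
print(abs(delta_z - dz) < 0.01)                # close for small dx, dy
```

The gap between Δz and dz shrinks quadratically as the increments shrink, which is why dz is used as a linear approximation of the change in z.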
Total Differential
• For a multi-variable scalar function the same rule applies:
z = f(x₁, x₂, …, xₙ)
dz = (∂z/∂x₁)·dx₁ + (∂z/∂x₂)·dx₂ + ⋯ + (∂z/∂xₙ)·dxₙ
• In the case of a two-variable function z = f(x, y) we assumed x and y are independent, but if they depend on other variables the differential of each one of them can be treated as the total differential of a dependent variable; that is:
z = f(x, y) ⟹ dz = (∂z/∂x)·dx + (∂z/∂y)·dy  (A)
x = h(s, t) ⟹ dx = (∂x/∂s)·ds + (∂x/∂t)·dt  (B)
y = g(s, t) ⟹ dy = (∂y/∂s)·ds + (∂y/∂t)·dt  (C)
Substituting B and C into A:
dz = (∂z/∂x)·[(∂x/∂s)·ds + (∂x/∂t)·dt] + (∂z/∂y)·[(∂y/∂s)·ds + (∂y/∂t)·dt]
Total Differential
If we are looking for the total derivatives of z with respect to s and t, which were introduced before as the chain rule (case 3), we need to suppose that s and t are independent variables and not associated with each other (∂s/∂t = ∂t/∂s = 0); then:
dz = [(∂z/∂x)·(∂x/∂s) + (∂z/∂y)·(∂y/∂s)]·ds + [(∂z/∂x)·(∂x/∂t) + (∂z/∂y)·(∂y/∂t)]·dt
So:
∂z/∂s = (∂z/∂x)·(∂x/∂s) + (∂z/∂y)·(∂y/∂s)
and
∂z/∂t = (∂z/∂x)·(∂x/∂t) + (∂z/∂y)·(∂y/∂t)
Second-Order Total Differential
• The sign of the second-order total differential d²z shows the convexity or concavity of the surface with respect to the xOy plane.
• Considering the total differential dz, the second-order total differential d²z can be obtained by applying the differential rules:
d²z = d(dz) = d[(∂z/∂x)·dx + (∂z/∂y)·dy] = d(f_x·dx + f_y·dy) = df_x·dx + f_x·d(dx) + df_y·dy + f_y·d(dy)
As d(dx) = d²x = 0 and d(dy) = d²y = 0, and
df_x = f_xx·dx + f_xy·dy
df_y = f_yx·dx + f_yy·dy
therefore:
d²z = f_xx·dx² + 2f_xy·dx·dy + f_yy·dy²
• Factorising dy² from the right-hand side, we have:
d²z = dy²·[f_xx·(dx/dy)² + 2f_xy·(dx/dy) + f_yy]
Second-Order Differential
• dy² > 0 (why?); so the sign of d²z depends on the sign of the expression in the bracket.
• From elementary algebra we know that the quadratic form at² + bt + c has the same sign as the parameter a when Δ = b² − 4ac < 0.
• If we let t = dx/dy and a = f_xx, b = 2f_xy, c = f_yy, then d²z = dy²·(at² + bt + c) has the same sign as a = f_xx if
(2f_xy)² − 4f_xx·f_yy < 0 ⟹ f_xx·f_yy > f_xy²
So:
1. d²z > 0 if f_xx > 0 and f_xx·f_yy > f_xy².
2. d²z < 0 if f_xx < 0 and f_xx·f_yy > f_xy².
Adopted from Calculus: Early Transcendentals, James Stewart (different pages)
Optimising Two-Variable Functions
• The two-variable function z = f(x, y) has a relative maximum (relative minimum) at a point in its domain if at that point:
i. f_x = 0 and f_y = 0, simultaneously (necessary conditions for differentiable functions).
ii. f_xx < 0 (f_xx > 0)
iii. f_xx·f_yy − f_xy² > 0
(Conditions ii and iii are the sufficient conditions.)
Note 1: If f_xx·f_yy − f_xy² < 0, the critical point is not a maximum or a minimum but a saddle point (it looks like a maximum from one axis but a minimum from another); e.g. z = x² − y².
Adopted from http://commons.wikimedia.org/wiki/File:Saddle_point.png
Optimising Two-Variable Functions
• Note 2: If f_xx·f_yy − f_xy² = 0 at the critical point, further investigation is needed to find out the nature of the point.
• Example:
o Find the local extremum of the function f(x, y) = 2x³ − 6xy + 8y³, if any.
f_x = 0 and f_y = 0 ⟹ 6x² − 6y = 0 and −6x + 24y² = 0 ⟹ x² = y and −x + 4y² = 0
After solving these simultaneous equations two critical points emerge:
A(0, 0, 0) and B(∛(1/4), ∛(1/16), −1/2).
Optimising Two-Variable Functions
Now, f_xx = 12x, f_yy = 48y and f_xy = f_yx = −6.
So, f_xx·f_yy − f_xy² = 12x·48y − (−6)² = 576xy − 36.
At the point A(0, 0, 0): f_xx·f_yy − f_xy² = −36 < 0 ⟹ A is a saddle point.
At the point B(∛(1/4), ∛(1/16), −1/2): f_xx·f_yy − f_xy² = 144 − 36 = 108 > 0 and f_xx > 0, so this point is a local minimum.
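The classification above can be packaged as a small second-derivative test, using the second partials computed in the example:

```python
# Second-derivative test for the example, using f_xx = 12*x, f_yy = 48*y, f_xy = -6
def classify(x, y):
    fxx, fyy, fxy = 12 * x, 48 * y, -6
    D = fxx * fyy - fxy**2          # the discriminant f_xx*f_yy - f_xy**2
    if D < 0:
        return "saddle"
    if D > 0:
        return "minimum" if fxx > 0 else "maximum"
    return "inconclusive"

A = (0.0, 0.0)
B = ((1/4)**(1/3), (1/16)**(1/3))   # the second critical point
print(classify(*A), classify(*B))   # saddle minimum
```

At B, xy = ∛(1/64) = 1/4, so D = 576·(1/4) − 36 = 108 > 0 with f_xx > 0, matching the slide's conclusion.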
The Jacobian & Hessian Determinants
• From matrix algebra we know that for any square matrix A, if
|A| = 0 ⟹ A is a singular matrix,
which means there exists linear dependence between at least two rows or two columns of the matrix.
And if
|A| ≠ 0 ⟹ A is a non-singular matrix,
which means all rows and all columns are linearly independent.
• So, to test for linear dependence between the equations in a simultaneous system, the determinant of the coefficients matrix can be used.
The Jacobian & Hessian Determinants
• To test for functional dependence (both linear and non-linear) between different functions we use the Jacobian determinant, shown by |J|.
• The Jacobian matrix is the matrix of all first-order partial derivatives of a vector function F: Rⁿ → Rᵐ, which maps a vector in n-dimensional space (real n-tuples) to a vector in m-dimensional space (real m-tuples):
y₁ = F₁(x₁, x₂, …, xₙ)
y₂ = F₂(x₁, x₂, …, xₙ)
⋮
yₘ = Fₘ(x₁, x₂, …, xₙ)
So, the Jacobian matrix of F is:
J = [∂F₁/∂x₁ ⋯ ∂F₁/∂xₙ; ⋮ ⋱ ⋮; ∂Fₘ/∂x₁ ⋯ ∂Fₘ/∂xₙ]
Each row contains the partial derivatives of one of the functions (e.g. F₁) with respect to all independent variables x₁, x₂, …, xₙ.
The Jacobian & Hessian Determinants
• If m = n, the Jacobian matrix is a square matrix and its determinant shows whether there is functional dependence or independence between the functions.
|J| = 0 ⟹ the functions are functionally dependent.
This means there is a linear or non-linear association between the functions.
|J| ≠ 0 ⟹ the functions are functionally independent.
This means there is no linear or non-linear association between the functions.
Example: Use the Jacobian determinant to test the functional dependency of the following equations:
y₁ = 2x₁ − 3x₂
y₂ = 4x₁² − 12x₁x₂ + 9x₂²
The Jacobian & Hessian Determinants
• The Jacobian determinant is:
|J| = |∂y₁/∂x₁ ∂y₁/∂x₂; ∂y₂/∂x₁ ∂y₂/∂x₂| = |2 −3; 8x₁ − 12x₂ −12x₁ + 18x₂|
= 2(−12x₁ + 18x₂) − (−3)(8x₁ − 12x₂) = 0
• So, the functions are not independent.
• We expected such a result, as we know that there is a quadratic functional relationship between y₁ and y₂:
y₁² = y₂
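The zero determinant can be confirmed at arbitrary sample points; the test points below are arbitrary choices:

```python
# y1 = 2*x1 - 3*x2 and y2 = 4*x1**2 - 12*x1*x2 + 9*x2**2 (= y1**2)
def jacobian_det(x1, x2):
    # |J| = | dy1/dx1  dy1/dx2 ; dy2/dx1  dy2/dx2 |
    a, b = 2, -3
    c, d = 8 * x1 - 12 * x2, -12 * x1 + 18 * x2
    return a * d - b * c

# |J| vanishes everywhere, signalling functional dependence (y2 = y1**2)
print(abs(jacobian_det(1.0, 2.0)) < 1e-9, abs(jacobian_det(-3.5, 0.7)) < 1e-9)
```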
The Jacobian & Hessian Determinants
• The Hessian matrix is a square matrix composed of the second-order partial derivatives of a real (scalar) multi-variable function f: Rⁿ → R. For a function y = f(x₁, x₂, …, xₙ), the Hessian determinant is defined as:
|H| = |∂²f/∂x₁² ∂²f/∂x₁∂x₂ ⋯ ∂²f/∂x₁∂xₙ; ∂²f/∂x₂∂x₁ ∂²f/∂x₂² ⋯ ∂²f/∂x₂∂xₙ; ⋮ ⋮ ⋱ ⋮; ∂²f/∂xₙ∂x₁ ∂²f/∂xₙ∂x₂ ⋯ ∂²f/∂xₙ²| = |f₁₁ f₁₂ … f₁ₙ; f₂₁ f₂₂ … f₂ₙ; ⋮ ⋮ ⋱ ⋮; fₙ₁ fₙ₂ … fₙₙ|
• In the optimisation of a two-variable function, if the first-order (necessary) conditions f_x = f_y = 0 are met, the second-order (sufficient) conditions are:
f_xx, f_yy > 0 for a minimum and f_xx, f_yy < 0 for a maximum
f_xx·f_yy − f_xy² > 0
The Jacobian & Hessian Determinants
• Using the Hessian determinant, we can simply show the sufficient conditions as:
The optimal point is a minimum if |H₁| > 0 and |H₂| > 0, because:
o |H₁| = f_xx > 0
o |H₂| = |f_xx f_xy; f_yx f_yy| = f_xx·f_yy − f_xy² > 0
And the optimal point is a maximum if |H₁| < 0 and |H₂| > 0.
• The same story holds for a multi-variable function y = f(x₁, x₂, …, xₙ):
If |H₁|, |H₂|, |H₃|, …, |Hₙ| > 0, the critical point is a local minimum.
If the principal minors change their signs consecutively, the critical point is a local maximum (e.g. in the case of y = f(x₁, x₂, x₃): |H₁| < 0, |H₂| > 0 and |H₃| < 0).
Optimisation with a Constraint
• In reality, the independent variables in a function are not fully independent from each other. They might be in a linear or even non-linear relationship with one another, which creates a constraint in the process of optimisation and changes its result.
g(x, y) = c (linear constraint or non-linear constraint)
Adopted from http://staff.www.ltu.se/~larserik/applmath/chap7en/part7.html
Adopted & altered from http://en.wikipedia.org/wiki/Lagrange_multiplier
Optimisation with a Constraint
• In each case, the function z = f(x, y) is the target function for optimisation, subject to a constraint g(x, y) = c (where c is a constant). So:
Max or Min: z = f(x, y), subject to: g(x, y) = c
• If the constraint function g(x, y) = c is linear (e.g. x − 2y = −1), one way to include the constraint in the optimisation process is to solve the constraint for one variable in terms of the other (here, x = 2y − 1) and substitute it into the target function, making it a function of one independent variable, z = F(y); then follow the optimisation process for a one-variable function.
Example
• Example: Find the maximum of the function z = xy subject to the constraint x + y = 1.
From the constraint we have y = −x + 1, and if we substitute this for y in the target function, we get z = −x² + x.
dz/dx = 0 ⟹ −2x + 1 = 0 ⟹ x = 0.5
Putting this into the constraint equation to find y, and both into the target function to find z, the maximum point is A(0.5, 0.5, 0.25).
How do we know the point is a maximum?
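The substitution method can be sketched directly; the second-order check answers the question of why the point is a maximum:

```python
# Maximise z = x*y subject to x + y = 1 by substitution:
# y = 1 - x turns the target into the one-variable function F(x) = -x**2 + x
F = lambda x: -x**2 + x

# First-order condition: F'(x) = -2*x + 1 = 0  ->  x = 0.5
x_star = 0.5
y_star = 1 - x_star
z_star = x_star * y_star
print(x_star, y_star, z_star)       # 0.5 0.5 0.25

# Second-order check: F''(x) = -2 < 0, so x = 0.5 is a maximum;
# a crude grid search over [0, 1] agrees
best = max(F(i / 1000) for i in range(1001))
print(abs(best - z_star) < 1e-9)
```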
The Lagrange Method
• If the constraint function is non-linear, the previous method might become very complicated. Another method, called the "Lagrange Method" or the "Method of Lagrange Multipliers", can help us find local extremum points.
• In the Lagrange method the constraint function enters the process of optimisation through a new variable λ (the Lagrange multiplier, or Lagrange coefficient), used to build the Lagrange function L, in the form:
L(x, y, λ) = f(x, y) + λ·[c − g(x, y)]
• By changing x and y, a point moves on the surface of the function, but the movement is limited to the constraint g(x, y) = c.
• This means c − g(x, y) = 0 and L(x, y, λ) = f(x, y). So, the optimisation of L is equivalent to the optimisation of f.
The Lagrange Method
• To find the extremum values we need to take the derivatives of the Lagrange function with respect to its variables and solve the following simultaneous equations (the necessary conditions for having extremums, system A):
∂L/∂x = 0 ⟹ ∂f/∂x − λ·∂g/∂x = 0
∂L/∂y = 0 ⟹ ∂f/∂y − λ·∂g/∂y = 0
∂L/∂λ = 0 ⟹ c − g(x, y) = 0
• Solving these simultaneous equations gives us the critical values of x and y and a value for λ.
• λ shows the sensitivity of the target (objective) function to a change in the constraint function.
Sufficient Condition
• To make sure that the critical point(s) obtained from solving the simultaneous equations are extremum(s), we need sufficient evidence, which is the sign of the second-order differential of the Lagrange function, d²L, at the critical point(s).
• If L = f(x, y) + λ·[c − g(x, y)] then
dL = df − λ·dg + (c − g)·dλ
and d²L = d²f − 2dλ·dg − λ·d²g (on the constraint, c − g = 0)
Since:
d²c = 0
d²f = f_xx·dx² + 2f_xy·dx·dy + f_yy·dy²
dg = g_x·dx + g_y·dy
d²g = g_xx·dx² + 2g_xy·dx·dy + g_yy·dy²
therefore:
d²L = (f_xx − λ·g_xx)·dx² + 2(f_xy − λ·g_xy)·dx·dy + (f_yy − λ·g_yy)·dy² − 2g_x·dx·dλ − 2g_y·dy·dλ
= L_xx·dx² + 2L_xy·dx·dy + L_yy·dy² − 2g_x·dx·dλ − 2g_y·dy·dλ
• In matrix form we can use the bordered Hessian matrix to represent the above quadratic form:
d²L = (dλ dx dy) · [0 −g_x −g_y; −g_x L_xx L_xy; −g_y L_yx L_yy] · (dλ dx dy)ᵀ
• where the bordered Hessian matrix is:
H̄₃ = [0 −g_x −g_y; −g_x L_xx L_xy; −g_y L_yx L_yy] or sometimes H̄₃ = [L_xx L_xy −g_x; L_yx L_yy −g_y; −g_x −g_y 0]
Sufficient Condition
• In the second form, the components of the vectors of first differentials of the variables need to be re-arranged, i.e.:
d²L = (dx dy dλ) · [L_xx L_xy −g_x; L_yx L_yy −g_y; −g_x −g_y 0] · (dx dy dλ)ᵀ
• Note: In some books the constraint function g enters the Lagrange function with a positive sign, so the signs of the first derivatives of g in the bordered Hessian matrix are positive, but there is no difference between their determinants. (Based on the properties of determinants, if just one row or just one column of a matrix is multiplied by k, the determinant of the matrix is multiplied by k. In this case the first row and the first column are each multiplied by −1, so the determinant is multiplied by (−1)×(−1) = 1.)
Sufficient Condition
So, we have a minimum if:
1. d²L > 0 (i.e. all the bordered principal minors of the Hessian matrix are negative: |H̄₂|, |H̄₃| < 0)
And a maximum if:
2. d²L < 0 (i.e. the bordered principal minors change their sign one after another: |H̄₂| < 0, |H̄₃| > 0)
• For a multi-variable function y = f(x₁, x₂, …, xₙ), the bordered Hessian matrix is larger but the rule is the same:
• For a minimum: |H̄₂|, |H̄₃|, …, |H̄ₙ| < 0.
• For a maximum: the signs of the bordered principal minors change alternately.
Example
• Find the extremums of the function f(x, y) = x − y subject to x² + y² = 100, if any.
L(x, y, λ) = x − y + λ·[100 − x² − y²]
L_x = 1 − 2λx = 0
L_y = −1 − 2λy = 0
L_λ = 100 − x² − y² = 0
Dividing the first equation by the second: 1/(−1) = 2λx/(2λy)
From the first two equations λ can be eliminated and we have x = −y. Substituting this into the third equation we get:
100 − y² − y² = 0 ⟹ y = ±5√2
So, the critical points are A(−5√2, 5√2, −10√2) and B(5√2, −5√2, 10√2), with λ = −√2/20 at A and λ = +√2/20 at B.
Without any further investigation it can be said that point A is a minimum and point B is a maximum. (Why?)
Example
• Using the bordered Hessian determinant method we have:
|H̄₃| = |0 −2x −2y; −2x −2λ 0; −2y 0 −2λ| = 8λ(x² + y²)
Obviously, the sign of this determinant depends on the sign of λ.
At point A(−5√2, 5√2, −10√2), λ = −√2/20, so |H̄₃| < 0 and the point is a minimum (|H̄₂| is also negative).
At point B(5√2, −5√2, 10√2), λ = +√2/20, so |H̄₃| > 0 and the point is a maximum.
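The Lagrange result can be double-checked by brute force: walk around the constraint circle and record the values of the target function. The extremes should match ±10√2:

```python
import math

f = lambda x, y: x - y              # target function from the example

# Walk around the constraint circle x**2 + y**2 = 100 and record f
vals = [f(10 * math.cos(t), 10 * math.sin(t))
        for t in (2 * math.pi * k / 100000 for k in range(100000))]

# Lagrange result: max 10*sqrt(2) at B(5*sqrt(2), -5*sqrt(2)),
# min -10*sqrt(2) at A(-5*sqrt(2), 5*sqrt(2))
print(abs(max(vals) - 10 * math.sqrt(2)) < 1e-4)
print(abs(min(vals) + 10 * math.sqrt(2)) < 1e-4)
```

This also answers the "(Why?)" above: on a closed, bounded constraint set a continuous target function must attain its maximum and minimum, and the two critical points are the only candidates.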
• If there is more than one constraint, the process of optimisation is the same but there will be more than one Lagrange multiplier.
• This case is a generalisation of the previous one and will not be discussed here.
Interpretation of the Lagrange Multiplier λ
• The first-order conditions, in the form of the simultaneous equations (system A above), provide the critical (and perhaps optimal) values of the independent variables (x*, y*) and the corresponding value(s) of the Lagrange multiplier (λ*).
• The Lagrange multiplier shows the sensitivity of the optimal value of the target (objective) function (f*) to a change in the constant of the constraint function (c). It is calculated as the ratio:
λ* = ∂f*(x*, y*)/∂c
This means that if λ* = 2 and c increases by one (small) unit, the value of the target function (calculated at the optimal values x* and y*) increases by approximately 2 units.
Duality in Optimisation Analysis
• Consider the process of maximisation of the target (objective) function z = f(x, y), subject to the constraint c = g(x, y).
• As we know, the solution is the point of tangency of the two functions, so the process of optimisation can be approached in different ways. The primal approach is what we have discussed and done so far; the dual approach takes the constraint function c = g(x, y) as the new target function and z = f(x, y) as the new constraint.
• The initial idea comes from the mathematical fact that if f reaches its maximum at the point x = x*, the function −f will have a minimum at that point.
• Therefore, instead of finding the maximum of z = f(x, y) subject to the constraint c = g(x, y), we can find the minimum of c = g(x, y) subject to the constraint z = f(x, y); i.e. if we know that z cannot be bigger than z*, what is the minimum value of g(x, y) that satisfies this constraint?
Duality in Optimisation Analysis
• Let U = U(x, y) be the utility function, subject to the budget constraint x·P_x + y·P_y = M.
• The Lagrange function is:
L(x, y, λ) = U(x, y) + λ·(M − x·P_x − y·P_y)
The first-order conditions are (system B):
L_x = U_x − λ·P_x = 0
L_y = U_y − λ·P_y = 0
L_λ = M − (x·P_x + y·P_y) = 0
• The optimal values for x and y, which give the Marshallian demand (consumption) functions for x and y, and the optimal value for λ, are:
x^M = x^M(P_x, P_y, M)
y^M = y^M(P_x, P_y, M)
λ^M = λ^M(P_x, P_y, M)
Duality in Optimisation Analysis
• Substituting these solutions into the target function gives the maximum value of utility that can be achieved under the constraint:
U* = U*(x^M(P_x, P_y, M), y^M(P_x, P_y, M))
We call this the indirect utility function: it is the maximum value of utility, obtained at the optimal values of x and y, but it is an indirect function because its value now depends on the parameters P_x, P_y and M.
• Now, the dual problem is to minimise the expenditure on x and y subject to maintaining a given level of utility U*. So the new Lagrange function is:
L(x, y, λ) = x·P_x + y·P_y + λ·[U* − U(x, y)]
The first-order conditions provide optimal solutions for x, y and λ.
Duality in Optimisation Analysis
The first-order conditions are (system C):
L_x = P_x − λ·U_x = 0
L_y = P_y − λ·U_y = 0
L_λ = U* − U(x, y) = 0
The optimal solutions represent the demand functions for x and y:
x^H = x^H(P_x, P_y, U*)
y^H = y^H(P_x, P_y, U*)
λ^H = λ^H(P_x, P_y, U*)
• The first two equations are called Hicksian demand functions.
Both simultaneous equation systems B and C give us the same results:
U_x/P_x = U_y/P_y or U_x/U_y = P_x/P_y
So, primal and dual analysis lead us to the same conclusion. The only difference is that:
λ^H = 1/λ^M