Ec 813a © 1998 R. H. Rasche

II. Some Mathematical Notes.

Logarithms (Logs) have a large number of applications in economics and

economic modeling. It is probably well worth while getting a calculus book and

reviewing the properties of the log transformation (alternatively check the Branson

reading on the first section of the syllabus). One characteristic of the log transformation

that makes it so useful in economics is the additive property of logs of products:

if A*B = C, then ln(A) + ln(B) = ln(C),

where ln is the natural log (base e) transformation. (Actually logs to any base have this

property, but natural logs are used almost exclusively in economics because of their

relationship to growth rates in continuous time).

The reason why this is so useful is that a lot of relationships in economics are

multiplicative, and hence become log-linear. As examples consider the following which

arise frequently in macroeconomic models:

a) nominal income (Y) = real output (Q) x a price index (P), so

lnY = lnQ + lnP

b) profit maximization condition: marginal product of labor (MPL) = real wage

rate (w/P), so

ln(MPL) = lnw - lnP

c) definition of real money stock: real money stock (MR) = nominal money stock

(M) divided by the price level (P), so:

ln(MR) = lnM - lnP.

Such relations are convenient, since if we have a model which is linear in two variables

lnQ and lnP, we can extend the model to include the behavior of nominal income by


adding an additional equation (identity) using the definition of lnY without losing the

linearity of the model.

Log transformations also become useful in the transformation of economic

relationships. Consider for example an aggregate production function of the Cobb-

Douglas form:

Q = aN^b K^(1-b),

where Q = real output, N = input of labor services and K = input of capital services.

Using the log transformation we get:

lnQ = lna + blnN + (1-b)lnK,

using the property that exponents transform into multiplicative terms in logs.

It is also convenient to think of percentage changes in terms of differences in

logs. Consider the percentage change from X0 to X1, defined as PC = (X1 - X0)/X0. Now

consider 1 + PC = X1/X0, so ln(1+PC) = ln(X1) - ln(X0).

Second consider the series expansion of ln(1+PC) = PC - PC^2/2 + PC^3/3 - ...

(valid for |PC| < 1). For small values of PC this is approximately equal to PC (truncating the series

expansion at the linear term). Putting these two pieces together we get the

approximation:

ln(X1) - ln(X0) ≈ (X1 - X0)/X0.
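A quick numerical check (a minimal Python sketch; the numbers are illustrative) shows how good the approximation is for small changes and how it deteriorates for larger ones:

import math

# Compare the log difference ln(X1) - ln(X0) with the percentage change (X1 - X0)/X0
# for progressively larger changes starting from X0 = 100.
X0 = 100.0
for pct in (0.01, 0.05, 0.10, 0.25):
    X1 = X0 * (1 + pct)
    log_diff = math.log(X1) - math.log(X0)
    print(f"PC = {pct:.2f}   ln(X1)-ln(X0) = {log_diff:.4f}   error = {log_diff - pct:.4f}")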

With this in mind consider two points on a Cobb-Douglas production function, Q1 and Q0

corresponding to (N1,K1) and (N0,K0) respectively. From the Cobb-Douglas production

function above we have:

ln(Q1) - ln(Q0) = b [ln(N1) - ln(N0)] + (1-b)[ln(K1) - ln(K0)].


Since we can approximate log differences by percentage changes (for small changes)

and vice versa, it is appropriate to rewrite this expression as:

(Q1-Q0)/Q0 ≈ b[(N1-N0)/N0] + (1-b)[(K1-K0)/K0]

from which it should be apparent that b is the elasticity of real output with respect to

labor input and (1-b) is the elasticity of real output with respect to capital input along the

Cobb-Douglas production function. This generalizes so that the coefficients in any

double log function (i.e. an equation in which both the left and right hand side variables

are logs) are the (partial) elasticities of the left hand side variable with respect to the

particular right hand side variable, and the functions have constant elasticity at all points.

This is an important point to keep in mind when you encounter econometric studies that

you want to interpret, since the double log specification is one that is quite common in

applied work.
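A small Python sketch (with illustrative values of a and b, not taken from any particular study) confirms that the exponent b behaves as the partial elasticity of output with respect to labor input in the double log form:

import math

# Cobb-Douglas production function Q = a * N**b * K**(1-b); illustrative parameter values.
a, b = 1.0, 0.7

def Q(N, K):
    return a * N**b * K**(1 - b)

# A 1 percent increase in labor input should raise output by about b percent:
# the coefficient on lnN is the partial elasticity of Q with respect to N.
N0, K0 = 100.0, 50.0
N1 = N0 * 1.01
dlnQ = math.log(Q(N1, K0)) - math.log(Q(N0, K0))
dlnN = math.log(N1) - math.log(N0)
print("elasticity estimate:", dlnQ / dlnN)   # exactly b = 0.7 for a double log function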

In some cases, particularly in the construction of theoretical models (as contrasted

with models that are intended to be used in econometric applications) it is convenient to

represent some variables in terms of their logs and some other variables in terms of their

natural units. In such cases, the resulting equations are referred to as semi-log functions.

The primary purpose of such specifications is to maintain the linearity of the model in

terms of the endogenous variables. This makes it possible to construct reduced form or

final form representations of the economic model using standard matrix inversion

technology. Nonlinearity in terms of the exogenous variables is much easier to deal with

than nonlinearities in the endogenous variables. If you can figure out a way to avoid the

latter, it is probably well worth the cost of some specification error.


Another particularly useful application of changes in logs is the first difference

(the change over time) of the logs of a variable. Since the log change is approximately

equal to the percentage change in a variable, and the percentage change over time

measures the growth rate of the variable, first differences of logs are approximately equal

to growth rates of variables. This is particularly useful in measuring things such as the

inflation rate, which is the growth rate of the price level. In many applications it is

convenient to measure the inflation rate (π) as πt = ln(Pt) - ln(Pt-1), where P is the index of

the price level.

It is in this context that natural logs are particularly useful. Consider the problem

of compounding of interest. In the discrete time case interest on one dollar at a rate of i

per period, compounded once per period accumulates to (1+i)^n dollars at the end of n

periods. What happens if we maintain the stated rate per period, but increase the

frequency of the compounding? At the end of n periods we will accumulate:

(1+i/2)^2n if we compound twice per period at a rate of i/2

(1+i/4)^4n if we compound 4 times per period at a rate of i/4

(1+i/12)^12n if we compound 12 times per period at a rate of i/12.

In general we accumulate:

(1+i/k)^kn at the end of n periods if we compound k times per period at a rate of i/k.

What happens if we allow k → ∞? If we take the limit of the expression (1+i/k)^kn as

k → ∞, we get e^in. Thus if Xt = 1 dollar, with continuous compounding of interest Xt+n = e^in.

Thus ln(Xt+1) - ln(Xt) = i - 0 = i. Hence the first differences of logs of variables exactly

measure the continuous compounding rate of growth of that variable.
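A short Python sketch (illustrative values of i and n) shows the accumulation converging to the continuous compounding limit as k grows:

import math

# Accumulation of one dollar over n periods at stated rate i per period,
# compounded k times per period: (1 + i/k)**(k*n).  As k grows this
# approaches e**(i*n), the continuous compounding limit.
i, n = 0.05, 10
for k in (1, 2, 4, 12, 365, 100000):
    print(k, (1 + i / k) ** (k * n))
print("e^(i*n) =", math.exp(i * n))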

Linear Operators (Baumol: Economic Dynamics, 3rd edition, pp. 334-56)


A particularly useful tool in the analysis of many macroeconomic models is the

mathematical concept of an operator, in particular a linear operator. One such operator,

with which you may be familiar from statistics is the expectation operator, usually

denoted by the expression E(x). We know that the mathematical expectation of a

variable is an integral involving the product of the variable and its probability density

over the range of the density function: E(x) = ∫x·f(x)dx. However, by using the

expectation operator in certain valid ways, we are able to manipulate mathematical

expectations in simple ways to reach useful conclusions. Two familiar properties of the

expectation operator are E(ax) = aE(x) for any constant a and E(x+y) = E(x) + E(y) for

two random variables x and y.

A rough definition of an operator is a symbol which represents some

transformation of a variable into some other variable in a specified manner (note that the

natural log transformation ln fits into this definition, but ln is a nonlinear operator). The

particular operator which we will find useful is the lag (or backshift) operator, which we

will designate by L (but which you will sometimes see referred to as B). The

transformation that this operator brings about is a change in the timing of the observation

on a variable by shifting it back in time. In particular, the definition of the operator is

L(xt) = xt-1, where the subscript on the variable x indicates the time period in which that

variable is observed. With this definition, the properties of the expectation operator

indicated above clearly apply to the lag operator. L(axt) = axt-1 = aL(xt) for any constant

a. Also L(xt + yt) = xt-1 + yt-1 = L(xt) + L(yt), hence L(axt+byt) = aL(xt) + bL(yt) for any

constants a and b. Beyond this we also have a multiplication property of the lag operator

that is particularly useful: Consider L^2(xt) = L[L(xt)] = L[xt-1] = xt-2. More generally it


can be shown that L^i(xt) = xt-i. This allows us to express distributed lags of a variable, i.e.

Σ aixt-i = a(L)xt, where a(L) is a polynomial in the operator L with coefficients ai (a(L) =

a0 + a1L + a2L^2 + ...).

Using this notation we have a particularly compact notation for first differences

of variables as xt - xt-1 = (1-L)xt. In particular, if we approximate the inflation rate as the

log first difference of the price level then we can express inflation (πt) as πt = (1-L)lnPt.
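A minimal Python sketch of the idea (the function name apply_lag_polynomial is ours, purely for illustration) applies a lag polynomial to a series and computes the inflation rate as (1-L)lnPt:

import numpy as np

def apply_lag_polynomial(coeffs, x):
    """Apply a(L) = a0 + a1*L + a2*L**2 + ... to a series x, i.e. return sum_i a_i * x[t-i].
    The first len(coeffs)-1 entries are np.nan because the required lags are unavailable."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    p = len(coeffs) - 1
    for t in range(p, len(x)):
        out[t] = sum(a * x[t - i] for i, a in enumerate(coeffs))
    return out

# Inflation as the first difference of the log price level: pi_t = (1 - L) ln P_t.
P = np.array([100.0, 102.0, 105.0, 105.0, 108.0])
pi = apply_lag_polynomial([1.0, -1.0], np.log(P))   # coefficients of (1 - L)
print(pi)   # first entry is nan; the rest are log first differences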

The usefulness of polynomials is that the familiar algebraic operations from scalar

(i.e. zero order polynomials) variables are well defined for polynomials. The results of

addition, subtraction, multiplication and division of polynomials are all well defined

expressions. In the latter two cases, the computations get a little messy, but with enough

patience (or the help of a computer program) it is always possible to work out the results.

We will find that these operations are quite useful when we want to determine the

dynamic characteristics of macroeconomic models.

A property of polynomials that is useful in stationary or steady state analysis is

that the sum of the coefficients in any polynomial a(L) can be represented by a(1), i.e.

the value of the polynomial evaluated under the assumption that L=1.

In many cases, analysis can be simplified by factoring polynomials. Let

a(L) = a0 + a1L + a2L^2 + ... + anL^n. Then

a(L) can always be factored into the product of polynomials:

a(L) = b1(L)b2(L)b3(L)...bk(L) where the bi(L) are either linear or quadratic polynomials

with real coefficients:

bi(L) = bi0 + bi1L or

bi(L) = bi0 + bi1L + bi2L^2.


(If you are willing to deal with complex valued coefficients, any quadratic terms can be

factored into linear terms with complex coefficients determined by the quadratic

formula). In the particular case where one of the linear factors is of the form (1-L) (a first

difference), the polynomial is said to have a unit root.

Polynomial division (or inversion of a polynomial) is only possible under

restrictive conditions. In particular, a polynomial is said to be invertible if the roots of the

polynomial (the values of L for which the value of the polynomial is equal to zero) all lie

outside of the unit circle. If a finite order polynomial is invertible, then the inverse of

that polynomial is an infinite order polynomial. Three particular cases that arise

frequently in macroeconomic models are:

First order (linear) polynomials: a(L) = 1 - a1L. (Note that for any linear polynomial

the zero order term can always be factored out, so that it can be expressed in this form.)

The root of a polynomial of this form is a1^-1, so the requirement for invertibility is that

the absolute value of a1 < 1. If this is satisfied then the straightforward application of

long division results in

a^-1(L) = 1 + a1L + a1^2L^2 + a1^3L^3 + ...

which is a convergent series as long as a(L) is invertible.
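A short Python sketch (with an illustrative value of a1) computes the long-division coefficients and checks that multiplying a(L) by the truncated a^-1(L) gives approximately 1:

import numpy as np

# Invert a(L) = 1 - a1*L for |a1| < 1: a^-1(L) = 1 + a1*L + a1**2*L**2 + ...
a1 = 0.6
n_terms = 30
inverse_coeffs = a1 ** np.arange(n_terms)          # coefficients of a^-1(L)

# Check: multiplying the polynomials should give approximately 1 (all other
# coefficients cancel, up to the truncation of the infinite series).
a_coeffs = np.array([1.0, -a1])                    # coefficients of a(L), ordered 1, L, L^2, ...
product = np.convolve(a_coeffs, inverse_coeffs)
print(product[:5])          # approximately [1, 0, 0, 0, 0]
print(product[n_terms])     # truncation remainder: -a1**n_terms, negligible here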

Quadratic Polynomials with Unequal Real Roots (Sargent, pp. 183-86).

a(L) = 1 - a1L - a2L^2 = (1-λ1L)(1-λ2L), λ1 ≠ λ2.

Then a(L) = 1 - (λ1+λ2)L + λ1λ2L^2, so a1 = λ1+λ2 and a2 = -λ1λ2. λ1^-1 and λ2^-1 are the

roots of the quadratic, so invertibility requires that the absolute values of λ1^-1 and λ2^-1

exceed 1 (alternatively |λi| < 1.0). If the polynomial is invertible, then


a^-1(L) = 1/[(1-λ1L)(1-λ2L)] = [λ1/(λ1-λ2)][1/(1-λ1L)] - [λ2/(λ1-λ2)][1/(1-λ2L)],

which can be verified by putting both terms on the right hand side over a common

denominator and canceling terms. Thus

a^-1(L) = [λ1/(λ1-λ2)] Σ(j=0 to ∞) λ1^j L^j - [λ2/(λ1-λ2)] Σ(j=0 to ∞) λ2^j L^j.
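A small Python sketch (illustrative λ values) verifies the partial fraction expansion by comparing its coefficients with those obtained by direct series inversion:

import numpy as np

# a(L) = (1 - lam1*L)(1 - lam2*L) with distinct real lam's inside the unit circle.
lam1, lam2 = 0.8, 0.3
A1 = lam1 / (lam1 - lam2)          # weight on 1/(1 - lam1*L)
A2 = -lam2 / (lam1 - lam2)         # weight on 1/(1 - lam2*L)

# Coefficient on L^j in a^-1(L) from the partial fraction expansion ...
j = np.arange(20)
coeffs_pf = A1 * lam1**j + A2 * lam2**j

# ... versus the coefficients obtained by long division (series inversion) of
# a(L) = 1 - (lam1+lam2)*L + lam1*lam2*L^2.
a = np.array([1.0, -(lam1 + lam2), lam1 * lam2])
coeffs_div = np.zeros(20)
coeffs_div[0] = 1.0
for k in range(1, 20):
    # recursion: c_k = a1*c_{k-1} + a2*c_{k-2} with a1 = lam1+lam2, a2 = -lam1*lam2
    coeffs_div[k] = -a[1] * coeffs_div[k - 1] - (a[2] * coeffs_div[k - 2] if k >= 2 else 0.0)
print(np.allclose(coeffs_pf, coeffs_div))   # True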

Quadratic Polynomials with Complex Roots (Sargent pp. 181-182)

a(L) = 1 - a1L - a2L^2

If the roots (λj^-1) of this quadratic are complex, then they occur as a conjugate pair and

the λj are of the form λj = θ1 ± θ2i, which can be rewritten in polar coordinates as

r·cos[ω] ± i·r·sin[ω], where r = sqrt(-a2) and ω = cos^-1[.5a1/sqrt(-a2)]. Substituting for λj

in the expression for a^-1(L) gives

a^-1(L) = Σ(k=0 to ∞) {r^k sin[ω(k+1)]/sin[ω]} L^k.

(If you want to work this out, see the Appendix on some obscure trigonometric formulas.)

Hence the coefficients of a^-1(L) oscillate in a damped fashion as long as r < 1.0, i.e.

|a2| < 1.0. Alternatively, the invertibility conditions for a quadratic polynomial can be

expressed in terms of the coefficients of the polynomial. Invertibility requires a2 + a1 < 1.0;

a2 - a1 < 1.0; and -1.0 < a2 < 1.0. (See Box and Jenkins, Time Series Analysis:

Forecasting and Control, 1976, p. 56.)
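The closed form coefficients can be checked numerically; the following Python sketch (with illustrative a1 and a2 chosen so that the roots are complex) compares them with direct series inversion:

import numpy as np

# a(L) = 1 - a1*L - a2*L^2 with complex roots (a1^2 + 4*a2 < 0), e.g.:
a1, a2 = 1.0, -0.5
r = np.sqrt(-a2)                        # modulus of the lambda's
w = np.arccos(0.5 * a1 / np.sqrt(-a2))  # angle in polar coordinates

# Closed-form coefficients of a^-1(L): r^k * sin((k+1)w)/sin(w) ...
k = np.arange(15)
closed_form = r**k * np.sin((k + 1) * w) / np.sin(w)

# ... versus direct series inversion of 1 - a1*L - a2*L^2.
series = np.zeros(15)
series[0] = 1.0
for j in range(1, 15):
    series[j] = a1 * series[j - 1] + (a2 * series[j - 2] if j >= 2 else 0.0)
print(np.allclose(closed_form, series))   # True; the coefficients oscillate and damp since r < 1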

[Figure: Characteristics of the Second Order Difference Equation (1 - a1L - a2L^2). Region
labels: Damped Oscillations, Explosive Oscillations, Explosive Growth. Source: T.J. Sargent,
Macroeconomic Theory, 2nd edition, p. 189.]

Roots of Quadratic Polynomials

Suppose that the quadratic polynomial is of the form: 1 - a1L - a2L^2. Then the

solution for the roots of the polynomial in terms of the quadratic formula is:

rj = [-b ± sqrt(b^2 - 4ac)]/(2a), where a = -a2, b = -a1, and c = 1.0. Making these

substitutions gives:

rj = [a1 ± sqrt(a1^2 + 4a2)]/(-2a2).

Example: Consider the polynomial 1 - 4L + 3L^2. a1 = 4, a2 = -3, so:


rj = [4 ± sqrt(16 - 4(3)(1))]/[(2)(3)] = (4 ± 2)/6, i.e. r1 = 1 and r2 = 1/3.

For the roots of the polynomial to be real, a1^2 + 4a2 ≥ 0, so 4a2 ≥ -a1^2. This will be true

for all positive values of a2 and for some negative values of a2. If the roots of the

polynomial are complex, then a2 will be negative.
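For the example above, the roots can be confirmed numerically (a minimal Python sketch):

import numpy as np

# Roots of 1 - 4L + 3L^2 (numpy orders coefficients from the highest power down).
print(np.roots([3.0, -4.0, 1.0]))        # [1.0, 0.33333...]

# Discriminant check from the text: real roots require a1^2 + 4*a2 >= 0.
a1, a2 = 4.0, -3.0
print(a1**2 + 4 * a2)                    # 4.0 > 0, so the roots are real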

rth Order Polynomials with Unequal Roots

Let a(L) = Π(j=1 to r) (1 - λjL), where the roots of a(L) are the λj^-1. Then

a^-1(L) = Π(j=1 to r) (1 - λjL)^-1,

which can be expanded using partial fractions:

a^-1(L) = Σ(j=1 to r) Aj (1 - λjL)^-1.

Place each term on the right hand side over a common denominator Π(j=1 to r) (1 - λjL):

a^-1(L) = [Σ(j=1 to r) Aj Π(k≠j) (1 - λkL)] / [Π(j=1 to r) (1 - λjL)].

Thus Σ(j=1 to r) Aj Π(k≠j) (1 - λkL) = 1.0 for all values of L. Evaluating this expression at
L = λj^-1, j = 1, ..., r gives Aj Π(k≠j) (1 - λk/λj) = 1.0, j = 1, ..., r, or

Aj = 1/Π(k≠j) (1 - λk/λj) = λj^(r-1)/Π(k≠j) (λj - λk).

Thus a^-1(L) = Σ(j=1 to r) [λj^(r-1)/Π(k≠j) (λj - λk)] (1 - λjL)^-1.
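A small Python sketch (illustrative λ values) computes the weights Aj and checks the resulting expansion against direct series inversion of a(L):

import numpy as np

# Unequal roots lam_j^-1 of a(L) = prod_j (1 - lam_j*L); values are illustrative.
lam = np.array([0.9, 0.5, -0.4])
r = len(lam)

# Partial fraction weights A_j = lam_j**(r-1) / prod_{k != j} (lam_j - lam_k).
A = np.array([lam[j]**(r - 1) / np.prod([lam[j] - lam[k] for k in range(r) if k != j])
              for j in range(r)])

# Coefficient on L^m in a^-1(L) from the expansion: sum_j A_j * lam_j**m ...
m = np.arange(25)
coeffs_pf = sum(A[j] * lam[j]**m for j in range(r))

# ... versus direct series inversion of the expanded polynomial a(L).
a = np.poly(1 / lam)                    # monic polynomial built from the roots 1/lam_j
a = a / a[-1]                           # normalize so the constant term (coefficient of L^0) is 1
a = a[::-1]                             # reorder as [1, a_1, ..., a_r] in increasing powers of L
coeffs_div = np.zeros(25)
coeffs_div[0] = 1.0
for t in range(1, 25):
    coeffs_div[t] = -sum(a[i] * coeffs_div[t - i] for i in range(1, min(t, r) + 1))
print(np.allclose(coeffs_pf, coeffs_div))   # True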

Polynomials with Unit Roots


Let C(L) = c0 + c1L + c2L^2 + ... + cNL^N be an Nth order polynomial such that C(1)
= 0.0. Divide C(L) by (1-L). Carrying out the long division, the successive quotient terms are

c0, (c0+c1)L, (c0+c1+c2)L^2, ..., (c0+c1+ ... +cN-1)L^(N-1),

so C(L) = (1-L)(d0 + d1L + d2L^2 + ... + dN-1L^(N-1)) + dNL^N,

where di = Σ(j=0 to i) cj, i = 0, ..., N. Since dN = Σ(j=0 to N) cj = C(1) = 0.0,

C(L) = (1-L)(d0 + d1L + ... + dN-1L^(N-1)) = (1-L)D(L) and C(L) has a unit root.
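A minimal Python sketch (illustrative coefficients chosen so that C(1) = 0) computes the di as partial sums and checks the factorization:

import numpy as np

# C(L) with coefficients chosen so that C(1) = c0 + c1 + ... + cN = 0.
c = np.array([1.0, -0.3, -0.2, -0.5])
assert abs(c.sum()) < 1e-12              # C(1) = 0, so C(L) has a unit root

# The quotient coefficients are the partial sums d_i = c_0 + ... + c_i.
d = np.cumsum(c)                         # d_N = C(1) = 0, so there is no remainder term
D = d[:-1]                               # D(L) = d0 + d1*L + ... + d_{N-1}*L^{N-1}

# Check that (1 - L) * D(L) reproduces C(L).
reconstructed = np.convolve([1.0, -1.0], D)
print(np.allclose(reconstructed, c))     # True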

Example Model in Lag Operator Notation

We can rewrite the example model that we have been using in lag operator

notation as follows:

[ 1   -β-λL ] [Ct]   [0  0] [It]
[-1     1   ] [Yt] = [1  1] [Gt] ,

or A(L)[Ct; Yt] = B(L)[It; Gt], where A(L) and B(L) are polynomial matrices in the lag
operator L. The determinant of the polynomial matrix A(L) is

1 - λL - β = (1-β) - λL = (1-β)(1 - λ(1-β)^-1 L),

and the adjoint matrix A#(L) is

[1  β+λL]
[1    1 ] .

Therefore we can rewrite the model by multiplying both sides by the adjoint matrix:

(1-β)(1 - λ(1-β)^-1 L) [Ct]   [1  β+λL] [0  0] [It]   [β+λL  β+λL] [It]
                       [Yt] = [1    1 ] [1  1] [Gt] = [  1     1 ] [Gt] .


As long as the polynomial (1 - λ(1-β)^-1 L) is invertible, i.e. as long as |λ(1-β)^-1| < 1.0, then we

can write the final form of the model as:

[Ct; Yt] = [(1-β)(1 - λ(1-β)^-1 L)]^-1 [β+λL, β+λL; 1, 1] [It; Gt]
         = (1-β)^-1 Σ(j=0 to ∞) [λ(1-β)^-1]^j L^j [β+λL, β+λL; 1, 1] [It; Gt].

Matrix Inversion and Solutions to Simultaneous Linear Equation Systems

Suppose that we have a linear dynamic macroeconomic model with M endogenous and

N exogenous variables. Assume that we can write this economic model in the form:

A0Yt = C0Xt + C1Xt-1 + A1Yt-1

where the A's are MxM matrices of constant parameters (many of which are zeros) and

the C's are MxN matrices of constant parameters (many of which are zeros). The Y's are

Mx1 vectors of the current and one period lagged observations on the endogenous

variables and the X's are Nx1 vectors of current and one period lagged observations on

the exogenous variables. Now we want to determine the reduced form representation of

this economic model, so we solve the system of linear equations by inverting the matrix

A0, and premultiply the equation system by A0^-1. Since A0*A0^-1 is an identity matrix by

definition, the resulting reduced form representation is

Yt = A0^-1*C0Xt + A0^-1*C1Xt-1 + A0^-1*A1Yt-1.

In this equation the matrix of impact multipliers is A0^-1*C0.

Now suppose that we wish to determine the dynamic multipliers of this model.

This can be done easily if we write the model in lag operator notation (note that a matrix


multiplied by the operator L implies that every element in that matrix is multiplied by the

operator):

[A0 - A1L]Yt = [C0 + C1L]Xt, or more compactly A(L)Yt = C(L)Xt, where A(L) and

C(L) are MxM and MxN matrices, respectively, whose individual elements are

polynomials (in this case the polynomials are at most of degree 1). Matrices of

polynomials are frequently referred to as Lambda matrices. Now recall from basic

algebra that for any nonsingular square matrix Z, the inverse matrix Z^-1 can be written as

the adjoint matrix (here denoted Z#) divided by the determinant of Z (here denoted

Det(Z)). Alternatively Det(Z)*Z^-1 = Z#. The same properties hold for nonsingular square

matrices of polynomials, with the qualification that the determinant of a matrix of

polynomials is a scalar polynomial and the adjoint matrix of a polynomial matrix is itself

a matrix of polynomials. Now we can put all of these results together and get the result

that:

Det(A(L))*Yt = A#(L)*C(L)Xt.

This is referred to as the ARMA (AutoRegressive - MovingAverage) representation of

the final form of the model. Det(A(L)) is the autoregressive part of the model.

A#(L)*C(L) is the moving average part of the model. If the polynomial Det(A(L)) is

invertible (all of its roots are outside of the unit circle) then the final form can be

constructed as:

Yt = [Det(A(L))]^-1 A#(L)*C(L)Xt

and the elements of the polynomial matrix [Det(A(L))]^-1 A#(L)*C(L) give the dynamic

multipliers of the model.
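As an illustration of the mechanics (the 2x2 system and its parameter values below are purely illustrative, not a model from these notes), the impact multipliers and the dynamic multipliers can be computed by inverting A0 and simulating the reduced form response to a one-unit impulse in an exogenous variable:

import numpy as np

# Illustrative 2x2 system: A0*Y_t = C0*X_t + C1*X_{t-1} + A1*Y_{t-1}.
A0 = np.array([[1.0, -0.5], [-0.2, 1.0]])
A1 = np.array([[0.3, 0.0], [0.0, 0.1]])
C0 = np.array([[1.0, 0.0], [0.0, 1.0]])
C1 = np.array([[0.2, 0.0], [0.0, 0.0]])

A0inv = np.linalg.inv(A0)
print("impact multipliers:\n", A0inv @ C0)    # A0^-1 * C0

# Dynamic multipliers: response of Y to a one-unit impulse in the first exogenous
# variable at t = 0, using the reduced form Y_t = A0^-1(C0 X_t + C1 X_{t-1} + A1 Y_{t-1}).
T = 8
X = np.zeros((T, 2))
X[0, 0] = 1.0
Y = np.zeros((T, 2))
for t in range(T):
    Xlag = X[t - 1] if t > 0 else np.zeros(2)
    Ylag = Y[t - 1] if t > 0 else np.zeros(2)
    Y[t] = A0inv @ (C0 @ X[t] + C1 @ Xlag + A1 @ Ylag)
print("dynamic multipliers (rows are periods 0..7):\n", Y)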


Matrix Inversion

The first rule of thumb is never to try to invert a matrix that is larger than 3x3 by

hand. Attempts to do so are dangerous to your mental health. The second rule of thumb

is that whenever you think that you have computed an inverse of a matrix check your

computations! The simple check is to multiply the proposed inverse by the original

matrix. If the resulting matrix product is an identity matrix, then you have done the

computations correctly; if it is something other than an identity matrix then it is time to

recompute your work.
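In practice this check is one line with a numerical matrix (a minimal Python sketch):

import numpy as np

# Always check a computed inverse: the product with the original matrix
# should reproduce the identity matrix (up to floating point error).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(3)))   # True if the inversion is correct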

In dealing with macroeconomic models, it is rarely necessary to work with

matrices that are larger than 3x3. First, the matrices that have to be inverted typically

have lots of zeros scattered throughout. Under these circumstances, it is usually easy to

reduce the dimension of the model by substituting out various endogenous variables from

the model. As the dimension of the model is reduced, the matrix which multiplies the

vector of contemporaneous observations on the endogenous variables will become more

dense (will have fewer zeros), so at some point it pays to stop eliminating variables by

substitution and invert the matrix. Usually the problem can be reduced conveniently to

order two or three.

A second way to avoid inverting large matrices is to use partitioned inversion

techniques. Suppose that you have a matrix that can be written as:

A = [E, F; G, H]

where E and H are square submatrices of the square matrix A, and F and G are

dimensioned conformably. Then A^-1 is:

A^-1 = [E^-1(I + FD^-1GE^-1), -E^-1FD^-1; -D^-1GE^-1, D^-1],

where D = H - GE^-1F. In the particular case where F = 0, then

A^-1 = [E^-1, 0; -H^-1GE^-1, H^-1].

When G = 0, then

A^-1 = [E^-1, -E^-1FH^-1; 0, H^-1].
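The partitioned inversion formula can be verified numerically (a minimal Python sketch with randomly generated, conformably dimensioned blocks):

import numpy as np

# Verify the partitioned inversion formula on a random conformable example.
rng = np.random.default_rng(0)
E = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # keep E and the Schur complement well conditioned
F = rng.normal(size=(3, 2))
G = rng.normal(size=(2, 3))
H = rng.normal(size=(2, 2)) + 3 * np.eye(2)

A = np.block([[E, F], [G, H]])
Einv = np.linalg.inv(E)
D = H - G @ Einv @ F                          # D = H - G E^-1 F
Dinv = np.linalg.inv(D)

A_inv_blocks = np.block([
    [Einv @ (np.eye(3) + F @ Dinv @ G @ Einv), -Einv @ F @ Dinv],
    [-Dinv @ G @ Einv,                          Dinv],
])
print(np.allclose(A_inv_blocks, np.linalg.inv(A)))   # True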

Respecifying an n dimensional Vector ARMA Model with a kth order AR process as a kn dimensional Vector ARMA Model with a First order AR process.

Let

A(L)Yt = B(L)Xt

be an n dimensional Vector ARMA model with a kth order AutoRegressive component,

i.e.

A(L) is an nxn polynomial matrix in the lag operator L, such that

A(L) = A0 + A1L + ... + AkL^k, with rank (A0) = n.

Yt is an nx1 vector of endogenous variables.

B(L) is an nxm polynomial matrix in the lag operator L of arbitrary order.

Xt is an mx1 vector of exogenous variables.

Two special cases:

1) If A0 = In then the model can be interpreted as a dynamic reduced form model

2) If A0 = In, m = n, and B(L) = In, then the model is a kth order VAR of

dimension n.


Define Zt as a knx1 vector: Zt = [Yt; Yt-1; ...; Yt-k+1], so Zt-1 = [Yt-1; Yt-2; ...; Yt-k]. Then the model can be

rewritten in “stacked form” as:

[A0  A1  A2  .  A(k-1)]        [ 0   0   .   0  -Ak]          [B(L)]
[ 0  In   0  .    0   ]        [In   0   .   0   0 ]          [ 0  ]
[ 0   0  In  .    0   ]  Zt =  [ 0  In   .   0   0 ]  Zt-1 +  [ 0  ]  Xt ,
[ .   .   .  .    .   ]        [ .   .   .   .   . ]          [ .  ]
[ 0   0   0  .   In   ]        [ 0   0   .  In   0 ]          [ 0  ]

or, moving the Zt-1 term to the left hand side,

( [A0  A1  A2  .  A(k-1)]   [ 0   0   .   0  -Ak]    )        [B(L)]
( [ 0  In   0  .    0   ]   [In   0   .   0   0 ]    )        [ 0  ]
( [ 0   0  In  .    0   ] - [ 0  In   .   0   0 ] L  )  Zt =  [ 0  ]  Xt ,
( [ .   .   .  .    .   ]   [ .   .   .   .   . ]    )        [ .  ]
( [ 0   0   0  .   In   ]   [ 0   0   .  In   0 ]    )        [ 0  ]

which is an nk dimensional first order Vector ARMA model in Zt. Note that

the matrix multiplying Zt is an upper block triangular matrix, so its inverse is:

[A0^-1  -A0^-1A1  -A0^-1A2  .  -A0^-1A(k-1)]
[  0        In        0     .       0      ]
[  0        0         In    .       0      ]
[  .        .         .     .       .      ]
[  0        0         0     .       In     ] .

Thus the model can be rewritten as:


(Ink - CL)Zt = [A0^-1B(L); 0; ...; 0] Xt, where C is an nk x nk matrix:

C is the product of the inverse matrix above and the matrix that multiplies Zt-1 in the stacked form:

    [-A0^-1A1  -A0^-1A2  .  -A0^-1A(k-1)  -A0^-1Ak]
    [    In        0     .       0            0   ]
C = [    0         In    .       0            0   ]
    [    .         .     .       .            .   ]
    [    0         0     .       In           0   ] .

C is the companion matrix to the original Vector ARMA model.

To construct an MA (final form) representation of the transformed model, the roots

of the determinant of the first order polynomial matrix (Ink - CL) must be outside of the

unit circle, i.e. the values μi which satisfy the determinantal equation

|Ink - Cμi| = 0

must be outside the unit circle. For any μi ≠ 0,

|μi^-1 Ink| = μi^-nk ≠ 0,

so for any μi ≠ 0 which is a root of the determinantal polynomial of C,

|μi^-1 Ink - C| = |μi^-1 Ink|·|Ink - Cμi| = 0,

so any μi which satisfies |Ink - Cμi| = 0 is the inverse of an eigenvalue of the companion

matrix C. Hence for an MA (final form) representation of the transformed system the

eigenvalues of the companion matrix must be inside the unit circle.

It remains to show that a μi which satisfies |Ink - Cμi| = 0 is also a root of |A(L)|.

Write Ink - Cμi as a partitioned matrix:


Ink - Cμi = [M11  M12]
            [M21  M22] ,

where M11, (k-1)n x (k-1)n, is

[In + A0^-1A1μi   A0^-1A2μi   .   A0^-1A(k-2)μi   A0^-1A(k-1)μi]
[    -Inμi            In      .        0                0      ]
[      0            -Inμi     .        0                0      ]
[      .              .       .        .                .      ]
[      0              0       .      -Inμi              In     ] ,

M12, (k-1)n x n, is [A0^-1Akμi; 0; ...; 0], M21, n x (k-1)n, is [0, ..., 0, -Inμi], and M22, nxn, is In.

The determinant of Ink - Cμi is |M22|·|M11 - M12M22^-1M21| = |M11 - M12M21|, since M22 = In.

But M12M21 =

[0  .  0  -A0^-1Akμi^2]
[0  .  0        0     ]
[.  .  .        .     ]
[0  .  0        0     ] ,

so M11 - M12M21 =

[In + A0^-1A1μi   A0^-1A2μi   .   A0^-1A(k-2)μi   A0^-1[A(k-1) + Akμi]μi]
[    -Inμi            In      .        0                   0            ]
[      .              .       .        .                   .            ]
[      0              0       .      -Inμi                 In           ] .

Apply the same partitioning pattern repeatedly to obtain:

|Ink - Cμi| = |In + A0^-1[A1μi + A2μi^2 + ... + Akμi^k]| = |A0^-1 A(μi)| = |A0^-1|·|A(μi)|.

Since rank (A0) = n, |A0^-1| ≠ 0, and thus μi is a root of |A(L)|.
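This result can be illustrated numerically: the following Python sketch (a small n = 2, k = 2 case with illustrative coefficient matrices) builds the companion matrix and compares its eigenvalues with the inverses of the roots of the determinantal polynomial |A(z)|:

import numpy as np
from numpy.polynomial import Polynomial

# A small illustrative case with n = 2, k = 2: A(L) = A0 + A1*L + A2*L^2, A0 = I.
n, k = 2, 2
A0 = np.eye(n)
A1 = np.array([[-0.5, 0.1], [0.2, -0.4]])
A2 = np.array([[-0.2, 0.0], [0.0, -0.1]])

# Companion matrix: first block row [-A0^-1*A1, -A0^-1*A2], identity blocks below.
A0inv = np.linalg.inv(A0)
C = np.zeros((n * k, n * k))
C[:n, :n] = -A0inv @ A1
C[:n, n:] = -A0inv @ A2
C[n:, :n] = np.eye(n)
eigenvalues = np.linalg.eigvals(C)

# det(A(z)) as a scalar polynomial in z, built element by element (2x2 case).
a = [[Polynomial([A0[i, j], A1[i, j], A2[i, j]]) for j in range(n)] for i in range(n)]
detA = a[0][0] * a[1][1] - a[0][1] * a[1][0]

print(np.sort_complex(1 / detA.roots()))     # inverses of the roots of det(A(z)) ...
print(np.sort_complex(eigenvalues))          # ... coincide with the eigenvalues of C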

Computing Determinants of 3 x 3 Matrices


Let A = [a11  a12  a13]
        [a21  a22  a23]
        [a31  a32  a33]

be a 3 x 3 matrix. The determinant of such a matrix can be computed by expanding down the first column:

det(A) = a11·(a22a33 - a23a32) - a21·(a12a33 - a13a32) + a31·(a12a23 - a13a22)
       = a11a22a33 - a11a23a32 - a21a12a33 + a21a13a32 + a31a12a23 - a31a13a22.

These terms can be rearranged as:

a11a22a33 + a12a23a31 + a21a32a13 - a31a22a13 - a32a23a11 - a21a12a33.

The first term in this expression is the product of the elements on the principal diagonal

of A. The second term is the product of the elements above the principal diagonal with

the element in the lower left corner. The third term is the product of the terms below the

principal diagonal with the term in the upper right corner.

The fourth term is the product of the elements on the diagonal from the lower left

corner to the upper right corner of the matrix. The fifth term is the product of the

elements below this diagonal with the element in the upper left corner. The sixth term is

the product of the elements above this diagonal with the element in the lower right corner

of the matrix.
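A minimal Python sketch implements the six-term rule and checks it against a library determinant for an illustrative matrix:

import numpy as np

def det3_diagonal_rule(A):
    """3x3 determinant via the rearranged six-term (diagonal) rule described above."""
    return (A[0,0]*A[1,1]*A[2,2] + A[0,1]*A[1,2]*A[2,0] + A[1,0]*A[2,1]*A[0,2]
            - A[2,0]*A[1,1]*A[0,2] - A[2,1]*A[1,2]*A[0,0] - A[1,0]*A[0,1]*A[2,2])

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 1.0]])
print(det3_diagonal_rule(A), np.linalg.det(A))   # the two values agree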

Nonlinear Models

All of the above discussion gives us a handle on linear macroeconomic models.

What about nonlinear models (in particular, models that are nonlinear in the endogenous

variables)? Recall that we expressed a general macroeconomic model in the notation

F(Y,X|a) = 0. To analyze a nonlinear model, one approach is to construct a linear model

in small changes of the variables around some initial conditions of the model. Typically


the assumed initial conditions are a steady-state path of the model. The technique to do

this is to totally differentiate the equations of the model:

FY(Y,X|a)dY + FX(Y,X|a)dX = 0

where FY is an mxm matrix of the partial derivatives of the F vector with respect to each

of the m endogenous variables and FX is a mxn matrix of the partial derivatives of the F

vector with respect to the n exogenous variables. In this formulation, treat dY as the

endogenous variables, dX as the exogenous variables, and FY(Y,X|a), FX(Y,X|a) as

matrices of parameters whose values depend on the initial conditions of the model around

which we are measuring the small changes dY and dX. Then we can construct the

reduced form in a small neighborhood of the initial conditions of the system as:

dY = -FY(Y,X|a)^-1 FX(Y,X|a)dX.
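A minimal Python sketch of this procedure (the two-equation nonlinear system below is purely illustrative, not a model from these notes) approximates FY and FX by finite differences and solves for dY:

import numpy as np

# Linearize a nonlinear two-equation model F(Y, X) = 0 around an initial point
# using finite-difference Jacobians, then solve FY*dY + FX*dX = 0 for dY.
def F(Y, X):
    y1, y2 = Y
    return np.array([y1 - 0.8 * np.log(y2) - X[0],
                     y2 - 5.0 * np.sqrt(y1)])

def jacobian(fun, z0, eps=1e-6):
    """Numerical Jacobian of the vector valued function fun with respect to the vector z0."""
    f0 = fun(z0)
    J = np.zeros((len(f0), len(z0)))
    for j in range(len(z0)):
        z = z0.astype(float).copy()
        z[j] += eps
        J[:, j] = (fun(z) - f0) / eps
    return J

# Initial conditions chosen so that F(Y0, X0) = 0 (to numerical precision).
Y0 = np.array([2.0, 5.0 * np.sqrt(2.0)])
X0 = np.array([2.0 - 0.8 * np.log(5.0 * np.sqrt(2.0))])

FY = jacobian(lambda Y: F(Y, X0), Y0)        # partials with respect to the endogenous variables
FX = jacobian(lambda X: F(Y0, X), X0)        # partials with respect to the exogenous variables
dX = np.array([0.01])
dY = -np.linalg.inv(FY) @ FX @ dX            # dY = -FY^-1 FX dX
print(dY)                                    # approximate response of Y to the small change in X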

Appendix: Some Obscure Trigonometric Formulas (Sargent, pp. 499-500)

If i = sqrt(-1), then sin(ω) = [e^(iω) - e^(-iω)]/2i and

cos(ω) = [e^(iω) + e^(-iω)]/2.

Therefore, e^(iω) = cos(ω) + i·sin(ω) and

e^(-iω) = cos(ω) - i·sin(ω).

For integer values of k ≥ 1,

e^(ikω) = cos(kω) + i·sin(kω) and

e^(-ikω) = cos(kω) - i·sin(kω).

Therefore e^(ikω) - e^(-ikω) = 2i·sin(kω).
