Decentralised economies I
Martin Ellison

1 Motivation

In the first two lectures on dynamic programming and the stochastic growth model, we solved the maximisation problem of the representative agent. More generally, these were examples of a social planner's problem, in which welfare is maximised. At first sight, this appears to be a very restricted class of problems, since normally we tend to have a mixture of firms maximising profits, consumers maximising utility and workers making optimal labour supply decisions. Fortunately, the social planner's problem is more general than it first appears. The second fundamental welfare theorem states that any Pareto-optimal equilibrium (as the social planner's solution is by definition) can be achieved as a decentralised equilibrium of a competitive economy, provided there is a suitable reallocation of initial endowments. Our solutions therefore do correspond to equilibria in decentralised economies. However, we are often interested in equilibria which are not Pareto optimal, such as those that prevail under sticky prices. In such cases, it is not possible to solve for the decentralised equilibrium by solving the social planner's problem. Instead, we need to work directly with the first order conditions of optimising agents. In this lecture, we will take a log-linear approximation of the first order conditions and proceed to solve the model using the eigenvalue-eigenvector decomposition first proposed by Olivier Blanchard and Charles Kahn. In the next lecture, we will obtain an exact solution by using the method of parameterised expectations developed by Albert Marcet and Wouter den Haan.

2 Key reading

The best explanation of log-linearisation and eigenvalue-eigenvector decomposition in a macroeconomic context is the unpublished manuscript “Production, Growth and Business Cycles” by King, Plosser and Rebelo, 1987. This is the seminal paper in the area. Our example will be a simplified version of their basic neoclassical model.

3 Other reading

The original paper on applying eigenvalue-eigenvector decompositions to linear rational expectations models is “The solution of linear difference models under rational expectations” by Blanchard and Kahn, Econometrica, 1980. Despite being in Econometrica, it is very accessible (and very short). Other papers based on different eigenvalue-eigenvector decompositions are “Solving linear rational expectations models” by Chris Sims, 2000 and “Solution and estimation of RE models with optimal policy” by Paul Söderlind, European Economic Review, 1999. The latter provides Gauss codes at http://home.tiscalinet.ch/paulsoderlind/

4 Approximation

By far the most common approach to solving decentralised economies is to take log-linear approximations around the steady state and then solve the resulting linear expressions to arrive at AR processes for the various endogenous variables (see King, Plosser and Rebelo (1999)). This approach therefore has four main steps:

1. Calculate the steady state.

2. Derive analytical expressions for the approximation around the steady state.

3. Feed in the model parameter values.

4. Solve for the decision rules linking endogenous variables with predetermined and exogenous variables.

The main reason why this approach is so common is its relative cheapness: the approximation leads to linear expressions for which there is a plentiful supply of cheap solution techniques. The main cost comes in deriving analytical expressions for the approximation, whereas the actual computing time is reasonably trivial, which is a major gain compared to all other solution techniques. Naturally, this computational cheapness comes at a cost. Firstly, the model is approximated around the steady state. If the underlying model is fairly log-linear then this approximation will be a good one. However, the more non-log-linear the model, the worse the approximation and the more misleading the resulting simulations will be. For many of the simple models that academics examine (such as the stochastic growth model with only one source of uncertainty) this is unlikely to be a problem. However, as the size of the model increases and as risk aversion and volatility become more important, these log-linear approximations become increasingly unreliable. Secondly, this approach only works if it is possible to solve for the steady state. For some models, a unique steady state may not exist. In spite of these drawbacks, it would be fair to say that this approach is the most prevalent in the literature.
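Anticipating the numerical example in section 10, the four steps translate into a Matlab programme of the following shape (an outline only, not code from the notes):

% Step 1: calculate the steady state analytically
% Step 2: hard-code the log-linearised system E*A*x(t+1) = B*x(t)
% Step 3: feed in the calibrated parameter values
% Step 4: solve for the decision rules via an eigenvalue-eigenvector
%         decomposition of C = inv(B)*A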


5 The stochastic growth model revisited

To illustrate the technique of log-linearisation and eigenvalue-eigenvector decomposition, we return to the simple stochastic growth model studied in the second lecture. The equilibrium in this economy is Pareto optimal so, by the second fundamental welfare theorem, the social planner's solution and the decentralised equilibrium coincide. On the one hand, this means that for this model it is not necessary to use the approximation method. On the other hand, it means we can stick with a familiar model and at the end compare the exact solution from lecture 2 with the approximate solution derived in this lecture. In the standard stochastic growth model, the representative agent solves the following maximisation problem.

\[
\max_{\{c_t\}} E \sum_{t=0}^{\infty} \beta^t \frac{c_t^{1-\sigma}}{1-\sigma}
\]
\[
\text{s.t.} \quad c_t + k_t = A_t k_{t-1}^{\alpha} + (1-\delta)k_{t-1}
\]
\[
\ln A_{t+1} = \rho \ln A_t + \varepsilon_t
\]

The problem is identical to that studied in lecture 2, except now the log of the stochastic term $A_t$ follows an AR(1) process with persistence parameter $\rho$. $\beta$ is the discount factor, $\delta$ is the depreciation rate and $\sigma$ is the coefficient of relative risk aversion. We want to solve this model, by which we mean we wish to calculate sequences for consumption, output and capital which represent the equilibrium of the economy as it unfolds over time. The first order condition for this model is

\[
c_t^{-\sigma} = \beta E\left[c_{t+1}^{-\sigma}\left(\alpha A_{t+1} k_t^{\alpha-1} + 1 - \delta\right)\right]
\]
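One way to obtain this Euler equation, if it is not already familiar from lecture 2, is via a Lagrangian on the sequence problem (a sketch, with multiplier $\beta^t\lambda_t$ on the period-$t$ budget constraint):

\[
\mathcal{L} = E\sum_{t=0}^{\infty}\beta^t\left[\frac{c_t^{1-\sigma}}{1-\sigma} - \lambda_t\left(c_t + k_t - A_t k_{t-1}^{\alpha} - (1-\delta)k_{t-1}\right)\right]
\]

The first order conditions $\partial\mathcal{L}/\partial c_t = 0$ and $\partial\mathcal{L}/\partial k_t = 0$ give $c_t^{-\sigma} = \lambda_t$ and $\lambda_t = \beta E\left[\lambda_{t+1}\left(\alpha A_{t+1}k_t^{\alpha-1} + 1 - \delta\right)\right]$; substituting out $\lambda_t$ delivers the Euler equation above.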


6 Steady-state calculation

In steady state, consumption, output and capital are all constant. The logarithm of the technology term $A_t$ is zero, so $A_t$ itself is unity. In terms of the budget constraint and first order condition, the steady-state values $\bar{c}$ and $\bar{k}$ satisfy

\[
\bar{c} + \bar{k} = \bar{k}^{\alpha} + (1-\delta)\bar{k}
\]
\[
1 = \beta\left(\alpha\bar{k}^{\alpha-1} + 1 - \delta\right)
\]

Solving for $\bar{c}$ and $\bar{k}$, and adding $\bar{y}$ from the production function:

\[
\bar{k} = \left(\frac{1-(1-\delta)\beta}{\alpha\beta}\right)^{\frac{1}{\alpha-1}}
\]
\[
\bar{c} = \bar{k}^{\alpha} - \delta\bar{k}
\]
\[
\bar{y} = \bar{k}^{\alpha}
\]
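The expression for $\bar{k}$ follows from rearranging the steady-state first order condition:

\[
1 = \beta\left(\alpha\bar{k}^{\alpha-1} + 1 - \delta\right)
\quad\Rightarrow\quad
\alpha\beta\bar{k}^{\alpha-1} = 1-(1-\delta)\beta
\quad\Rightarrow\quad
\bar{k} = \left(\frac{1-(1-\delta)\beta}{\alpha\beta}\right)^{\frac{1}{\alpha-1}}
\]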

7 Log-linearisation

The budget constraint and the first order condition are both non-linear so we proceed with a log-linear approximation. The basic idea is to rewrite the equations in terms of variables that measure how much a variable deviates from its steady-state value. To aid exposition, we introduce the hat notation

\[
\hat{x}_t = \frac{x_t - \bar{x}}{\bar{x}} \approx \ln x_t - \ln\bar{x}
\]

In this case, rather than saying $x_t$ is 12 and $\bar{x}$ is 10, we refer to $\hat{x}_t$ as 0.2, meaning that $x_t$ is 20% above its steady-state value. To transpose the first order condition into hat notation, we first take logs.


\[
-\sigma\ln c_t = \ln\beta + E\left[-\sigma\ln c_{t+1} + \ln\left(\alpha A_{t+1}k_t^{\alpha-1} + 1 - \delta\right)\right] \tag{1}
\]

Notice that already at this stage we have performed a trick by taking the expectations operator outside the logarithmic operator. In other words, we replace $\ln E(AB)$ with $E\ln A + E\ln B$. This is of course not strictly correct, but it is a necessary part of the approximation process. The left hand side and first two terms on the right hand side of the first order condition (1) are easy to deal with. More problematic is the third term on the right hand side, which is a complex function of two variables, $A_{t+1}$ and $k_t$. To deal with this, we take a first order Taylor approximation of $\ln f(x, y)$ around $\ln f(\bar{x}, \bar{y})$.

\[
\ln f(x, y) \approx \ln f(\bar{x}, \bar{y}) + \frac{f_x(x, y)\,|_{\bar{x},\bar{y}}}{f(\bar{x}, \bar{y})}\left(x - \bar{x}\right) + \frac{f_y(x, y)\,|_{\bar{x},\bar{y}}}{f(\bar{x}, \bar{y})}\left(y - \bar{y}\right)
\]

Applying this to the third term on the right hand side of (1), we obtain

\[
\ln\left(\alpha A_{t+1}k_t^{\alpha-1} + 1 - \delta\right) \approx \ln\left(\alpha\bar{k}^{\alpha-1} + 1 - \delta\right) + \frac{\alpha\bar{k}^{\alpha-1}}{\alpha\bar{k}^{\alpha-1} + 1 - \delta}\left(A_{t+1} - \bar{A}\right) + \frac{\alpha(\alpha-1)\bar{k}^{\alpha-2}}{\alpha\bar{k}^{\alpha-1} + 1 - \delta}\left(k_t - \bar{k}\right)
\]

The expression can be simplified by recognising that in steady state $\alpha\bar{k}^{\alpha-1} + 1 - \delta = \beta^{-1}$, $\alpha\bar{k}^{\alpha-1} = \beta^{-1} - (1-\delta)$ and $\bar{A} = 1$.

\[
\ln\left(\alpha A_{t+1}k_t^{\alpha-1} + 1 - \delta\right) \approx -\ln\beta + (1-(1-\delta)\beta)\hat{A}_{t+1} + (\alpha-1)(1-(1-\delta)\beta)\hat{k}_t
\]

Notice again that this only holds to a first order approximation. There will inevitably be some loss of accuracy compared to the exact solution.
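To get a sense of the size of this approximation error, the short Matlab check below (an illustrative sketch, not part of the original notes; it borrows the calibration from section 10) compares the exact left hand side with the log-linear right hand side at a 5% deviation from steady state.

% Illustrative accuracy check of the log-linear approximation
% (assumed example; calibration taken from section 10)
beta = 0.9; alpha = 0.75; delta = 0.3;
kbar = ((1-(1-delta)*beta)/(alpha*beta))^(1/(alpha-1));
Ahat = 0.05; khat = 0.05;              % 5% deviations from steady state
A = exp(Ahat); k = kbar*exp(khat);
exact  = log(alpha*A*k^(alpha-1) + 1 - delta);
approx = -log(beta) + (1-(1-delta)*beta)*Ahat ...
         + (alpha-1)*(1-(1-delta)*beta)*khat;
disp([exact approx])                   % 0.1194 vs 0.1192 at this calibration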

Returning to condition (1), we write

\[
-\sigma\ln c_t = E\left[-\sigma\ln c_{t+1} + (1-(1-\delta)\beta)\hat{A}_{t+1} + (\alpha-1)(1-(1-\delta)\beta)\hat{k}_t\right]
\]

Adding $\sigma\ln\bar{c}$ to each side and writing $\ln c_t - \ln\bar{c} = \hat{c}_t$ gives the final form.

\[
-\sigma\hat{c}_t = E\left[-\sigma\hat{c}_{t+1} + (1-(1-\delta)\beta)\hat{A}_{t+1} + (\alpha-1)(1-(1-\delta)\beta)\hat{k}_t\right]
\]

A similar process can be used to log-linearise the budget constraint and the law of motion for technology.

\[
\bar{c}\hat{c}_t + \bar{k}\hat{k}_t = \left(\alpha\bar{y} + (1-\delta)\bar{k}\right)\hat{k}_{t-1} + \bar{y}\hat{A}_t
\]
\[
\hat{A}_{t+1} = \rho\hat{A}_t + \varepsilon_t
\]
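The intermediate step for the budget constraint, which the notes leave to the reader, is worth recording: totally differentiating $c_t + k_t = A_t k_{t-1}^{\alpha} + (1-\delta)k_{t-1}$ around the steady state and using $x_t - \bar{x} \approx \bar{x}\hat{x}_t$ with $\bar{A} = 1$ gives

\[
\bar{c}\hat{c}_t + \bar{k}\hat{k}_t = \bar{k}^{\alpha}\hat{A}_t + \alpha\bar{k}^{\alpha-1}\bar{k}\,\hat{k}_{t-1} + (1-\delta)\bar{k}\hat{k}_{t-1} = \bar{y}\hat{A}_t + \left(\alpha\bar{y} + (1-\delta)\bar{k}\right)\hat{k}_{t-1}
\]

since $\bar{y} = \bar{k}^{\alpha}$ and $\alpha\bar{k}^{\alpha-1}\bar{k} = \alpha\bar{y}$.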

8 State space form

It is convenient to express the three equations of the model (first order condition, budget constraint and law of motion for technology) in matrix form.

\[
E\begin{bmatrix} 1-(1-\delta)\beta & (\alpha-1)(1-(1-\delta)\beta) & -\sigma \\ 0 & \bar{k} & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} \hat{A}_{t+1} \\ \hat{k}_t \\ \hat{c}_{t+1} \end{bmatrix} = \begin{bmatrix} 0 & 0 & -\sigma \\ \bar{y} & \alpha\bar{y}+(1-\delta)\bar{k} & -\bar{c} \\ \rho & 0 & 0 \end{bmatrix}\begin{bmatrix} \hat{A}_t \\ \hat{k}_{t-1} \\ \hat{c}_t \end{bmatrix}
\]

More succinctly, $EAx_{t+1} = Bx_t$. We will use state space forms in the rest of the lecture. What is required is to find the solution of $EAx_{t+1} = Bx_t$. Assuming $B$ is invertible, we can premultiply each side of the equation by $B^{-1}$ to obtain

\[
ECx_{t+1} = x_t
\]

where $C = B^{-1}A$. Our technique does require that the matrix $B$ is invertible. However, other equally simple techniques such as Sims (2000) and Söderlind (1999) exist for models where $B$ cannot be inverted.

9 Eigenvalue-eigenvector decomposition

The technique suggested by Blanchard and Kahn solves the system $ECx_{t+1} = x_t$ by decomposing the matrix $C$ into its eigenvalues and eigenvectors. Other techniques exist which do the job equally well, most notably the method of undetermined coefficients, which is the basis of Harald Uhlig's toolkit for analysing nonlinear economic dynamic models "easily" (see http://www.wiwi.hu-berlin.de/wpol/html/toolkit.html). The Blanchard-Kahn algorithm begins by partitioning the variables in $x_t$ into predetermined and exogenous variables $w_t$ and controls $y_t$. In our model, $w_t \equiv (\hat{A}_t \;\; \hat{k}_{t-1})'$ since technology is exogenous and the capital stock is predetermined. The control variable is consumption, so $y_t \equiv \hat{c}_t$. With the variables partitioned, we have

\[
EC\begin{bmatrix} w_{t+1} \\ y_{t+1} \end{bmatrix} = \begin{bmatrix} w_t \\ y_t \end{bmatrix} \tag{2}
\]

The heart of the Blanchard-Kahn approach is the Jordan decomposition of the $C$ matrix. Under quite general conditions, $C$ is diagonalisable and we can write

\[
C = P^{-1}\Lambda P
\]

In this Jordan canonical form, $\Lambda$ is a diagonal matrix with the eigenvalues of $C$ along its leading diagonal and zeros in the off-diagonal elements. $P$ is a matrix of the corresponding eigenvectors. In order to continue, we need the number of unstable eigenvalues (i.e. of modulus less than one; recall that $C$ premultiplies $x_{t+1}$, so the dynamics are governed by the reciprocals of its eigenvalues) to be exactly equal to the number of controls. This is known as the Blanchard-Kahn condition. In a two dimensional model (such as the Ramsey growth model) with one predetermined variable and one control, it is equivalent to requiring one stable root and one unstable root to guarantee saddle path stability in the phase diagram. If there are too many unstable roots then the system is explosive and we run into problems with the transversality conditions. If there are too few unstable roots then the system is super stable, which means there will be indeterminacy. Techniques do exist for handling models with indeterminacy, see "The Macroeconomics of Self-Fulfilling Prophecies" by Roger Farmer, MIT Press, 1993, but we restrict our attention here to models that satisfy the Blanchard-Kahn conditions. We progress by partitioning the matrix of eigenvalues $\Lambda$. $\Lambda_1$ contains the stable eigenvalues (of number equal to the number of predetermined and exogenous variables) and $\Lambda_2$ the unstable eigenvalues (of number equal to the number of controls). The matrix $P$ is similarly partitioned.

\[
\Lambda = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix} \qquad P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}
\]

Using this partition and premultiplying each side by $P$, equation (2) becomes

\[
E\begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix}\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}\begin{bmatrix} w_{t+1} \\ y_{t+1} \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}\begin{bmatrix} w_t \\ y_t \end{bmatrix}
\]

This is a cumbersome expression to work with so we prefer to solve a transformed problem, with

\[
\begin{bmatrix} \tilde{w}_t \\ \tilde{y}_t \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}\begin{bmatrix} w_t \\ y_t \end{bmatrix}
\]

so that the equation to solve becomes

\[
E\begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix}\begin{bmatrix} \tilde{w}_{t+1} \\ \tilde{y}_{t+1} \end{bmatrix} = \begin{bmatrix} \tilde{w}_t \\ \tilde{y}_t \end{bmatrix}
\]

We will solve this equation for $\tilde{y}_t$ and $E\tilde{w}_{t+1}$ ($\tilde{w}_t$ is either exogenous or predetermined) and then work backwards to recover $y_t$ and $Ew_{t+1}$. In the two dimensional case, the transformation rotates the phase diagram so that the stable eigenvector lies on the x-axis and the unstable eigenvector lies on the y-axis. The beauty of working with the transformed problem is that the two equations are now decoupled. In other words, we can write each time $t+1$ variable solely as a function of predetermined variables, exogenous variables and controls at time $t$.

\[
E\Lambda_1\tilde{w}_{t+1} = \tilde{w}_t
\]
\[
E\Lambda_2\tilde{y}_{t+1} = \tilde{y}_t
\]

The second equation shows the evolution of the controls. Solving forward to time $t+j$ gives

\[
E\tilde{y}_{t+j} = \left(\Lambda_2^{-1}\right)^j\tilde{y}_t
\]

Since $\Lambda_2$ contains the unstable eigenvalues (of modulus less than one), this is an explosive process. In this case, the only solution which satisfies the transversality conditions is $\tilde{y}_t = 0\;\forall t$, in which case $E\tilde{y}_{t+j} = 0$. The condition $\tilde{y}_t = 0$ translates back into the original programme as

\[
0 = P_{21}w_t + P_{22}y_t
\]

We can therefore write the decision rule for the controls as

\[
y_t = -P_{22}^{-1}P_{21}w_t
\]


The reaction function defines the controls $y_t$ as a linear function of the predetermined and exogenous variables $w_t$. The linearity of the decision rule is a general feature of solution by log-linearisation. It has a direct analogy to the linear decision rules we derived for optimal linear-quadratic control in the last lecture. To derive the evolution of the predetermined variables, we return to the first equation of the transformed problem.

\[
E\tilde{w}_{t+j} = \left(\Lambda_1^{-1}\right)^j\tilde{w}_t
\]

In this case $\Lambda_1$ contains the stable eigenvalues (of modulus greater than one) and the system is stable. It already shows the expected evolution of the vector $\tilde{w}_t$. To return to the original problem, we recognise that

\[
\tilde{w}_t = P_{11}w_t + P_{12}y_t = \left(P_{11} - P_{12}P_{22}^{-1}P_{21}\right)w_t
\]

Hence,

\[
Ew_{t+1} = \left(P_{11} - P_{12}P_{22}^{-1}P_{21}\right)^{-1}\Lambda_1^{-1}\left(P_{11} - P_{12}P_{22}^{-1}P_{21}\right)w_t
\]

and the evolution of the predetermined and exogenous variables is also linear.
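In Matlab, once the decomposition of section 10 below has been computed and sorted, the decision rule and transition law can be recovered along the following lines. This is an illustrative sketch; the variable names nw, F, M and T are ours, not from the notes.

% Illustrative recovery of the decision rule and transition law,
% assuming MU and P are sorted with the stable eigenvalues first
nw  = 2;                                 % predetermined/exogenous variables
P11 = P(1:nw,1:nw);     P12 = P(1:nw,nw+1:end);
P21 = P(nw+1:end,1:nw); P22 = P(nw+1:end,nw+1:end);
L1  = MU(1:nw,1:nw);                     % stable eigenvalues
F   = -inv(P22)*P21;                     % decision rule: y_t = F*w_t
M   = P11 - P12*inv(P22)*P21;            % w-tilde_t = M*w_t
T   = M\(inv(L1)*M);                     % transition: E[w_{t+1}] = T*w_t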

10 A numerical example

To demonstrate the technique of eigenvalue-eigenvector decomposition in practice, we present Matlab code to solve the stochastic growth model. To maintain comparability with lecture 2, we use the calibration $\beta = 0.9$, $\alpha = 0.75$, $\sigma = 1$ and $\delta = 0.3$. The persistence parameter $\rho$ in the law of motion for technology is calibrated at 0.95, implying very high persistence. We begin by clearing the workspace and defining the calibrated parameters.

clear;
beta=0.9; alpha=0.75; sigma=1; delta=0.3; rho=0.95;

Next, solve for the steady state using

\[
\bar{k} = \left(\frac{1-(1-\delta)\beta}{\alpha\beta}\right)^{\frac{1}{\alpha-1}} \qquad \bar{c} = \bar{k}^{\alpha} - \delta\bar{k} \qquad \bar{y} = \bar{k}^{\alpha}
\]

kbar=((1-(1-delta)*beta)/(alpha*beta))^(1/(alpha-1));
cbar=kbar^alpha-delta*kbar;
ybar=kbar^alpha;

The numerical values in our calibrated model are $\bar{k} = 11.08$, $\bar{c} = 2.75$ and $\bar{y} = 6.07$. To write the model in state-space form, we define the matrices $A$ and $B$ in $EAx_{t+1} = Bx_t$. Elements not assigned explicitly are zero.

A(1,1)=1-(1-delta)*beta;
A(1,2)=(1-(1-delta)*beta)*(alpha-1);
A(1,3)=-sigma;
A(2,2)=kbar;
A(3,1)=1;
B(1,3)=-sigma;
B(2,1)=ybar;
B(2,2)=alpha*ybar+(1-delta)*kbar;
B(2,3)=-cbar;
B(3,1)=rho;

The numeric state-space form is

\[
E\underbrace{\begin{bmatrix} 0.37 & -0.10 & -1 \\ 0 & 11.08 & 0 \\ 1 & 0 & 0 \end{bmatrix}}_{A}\begin{bmatrix} \hat{A}_{t+1} \\ \hat{k}_t \\ \hat{c}_{t+1} \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 0 & -1 \\ 6.07 & 12.31 & -2.75 \\ 0.95 & 0 & 0 \end{bmatrix}}_{B}\begin{bmatrix} \hat{A}_t \\ \hat{k}_{t-1} \\ \hat{c}_t \end{bmatrix}
\]

Inverting $B$ and defining $C = B^{-1}A$, we have

C=inv(B)*A;

\[
E\underbrace{\begin{bmatrix} 1.05 & 0 & 0 \\ -0.60 & 0.92 & 0.22 \\ -0.37 & 0.09 & 1 \end{bmatrix}}_{C}\begin{bmatrix} \hat{A}_{t+1} \\ \hat{k}_t \\ \hat{c}_{t+1} \end{bmatrix} = \begin{bmatrix} \hat{A}_t \\ \hat{k}_{t-1} \\ \hat{c}_t \end{bmatrix}
\]

With the model in state-space form, we can perform the Jordan decomposition of $C$ into eigenvalues and eigenvectors. The eigenvalues are stored in the matrix MU, with corresponding normalised eigenvectors in the matrix P.

[ve,MU] = eig(C);
P=inv(ve);

The matrix MU of eigenvalues has the following numerical values.




\[
\Lambda = \begin{bmatrix} 1.05 & 0 & 0 \\ 0 & 1.11 & 0 \\ 0 & 0 & 0.81 \end{bmatrix}
\]

In this case, we have one unstable eigenvalue (the 0.81) and two stable eigenvalues (the 1.11 and 1.05), so the Blanchard-Kahn condition is satisfied and we have saddle path stability. In general, before partitioning the MU and P matrices, we would need to sort the eigenvalues so that the two stable eigenvalues are in the first two rows (corresponding to exogenous and predetermined variables) and the unstable eigenvalue is in the last row (corresponding to the control). Although in our case the eigenvalues are already sorted and we have saddle path stability, we present a more general algorithm which includes a sorting procedure and an eigenvalue stability test.
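A compact version of such a procedure (an illustrative sketch rather than the original code) is:

% Sort eigenvalues by descending modulus so the stable roots
% (modulus > 1) occupy the first two rows and the unstable root
% (modulus < 1) the last row; reorder eigenvector rows to match.
[dummy,order] = sort(-abs(diag(MU)));
MU = MU(order,order);
P  = P(order,:);
% Stability test: with one control we need exactly one unstable root.
if sum(abs(diag(MU)) < 1) ~= 1
    error('Blanchard-Kahn condition not satisfied');
end

With MU and P sorted in this way, the decision rule and transition law can be recovered as in the sketch at the end of section 9.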