Part 3.  The Essentials of Dynamic Optimisation
In macroeconomics the majority of problems involve optimisation over time. Typically a representative agent chooses optimal magnitudes of choice variables from an initial time until infinitely far into the future. There are a number of methods to solve these problems. In discrete time the problem can often be solved using a Lagrangean function. However in other cases it becomes necessary to use the more sophisticated techniques of Optimal Control Theory or Dynamic Programming. This handout provides an introduction to optimal control theory.
Special Aspects of Optimisation over Time
∙ Stock-flow variable relationship.
All dynamic problems have a stock-flow structure. Mathematically the flow variables are referred to as control variables and the stock variables as state variables. Not surprisingly, the control variables are used to affect (or steer) the state variables. For example, in any one period the amount of investment and the amount of money growth are flow variables that affect the stock of output and the level of prices, which are state variables.
∙ The objective function is additively separable. This assumption makes the problem analytically tractable. In essence it allows us to separate the dynamic problem into a sequence of one-period optimisation problems that are separate in the objective function. Don't be confused: the optimisation problems are not separate, because of the stock-flow relationships, but the elements of the objective function are. To be more precise, the objective function is expressed as a sum of functions (i.e. integral or sigma form), each of which depends only on the variables in that period. For example utility in a given period is independent of utility in the previous period.
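For concreteness, a standard example of such an objective (a generic illustration, not one used later in this handout) is discounted lifetime utility,

$\sum_{t=0}^{T} \beta^t u(c_t)$

Each term $\beta^t u(c_t)$ depends only on period-$t$ consumption, so the objective is additively separable, even though the budget constraints link consumption choices across periods.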
1.  Lagrangean Technique
We can apply the Lagrangean technique in the usual way.
Notation
$y_t$ = state variable(s),   $\mu_t$ = control variable(s)
The control and state variables are related according to some dynamic equation,
$y_{t+1} - y_t = f(y_t, \mu_t, t)$   (1)
Choosing $\mu_t$ allows us to alter the change in $y_t$. If the above is a production function we choose $\mu_t$ = investment to alter $y_{t+1} - y_t$, the change in output over the period. Why does time enter on its own? It represents the trend growth rate of output.
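A familiar special case (an illustrative assumption, not taken from this handout) is capital accumulation, where the state is the capital stock $k_t$, the control is investment $i_t$, and capital depreciates at rate $\delta$:

$k_{t+1} - k_t = i_t - \delta k_t$

Choosing the investment flow steers the evolution of the capital stock.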
We might also have constraints that apply in each single period such as,
$G(y_t, \mu_t, t) \le 0$   (2)
The objective function in discrete time is of the form,
$\sum_{t=0}^{T} F(y_t, \mu_t, t)$   (3)
To derive the first order conditions we attach a multiplier $\lambda_t$ to each period's equation of motion (1) and form the Lagrangean.
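A minimal sketch of how this looks (the multiplier notation $\lambda_t$ is mine):

$L = \sum_{t=0}^{T} \left\{ F(y_t, \mu_t, t) + \lambda_t \left[ f(y_t, \mu_t, t) - y_{t+1} + y_t \right] \right\}$

For an interior optimum, differentiating with respect to $\mu_t$ and, for interior periods, $y_t$ gives

$\frac{\partial F}{\partial \mu_t} + \lambda_t \frac{\partial f}{\partial \mu_t} = 0$

$\frac{\partial F}{\partial y_t} + \lambda_t \left( \frac{\partial f}{\partial y_t} + 1 \right) - \lambda_{t-1} = 0$

The second condition links the multipliers in adjacent periods; it is the discrete-time counterpart of the costate equation that appears in the maximum principle below.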
2.  Optimal Control Theory
Suppose that our objective is to maximise the discounted utility from the use of an exhaustible resource over a given time interval. In order to optimise we would have to choose the optimal rate of extraction. That is, we would solve the following problem,
$\max_E \int_0^T U(S, E)\, e^{-\rho t}\, dt$

subject to,

$\frac{dS}{dt} = -E(t)$,   $S(0) = S_0$,   $S(T)$ free
Where $S(t)$ denotes the stock of a raw material and $E(t)$ the rate of extraction. By choosing the optimal rate of extraction we can choose the optimal stock of oil at each period of time and so maximise utility. The rate of extraction is called the control variable and the stock of the raw material the state variable. By finding the optimal path for the control variable we can find the optimal path for the state variable. This is how optimal control theory works.
The relationship between the stock and the extraction rate is defined by a differential equation (otherwise it would not be a dynamic problem). This differential equation is called the equation of motion. The last two conditions are boundary conditions. The first tells us the current stock; the last tells us we are free to choose the stock at the end of the period. If utility is always increased by using the raw material, the terminal stock will optimally be zero. Notice that the time period is fixed. This is called a fixed terminal time problem.
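To check the intuition numerically, the problem can be discretised and handed to a generic optimiser. The sketch below is illustrative only: the log utility $U(S, E) = \ln E$, the horizon, and the parameter values are assumptions made here, not part of the problem above.

import numpy as np
from scipy.optimize import minimize

# Discretised exhaustible-resource problem:
#   max  sum_t ln(E_t) e^(-rho*t) dt   subject to total extraction <= S0
T, n = 10.0, 100            # horizon and number of grid points (assumed)
dt = T / n
rho, S0 = 0.05, 100.0       # discount rate and initial stock (assumed)
t = np.arange(n) * dt

def neg_value(E):
    # negative discounted utility (minimising this maximises utility)
    return -np.sum(np.log(E) * np.exp(-rho * t)) * dt

# cumulative extraction cannot exceed the initial stock S(0) = S0
cons = {"type": "ineq", "fun": lambda E: S0 - np.sum(E) * dt}

res = minimize(neg_value, x0=np.full(n, S0 / T),
               bounds=[(1e-9, None)] * n, constraints=cons)
E_opt = res.x               # optimal extraction path

Under log utility the optimal path satisfies $E(t) \propto e^{-\rho t}$ and the stock is run down to zero, consistent with the remark above that the terminal stock is zero whenever utility is increased by extraction.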
The Maximum Principle
In general our prototype problem is to solve,
$\max_u V = \int_0^T F(t, y, u)\, dt$

$\frac{dy}{dt} = f(t, y, u)$,   $y(0) = y_0$
To find the first order conditions that define the extreme values we apply a set of conditions known as the maximum principle.
Step 1.  Form the Hamiltonian function defined as,
$H(t, y, u, \lambda) = F(t, y, u) + \lambda(t) f(t, y, u)$
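For the exhaustible-resource problem above, for instance, $F = U(S, E)\, e^{-\rho t}$ and $f = -E$, so the Hamiltonian reads $H = U(S, E)\, e^{-\rho t} - \lambda(t) E$.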
Step 2.  Find,
$\max_u H(t, y, u, \lambda)$
Or, if as is usual you are looking for an interior solution, apply the weaker condition,
$\frac{\partial H(t, y, u, \lambda)}{\partial u} = 0$
Along with,
$\frac{\partial H(t, y, u, \lambda)}{\partial y} = -\dot{\lambda}$

$\frac{\partial H(t, y, u, \lambda)}{\partial \lambda} = \dot{y}$

$\lambda(T) = 0$
Step 3.  Analyse these conditions.
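As a toy illustration of the three steps (an invented example, not one from the handout), take $F = y - u^2$ and $f = u$: maximise $V = \int_0^1 (y - u^2)\, dt$ subject to $\dot{y} = u$, $y(0) = 0$, $y(1)$ free. The Hamiltonian is

$H = y - u^2 + \lambda u$

Step 2 gives $\partial H / \partial u = -2u + \lambda = 0$, so $u = \lambda / 2$. The costate equation $\dot{\lambda} = -\partial H / \partial y = -1$ together with the transversality condition $\lambda(1) = 0$ gives $\lambda(t) = 1 - t$. Hence $u^*(t) = (1 - t)/2$ and, integrating the equation of motion, $y^*(t) = t/2 - t^2/4$.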
Heuristic Proof of the Maximum Principle
In this section we derive the maximum principle, a set of first order conditions that characterise extreme values of the problem under consideration.
The basic problem is defined by,
$\max_u V = \int_0^T F(t, y, u)\, dt$

$\frac{dy}{dt} = f(t, y, u)$,   $y(0) = y_0$
To derive the maximum principle we attempt to solve the problem using the 'Calculus of Variations'. Essentially the approach is as follows. The dynamic problem is to find the optimal time path for $y(t)$, although we can only use $u(t)$ to steer $y(t)$. It ought to be obvious that,
$\frac{\partial V}{\partial u(t)} = 0$
will not do. This simply finds the best choice in any one period without regard to any future periods. Think of the trade-off between consumption and saving. We need to choose the path of the control variable (and hence of the state variable) that gives us the highest value of the integral subject to the constraints. So we need to optimise in every time period, given the linkages across periods and the constraints. The Calculus of Variations is a way to transform this into a static optimisation problem.
To do this let $u^*(t)$ denote the optimal path of the control variable and consider each possible path as variations about the optimal path.
$u(t) = u^*(t) + \varepsilon P(t)$   (3)
In this case $\varepsilon$ is a small number (in the mathematical sense) and $P(t)$ is a perturbing curve. It simply means all paths can be written as variations about the optimal path. Since we can write the control path this way we can also (indeed must) write the path of the state variable and the boundary points in the same way.
$y(t) = y^*(t) + \varepsilon q(t)$   (4)
$T = T^* + \varepsilon \Delta T$   (5)
$y_T = y_T^* + \varepsilon \Delta y_T$   (6)
The trick is that all of the variables that define the integral path are now functions of $\varepsilon$. As $\varepsilon$ varies we can vary the whole path, including the endpoints, so this trick essentially allows us to solve the dynamic problem as a static problem in $\varepsilon$. That is, to find the optimum (extreme value) path we choose the value of $\varepsilon$ that satisfies,
$\frac{\partial V}{\partial \varepsilon} = 0$   (7)

given (3) to (6).
Since every variable has been written as a function of $\varepsilon$, (7) is the only necessary condition for an optimum that we need. When this condition is applied it yields the various conditions that are referred to as the maximum principle.
In order to show this we first rewrite the problem in a way that allows us to include the Hamiltonian function,
$\max_u V = \int_0^T \left[ F(t, y, u) + \lambda \left( f(t, y, u) - \dot{y} \right) \right] dt$
We can do this because the term inside the brackets is always zero provided the equation of motion is satisfied. Alternatively, written in terms of the Hamiltonian,
$\max_u V = \int_0^T \left[ H(t, y, u, \lambda) - \lambda \dot{y} \right] dt$   (8)
Integrating the second term in the integral by parts¹ we obtain,
$\max_u V = \int_0^T \left[ H(t, y, u, \lambda) + \dot{\lambda} y \right] dt + \lambda(0) y_0 - \lambda(T) y_T$   (9)
Now we apply the necessary condition (7) given (3) to (6).
Recall that to differentiate an integral with respect to a parameter we use Leibniz's rule (page 9). After simplification this yields,
$\frac{\partial V}{\partial \varepsilon} = \int_0^T \left\{ \left[ \frac{\partial H}{\partial y} + \dot{\lambda} \right] q(t) + \frac{\partial H}{\partial u} P(t) \right\} dt + \left[ H \right]_{t=T} \Delta T - \lambda(T) \Delta y_T = 0$   (10)
The three components of this expression provide the conditions defining the optimum. In particular,
$\int_0^T \left\{ \left[ \frac{\partial H}{\partial y} + \dot{\lambda} \right] q(t) + \frac{\partial H}{\partial u} P(t) \right\} dt = 0$
requires, since the perturbing curves $P(t)$ and $q(t)$ are arbitrary, that,
$\frac{\partial H}{\partial y} = -\dot{\lambda}$   and   $\frac{\partial H}{\partial u} = 0$
which is a key part of the maximum principle.
The Transversality Condition
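The remaining terms of (10) deliver this condition (a standard completion of the argument above). With the integral term set to zero, (10) reduces to

$\left[ H \right]_{t=T} \Delta T - \lambda(T) \Delta y_T = 0$

In the fixed terminal time problem $\Delta T = 0$, while the terminal state is free so $\Delta y_T$ is arbitrary. The equality can then only hold if $\lambda(T) = 0$, which is exactly the transversality condition stated in Step 2 of the maximum principle.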
¹ Just integrate by parts: $\int_0^T \lambda \dot{y}\, dt = \left[ \lambda y \right]_0^T - \int_0^T \dot{\lambda} y\, dt$