
Commit c8e3c39

Tom's Feb 6 edits of three tax-smoothing lectures
1 parent ffe1f46 commit c8e3c39


3 files changed: +129 -106 lines changed


lectures/tax_smoothing_1.md

+31-26
@@ -23,24 +23,9 @@ kernelspec:
 
 # How to Pay for a War: Part 1
 
-In addition to what's in Anaconda, this lecture will deploy quantecon:
 
-```{code-cell} ipython
----
-tags: [hide-output]
----
-!pip install --upgrade quantecon
-```
-
-## Reader's Guide
+## Overview
 
-Let's start with some standard imports:
-
-```{code-cell} ipython
-import quantecon as qe
-import numpy as np
-import matplotlib.pyplot as plt
-```
 
 This lecture uses the method of **Markov jump linear quadratic dynamic programming** that is described in lecture
 {doc}`Markov Jump LQ dynamic programming <markov_jump_lq>`
@@ -170,6 +155,23 @@ A {doc}`sequel to this lecture <tax_smoothing_2>`
 describes applies Markov LQ control to settings in which a government
 issues risk-free debt of different maturities.
 
+
+
+```{code-cell} ipython
+---
+tags: [hide-output]
+---
+!pip install --upgrade quantecon
+```
+
+Let's start with some standard imports:
+
+```{code-cell} ipython
+import quantecon as qe
+import numpy as np
+import matplotlib.pyplot as plt
+```
+
 ## Barro (1979) Model
 
 We begin by solving a version of the Barro (1979) {cite}`Barro1979` model by mapping it
@@ -362,9 +364,8 @@ which holds in this case:
 S - M @ F, (S - M @ F) @ (A - B @ F)
 ```
 
-This explains the fanning out of the conditional empirical distribution of taxation across time, computing
-by simulation the
-Barro model a large number of times:
+This explains the fanning out of the conditional empirical distribution of taxation across time, computed by simulating the
+Barro model many times and averaging over simulated paths:
 
 ```{code-cell} python3
 T = 500
@@ -410,13 +411,17 @@ equations.
 
 Optimal $P_s,F_s,d_s$ are stored as attributes.
 
-The class also contains a method” for simulating the model.
+The class also contains a method that simulates the model.
 
 ## Barro Model with a Time-varying Interest Rate
 
 We can use the above class to implement a version of the Barro model
-with a time-varying interest rate. The simplest way to extend the model
-is to allow the interest rate to take two possible values. We set:
+with a time-varying interest rate.
+
+A simple way to extend the model
+is to allow the interest rate to take two possible values.
+
+We set:
 
 $$
 p^1_{t,t+1} = \beta + 0.02 = 0.97
@@ -426,19 +431,19 @@ $$
 p^2_{t,t+1} = \beta - 0.017 = 0.933
 $$
 
-Thus, the first Markov state has a low interest rate, and the
+Thus, the first Markov state has a low interest rate and the
 second Markov state has a high interest rate.
 
-We also need to specify a transition matrix for the Markov state.
+We must also specify a transition matrix for the Markov state.
 
 We use:
 
 $$
 \Pi = \begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.8 \end{bmatrix}
 $$
 
-(so each Markov state is persistent, and there is an equal chance
-of moving from one state to the other)
+Here, each Markov state is persistent, and there are equal chances
+of moving from one state to the other.
 
 The choice of parameters means that the unconditional expectation of
 $p_{t,t+1}$ is 0.9515, higher than $\beta (=0.95)$.
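As a quick check on that 0.9515 figure: the stationary distribution of the transition matrix $\Pi$ above puts probability 1/2 on each state, so the unconditional mean of $p_{t,t+1}$ is $0.5 \times 0.97 + 0.5 \times 0.933 = 0.9515$. A minimal sketch of the computation (plain NumPy; the variable names are only illustrative):

```python
import numpy as np

β = 0.95
Π = np.array([[0.8, 0.2],
              [0.2, 0.8]])
prices = np.array([β + 0.02, β - 0.017])   # p¹ = 0.97, p² = 0.933

# Stationary distribution of Π: the left eigenvector associated with eigenvalue 1
eigvals, eigvecs = np.linalg.eig(Π.T)
stat = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1))])
stat = stat / stat.sum()                   # [0.5, 0.5] for this symmetric Π

print(stat @ prices)                       # ≈ 0.9515 > β = 0.95
```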

lectures/tax_smoothing_2.md

+65-51
@@ -23,18 +23,11 @@ kernelspec:
 
 # How to Pay for a War: Part 2
 
-In addition to what's in Anaconda, this lecture deploys the quantecon library:
 
-```{code-cell} ipython
----
-tags: [hide-output]
----
-!pip install --upgrade quantecon
-```
+## Overview
 
-## An Application of Markov Jump Linear Quadratic Dynamic Programming
+This lecture presents another application of Markov jump linear quadratic dynamic programming and constitutes a {doc}`sequel to an earlier lecture <tax_smoothing_1>`.
 
-This is a {doc}`sequel to an earlier lecture <tax_smoothing_1>`.
 
 We use a method introduced in lecture {doc}`Markov Jump LQ dynamic programming <markov_jump_lq>` to
 implement suggestions by Barro (1999 {cite}`barro1999determinants`, 2003 {cite}`barro2003religion`) for extending his
@@ -69,6 +62,17 @@ We assume
 - that interest rates on those bonds are time-varying and in particular are
 governed by a jointly stationary stochastic process.
 
+
+
+In addition to what's in Anaconda, this lecture deploys the quantecon library:
+
+```{code-cell} ipython
+---
+tags: [hide-output]
+---
+!pip install --upgrade quantecon
+```
+
 Let's start with some standard imports:
 
 ```{code-cell} ipython
@@ -90,12 +94,18 @@ We’ll describe two possible specifications
 
 ## One- and Two-period Bonds but No Restructuring
 
-Let $T_t$ denote tax collections, $\beta$ a discount factor,
-$b_{t,t+1}$ time $t+1$ goods that the government promises to
-pay at $t$, $b_{t,t+2}$ time $t+2$ goods that the
-government promises to pay at time $t$, $G_t$ government
-purchases, $p_{t,t+1}$ the number of time $t$ goods received
-per time $t+1$ goods promised, and $p_{t,t+2}$ the number of
+Let
+* $T_t$ denote tax collections
+* $\beta$ be a discount factor
+* $b_{t,t+1}$ be time $t+1$ goods that the government promises to
+pay at $t$
+* $b_{t,t+2}$ be time $t+2$ goods that the
+government promises to pay at time $t$
+* $G_t$ be government
+purchases
+* $p_{t,t+1}$ be the number of time $t$ goods received
+per time $t+1$ goods promised
+* $p_{t,t+2}$ be the number of
 time $t$ goods received per time $t+2$ goods promised.
 
 Evidently, $p_{t, t+1}, p_{t,t+2}$ are inversely related to
@@ -129,23 +139,24 @@ T_t & = G_t + b_{t-2,t} + b_{t-1,t} - p_{t,t+2} b_{t,t+2} - p_{t,t+1} b_{t,t+1}
 \end{bmatrix} & \sim \textrm{functions of Markov state with transition matrix } \Pi \end{aligned}
 $$
 
-Here $w_{t+1} \sim {\cal N}(0,I)$ and $\Pi_{ij}$ is
+Here
+* $w_{t+1} \sim {\cal N}(0,I)$ and $\Pi_{ij}$ is
 the probability that the Markov state moves from state $i$ to
-state $j$ in one period.
-
-The variables
-$T_t, b_{t, t+1}, b_{t,t+2}$ are *control* variables chosen at
-$t$, while the variables $b_{t-1,t}, b_{t-2,t}$ are
-endogenous state variables inherited from the past at time $t$ and
-$p_{t,t+1}, p_{t,t+2}$ are exogenous state variables at time
-$t$.
+state $j$ in one period
+* $T_t, b_{t, t+1}, b_{t,t+2}$ are *control* variables chosen at time
+$t$
+* variables $b_{t-1,t}, b_{t-2,t}$ are
+endogenous state variables inherited from the past at time $t$
+* $p_{t,t+1}, p_{t,t+2}$ are exogenous state variables at time $t$
 
 The parameter $c_1$ imposes a penalty on the government’s issuing
 different quantities of one and two-period debt.
 
 This penalty deters the
 government from taking large “long-short” positions in debt of different
-maturities. An example below will show this in action.
+maturities.
+
+An example below will show the penalty in action.
 
 As well as extending the model to allow for a maturity decision for
 government debt, we can also in principle allow the matrices
@@ -174,7 +185,7 @@ $$
 \end{bmatrix}
 $$
 
-and the complete state
+and the complete state vector
 
 $$
 x_t = \begin{bmatrix} \bar b_t \cr
@@ -277,7 +288,9 @@ $$
 T_t^2 + c_1( b_{t,t+1} - b_{t,t+2})^2 = x_t'R_t x_t + u_t' Q_t u_t + 2 u_t' W_t x_t + c_1 u_t'Q^c u_t
 $$
 
-where $Q^c = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$. Therefore, the overall $Q$ matrix for the Markov jump LQ problem is:
+where $Q^c = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$.
+
+Therefore, the appropriate $Q$ matrix in the Markov jump LQ problem is:
 
 $$
 Q_t^c = Q_t + c_1Q^c
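The role of $Q^c$ is easy to verify numerically: with $u_t = (b_{t,t+1}, b_{t,t+2})'$, the quadratic form $u_t' Q^c u_t$ equals $(b_{t,t+1} - b_{t,t+2})^2$, which is exactly the penalty term added to the period loss. A small NumPy sketch with made-up issuance numbers (the $Q_t$ here is just a placeholder):

```python
import numpy as np

Qc = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
c1 = 0.01

u = np.array([2.0, -3.0])     # illustrative one- and two-period issuance (b_{t,t+1}, b_{t,t+2})
print(u @ Qc @ u)             # 25.0
print((u[0] - u[1]) ** 2)     # 25.0, the same "long-short" penalty

Q_t = np.eye(2)               # placeholder for a state's Q_t matrix
print(Q_t + c1 * Qc)          # the adjusted Q matrix handed to the Markov jump LQ problem
```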
@@ -306,9 +319,9 @@ $$
 Thus, in this problem all the matrices apart from $B$ may depend
 on the Markov state at time $t$.
 
-As shown in the {doc}`previous lecture <tax_smoothing_1>`,
-the `LQMarkov` class can solve Markov jump LQ problems when provided with the
-$A, B, C, R, Q, W$ matrices for each Markov state.
+As shown in the {doc}`previous lecture <tax_smoothing_1>`, when provided with appropriate
+$A, B, C, R, Q, W$ matrices for each Markov state,
+the `LQMarkov` class can solve Markov jump LQ problems.
 
 The function below maps the primitive matrices and parameters from the above
 two-period model into the matrices that the `LQMarkov` class requires:
@@ -375,7 +388,7 @@ With the above function, we can proceed to solve the model in two steps:
 1. Use the `LQMarkov` class to solve the resulting n-state Markov
 jump LQ problem.
 
-## Penalty on Different Issuance Across Maturities
+## Penalty on Different Issues Across Maturities
 
 To implement a simple example of the two-period model, we assume that
 $G_t$ follows an AR(1) process:
@@ -395,8 +408,9 @@ Therefore, in this example, $A_{22}, C_2$ and $U_g$ are not
 time-varying.
 
 We will assume that there are two Markov states, one with a
-flatter yield curve, and one with a steeper yield curve. In state 1,
-prices are:
+flatter yield curve, and one with a steeper yield curve.
+
+In state 1, prices are:
 
 $$
 p^1_{t,t+1} = \beta \hspace{2mm} , \hspace{2mm} p^1_{t,t+2} = \beta^2 - 0.02
@@ -411,8 +425,8 @@ $$
 We first solve the model with no penalty parameter on different issuance
 across maturities, i.e. $c_1 = 0$.
 
-We also need to specify a
-transition matrix for the Markov state, we use:
+We specify that the
+transition matrix for the Markov state is
 
 $$
 \Pi = \begin{bmatrix} 0.9 & 0.1 \\ 0.1 & 0.9 \end{bmatrix}
@@ -472,10 +486,10 @@ The above simulations show that when no penalty is imposed on different
 issuances across maturities, the government has an incentive to take
 large “long-short” positions in debt of different maturities.
 
-To prevent such an outcome, we now set $c_1 = 0.01$.
+To prevent such outcomes, we set $c_1 = 0.01$.
 
-This penalty is enough
-to ensure that the government issues positive quantities of both one and
+This penalty is big enough
+to motivate the government to issue positive quantities of both one- and
 two-period debt:
 
 ```{code-cell} python3
@@ -517,7 +531,7 @@ plt.show()
 
 ## A Model with Restructuring
 
-This model alters two features of the previous model:
+We now alter two features of the previous model:
 
 1. The maximum horizon of government debt is now extended to a general
 *H* periods.
@@ -585,7 +599,8 @@ In terms of dimensions, the first two matrices defined above are $(H-1) \times H
 The last is $1 \times H$
 
 We can now write the government’s budget constraint in matrix notation.
-Rearranging the government budget constraint gives:
+
+We can rearrange the government budget constraint to become
 
 $$
 T_t = b_t^{t-1} + \sum_{j=1}^{H-1} p_{t+j}^t b_{t+j}^{t-1} + G_t - \sum_{j=1}^H p_{t+j}^t b_{t+j}^t
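In words, the rearranged constraint says that taxes cover debt maturing today, plus the cost of buying back the outstanding tail of old promises at today's prices, plus purchases, minus the revenue from the newly issued debt profile. A throwaway numerical sketch for $H = 3$ (every quantity and price below is invented purely for illustration):

```python
import numpy as np

H = 3
p = np.array([0.95, 0.88, 0.83])        # p^t_{t+1}, p^t_{t+2}, p^t_{t+3} (illustrative)

b_due_now = 1.0                         # b^{t-1}_t, debt maturing at t
b_old_tails = np.array([0.4, 0.3])      # b^{t-1}_{t+1}, b^{t-1}_{t+2}, the H-1 old promises
b_new = np.array([0.5, 0.4, 0.2])       # b^t_{t+1}, ..., b^t_{t+H}, new issues chosen at t
G = 0.8                                 # government purchases

T = b_due_now + p[:H - 1] @ b_old_tails + G - p @ b_new
print(T)                                # taxes implied by the budget constraint
```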
@@ -597,7 +612,7 @@ $$
 $$
 T_t = \tilde S_x \bar b_t + (S_s p_t) \cdot (S_x \bar b_t) + U_g z_t - p_t \cdot u_t
 
-If we want to write this in terms of the full state, we have:
+To express $T_t$ as a function of the full state, let
 
 $$
 T_t = \begin{bmatrix} (\tilde S_x + p_t'S_s'S_x) & Ug \end{bmatrix} x_t - p_t' u_t
@@ -626,7 +641,7 @@ $$
 where to economize on notation we adopt the convention that for the linear state matrices
 $R_t \equiv R_{s_t}, Q_t \equiv W_{s_t}$ and so on.
 
-We'll continue to use this convention also for the linear state matrices $A, B, W$ and so on below.
+We'll use this convention for the linear state matrices $A, B, W$ and so on below.
 
 Because the payoff function also includes the penalty parameter for
 rescheduling, we have:
@@ -687,9 +702,9 @@ This completes the mapping into a Markov jump LQ problem.
 
 ## Restructuring as a Markov Jump Linear Quadratic Control Problem
 
-As with the previous model, we can use a function to map the primitives
-of the model with restructuring into the matrices that the `LQMarkov`
-class requires:
+We can define a function that maps the primitives
+of the model with restructuring into the matrices required by the `LQMarkov`
+class:
 
 ```{code-cell} python3
 def LQ_markov_mapping_restruct(A22, C2, Ug, T, p_t, c=0):
@@ -741,11 +756,10 @@ def LQ_markov_mapping_restruct(A22, C2, Ug, T, p_t, c=0):
 
 ### Example with Restructuring
 
-As an example of the model with restructuring, consider this model
-where $H = 3$.
+As an example, let $H = 3$.
 
-We will assume that there are two Markov states, one with a
-flatter yield curve, and one with a steeper yield curve.
+Assume that there are two Markov states, one with a
+flatter yield curve, the other with a steeper yield curve.
 
 In state 1,
 prices are:
@@ -760,8 +774,8 @@ $$
 p^2_{t,t+1} = 0.9295 \hspace{2mm} , \hspace{2mm} p^2_{t,t+2} = 0.902 \hspace{2mm} , \hspace{2mm} p^2_{t,t+3} = 0.8769
 $$
 
-We will assume the same transition matrix and $G_t$ process as
-above
+We specify the same transition matrix and $G_t$ process that we used earlier.
+
 
 ```{code-cell} python3
 # New model parameters
