@@ -58,14 +58,9 @@ import numpy as np

To explain irreducibility, let's take $P$ to be a fixed stochastic matrix.

-Two states $x$ and $y$ are said to **communicate** with each other if
-there exist positive integers $j$ and $k$ such that
+State $x$ is called **accessible** (or **reachable**) from state $y$ if $P^t(y,x)>0$ for some integer $t\ge 0$.

-$$
-P^j(x, y) > 0
-\quad \text{and} \quad
-P^k(y, x) > 0
-$$
+Two states, $x$ and $y$, are said to **communicate** if they are accessible from each other.
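
For a finite chain, accessibility can be checked directly from powers of $P$, since $P^t(y,x)$ is the probability of moving from $y$ to $x$ in $t$ steps. The sketch below is illustrative only (the helper functions and the $\alpha$, $\beta$ values are not part of the lecture):

```{code-cell} ipython3
import numpy as np

α, β = 0.1, 0.05    # illustrative parameters for a two-state worker chain
P = np.array([[1 - α, α],
              [β, 1 - β]])

def is_accessible(P, x, y):
    "True if state x is accessible from state y, i.e. P^t(y, x) > 0 for some t >= 0."
    n = len(P)
    Pt = np.eye(n)          # P^0
    for t in range(n):      # in an n-state chain, n-1 steps suffice for reachability
        if Pt[y, x] > 0:
            return True
        Pt = Pt @ P
    return Pt[y, x] > 0

def communicate(P, x, y):
    "True if x and y are accessible from each other."
    return is_accessible(P, x, y) and is_accessible(P, y, x)

communicate(P, 0, 1)    # True whenever α > 0 and β > 0
```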

In view of our discussion {ref}`above <finite_mc_mstp>`, this means precisely
that
@@ -142,7 +137,7 @@ We'll come back to this a bit later.

### Irreducibility and stationarity

-We discussed uniqueness of stationary distributions our {doc}`earlier lecture on Markov chains <markov_chains_I>`
+We discussed uniqueness of stationary distributions in our earlier lecture {doc}`markov_chains_I`.

There we {prf:ref}`stated <mc_po_conv_thm>` that uniqueness holds when the transition matrix is everywhere positive.
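
In practice, irreducibility is easy to check numerically. Here is a minimal sketch using the `quantecon` library; the 3-state matrix is illustrative and not one used elsewhere in the lecture:

```{code-cell} ipython3
import numpy as np
import quantecon as qe

P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.1, 0.8]])    # illustrative stochastic matrix

mc = qe.MarkovChain(P)
print(mc.is_irreducible)             # True: every state communicates with every other
print(mc.communication_classes)      # a single communication class
print(mc.stationary_distributions)   # one row, i.e. a unique stationary distribution
```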
@@ -174,16 +169,16 @@ distribution, then, for all $x \in S$,
```{math}
:label: llnfmc0

-\frac{1}{m} \sum_{t = 1}^m \mathbf{1}\{X_t = x\} \to \psi^*(x)
+\frac{1}{m} \sum_{t = 1}^m \mathbb{1}\{X_t = x\} \to \psi^*(x)
\quad \text{as } m \to \infty
```

````

Here

-* $\{X_t\}$ is a Markov chain with stochastic matrix $P$ and initial.
-distribution $\psi_0$
+* $\{X_t\}$ is a Markov chain with stochastic matrix $P$ and initial distribution $\psi_0$
+
* $\mathbb{1}\{X_t = x\} = 1$ if $X_t = x$ and zero otherwise.

The result in [theorem 4.3](llnfmc0) is sometimes called **ergodicity**.
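
As a quick sanity check of {eq}`llnfmc0` before the examples below, we can compare a long-run time average with $\psi^*$ on a small chain. This is only a sketch; the matrix, the initial state, and the simulation length are arbitrary choices:

```{code-cell} ipython3
import numpy as np
import quantecon as qe

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])            # illustrative irreducible matrix
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

m = 100_000
X = mc.simulate(m, init=0)            # under irreducibility, any initial state works

for x in range(len(P)):
    time_average = np.mean(X == x)    # fraction of time spent in state x
    print(f"state {x}: time average = {time_average:.4f}, ψ*({x}) = {ψ_star[x]:.4f}")
```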
@@ -196,16 +191,16 @@ This gives us another way to interpret the stationary distribution (provided irr

Importantly, the result is valid for any choice of $\psi_0$.

-The theorem is related to {doc}`the Law of Large Numbers <lln_clt>`.
+The theorem is related to {doc}`the law of large numbers <lln_clt>`.

It tells us that, in some settings, the law of large numbers holds even when the
sequence of random variables is [not IID](iid_violation).


(mc_eg1-2)=
-### Example: Ergodicity and unemployment
+### Example: ergodicity and unemployment

-Recall our cross-sectional interpretation of the employment/unemployment model {ref}`discussed above <mc_eg1-1>`.
+Recall our cross-sectional interpretation of the employment/unemployment model {ref}`discussed before <mc_eg1-1>`.

Assume that $\alpha \in (0,1)$ and $\beta \in (0,1)$, so that irreducibility holds.
@@ -235,7 +230,7 @@ Let's denote the fraction of time spent in state $x$ over the period $t=1,
\ldots, n$ by $\hat p_n(x)$, so that

$$
-\hat p_n(x) := \frac{1}{n} \sum_{t = 1}^n \mathbf{1}\{X_t = x\}
+\hat p_n(x) := \frac{1}{n} \sum_{t = 1}^n \mathbb{1}\{X_t = x\}
\qquad (x \in \{0, 1, 2\})
$$
@@ -261,9 +256,9 @@ fig, ax = plt.subplots()
ax.axhline(ψ_star[x], linestyle='dashed', color='black',
           label = fr'$\psi^*({x})$')
# Compute the fraction of time spent in state 0, starting from different x_0s
-for x0 in range(3):
+for x0 in range(len(P)):
    X = mc.simulate(ts_length, init=x0)
-    p_hat = (X == x).cumsum() / (1 + np.arange(ts_length))
+    p_hat = (X == x).cumsum() / np.arange(1, ts_length+1)
    ax.plot(p_hat, label=fr'$\hat p_n({x})$ when $X_0 = \, {x0}$')
ax.set_xlabel('t')
ax.set_ylabel(fr'$\hat p_n({x})$')
@@ -307,14 +302,13 @@ The following figure illustrates
```{code-cell} ipython3
P = np.array([[0, 1],
              [1, 0]])
-ts_length = 10_000
+ts_length = 100
mc = qe.MarkovChain(P)
n = len(P)
fig, axes = plt.subplots(nrows=1, ncols=n)
ψ_star = mc.stationary_distributions[0]

for i in range(n):
-    axes[i].set_ylim(0.45, 0.55)
    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color='black',
                    label = fr'$\psi^*({i})$')
    axes[i].set_xlabel('t')
@@ -324,11 +318,10 @@ for i in range(n):
    for x0 in range(n):
        # Generate time series starting at different x_0
        X = mc.simulate(ts_length, init=x0)
-        p_hat = (X == i).cumsum() / (1 + np.arange(ts_length))
+        p_hat = (X == i).cumsum() / np.arange(1, ts_length+1)
        axes[i].plot(p_hat, label=f'$x_0 = \, {x0}$')

    axes[i].legend()
-
plt.tight_layout()
plt.show()
```
@@ -341,7 +334,7 @@ However, the distribution at each state does not.

### Example: political institutions

-Let's go back to the political institutions mode with six states discussed {ref}`in a previous lecture <mc_eg3>` and study ergodicity.
+Let's go back to the political institutions model with six states discussed {ref}`in a previous lecture <mc_eg3>` and study ergodicity.


Here's the transition matrix.
@@ -374,19 +367,18 @@ P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
     [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
     [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]

-ts_length = 10_000
+ts_length = 2500
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]
-fig, ax = plt.subplots(figsize=(9, 6))
-X = mc.simulate(ts_length)
+fig, ax = plt.subplots()
+X = mc.simulate(ts_length, random_state=1)
# Center the plot at 0
-ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)
+ax.axhline(linestyle='dashed', lw=2, color='black')


for x0 in range(len(P)):
    # Calculate the fraction of time for each state
-    p_hat = (X == x0).cumsum() / (1 + np.arange(ts_length))
+    p_hat = (X == x0).cumsum() / np.arange(1, ts_length+1)
    ax.plot(p_hat - ψ_star[x0], label=f'$x = {x0+1}$')
ax.set_xlabel('t')
ax.set_ylabel(r'$\hat p_n(x) - \psi^* (x)$')
@@ -395,29 +387,6 @@ ax.legend()
plt.show()
```

-### Expectations of geometric sums
-
-Sometimes we want to compute the mathematical expectation of a geometric sum, such as
-$\sum_t \beta^t h(X_t)$.
-
-In view of the preceding discussion, this is
-
-$$
-\mathbb{E}
-    \left[
-        \sum_{j=0}^\infty \beta^j h(X_{t+j}) \mid X_t
-        = x
-    \right]
-= x + \beta (Ph)(x) + \beta^2 (P^2 h)(x) + \cdots
-$$
-
-By the {ref}`Neumann series lemma <la_neumann>`, this sum can be calculated using
-
-$$
-I + \beta P + \beta^2 P^2 + \cdots = (I - \beta P)^{-1}
-$$
-
-
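
For reference, the calculation described in the section above can be sketched numerically as follows; the values of $\beta$, $P$, and $h$ are purely illustrative:

```{code-cell} ipython3
import numpy as np

β = 0.96                                  # illustrative discount factor
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                # illustrative stochastic matrix
h = np.array([1.0, 2.0])                  # illustrative function of the state

# v(x) = E[sum_j β^j h(X_{t+j}) | X_t = x], computed as ((I - βP)^{-1} h)(x)
v = np.linalg.solve(np.eye(len(P)) - β * P, h)

# Check against a truncated version of the series h + βPh + β^2 P^2 h + ...
v_trunc = sum(β**j * np.linalg.matrix_power(P, j) @ h for j in range(500))
print(v)
print(v_trunc)
```
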
## Exercises

````{exercise}
@@ -506,14 +475,13 @@ Part 2:
```{code-cell} ipython3
ts_length = 1000
mc = qe.MarkovChain(P)
-fig, ax = plt.subplots(figsize=(9, 6))
-X = mc.simulate(ts_length)
-ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)
+fig, ax = plt.subplots()
+X = mc.simulate(ts_length, random_state=1)
+ax.axhline(linestyle='dashed', lw=2, color='black')

-for x0 in range(8):
+for x0 in range(len(P)):
    # Calculate the fraction of time for each worker
-    p_hat = (X == x0).cumsum() / (1 + np.arange(ts_length))
+    p_hat = (X == x0).cumsum() / np.arange(1, ts_length+1)
    ax.plot(p_hat - ψ_star[x0], label=f'$x = {x0+1}$')
ax.set_xlabel('t')
ax.set_ylabel(r'$\hat p_n(x) - \psi^* (x)$')
@@ -553,7 +521,7 @@ In other words, if $\{X_t\}$ represents the Markov chain for
employment, then $\bar X_m \to p$ as $m \to \infty$, where

$$
-\bar X_m := \frac{1}{m} \sum_{t = 1}^m \mathbf{1}\{X_t = 0\}
+\bar X_m := \frac{1}{m} \sum_{t = 1}^m \mathbb{1}\{X_t = 0\}
$$

This exercise asks you to illustrate convergence by computing
@@ -580,31 +548,27 @@ As $m$ gets large, both series converge to zero.

```{code-cell} ipython3
α = β = 0.1
-ts_length = 10000
+ts_length = 3000
p = β / (α + β)

P = ((1 - α, α),    # Careful: P and p are distinct
     (β, 1 - β))
mc = qe.MarkovChain(P)

-fig, ax = plt.subplots(figsize=(9, 6))
-ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)
+fig, ax = plt.subplots()
+ax.axhline(linestyle='dashed', lw=2, color='black')

-for x0, col in ((0, 'blue'), (1, 'green')):
+for x0 in range(len(P)):
    # Generate time series for worker that starts at x0
    X = mc.simulate(ts_length, init=x0)
    # Compute fraction of time spent unemployed, for each n
-    X_bar = (X == 0).cumsum() / (1 + np.arange(ts_length))
+    X_bar = (X == 0).cumsum() / np.arange(1, ts_length+1)
    # Plot
-    ax.fill_between(range(ts_length), np.zeros(ts_length), X_bar - p, color=col, alpha=0.1)
-    ax.plot(X_bar - p, color=col, label=f'$x_0 = \, {x0}$')
-    # Overlay in black--make lines clearer
-    ax.plot(X_bar - p, 'k-', alpha=0.6)
+    ax.plot(X_bar - p, label=f'$x_0 = \, {x0}$')
ax.set_xlabel('t')
ax.set_ylabel(r'$\bar X_m - \psi^* (x)$')

-ax.legend(loc='upper right')
+ax.legend()
plt.show()
```