edge_color=[G[nodes[0]][nodes[1]][0]['weight'] for nodes in G.edges])
@@ -317,7 +314,7 @@ This means that, for any date $t$ and any state $y \in S$,
= \mathbb P \{ X_{t+1} = y \,|\, X_t, X_{t-1}, \ldots \}
```

-This means that once we know the current state $X_t$, adding knowledge of earlier states $X_{t-1}, X_{t-2}$ provides no additional information about probabilities of **future** states.
+This means that once we know the current state $X_t$, adding knowledge of earlier states $X_{t-1}, X_{t-2}$ provides no additional information about probabilities of *future* states.

Thus, the dynamics of a Markov chain are fully determined by the set of **conditional probabilities**
@@ -356,7 +353,7 @@ By construction, the resulting process satisfies {eq}`mpp`.
```{index} single: Markov Chains; Simulation
```

-A good way to study a Markov chains is to simulate it.
+A good way to study Markov chains is to simulate them.

Let's start by doing this ourselves and then look at libraries that can help
us.
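For reference while reading this diff, a minimal sampler in the spirit of the lecture's `mc_sample_path` might look like the sketch below. The body is an illustrative assumption; only the call signature `(P, ψ_0, ts_length)` is taken from the calls that appear further down.

```python
import numpy as np

def mc_sample_path_sketch(P, ψ_0, ts_length):
    # Draw X_0 from the initial distribution ψ_0, then step through the
    # chain, using row P[X_t] as the conditional distribution of X_{t+1}.
    P = np.asarray(P)
    n = P.shape[0]
    X = np.empty(ts_length, dtype=int)
    X[0] = np.random.choice(n, p=ψ_0)
    for t in range(ts_length - 1):
        X[t+1] = np.random.choice(n, p=P[X[t]])
    return X
```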
@@ -434,7 +431,7 @@ P = [[0.4, 0.6],
Here's a short time series.

```{code-cell} ipython3
-mc_sample_path(P, ψ_0=[1.0, 0.0], ts_length=10)
+mc_sample_path(P, ψ_0=(1.0, 0.0), ts_length=10)
```

It can be shown that for a long series drawn from `P`, the fraction of the
@@ -448,7 +445,7 @@ $X_0$ is drawn.
The following code illustrates this

```{code-cell} ipython3
-X = mc_sample_path(P, ψ_0=[0.1, 0.9], ts_length=1_000_000)
+X = mc_sample_path(P, ψ_0=(0.1, 0.9), ts_length=1_000_000)
np.mean(X == 0)
```
@@ -488,11 +485,11 @@ The following code illustrates

```{code-cell} ipython3
mc = qe.MarkovChain(P, state_values=('unemployed', 'employed'))
-mc.simulate(ts_length=4, init='employed')
+mc.simulate(ts_length=4, init='employed')  # Start at employed initial state
```

```{code-cell} ipython3
-mc.simulate(ts_length=4, init='unemployed')
+mc.simulate(ts_length=4, init='unemployed')  # Start at unemployed initial state
```

```{code-cell} ipython3
@@ -570,7 +567,7 @@ This is very important, so let's repeat it
-The general rule is that post-multiplying a distribution by $P^m$ shifts it forward $m$ units of time.
+The general rule is that postmultiplying a distribution by $P^m$ shifts it forward $m$ units of time.

Hence the following is also valid.
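A quick numerical check of this rule, using a hypothetical stochastic matrix and distribution (the lecture's own `P` is defined elsewhere in the file): a single postmultiplication by $P^m$ agrees with $m$ successive one-step updates.

```python
import numpy as np

P = np.array([[0.4, 0.6],    # hypothetical stochastic matrix
              [0.2, 0.8]])
ψ = np.array([0.25, 0.75])   # hypothetical current distribution
m = 5

shifted = ψ @ np.linalg.matrix_power(P, m)   # forward m periods in one step

stepwise = ψ.copy()
for _ in range(m):
    stepwise = stepwise @ P                  # forward one period at a time

assert np.allclose(shifted, stepwise)
```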
@@ -625,12 +622,12 @@ $$


(mc_eg1-1)=
-### Example 2: Cross-sectional distributions
+### Example 2: cross-sectional distributions

The distributions we have been studying can be viewed either

1. as probabilities or
-1. as cross-sectional frequencies that the Law of Large Numbers leads us to anticipate for large samples.
+1. as cross-sectional frequencies that the law of large numbers leads us to anticipate for large samples.

To illustrate, recall our model of employment/unemployment dynamics for a given worker {ref}`discussed above <mc_eg1>`.
@@ -641,21 +638,21 @@ workers' processes.

Let $\psi_t$ be the current *cross-sectional* distribution over $\{ 0, 1 \}$.

-The cross-sectional distribution records fractions of workers employed and unemployed at a given moment t.
+The cross-sectional distribution records fractions of workers employed and unemployed at a given moment $t$.

-* For example, $\psi_t(0)$ is the unemployment rate.
+* For example, $\psi_t(0)$ is the unemployment rate at time $t$.

What will the cross-sectional distribution be 10 periods hence?

The answer is $\psi_t P^{10}$, where $P$ is the stochastic matrix in
{eq}`p_unempemp`.

This is because each worker's state evolves according to $P$, so
-$\psi_t P^{10}$ is a marginal distribution for a single randomly selected
+$\psi_t P^{10}$ is a [marginal distribution](https://en.wikipedia.org/wiki/Marginal_distribution) for a single randomly selected
worker.

-But when the sample is large, outcomes and probabilities are roughly equal (by an application of the Law
-of Large Numbers).
+But when the sample is large, outcomes and probabilities are roughly equal (by an application of the law
+of large numbers).

So for a very large (tending to infinite) population,
$\psi_t P^{10}$ also represents fractions of workers in
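To see the law of large numbers at work here, one can simulate a large cross-section of workers, each evolving independently under $P$, and compare the empirical state frequencies after 10 periods with $\psi_t P^{10}$. The matrix and distribution below are illustrative placeholders, not the lecture's values from {eq}`p_unempemp`.

```python
import numpy as np

P = np.array([[0.9, 0.1],     # placeholder transition matrix
              [0.05, 0.95]])
ψ_t = np.array([0.3, 0.7])    # placeholder cross-sectional distribution
num_workers = 100_000

# Each worker's state evolves independently according to the rows of P
states = np.random.choice(2, size=num_workers, p=ψ_t)
for _ in range(10):
    u = np.random.random(num_workers)
    states = np.where(u < P[states, 0], 0, 1)

empirical = np.bincount(states, minlength=2) / num_workers
predicted = ψ_t @ np.linalg.matrix_power(P, 10)
print(empirical, predicted)   # close for a large cross-section
```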
@@ -688,11 +685,11 @@ Such distributions are called **stationary** or **invariant**.
(mc_stat_dd)=
Formally, a distribution $\psi^*$ on $S$ is called **stationary** for $P$ if $\psi^* P = \psi^*$.

-Notice that, post-multiplying by $P$, we have $\psi^* P^2 = \psi^* P = \psi^*$.
+Notice that, postmultiplying by $P$, we have $\psi^* P^2 = \psi^* P = \psi^*$.

-Continuing in the same way leads to $\psi^* = \psi^* P^t$ for all $t$.
+Continuing in the same way leads to $\psi^* = \psi^* P^t$ for all $t \ge 0$.

-This tells us an important fact: If the distribution of $\psi_0$ is a stationary distribution, then $\psi_t$ will have this same distribution for all $t$.
+This tells us an important fact: if $\psi_0$ is a stationary distribution, then $\psi_t$ will have this same distribution for all $t \ge 0$.

The following theorem is proved in Chapter 4 of {cite}`sargent2023economic` and numerous other sources.
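As a numerical companion to this definition, one might compute a stationary distribution with `quantecon` and verify the fixed-point property directly (the matrix below is a placeholder):

```python
import numpy as np
import quantecon as qe

P = np.array([[0.4, 0.6],    # placeholder stochastic matrix
              [0.2, 0.8]])

mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

# Stationarity: postmultiplying by P leaves ψ* unchanged,
# and hence ψ* P^t = ψ* for every t >= 0.
assert np.allclose(ψ_star @ P, ψ_star)
```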
@@ -767,7 +764,7 @@ For example, we have the following result

(strict_stationary)=
```{prf:theorem}
-Theorem: If there exists an integer $m$ such that all entries of $P^m$ are
+If there exists an integer $m$ such that all entries of $P^m$ are
strictly positive, with unique stationary distribution $\psi^*$, then

$$
@@ -801,11 +798,10 @@ First, we write a function to iterate the sequence of distributions for `ts_length`
def iterate_ψ(ψ_0, P, ts_length):
    n = len(P)
    ψ_t = np.empty((ts_length, n))
-    ψ = ψ_0
-    for t in range(ts_length):
-        ψ_t[t] = ψ
-        ψ = ψ @ P
-    return np.array(ψ_t)
+    ψ_t[0] = ψ_0
+    for t in range(1, ts_length):
+        ψ_t[t] = ψ_t[t-1] @ P
+    return ψ_t
```

Now we plot the sequence
@@ -814,12 +810,7 @@ Now we plot the sequence
ψ_0 = (0.0, 0.2, 0.8)  # Initial condition

fig = plt.figure()
-ax = fig.add_subplot(111, projection='3d')
-
-ax.set(xlim=(0, 1), ylim=(0, 1), zlim=(0, 1),
-       xticks=(0.25, 0.5, 0.75),
-       yticks=(0.25, 0.5, 0.75),
-       zticks=(0.25, 0.5, 0.75))
+ax = fig.add_subplot(projection='3d')

ψ_t = iterate_ψ(ψ_0, P, 20)
@@ -852,13 +843,9 @@ First, we write a function to draw initial distributions $\psi_0$ of size `num_distributions`
```{code-cell} ipython3
def generate_initial_values(num_distributions):
    n = len(P)
-    ψ_0s = np.empty((num_distributions, n))
-
-    for i in range(num_distributions):
-        draws = np.random.randint(1, 10_000_000, size=n)
-
-        # Scale them so that they add up into 1
-        ψ_0s[i,:] = np.array(draws/sum(draws))
+
+    draws = np.random.randint(1, 10_000_000, size=(num_distributions, n))
+    ψ_0s = draws / draws.sum(axis=1)[:, None]

    return ψ_0s
```
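A quick sanity check on the vectorized version (assuming `np` and `P` are in scope, as elsewhere in the lecture): every row returned should be a valid distribution.

```python
ψ_0s = generate_initial_values(5)
assert np.allclose(ψ_0s.sum(axis=1), 1.0)   # rows sum to 1
assert (ψ_0s >= 0).all()                    # nonnegative entries
```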
@@ -917,7 +904,7 @@ The convergence to $\psi^*$ holds for different initial distributions.



-#### Example: Failure of convergence
+#### Example: failure of convergence


In the case of a periodic chain, with
@@ -1077,7 +1064,7 @@ Solution 1:

```

-Since the matrix is everywhere positive, there is a unique stationary distribution.
+Since the matrix is everywhere positive, there is a unique stationary distribution $\psi^*$ such that $\psi_t \to \psi^*$ as $t \to \infty$.
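One way to check this claim numerically is to iterate an arbitrary initial distribution forward and confirm it settles on a fixed point. The matrix below is a placeholder, since the exercise's matrix sits outside this diff.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],   # placeholder everywhere-positive matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

ψ = np.array([1.0, 0.0, 0.0])    # arbitrary initial distribution
for _ in range(1_000):
    ψ = ψ @ P

assert np.allclose(ψ @ P, ψ)     # ψ has (approximately) reached the stationary ψ*
print(ψ)
```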
0 commit comments