|
24 | 24 | ":author: Thomas Wiecki, Chris Fonnesbeck\n",
|
25 | 25 | ":::\n",
|
26 | 26 | "\n",
|
27 |
| - "Bayesian inference is a powerful tool for extracting inference from data using probability models. This involves an interplay among statistical models, subject matter knowledge, and computational techniques. In building Bayesian models, it is easy to get carried away with complex models at the outset, often leading to an unsatisfactory final result. To avoid these pitfalls, a structured approach is essential. The Bayesian workflow is a systematic approach to building, validating, and refining probabilistic models, ensuring that the models are robust, interpretable, and useful for decision-making. The workflow's iterative nature ensures that modeling assumptions are tested and refined as the model grows, leading to more reliable and interpretable results.\n", |
| 27 | + "Bayesian inference is a powerful tool for drawing inferences from data using probability models. This involves an interplay among statistical models, subject matter knowledge, and computational techniques. In building Bayesian models, it is easy to get carried away with complex models from the outset, often leading to an unsatisfactory final result (or a dead end). To avoid common model development pitfalls, a structured approach is helpful. The *Bayesian workflow* (Gelman *et al.*) is a systematic approach to building, validating, and refining probabilistic models, ensuring that the models are robust, interpretable, and useful for decision-making. The workflow's iterative nature ensures that modeling assumptions are tested and refined as the model grows, leading to more reliable results.\n",
28 | 28 | "\n",
|
29 |
| - "This workflow is particularly powerful in high-level probabilistic programming environments like PyMC, where the flexibility to rapidly prototype and iterate on complex statistical models enables practitioners to focus on the modeling process rather than the underlying computational details. The workflow invlolves moving from simple models via prior checks, fitting, diagnostics, and refinement through to a final product that satisfies the analytic goals, ensuring that computational and conceptual issues are identified and addressed systematically as they are encountered.\n", |
| 29 | + "This workflow is particularly powerful in high-level probabilistic programming environments like PyMC, where the ability to rapidly prototype and iterate on complex statistical models enables practitioners to focus on the modeling process rather than the underlying computational details. The workflow involves moving from simple models, via prior checks, fitting, diagnostics, and refinement, through to a final product that satisfies the analytic goals, making sure that computational and conceptual issues are identified and addressed systematically as they are encountered.\n",
30 | 30 | "\n",
|
31 |
| - "Below we demonstrate the complete Bayesian workflow using COVID-19 case data, showing how to progress from basic exponential growth models to more sophisticated logistic growth formulations, highlighting the critical role of model checking and validation at each step. The model is not intended to be a state-of-the-art epidemiological model, but rather a demonstration of how to iterate from a simple model to a more complex one." |
| 31 | + "Below we demonstrate the Bayesian workflow using COVID-19 case data, showing how to progress from very basic, unrealistic models to more sophisticated formulations, highlighting the critical role of model checking and validation at each step. Here we are not looking to develop a state-of-the-art epidemiological model, but rather to demonstrate how to iterate from a simple model to a more complex one." |
32 | 32 | ]
|
33 | 33 | },
|
34 | 34 | {
|
|
173 | 173 | "3. Run prior predictive check\n",
|
174 | 174 | "4. Fit model\n",
|
175 | 175 | "5. Assess convergence\n",
|
176 |
| - "6. Run posterior predictive check\n", |
| 176 | + "6. Check model fit\n", |
177 | 177 | "7. Improve model\n",
|
178 | 178 | "\n",
|
179 | 179 | "### 1. Plot the data\n",
|
|
20632 | 20632 | "cell_type": "markdown",
|
20633 | 20633 | "metadata": {},
|
20634 | 20634 | "source": [
|
20635 |
| - "### 6. Run posterior predictive check\n", |
| 20635 | + "### 6. Check model fit\n", |
20636 | 20636 | "\n",
|
20637 |
| - "Similar to the prior predictive, we can also generate new data by repeatedly taking samples from the posterior and generating data using these parameters." |
| 20637 | + "Similar to the prior predictive, we can also generate new data by repeatedly taking samples from the posterior and generating data using these parameters. This process is called **posterior predictive checking** and is a crucial step in Bayesian model validation.\n", |
| 20638 | + "\n", |
| 20639 | + "Posterior predictive checking works by:\n", |
| 20640 | + "1. Taking parameter samples from the posterior distribution (which we already have from MCMC sampling)\n", |
| 20641 | + "2. For each set of parameter values, generating new synthetic datasets using the same likelihood function as our model\n", |
| 20642 | + "3. Comparing these synthetic datasets to our observed data\n", |
| 20643 | + "\n", |
| 20644 | + "This allows us to assess whether our model can reproduce key features of the observed data. If the posterior predictive samples look very different from our actual data, it suggests our model may be missing important aspects of the data-generating process. Conversely, if the posterior predictive samples encompass our observed data well, it provides evidence that our model is capturing the essential patterns in the data." |
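The three steps above can be sketched in a few lines of plain NumPy. This is a hedged toy illustration, not the notebook's model: it assumes a simple Poisson likelihood and fakes the "posterior" samples with a Gamma draw (`posterior_lam`), standing in for the MCMC draws that PyMC's `pm.sample_posterior_predictive` would use.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for posterior samples of a Poisson rate parameter
# (in the notebook these would come from MCMC, e.g. idata.posterior)
posterior_lam = rng.gamma(shape=50.0, scale=0.1, size=1000)  # centred near 5

# Illustrative "observed" data
observed = rng.poisson(lam=5.0, size=100)

# Step 2: for each posterior draw, simulate a dataset of the same size
# using the same likelihood as the model (Poisson here)
ppc = np.array([rng.poisson(lam=lam, size=observed.size) for lam in posterior_lam])

# Step 3: compare a summary statistic between synthetic and observed data
obs_mean = observed.mean()
ppc_means = ppc.mean(axis=1)
p_value = (ppc_means >= obs_mean).mean()  # posterior predictive p-value
print(f"observed mean: {obs_mean:.2f}, PPC p-value: {p_value:.2f}")
```

A p-value near 0 or 1 would indicate that the observed data sits in the tail of what the model can generate; in PyMC the same comparison is done (over the full datasets, not just one statistic) with `pm.sample_posterior_predictive` and ArviZ plotting utilities.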
20638 | 20645 | ]
|
20639 | 20646 | },
|
20640 | 20647 | {
|
|
26180 | 26187 | "cell_type": "markdown",
|
26181 | 26188 | "metadata": {},
|
26182 | 26189 | "source": [
|
26183 |
| - "OK, that does not look terrible, the data is at least inside of what the model can produce. Let's look at residuals for systematic errors:" |
| 26190 | + "OK, that does not look terrible; the data essentially behaves like a random draw from the model.\n", |
| 26191 | + "\n", |
| 26192 | + "As an additional check, we can also inspect the model residuals." |
26184 | 26193 | ]
|
26185 | 26194 | },
|
26186 | 26195 | {
|
|
31639 | 31648 | "source": [
|
31640 | 31649 | "### Prediction and forecasting\n",
|
31641 | 31650 | "\n",
|
31642 |
| - "We might also be interested in predicting on unseen or data, or, in the case time-series data like here, in forecasting. In `PyMC` you can do so easily using `pm.Data` nodes. What it allows you to do is define data to a PyMC model that you can later switch out for other data. That way, when you for example do posterior predictive sampling, it will generate samples into the future.\n", |
| 31651 | + "We are often interested in predicting or forecasting. In PyMC, you can do so easily using `pm.Data` nodes, which provide a powerful mechanism for out-of-sample prediction and forecasting.\n", |
| 31652 | + "\n", |
| 31653 | + "Wrapping your input data in `pm.Data` allows you to define data containers within a PyMC model that can be dynamically updated after model fitting. This is particularly useful for prediction scenarios where you want to:\n", |
| 31654 | + "\n", |
| 31655 | + "1. **Train on observed data**: Fit your model using the available training data\n", |
| 31656 | + "2. **Switch to prediction inputs**: Replace the training data with new input values (e.g., future time points)\n", |
| 31657 | + "3. **Generate predictions**: Use posterior predictive sampling to generate forecasts based on the fitted model\n", |
31643 | 31658 | "\n",
|
31644 |
| - "Let's change our model to use `pm.Data` instead." |
| 31659 | + "Let's demonstrate this approach by modifying our exponential growth model to use `pm.Data` nodes." |
31645 | 31660 | ]
|
31646 | 31661 | },
|
31647 | 31662 | {
|
|