|
2117 | 2117 | " \n",
|
2118 | 2118 | "### Normalized Estimated Error Squared (NEES)\n",
|
2119 | 2119 | "\n",
|
2120 | | - "The first method is the most powerful, but only possible in simulations. If you are simulating a system you know its true value, or 'ground truth'. It is then trival to compute the error of the system at any step as the difference between ground truth ($\\mathbf x$) and the filter's state estimate ($\\hat{\\mathbf x}$):\n",
| 2120 | + "The first method is the most powerful, but only possible in simulations. If you are simulating a system, you know its true value, or 'ground truth'. It is then trivial to compute the error of the system at any step as the difference between ground truth ($\\mathbf x$) and the filter's state estimate ($\\hat{\\mathbf x}$):\n",
2121 | 2121 | "\n",
|
2122 | 2122 | "$$\\tilde{\\mathbf x} = \\mathbf x - \\hat{\\mathbf x}$$\n",
|
2123 | 2123 | "\n",
|
|
2195 | 2195 | "```python\n",
|
2196 | 2196 | "from filterpy.stats import NEES\n",
|
2197 | 2197 | "```\n",
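"\n",
"Here is a minimal sketch of the computation, written out in NumPy rather than with the library function; the arrays are made-up toy data standing in for a real filter run. Each error is weighted by the inverse of the filter's covariance, and for a consistent filter the mean NEES should be close to the dimension of $\\mathbf x$.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def nees(xs, est_xs, ps):\n",
"    # NEES at each step: (x - x_hat)' inv(P) (x - x_hat)\n",
"    return [(x - est).T @ np.linalg.inv(p) @ (x - est)\n",
"            for x, est, p in zip(xs, est_xs, ps)]\n",
"\n",
"xs     = [np.array([1.0, 0.5]), np.array([2.0, 0.5])]  # ground truth\n",
"est_xs = [np.array([1.1, 0.4]), np.array([1.9, 0.6])]  # filter estimates\n",
"ps     = [np.eye(2) * 0.02] * 2                        # filter covariances\n",
"\n",
"# a consistent filter should give a mean NEES near dim(x), here 2\n",
"print(np.mean(nees(xs, est_xs, ps)))\n",
"```\n",
"\n",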
|
2198 | | - "This is an excellent measure of the filter's performance, and should be used whenever possible, especially in production code when you need to evaluate the filter while it is running. While designing a filter I still prefer plotting the residauls as it gives me a more intuitive understand what is happening.\n",
| 2198 | + "This is an excellent measure of the filter's performance, and should be used whenever possible, especially in production code when you need to evaluate the filter while it is running. While designing a filter I still prefer plotting the residuals as it gives me a more intuitive understanding of what is happening.\n",
2199 | 2199 | "\n",
|
2200 | 2200 | "However, if your simulation is of limited fidelity, then you need to use another approach."
|
2201 | 2201 | ]
|
|
2234 | 2234 | "likelihood = multivariate_normal.pdf(z.flatten(), mean=hx, cov=S)\n",
|
2235 | 2235 | "```\n",
|
2236 | 2236 | "\n",
|
2237 | | - "In practice it happens a bit differently. Likelihoods can be difficult to deal with mathmatically. It is common to compute and use the *log-likelihood* instead, which is just the natural log of the likelihood. This has several benefits. First, the log is strictly increasing, and it reaches it's maximum value at the same point of the function it is applied to. If you want to find the maximum of a function you normally take the derivative of it; it can be difficult to find the derivative of some arbitrary function, but finding $\\frac{d}{dx} log(f(x))$ is trivial, and the result is the same as $\\frac{d}{dx} f(x)$. We don't use this property in this book, but it is essential when performing analysis on filters.\n",
| 2237 | + "In practice it happens a bit differently. Likelihoods can be difficult to deal with mathematically. It is common to compute and use the *log-likelihood* instead, which is just the natural log of the likelihood. This has several benefits. First, the log is strictly increasing, so it reaches its maximum at the same point as the function it is applied to. If you want to find the maximum of a function you normally take its derivative; it can be difficult to differentiate an arbitrary function, but finding $\\frac{d}{dx} \\log f(x)$ is often much easier, and it is zero at exactly the points where $\\frac{d}{dx} f(x)$ is zero, so both locate the same maximum. Second, likelihoods can be vanishingly small numbers that underflow floating point arithmetic, while their logarithms remain well-scaled, and products of likelihoods become simple sums of log-likelihoods. We don't use these properties in this book, but they are essential when performing analysis on filters.\n",
2238 | 2238 | "\n",
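"\n",
"To see the numerical benefit, here is a small sketch; the values of `hx`, `S`, and `z` are made up for illustration. For a measurement far from the prediction the likelihood underflows to zero, while computing the log-likelihood directly remains stable.\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy.stats import multivariate_normal\n",
"\n",
"hx = np.array([0., 0.])   # measurement predicted from the state\n",
"S  = np.eye(2)            # system uncertainty\n",
"z  = np.array([50., 50.]) # a measurement far from the prediction\n",
"\n",
"# the likelihood underflows to 0.0, so its log is -inf\n",
"print(np.log(multivariate_normal.pdf(z, mean=hx, cov=S)))\n",
"\n",
"# logpdf computes the log-likelihood directly, and is stable\n",
"print(multivariate_normal.logpdf(z, mean=hx, cov=S))\n",
"```\n",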
|
2239 | 2239 | "The likelihood and log-likelihood are computed for you when `update()` is called, and are accessible via the `log_likelihood` and `likelihood` attributes. Let's look at this: I'll run the filter with several measurements within the expected range, and then inject measurements far from the expected values:"
|
2240 | 2240 | ]
|
|