WIP: Add basic tests for examples notebooks #2252
Conversation
I am not sure this is the best way to do it, as testing it in Travis will take forever. I think we had this discussion at some point before @ColCarroll ?

We can't test them all, unfortunately.
Yeah, it will almost surely time out. I wrote https://github.com/ColCarroll/carpo. @junpenglao I'm not sure what a model zoo would look like for pymc3! One thought would be that it is pre-computed traces for large models. Another thought is that it is pre-written functions for common models (https://github.com/ColCarroll/sampled could help with that).

Yeah, model zoo is probably not the right term, I meant something like an example showcase in a separate repository.
We already are having trouble maintaining the examples we have, so I'm not sure moving them outside of the main repo will help with that. IMHO, maintained examples can live here and unmaintained examples can live in people's personal githubs or gists.

Wow, this is worse than I expected. I'll fix.
If we are diligent about going in and manually checking that they run, say, before each release, then I think that's fine. They don't all have to be tested every time we check in new code. Regular review will be important, however. I've added a task to the release checklist on the wiki to check the notebooks.

getting_started and rugby should be fixed now. #2255 might also have fixed some of the other notebooks.
@fonnesbeck @ColCarroll I'm happy to be responsible for checking the notebooks before each release, if it takes a load off your plate! I have access to a fairly fast GPU tower at home, and can probably use the method @aseyboldt proposed up here (execute via nbconvert) to handle the changes. FWIW I helped with matplotlib's docs in a similar fashion as well, and it'll give me another way to learn more about the code base too.

That sounds great to me -- if you put up a similar post to this one, you can farm out help for updating APIs. Also see @kyleabeauchamp's work on making the test suite run on the GPU (#2264).
@a-rodin are you going to continue this work, or can it be closed?

Closing because of age. Feel free to reopen if you want to fix this up.
In #2236 there is a discussion about the status of the examples notebooks. I checked some of them, and it turned out that some of the notebooks are broken, in the sense that they throw exceptions on execution -- mostly because of API changes, as far as I can see.
So I tried to find a way to test them all in at least some basic way. Although it is hard to test the code logic in the notebooks, it is at least easy to make sure that the code in them runs without errors.
This PR adds a test that runs all the notebooks using the `nbconvert` executor. It turns out that the following notebooks should be fixed:

These notebooks run on CPU for more than 15 minutes, so the tests for them time out:
I'm not sure what would be an elegant way of dealing with these long-running notebooks that time out.
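The test described above can be sketched with nbconvert's `ExecutePreprocessor`, which executes a notebook's cells programmatically and raises if any cell errors. This is a minimal sketch, not the PR's actual test code; the notebook path `docs/source/notebooks/` and the 15-minute timeout are assumptions for illustration.

```python
# Sketch: execute every example notebook and fail on the first cell error.
# The glob path and timeout below are assumptions, not taken from this PR.
import glob

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor


def run_notebook(path, timeout=900):
    """Execute a notebook and return the executed NotebookNode.

    Raises nbconvert.preprocessors.CellExecutionError if a cell fails,
    and TimeoutError if a single cell exceeds `timeout` seconds.
    """
    with open(path) as f:
        nb = nbformat.read(f, as_version=4)
    ep = ExecutePreprocessor(timeout=timeout, kernel_name="python3")
    ep.preprocess(nb, {"metadata": {"path": "."}})
    return nb


if __name__ == "__main__":
    for path in sorted(glob.glob("docs/source/notebooks/*.ipynb")):
        run_notebook(path)
```

One way to handle the long-running notebooks would be to keep a skip list of known-slow notebooks and only execute those manually before a release, while the fast ones run in CI.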