Capture profiling statistics #1110
Force-pushed from 65bcd66 to a9b3c55
Force-pushed from 274a55a to 3f3507a
during profiling.
Co-authored-by: Matt Graham <[email protected]>
Force-pushed from 177b3a4 to 72d9cd6
- Resolve conflicts in requirements/dev.in and regenerate requirements/dev.txt using pip-compile
- Also wrap ignore_warnings into scale_run function
- Only work when within a package
- Decrease chance of keyword arg name collisions
- Handle missing value in command-line args
- Remove unnecessary logic for dealing with missing args
- Correct type annotations
I've updated the branch to change the default values of the profiling run parameters to a 5-year simulation with a 50k initial population in mode 2, and now have the arguments to |
Almost closes issue #686 - we just need to decide on the parameters that we should pass to the `scale_run` script itself, and to capture the extra statistics that are listed in the issue itself.

`.pyisession` files are no longer produced

This alleviates the issue of the file sizes of the `.pyisession` outputs. They are no longer pushed to the profiling repository - instead we push the `stats.json` files (discussed below) and, optionally, the rendered HTML files of the session results. The HTML outputs are significantly smaller than the raw `.pyisession` outputs (of the order of 10s/100s of kB rather than 100s/1000s of MB).

Profiling captures additional information on top of the profiling output
The profiling runs are now set up to output a `.stats.json` file upon completion, which captures information from the profiling session as well as additional information about the simulation itself. The output file is just a JSON file and can be parsed as such. Currently we are capturing:

There is scope to include additional statistics that are of interest, which can be done by adding additional fields to the dictionaries that are produced in the `record_XXX_statistics` functions.
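Since the output is plain JSON, downstream tooling can read it with the standard library alone. A minimal sketch of the round trip, assuming a payload assembled from dictionaries like those the `record_XXX_statistics` functions produce (all field names below are hypothetical, not the actual keys the script records):

```python
import json

# Hypothetical stat dictionaries, standing in for the output of the
# record_XXX_statistics functions; merged into one payload for the file.
profiling_stats = {"cpu_time": 1234.5, "peak_memory_mb": 2048}
simulation_stats = {"initial_population": 50_000, "years_simulated": 5}

stats = {**profiling_stats, **simulation_stats}

# The .stats.json output is ordinary JSON, so it round-trips cleanly.
serialised = json.dumps(stats, indent=2)
parsed = json.loads(serialised)
print(parsed["initial_population"])  # -> 50000
```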
The `run_profiling.py` script can also be passed the `--html` flag to produce an HTML file containing the results of the profiling session (as rendered by pyinstrument), if we so choose.

Additionally, `run_profiling.py` can be called with an `--additional-stats` flag, which can be used to pass shell or workflow variables to the program; these will then also be included in the `.stats.json` output. For example, will result in the following key/value pairs appearing in the output:
Sample output:

Run on commit 3f3507a23483fa5e373d9b518968aefb8c84cbff, manually passing the `sha` and `trigger` keys:

```shell
tox -vv -e profile -- --additional-stats \
    sha=3f3507a23483fa5e373d9b518968aefb8c84cbff \
    trigger="manual trigger"
```

producing the `.stats.json` file:
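The mapping from those command-line pairs to output keys can be sketched as follows; the helper below is illustrative only, not the script's actual parsing code:

```python
def parse_additional_stats(pairs):
    """Split each 'key=value' string at the first '=' into a dict entry."""
    stats = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        stats[key] = value
    return stats

# The shell quoting in the tox invocation delivers each pair as one argument.
extra = parse_additional_stats([
    "sha=3f3507a23483fa5e373d9b518968aefb8c84cbff",
    "trigger=manual trigger",
])
print(extra["trigger"])  # -> manual trigger
```

These key/value pairs would then be merged into the dictionary that is serialised to `.stats.json` alongside the recorded profiling statistics.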