Add visualization graphs to eval #102
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main     #102   +/-   ##
=======================================
  Coverage   79.94%   79.94%
=======================================
  Files          12       12
  Lines         354      354
=======================================
  Hits          283      283
  Misses         71       71
=======================================
```

☔ View full report in Codecov by Sentry.
Hi @jacobbieker @peterdudfield, could you give any suggestions for the issues above? Thanks!
Hi @jacobbieker, is it a known issue that a lot of timestamps in results_df do not have real generation_power values? Thanks!
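For reference, a quick pandas check along these lines can quantify and locate the gaps. This is a hypothetical helper: the `generation_power` and `timestamp` column names are assumptions based on the discussion above, not confirmed from the PR's code.

```python
import pandas as pd


def report_missing_generation(results_df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows of results_df that lack a real generation_power value."""
    # Column names are assumptions; adjust to match the actual results_df schema.
    missing = results_df[results_df["generation_power"].isna()]
    print(f"{len(missing)} of {len(results_df)} rows have no generation_power")
    print(missing["timestamp"].unique())
    return missing
```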
You can use
Is there something you could add to the README about visualizing the evaluation?
Hmm, this is interesting. Would you be able to write down which values they are? It might be worth putting these in a different issue.
Thanks! Let me address these issues. :)
Created an issue here: #110
Pull Request
Description
Add visualization graphs to eval and fix the TODO in evaluation.py.
Includes a "Predicted vs. Actual" graph for every 48-hour segment and an "Error Distribution" graph across all tested instances.
How Has This Been Tested?
Tested with:

```
python scripts/run_evaluation.py
```
Checklist:
Remaining Issues
The evaluation script imports `quartz_solar_forecast`, and it seems my visualization code has to be updated in the package before I can use it. Investigating how to solve this.

Examples
error_distribution.png

pred_vs_actual_9531_2021_07_28_22_00_00.png

pred_vs_actual_9531_2021_09_16_07_00_00.png