
Write an archiver for NREL Cambium expansions of Standard Scenarios #565

Open · 10 tasks
krivard opened this issue Jan 31, 2025 · 3 comments · May be fixed by #569

krivard commented Jan 31, 2025

Motivation and context:

NREL selects a subset of their Standard Scenarios (see #561) for expansion and deeper analysis using Cambium, a specialized modeling and analysis tool. The primary contribution of this dataset is hourly long-run marginal emissions rates.

The results are available via the Scenario Viewer and could be downloaded using the same/similar code we use for Standard Scenarios, but the datasets are much larger -- single-digit to tens of GB zipped.

Requirements for archiving

To be archived on Zenodo, a dataset must be:

  • published under an open license that permits reuse and redistribution
  • less than 50 GB in size (when zipped) -- this is the sketchy bit; we might have to split the archive into 5-year batches
  • relevant to energy modeling and research

Checklist for archive creation

Based on the README documentation on creating a new archive:

Links to published archives:

Include a link to the published sandbox archive for review.

krivard linked a pull request Jan 31, 2025 that will close this issue
zaneselvans linked a pull request Feb 1, 2025 that will close this issue

krivard commented Feb 12, 2025

Challenges so far:

  • The server keeps timing out. Not sure if this is a soft IP-based rate limit or actual server load/unreliability. Boosting the initial backoff to 60 seconds helps a little, but some downloads still fail (the backoff pattern is sketched after this list).
  • The data is too big: just two report years would put us over the Zenodo limit, so we'd have to split them into one archive per year, which seems annoying:
| Report             | Size in GB (zipped) |
|--------------------|--------------------:|
| Cambium 2020 Total |               17.61 |
| Cambium 2021 Total |               28.36 |
| Cambium 2022 Total |               36.41 |
| Cambium 2023 Total |                6.58 |
| Grand Total        |               88.96 |
  • An analysis of file sizes suggests the ALL - ALL - ALL files are not easier-to-handle packages of everything, but are instead ...whatever the opposite of microdata is. They're aggregates that can't be disaggregated.
  • An analysis of .zip contents of the 2023 files suggests there is no overlap in files between zip files; every file name only appears once. 😞
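
For reference, here's a minimal sketch of the backoff pattern described in the first bullet, assuming an aiohttp-style download. `download_with_backoff` and its parameters are illustrative, not the actual pudl_archiver retry code:

```python
import asyncio

import aiohttp


async def download_with_backoff(url: str, retries: int = 5, base_delay: int = 60) -> bytes:
    """Download a URL, doubling the retry delay after each timeout."""
    async with aiohttp.ClientSession() as session:
        for attempt in range(retries):
            try:
                async with session.get(
                    url, timeout=aiohttp.ClientTimeout(total=3600)
                ) as resp:
                    resp.raise_for_status()
                    # A real implementation would stream multi-GB files to
                    # disk in chunks rather than buffering them in memory.
                    return await resp.read()
            except (TimeoutError, asyncio.TimeoutError):
                # 60s, 120s, 240s, ... mirroring the escalation we use now.
                await asyncio.sleep(base_delay * 2**attempt)
    raise TimeoutError(f"failed to download {url} after {retries} attempts")
```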


krivard commented Feb 13, 2025

Met with Ella and Zach on how to proceed. Summary:

  • Start with a 5-hour timebox
  • Start with timeout mitigation (try cycling the User-Agent; find out whether the timeout is at the file level or the packet level)
  • If timeouts can be worked around, use inheritance to make one archiver per report year and one resource per source zip (sketched below)
  • Metadata could be a source of duplication; see if there's an easy way to centralize most of the content and vary only the name
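
A rough sketch of the inheritance idea from the last two bullets; the class and attribute names here are hypothetical, not the real pudl_archiver base classes:

```python
class CambiumArchiverBase:
    """Shared download and metadata logic for all Cambium report years."""

    report_year: int  # each subclass pins this to a single year

    @property
    def name(self) -> str:
        # Centralize the naming convention so only the year varies.
        return f"nrelcambium{self.report_year}"

    async def get_resources(self):
        """Yield one downloadable resource per source zip for this year."""
        # Placeholder: the real implementation would query the Scenario
        # Viewer for the zip files belonging to self.report_year.
        ...


class Cambium2022Archiver(CambiumArchiverBase):
    report_year = 2022


class Cambium2023Archiver(CambiumArchiverBase):
    report_year = 2023
```

One archive per report year would also keep every deposition under the 50 GB Zenodo limit, since the largest single year (2022) is 36.41 GB zipped.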


krivard commented Feb 13, 2025

Initial timeout results: Something is fishy with these timestamps:

```
2025-02-12 18:08:11 [    INFO] catalystcoop.pudl_archiver.archivers.classes:135 Downloading file 33 nrelcambium_2022__high_natural_gas_prices__all__all__all.zip 58769 82460f06-548c-4954-b2d9-b84ba92d63e2
2025-02-12 19:10:47 [    INFO] catalystcoop.pudl_archiver.utils:57 Error while executing <coroutine object _download_file_post at 0x1413d1ea0> (try #1, retry in 60s): <class 'TimeoutError'> - 
2025-02-12 19:43:30 [    INFO] catalystcoop.pudl_archiver.utils:57 Error while executing <coroutine object _download_file_post at 0x1413d2680> (try #2, retry in 120s): <class 'TimeoutError'> - 
2025-02-12 19:55:31 [    INFO] catalystcoop.pudl_archiver.utils:57 Error while executing <coroutine object _download_file_post at 0x1413d27a0> (try #3, retry in 240s): <class 'TimeoutError'> - 
2025-02-12 20:09:32 [    INFO] catalystcoop.pudl_archiver.utils:57 Error while executing <coroutine object _download_file_post at 0x1413d2680> (try #4, retry in 480s): <class 'TimeoutError'> - 
2025-02-12 20:27:46 [    INFO] catalystcoop.pudl_archiver.archivers.classes:158 Finished file nrelcambium_2022__high_natural_gas_prices__all__all__all.zip 58769 82460f06-548c-4954-b2d9-b84ba92d63e2
```

Seems like we're timing out significantly after the last retry should have triggered, but retries are never reset, which could be a problem for large files with long download times (one possible mitigation is sketched below).
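
One possible mitigation, sketched under the assumption that an attempt which runs for a long time before timing out was actually transferring data. `retry_with_reset` and its `progress_threshold` are hypothetical, not existing pudl_archiver code:

```python
import asyncio
import time


async def retry_with_reset(make_attempt, retries: int = 5, base_delay: int = 60,
                           progress_threshold: float = 600.0):
    """Retry an async download, resetting the budget after long-lived attempts.

    `make_attempt` is a zero-argument callable returning a fresh coroutine.
    """
    attempt = 0
    while attempt < retries:
        started = time.monotonic()
        try:
            return await make_attempt()
        except (TimeoutError, asyncio.TimeoutError):
            if time.monotonic() - started > progress_threshold:
                # The attempt ran a long time before failing, so the
                # connection was presumably working; start the budget over.
                attempt = 0
            delay = base_delay * 2**attempt  # 60s, 120s, 240s, 480s, ...
            attempt += 1
            await asyncio.sleep(delay)
    raise TimeoutError(f"gave up after {retries} consecutive quick failures")
```

Usage would look something like `await retry_with_reset(lambda: _download_file_post(...))`, though the exact `_download_file_post` signature is an assumption here. Note this only gives up after several consecutive *quick* failures; a file that keeps making progress retries indefinitely, which is the intended trade-off for multi-GB downloads.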
