Scripts to process and upload ML training data to Google Cloud Storage for use on VMs.
Multiple processing scripts exist for NWPs due to variations in variables, coverage, and forecast horizons. Each script specifies its primary use case at the top.
NWP Processing Steps:
- Download individual forecast init time files
- Convert to unzipped Zarr format with combined variables
- Merge into yearly Zarrs with proper sorting, typing and chunking
- Validate data through visualization and testing
- Upload yearly Zarrs to Google Storage
- Issues can arise if you are still downloading data into the location from which you are merging individual init times. The fix is to manually remove any files that show missing data.
- Some yearly NWP files can be very large (~1TB). Test carefully with the number of threads and workers and the memory limits in the Dask client (see the sketch after this list). Note that other tasks running on the machine can affect performance, especially if they are also using lots of RAM.
- Another way to track progress is to watch the Zarr store's size grow by running `du -h` in the appropriate location.
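To give a concrete picture of the merge step and Dask configuration, here is a minimal sketch. It is not one of the repo's processing scripts: the paths, dimension names, dtype, chunk sizes and worker/memory settings are all placeholders to tune for your dataset and machine.

```python
# Hypothetical sketch: merge individual init-time Zarrs into one yearly Zarr.
# Paths, dimension names, dtype, chunk sizes and Dask settings are placeholders.
import glob

import xarray as xr
from dask.distributed import Client

if __name__ == "__main__":
    # Cap workers, threads and memory so the merge does not exhaust RAM.
    client = Client(n_workers=4, threads_per_worker=2, memory_limit="16GB")

    # Open all init-time Zarrs for one year and concatenate along init_time.
    paths = sorted(glob.glob("/data/nwp/ukv/individual/2023-*.zarr"))
    ds = xr.open_mfdataset(paths, engine="zarr", combine="nested", concat_dim="init_time")

    # Sort, cast and rechunk before writing the yearly store.
    ds = ds.sortby("init_time")
    ds = ds.astype("float16")  # example dtype; use whatever your pipeline expects
    ds = ds.chunk({"init_time": 10, "step": -1, "x": 100, "y": 100})

    ds.to_zarr("/data/nwp/ukv/UKV_2023.zarr", consolidated=True)
```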
Satellite data processing is handled by `sat_proc.py`, which downloads satellite imagery data from Google Public Storage and processes it for ML training.
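As a rough illustration only (not the actual logic of `sat_proc.py`), reading a satellite Zarr from a public GCS bucket with xarray could look like the sketch below; the bucket path, time range and dimension names are placeholders, and reading `gs://` paths requires `gcsfs` to be installed.

```python
# Hypothetical sketch: open a public satellite Zarr from Google Cloud Storage
# and trim it down for ML training. The gs:// path, time range and dimension
# names are placeholders, not the real ones used by sat_proc.py.
import xarray as xr

ds = xr.open_zarr("gs://some-public-bucket/satellite/2022.zarr", consolidated=True)

# Example trimming: select a time range and coarsen the spatial resolution.
ds = ds.sel(time=slice("2022-01-01", "2022-06-30"))
ds = ds.coarsen(x=2, y=2, boundary="trim").mean()

ds.to_zarr("/data/satellite/2022_processed.zarr", consolidated=True)
```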
The `gsp_pv_proc.py` script downloads the national PV generation data from Sheffield Solar PV Live, which is used as the target for OCF's national solar power forecast.
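For orientation, a minimal sketch of pulling national PV Live data is shown below. It assumes the `pvlive-api` Python package and that GSP id 0 is the national total; this is not necessarily how `gsp_pv_proc.py` does it, so check the script and the Sheffield Solar documentation for the exact interface.

```python
# Hypothetical sketch: download national PV generation from PV Live using the
# pvlive-api package. gsp_pv_proc.py may do this differently.
from datetime import datetime, timezone

from pvlive_api import PVLive

pvl = PVLive()

# GSP id 0 is the national total; timestamps must be timezone-aware (UTC).
df = pvl.between(
    start=datetime(2023, 1, 1, tzinfo=timezone.utc),
    end=datetime(2023, 12, 31, 23, 30, tzinfo=timezone.utc),
    entity_type="gsp",
    entity_id=0,
    dataframe=True,
)

df.to_csv("national_pv_2023.csv", index=False)
```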
To upload local files to Google Cloud Platform (GCP), you can use the `gsutil` command-line tool. This can be done via:
```bash
gsutil -m cp -r my/folder/path/ gs://your-bucket-name/
```
For potentially faster uploads, you can try the `upload_to_gcs.py` script, which uses multiprocessing to speed things up. However, the limitation is sometimes the internet connection or the write speed of the disk, so it may not be faster.
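In spirit, that multiprocessing approach looks something like the sketch below, written against the `google-cloud-storage` client rather than copied from `upload_to_gcs.py`; the bucket name, destination prefix and process count are placeholders.

```python
# Hypothetical sketch of a multiprocessing upload to GCS, in the spirit of
# upload_to_gcs.py (not the actual script). Bucket, paths and process count
# are placeholders.
from multiprocessing import Pool
from pathlib import Path

from google.cloud import storage

LOCAL_DIR = Path("my/folder/path")
BUCKET_NAME = "your-bucket-name"
DEST_PREFIX = "nwp/ukv"


def upload_one(local_path: Path) -> None:
    """Upload one file, preserving its path relative to LOCAL_DIR."""
    # Storage clients are not shared across processes, so create one here.
    client = storage.Client()
    bucket = client.bucket(BUCKET_NAME)
    blob = bucket.blob(f"{DEST_PREFIX}/{local_path.relative_to(LOCAL_DIR)}")
    blob.upload_from_filename(str(local_path))


if __name__ == "__main__":
    files = [p for p in LOCAL_DIR.rglob("*") if p.is_file()]
    with Pool(processes=8) as pool:
        pool.map(upload_one, files)
```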
Once your data is in the GCS bucket, you can transfer it to a disk on your VM. First, SSH into your VM and make sure the target disk is attached with write permissions. Then mount the disk with read and write privileges using the following command:
```bash
sudo mount -o discard,defaults,rw /dev/ABC /mnt/disks/DISK_NAME
```
(You will need to know the disk name, which you can find with `lsblk`, and replace `ABC` with the actual disk name.)
If updating an existing disk, note that anyone who has the disk mounted will need to unmount it before the disk's read/write access can be changed. If cloning an existing disk and adding to it, see the notes below under "Cloning disks on GCP".
To copy data from your GCS bucket to the mounted disk, use the following command:
```bash
gsutil -m cp -r gs://YOUR_BUCKET_NAME/YOUR_FILE_PATH* /mnt/disks/DISK_NAME/folder
```
The `*` is used to copy all files in that directory.
If issues arise during the transfer and some files have already been copied, use `gsutil rsync` instead. For example:
```bash
gsutil -m rsync -r gs://solar-pv-nowcasting-data/NWP/UK_Met_Office/UKV_extended/UKV_2023.zarr/ /mnt/disks/gcp_data/nwp/ukv/ukv_ext/UKV_2023.zarr/
```
`rsync` synchronizes files by copying only the differences between source and destination. It can be slow because it needs to scan and compare all files first, then transfer the data. For large datasets like NWP files (~1TB), both the scanning and transfer phases take considerable time due to the volume of data involved.
Cloning disks on GCP:
After cloning a disk on GCP, mount it via the GCP UI. Then check that the disk is not corrupted and the transfer was successful with `sudo e2fsck -f /dev/DISK_NAME` (you can find the disk name with the `lsblk` command). When a disk is cloned, the clone keeps the same UUID as the original, which can cause issues when auto-mounting disks on machine reboots. You can check a disk's UUID by running `sudo blkid` in the terminal.
To solve this, change the UUID with `sudo tune2fs -U random /dev/DISK_NAME`. Once that completes, run another check on the disk with `sudo e2fsck -fD /dev/DISK_NAME`. The `-f` option forces a check even if the filesystem seems clean, and the `-D` option optimises directories in the filesystem. You can then confirm the UUID has been updated by running `sudo blkid` again.
Now `fstab` can be set up to mount the cloned disk on reboot; follow the Google Cloud instructions for help with this.
- PRs are welcome! See the Organisation Profile for details on contributing
- Find out about our other projects in the OCF Meta Repo
- Check out the OCF blog for updates
- Follow OCF on LinkedIn
Part of the Open Climate Fix community.
Thanks goes to these wonderful people (emoji key):
- THARAK HEGDE 📖
- Megawattz 💻
This project follows the all-contributors specification. Contributions of any kind welcome!