- Orion version: latest
- Python version: 3.11
- Operating System: Linux
Description
In my case, the training data is very large and cannot be loaded into memory all at once. It seems that `time_segments_aggregate`, `SimpleImputer`, `MinMaxScaler`, and `rolling_window_sequences` in the pipeline all require the full dataset to be held in memory. Can Orion handle training on a 2-10 TB dataset?
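
For reference, below is a minimal sketch of the kind of chunk-wise preprocessing I have in mind as a workaround. It assumes the data sits in a CSV with a `value` column and uses scikit-learn's `MinMaxScaler.partial_fit` to fit the scaler incrementally; the file path, column name, and chunk size are hypothetical, and this only covers scaling, not `rolling_window_sequences`.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

CHUNK_SIZE = 1_000_000  # rows per chunk; hypothetical value

scaler = MinMaxScaler()

# First pass: fit the scaler incrementally, one chunk at a time,
# so the full dataset never needs to be in memory.
for chunk in pd.read_csv("signal.csv", chunksize=CHUNK_SIZE):
    scaler.partial_fit(chunk[["value"]])

# Second pass: transform chunk by chunk and write the results back out.
for i, chunk in enumerate(pd.read_csv("signal.csv", chunksize=CHUNK_SIZE)):
    chunk["value"] = scaler.transform(chunk[["value"]]).ravel()
    chunk.to_csv(f"scaled_{i:05d}.csv", index=False)
```

Even with something like this for the scaling step, it is not clear to me how the windowing and training primitives could consume the data in a streaming fashion, which is the core of my question.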