[BUG] TimeSeriesDataSet.to_dataloader batch_size, RuntimeError #1752
Reproduced the bug on the main branch.
I cannot reproduce the reported bug, but on current main (Windows, Python 3.11, minimal dependency set of ptf) I do get a different exception:
@gbilleyPeco, could you kindly confirm what error you get with the code that is currently the MRE, i.e., the code currently at the top of this issue? Above, we were unable to reproduce the initially reported exception.
@fkiraly The code that causes the error is this:
Looking into this more, there are many posts on Stack Overflow and other sources where people hit the same error; posting a few examples below as info. The responses on these posts suggest this happens when there is a mismatch between the shape of the dataset and the model architecture.
Summary of the debugging: First I took the tutorial code and tried printing the size of validation (the TimeSeriesDataSet); it turned out to be 100. Since the tutorial used a batch size of 128, the error didn't show up there; when I changed to batch_size=64, the error appeared. Changing concat_sequences to _torch_cat_na, which concatenates along the batch dimension, solved the issue, but when I opened a PR I got too many test failures. In the error shown, the y being concatenated is the target from the TimeSeriesDataSet; the target y has shape (batch_size, time_steps), where time_steps equals the prediction length. The concat function concatenates y along time_steps, but it should concatenate along batch_size, because the model output is concatenated along the batch dimension.
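The shape mismatch described above can be sketched in isolation. This is a hypothetical illustration (the tensor sizes are taken from the error message in this issue, not from the library's internals): concatenating per-batch targets of shape (batch_size, time_steps) along dim=1 only works while every batch has the same size, so a smaller final batch raises exactly this RuntimeError, while concatenating along dim=0 (the batch dimension) always works.

```python
import torch

# Two target batches of shape (batch_size, time_steps); the last
# dataloader batch is smaller than the rest (64 vs. 44 samples).
full_batch = torch.zeros(64, 20)
last_batch = torch.zeros(44, 20)

# Concatenating along dim=1 (time) requires matching batch sizes, so
# the ragged final batch triggers the RuntimeError from this issue:
try:
    torch.cat([full_batch, last_batch], dim=1)
except RuntimeError as e:
    print("dim=1 fails:", e)

# Concatenating along dim=0 (batch) works for any mix of batch sizes:
combined = torch.cat([full_batch, last_batch], dim=0)
print(combined.shape)  # torch.Size([108, 20])
```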
Interesting, and thanks for helping debug - does this suggest any fixes, @RUPESH-KUMAR01?
From my understanding, PR #1783, which I recently closed, solves this issue for this specific case, but it was causing problems with other models and failing the tests.
Describe the bug
When executing Baseline().predict(dataloader, ...), I get the following error:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 64 but got size 44 for tensor number 54 in the list.
I do not know if this is a bug, but I'm posting here at the direction of Franz Kiraly. In the example below, if you set batch_size=4, the error disappears, but that number was found by trial and error. It would be nice to know which batch sizes are valid without trial and error.
To Reproduce
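This is not the reproduction code (omitted above); it is only a hypothetical sketch of why batch_size=4 happens to work. Assuming the dataset has 100 samples (the validation size reported later in this thread), any batch size that divides the sample count evenly leaves no smaller final batch, so the faulty dim=1 concatenation never sees mismatched sizes. Passing drop_last=True through to the underlying DataLoader may be another workaround, assuming to_dataloader forwards such keyword arguments.

```python
# List the batch sizes that produce no partial final batch for a
# dataset of n samples: exactly the divisors of n.
def batch_sizes_without_partial_final_batch(n):
    return [b for b in range(1, n + 1) if n % b == 0]

# For n = 100 this includes batch_size=4, matching the observation above.
print(batch_sizes_without_partial_final_batch(100))
# [1, 2, 4, 5, 10, 20, 25, 50, 100]
```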
Expected behavior
I expect the Baseline() object to make predictions on the data and calculate the SMAPE.
Additional context
This code was taken directly from the DeepAR tutorial found here:, however I have changed the generate_ar_data parameters, and set min_encoder_length=1 and min_prediction_length=1 when initializing the TimeSeriesDataSet.
Versions