
[ENH] Add NLinear model implementation#2171

Open
Sylver-Icy wants to merge 6 commits into sktime:main from Sylver-Icy:add-nlinear

Conversation

@Sylver-Icy

Reference Issues/PRs
#2157

Reference implementation based on the LTSF-Linear paper:
https://github.com/cure-lab/LTSF-Linear
https://github.com/cure-lab/LTSF-Linear/blob/main/models/NLinear.py

This PR adds an implementation of the NLinear model for the PyTorch Forecasting v2 estimator API.

Main changes:
• Added NLinear model implementation using TslibBaseModel
• Implemented support for:
  • point forecasting
  • quantile forecasting via QuantileLoss
  • optional per-channel linear layers (individual=True)
• Ensured compatibility with TslibDataModule and v2 estimator workflows

Reference paper:
https://arxiv.org/abs/2205.13504
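For context, the core computation described above can be sketched in a few lines of PyTorch (an illustrative sketch with made-up names, not the exact code in this PR): subtract the last observed value per channel, apply a single linear projection along the time axis, then add the last value back.

```python
import torch
import torch.nn as nn


class NLinearSketch(nn.Module):
    """Minimal sketch of the LTSF NLinear idea (illustrative,
    not the exact class added in this PR)."""

    def __init__(self, context_length: int, prediction_length: int):
        super().__init__()
        # single shared linear projection over the time axis
        self.linear = nn.Linear(context_length, prediction_length)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, context_length, n_channels)
        # 1. normalize: subtract the last observed value per channel
        last = x[:, -1:, :].detach()
        x = x - last
        # 2. project context_length -> prediction_length along time
        x = self.linear(x.permute(0, 2, 1)).permute(0, 2, 1)
        # 3. de-normalize: add the last value back
        return x + last


model = NLinearSketch(context_length=12, prediction_length=3)
out = model(torch.randn(4, 12, 2))
print(out.shape)  # torch.Size([4, 3, 2])
```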

Added a new test module:

tests/test_models/test_nlinear_v2.py

The tests cover:
• model initialization
• forward pass execution
• correct output tensor shapes
• behavior for both point forecasting and quantile forecasting
• compatibility with the v2 estimator pipeline

This implementation closely follows the structure used by existing v2 models (e.g., DLinear) to maintain consistency with the repository architecture.

Happy to adjust structure or implementation details if there are preferred patterns for new models.

@codecov

codecov bot commented Mar 11, 2026

Codecov Report

❌ Patch coverage is 95.23810% with 5 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (main@941097c).

Files with missing lines Patch % Lines
pytorch_forecasting/models/nlinear/_nlinear_v2.py 92.53% 5 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2171   +/-   ##
=======================================
  Coverage        ?   86.86%           
=======================================
  Files           ?      168           
  Lines           ?     9837           
  Branches        ?        0           
=======================================
  Hits            ?     8545           
  Misses          ?     1292           
  Partials        ?        0           
Flag Coverage Δ
cpu 86.86% <95.23%> (?)
pytest 86.86% <95.23%> (?)

Flags with carried forward coverage won't be shown.


@phoeenniixx phoeenniixx added the enhancement New feature or request label Mar 14, 2026
@Sylver-Icy
Author

TslibBaseModel initializes metadata as
self.metadata = metadata or {}
However, several attributes were still being read from the raw metadata argument rather than self.metadata. When metadata=None was passed, this could lead to an AttributeError when calling .get() on None.

This change updates those accesses to consistently use self.metadata, ensuring the normalized metadata object is used throughout the class and preventing potential crashes when metadata is not provided.


- feature_indices = metadata.get("feature_indices", {})
+ feature_indices = self.metadata.get("feature_indices", {})
  self.cont_indices = feature_indices.get("continuous", [])
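The failure mode and the fix can be reproduced standalone (illustrative snippet, not the class itself):

```python
# Illustrative snippet: why reading from the raw `metadata` argument can
# crash when the caller passes None, and why the normalized copy (the
# `metadata or {}` pattern from TslibBaseModel) is always safe.
metadata = None                      # caller passed no metadata

normalized = metadata or {}          # normalization pattern

# Raw access would raise:
#   AttributeError: 'NoneType' object has no attribute 'get'
# feature_indices = metadata.get("feature_indices", {})

# Normalized access always works:
feature_indices = normalized.get("feature_indices", {})
cont_indices = feature_indices.get("continuous", [])
print(cont_indices)  # []
```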
Member

I think these changes are out of scope for this PR?

Author

I explained it in the comment above 👆🏻
The change does not affect the existing behavior of the codebase. It only replaces direct access to the raw metadata argument with self.metadata, which is already normalized earlier as metadata or {}.

I added it because the class was still reading from the raw argument in a few places, which could raise an AttributeError if metadata=None and .get() is called.

If you prefer keeping this PR strictly scoped to the NLinear implementation, I can remove the change and open a separate PR for it; just let me know.

return TslibDataModule

@classmethod
def get_test_dataset_from(cls, **kwargs):
Member

why are you adding this method?

Member

where is this method coming from?

Author

The default test datasets include covariates and categorical features, while NLinear v2 enforces a strict target-history-only input contract. Using the default dataset would therefore trigger validation errors for unsupported inputs.

This helper creates a dataset containing only time_idx, group identifiers, and the target column, so that the package tests run with inputs that match the model's constraints.

I avoided modifying the shared datamodule and kept the change scoped to the model package instead.

    """Return testing parameter settings for the trainer."""
    params = [
        {},
        dict(datamodule_cfg=dict(context_length=12, prediction_length=3)),
Member

I think there should be some fixtures making changes to the model params itself?

Author

This follows the original LTSF-NLinear formulation from the paper: it is just a single linear projection from context_length to prediction_length after the normalization step.

Member

@phoeenniixx left a comment

I have a doubt: are you sure there are no model-specific params (like n_layers or something) here?

@phoeenniixx
Member

Ok, I looked at the implementation, it seems like a basic linear model?

@Sylver-Icy
Author

Ok, I looked at the implementation, it seems like a basic linear model?

Yes, the current implementation follows the original paper, which is just a single linear projection from context_length → prediction_length.

So there are currently no model-specific hyperparameters like n_layers or hidden_size to vary in fixtures; the only configurable dimensions are determined by the datamodule (context_length / prediction_length) and the loss (e.g. quantile count).
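For reference, the individual=True option mentioned in the PR description can be sketched as one Linear per input channel instead of a single shared projection (illustrative names, assuming the NLinear normalize/de-normalize scheme from the paper; not the exact code in this PR):

```python
import torch
import torch.nn as nn


class PerChannelLinearSketch(nn.Module):
    """Illustrative sketch of the `individual=True` idea:
    one Linear per input channel instead of a shared one."""

    def __init__(self, context_length: int, prediction_length: int,
                 n_channels: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(context_length, prediction_length)
            for _ in range(n_channels)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, context_length, n_channels)
        last = x[:, -1:, :].detach()      # subtract last observed value
        x = x - last
        # apply each channel's own linear projection over time
        outs = [layer(x[:, :, i]) for i, layer in enumerate(self.layers)]
        x = torch.stack(outs, dim=-1)     # (batch, prediction_length, n_channels)
        return x + last                   # add last value back


model = PerChannelLinearSketch(12, 3, n_channels=2)
out = model(torch.randn(4, 12, 2))
print(out.shape)  # torch.Size([4, 3, 2])
```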
