Releases: sktime/pytorch-forecasting
New API for transforming inputs and outputs with encoders
Added
- Beta distribution loss for probabilistic models such as DeepAR (#160)
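A minimal sketch of using the new loss (assuming an existing `TimeSeriesDataSet` named `training` whose target lies in the open interval (0, 1), where the Beta distribution is defined):

```python
from pytorch_forecasting import DeepAR
from pytorch_forecasting.metrics import BetaDistributionLoss

# `training` is an existing TimeSeriesDataSet with a target in (0, 1);
# DeepAR then learns the parameters of a Beta likelihood per time step.
model = DeepAR.from_dataset(training, loss=BetaDistributionLoss())
```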
Changed
- BREAKING: Simplified how transforms (such as logit or log) are applied before and after the encoder. Some transformations are included by default, but a tuple of a forward and a reverse transform function can be passed for arbitrary transformations. This requires using a `transformation` keyword in target normalizers instead of, e.g., `log_scale` (#185)
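A minimal sketch of the new keyword (the normalizer and column names are illustrative):

```python
import torch
from pytorch_forecasting.data import GroupNormalizer

# Built-in transformation selected by name; it is applied before encoding
# and inverted after decoding.
log_normalizer = GroupNormalizer(groups=["series"], transformation="log")

# Per the note above, arbitrary transformations can be passed as a tuple
# of a forward and a reverse function instead of a name.
custom_normalizer = GroupNormalizer(
    groups=["series"],
    transformation=(torch.log1p, torch.expm1),
)
```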
Fixed
- Incorrect target position if `len(static_reals) > 0`, leading to leakage (#184)
- Fix predicting completely unseen series (#172)
Contributors
- jdb78
- JakeForsey
Bugfixes and DeepAR improvements
Added
- Using GRU cells with DeepAR (#153)
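A sketch of switching the cell type (assuming an existing `TimeSeriesDataSet` named `training`):

```python
from pytorch_forecasting import DeepAR

# "LSTM" remains the default; cell_type="GRU" selects GRU cells instead.
model = DeepAR.from_dataset(training, cell_type="GRU")
```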
Fixed
- GPU fix for variable sequence length (#169)
- Fix incorrect syntax for warning when removing series (#167)
- Fix issue when using unknown group ids in validation or test dataset (#172)
- Run non-failing CI on PRs from forks (#166, #156)
Docs
- Improved model selection guidance and explanations on how TimeSeriesDataSet works (#148)
- Clarify how to use with conda (#168)
Contributors
- jdb78
- JakeForsey
Adding DeepAR
Added
- DeepAR by Amazon (#115)
- First autoregressive model in PyTorch Forecasting
- Distribution loss: normal, negative binomial and log-normal distributions (see the sketch after this list)
- Currently missing: handling lag variables and tutorial (planned for 0.6.1)
- Improved documentation on TimeSeriesDataSet and how to implement a new network (#145)
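A minimal sketch of the new model on synthetic data (hyperparameters and column names are illustrative):

```python
import pandas as pd
from pytorch_forecasting import DeepAR, TimeSeriesDataSet
from pytorch_forecasting.metrics import NormalDistributionLoss

# Tiny synthetic dataset: a single series with 100 time steps.
data = pd.DataFrame({
    "series": "a",
    "time_idx": list(range(100)),
    "value": [float(i % 10) for i in range(100)],
})
training = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    time_varying_unknown_reals=["value"],
    max_encoder_length=20,
    max_prediction_length=5,
)
# Autoregressive model with a normal likelihood; the negative binomial and
# log-normal losses named above can be swapped in the same way.
model = DeepAR.from_dataset(training, loss=NormalDistributionLoss())
```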
Changed
- Internals of encoders and how they store center and scale (#115)
Fixed
- Update to PyTorch 1.7 and PyTorch Lightning 1.0.5 which came with breaking changes for CUDA handling and with optimizers (PyTorch Forecasting Ranger version) (#143, #137, #115)
Contributors
- jdb78
- JakeForsey
Bug fixes
Fixes
- Fix issue where hyperparameter verbosity controlled only part of output (#118)
- Fix occasional error when `.get_parameters()` from `TimeSeriesDataSet` failed (#117)
- Remove redundant double pass through LSTM for temporal fusion transformer (#125)
- Prevent installation of pytorch-lightning 1.0.4 as it breaks the code (#127)
- Prevent modification of model defaults in-place (#112)
Fixes to interpretation and more control over hyperparameter verbosity
Added
- Hyperparameter tuning with optuna to tutorial
- Control over verbosity of hyper parameter tuning
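A sketch of the tuning helper with the new verbosity control (the path, trial count, and dataloaders are illustrative assumptions):

```python
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import (
    optimize_hyperparameters,
)

# train_dataloader / val_dataloader are assumed to come from
# TimeSeriesDataSet.to_dataloader(); `verbose` controls how much the
# optuna study prints while it runs.
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_study",
    n_trials=20,
    verbose=1,
)
```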
Fixes
- Interpretation error when different batches had different maximum decoder lengths
- Fix some typos (no changes to user API)
PyTorch Lightning 1.0 compatibility
This release has only one purpose: Allow usage of PyTorch Lightning 1.0 - all tests have passed.
PyTorch Lightning 0.10 compatibility and classification
Added
- Additional checks for `TimeSeriesDataSet` inputs - now flagging if series are lost due to a high `min_encoder_length` and ensuring parameters are integers
- Enable classification - simply change the target in the `TimeSeriesDataSet` to a non-float variable, use the `CrossEntropy` metric to optimize, and output as many classes as you want to predict
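A minimal sketch of the classification setup (assuming a pandas DataFrame `data` with a categorical column `status`; the class count is illustrative):

```python
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer
from pytorch_forecasting.metrics import CrossEntropy

# A non-float target switches the dataset to classification mode.
training = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="status",  # categorical / string column
    group_ids=["series"],
    max_encoder_length=20,
    max_prediction_length=5,
)
# Optimize with CrossEntropy and output one value per class.
model = TemporalFusionTransformer.from_dataset(
    training,
    loss=CrossEntropy(),
    output_size=10,  # number of classes in "status" (illustrative)
)
```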
Changed
- Ensured PyTorch Lightning 0.10 compatibility
- Using `LearningRateMonitor` instead of `LearningRateLogger`
- Use `EarlyStopping` callback in trainer `callbacks` instead of the `early_stopping` argument
- Update metric system's `update()` and `compute()` methods
- Use `trainer.tuner.lr_find()` instead of `trainer.lr_find()` in tutorials and examples (see the migration sketch after this list)
- Update poetry to 1.1.0
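A sketch of the renamed pieces after the upgrade (the model and dataloaders are assumed to exist):

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor

trainer = pl.Trainer(
    max_epochs=10,
    callbacks=[
        EarlyStopping(monitor="val_loss"),  # replaces the early_stopping argument
        LearningRateMonitor(),              # replaces LearningRateLogger
    ],
)
# lr_find now lives under the tuner namespace
result = trainer.tuner.lr_find(
    model,
    train_dataloader=train_dataloader,
    val_dataloaders=val_dataloader,
)
```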
Various fixes models and data
Fixes
Model
- Removed attention to current datapoint in TFT decoder to generalise better over various sequence lengths
- Allow resuming optuna hyperparameter tuning study
Data
- Fixed inconsistent naming and calculation of `encoder_length` in `TimeSeriesDataSet` when added as a feature
Contributors
- jdb78
Metrics, performance, and subsequence detection
Added
Models
- Backcast loss for N-BEATS network for better regularisation
- `logging_metrics` as an explicit argument to models
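A sketch combining both additions (assuming an existing `TimeSeriesDataSet` named `training`; the values are illustrative):

```python
from torch import nn
from pytorch_forecasting import NBeats
from pytorch_forecasting.metrics import MAE, SMAPE

model = NBeats.from_dataset(
    training,
    backcast_loss_ratio=0.1,  # weight of the backcast loss relative to the forecast loss
    logging_metrics=nn.ModuleList([MAE(), SMAPE()]),  # reported during training/validation
)
```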
Metrics
- MASE (Mean absolute scaled error) metric for training and reporting
- Metrics can be composed, e.g. `0.3 * metric1 + 0.7 * metric2`
- Aggregation metric that is computed on mean prediction over all samples to reduce mean-bias
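A sketch of the new metric options (the composition syntax follows the note above; the `AggregationMetric` wrapper is an assumption about how the aggregation metric is exposed):

```python
from pytorch_forecasting.metrics import MAE, MASE, SMAPE, AggregationMetric

# Weighted composition of metrics
loss = 0.3 * MAE() + 0.7 * SMAPE()

# MASE can be used for training and reporting like any other metric
mase = MASE()

# Aggregation metric: computed on the mean prediction over all samples
# to reduce mean bias
agg = AggregationMetric(metric=MAE())
```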
Data
- Increased speed of parsing data with missing datapoints - about 2s for 1M data points, or 0.2s if `numba` is installed
- Time-synchronize samples in batches: ensure that all samples in each batch have the same time index in the decoder
Breaking changes
- Improved subsequence detection in TimeSeriesDataSet ensures that there exists a subsequence starting and ending on each point in time.
- Fix `min_encoder_length = 0` being ignored and processed as `min_encoder_length = max_encoder_length`
Contributors
- jdb78
- dehoyosb
More tests and better docs
- More tests driving coverage to ~90%
- Performance tweaks for temporal fusion transformer
- Reformatting with isort
- Improve documentation - particularly expand on hyper parameter tuning
Fixes
- Fix PoissonLoss quantiles calculation
- Fix N-Beats visualisations