Releases: sktime/pytorch-forecasting

New API for transforming inputs and outputs with encoders

03 Dec 21:53
e5c8698

Added

  • Beta distribution loss for probabilistic models such as DeepAR (#160)
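
A minimal sketch of how the new loss might be used, assuming `training` is an existing `TimeSeriesDataSet` whose target has been mapped into the open unit interval, the support of the Beta distribution:

```python
from pytorch_forecasting import DeepAR
from pytorch_forecasting.metrics import BetaDistributionLoss

# `training` is assumed to be a TimeSeriesDataSet whose target lies in (0, 1),
# the support of the Beta distribution.
model = DeepAR.from_dataset(
    training,
    loss=BetaDistributionLoss(),  # probabilistic loss added in this release
)
```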

Changed

  • BREAKING: Simplified how transforms (such as logit or log) are applied before and after encoding. Some transformations are included by default, but a tuple of a forward and a reverse transform function can be passed for arbitrary transformations. This requires using a transformation keyword in target normalizers instead of, e.g., log_scale (#185). See the sketch below.
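
A sketch of the new keyword, assuming `GroupNormalizer` as the target normalizer; the tuple form follows the description above of a forward and reverse transform pair:

```python
import torch

from pytorch_forecasting.data import GroupNormalizer

# Built-in transformation selected by name (replaces flags such as log_scale):
normalizer = GroupNormalizer(groups=["series"], transformation="log")

# Arbitrary transformation passed as a (forward, reverse) pair of functions:
normalizer = GroupNormalizer(
    groups=["series"],
    transformation=(lambda x: torch.log(x + 1), lambda x: torch.exp(x) - 1),
)
```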

Fixed

  • Incorrect target position if len(static_reals) > 0 leading to leakage (#184)
  • Fixing predicting completely unseen series (#172)

Contributors

  • jdb78
  • JakeForsey

Bugfixes and DeepAR improvements

24 Nov 16:10
48f3595

Added

  • Using GRU cells with DeepAR (#153)
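
A one-line sketch, assuming `training` is an existing `TimeSeriesDataSet`; `cell_type` selects the RNN cell, with LSTM remaining the default:

```python
from pytorch_forecasting import DeepAR

# Use GRU cells instead of the default LSTM cells.
model = DeepAR.from_dataset(training, cell_type="GRU")
```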

Fixed

  • GPU fix for variable sequence length (#169)
  • Fix incorrect syntax for warning when removing series (#167)
  • Fix issue when using unknown group ids in validation or test dataset (#172)
  • Run non-failing CI on PRs from forks (#166, #156)

Docs

  • Improved model selection guidance and explanations on how TimeSeriesDataSet works (#148)
  • Clarify how to use with conda (#168)

Contributors

  • jdb78
  • JakeForsey

Adding DeepAR

10 Nov 12:27
278daa0

Added

  • DeepAR by Amazon (#115); see the sketch after this list
    • First autoregressive model in PyTorch Forecasting
    • Distribution losses: normal, negative binomial and log-normal distributions
    • Currently missing: handling of lag variables and a tutorial (planned for 0.6.1)
  • Improved documentation on TimeSeriesDataSet and how to implement a new network (#145)
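
A minimal sketch of the new model, assuming `training` is an existing `TimeSeriesDataSet` with a non-negative count target; the hyperparameter values are illustrative:

```python
from pytorch_forecasting import DeepAR
from pytorch_forecasting.metrics import NegativeBinomialDistributionLoss

model = DeepAR.from_dataset(
    training,
    hidden_size=32,     # illustrative value
    learning_rate=0.1,  # illustrative value
    loss=NegativeBinomialDistributionLoss(),  # normal and log-normal also available
)
```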

Changed

  • Internals of encoders and how they store center and scale (#115)

Fixed

  • Updated to PyTorch 1.7 and PyTorch Lightning 1.0.5, which came with breaking changes to CUDA handling and to optimizers (the PyTorch Forecasting Ranger version) (#143, #137, #115)

Contributors

  • jdb78
  • JakeForsey

Bug fixes

31 Oct 08:29
bfab49a

Fixes

  • Fix issue where hyperparameter verbosity controlled only part of output (#118)
  • Fix occasional error when .get_parameters() from TimeSeriesDataSet failed (#117)
  • Remove redundant double pass through LSTM for temporal fusion transformer (#125)
  • Prevent installation of pytorch-lightning 1.0.4 as it breaks the code (#127)
  • Prevent modification of model defaults in-place (#112)

Fixes to interpretation and more control over hyperparameter verbosity

18 Oct 06:39
aa4f0d9

Added

  • Hyperparameter tuning with optuna added to the tutorial
  • Control over the verbosity of hyperparameter tuning
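
A sketch of the optuna-based tuning, assuming `train_dataloader` and `val_dataloader` already exist; the `model_path`, `n_trials` and `verbose` values are illustrative:

```python
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import (
    optimize_hyperparameters,
)

study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_study",  # illustrative path for checkpoints
    n_trials=20,                # illustrative trial budget
    verbose=1,                  # verbosity control added in this release
)
print(study.best_trial.params)
```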

Fixes

  • Interpretation error when different batches had different maximum decoder lengths
  • Fix some typos (no changes to user API)

PyTorch Lightning 1.0 compatibility

14 Oct 05:04
5d5adcf

This release has only one purpose: to allow usage of PyTorch Lightning 1.0. All tests have passed.

PyTorch Lightning 0.10 compatibility and classification

12 Oct 06:44
599047e

Added

  • Additional checks for TimeSeriesDataSet inputs: flag when series are dropped due to a high min_encoder_length and ensure that parameters are integers
  • Enable classification: simply set the target in the TimeSeriesDataSet to a non-float variable, use the CrossEntropy metric for optimization, and output as many classes as you want to predict
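
A minimal classification sketch, assuming `data` is a pandas DataFrame and "status" is a hypothetical categorical target column with three classes:

```python
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.metrics import CrossEntropy

# A non-float target column switches the dataset to classification.
dataset = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="status",  # hypothetical categorical column
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
)
model = TemporalFusionTransformer.from_dataset(
    dataset,
    output_size=3,  # hypothetical number of classes to predict
    loss=CrossEntropy(),
)
```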

Changed

  • Ensured PyTorch Lightning 0.10 compatibility (see the migration sketch after this list)
    • Use LearningRateMonitor instead of LearningRateLogger
    • Use the EarlyStopping callback in trainer callbacks instead of the early_stopping argument
    • Updated the metric system's update() and compute() methods
    • Use trainer.tuner.lr_find() instead of trainer.lr_find() in tutorials and examples
  • Update poetry to 1.1.0
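
A migration sketch for the Lightning 0.10 changes listed above; the monitor key and patience are illustrative:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor

# Both callbacks now go into `callbacks` instead of the removed
# early_stopping argument and LearningRateLogger.
trainer = pl.Trainer(
    max_epochs=30,  # illustrative value
    callbacks=[
        EarlyStopping(monitor="val_loss", patience=5),
        LearningRateMonitor(),
    ],
)

# The learning-rate finder moved under the tuner, e.g.:
# res = trainer.tuner.lr_find(model, train_dataloader=train_dataloader)
```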

Various fixes to models and data

01 Oct 08:58
e97189f

Fixes

Model

  • Removed attention to current datapoint in TFT decoder to generalise better over various sequence lengths
  • Allow resuming an optuna hyperparameter tuning study

Data

  • Fixed inconsistent naming and calculation of encoder_length in TimeSeriesDataSet when added as a feature

Contributors

  • jdb78

Metrics, performance, and subsequence detection

28 Sep 19:58
8c7277a

Added

Models

  • Backcast loss for the N-BEATS network for better regularisation (see the sketch after this list)
  • logging_metrics as an explicit argument to models
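
A sketch of both additions, assuming `training` is an existing `TimeSeriesDataSet` suitable for N-BEATS; the backcast loss weight is illustrative:

```python
from torch import nn

from pytorch_forecasting import NBeats
from pytorch_forecasting.metrics import SMAPE

model = NBeats.from_dataset(
    training,
    backcast_loss_ratio=0.1,  # illustrative weight of backcast vs. forecast loss
    logging_metrics=nn.ModuleList([SMAPE()]),  # metrics now passed explicitly
)
```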

Metrics

  • MASE (mean absolute scaled error) metric for training and reporting
  • Metrics can be composed, e.g. 0.3 * metric1 + 0.7 * metric2
  • Aggregation metric that is computed on the mean prediction over all samples to reduce mean bias (composition and aggregation are shown in the sketch after this list)
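
A sketch of composition and aggregation; MASE and MAE stand in for any metrics:

```python
from pytorch_forecasting.metrics import MAE, MASE, AggregationMetric

# Weighted composition of metric instances:
composite = 0.3 * MASE() + 0.7 * MAE()

# Aggregation metric evaluated on the mean prediction over all samples:
mean_bias = AggregationMetric(metric=MAE())
```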

Data

  • Increased speed of parsing data with missing datapoints: about 2s for 1M data points, or 0.2s if numba is installed
  • Time-synchronized samples in batches: ensure that all samples in each batch have the same time index in the decoder (see the sketch after this list)
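
A sketch of requesting time-synchronized batches, assuming `training` is an existing `TimeSeriesDataSet`; the batch size is illustrative:

```python
# The "synchronized" batch sampler yields batches whose samples share
# the same time index in the decoder.
dataloader = training.to_dataloader(
    train=True,
    batch_size=64,  # illustrative value
    batch_sampler="synchronized",
)
```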

Breaking changes

  • Improved subsequence detection in TimeSeriesDataSet ensures that there exists a subsequence starting and ending on each point in time.
  • Fix min_encoder_length = 0 being ignored and processed as min_encoder_length = max_encoder_length

Contributors

  • jdb78
  • dehoyosb

More tests and better docs

13 Sep 22:43
5e7c808
  • More tests driving coverage to ~90%
  • Performance tweaks for temporal fusion transformer
  • Reformatting with isort
  • Improved documentation, particularly expanding on hyperparameter tuning

Fixes

  • Fix PoissonLoss quantiles calculation
  • Fix N-BEATS visualisations