Global Baseline Models (Naive)

A collection of simple benchmark models that work with univariate and multivariate series, and with single or multiple series.

class darts.models.forecasting.global_baseline_models.GlobalNaiveAggregate(input_chunk_length, output_chunk_length, output_chunk_shift=0, agg_fn='mean', **kwargs)[source]

Bases: _NoCovariatesMixin, _GlobalNaiveModel

Global Naive Aggregate Model.

The model generates forecasts for each series as described below:

  • take an aggregate (computed with agg_fn, default: mean) from each target component over the last input_chunk_length points

  • the forecast is the component aggregate repeated output_chunk_length times

Depending on the horizon n used when calling model.predict(), the forecasts are either:

  • a constant aggregate value (default: mean) if n <= output_chunk_length, or

  • a moving aggregate if n > output_chunk_length, as a result of the autoregressive prediction.
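The moving aggregate can be pictured with a short sketch (illustrative only, with made-up numbers; this is not the library's implementation): with agg_fn="mean" and output_chunk_length=1, each new prediction is the mean of a sliding window that includes previously predicted values.

import numpy as np

# illustrative sketch of the autoregressive "moving mean" (not the Darts implementation)
window_size = 3                       # stands in for input_chunk_length
history = [10.0, 12.0, 11.0]          # last `window_size` observed target values
forecast = []
for _ in range(5):                    # horizon n = 5 > output_chunk_length = 1
    next_value = float(np.mean(history[-window_size:]))  # aggregate over the window
    forecast.append(next_value)
    history.append(next_value)        # predictions are fed back in (autoregression)
print(forecast)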

This model is equivalent to:

  • NaiveMean, when input_chunk_length is equal to the length of the input target series, and agg_fn=’mean’.

  • NaiveMovingAverage, with identical input_chunk_length and output_chunk_length=1, and agg_fn=’mean’.

Note

  • Model checkpointing with save_checkpoints=True, and checkpoint loading with load_from_checkpoint() and load_weights_from_checkpoint() are not supported for global naive models.

Parameters
  • input_chunk_length (int) – The length of the input sequence fed to the model.

  • output_chunk_length (int) – The length of the emitted forecast and output sequence fed to the model.

  • output_chunk_shift (int) – Optionally, the number of steps to shift the start of the output chunk into the future (relative to the input chunk end). This will create a gap between the input and output. If the model supports future_covariates, the future values are extracted from the shifted output chunk. Predictions will start output_chunk_shift steps after the end of the target series. If output_chunk_shift is set, the model cannot generate autoregressive predictions (n > output_chunk_length).

  • agg_fn (Union[str, Callable[[Tensor, int], Tensor]]) –

    The aggregation function to use. If a string, it must be the name of a torch function that can be imported directly from torch (e.g. “mean” for torch.mean, “sum” for torch.sum). Whether passed as a string or as a Callable, the function must follow the signature below (a sketch with a custom callable is added after the Examples section).

    def agg_fn(x: torch.Tensor, dim: int, *args, **kwargs) -> torch.Tensor:
        # x has shape `(batch size, input_chunk_length, n targets)`, `dim` is always `1`.
        # function must return a tensor of shape `(batch size, n targets)`
        return torch.mean(x, dim=dim)
    

  • **kwargs – Optional arguments to initialize the pytorch_lightning.Module, pytorch_lightning.Trainer, and Darts’ TorchForecastingModel. Since naive models are not trained, the following parameters will have no effect: loss_fn, likelihood, optimizer_cls, optimizer_kwargs, lr_scheduler_cls, lr_scheduler_kwargs, n_epochs, save_checkpoints, and some of pl_trainer_kwargs.

Examples

>>> from darts.datasets import IceCreamHeaterDataset
>>> from darts.models import GlobalNaiveAggregate
>>> # create list of multivariate series
>>> series_1 = IceCreamHeaterDataset().load()
>>> series_2 = series_1 + 100.
>>> series = [series_1, series_2]
>>> # predict 3 months, take mean over last 60 months
>>> horizon, icl = 3, 60
>>> # naive mean over last 60 months (with `output_chunk_length = horizon`)
>>> model = GlobalNaiveAggregate(input_chunk_length=icl, output_chunk_length=horizon)
>>> # predict after end of each multivariate series
>>> pred = model.fit(series).predict(n=horizon, series=series)
>>> [p.values() for p in pred]
[array([[29.666668, 50.983337],
       [29.666668, 50.983337],
       [29.666668, 50.983337]]), array([[129.66667, 150.98334],
       [129.66667, 150.98334],
       [129.66667, 150.98334]])]
>>> # naive moving mean (with `output_chunk_length < horizon`)
>>> model = GlobalNaiveAggregate(input_chunk_length=icl, output_chunk_length=1, agg_fn="mean")
>>> pred = model.fit(series).predict(n=horizon, series=series)
>>> [p.values() for p in pred]
[array([[29.666668, 50.983337],
       [29.894447, 50.88306 ],
       [30.109352, 50.98111 ]]), array([[129.66667, 150.98334],
       [129.89445, 150.88307],
       [130.10936, 150.98111]])]
>>> # naive moving sum (with `output_chunk_length < horizon`)
>>> model = GlobalNaiveAggregate(input_chunk_length=icl, output_chunk_length=1, agg_fn="sum")
>>> pred = model.fit(series).predict(n=horizon, series=series)
>>> [p.values() for p in pred]
[array([[ 1780.,  3059.],
       [ 3544.,  6061.],
       [ 7071., 12077.]]), array([[ 7780.,  9059.],
       [15444., 17961.],
       [30771., 35777.]])]
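>>> # agg_fn can also be a custom callable following the documented signature. The snippet
>>> # below is an added sketch (not part of the library's own examples), reusing `series`,
>>> # `icl`, and `horizon` from above, with a median aggregation
>>> import torch
>>> def median_agg(x: torch.Tensor, dim: int, *args, **kwargs) -> torch.Tensor:
...     # x: (batch size, input_chunk_length, n targets) -> (batch size, n targets)
...     return torch.median(x, dim=dim).values
>>> model = GlobalNaiveAggregate(
...     input_chunk_length=icl, output_chunk_length=horizon, agg_fn=median_agg
... )
>>> pred = model.fit(series).predict(n=horizon, series=series)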

Attributes

considers_static_covariates

Whether the model considers static covariates, if there are any.

extreme_lags

An 8-tuple containing in order: (min target lag, max target lag, min past covariate lag, max past covariate lag, min future covariate lag, max future covariate lag, output shift, max target lag train (only for RNNModel)).

min_train_samples

The minimum number of samples for training the model.

output_chunk_length

Number of time steps predicted at once by the model, not defined for statistical models.

output_chunk_shift

Number of time steps that the output/prediction starts after the end of the input.

supports_multivariate

Whether the model considers more than one variate in the time series.

supports_optimized_historical_forecasts

Whether the model supports optimized historical forecasts.

supports_transferrable_series_prediction

Whether the model supports prediction for any input series.

uses_future_covariates

Whether the model uses future covariates, once fitted.

uses_past_covariates

Whether the model uses past covariates, once fitted.

uses_static_covariates

Whether the model uses static covariates, once fitted.

epochs_trained

input_chunk_length

likelihood

model_created

model_params

supports_future_covariates

supports_past_covariates

supports_static_covariates

Methods

backtest(series[, past_covariates, ...])

Compute error values that the model would have produced when used on (potentially multiple) series.

fit(series[, past_covariates, future_covariates])

Fit/train the model on a (or potentially multiple) series.

fit_from_dataset(train_dataset[, ...])

Train the model with a specific darts.utils.data.TrainingDataset instance.

generate_fit_encodings(series[, ...])

Generates the covariate encodings that were used/generated for fitting the model and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

generate_fit_predict_encodings(n, series[, ...])

Generates covariate encodings for training and inference/prediction and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

generate_predict_encodings(n, series[, ...])

Generates covariate encodings for the inference/prediction set and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

gridsearch(parameters, series[, ...])

Find the best hyper-parameters among a given set using a grid search.

historical_forecasts(series[, ...])

Compute the historical forecasts that would have been obtained by this model on (potentially multiple) series.

load(path, **kwargs)

Loads a model from a given file path.

load_from_checkpoint(model_name[, work_dir, ...])

Load the model from automatically saved checkpoints under '{work_dir}/darts_logs/{model_name}/checkpoints/'.

load_weights(path[, load_encoders, skip_checks])

Loads the weights from a manually saved model (saved with save()).

load_weights_from_checkpoint([model_name, ...])

Load only the weights from automatically saved checkpoints under '{work_dir}/darts_logs/{model_name}/checkpoints/'.

lr_find(series[, past_covariates, ...])

A wrapper around PyTorch Lightning's Tuner.lr_find().

predict(n[, series, past_covariates, ...])

Predict the n time steps following the end of the training series, or of the specified series.

predict_from_dataset(n, input_series_dataset)

This method allows for predicting with a specific darts.utils.data.InferenceDataset instance.

reset_model()

Resets the model object and removes all stored data - model, checkpoints, loggers and training history.

residuals(series[, past_covariates, ...])

Compute the residuals produced by this model on a (or sequence of) TimeSeries.

save([path])

Saves the model under a given path.

supports_likelihood_parameter_prediction()

Whether the model instance supports direct prediction of likelihood parameters.

supports_probabilistic_prediction()

Checks if the forecasting model with this configuration supports probabilistic predictions.

to_cpu()

Updates the PyTorch Lightning Trainer parameters to move the model to CPU the next time fit() or predict() is called.

backtest(series, past_covariates=None, future_covariates=None, historical_forecasts=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, overlap_end=False, last_points_only=False, metric=<function mape>, reduction=<function mean>, verbose=False, show_warnings=True, metric_kwargs=None, fit_kwargs=None, predict_kwargs=None)

Compute error values that the model would have produced when used on (potentially multiple) series.

If historical_forecasts are provided, the metric (given by the metric function) is evaluated directly on the forecast and the actual values. The same series must be passed that was used to generate the historical forecasts. Otherwise, it repeatedly builds a training set: either expanding from the beginning of series or moving with a fixed length train_length. It trains the current model on the training set, emits a forecast of length equal to forecast_horizon, and then moves the end of the training set forward by stride time steps. The metric is then evaluated on the forecast and the actual values. Finally, the method returns a reduction (the mean by default) of all these metric scores.

By default, this method uses each historical forecast (whole) to compute error scores. If last_points_only is set to True, it will use only the last point of each historical forecast. In this case, no reduction is used.

By default, this method always re-trains the models on the entire available history, corresponding to an expanding window strategy. If retrain is set to False (useful for models for which training might be time-consuming, such as deep learning models), the trained model will be used directly to emit the forecasts.
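For illustration, a sketch with arbitrary parameter values (the model and data mirror the class examples above):

from darts.datasets import IceCreamHeaterDataset
from darts.metrics import mape
from darts.models import GlobalNaiveAggregate

series = IceCreamHeaterDataset().load()
model = GlobalNaiveAggregate(input_chunk_length=12, output_chunk_length=3)
model.fit(series)

# mean MAPE over all 3-step historical forecasts starting at 80% of the series;
# retrain=False reuses the already fitted naive model at each step
score = model.backtest(
    series,
    start=0.8,
    forecast_horizon=3,
    stride=1,
    retrain=False,
    metric=mape,
)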

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series used to successively train and evaluate the historical forecasts.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) future-known covariate series. This applies only if the model supports future covariates.

  • historical_forecasts (Union[TimeSeries, Sequence[TimeSeries], Sequence[Sequence[TimeSeries]], None]) – Optionally, the (or a sequence of / a sequence of sequences of) historical forecasts time series to be evaluated. Corresponds to the output of historical_forecasts(). The same series and last_points_only values must be passed that were used to generate the historical forecasts. If provided, will skip historical forecasting and ignore all parameters except series, last_points_only, metric, and reduction.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in our training set (size of backtesting window to train on). Only effective when retrain is not False. The default, train_length=None, uses all available time steps up until prediction time (expanding window); otherwise, a moving window of fixed length is used. If larger than the number of available time steps, all steps up until prediction time are used, as in the default case. Needs to be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • forecast_horizon (int) – The forecast horizon for the point predictions.

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: retrain the model at each step (True), or never retrain it (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever the callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecasts(), None will be passed to the corresponding retrain function argument. Note: some models require being retrained at every iteration and do not support anything other than retrain=True.

  • overlap_end (bool) – Whether the returned forecasts can go beyond the series’ end or not.

  • last_points_only (bool) – Whether to use the whole historical forecasts or only the last point of each forecast to compute the error.

  • metric (Union[Callable[…, Union[float, List[float], ndarray, List[ndarray]]], List[Callable[…, Union[float, List[float], ndarray, List[ndarray]]]]]) – A metric function or a list of metric functions. Each metric must either be a Darts metric (see here), or a custom metric with a signature identical to Darts’ metrics that uses the decorators multi_ts_support() and multivariate_support(), and returns the metric score.

  • reduction (Optional[Callable[…, float]]) – A function used to combine the individual error scores obtained when last_points_only is set to False. When providing several metric functions, the function will receive the argument axis = 1 to obtain a single value for each metric function. If explicitly set to None, the method will return a list of the individual error scores instead. Set to np.mean by default.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to parameters start, and train_length.

  • metric_kwargs (Union[Dict[str, Any], List[Dict[str, Any]], None]) – Additional arguments passed to metric(), such as ‘n_jobs’ for parallelization, ‘component_reduction’ for reducing the component wise metrics, seasonality ‘m’ for scaled metrics, etc. Will pass arguments to each metric separately and only if they are present in the corresponding metric signature. Parameter ‘insample’ for scaled metrics (e.g. mase, rmsse, …) is ignored, as it is handled internally.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Return type

Union[float, ndarray, List[float], List[ndarray]]

Returns

  • float – A single backtest score for single uni/multivariate series, a single metric function and:

    • historical_forecasts generated with last_points_only=True

    • historical_forecasts generated with last_points_only=False and using a backtest reduction

  • np.ndarray – A numpy array of backtest scores. For single series and one of:

    • a single metric function, historical_forecasts generated with last_points_only=False and backtest reduction=None. The output has shape (n forecasts,).

    • multiple metric functions and historical_forecasts generated with last_points_only=False. The output has shape (n metrics,) when using a backtest reduction, and (n metrics, n forecasts) when reduction=None

    • multiple uni/multivariate series including series_reduction and at least one of component_reduction=None or time_reduction=None for “per time step metrics”

  • List[float] – Same as for type float but for a sequence of series. The returned metric list has length len(series) with the float metric for each input series.

  • List[np.ndarray] – Same as for type np.ndarray but for a sequence of series. The returned metric list has length len(series) with the np.ndarray metrics for each input series.

property considers_static_covariates: bool

Whether the model considers static covariates, if there are any.

Return type

bool

property epochs_trained: int
Return type

int

property extreme_lags: Tuple[Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], int, Optional[int]]

An 8-tuple containing in order: (min target lag, max target lag, min past covariate lag, max past covariate lag, min future covariate lag, max future covariate lag, output shift, max target lag train (only for RNNModel)). If 0 is the index of the first prediction, then all lags are relative to this index.

See examples below.

If the model wasn’t fitted with:
  • target (concerning RegressionModels only): then the first element should be None.

  • past covariates: then the third and fourth elements should be None.

  • future covariates: then the fifth and sixth elements should be None.

Should be overridden by models that use past or future covariates, and/or by models whose minimum and maximum target lags differ from -1 and 0.

Notes

The maximum target lag (second value) cannot be None and is always larger than or equal to 0.

Examples

>>> model = LinearRegressionModel(lags=3, output_chunk_length=2)
>>> model.fit(train_series)
>>> model.extreme_lags
(-3, 1, None, None, None, None, 0, None)
>>> model = LinearRegressionModel(lags=3, output_chunk_length=2, output_chunk_shift=2)
>>> model.fit(train_series)
>>> model.extreme_lags
(-3, 1, None, None, None, None, 2, None)
>>> model = LinearRegressionModel(lags=[-3, -5], lags_past_covariates = 4, output_chunk_length=7)
>>> model.fit(train_series, past_covariates=past_covariates)
>>> model.extreme_lags
(-5, 6, -4, -1, None, None, 0, None)
>>> model = LinearRegressionModel(lags=[3, 5], lags_future_covariates = [4, 6], output_chunk_length=7)
>>> model.fit(train_series, future_covariates=future_covariates)
>>> model.extreme_lags
(-5, 6, None, None, 4, 6, 0, None)
>>> model = NBEATSModel(input_chunk_length=10, output_chunk_length=7)
>>> model.fit(train_series)
>>> model.extreme_lags
(-10, 6, None, None, None, None, 0, None)
>>> model = NBEATSModel(input_chunk_length=10, output_chunk_length=7, lags_future_covariates=[4, 6])
>>> model.fit(train_series, future_covariates)
>>> model.extreme_lags
(-10, 6, None, None, 4, 6, 0, None)
Return type

Tuple[Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], int, Optional[int]]

fit(series, past_covariates=None, future_covariates=None, *args, **kwargs)

Fit/train the model on a (or potentially multiple) series. This method is only implemented for naive baseline models to provide a unified fit/predict API with other forecasting models.

The model is not actually trained on the input, but fit() is used to set up the model based on the input series. It also stores the training series if only a single TimeSeries was passed; this allows calling predict() without having to pass the single series again.
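For example (a minimal sketch, assuming a single TimeSeries named series_1 as in the class examples above):

model = GlobalNaiveAggregate(input_chunk_length=60, output_chunk_length=3)
model.fit(series_1)        # the single training series is stored by the model
pred = model.predict(n=3)  # no `series` argument needed: forecasts the stored series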

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – A series or sequence of series serving as target (i.e. what the model will be trained to forecast)

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying past-observed covariates

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying future-known covariates

  • **kwargs – Optionally, some keyword arguments.

Returns

Fitted model.

Return type

self

fit_from_dataset(train_dataset, val_dataset=None, trainer=None, verbose=None, epochs=0, num_loader_workers=0)

Train the model with a specific darts.utils.data.TrainingDataset instance. These datasets implement a PyTorch Dataset, and specify how the target and covariates are sliced for training. If you are not sure which training dataset to use, consider calling fit() instead, which will create a default training dataset appropriate for this model.

Training is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with optional parameter trainer. For more information on PyTorch Lightning Trainers check out this link.

This function can be called several times to do some extra training. If epochs is specified, the model will be trained for that many (additional) epochs.

Parameters
  • train_dataset (TrainingDataset) – A training dataset with a type matching this model (e.g. PastCovariatesTrainingDataset for PastCovariatesTorchModel).

  • val_dataset (Optional[TrainingDataset]) – A dataset with a type matching this model (e.g. PastCovariatesTrainingDataset for PastCovariatesTorchModel), representing the validation set (to track the validation loss).

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform training. Using a custom trainer will override Darts’ default trainer.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • epochs (int) – If specified, will train the model for this many (additional) epochs, irrespective of the n_epochs provided to the model constructor.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, both for the training and validation loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

Returns

Fitted model.

Return type

self

generate_fit_encodings(series, past_covariates=None, future_covariates=None)

Generates the covariate encodings that were used/generated for fitting the model and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you used to train/fit the model.

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with the target values used when fitting the model.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the series or sequence of series with the past-observed covariates used when fitting the model.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the series or sequence of series with the future-known covariates used when fitting the model.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

generate_fit_predict_encodings(n, series, past_covariates=None, future_covariates=None)

Generates covariate encodings for training and inference/prediction and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you intend to use for training and prediction.

Parameters
  • n (int) – The number of prediction time steps after the end of series intended to be used for prediction.

  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with target values intended to be used for training and prediction.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series intended to be used for training and prediction. The dimensions must match those of the covariates used for training.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

generate_predict_encodings(n, series, past_covariates=None, future_covariates=None)

Generates covariate encodings for the inference/prediction set and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you intend to use for prediction.

Parameters
  • n (int) – The number of prediction time steps after the end of series intended to be used for prediction.

  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with target values intended to be used for prediction.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

classmethod gridsearch(parameters, series, past_covariates=None, future_covariates=None, forecast_horizon=None, stride=1, start=None, start_format='value', last_points_only=False, show_warnings=True, val_series=None, use_fitted_values=False, metric=<function mape>, reduction=<function mean>, verbose=False, n_jobs=1, n_random_samples=None, fit_kwargs=None, predict_kwargs=None)

Find the best hyper-parameters among a given set using a grid search.

This function has 3 modes of operation: Expanding window mode, split mode and fitted value mode. The three modes of operation evaluate every possible combination of hyper-parameter values provided in the parameters dictionary by instantiating the model_class subclass of ForecastingModel with each combination, and returning the best-performing model with regard to the metric function. The metric function is expected to return an error value, thus the model resulting in the smallest metric output will be chosen.

The relationship of the training data and test data depends on the mode of operation.

Expanding window mode (activated when forecast_horizon is passed): For every hyperparameter combination, the model is repeatedly trained and evaluated on different splits of series. This process is accomplished by using the backtest() function as a subroutine to produce historic forecasts starting from start that are compared against the ground truth values of series. Note that the model is retrained for every single prediction, thus this mode is slower.

Split window mode (activated when val_series is passed): This mode will be used when the val_series argument is passed. For every hyper-parameter combination, the model is trained on series and evaluated on val_series.

Fitted value mode (activated when use_fitted_values is set to True): For every hyper-parameter combination, the model is trained on series and evaluated on the resulting fitted values. Not all models have fitted values, and this method raises an error if the model doesn’t have a fitted_values member. The fitted values are the result of the fit of the model on series. Comparing with the fitted values can be a quick way to assess the model, but one cannot see if the model is overfitting the series.

Derived classes must ensure that a single instance of a model will not share parameters with the other instances, e.g., saving models in the same path. Otherwise, an unexpected behavior can arise while running several models in parallel (when n_jobs != 1). If this cannot be avoided, then gridsearch should be redefined, forcing n_jobs = 1.

Currently this method only supports deterministic predictions (i.e. when models’ predictions have only 1 sample).
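A sketch of expanding window mode with arbitrary parameter values (in this sketch the grid also carries the model's required constructor arguments, since each combination is used to instantiate the model class):

from darts.metrics import mape
from darts.models import GlobalNaiveAggregate

# `series`: a single TimeSeries, e.g. IceCreamHeaterDataset().load()
best_model, best_params, best_score = GlobalNaiveAggregate.gridsearch(
    parameters={
        "input_chunk_length": [12, 24, 60],
        "output_chunk_length": [3],
    },
    series=series,
    forecast_horizon=3,   # activates expanding window mode
    start=0.8,
    metric=mape,
)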

Parameters
  • model_class – The ForecastingModel subclass to be tuned for ‘series’.

  • parameters (dict) – A dictionary containing as keys hyperparameter names, and as values lists of values for the respective hyperparameter.

  • series (TimeSeries) – The target series used as input and target for training.

  • past_covariates (Optional[TimeSeries]) – Optionally, a past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Optional[TimeSeries]) – Optionally, a future-known covariate series. This applies only if the model supports future covariates.

  • forecast_horizon (Optional[int]) – The integer value of the forecasting horizon. Activates expanding window mode.

  • stride (int) – Only used in expanding window mode. The number of time steps between two consecutive predictions.

  • start (Union[Timestamp, float, int, None]) –

    Only used in expanding window mode. Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Only used in expanding window mode. Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • last_points_only (bool) – Only used in expanding window mode. Whether to use the whole forecasts or only the last point of each forecast to compute the error.

  • show_warnings (bool) – Only used in expanding window mode. Whether to show warnings related to the start parameter.

  • val_series (Optional[TimeSeries]) – The TimeSeries instance used for validation in split mode. If provided, this series must start right after the end of series, so that a proper comparison of the forecast can be made.

  • use_fitted_values (bool) – If True, uses the comparison with the fitted values. Raises an error if fitted_values is not an attribute of model_class.

  • metric (Callable[[TimeSeries, TimeSeries], float]) –

    A metric function that returns the error between two TimeSeries as a float value. Must either be one of Darts’ “aggregated over time” metrics (see here), or a custom metric that takes two TimeSeries as input and returns the error.

  • reduction (Callable[[ndarray], float]) – A reduction function (mapping array to float) describing how to aggregate the errors obtained on the different validation series when backtesting. By default it’ll compute the mean of errors.

  • verbose – Whether to print progress.

  • n_jobs (int) – The number of jobs to run in parallel. Parallel jobs are created only when there are two or more parameters combinations to evaluate. Each job will instantiate, train, and evaluate a different instance of the model. Defaults to 1 (sequential). Setting the parameter to -1 means using all the available cores.

  • n_random_samples (Union[int, float, None]) – The number/ratio of hyperparameter combinations to select from the full parameter grid. This will perform a random search instead of using the full grid. If an integer, n_random_samples is the number of parameter combinations selected from the full grid and must be between 0 and the total number of parameter combinations. If a float, n_random_samples is the ratio of parameter combinations selected from the full grid and must be between 0 and 1. Defaults to None, for which random selection will be ignored.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Returns

A tuple containing an untrained model_class instance created from the best-performing hyper-parameters, along with a dictionary containing these best hyper-parameters, and the metric score obtained with them.

Return type

ForecastingModel, Dict, float

historical_forecasts(series, past_covariates=None, future_covariates=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, overlap_end=False, last_points_only=True, verbose=False, show_warnings=True, predict_likelihood_parameters=False, enable_optimization=True, fit_kwargs=None, predict_kwargs=None)

Compute the historical forecasts that would have been obtained by this model on (potentially multiple) series.

This method repeatedly builds a training set: either expanding from the beginning of series or moving with a fixed length train_length. It trains the model on the training set, emits a forecast of length equal to forecast_horizon, and then moves the end of the training set forward by stride time steps.

By default, this method will return one (or a sequence of) single time series made up of the last point of each historical forecast. This time series will thus have a frequency of series.freq * stride. If last_points_only is set to False, it will instead return one (or a sequence of) list of the historical forecasts series.

By default, this method always re-trains the models on the entire available history, corresponding to an expanding window strategy. If retrain is set to False, the model must have been fit before. This is not supported by all models.
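An illustrative sketch (arbitrary parameter values, reusing a fitted naive model and series as in the examples above):

# one 3-step forecast per stride over the last 20% of the series; with
# last_points_only=True, the result is a single TimeSeries of the last points
hf = model.historical_forecasts(
    series,
    start=0.8,
    forecast_horizon=3,
    stride=1,
    retrain=False,
    last_points_only=True,
)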

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series used to successively train and compute the historical forecasts.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) future-known covariate series. This applies only if the model supports future covariates.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in our training set (size of backtesting window to train on). Only effective when retrain is not False. The default, train_length=None, uses all available time steps up until prediction time (expanding window); otherwise, a moving window of fixed length is used. If larger than the number of available time steps, all steps up until prediction time are used, as in the default case. Needs to be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: If the model uses a shifted output (output_chunk_shift > 0), then the first predicted point is also shifted by output_chunk_shift points into the future. Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • forecast_horizon (int) – The forecast horizon for the predictions.

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: retrain the model at each step (True), or never retrain it (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever the callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecasts(), None will be passed to the corresponding retrain function argument. Note: some models require being retrained at every iteration and do not support anything other than retrain=True.

  • overlap_end (bool) – Whether the returned forecasts can go beyond the series’ end or not.

  • last_points_only (bool) – Whether to retain only the last point of each historical forecast. If set to True, the method returns a single TimeSeries containing the successive point forecasts. Otherwise, returns a list of historical TimeSeries forecasts.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to historical forecasts optimization, or parameters start and train_length.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n<=output_chunk_length. Default: False

  • enable_optimization (bool) – Whether to use the optimized version of historical_forecasts when supported and available.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Return type

Union[TimeSeries, List[TimeSeries], List[List[TimeSeries]]]

Returns

  • TimeSeries – A single historical forecast for a single series and last_points_only=True: it contains only the predictions at step forecast_horizon from all historical forecasts.

  • List[TimeSeries] – A list of historical forecasts for:

    • a sequence (list) of series and last_points_only=True: for each series, it contains only the predictions at step forecast_horizon from all historical forecasts.

    • a single series and last_points_only=False: for each historical forecast, it contains the entire horizon forecast_horizon.

  • List[List[TimeSeries]] – A list of lists of historical forecasts for a sequence of series and last_points_only=False. For each series, and historical forecast, it contains the entire horizon forecast_horizon. The outer list is over the series provided in the input sequence, and the inner lists contain the historical forecasts for each series.

property input_chunk_length: int
Return type

int

property likelihood: Optional[Likelihood]
Return type

Optional[Likelihood]

static load(path, **kwargs)

Loads a model from a given file path.

Example for loading a general save from RNNModel:

from darts.models import RNNModel

model_loaded = RNNModel.load(path)

Example for loading an RNNModel to CPU that was saved on GPU:

from darts.models import RNNModel

model_loaded = RNNModel.load(path, map_location="cpu")
model_loaded.to_cpu()
Parameters
  • path (str) – Path from which to load the model. If no path was specified when saving the model, the automatically generated path ending with “.pt” has to be provided.

  • **kwargs – Additional kwargs for PyTorch Lightning’s LightningModule.load_from_checkpoint() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

Return type

TorchForecastingModel

static load_from_checkpoint(model_name, work_dir=None, file_name=None, best=True, **kwargs)

Load the model from automatically saved checkpoints under ‘{work_dir}/darts_logs/{model_name}/checkpoints/’. This method is used for models that were created with save_checkpoints=True.

If you manually saved your model, consider using load().

Example for loading a RNNModel from checkpoint (model_name is the model_name used at model creation):

from darts.models import RNNModel

model_loaded = RNNModel.load_from_checkpoint(model_name, best=True)

If file_name is given, returns the model saved under ‘{work_dir}/darts_logs/{model_name}/checkpoints/{file_name}’.

If file_name is not given, will try to restore the best checkpoint (if best is True) or the most recent checkpoint (if best is False) from ‘{work_dir}/darts_logs/{model_name}/checkpoints/’.

Example for loading an RNNModel checkpoint to CPU that was saved on GPU:

from darts.models import RNNModel

model_loaded = RNNModel.load_from_checkpoint(model_name, best=True, map_location="cpu")
model_loaded.to_cpu()
Parameters
  • model_name (str) – The name of the model, used to retrieve the checkpoints folder’s name.

  • work_dir (Optional[str]) – Working directory (containing the checkpoints folder). Defaults to current working directory.

  • file_name (Optional[str]) – The name of the checkpoint file. If not specified, use the most recent one.

  • best (bool) – If set, will retrieve the best model (according to validation loss) instead of the most recent one. Ignored when file_name is given.

  • **kwargs

    Additional kwargs for PyTorch Lightning’s LightningModule.load_from_checkpoint() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

Returns

The corresponding trained TorchForecastingModel.

Return type

TorchForecastingModel

load_weights(path, load_encoders=True, skip_checks=False, **kwargs)

Loads the weights from a manually saved model (saved with save()).

Note: This method needs to be able to access the darts model checkpoint (.pt) in order to load the encoders and perform sanity checks on the model parameters.
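A minimal sketch (hypothetical file name; this assumes that manual saving and weight loading behave for global naive models as they do for other TorchForecastingModels):

model = GlobalNaiveAggregate(input_chunk_length=60, output_chunk_length=3)
model.fit(series)
model.save("naive_aggregate.pt")  # hypothetical path

# later: recreate a model with the same parameters and load only the weights
model_new = GlobalNaiveAggregate(input_chunk_length=60, output_chunk_length=3)
model_new.load_weights("naive_aggregate.pt")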

Parameters
  • path (str) – Path from which to load the model’s weights. If no path was specified when saving the model, the automatically generated path ending with “.pt” has to be provided.

  • load_encoders (bool) – If set, will load the encoders from the model to enable direct call of fit() or predict(). Default: True.

  • skip_checks (bool) – If set, will disable the loading of the encoders and the sanity checks on model parameters (not recommended). Cannot be used with load_encoders=True. Default: False.

  • **kwargs

    Additional kwargs for PyTorch’s load() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

load_weights_from_checkpoint(model_name=None, work_dir=None, file_name=None, best=True, strict=True, load_encoders=True, skip_checks=False, **kwargs)

Load only the weights from automatically saved checkpoints under ‘{work_dir}/darts_logs/{model_name}/checkpoints/’. This method is used for models that were created with save_checkpoints=True and that need to be re-trained or fine-tuned with a different optimizer or learning rate scheduler. However, it can also be used to load weights for inference.

To resume an interrupted training, please consider using load_from_checkpoint(), which also reloads the trainer, optimizer and learning rate scheduler states.

For manually saved models, consider using load() or load_weights() instead.

Note: This method needs to be able to access the darts model checkpoint (.pt) in order to load the encoders and perform sanity checks on the model parameters.

Parameters
  • model_name (Optional[str]) – The name of the model, used to retrieve the checkpoints folder’s name. Default: self.model_name.

  • work_dir (Optional[str]) – Working directory (containing the checkpoints folder). Defaults to current working directory.

  • file_name (Optional[str]) – The name of the checkpoint file. If not specified, use the most recent one.

  • best (bool) – If set, will retrieve the best model (according to validation loss) instead of the most recent one. Ignored when file_name is given. Default: True.

  • strict (bool) –

    If set, strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict(). Default: True. For more information, read the official documentation.

  • load_encoders (bool) – If set, will load the encoders from the model to enable direct call of fit() or predict(). Default: True.

  • skip_checks (bool) – If set, will disable the loading of the encoders and the sanity checks on model parameters (not recommended). Cannot be used with load_encoders=True. Default: False.

  • **kwargs

    Additional kwargs for PyTorch’s load() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

lr_find(series, past_covariates=None, future_covariates=None, val_series=None, val_past_covariates=None, val_future_covariates=None, trainer=None, verbose=None, epochs=0, max_samples_per_ts=None, num_loader_workers=0, min_lr=1e-08, max_lr=1, num_training=100, mode='exponential', early_stop_threshold=4.0)

A wrapper around PyTorch Lightning’s Tuner.lr_find(). Performs a range test of good initial learning rates, to reduce the amount of guesswork in picking a good starting learning rate. For more information on PyTorch Lightning’s Tuner check out this link. It is recommended to increase the number of epochs if the tuner did not give satisfactory results. Consider creating a new model object with the suggested learning rate, for example using the model creation parameters optimizer_cls, optimizer_kwargs, lr_scheduler_cls, and lr_scheduler_kwargs.

Example using an NBEATSModel:

import torch
from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load()
train, val = series[:-18], series[-18:]
model = NBEATSModel(input_chunk_length=12, output_chunk_length=6, random_state=42)
# run the learning rate tuner
results = model.lr_find(series=train, val_series=val)
# plot the results
results.plot(suggest=True, show=True)
# create a new model with the suggested learning rate
model = NBEATSModel(
    input_chunk_length=12,
    output_chunk_length=6,
    random_state=42,
    optimizer_cls=torch.optim.Adam,
    optimizer_kwargs={"lr": results.suggestion()}
)
Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – A series or sequence of series serving as target (i.e. what the model will be trained to forecast)

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying past-observed covariates

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying future-known covariates

  • val_series (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one or a sequence of validation target series, which will be used to compute the validation loss throughout training and keep track of the best performing models.

  • val_past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past covariates corresponding to the validation series (must match covariates)

  • val_future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future covariates corresponding to the validation series (must match covariates)

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform training. Using a custom trainer will override Darts’ default trainer.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • epochs (int) – If specified, will train the model for this many (additional) epochs, irrespective of the n_epochs provided to the model constructor.

  • max_samples_per_ts (Optional[int]) – Optionally, a maximum number of samples to use per time series. Models are trained in a supervised fashion by constructing slices of (input, output) examples. On long time series, this can result in unnecessarily large number of training samples. This parameter upper-bounds the number of training samples per time series (taking only the most recent samples in each series). Leaving to None does not apply any upper bound.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, both for the training and validation loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • min_lr (float) – minimum learning rate to investigate

  • max_lr (float) – maximum learning rate to investigate

  • num_training (int) – number of learning rates to test

  • mode (str) – Search strategy to update learning rate after each batch: ‘exponential’: Increases the learning rate exponentially. ‘linear’: Increases the learning rate linearly.

  • early_stop_threshold (float) – Threshold for stopping the search. If the loss at any point is larger than early_stop_threshold*best_loss then the search is stopped. To disable, set to None

Returns

_LRFinder object of Lightning containing the results of the LR sweep.

Return type

lr_finder

property min_train_samples: int

The minimum number of samples for training the model.

Return type

int

property model_created: bool
Return type

bool

property model_params: dict
Return type

dict

property output_chunk_length: int

Number of time steps predicted at once by the model, not defined for statistical models.

Return type

int

property output_chunk_shift: int

Number of time steps that the output/prediction starts after the end of the input.

Return type

int

predict(n, series=None, past_covariates=None, future_covariates=None, trainer=None, batch_size=None, verbose=None, n_jobs=1, roll_size=None, num_samples=1, num_loader_workers=0, mc_dropout=False, predict_likelihood_parameters=False, show_warnings=True)

Predict the n time steps following the end of the training series, or of the specified series.

Prediction is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with optional parameter trainer. For more information on PyTorch Lightning Trainers check out this link.

Below, all possible parameters are documented, but not all models support all parameters. For instance, all PastCovariatesTorchModel subclasses support only past_covariates and not future_covariates. Darts will complain if you try calling predict() on a model with the wrong covariates argument.

Darts will also complain if the provided covariates do not have a sufficient time span. In general, not all models require the same covariates’ time spans:

  • Models relying on past covariates require the last input_chunk_length points of past_covariates to be known at prediction time. For horizon values n > output_chunk_length, these models additionally require at least the next n - output_chunk_length future values to be known.

  • Models relying on future covariates require the next n values to be known. In addition (for DualCovariatesTorchModel and MixedCovariatesTorchModel), they also require the “historic” values of these future covariates (over the past input_chunk_length).

When handling covariates, Darts will try to use the time axes of the target and the covariates to come up with the right time slices. So the covariates can be longer than needed; as long as the time axes are correct, Darts will handle them correctly. It will also complain if their time span is not sufficient.

Parameters
  • n (int) – The number of time steps after the end of the training time series for which to produce predictions.

  • series (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series, representing the history of the target series whose future is to be predicted. If specified, the method returns the forecasts of these series. Otherwise, the method returns the forecast of the (single) training series.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series needed as inputs for the model. They must match the covariates used for training in terms of dimension.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series needed as inputs for the model. They must match the covariates used for training in terms of dimension.

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform prediction. Using a custom trainer will override Darts’ default trainer.

  • batch_size (Optional[int]) – Size of batches during prediction. Defaults to the models’ training batch_size value.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • n_jobs (int) – The number of jobs to run in parallel. -1 means using all processors. Defaults to 1.

  • roll_size (Optional[int]) – For self-consuming predictions, i.e. n > output_chunk_length, determines how many outputs of the model are fed back into it at every iteration of feeding the predicted target (and optionally future covariates) back into the model. If this parameter is not provided, it will be set to output_chunk_length by default.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Should be left set to 1 for deterministic models.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, for the inference/prediction dataset loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • mc_dropout (bool) – Optionally, enable Monte Carlo dropout for predictions using neural-network-based models. This allows for Bayesian approximation by specifying an implicit prior over learned models.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n <= output_chunk_length. Default: False.

  • show_warnings (bool) – Optionally, control whether warnings are shown. Not effective for all models.

Returns

One or several time series containing the forecasts of series, or the forecast of the training series if series is not specified and the model has been trained on a single series.

Return type

Union[TimeSeries, Sequence[TimeSeries]]
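
For example, a minimal sketch reusing the naive model and the series list from the class examples above (names are placeholders for whatever was fitted):

# forecast 3 steps after the end of each series in `series`
pred = model.fit(series).predict(n=3, series=series)

# when fitted on a single series, `series` can be omitted at prediction time
pred_single = model.fit(series_1).predict(n=3)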

predict_from_dataset(n, input_series_dataset, trainer=None, batch_size=None, verbose=None, n_jobs=1, roll_size=None, num_samples=1, num_loader_workers=0, mc_dropout=False, predict_likelihood_parameters=False)

This method allows for predicting with a specific darts.utils.data.InferenceDataset instance. These datasets implement a PyTorch Dataset, and specify how the target and covariates are sliced for inference. In most cases, you will want to call predict() instead, which will create an appropriate InferenceDataset for you.

Prediction is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with the optional parameter trainer. For more information on PyTorch Lightning Trainers, see the PyTorch Lightning documentation.

Parameters
  • n (int) – The number of time steps after the end of the training time series for which to produce predictions.

  • input_series_dataset (InferenceDataset) – The dataset containing the target series (and any covariates) whose future is to be predicted. The method returns the forecasts for these series.

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform prediction. Using a custom trainer will override Darts’ default trainer.

  • batch_size (Optional[int]) – Size of batches during prediction. Defaults to the model’s training batch_size value.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • n_jobs (int) – The number of jobs to run in parallel. -1 means using all processors. Defaults to 1.

  • roll_size (Optional[int]) – For self-consuming predictions, i.e. n > output_chunk_length, determines how many outputs of the model are fed back into it at every iteration of feeding the predicted target (and optionally future covariates) back into the model. If this parameter is not provided, it will be set to output_chunk_length by default.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Should be left set to 1 for deterministic models.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, for the inference/prediction dataset loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • mc_dropout (bool) – Optionally, enable Monte Carlo dropout for predictions using neural-network-based models. This allows for Bayesian approximation by specifying an implicit prior over learned models.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n <= output_chunk_length. Default: False.

Returns

Returns one or more forecasts for time series.

Return type

Sequence[TimeSeries]

reset_model()

Resets the model object and removes all stored data - model, checkpoints, loggers and training history.

residuals(series, past_covariates=None, future_covariates=None, historical_forecasts=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, last_points_only=True, metric=<function err>, verbose=False, show_warnings=True, metric_kwargs=None, fit_kwargs=None, predict_kwargs=None, values_only=False)

Compute the residuals produced by this model on a (or sequence of) TimeSeries.

This function computes the difference (or one of Darts’ “per time step” metrics) between the actual observations from series and the fitted values obtained by training the model on series (or using a pre-trained model with retrain=False). Not all models support fitted values, so we use historical forecasts as an approximation for them.

In sequence, this method performs the following steps:

  • compute historical forecasts for each series or use pre-computed historical_forecasts (see historical_forecasts() for more details). How the historical forecasts are generated can be configured with parameters num_samples, train_length, start, start_format, forecast_horizon, stride, retrain, last_points_only, fit_kwargs, and predict_kwargs.

  • compute a backtest using a “per time step” metric between the historical forecasts and series per component/column and time step (see backtest() for more details). By default, uses the residuals err() as a metric.

  • create and return TimeSeries (or simply a np.ndarray with values_only=True) with the time index from historical forecasts, and values from the metrics per component and time step.

This method works for single or multiple univariate or multivariate series. It uses the median prediction (when dealing with stochastic forecasts).

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series for which the residuals are computed.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – One or several past-observed covariate time series.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – One or several future-known covariate time series.

  • forecast_horizon (int) – The forecast horizon used to predict each fitted value (i.e. each point prediction).

  • historical_forecasts (Union[TimeSeries, Sequence[TimeSeries], Sequence[Sequence[TimeSeries]], None]) – Optionally, the (or a sequence of / a sequence of sequences of) historical forecasts time series to be evaluated. Corresponds to the output of historical_forecasts(). The same series and last_points_only values must be passed that were used to generate the historical forecasts. If provided, will skip historical forecasting and ignore all parameters except series, last_points_only, and metric.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in our training set (size of backtesting window to train on). Only effective when retrain is not False. Default is set to train_length=None where it takes all available time steps up until prediction time, otherwise the moving window strategy is used. If larger than the number of time steps available, all steps up until prediction time are used, as in default case. Needs to be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: retrain the model at each step (True), or never retrains the model (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecast, None will be passed to the corresponding retrain function argument. Note: some models do require being retrained every time and do not support anything other than retrain=True.

  • last_points_only (bool) – Whether to use the whole historical forecasts or only the last point of each forecast to compute the error.

  • metric (Callable[…, Union[float, List[float], ndarray, List[ndarray]]]) –

    Either one of Darts’ “per time step” metrics (see here), or a custom metric that has an identical signature as Darts’ “per time step” metrics, uses the decorators multi_ts_support() and multivariate_support(), and returns one value per time step.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to parameters start, and train_length.

  • metric_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to metric(), such as ‘n_jobs’ for parallelization, ‘m’ for scaled metrics, etc. Will pass arguments only if they are present in the corresponding metric signature. Ignores reduction arguments “series_reduction”, “component_reduction”, “time_reduction”, and parameter ‘insample’ for scaled metrics (e.g. mase, rmsse, …), as they are handled internally.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

  • values_only (bool) – Whether to return the residuals as np.ndarray. If False, returns residuals as TimeSeries.

Return type

Union[TimeSeries, List[TimeSeries], List[List[TimeSeries]]]

Returns

  • TimeSeries – Residual TimeSeries for a single series and historical_forecasts generated with last_points_only=True.

  • List[TimeSeries] – A list of residual TimeSeries for a sequence (list) of series with last_points_only=True. The residual list has length len(series).

  • List[List[TimeSeries]] – A list of lists of residual TimeSeries for a sequence of series with last_points_only=False. The outer residual list has length len(series). The inner lists consist of the residuals from all possible series-specific historical forecasts.
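
A minimal usage sketch (model and series_1 as in the class examples above; darts.metrics.ae is assumed to be available as one of the “per time step” metrics):

from darts.metrics import ae  # absolute error, a "per time step" metric

# residuals of 1-step-ahead historical forecasts, using the default `err` metric
res = model.residuals(series_1, forecast_horizon=1)

# same, but with absolute errors and returned as a raw numpy array
res_vals = model.residuals(series_1, forecast_horizon=1, metric=ae, values_only=True)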

save(path=None)

Saves the model under a given path.

Creates two files under path (model object) and path.ckpt (checkpoint).

Example for saving and loading an RNNModel:

from darts.models import RNNModel

model = RNNModel(input_chunk_length=4)

model.save("my_model.pt")
model_loaded = RNNModel.load("my_model.pt")
Parameters

path (Optional[str]) – Path under which to save the model at its current state. Please avoid paths starting with “last-” or “best-” to avoid collision with PyTorch Lightning checkpoints. If no path is specified, the model is automatically saved under "{ModelClass}_{YYYY-mm-dd_HH_MM_SS}.pt". E.g., "RNNModel_2020-01-01_12_00_00.pt".

Return type

None

property supports_future_covariates: bool

Whether the model supports future covariates.

Return type

bool

supports_likelihood_parameter_prediction()

Whether the model instance supports direct prediction of likelihood parameters.

Return type

bool

property supports_multivariate: bool

Whether the model considers more than one variate in the time series.

Return type

bool

property supports_optimized_historical_forecasts: bool

Whether the model supports optimized historical forecasts

Return type

bool

property supports_past_covariates: bool

Whether the model supports past covariates.

Return type

bool

supports_probabilistic_prediction()

Checks if the forecasting model with this configuration supports probabilistic predictions.

By default, returns False. Needs to be overridden by models that do support probabilistic predictions.

Return type

bool

property supports_static_covariates: bool

Whether the model supports static covariates.

Return type

bool

property supports_transferrable_series_prediction: bool

Whether the model supports prediction for any input series.

Return type

bool

to_cpu()

Updates the PyTorch Lightning Trainer parameters to move the model to CPU the next time fit() or predict() is called.

property uses_future_covariates: bool

Whether the model uses future covariates, once fitted.

Return type

bool

property uses_past_covariates: bool

Whether the model uses past covariates, once fitted.

Return type

bool

property uses_static_covariates: bool

Whether the model uses static covariates, once fitted.

Return type

bool

class darts.models.forecasting.global_baseline_models.GlobalNaiveDrift(input_chunk_length, output_chunk_length, output_chunk_shift=0, **kwargs)[source]

Bases: _NoCovariatesMixin, _GlobalNaiveModel

Global Naive Drift Model.

The model generates forecasts for each series as described below:

  • take the slope m from each target component between the point input_chunk_length steps before the end and the last point of the series.

  • the forecast is m * x + c per component, where x takes the values range(1 + output_chunk_shift, 1 + output_chunk_length + output_chunk_shift) and c is the last value of each target component (see the numpy sketch after the Examples below).

Depending on the horizon n used when calling model.predict(), the forecasts are either:

  • a linear drift if n <= output_chunk_length, or

  • a moving drift if n > output_chunk_length, as a result of the autoregressive prediction.

This model is equivalent to:

  • NaiveDrift, when input_chunk_length is equal to the length of the input target series and output_chunk_length=n.

Note

  • Model checkpointing with save_checkpoints=True, and checkpoint loading with load_from_checkpoint() and load_weights_from_checkpoint() are not supported for global naive models.

Parameters
  • input_chunk_length (int) – The length of the input sequence fed to the model.

  • output_chunk_length (int) – The length of the emitted forecast and output sequence fed to the model.

  • output_chunk_shift (int) – Optionally, the number of steps to shift the start of the output chunk into the future (relative to the input chunk end). This will create a gap between the input and output. If the model supports future_covariates, the future values are extracted from the shifted output chunk. Predictions will start output_chunk_shift steps after the end of the target series. If output_chunk_shift is set, the model cannot generate autoregressive predictions (n > output_chunk_length).

  • **kwargs – Optional arguments to initialize the pytorch_lightning.Module, pytorch_lightning.Trainer, and Darts’ TorchForecastingModel. Since naive models are not trained, the following parameters will have no effect: loss_fn, likelihood, optimizer_cls, optimizer_kwargs, lr_scheduler_cls, lr_scheduler_kwargs, n_epochs, save_checkpoints, and some of pl_trainer_kwargs.

Examples

>>> from darts.datasets import IceCreamHeaterDataset
>>> from darts.models import GlobalNaiveDrift
>>> # create list of multivariate series
>>> series_1 = IceCreamHeaterDataset().load()
>>> series_2 = series_1 + 100.
>>> series = [series_1, series_2]
>>> # predict 3 months, use drift over the last 60 months
>>> horizon, icl = 3, 60
>>> # linear drift (with `output_chunk_length = horizon`)
>>> model = GlobalNaiveDrift(input_chunk_length=icl, output_chunk_length=horizon)
>>> # predict after end of each multivariate series
>>> pred = model.fit(series).predict(n=horizon, series=series)
>>> [p.values() for p in pred]
[array([[24.135593, 74.28814 ],
       [24.271187, 74.57627 ],
       [24.40678 , 74.86441 ]]), array([[124.13559, 174.28813],
       [124.27119, 174.57628],
       [124.40678, 174.86441]])]
>>> # moving drift (with `output_chunk_length < horizon`)
>>> model = GlobalNaiveDrift(input_chunk_length=icl, output_chunk_length=1)
>>> pred = model.fit(series).predict(n=horizon, series=series)
>>> [p.values() for p in pred]
[array([[24.135593, 74.28814 ],
       [24.256536, 74.784546],
       [24.34563 , 75.45886 ]]), array([[124.13559, 174.28813],
       [124.25653, 174.78455],
       [124.34563, 175.45886]])]
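
The linear-drift values above follow from the description at the top of this class. Below is a minimal numpy sketch of the per-component computation it implies, using made-up values and assuming the slope is taken between the first and last points of the input chunk and that output_chunk_shift=0; it is an illustration of the formula, not the model's internal implementation.

import numpy as np

icl, horizon = 60, 3
# placeholder: pretend these are the last `icl` values of one target component
y = np.linspace(20.0, 24.0, icl)

m = (y[-1] - y[0]) / (icl - 1)   # slope over the input chunk
c = y[-1]                        # last observed value of the component
x = np.arange(1, horizon + 1)    # range(1, 1 + output_chunk_length), no shift
forecast = m * x + c             # drift continuing linearly from the last value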

Attributes

considers_static_covariates

Whether the model considers static covariates, if there are any.

extreme_lags

An 8-tuple containing in order: (min target lag, max target lag, min past covariate lag, max past covariate lag, min future covariate lag, max future covariate lag, output shift, max target lag train (only for RNNModel)).

min_train_samples

The minimum number of samples for training the model.

output_chunk_length

Number of time steps predicted at once by the model, not defined for statistical models.

output_chunk_shift

Number of time steps that the output/prediction starts after the end of the input.

supports_multivariate

Whether the model considers more than one variate in the time series.

supports_optimized_historical_forecasts

Whether the model supports optimized historical forecasts

supports_transferrable_series_prediction

Whether the model supports prediction for any input series.

uses_future_covariates

Whether the model uses future covariates, once fitted.

uses_past_covariates

Whether the model uses past covariates, once fitted.

uses_static_covariates

Whether the model uses static covariates, once fitted.

epochs_trained

input_chunk_length

likelihood

model_created

model_params

supports_future_covariates

supports_past_covariates

supports_static_covariates

Methods

backtest(series[, past_covariates, ...])

Compute error values that the model would have produced when used on (potentially multiple) series.

fit(series[, past_covariates, future_covariates])

Fit/train the model on a (or potentially multiple) series.

fit_from_dataset(train_dataset[, ...])

Train the model with a specific darts.utils.data.TrainingDataset instance.

generate_fit_encodings(series[, ...])

Generates the covariate encodings that were used/generated for fitting the model and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

generate_fit_predict_encodings(n, series[, ...])

Generates covariate encodings for training and inference/prediction and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

generate_predict_encodings(n, series[, ...])

Generates covariate encodings for the inference/prediction set and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

gridsearch(parameters, series[, ...])

Find the best hyper-parameters among a given set using a grid search.

historical_forecasts(series[, ...])

Compute the historical forecasts that would have been obtained by this model on (potentially multiple) series.

load(path, **kwargs)

Loads a model from a given file path.

load_from_checkpoint(model_name[, work_dir, ...])

Load the model from automatically saved checkpoints under '{work_dir}/darts_logs/{model_name}/checkpoints/'.

load_weights(path[, load_encoders, skip_checks])

Loads the weights from a manually saved model (saved with save()).

load_weights_from_checkpoint([model_name, ...])

Load only the weights from automatically saved checkpoints under '{work_dir}/darts_logs/{model_name}/checkpoints/'.

lr_find(series[, past_covariates, ...])

A wrapper around PyTorch Lightning's Tuner.lr_find().

predict(n[, series, past_covariates, ...])

Predict the n time step following the end of the training series, or of the specified series.

predict_from_dataset(n, input_series_dataset)

This method allows for predicting with a specific darts.utils.data.InferenceDataset instance.

reset_model()

Resets the model object and removes all stored data - model, checkpoints, loggers and training history.

residuals(series[, past_covariates, ...])

Compute the residuals produced by this model on a (or sequence of) TimeSeries.

save([path])

Saves the model under a given path.

supports_likelihood_parameter_prediction()

Whether model instance supports direct prediction of likelihood parameters

supports_probabilistic_prediction()

Checks if the forecasting model with this configuration supports probabilistic predictions.

to_cpu()

Updates the PyTorch Lightning Trainer parameters to move the model to CPU the next time fit() or predict() is called.

backtest(series, past_covariates=None, future_covariates=None, historical_forecasts=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, overlap_end=False, last_points_only=False, metric=<function mape>, reduction=<function mean>, verbose=False, show_warnings=True, metric_kwargs=None, fit_kwargs=None, predict_kwargs=None)

Compute error values that the model would have produced when used on (potentially multiple) series.

If historical_forecasts are provided, the metric (given by the metric function) is evaluated directly on the forecast and the actual values. The same series must be passed that was used to generate the historical forecasts. Otherwise, it repeatedly builds a training set: either expanding from the beginning of series or moving with a fixed length train_length. It trains the current model on the training set, emits a forecast of length equal to forecast_horizon, and then moves the end of the training set forward by stride time steps. The metric is then evaluated on the forecast and the actual values. Finally, the method returns a reduction (the mean by default) of all these metric scores.

By default, this method uses each historical forecast (whole) to compute error scores. If last_points_only is set to True, it will use only the last point of each historical forecast. In this case, no reduction is used.

By default, this method always re-trains the models on the entire available history, corresponding to an expanding window strategy. If retrain is set to False (useful for models for which training might be time-consuming, such as deep learning models), the trained model will be used directly to emit the forecasts.

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series used to successively train and evaluate the historical forecasts.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) future-known covariate series. This applies only if the model supports future covariates.

  • historical_forecasts (Union[TimeSeries, Sequence[TimeSeries], Sequence[Sequence[TimeSeries]], None]) – Optionally, the (or a sequence of / a sequence of sequences of) historical forecasts time series to be evaluated. Corresponds to the output of historical_forecasts(). The same series and last_points_only values must be passed that were used to generate the historical forecasts. If provided, will skip historical forecasting and ignore all parameters except series, last_points_only, metric, and reduction.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in our training set (size of backtesting window to train on). Only effective when retrain is not False. Default is set to train_length=None where it takes all available time steps up until prediction time, otherwise the moving window strategy is used. If larger than the number of time steps available, all steps up until prediction time are used, as in default case. Needs to be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • forecast_horizon (int) – The forecast horizon for the point predictions.

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: retrain the model at each step (True), or never retrains the model (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecast, None will be passed to the corresponding retrain function argument. Note: some models do require being retrained every time and do not support anything other than retrain=True.

  • overlap_end (bool) – Whether the returned forecasts can go beyond the series’ end or not.

  • last_points_only (bool) – Whether to use the whole historical forecasts or only the last point of each forecast to compute the error.

  • metric (Union[Callable[…, Union[float, List[float], ndarray, List[ndarray]]], List[Callable[…, Union[float, List[float], ndarray, List[ndarray]]]]]) –

    A metric function or a list of metric functions. Each metric must either be a Darts metric (see here), or a custom metric that has an identical signature as Darts’ metrics, uses the decorators multi_ts_support() and multivariate_support(), and returns the metric score.

  • reduction (Optional[Callable[…, float]]) – A function used to combine the individual error scores obtained when last_points_only is set to False. When providing several metric functions, the function will receive the argument axis=1 to obtain a single value for each metric function. If explicitly set to None, the method will return a list of the individual error scores instead. Set to np.mean by default.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to parameters start, and train_length.

  • metric_kwargs (Union[Dict[str, Any], List[Dict[str, Any]], None]) – Additional arguments passed to metric(), such as ‘n_jobs’ for parallelization, ‘component_reduction’ for reducing the component wise metrics, seasonality ‘m’ for scaled metrics, etc. Will pass arguments to each metric separately and only if they are present in the corresponding metric signature. Parameter ‘insample’ for scaled metrics (e.g. mase, rmsse, …) is ignored, as it is handled internally.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Return type

Union[float, ndarray, List[float], List[ndarray]]

Returns

  • float – A single backtest score for single uni/multivariate series, a single metric function and:

    • historical_forecasts generated with last_points_only=True

    • historical_forecasts generated with last_points_only=False and using a backtest reduction

  • np.ndarray – A numpy array of backtest scores. For single series and one of:

    • a single metric function, historical_forecasts generated with last_points_only=False and backtest reduction=None. The output has shape (n forecasts,).

    • multiple metric functions and historical_forecasts generated with last_points_only=False. The output has shape (n metrics,) when using a backtest reduction, and (n metrics, n forecasts) when reduction=None

    • multiple uni/multivariate series including series_reduction and at least one of component_reduction=None or time_reduction=None for “per time step metrics”

  • List[float] – Same as for type float but for a sequence of series. The returned metric list has length len(series) with the float metric for each input series.

  • List[np.ndarray] – Same as for type np.ndarray but for a sequence of series. The returned metric list has length len(series) with the np.ndarray metrics for each input series.
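
A minimal usage sketch (model is a GlobalNaiveDrift instance and series_1 is as in the class examples above; mape is the default metric and is shown explicitly here):

from darts.metrics import mape

# expanding-window backtest: start after 75% of the series, 12-step forecasts,
# stride of 1, and the mean MAPE over all historical forecasts
score = model.backtest(
    series_1,
    start=0.75,
    forecast_horizon=12,
    stride=1,
    metric=mape,
)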

property considers_static_covariates: bool

Whether the model considers static covariates, if there are any.

Return type

bool

property epochs_trained: int
Return type

int

property extreme_lags: Tuple[Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], int, Optional[int]]

An 8-tuple containing in order: (min target lag, max target lag, min past covariate lag, max past covariate lag, min future covariate lag, max future covariate lag, output shift, max target lag train (only for RNNModel)). If 0 is the index of the first prediction, then all lags are relative to this index.

See examples below.

If the model wasn’t fitted with:
  • target (concerning RegressionModels only): then the first element should be None.

  • past covariates: then the third and fourth elements should be None.

  • future covariates: then the fifth and sixth elements should be None.

Should be overridden by models that use past or future covariates, and/or by models whose minimum and maximum target lags differ from -1 and 0.

Notes

maximum target lag (second value) cannot be None and is always larger than or equal to 0.

Examples

>>> model = LinearRegressionModel(lags=3, output_chunk_length=2)
>>> model.fit(train_series)
>>> model.extreme_lags
(-3, 1, None, None, None, None, 0, None)
>>> model = LinearRegressionModel(lags=3, output_chunk_length=2, output_chunk_shift=2)
>>> model.fit(train_series)
>>> model.extreme_lags
(-3, 1, None, None, None, None, 2, None)
>>> model = LinearRegressionModel(lags=[-3, -5], lags_past_covariates = 4, output_chunk_length=7)
>>> model.fit(train_series, past_covariates=past_covariates)
>>> model.extreme_lags
(-5, 6, -4, -1,  None, None, 0, None)
>>> model = LinearRegressionModel(lags=[3, 5], lags_future_covariates = [4, 6], output_chunk_length=7)
>>> model.fit(train_series, future_covariates=future_covariates)
>>> model.extreme_lags
(-5, 6, None, None, 4, 6, 0, None)
>>> model = NBEATSModel(input_chunk_length=10, output_chunk_length=7)
>>> model.fit(train_series)
>>> model.extreme_lags
(-10, 6, None, None, None, None, 0, None)
>>> model = NBEATSModel(input_chunk_length=10, output_chunk_length=7, lags_future_covariates=[4, 6])
>>> model.fit(train_series, future_covariates)
>>> model.extreme_lags
(-10, 6, None, None, 4, 6, 0, None)
Return type

Tuple[Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], int, Optional[int]]

fit(series, past_covariates=None, future_covariates=None, *args, **kwargs)

Fit/train the model on a (or potentially multiple) series. This method is only implemented for naive baseline models to provide a unified fit/predict API with other forecasting models.

The model is not really trained on the input; rather, fit() is used to set up the model based on the input series. It also stores the training series when only a single TimeSeries is passed, which allows calling predict() later without having to pass that series again.

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – A series or sequence of series serving as target (i.e. what the model will be trained to forecast)

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying past-observed covariates

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying future-known covariates

  • **kwargs – Optionally, some keyword arguments.

Returns

Fitted model.

Return type

self
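
For example, a minimal sketch (series_1 as in the class examples above):

from darts.models import GlobalNaiveDrift

# fitting on a single series stores it, so predict() needs no `series` argument
model = GlobalNaiveDrift(input_chunk_length=60, output_chunk_length=3)
model.fit(series_1)
pred = model.predict(n=3)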

fit_from_dataset(train_dataset, val_dataset=None, trainer=None, verbose=None, epochs=0, num_loader_workers=0)

Train the model with a specific darts.utils.data.TrainingDataset instance. These datasets implement a PyTorch Dataset, and specify how the target and covariates are sliced for training. If you are not sure which training dataset to use, consider calling fit() instead, which will create a default training dataset appropriate for this model.

Training is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with the optional parameter trainer. For more information on PyTorch Lightning Trainers, see the PyTorch Lightning documentation.

This function can be called several times to do some extra training. If epochs is specified, the model will be trained for the given number of (extra) epochs.

Parameters
  • train_dataset (TrainingDataset) – A training dataset with a type matching this model (e.g. PastCovariatesTrainingDataset for PastCovariatesTorchModel).

  • val_dataset (Optional[TrainingDataset]) – A training dataset with a type matching this model (e.g. PastCovariatesTrainingDataset for PastCovariatesTorchModels), representing the validation set (to track the validation loss).

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform training. Using a custom trainer will override Darts’ default trainer.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • epochs (int) – If specified, trains the model for this number of (additional) epochs, irrespective of the n_epochs provided to the model constructor.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, both for the training and validation loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

Returns

Fitted model.

Return type

self

generate_fit_encodings(series, past_covariates=None, future_covariates=None)

Generates the covariate encodings that were used/generated for fitting the model and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you used to train/fit the model.

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with the target values used when fitting the model.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the series or sequence of series with the past-observed covariates used when fitting the model.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the series or sequence of series with the future-known covariates used when fitting the model.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

generate_fit_predict_encodings(n, series, past_covariates=None, future_covariates=None)

Generates covariate encodings for training and inference/prediction and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you intend to use for training and prediction.

Parameters
  • n (int) – The number of prediction time steps after the end of series intended to be used for prediction.

  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with target values intended to be used for training and prediction.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series intended to be used for training and prediction. The dimensions must match those of the covariates used for training.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

generate_predict_encodings(n, series, past_covariates=None, future_covariates=None)

Generates covariate encodings for the inference/prediction set and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you intend to use for prediction.

Parameters
  • n (int) – The number of prediction time steps after the end of series intended to be used for prediction.

  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with target values intended to be used for prediction.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

classmethod gridsearch(parameters, series, past_covariates=None, future_covariates=None, forecast_horizon=None, stride=1, start=None, start_format='value', last_points_only=False, show_warnings=True, val_series=None, use_fitted_values=False, metric=<function mape>, reduction=<function mean>, verbose=False, n_jobs=1, n_random_samples=None, fit_kwargs=None, predict_kwargs=None)

Find the best hyper-parameters among a given set using a grid search.

This function has 3 modes of operation: Expanding window mode, split mode and fitted value mode. The three modes of operation evaluate every possible combination of hyper-parameter values provided in the parameters dictionary by instantiating the model_class subclass of ForecastingModel with each combination, and returning the best-performing model with regard to the metric function. The metric function is expected to return an error value, thus the model resulting in the smallest metric output will be chosen.

The relationship of the training data and test data depends on the mode of operation.

Expanding window mode (activated when forecast_horizon is passed): For every hyper-parameter combination, the model is repeatedly trained and evaluated on different splits of series. This process is accomplished by using the backtest() function as a subroutine to produce historical forecasts starting from start that are compared against the ground truth values of series. Note that the model is retrained for every single prediction, so this mode is slower.

Split mode (activated when val_series is passed): For every hyper-parameter combination, the model is trained on series and evaluated on val_series.

Fitted value mode (activated when use_fitted_values is set to True): For every hyper-parameter combination, the model is trained on series and evaluated on the resulting fitted values. Not all models have fitted values, and this method raises an error if the model doesn’t have a fitted_values member. The fitted values are the result of the fit of the model on series. Comparing with the fitted values can be a quick way to assess the model, but one cannot see if the model is overfitting the series.

Derived classes must ensure that a single instance of a model will not share parameters with the other instances, e.g., saving models in the same path. Otherwise, an unexpected behavior can arise while running several models in parallel (when n_jobs != 1). If this cannot be avoided, then gridsearch should be redefined, forcing n_jobs = 1.

Currently this method only supports deterministic predictions (i.e. when models’ predictions have only 1 sample).

Parameters
  • model_class – The ForecastingModel subclass to be tuned for ‘series’.

  • parameters (dict) – A dictionary containing as keys hyperparameter names, and as values lists of values for the respective hyperparameter.

  • series (TimeSeries) – The target series used as input and target for training.

  • past_covariates (Optional[TimeSeries]) – Optionally, a past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Optional[TimeSeries]) – Optionally, a future-known covariate series. This applies only if the model supports future covariates.

  • forecast_horizon (Optional[int]) – The integer value of the forecasting horizon. Activates expanding window mode.

  • stride (int) – Only used in expanding window mode. The number of time steps between two consecutive predictions.

  • start (Union[Timestamp, float, int, None]) –

    Only used in expanding window mode. Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Only used in expanding window mode. Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • last_points_only (bool) – Only used in expanding window mode. Whether to use the whole forecasts or only the last point of each forecast to compute the error.

  • show_warnings (bool) – Only used in expanding window mode. Whether to show warnings related to the start parameter.

  • val_series (Optional[TimeSeries]) – The TimeSeries instance used for validation in split mode. If provided, this series must start right after the end of series, so that a proper comparison of the forecast can be made.

  • use_fitted_values (bool) – If True, uses the comparison with the fitted values. Raises an error if fitted_values is not an attribute of model_class.

  • metric (Callable[[TimeSeries, TimeSeries], float]) –

    A metric function that returns the error between two TimeSeries as a float value. Must either be one of Darts’ “aggregated over time” metrics (see here), or a custom metric that takes as input two TimeSeries and returns the error.

  • reduction (Callable[[ndarray], float]) – A reduction function (mapping array to float) describing how to aggregate the errors obtained on the different validation series when backtesting. By default, it computes the mean of the errors.

  • verbose – Whether to print progress.

  • n_jobs (int) – The number of jobs to run in parallel. Parallel jobs are created only when there are two or more parameters combinations to evaluate. Each job will instantiate, train, and evaluate a different instance of the model. Defaults to 1 (sequential). Setting the parameter to -1 means using all the available cores.

  • n_random_samples (Union[int, float, None]) – The number/ratio of hyperparameter combinations to select from the full parameter grid. This will perform a random search instead of using the full grid. If an integer, n_random_samples is the number of parameter combinations selected from the full grid and must be between 0 and the total number of parameter combinations. If a float, n_random_samples is the ratio of parameter combinations selected from the full grid and must be between 0 and 1. Defaults to None, for which random selection will be ignored.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Returns

A tuple containing an untrained model_class instance created from the best-performing hyper-parameters, a dictionary containing these best hyper-parameters, and the metric score obtained with these hyper-parameters.

Return type

ForecastingModel, Dict, float
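
A minimal sketch of an expanding-window grid search (series_1 as in the class examples above; the candidate values are arbitrary placeholders):

from darts.models import GlobalNaiveDrift

best_model, best_params, best_score = GlobalNaiveDrift.gridsearch(
    parameters={
        "input_chunk_length": [12, 24, 60],
        "output_chunk_length": [3],
    },
    series=series_1,
    forecast_horizon=3,  # activates expanding window mode
    start=0.8,
)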

historical_forecasts(series, past_covariates=None, future_covariates=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, overlap_end=False, last_points_only=True, verbose=False, show_warnings=True, predict_likelihood_parameters=False, enable_optimization=True, fit_kwargs=None, predict_kwargs=None)

Compute the historical forecasts that would have been obtained by this model on (potentially multiple) series.

This method repeatedly builds a training set: either expanding from the beginning of series or moving with a fixed length train_length. It trains the model on the training set, emits a forecast of length equal to forecast_horizon, and then moves the end of the training set forward by stride time steps.

By default, this method will return one (or a sequence of) single time series made up of the last point of each historical forecast. This time series will thus have a frequency of series.freq * stride. If last_points_only is set to False, it will instead return one (or a sequence of) list of the historical forecasts series.

By default, this method always re-trains the models on the entire available history, corresponding to an expanding window strategy. If retrain is set to False, the model must have been fit before. This is not supported by all models.

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series used to successively train and compute the historical forecasts.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) future-known covariate series. This applies only if the model supports future covariates.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in our training set (size of backtesting window to train on). Only effective when retrain is not False. Default is set to train_length=None where it takes all available time steps up until prediction time, otherwise the moving window strategy is used. If larger than the number of time steps available, all steps up until prediction time are used, as in default case. Needs to be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: If the model uses a shifted output (output_chunk_shift > 0), then the first predicted point is also shifted by output_chunk_shift points into the future. Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • forecast_horizon (int) – The forecast horizon for the predictions.

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: the model is retrained at each step (True) or never retrained (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever the callable returns True. The callable must have the following positional arguments (see the sketch after this method's return description):

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecasts, None will be passed to the corresponding retrain function argument. Note: some models do require being retrained every time and do not support anything other than retrain=True.

  • overlap_end (bool) – Whether the returned forecasts can go beyond the series’ end or not.

  • last_points_only (bool) – Whether to retain only the last point of each historical forecast. If set to True, the method returns a single TimeSeries containing the successive point forecasts. Otherwise, returns a list of historical TimeSeries forecasts.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to historical forecasts optimization, or parameters start and train_length.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n <= output_chunk_length. Default: False

  • enable_optimization (bool) – Whether to use the optimized version of historical_forecasts when supported and available.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Return type

Union[TimeSeries, List[TimeSeries], List[List[TimeSeries]]]

Returns

  • TimeSeries – A single historical forecast for a single series and last_points_only=True: it contains only the predictions at step forecast_horizon from all historical forecasts.

  • List[TimeSeries] – A list of historical forecasts for:

    • a sequence (list) of series and last_points_only=True: for each series, it contains only the predictions at step forecast_horizon from all historical forecasts.

    • a single series and last_points_only=False: for each historical forecast, it contains the entire horizon forecast_horizon.

  • List[List[TimeSeries]] – A list of lists of historical forecasts for a sequence of series and last_points_only=False. For each series, and historical forecast, it contains the entire horizon forecast_horizon. The outer list is over the series provided in the input sequence, and the inner lists contain the historical forecasts for each series.
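As referenced under the retrain parameter above, a minimal sketch of a retrain callable; the 12-iteration retraining interval is an illustrative assumption:

def retrain_every_12(counter, pred_time, train_series, past_covariates, future_covariates):
    # counter: current retrain iteration
    # pred_time: forecast time (pd.Timestamp or int), i.e. the end of the training series
    # train_series / *_covariates: series available up to pred_time (covariates may be None)
    return counter % 12 == 0  # retrain on the first iteration, then every 12th

# usage, e.g.: model.historical_forecasts(series, retrain=retrain_every_12, ...)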

property input_chunk_length: int
Return type

int

property likelihood: Optional[Likelihood]
Return type

Optional[Likelihood]

static load(path, **kwargs)

Loads a model from a given file path.

Example for loading a general save from RNNModel:

from darts.models import RNNModel

model_loaded = RNNModel.load(path)

Example for loading an RNNModel to CPU that was saved on GPU:

from darts.models import RNNModel

model_loaded = RNNModel.load(path, map_location="cpu")
model_loaded.to_cpu()
Parameters
  • path (str) – Path from which to load the model. If no path was specified when saving the model, the automatically generated path ending with “.pt” has to be provided.

  • **kwargs

    Additional kwargs for PyTorch Lightning’s LightningModule.load_from_checkpoint() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

Return type

TorchForecastingModel

static load_from_checkpoint(model_name, work_dir=None, file_name=None, best=True, **kwargs)

Load the model from automatically saved checkpoints under ‘{work_dir}/darts_logs/{model_name}/checkpoints/’. This method is used for models that were created with save_checkpoints=True.

If you manually saved your model, consider using load().

Example for loading an RNNModel from checkpoint (model_name is the model_name used at model creation):

from darts.models import RNNModel

model_loaded = RNNModel.load_from_checkpoint(model_name, best=True)

If file_name is given, returns the model saved under ‘{work_dir}/darts_logs/{model_name}/checkpoints/{file_name}’.

If file_name is not given, will try to restore the best checkpoint (if best is True) or the most recent checkpoint (if best is False) from ‘{work_dir}/darts_logs/{model_name}/checkpoints/’.

Example for loading an RNNModel checkpoint to CPU that was saved on GPU:

from darts.models import RNNModel

model_loaded = RNNModel.load_from_checkpoint(model_name, best=True, map_location="cpu")
model_loaded.to_cpu()
Parameters
  • model_name (str) – The name of the model, used to retrieve the checkpoints folder’s name.

  • work_dir (Optional[str]) – Working directory (containing the checkpoints folder). Defaults to current working directory.

  • file_name (Optional[str]) – The name of the checkpoint file. If not specified, use the most recent one.

  • best (bool) – If set, will retrieve the best model (according to validation loss) instead of the most recent one. Ignored when file_name is given.

  • **kwargs

    Additional kwargs for PyTorch Lightning’s LightningModule.load_from_checkpoint() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

Returns

The corresponding trained TorchForecastingModel.

Return type

TorchForecastingModel

load_weights(path, load_encoders=True, skip_checks=False, **kwargs)

Loads the weights from a manually saved model (saved with save()).

Note: This method needs to be able to access the darts model checkpoint (.pt) in order to load the encoders and perform sanity checks on the model parameters.

Parameters
  • path (str) – Path from which to load the model’s weights. If no path was specified when saving the model, the automatically generated path ending with “.pt” has to be provided.

  • load_encoders (bool) – If set, will load the encoders from the model to enable direct call of fit() or predict(). Default: True.

  • skip_checks (bool) – If set, will disable the loading of the encoders and the sanity checks on model parameters (not recommended). Cannot be used with load_encoders=True. Default: False.

  • **kwargs

    Additional kwargs for PyTorch’s load() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.
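A minimal usage sketch following the RNNModel examples above; the file name "rnn_weights.pt" and model parameters are illustrative assumptions, and the newly created model must use the same architecture parameters as the saved one:

from darts.datasets import AirPassengersDataset
from darts.models import RNNModel

series = AirPassengersDataset().load()

# fit and manually save a model ...
model = RNNModel(input_chunk_length=12, n_epochs=1)
model.fit(series)
model.save("rnn_weights.pt")

# ... then load only its weights into a freshly created model
model_new = RNNModel(input_chunk_length=12, n_epochs=1)
model_new.load_weights("rnn_weights.pt")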

load_weights_from_checkpoint(model_name=None, work_dir=None, file_name=None, best=True, strict=True, load_encoders=True, skip_checks=False, **kwargs)

Load only the weights from automatically saved checkpoints under ‘{work_dir}/darts_logs/{model_name}/checkpoints/’. This method is used for models that were created with save_checkpoints=True and that need to be re-trained or fine-tuned with a different optimizer or learning rate scheduler. However, it can also be used to load weights for inference.

To resume an interrupted training, please consider using load_from_checkpoint(), which also reloads the trainer, optimizer and learning rate scheduler states.

For manually saved models, consider using load() or load_weights() instead.

Note: This method needs to be able to access the darts model checkpoint (.pt) in order to load the encoders and perform sanity checks on the model parameters.

Parameters
  • model_name (Optional[str]) – The name of the model, used to retrieve the checkpoints folder’s name. Default: self.model_name.

  • work_dir (Optional[str]) – Working directory (containing the checkpoints folder). Defaults to current working directory.

  • file_name (Optional[str]) – The name of the checkpoint file. If not specified, use the most recent one.

  • best (bool) – If set, will retrieve the best model (according to validation loss) instead of the most recent one. Ignored when file_name is given. Default: True.

  • strict (bool) –

    If set, strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict(). Default: True. For more information, read the official documentation.

  • load_encoders (bool) – If set, will load the encoders from the model to enable direct call of fit() or predict(). Default: True.

  • skip_checks (bool) – If set, will disable the loading of the encoders and the sanity checks on model parameters (not recommended). Cannot be used with load_encoders=True. Default: False.

  • **kwargs

    Additional kwargs for PyTorch’s load() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

lr_find(series, past_covariates=None, future_covariates=None, val_series=None, val_past_covariates=None, val_future_covariates=None, trainer=None, verbose=None, epochs=0, max_samples_per_ts=None, num_loader_workers=0, min_lr=1e-08, max_lr=1, num_training=100, mode='exponential', early_stop_threshold=4.0)

A wrapper around PyTorch Lightning’s Tuner.lr_find(). Performs a range test of good initial learning rates, to reduce the amount of guesswork in picking a good starting learning rate. For more information on PyTorch Lightning’s Tuner check out this link. It is recommended to increase the number of epochs if the tuner did not give satisfactory results. Consider creating a new model object with the suggested learning rate for example using model creation parameters optimizer_cls, optimizer_kwargs, lr_scheduler_cls, and lr_scheduler_kwargs.

Example using an NBEATSModel:

import torch
from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load()
train, val = series[:-18], series[-18:]
model = NBEATSModel(input_chunk_length=12, output_chunk_length=6, random_state=42)
# run the learning rate tuner
results = model.lr_find(series=train, val_series=val)
# plot the results
results.plot(suggest=True, show=True)
# create a new model with the suggested learning rate
model = NBEATSModel(
    input_chunk_length=12,
    output_chunk_length=6,
    random_state=42,
    optimizer_cls=torch.optim.Adam,
    optimizer_kwargs={"lr": results.suggestion()}
)
Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – A series or sequence of series serving as target (i.e. what the model will be trained to forecast)

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying past-observed covariates

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying future-known covariates

  • val_series (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one or a sequence of validation target series, which will be used to compute the validation loss throughout training and keep track of the best performing models.

  • val_past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past covariates corresponding to the validation series (must match covariates)

  • val_future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future covariates corresponding to the validation series (must match covariates)

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform training. Using a custom trainer will override Darts’ default trainer.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • epochs (int) – If specified, will train the model for this number of (additional) epochs, irrespective of the n_epochs provided to the model constructor.

  • max_samples_per_ts (Optional[int]) – Optionally, a maximum number of samples to use per time series. Models are trained in a supervised fashion by constructing slices of (input, output) examples. On long time series, this can result in an unnecessarily large number of training samples. This parameter upper-bounds the number of training samples per time series (taking only the most recent samples in each series). Leaving to None does not apply any upper bound.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, both for the training and validation loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • min_lr (float) – minimum learning rate to investigate

  • max_lr (float) – maximum learning rate to investigate

  • num_training (int) – number of learning rates to test

  • mode (str) – Search strategy to update learning rate after each batch: ‘exponential’: Increases the learning rate exponentially. ‘linear’: Increases the learning rate linearly.

  • early_stop_threshold (float) – Threshold for stopping the search. If the loss at any point is larger than early_stop_threshold*best_loss then the search is stopped. To disable, set to None

Returns

_LRFinder object of Lightning containing the results of the LR sweep.

Return type

lr_finder

property min_train_samples: int

The minimum number of samples for training the model.

Return type

int

property model_created: bool
Return type

bool

property model_params: dict
Return type

dict

property output_chunk_length: int

Number of time steps predicted at once by the model, not defined for statistical models.

Return type

int

property output_chunk_shift: int

Number of time steps that the output/prediction starts after the end of the input.

Return type

int

predict(n, series=None, past_covariates=None, future_covariates=None, trainer=None, batch_size=None, verbose=None, n_jobs=1, roll_size=None, num_samples=1, num_loader_workers=0, mc_dropout=False, predict_likelihood_parameters=False, show_warnings=True)

Predict the n time steps following the end of the training series, or of the specified series.

Prediction is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with optional parameter trainer. For more information on PyTorch Lightning Trainers check out this link .

Below, all possible parameters are documented, but not all models support all parameters. For instance, all PastCovariatesTorchModel instances support only past_covariates and not future_covariates. Darts will complain if you try calling predict() on a model with the wrong covariates argument.

Darts will also complain if the provided covariates do not have a sufficient time span. In general, not all models require the same covariates’ time spans:

  • Models relying on past covariates require the last input_chunk_length of the past_covariates
    points to be known at prediction time. For horizon values n > output_chunk_length, these models
    require at least the next n - output_chunk_length future values to be known as well.
  • Models relying on future covariates require the next n values to be known.
    In addition (for DualCovariatesTorchModel and MixedCovariatesTorchModel), they also
    require the “historic” values of these future covariates (over the past input_chunk_length).

When handling covariates, Darts will try to use the time axes of the target and the covariates to come up with the right time slices. So the covariates can be longer than needed; as long as the time axes are correct Darts will handle them correctly. It will also complain if their time span is not sufficient.
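For instance, a minimal sketch using the GlobalNaiveSeasonal model documented on this page (no covariates are required); the dataset and horizons are illustrative assumptions:

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveSeasonal

series = AirPassengersDataset().load()
model = GlobalNaiveSeasonal(input_chunk_length=12, output_chunk_length=3)
model.fit(series)

# n <= output_chunk_length: forecast from a single model output
pred_short = model.predict(n=3)

# n > output_chunk_length: autoregressive forecast (the model consumes its own outputs)
pred_long = model.predict(n=12)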

Parameters
  • n (int) – The number of time steps after the end of the training time series for which to produce predictions

  • series (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series, representing the history of the target series whose future is to be predicted. If specified, the method returns the forecasts of these series. Otherwise, the method returns the forecast of the (single) training series.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series needed as inputs for the model. They must match the covariates used for training in terms of dimension.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series needed as inputs for the model. They must match the covariates used for training in terms of dimension.

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform prediction. Using a custom trainer will override Darts’ default trainer.

  • batch_size (Optional[int]) – Size of batches during prediction. Defaults to the models’ training batch_size value.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • n_jobs (int) – The number of jobs to run in parallel. -1 means using all processors. Defaults to 1.

  • roll_size (Optional[int]) – For self-consuming predictions, i.e. n > output_chunk_length, determines how many outputs of the model are fed back into it at every iteration of feeding the predicted target (and optionally future covariates) back into the model. If this parameter is not provided, it will be set to output_chunk_length by default.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Should be left set to 1 for deterministic models.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, for the inference/prediction dataset loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • mc_dropout (bool) – Optionally, enable Monte Carlo dropout for predictions using neural network based models. This allows Bayesian approximation by specifying an implicit prior over learned models.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n <= output_chunk_length. Default: False.

  • show_warnings (bool) – Optionally, control whether warnings are shown. Not effective for all models.

Returns

One or several time series containing the forecasts of series, or the forecast of the training series if series is not specified and the model has been trained on a single series.

Return type

Union[TimeSeries, Sequence[TimeSeries]]

predict_from_dataset(n, input_series_dataset, trainer=None, batch_size=None, verbose=None, n_jobs=1, roll_size=None, num_samples=1, num_loader_workers=0, mc_dropout=False, predict_likelihood_parameters=False)

This method allows for predicting with a specific darts.utils.data.InferenceDataset instance. These datasets implement a PyTorch Dataset, and specify how the target and covariates are sliced for inference. In most cases, you’ll rather want to call predict() instead, which will create an appropriate InferenceDataset for you.

Prediction is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with optional parameter trainer. For more information on PyTorch Lightning Trainers check out this link .

Parameters
  • n (int) – The number of time steps after the end of the training time series for which to produce predictions

  • input_series_dataset (InferenceDataset) – The inference dataset containing the target series (and optional covariates) whose future is to be predicted.

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform prediction. Using a custom trainer will override Darts’ default trainer.

  • batch_size (Optional[int]) – Size of batches during prediction. Defaults to the model's training batch_size value.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • n_jobs (int) – The number of jobs to run in parallel. -1 means using all processors. Defaults to 1.

  • roll_size (Optional[int]) – For self-consuming predictions, i.e. n > output_chunk_length, determines how many outputs of the model are fed back into it at every iteration of feeding the predicted target (and optionally future covariates) back into the model. If this parameter is not provided, it will be set to output_chunk_length by default.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Should be left set to 1 for deterministic models.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, for the inference/prediction dataset loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • mc_dropout (bool) – Optionally, enable Monte Carlo dropout for predictions using neural network based models. This allows Bayesian approximation by specifying an implicit prior over learned models.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n <= output_chunk_length. Default: False

Returns

Returns one or more forecasts for time series.

Return type

Sequence[TimeSeries]

reset_model()

Resets the model object and removes all stored data - model, checkpoints, loggers and training history.

residuals(series, past_covariates=None, future_covariates=None, historical_forecasts=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, last_points_only=True, metric=<function err>, verbose=False, show_warnings=True, metric_kwargs=None, fit_kwargs=None, predict_kwargs=None, values_only=False)

Compute the residuals produced by this model on a (or sequence of) TimeSeries.

This function computes the difference (or one of Darts’ “per time step” metrics) between the actual observations from series and the fitted values obtained by training the model on series (or using a pre-trained model with retrain=False). Not all models support fitted values, so we use historical forecasts as an approximation for them.

In sequence this method performs:

  • compute historical forecasts for each series or use pre-computed historical_forecasts (see historical_forecasts() for more details). How the historical forecasts are generated can be configured with parameters num_samples, train_length, start, start_format, forecast_horizon, stride, retrain, last_points_only, fit_kwargs, and predict_kwargs.

  • compute a backtest using a “per time step” metric between the historical forecasts and series per component/column and time step (see backtest() for more details). By default, uses the residuals err() as a metric.

  • create and return TimeSeries (or simply a np.ndarray with values_only=True) with the time index from historical forecasts, and values from the metrics per component and time step.

This method works for single or multiple univariate or multivariate series. It uses the median prediction (when dealing with stochastic forecasts).
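A minimal sketch using the default err metric; the dataset and start point are illustrative assumptions:

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveSeasonal

series = AirPassengersDataset().load()
model = GlobalNaiveSeasonal(input_chunk_length=12, output_chunk_length=1)

# residuals of one-step-ahead historical forecasts over the last 20% of the series
res = model.residuals(series, start=0.8, forecast_horizon=1)
print(res.values()[:5])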

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series for which the residuals will be computed.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – One or several past-observed covariate time series.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – One or several future-known covariate time series.

  • forecast_horizon (int) – The forecasting horizon used to predict each fitted value.

  • historical_forecasts (Union[TimeSeries, Sequence[TimeSeries], Sequence[Sequence[TimeSeries]], None]) – Optionally, the (or a sequence of / a sequence of sequences of) historical forecasts time series to be evaluated. Corresponds to the output of historical_forecasts(). The same series and last_points_only values must be passed that were used to generate the historical forecasts. If provided, will skip historical forecasting and ignore all parameters except series, last_points_only, metric, and reduction.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in the training set (size of the backtesting window to train on). Only effective when retrain is not False. The default, train_length=None, uses all available time steps up until prediction time (expanding window); otherwise, a moving window of this length is used. If larger than the number of available time steps, all steps up until prediction time are used, as in the default case. Must be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: the model is retrained at each step (True) or never retrained (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever the callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecasts, None will be passed to the corresponding retrain function argument. Note: some models do require being retrained every time and do not support anything other than retrain=True.

  • last_points_only (bool) – Whether to use the whole historical forecasts or only the last point of each forecast to compute the error.

  • metric (Callable[…, Union[float, List[float], ndarray, List[ndarray]]]) –

    Either one of Darts’ “per time step” metrics (see here), or a custom metric that has an identical signature as Darts’ “per time step” metrics, uses the decorators multi_ts_support() and multivariate_support(), and returns one value per time step.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to parameters start, and train_length.

  • metric_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to metric(), such as ‘n_jobs’ for parallelization, ‘m’ for scaled metrics, etc. Will pass arguments only if they are present in the corresponding metric signature. Ignores reduction arguments “series_reduction”, “component_reduction”, “time_reduction”, and parameter ‘insample’ for scaled metrics (e.g. mase, rmsse, …), as they are handled internally.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

  • values_only (bool) – Whether to return the residuals as np.ndarray. If False, returns residuals as TimeSeries.

Return type

Union[TimeSeries, List[TimeSeries], List[List[TimeSeries]]]

Returns

  • TimeSeries – Residual TimeSeries for a single series and historical_forecasts generated with last_points_only=True.

  • List[TimeSeries] – A list of residual TimeSeries for a sequence (list) of series with last_points_only=True. The residual list has length len(series).

  • List[List[TimeSeries]] – A list of lists of residual TimeSeries for a sequence of series with last_points_only=False. The outer residual list has length len(series). The inner lists consist of the residuals from all possible series-specific historical forecasts.

save(path=None)

Saves the model under a given path.

Creates two files under path (model object) and path.ckpt (checkpoint).

Example for saving and loading an RNNModel:

from darts.models import RNNModel

model = RNNModel(input_chunk_length=4)

model.save("my_model.pt")
model_loaded = RNNModel.load("my_model.pt")
Parameters

path (Optional[str]) – Path under which to save the model at its current state. Please avoid paths starting with “last-” or “best-” to avoid collision with PyTorch Lightning checkpoints. If no path is specified, the model is automatically saved under "{ModelClass}_{YYYY-mm-dd_HH_MM_SS}.pt". E.g., "RNNModel_2020-01-01_12_00_00.pt".

Return type

None

property supports_future_covariates: bool

Whether model supports future covariates

Return type

bool

supports_likelihood_parameter_prediction()

Whether model instance supports direct prediction of likelihood parameters

Return type

bool

property supports_multivariate: bool

Whether the model considers more than one variate in the time series.

Return type

bool

property supports_optimized_historical_forecasts: bool

Whether the model supports optimized historical forecasts

Return type

bool

property supports_past_covariates: bool

Whether model supports past covariates

Return type

bool

supports_probabilistic_prediction()

Checks if the forecasting model with this configuration supports probabilistic predictions.

By default, returns False. Needs to be overwritten by models that do support probabilistic predictions.

Return type

bool

property supports_static_covariates: bool

Whether model supports static covariates

Return type

bool

property supports_transferrable_series_prediction: bool

Whether the model supports prediction for any input series.

Return type

bool

to_cpu()

Updates the PyTorch Lightning Trainer parameters to move the model to CPU the next time fit() or predict() is called.

property uses_future_covariates: bool

Whether the model uses future covariates, once fitted.

Return type

bool

property uses_past_covariates: bool

Whether the model uses past covariates, once fitted.

Return type

bool

property uses_static_covariates: bool

Whether the model uses static covariates, once fitted.

Return type

bool

class darts.models.forecasting.global_baseline_models.GlobalNaiveSeasonal(input_chunk_length, output_chunk_length, output_chunk_shift=0, **kwargs)[source]

Bases: _NoCovariatesMixin, _GlobalNaiveModel

Global Naive Seasonal Model.

The model generates forecasts for each series as described below:

  • take the value from each target component at the input_chunk_length-th point before the end of the target series.

  • the forecast is the component value repeated output_chunk_length times.

Depending on the horizon n used when calling model.predict(), the forecasts are either:

  • a constant value if n <= output_chunk_length, or

  • a moving (seasonal) value if n > output_chunk_length, as a result of the autoregressive prediction.

This model is equivalent to:

  • NaiveSeasonal, when input_chunk_length is equal to the length of the input target series and output_chunk_length=1.

Note

  • Model checkpointing with save_checkpoints=True, and checkpoint loading with load_from_checkpoint() and load_weights_from_checkpoint() are not supported for global naive models.

Parameters
  • input_chunk_length (int) – The length of the input sequence fed to the model.

  • output_chunk_length (int) – The length of the emitted forecast and output sequence fed to the model.

  • output_chunk_shift (int) – Optionally, the number of steps to shift the start of the output chunk into the future (relative to the input chunk end). This will create a gap between the input and output. If the model supports future_covariates, the future values are extracted from the shifted output chunk. Predictions will start output_chunk_shift steps after the end of the target series. If output_chunk_shift is set, the model cannot generate autoregressive predictions (n > output_chunk_length).

  • **kwargs – Optional arguments to initialize the pytorch_lightning.Module, pytorch_lightning.Trainer, and Darts’ TorchForecastingModel. Since naive models are not trained, the following parameters will have no effect: loss_fn, likelihood, optimizer_cls, optimizer_kwargs, lr_scheduler_cls, lr_scheduler_kwargs, n_epochs, save_checkpoints, and some of pl_trainer_kwargs.

Examples

>>> from darts.datasets import IceCreamHeaterDataset
>>> from darts.models import GlobalNaiveSeasonal
>>> # create list of multivariate series
>>> series_1 = IceCreamHeaterDataset().load()
>>> series_2 = series_1 + 100.
>>> series = [series_1, series_2]
>>> # predict 3 months, use value from 12 months ago
>>> horizon, icl = 3, 12
>>> # repeated seasonal value (with `output_chunk_length = horizon`)
>>> model = GlobalNaiveSeasonal(input_chunk_length=icl, output_chunk_length=horizon)
>>> # predict after end of each multivariate series
>>> pred = model.fit(series).predict(n=horizon, series=series)
>>> [p.values() for p in pred]
[array([[ 21., 100.],
       [ 21., 100.],
       [ 21., 100.]]), array([[121., 200.],
       [121., 200.],
       [121., 200.]])]
>>> # moving seasonal value (with `output_chunk_length < horizon`)
>>> model = GlobalNaiveSeasonal(input_chunk_length=icl, output_chunk_length=1)
>>> pred = model.fit(series).predict(n=horizon, series=series)
>>> [p.values() for p in pred]
[array([[ 21., 100.],
       [ 21.,  68.],
       [ 24.,  51.]]), array([[121., 200.],
       [121., 168.],
       [124., 151.]])]

Attributes

considers_static_covariates

Whether the model considers static covariates, if there are any.

extreme_lags

An 8-tuple containing in order: (min target lag, max target lag, min past covariate lag, max past covariate lag, min future covariate lag, max future covariate lag, output shift, max target lag train (only for RNNModel)).

min_train_samples

The minimum number of samples for training the model.

output_chunk_length

Number of time steps predicted at once by the model, not defined for statistical models.

output_chunk_shift

Number of time steps that the output/prediction starts after the end of the input.

supports_multivariate

Whether the model considers more than one variate in the time series.

supports_optimized_historical_forecasts

Whether the model supports optimized historical forecasts

supports_transferrable_series_prediction

Whether the model supports prediction for any input series.

uses_future_covariates

Whether the model uses future covariates, once fitted.

uses_past_covariates

Whether the model uses past covariates, once fitted.

uses_static_covariates

Whether the model uses static covariates, once fitted.

epochs_trained

input_chunk_length

likelihood

model_created

model_params

supports_future_covariates

supports_past_covariates

supports_static_covariates

Methods

backtest(series[, past_covariates, ...])

Compute error values that the model would have produced when used on (potentially multiple) series.

fit(series[, past_covariates, future_covariates])

Fit/train the model on a (or potentially multiple) series.

fit_from_dataset(train_dataset[, ...])

Train the model with a specific darts.utils.data.TrainingDataset instance.

generate_fit_encodings(series[, ...])

Generates the covariate encodings that were used/generated for fitting the model and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

generate_fit_predict_encodings(n, series[, ...])

Generates covariate encodings for training and inference/prediction and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

generate_predict_encodings(n, series[, ...])

Generates covariate encodings for the inference/prediction set and returns a tuple of past, and future covariates series with the original and encoded covariates stacked together.

gridsearch(parameters, series[, ...])

Find the best hyper-parameters among a given set using a grid search.

historical_forecasts(series[, ...])

Compute the historical forecasts that would have been obtained by this model on (potentially multiple) series.

load(path, **kwargs)

Loads a model from a given file path.

load_from_checkpoint(model_name[, work_dir, ...])

Load the model from automatically saved checkpoints under '{work_dir}/darts_logs/{model_name}/checkpoints/'.

load_weights(path[, load_encoders, skip_checks])

Loads the weights from a manually saved model (saved with save()).

load_weights_from_checkpoint([model_name, ...])

Load only the weights from automatically saved checkpoints under '{work_dir}/darts_logs/{model_name}/checkpoints/'.

lr_find(series[, past_covariates, ...])

A wrapper around PyTorch Lightning's Tuner.lr_find().

predict(n[, series, past_covariates, ...])

Predict the n time step following the end of the training series, or of the specified series.

predict_from_dataset(n, input_series_dataset)

This method allows for predicting with a specific darts.utils.data.InferenceDataset instance.

reset_model()

Resets the model object and removes all stored data - model, checkpoints, loggers and training history.

residuals(series[, past_covariates, ...])

Compute the residuals produced by this model on a (or sequence of) TimeSeries.

save([path])

Saves the model under a given path.

supports_likelihood_parameter_prediction()

Whether model instance supports direct prediction of likelihood parameters

supports_probabilistic_prediction()

Checks if the forecasting model with this configuration supports probabilistic predictions.

to_cpu()

Updates the PyTorch Lightning Trainer parameters to move the model to CPU the next time fit() or predict() is called.

backtest(series, past_covariates=None, future_covariates=None, historical_forecasts=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, overlap_end=False, last_points_only=False, metric=<function mape>, reduction=<function mean>, verbose=False, show_warnings=True, metric_kwargs=None, fit_kwargs=None, predict_kwargs=None)

Compute error values that the model would have produced when used on (potentially multiple) series.

If historical_forecasts are provided, the metric (given by the metric function) is evaluated directly on the forecast and the actual values. The same series must be passed that was used to generate the historical forecasts. Otherwise, it repeatedly builds a training set: either expanding from the beginning of series or moving with a fixed length train_length. It trains the current model on the training set, emits a forecast of length equal to forecast_horizon, and then moves the end of the training set forward by stride time steps. The metric is then evaluated on the forecast and the actual values. Finally, the method returns a reduction (the mean by default) of all these metric scores.

By default, this method uses each historical forecast (whole) to compute error scores. If last_points_only is set to True, it will use only the last point of each historical forecast. In this case, no reduction is used.

By default, this method always re-trains the models on the entire available history, corresponding to an expanding window strategy. If retrain is set to False (useful for models for which training might be time-consuming, such as deep learning models), the trained model will be used directly to emit the forecasts.
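A minimal sketch; the dataset, start point, and mape metric are illustrative assumptions:

from darts.datasets import AirPassengersDataset
from darts.metrics import mape
from darts.models import GlobalNaiveSeasonal

series = AirPassengersDataset().load()
model = GlobalNaiveSeasonal(input_chunk_length=12, output_chunk_length=3)

# mean MAPE over all 3-step historical forecasts computed on the last 20% of the series
score = model.backtest(series, start=0.8, forecast_horizon=3, metric=mape)
print(score)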

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series used to successively train and evaluate the historical forecasts.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) future-known covariate series. This applies only if the model supports future covariates.

  • historical_forecasts (Union[TimeSeries, Sequence[TimeSeries], Sequence[Sequence[TimeSeries]], None]) – Optionally, the (or a sequence of / a sequence of sequences of) historical forecasts time series to be evaluated. Corresponds to the output of historical_forecasts(). The same series and last_points_only values must be passed that were used to generate the historical forecasts. If provided, will skip historical forecasting and ignore all parameters except series, last_points_only, metric, and reduction.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in the training set (size of the backtesting window to train on). Only effective when retrain is not False. The default, train_length=None, uses all available time steps up until prediction time (expanding window); otherwise, a moving window of this length is used. If larger than the number of available time steps, all steps up until prediction time are used, as in the default case. Must be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • forecast_horizon (int) – The forecast horizon for the point predictions.

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: the model is retrained at each step (True) or never retrained (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever the callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecasts, None will be passed to the corresponding retrain function argument. Note: some models do require being retrained every time and do not support anything other than retrain=True.

  • overlap_end (bool) – Whether the returned forecasts can go beyond the series’ end or not.

  • last_points_only (bool) – Whether to use the whole historical forecasts or only the last point of each forecast to compute the error.

  • metric (Union[Callable[…, Union[float, List[float], ndarray, List[ndarray]]], List[Callable[…, Union[float, List[float], ndarray, List[ndarray]]]]]) –

    A metric function or a list of metric functions. Each metric must either be a Darts metric (see here), or a custom metric that has an identical signature as Darts’ metrics, uses the decorators multi_ts_support() and multivariate_support(), and returns the metric score.

  • reduction (Optional[Callable[…, float]]) – A function used to combine the individual error scores obtained when last_points_only is set to False. When providing several metric functions, the function will receive the argument axis = 1 to obtain a single value for each metric function. If explicitly set to None, the method will return a list of the individual error scores instead. Set to np.mean by default.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to parameters start, and train_length.

  • metric_kwargs (Union[Dict[str, Any], List[Dict[str, Any]], None]) – Additional arguments passed to metric(), such as ‘n_jobs’ for parallelization, ‘component_reduction’ for reducing the component wise metrics, seasonality ‘m’ for scaled metrics, etc. Will pass arguments to each metric separately and only if they are present in the corresponding metric signature. Parameter ‘insample’ for scaled metrics (e.g. mase, rmsse, …) is ignored, as it is handled internally.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Return type

Union[float, ndarray, List[float], List[ndarray]]

Returns

  • float – A single backtest score for single uni/multivariate series, a single metric function and:

    • historical_forecasts generated with last_points_only=True

    • historical_forecasts generated with last_points_only=False and using a backtest reduction

  • np.ndarray – A numpy array of backtest scores. For single series and one of:

    • a single metric function, historical_forecasts generated with last_points_only=False and backtest reduction=None. The output has shape (n forecasts,).

    • multiple metric functions and historical_forecasts generated with last_points_only=False. The output has shape (n metrics,) when using a backtest reduction, and (n metrics, n forecasts) when reduction=None

    • multiple uni/multivariate series including series_reduction and at least one of component_reduction=None or time_reduction=None for “per time step metrics”

  • List[float] – Same as for type float but for a sequence of series. The returned metric list has length len(series) with the float metric for each input series.

  • List[np.ndarray] – Same as for type np.ndarray but for a sequence of series. The returned metric list has length len(series) with the np.ndarray metrics for each input series.

property considers_static_covariates: bool

Whether the model considers static covariates, if there are any.

Return type

bool

property epochs_trained: int
Return type

int

property extreme_lags: Tuple[Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], int, Optional[int]]

An 8-tuple containing in order: (min target lag, max target lag, min past covariate lag, max past covariate lag, min future covariate lag, max future covariate lag, output shift, max target lag train (only for RNNModel)). If 0 is the index of the first prediction, then all lags are relative to this index.

See examples below.

If the model wasn’t fitted with:
  • target (concerning RegressionModels only): then the first element should be None.

  • past covariates: then the third and fourth elements should be None.

  • future covariates: then the fifth and sixth elements should be None.

Should be overridden by models that use past or future covariates, and/or for models whose minimum and maximum target lags may differ from -1 and 0.

Notes

The maximum target lag (second value) cannot be None and is always larger than or equal to 0.

Examples

>>> model = LinearRegressionModel(lags=3, output_chunk_length=2)
>>> model.fit(train_series)
>>> model.extreme_lags
(-3, 1, None, None, None, None, 0, None)
>>> model = LinearRegressionModel(lags=3, output_chunk_length=2, output_chunk_shift=2)
>>> model.fit(train_series)
>>> model.extreme_lags
(-3, 1, None, None, None, None, 2, None)
>>> model = LinearRegressionModel(lags=[-3, -5], lags_past_covariates = 4, output_chunk_length=7)
>>> model.fit(train_series, past_covariates=past_covariates)
>>> model.extreme_lags
(-5, 6, -4, -1, None, None, 0, None)
>>> model = LinearRegressionModel(lags=[3, 5], lags_future_covariates = [4, 6], output_chunk_length=7)
>>> model.fit(train_series, future_covariates=future_covariates)
>>> model.extreme_lags
(-5, 6, None, None, 4, 6, 0, None)
>>> model = NBEATSModel(input_chunk_length=10, output_chunk_length=7)
>>> model.fit(train_series)
>>> model.extreme_lags
(-10, 6, None, None, None, None, 0, None)
>>> model = NBEATSModel(input_chunk_length=10, output_chunk_length=7, lags_future_covariates=[4, 6])
>>> model.fit(train_series, future_covariates)
>>> model.extreme_lags
(-10, 6, None, None, 4, 6, 0, None)
Return type

Tuple[Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], Optional[int], int, Optional[int]]

fit(series, past_covariates=None, future_covariates=None, *args, **kwargs)

Fit/train the model on a (or potentially multiple) series. This method is only implemented for naive baseline models to provide a unified fit/predict API with other forecasting models.

The model is not actually trained on the input; fit() is only used to set up the model based on the input series. It also stores the training series if only a single TimeSeries was passed. This allows calling predict() later without passing the single series again.
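A minimal sketch; the dataset and chunk lengths are illustrative assumptions:

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveSeasonal

series = AirPassengersDataset().load()

# fit() only configures the model and stores the series; no training loop is run
model = GlobalNaiveSeasonal(input_chunk_length=12, output_chunk_length=3)
model.fit(series)

# since a single series was passed to fit(), predict() can be called without `series`
pred = model.predict(n=3)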

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – A series or sequence of series serving as target (i.e. what the model will be trained to forecast)

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying past-observed covariates

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying future-known covariates

  • **kwargs – Optionally, some keyword arguments.

Returns

Fitted model.

Return type

self

fit_from_dataset(train_dataset, val_dataset=None, trainer=None, verbose=None, epochs=0, num_loader_workers=0)

Train the model with a specific darts.utils.data.TrainingDataset instance. These datasets implement a PyTorch Dataset, and specify how the target and covariates are sliced for training. If you are not sure which training dataset to use, consider calling fit() instead, which will create a default training dataset appropriate for this model.

Training is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with optional parameter trainer. For more information on PyTorch Lightning Trainers check out this link.

This function can be called several times to do some extra training. If epochs is specified, the model will be trained for that number of (additional) epochs.

Parameters
  • train_dataset (TrainingDataset) – A training dataset with a type matching this model (e.g. PastCovariatesTrainingDataset for PastCovariatesTorchModel).

  • val_dataset (Optional[TrainingDataset]) – A training dataset with a type matching this model (e.g. PastCovariatesTrainingDataset for PastCovariatesTorchModel), representing the validation set (to track the validation loss).

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform prediction. Using a custom trainer will override Darts’ default trainer.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • epochs (int) – If specified, will train the model for this number of (additional) epochs, irrespective of the n_epochs provided to the model constructor.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, both for the training and validation loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

Returns

Fitted model.

Return type

self

generate_fit_encodings(series, past_covariates=None, future_covariates=None)

Generates the covariate encodings that were used/generated for fitting the model and returns a tuple of past and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you used to train/fit the model.

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with the target values used when fitting the model.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the series or sequence of series with the past-observed covariates used when fitting the model.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the series or sequence of series with the future-known covariates used when fitting the model.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

generate_fit_predict_encodings(n, series, past_covariates=None, future_covariates=None)

Generates covariate encodings for training and inference/prediction and returns a tuple of past and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you intend to use for training and prediction.

Parameters
  • n (int) – The number of prediction time steps after the end of series intended to be used for prediction.

  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with target values intended to be used for training and prediction.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series intended to be used for training and prediction. The dimensions must match those of the covariates used for training.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]

generate_predict_encodings(n, series, past_covariates=None, future_covariates=None)

Generates covariate encodings for the inference/prediction set and returns a tuple of past and future covariates series with the original and encoded covariates stacked together. The encodings are generated by the encoders defined at model creation with parameter add_encoders. Pass the same series, past_covariates, and future_covariates that you intend to use for prediction.

Parameters
  • n (int) – The number of prediction time steps after the end of series intended to be used for prediction.

  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The series or sequence of series with target values intended to be used for prediction.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series intended to be used for prediction. The dimensions must match those of the covariates used for training.

Returns

A tuple of (past covariates, future covariates). Each covariate contains the original as well as the encoded covariates.

Return type

Tuple[Union[TimeSeries, Sequence[TimeSeries]], Union[TimeSeries, Sequence[TimeSeries]]]
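
For illustration, a sketch of the encoding helpers using an NBEATSModel created with add_encoders (the encoder configuration, chunk lengths, and dataset are illustrative assumptions; the helpers are only meaningful for models created with add_encoders):

from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load()
model = NBEATSModel(
    input_chunk_length=12,
    output_chunk_length=6,
    add_encoders={"datetime_attribute": {"past": ["month"]}},
    n_epochs=1,
)
model.fit(series)
# encodings generated for fitting
past_cov, future_cov = model.generate_fit_encodings(series=series)
# encodings needed to predict the next 6 steps
past_cov, future_cov = model.generate_predict_encodings(n=6, series=series)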

classmethod gridsearch(parameters, series, past_covariates=None, future_covariates=None, forecast_horizon=None, stride=1, start=None, start_format='value', last_points_only=False, show_warnings=True, val_series=None, use_fitted_values=False, metric=<function mape>, reduction=<function mean>, verbose=False, n_jobs=1, n_random_samples=None, fit_kwargs=None, predict_kwargs=None)

Find the best hyper-parameters among a given set using a grid search.

This function has 3 modes of operation: Expanding window mode, split mode and fitted value mode. The three modes of operation evaluate every possible combination of hyper-parameter values provided in the parameters dictionary by instantiating the model_class subclass of ForecastingModel with each combination, and returning the best-performing model with regard to the metric function. The metric function is expected to return an error value, thus the model resulting in the smallest metric output will be chosen.

The relationship of the training data and test data depends on the mode of operation.

Expanding window mode (activated when forecast_horizon is passed): For every hyperparameter combination, the model is repeatedly trained and evaluated on different splits of series. This process is accomplished by using the backtest() function as a subroutine to produce historic forecasts starting from start that are compared against the ground truth values of series. Note that the model is retrained for every single prediction, thus this mode is slower.

Split window mode (activated when val_series is passed): For every hyper-parameter combination, the model is trained on series and evaluated on val_series.

Fitted value mode (activated when use_fitted_values is set to True): For every hyper-parameter combination, the model is trained on series and evaluated on the resulting fitted values. Not all models have fitted values, and this method raises an error if the model doesn’t have a fitted_values member. The fitted values are the result of the fit of the model on series. Comparing with the fitted values can be a quick way to assess the model, but one cannot see if the model is overfitting the series.

Derived classes must ensure that a single instance of a model will not share parameters with the other instances, e.g., saving models in the same path. Otherwise, an unexpected behavior can arise while running several models in parallel (when n_jobs != 1). If this cannot be avoided, then gridsearch should be redefined, forcing n_jobs = 1.

Currently this method only supports deterministic predictions (i.e. when models’ predictions have only 1 sample).

Parameters
  • model_class – The ForecastingModel subclass to be tuned for ‘series’.

  • parameters (dict) – A dictionary containing as keys hyperparameter names, and as values lists of values for the respective hyperparameter.

  • series (TimeSeries) – The target series used as input and target for training.

  • past_covariates (Optional[TimeSeries]) – Optionally, a past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Optional[TimeSeries]) – Optionally, a future-known covariate series. This applies only if the model supports future covariates.

  • forecast_horizon (Optional[int]) – The integer value of the forecasting horizon. Activates expanding window mode.

  • stride (int) – Only used in expanding window mode. The number of time steps between two consecutive predictions.

  • start (Union[Timestamp, float, int, None]) –

    Only used in expanding window mode. Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Only used in expanding window mode. Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • last_points_only (bool) – Only used in expanding window mode. Whether to use the whole forecasts or only the last point of each forecast to compute the error.

  • show_warnings (bool) – Only used in expanding window mode. Whether to show warnings related to the start parameter.

  • val_series (Optional[TimeSeries]) – The TimeSeries instance used for validation in split mode. If provided, this series must start right after the end of series; so that a proper comparison of the forecast can be made.

  • use_fitted_values (bool) – If True, uses the comparison with the fitted values. Raises an error if fitted_values is not an attribute of model_class.

  • metric (Callable[[TimeSeries, TimeSeries], float]) –

    A metric function that returns the error between two TimeSeries as a float value. Must either be one of Darts’ “aggregated over time” metrics (see here), or a custom metric that takes two TimeSeries as input and returns the error.

  • reduction (Callable[[ndarray], float]) – A reduction function (mapping array to float) describing how to aggregate the errors obtained on the different validation series when backtesting. By default it’ll compute the mean of errors.

  • verbose – Whether to print progress.

  • n_jobs (int) – The number of jobs to run in parallel. Parallel jobs are created only when there are two or more parameters combinations to evaluate. Each job will instantiate, train, and evaluate a different instance of the model. Defaults to 1 (sequential). Setting the parameter to -1 means using all the available cores.

  • n_random_samples (Union[int, float, None]) – The number/ratio of hyperparameter combinations to select from the full parameter grid. This will perform a random search instead of using the full grid. If an integer, n_random_samples is the number of parameter combinations selected from the full grid and must be between 0 and the total number of parameter combinations. If a float, n_random_samples is the ratio of parameter combinations selected from the full grid and must be between 0 and 1. Defaults to None, for which random selection will be ignored.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Returns

A tuple containing an untrained model_class instance created from the best-performing hyper-parameters, along with a dictionary containing these best hyper-parameters, and the metric score for the best hyper-parameters.

Return type

ForecastingModel, Dict, float
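
For example, a sketch of an expanding window grid search over a GlobalNaiveAggregate model (the parameter grid, dataset, and settings are illustrative assumptions):

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveAggregate

series = AirPassengersDataset().load()
# expanding window mode is activated by passing `forecast_horizon`
best_model, best_params, best_score = GlobalNaiveAggregate.gridsearch(
    parameters={
        "input_chunk_length": [12, 24, 36],
        "output_chunk_length": [3],
        "agg_fn": ["mean", "sum"],
    },
    series=series,
    forecast_horizon=3,
    stride=3,
    start=0.8,
)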

historical_forecasts(series, past_covariates=None, future_covariates=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, overlap_end=False, last_points_only=True, verbose=False, show_warnings=True, predict_likelihood_parameters=False, enable_optimization=True, fit_kwargs=None, predict_kwargs=None)

Compute the historical forecasts that would have been obtained by this model on (potentially multiple) series.

This method repeatedly builds a training set: either expanding from the beginning of series or moving with a fixed length train_length. It trains the model on the training set, emits a forecast of length equal to forecast_horizon, and then moves the end of the training set forward by stride time steps.

By default, this method will return one (or a sequence of) single time series made up of the last point of each historical forecast. This time series will thus have a frequency of series.freq * stride. If last_points_only is set to False, it will instead return one (or a sequence of) list of the historical forecasts series.

By default, this method always re-trains the models on the entire available history, corresponding to an expanding window strategy. If retrain is set to False, the model must have been fit before. This is not supported by all models.

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target time series used to successively train and compute the historical forecasts.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) past-observed covariate series. This applies only if the model supports past covariates.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one (or a sequence of) of future-known covariate series. This applies only if the model supports future covariates.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in our training set (size of backtesting window to train on). Only effective when retrain is not False. Default is set to train_length=None where it takes all available time steps up until prediction time, otherwise the moving window strategy is used. If larger than the number of time steps available, all steps up until prediction time are used, as in default case. Needs to be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: If the model uses a shifted output (output_chunk_shift > 0), then the first predicted point is also shifted by output_chunk_shift points into the future. Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • forecast_horizon (int) – The forecast horizon for the predictions.

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: retrain the model at each step (True), or never retrains the model (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecast, None will be passed to the corresponding retrain function argument. Note: some models do require being retrained every time and do not support anything other than retrain=True.

  • overlap_end (bool) – Whether the returned forecasts can go beyond the series’ end or not.

  • last_points_only (bool) – Whether to retain only the last point of each historical forecast. If set to True, the method returns a single TimeSeries containing the successive point forecasts. Otherwise, returns a list of historical TimeSeries forecasts.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to historical forecasts optimization, or parameters start and train_length.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n<=output_chunk_length. Default: False

  • enable_optimization (bool) – Whether to use the optimized version of historical_forecasts when supported and available.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

Return type

Union[TimeSeries, List[TimeSeries], List[List[TimeSeries]]]

Returns

  • TimeSeries – A single historical forecast for a single series and last_points_only=True: it contains only the predictions at step forecast_horizon from all historical forecasts.

  • List[TimeSeries] – A list of historical forecasts for:

    • a sequence (list) of series and last_points_only=True: for each series, it contains only the predictions at step forecast_horizon from all historical forecasts.

    • a single series and last_points_only=False: for each historical forecast, it contains the entire horizon forecast_horizon.

  • List[List[TimeSeries]] – A list of lists of historical forecasts for a sequence of series and last_points_only=False. For each series, and historical forecast, it contains the entire horizon forecast_horizon. The outer list is over the series provided in the input sequence, and the inner lists contain the historical forecasts for each series.
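
For example, a sketch of historical forecasts with a GlobalNaiveAggregate model (dataset and settings are illustrative assumptions):

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveAggregate

series = AirPassengersDataset().load()
model = GlobalNaiveAggregate(input_chunk_length=12, output_chunk_length=3)
# with last_points_only=True, a single TimeSeries of last-point forecasts is returned
hfc = model.historical_forecasts(
    series=series,
    start=0.8,
    forecast_horizon=3,
    stride=1,
    retrain=True,
    last_points_only=True,
)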

property input_chunk_length: int
Return type

int

property likelihood: Optional[Likelihood]
Return type

Optional[Likelihood]

static load(path, **kwargs)

Loads a model from a given file path.

Example for loading a general save from RNNModel:

from darts.models import RNNModel

model_loaded = RNNModel.load(path)

Example for loading an RNNModel to CPU that was saved on GPU:

from darts.models import RNNModel

model_loaded = RNNModel.load(path, map_location="cpu")
model_loaded.to_cpu()
Parameters
  • path (str) – Path from which to load the model. If no path was specified when saving the model, the automatically generated path ending with “.pt” has to be provided.

  • **kwargs

    Additional kwargs for PyTorch Lightning’s LightningModule.load_from_checkpoint() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

Return type

TorchForecastingModel

static load_from_checkpoint(model_name, work_dir=None, file_name=None, best=True, **kwargs)

Load the model from automatically saved checkpoints under ‘{work_dir}/darts_logs/{model_name}/checkpoints/’. This method is used for models that were created with save_checkpoints=True.

If you manually saved your model, consider using load().

Example for loading a RNNModel from checkpoint (model_name is the model_name used at model creation):

from darts.models import RNNModel

model_loaded = RNNModel.load_from_checkpoint(model_name, best=True)

If file_name is given, returns the model saved under ‘{work_dir}/darts_logs/{model_name}/checkpoints/{file_name}’.

If file_name is not given, will try to restore the best checkpoint (if best is True) or the most recent checkpoint (if best is False) from ‘{work_dir}/darts_logs/{model_name}/checkpoints/’.

Example for loading an RNNModel checkpoint to CPU that was saved on GPU:

from darts.models import RNNModel

model_loaded = RNNModel.load_from_checkpoint(model_name, best=True, map_location="cpu")
model_loaded.to_cpu()
Parameters
  • model_name (str) – The name of the model, used to retrieve the checkpoints folder’s name.

  • work_dir (Optional[str]) – Working directory (containing the checkpoints folder). Defaults to current working directory.

  • file_name (Optional[str]) – The name of the checkpoint file. If not specified, use the most recent one.

  • best (bool) – If set, will retrieve the best model (according to validation loss) instead of the most recent one. Ignored if file_name is given.

  • **kwargs

    Additional kwargs for PyTorch Lightning’s LightningModule.load_from_checkpoint() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

Returns

The corresponding trained TorchForecastingModel.

Return type

TorchForecastingModel

load_weights(path, load_encoders=True, skip_checks=False, **kwargs)

Loads the weights from a manually saved model (saved with save()).

Note: This method needs to be able to access the darts model checkpoint (.pt) in order to load the encoders and perform sanity checks on the model parameters.

Parameters
  • path (str) – Path from which to load the model’s weights. If no path was specified when saving the model, the automatically generated path ending with “.pt” has to be provided.

  • load_encoders (bool) – If set, will load the encoders from the model to enable direct call of fit() or predict(). Default: True.

  • skip_checks (bool) – If set, will disable the loading of the encoders and the sanity checks on model parameters (not recommended). Cannot be used with load_encoders=True. Default: False.

  • **kwargs

    Additional kwargs for PyTorch’s load() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

load_weights_from_checkpoint(model_name=None, work_dir=None, file_name=None, best=True, strict=True, load_encoders=True, skip_checks=False, **kwargs)

Load only the weights from automatically saved checkpoints under ‘{work_dir}/darts_logs/{model_name}/checkpoints/’. This method is used for models that were created with save_checkpoints=True and that need to be re-trained or fine-tuned with a different optimizer or learning rate scheduler. However, it can also be used to load weights for inference.

To resume an interrupted training, please consider using load_from_checkpoint(), which also reloads the trainer, optimizer and learning rate scheduler states.

For manually saved models, consider using load() or load_weights() instead.

Note: This method needs to be able to access the darts model checkpoint (.pt) in order to load the encoders and perform sanity checks on the model parameters.

Parameters
  • model_name (Optional[str]) – The name of the model, used to retrieve the checkpoints folder’s name. Default: self.model_name.

  • work_dir (Optional[str]) – Working directory (containing the checkpoints folder). Defaults to current working directory.

  • file_name (Optional[str]) – The name of the checkpoint file. If not specified, use the most recent one.

  • best (bool) – If set, will retrieve the best model (according to validation loss) instead of the most recent one. Ignored if file_name is given. Default: True.

  • strict (bool) –

    If set, strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict(). Default: True. For more information, read the official documentation.

  • load_encoders (bool) – If set, will load the encoders from the model to enable direct call of fit() or predict(). Default: True.

  • skip_checks (bool) – If set, will disable the loading of the encoders and the sanity checks on model parameters (not recommended). Cannot be used with load_encoders=True. Default: False.

  • **kwargs

    Additional kwargs for PyTorch’s load() method, such as map_location to load the model onto a different device than the one from which it was saved. For more information, read the official documentation.

lr_find(series, past_covariates=None, future_covariates=None, val_series=None, val_past_covariates=None, val_future_covariates=None, trainer=None, verbose=None, epochs=0, max_samples_per_ts=None, num_loader_workers=0, min_lr=1e-08, max_lr=1, num_training=100, mode='exponential', early_stop_threshold=4.0)

A wrapper around PyTorch Lightning’s Tuner.lr_find(). Performs a range test of good initial learning rates, to reduce the amount of guesswork in picking a good starting learning rate. For more information on PyTorch Lightning’s Tuner check out this link. It is recommended to increase the number of epochs if the tuner did not give satisfactory results. Consider creating a new model object with the suggested learning rate, for example by using the model creation parameters optimizer_cls, optimizer_kwargs, lr_scheduler_cls, and lr_scheduler_kwargs.

Example using an NBEATSModel:

import torch
from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load()
train, val = series[:-18], series[-18:]
model = NBEATSModel(input_chunk_length=12, output_chunk_length=6, random_state=42)
# run the learning rate tuner
results = model.lr_find(series=train, val_series=val)
# plot the results
results.plot(suggest=True, show=True)
# create a new model with the suggested learning rate
model = NBEATSModel(
    input_chunk_length=12,
    output_chunk_length=6,
    random_state=42,
    optimizer_cls=torch.optim.Adam,
    optimizer_kwargs={"lr": results.suggestion()}
)
Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – A series or sequence of series serving as target (i.e. what the model will be trained to forecast)

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying past-observed covariates

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series specifying future-known covariates

  • val_series (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one or a sequence of validation target series, which will be used to compute the validation loss throughout training and keep track of the best performing models.

  • val_past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past covariates corresponding to the validation series (must match covariates)

  • val_future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future covariates corresponding to the validation series (must match covariates)

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform training. Using a custom trainer will override Darts’ default trainer.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • epochs (int) – If specified, will train the model for epochs (additional) epochs, irrespective of what n_epochs was provided to the model constructor.

  • max_samples_per_ts (Optional[int]) – Optionally, a maximum number of samples to use per time series. Models are trained in a supervised fashion by constructing slices of (input, output) examples. On long time series, this can result in unnecessarily large number of training samples. This parameter upper-bounds the number of training samples per time series (taking only the most recent samples in each series). Leaving to None does not apply any upper bound.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, both for the training and validation loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • min_lr (float) – minimum learning rate to investigate

  • max_lr (float) – maximum learning rate to investigate

  • num_training (int) – number of learning rates to test

  • mode (str) – Search strategy to update learning rate after each batch: ‘exponential’: Increases the learning rate exponentially. ‘linear’: Increases the learning rate linearly.

  • early_stop_threshold (float) – Threshold for stopping the search. If the loss at any point is larger than early_stop_threshold*best_loss then the search is stopped. To disable, set to None

Returns

_LRFinder object of Lightning containing the results of the LR sweep.

Return type

lr_finder

property min_train_samples: int

The minimum number of samples for training the model.

Return type

int

property model_created: bool
Return type

bool

property model_params: dict
Return type

dict

property output_chunk_length: int

Number of time steps predicted at once by the model, not defined for statistical models.

Return type

int

property output_chunk_shift: int

Number of time steps that the output/prediction starts after the end of the input.

Return type

int
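
As a sketch of the shift (parameter values and dataset are illustrative assumptions), a model created with output_chunk_shift=2 starts its forecast two steps after the end of the input:

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveAggregate

series = AirPassengersDataset().load()
model = GlobalNaiveAggregate(
    input_chunk_length=12,
    output_chunk_length=3,
    output_chunk_shift=2,
)
model.fit(series)
# the 3 predicted steps start 2 steps after the end of `series`
# here n <= output_chunk_length, so no autoregression is needed
pred = model.predict(n=3)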

predict(n, series=None, past_covariates=None, future_covariates=None, trainer=None, batch_size=None, verbose=None, n_jobs=1, roll_size=None, num_samples=1, num_loader_workers=0, mc_dropout=False, predict_likelihood_parameters=False, show_warnings=True)

Predict the n time steps following the end of the training series, or of the specified series.

Prediction is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with optional parameter trainer. For more information on PyTorch Lightning Trainers check out this link .

Below, all possible parameters are documented, but not all models support all parameters. For instance, all PastCovariatesTorchModel models support only past_covariates and not future_covariates. Darts will complain if you try calling predict() on a model with the wrong covariates argument.

Darts will also complain if the provided covariates do not have a sufficient time span. In general, not all models require the same covariates’ time spans:

  • Models relying on past covariates require the last input_chunk_length of the past_covariates
    points to be known at prediction time. For horizon values n > output_chunk_length, these models
    require at least the next n - output_chunk_length future values to be known as well.
  • Models relying on future covariates require the next n values to be known.
    In addition (for DualCovariatesTorchModel and MixedCovariatesTorchModel), they also
    require the “historic” values of these future covariates (over the past input_chunk_length).

When handling covariates, Darts will try to use the time axes of the target and the covariates to come up with the right time slices. So the covariates can be longer than needed; as long as the time axes are correct Darts will handle them correctly. It will also complain if their time span is not sufficient.

Parameters
  • n (int) – The number of time steps after the end of the training time series for which to produce predictions

  • series (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, a series or sequence of series, representing the history of the target series whose future is to be predicted. If specified, the method returns the forecasts of these series. Otherwise, the method returns the forecast of the (single) training series.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the past-observed covariates series needed as inputs for the model. They must match the covariates used for training in terms of dimension.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, the future-known covariates series needed as inputs for the model. They must match the covariates used for training in terms of dimension.

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform prediction. Using a custom trainer will override Darts’ default trainer.

  • batch_size (Optional[int]) – Size of batches during prediction. Defaults to the model’s training batch_size value.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • n_jobs (int) – The number of jobs to run in parallel. -1 means using all processors. Defaults to 1.

  • roll_size (Optional[int]) – For self-consuming predictions, i.e. n > output_chunk_length, determines how many outputs of the model are fed back into it at every iteration of feeding the predicted target (and optionally future covariates) back into the model. If this parameter is not provided, it will be set to output_chunk_length by default.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Should be left set to 1 for deterministic models.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, for the inference/prediction dataset loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • mc_dropout (bool) – Optionally, enable Monte Carlo dropout for predictions using neural network based models. This allows Bayesian approximation by specifying an implicit prior over learned models.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n<=output_chunk_length. Default: False.

  • show_warnings (bool) – Optionally, control whether warnings are shown. Not effective for all models.

Returns

One or several time series containing the forecasts of series, or the forecast of the training series if series is not specified and the model has been trained on a single series.

Return type

Union[TimeSeries, Sequence[TimeSeries]]
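
For example, a sketch forecasting both the training series and another series (the dataset and the shifted copy are illustrative assumptions):

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveAggregate

series = AirPassengersDataset().load()
model = GlobalNaiveAggregate(input_chunk_length=12, output_chunk_length=3)
model.fit(series)
# forecast of the (single) training series
pred_train = model.predict(n=3)
# forecast of a different series passed explicitly
pred_other = model.predict(n=3, series=series + 100.)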

predict_from_dataset(n, input_series_dataset, trainer=None, batch_size=None, verbose=None, n_jobs=1, roll_size=None, num_samples=1, num_loader_workers=0, mc_dropout=False, predict_likelihood_parameters=False)

This method allows for predicting with a specific darts.utils.data.InferenceDataset instance. These datasets implement a PyTorch Dataset, and specify how the target and covariates are sliced for inference. In most cases, you will likely want to call predict() instead, which will create an appropriate InferenceDataset for you.

Prediction is performed with a PyTorch Lightning Trainer. It uses a default Trainer object from presets and pl_trainer_kwargs used at model creation. You can also use a custom Trainer with optional parameter trainer. For more information on PyTorch Lightning Trainers check out this link .

Parameters
  • n (int) – The number of time steps after the end of the training time series for which to produce predictions

  • input_series_dataset (InferenceDataset) – The inference dataset specifying the target (and optionally covariate) series for which to compute forecasts.

  • trainer (Optional[Trainer]) – Optionally, a custom PyTorch-Lightning Trainer object to perform prediction. Using a custom trainer will override Darts’ default trainer.

  • batch_size (Optional[int]) – Size of batches during prediction. Defaults to the model’s batch_size value.

  • verbose (Optional[bool]) – Optionally, whether to print the progress. Ignored if there is a ProgressBar callback in pl_trainer_kwargs.

  • n_jobs (int) – The number of jobs to run in parallel. -1 means using all processors. Defaults to 1.

  • roll_size (Optional[int]) – For self-consuming predictions, i.e. n > output_chunk_length, determines how many outputs of the model are fed back into it at every iteration of feeding the predicted target (and optionally future covariates) back into the model. If this parameter is not provided, it will be set to output_chunk_length by default.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Should be left set to 1 for deterministic models.

  • num_loader_workers (int) – Optionally, an integer specifying the num_workers to use in PyTorch DataLoader instances, for the inference/prediction dataset loaders (if any). A larger number of workers can sometimes increase performance, but can also incur extra overheads and increase memory usage, as more batches are loaded in parallel.

  • mc_dropout (bool) – Optionally, enable Monte Carlo dropout for predictions using neural network based models. This allows Bayesian approximation by specifying an implicit prior over learned models.

  • predict_likelihood_parameters (bool) – If set to True, the model predicts the parameters of its likelihood instead of the target. Only supported for probabilistic models with a likelihood, num_samples = 1 and n<=output_chunk_length. Default: False

Returns

Returns one or more forecasts for time series.

Return type

Sequence[TimeSeries]

reset_model()

Resets the model object and removes all stored data - model, checkpoints, loggers and training history.

residuals(series, past_covariates=None, future_covariates=None, historical_forecasts=None, num_samples=1, train_length=None, start=None, start_format='value', forecast_horizon=1, stride=1, retrain=True, last_points_only=True, metric=<function err>, verbose=False, show_warnings=True, metric_kwargs=None, fit_kwargs=None, predict_kwargs=None, values_only=False)

Compute the residuals produced by this model on a (or sequence of) TimeSeries.

This function computes the difference (or one of Darts’ “per time step” metrics) between the actual observations from series and the fitted values obtained by training the model on series (or using a pre-trained model with retrain=False). Not all models support fitted values, so we use historical forecasts as an approximation for them.

In sequence this method performs:

  • compute historical forecasts for each series or use pre-computed historical_forecasts (see historical_forecasts() for more details). How the historical forecasts are generated can be configured with parameters num_samples, train_length, start, start_format, forecast_horizon, stride, retrain, last_points_only, fit_kwargs, and predict_kwargs.

  • compute a backtest using a “per time step” metric between the historical forecasts and series per component/column and time step (see backtest() for more details). By default, uses the residuals err() as a metric.

  • create and return TimeSeries (or simply a np.ndarray with values_only=True) with the time index from historical forecasts, and values from the metrics per component and time step.

This method works for single or multiple univariate or multivariate series. It uses the median prediction (when dealing with stochastic forecasts).

Parameters
  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (or a sequence of) target TimeSeries for which the residuals are computed.

  • past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – One or several past-observed covariate time series.

  • future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – One or several future-known covariate time series.

  • forecast_horizon (int) – The forecasting horizon used to predict each fitted value.

  • historical_forecasts (Union[TimeSeries, Sequence[TimeSeries], Sequence[Sequence[TimeSeries]], None]) – Optionally, the (or a sequence of / a sequence of sequences of) historical forecasts time series to be evaluated. Corresponds to the output of historical_forecasts(). The same series and last_points_only values must be passed that were used to generate the historical forecasts. If provided, will skip historical forecasting and ignore all parameters except series, last_points_only, metric, and reduction.

  • num_samples (int) – Number of times a prediction is sampled from a probabilistic model. Use values >1 only for probabilistic models.

  • train_length (Optional[int]) – Number of time steps in our training set (size of backtesting window to train on). Only effective when retrain is not False. Default is set to train_length=None where it takes all available time steps up until prediction time, otherwise the moving window strategy is used. If larger than the number of time steps available, all steps up until prediction time are used, as in default case. Needs to be at least min_train_series_length.

  • start (Union[Timestamp, float, int, None]) –

    Optionally, the first point in time at which a prediction is computed. This parameter supports: float, int, pandas.Timestamp, and None. If a float, it is the proportion of the time series that should lie before the first prediction point. If an int, it is either the index position of the first prediction point for series with a pd.DatetimeIndex, or the index value for series with a pd.RangeIndex. The latter can be changed to the index position with start_format=”position”. If a pandas.Timestamp, it is the time stamp of the first prediction point. If None, the first prediction point will automatically be set to:

    • the first predictable point if retrain is False, or retrain is a Callable and the first predictable point is earlier than the first trainable point.

    • the first trainable point if retrain is True or int (given train_length), or retrain is a Callable and the first trainable point is earlier than the first predictable point.

    • the first trainable point (given train_length) otherwise

    Note: Raises a ValueError if start yields a time outside the time index of series. Note: If start is outside the possible historical forecasting times, will ignore the parameter (default behavior with None) and start at the first trainable/predictable point.

  • start_format (Literal[‘position’, ‘value’]) – Defines the start format. Only effective when start is an integer and series is indexed with a pd.RangeIndex. If set to ‘position’, start corresponds to the index position of the first predicted point and can range from (-len(series), len(series) - 1). If set to ‘value’, start corresponds to the index value/label of the first predicted point. Will raise an error if the value is not in series’ index. Default: 'value'

  • forecast_horizon – The forecast horizon for the point predictions.

  • stride (int) – The number of time steps between two consecutive predictions.

  • retrain (Union[bool, int, Callable[…, bool]]) –

    Whether and/or on which condition to retrain the model before predicting. This parameter supports 3 different datatypes: bool, (positive) int, and Callable (returning a bool). In the case of bool: retrain the model at each step (True), or never retrains the model (False). In the case of int: the model is retrained every retrain iterations. In the case of Callable: the model is retrained whenever callable returns True. The callable must have the following positional arguments:

    • counter (int): current retrain iteration

    • pred_time (pd.Timestamp or int): timestamp of forecast time (end of the training series)

    • train_series (TimeSeries): train series up to pred_time

    • past_covariates (TimeSeries): past_covariates series up to pred_time

    • future_covariates (TimeSeries): future_covariates series up to min(pred_time + series.freq * forecast_horizon, series.end_time())

    Note: if any optional *_covariates are not passed to historical_forecast, None will be passed to the corresponding retrain function argument. Note: some models do require being retrained every time and do not support anything other than retrain=True.

  • last_points_only (bool) – Whether to use the whole historical forecasts or only the last point of each forecast to compute the error.

  • metric (Callable[…, Union[float, List[float], ndarray, List[ndarray]]]) –

    Either one of Darts’ “per time step” metrics (see here), or a custom metric that has an identical signature as Darts’ “per time step” metrics, uses the decorators multi_ts_support() and multivariate_support(), and returns one value per time step.

  • verbose (bool) – Whether to print progress.

  • show_warnings (bool) – Whether to show warnings related to parameters start, and train_length.

  • metric_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to metric(), such as ‘n_jobs’ for parallelization, ‘m’ for scaled metrics, etc. Will pass arguments only if they are present in the corresponding metric signature. Ignores reduction arguments “series_reduction”, “component_reduction”, “time_reduction”, and parameter ‘insample’ for scaled metrics (e.g. mase, rmsse, …), as they are handled internally.

  • fit_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model fit() method.

  • predict_kwargs (Optional[Dict[str, Any]]) – Additional arguments passed to the model predict() method.

  • values_only (bool) – Whether to return the residuals as np.ndarray. If False, returns residuals as TimeSeries.

Return type

Union[TimeSeries, List[TimeSeries], List[List[TimeSeries]]]

Returns

  • TimeSeries – Residual TimeSeries for a single series and historical_forecasts generated with last_points_only=True.

  • List[TimeSeries] – A list of residual TimeSeries for a sequence (list) of series with last_points_only=True. The residual list has length len(series).

  • List[List[TimeSeries]] – A list of lists of residual TimeSeries for a sequence of series with last_points_only=False. The outer residual list has length len(series). The inner lists consist of the residuals from all possible series-specific historical forecasts.
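
For example, a sketch computing residuals for a GlobalNaiveAggregate model (dataset and settings are illustrative assumptions):

from darts.datasets import AirPassengersDataset
from darts.models import GlobalNaiveAggregate

series = AirPassengersDataset().load()
model = GlobalNaiveAggregate(input_chunk_length=12, output_chunk_length=1)
# residuals between `series` and the historical one-step-ahead forecasts
res = model.residuals(
    series,
    start=0.8,
    forecast_horizon=1,
    values_only=False,  # return a TimeSeries rather than a np.ndarray
)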

save(path=None)

Saves the model under a given path.

Creates two files under path (model object) and path.ckpt (checkpoint).

Example for saving and loading a RNNModel:

from darts.models import RNNModel

model = RNNModel(input_chunk_length=4)

model.save("my_model.pt")
model_loaded = RNNModel.load("my_model.pt")
Parameters

path (Optional[str]) – Path under which to save the model at its current state. Please avoid paths starting with “last-” or “best-” to avoid collision with PyTorch Lightning checkpoints. If no path is specified, the model is automatically saved under "{ModelClass}_{YYYY-mm-dd_HH_MM_SS}.pt". E.g., "RNNModel_2020-01-01_12_00_00.pt".

Return type

None

property supports_future_covariates: bool

Whether model supports future covariates

Return type

bool

supports_likelihood_parameter_prediction()

Whether model instance supports direct prediction of likelihood parameters

Return type

bool

property supports_multivariate: bool

Whether the model considers more than one variate in the time series.

Return type

bool

property supports_optimized_historical_forecasts: bool

Whether the model supports optimized historical forecasts

Return type

bool

property supports_past_covariates: bool

Whether model supports past covariates

Return type

bool

supports_probabilistic_prediction()

Checks if the forecasting model with this configuration supports probabilistic predictions.

By default, returns False. Needs to be overwritten by models that do support probabilistic predictions.

Return type

bool

property supports_static_covariates: bool

Whether model supports static covariates

Return type

bool

property supports_transferrable_series_prediction: bool

Whether the model supports prediction for any input series.

Return type

bool

to_cpu()

Updates the PyTorch Lightning Trainer parameters to move the model to CPU the next time fit() or predict() is called.

property uses_future_covariates: bool

Whether the model uses future covariates, once fitted.

Return type

bool

property uses_past_covariates: bool

Whether the model uses past covariates, once fitted.

Return type

bool

property uses_static_covariates: bool

Whether the model uses static covariates, once fitted.

Return type

bool