Shap Explainer for RegressionModels

A shap explainer specifically for time series forecasting models.

This class is (currently) limited to forecasting models that are instances of Darts’ RegressionModel. It uses shap values to provide “explanations” of each input feature. The input features are the different past lags (of the target and/or past covariates), as well as potential future lags of future covariates used as inputs by the forecasting model to produce its forecasts. Furthermore, in the case of multivariate series, the features contain each dimension of each of the (lagged) series.

Note

This explainer is subject to the usual feature independence assumption used to compute shap values. This means that it does not capture the potential indirect influence that some lags may have on the target by influencing other lags.

  • explain() generates the explanations for a given foreground series (or background series, if foreground is not provided).

  • summary_plot() displays a shap plot summary for each horizon and each component dimension of the target series.

  • force_plot_from_ts() displays a shap force_plot for one target and one horizon, for a given target series. It displays shap values of each lag/covariate with an additive force layout.

class darts.explainability.shap_explainer.ShapExplainer(model, background_series=None, background_past_covariates=None, background_future_covariates=None, background_num_samples=None, shap_method=None, **kwargs)[source]

Bases: _ForecastingModelExplainer

Definitions

  • A background series is a TimeSeries used to train the shap explainer.

  • A foreground series is a TimeSeries that can be explained by a shap explainer after it has been fitted.

Currently, ShapExplainer only works with RegressionModel forecasting models. The number of explained horizons (t+1, t+2, …) can be at most equal to output_chunk_length of model.

Parameters
  • model (darts.models.forecasting.regression_model.RegressionModel) – A RegressionModel to be explained. It must be fitted first.

  • background_series (Union[TimeSeries, Sequence[TimeSeries], None]) – One or several series to train the ShapExplainer along with any foreground series. Consider using a reduced well-chosen background to reduce computation time. Optional if model was fit on a single target series. By default, it is the series used at fitting time. Mandatory if model was fit on multiple (list of) target series.

  • background_past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – A past covariates series or list of series that the model needs once fitted.

  • background_future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – A future covariates series or list of series that the model needs once fitted.

  • background_num_samples (Optional[int, None]) – Optionally, the number of samples to draw from the original background. Randomly picks background_num_samples training samples from the constructed training dataset (using shap.utils.sample()). Generally used for faster computation, especially when shap_method is "kernel" or "permutation".

  • shap_method (Optional[str, None]) – Optionally, the shap method to apply. By default, an attempt is made to select the most appropriate method based on an internal mapping of known model types. Supported values: "permutation", "partition", "tree", "kernel", "sampling", "linear", "deep", "gradient", "additive".

  • **kwargs – Optionally, additional keyword arguments passed to shap_method.

Examples

>>> from darts.datasets import AirPassengersDataset
>>> from darts.explainability.shap_explainer import ShapExplainer
>>> from darts.models import LinearRegressionModel
>>> series = AirPassengersDataset().load()
>>> model = LinearRegressionModel(lags=12)
>>> model.fit(series[:-36])
>>> shap_explain = ShapExplainer(model)
>>> results = shap_explain.explain()
>>> shap_explain.summary_plot()
>>> shap_explain.force_plot_from_ts()
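If the model was fit on multiple target series, the background must be passed explicitly. A minimal sketch reusing the imports above, assuming series_a and series_b are two hypothetical target TimeSeries and using illustrative values for shap_method and background_num_samples:

>>> model = LinearRegressionModel(lags=12)
>>> model.fit([series_a, series_b])
>>> shap_explain = ShapExplainer(
>>>     model,
>>>     background_series=[series_a, series_b],  # mandatory here: model was fit on multiple series
>>>     shap_method="permutation",               # illustrative choice among the supported values
>>>     background_num_samples=100,              # illustrative subsampling for faster computation
>>> )
>>> results = shap_explain.explain()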

Methods

  • explain([foreground_series, ...]) – Explains a foreground time series and returns a ShapExplainabilityResult.

  • force_plot_from_ts([foreground_series, ...]) – Display a shap force_plot for one target and one horizon, for a given foreground_series.

  • summary_plot([horizons, target_components, ...]) – Display a shap plot summary for each horizon and each component dimension of the target.

explain(foreground_series=None, foreground_past_covariates=None, foreground_future_covariates=None, horizons=None, target_components=None)[source]

Explains a foreground time series and returns a ShapExplainabilityResult. The results can be retrieved with the method get_explanation(). The result is a multivariate TimeSeries instance containing the ‘explanation’ for the (horizon, target_component) forecast at any forecastable timestamp of the foreground TimeSeries input.

The component name convention of this multivariate TimeSeries is: "{name}_{type_of_cov}_lag_{idx}", where:

  • {name} is the component name from the original foreground series (target, past, or future).

  • {type_of_cov} is the covariates type. It can take 3 different values: "target", "past_cov" or "future_cov".

  • {idx} is the lag index.

Parameters
  • foreground_series (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one or a sequence of target TimeSeries to be explained. Can be multivariate. If not provided, the background TimeSeries will be explained instead.

  • foreground_past_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one or a sequence of past covariates TimeSeries if required by the forecasting model.

  • foreground_future_covariates (Union[TimeSeries, Sequence[TimeSeries], None]) – Optionally, one or a sequence of future covariates TimeSeries if required by the forecasting model.

  • horizons (Optional[Sequence[int], None]) – Optionally, an integer or sequence of integers representing the future time steps to be explained. 1 corresponds to the first timestamp being forecasted. All values must be <= output_chunk_length of the explained forecasting model.

  • target_components (Optional[Sequence[str], None]) – Optionally, a string or sequence of strings with the target components to explain.

Returns

The forecast explanations

Return type

ShapExplainabilityResult

Examples

Say we have a model with 2 target components named "T_0" and "T_1", 3 past covariates with default component names "0", "1", and "2", and one future covariate with default component name "0". Also, horizons=[1, 2]. The model is a regression model with lags=3, lags_past_covariates=[-1, -3], and lags_future_covariates=[0].

We provide foreground_series, foreground_past_covariates, foreground_future_covariates each of length 5.

>>> explain_results = explainer.explain(
>>>     foreground_series=foreground_series,
>>>     foreground_past_covariates=foreground_past_covariates,
>>>     foreground_future_covariates=foreground_future_covariates,
>>>     horizons=[1, 2],
>>>     target_components=["T_0", "T_1"])
>>> output = explain_results.get_explanation(horizon=1, component="T_1")
>>> feature_values = explain_results.get_feature_values(horizon=1, component="T_1")
>>> shap_objects = explain_results.get_shap_explanation_objects(horizon=1, component="T_1")

Then the method returns a multivariate TimeSeries containing the explanations of the ShapExplainer, with the following component names:

  • T_0_target_lag-1

  • T_0_target_lag-2

  • T_0_target_lag-3

  • T_1_target_lag-1

  • T_1_target_lag-2

  • T_1_target_lag-3

  • 0_past_cov_lag-1

  • 0_past_cov_lag-3

  • 1_past_cov_lag-1

  • 1_past_cov_lag-3

  • 2_past_cov_lag-1

  • 2_past_cov_lag-3

  • 0_fut_cov_lag_0

This series has length 3, as the model can explain 5-3+1 forecasts (timestamp indexes 4, 5, and 6).
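Since the returned output is a regular multivariate TimeSeries, the explanations can be inspected with the usual TimeSeries accessors. A small sketch continuing the example above (assuming a Darts version exposing components and pd_dataframe()):

>>> output.components      # the component names listed above, e.g. "T_0_target_lag-1"
>>> output.pd_dataframe()  # shap values as a pandas DataFrame, one row per explained timestamp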

force_plot_from_ts(foreground_series=None, foreground_past_covariates=None, foreground_future_covariates=None, horizon=1, target_component=None, **kwargs)[source]

Display a shap force_plot for one target and one horizon, for a given foreground_series. It displays shap values of each lag/covariate with an additive force layout.

Once the plot is displayed, select “original sample ordering” to observe the time series chronologically.

Parameters
  • foreground_series (Optional[TimeSeries, None]) – Optionally, the target series to explain. Can be multivariate. If None, will use the background_series.

  • foreground_past_covariates (Optional[TimeSeries, None]) – Optionally, a past covariate series if required by the forecasting model. If None, will use the background_past_covariates.

  • foreground_future_covariates (Optional[TimeSeries, None]) – Optionally, a future covariate series if required by the forecasting model. If None, will use the background_future_covariates.

  • horizon (Optional[int, None]) – Optionally, an integer for the point/step in the future to explain, starting from the first prediction step at 1. horizon must not be larger than output_chunk_length.

  • target_component (Optional[str, None]) – Optionally, the target component to plot. If the target series is multivariate, the target component must be specified.

  • **kwargs – Optionally, additional keyword arguments passed to shap.force_plot().
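As an illustration (not part of the original docstring), a force plot for the first horizon of a hypothetical multivariate target containing a component named "T_0" could look like this, assuming foreground_series and foreground_past_covariates are available and the model uses past covariates:

>>> shap_explain.force_plot_from_ts(
>>>     foreground_series=foreground_series,
>>>     foreground_past_covariates=foreground_past_covariates,
>>>     horizon=1,
>>>     target_component="T_0",
>>> )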

model: RegressionModel
summary_plot(horizons=None, target_components=None, num_samples=None, plot_type='dot', **kwargs)[source]

Display a shap plot summary for each horizon and each component dimension of the target. This method reuses the initial background data as foreground (potentially sampled) to give a general importance plot for each feature. If no target components and/or no horizons are provided, summary plots are produced for all of them.

Parameters
  • horizons (Union[int, Sequence[int], None]) – Optionally, an integer or sequence of integers representing which points/steps in the future to explain, starting from the first prediction step at 1. All values must be <= output_chunk_length of the forecasting model.

  • target_components (Union[str, Sequence[str], None]) – Optionally, a string or sequence of strings with the target components to explain.

  • num_samples (Optional[int, None]) – Optionally, an integer for sampling the foreground series (based on the background), for the sake of performance.

  • plot_type (Optional[str, None]) – Optionally, which of the shap library's plot types to use. Can be one of 'dot', 'bar', or 'violin'.

Returns

A nested dictionary {horizon : {component : shap.Explanation}} containing the raw Explanations for all the horizons and components.

Return type

shaps_
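For example, a bar-style summary restricted to the first horizon could be requested as follows (a sketch; num_samples=50 is an arbitrary illustrative value):

>>> shaps = shap_explain.summary_plot(horizons=[1], plot_type="bar", num_samples=50)

Per the Returns section above, shaps then maps each plotted horizon and target component to its raw shap.Explanation.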