Explainability Result#
Contains the explainability results obtained from _ForecastingModelExplainer.explain().
- ComponentBasedExplainabilityResult: for component based explainability results
- HorizonBasedExplainabilityResult: for horizon based explainability results
- class darts.explainability.explainability_result.ComponentBasedExplainabilityResult(explained_components)[source]#
Bases: _ExplainabilityResult

Explainability result for general component objects. The explained components can describe anything.
Examples
>>> explainer = SomeComponentBasedExplainer(model)
>>> explain_results = explainer.explain()
>>> output = explain_results.get_explanation(component="some_component")
Methods
get_explanation(component): Returns one or several explanations for a given component.
- class darts.explainability.explainability_result.HorizonBasedExplainabilityResult(explained_forecasts)[source]#
Bases: _ExplainabilityResult

Stores the explainability results of a _ForecastingModelExplainer with convenient access to the horizon based results.

The result is a multivariate TimeSeries instance containing the ‘explanation’ for the (horizon, target_component) forecast at any forecastable timestamp of the foreground TimeSeries input.
The component name convention of this multivariate TimeSeries is:
"{name}_{type_of_cov}_lag_{idx}", where:{name}is the component name from the original foreground series (target, past, or future).{type_of_cov}is the covariates type. It can take 3 different values:"target","past_cov"or"future_cov".{idx}is the lag index.
Examples
Say we have a model with 2 target components named "T_0" and "T_1", 3 past covariates with default component names "0", "1", and "2", and one future covariate with default component name "0". Also, horizons = [1, 2]. The model is a SKLearnModel with lags = 3, lags_past_covariates = [-1, -3], and lags_future_covariates = [0].

We provide foreground_series, foreground_past_covariates, and foreground_future_covariates, each of length 5.
>>> explainer = SomeHorizonBasedExplainer(model)
>>> explain_results = explainer.explain(
>>>     foreground_series=foreground_series,
>>>     foreground_past_covariates=foreground_past_covariates,
>>>     foreground_future_covariates=foreground_future_covariates,
>>>     horizons=[1, 2],
>>>     target_names=["T_0", "T_1"]
>>> )
>>> output = explain_results.get_explanation(horizon=1, target="T_1")
Then the method returns a multivariate TimeSeries containing the explanations of the corresponding _ForecastingModelExplainer, with the following component names:
T_0_target_lag-1
T_0_target_lag-2
T_0_target_lag-3
T_1_target_lag-1
T_1_target_lag-2
T_1_target_lag-3
0_past_cov_lag-1
0_past_cov_lag-3
1_past_cov_lag-1
1_past_cov_lag-3
2_past_cov_lag-1
2_past_cov_lag-3
0_fut_cov_lag_0
This series has length 3, as the model can explain 5-3+1 forecasts (timestamp indexes 4, 5, and 6).
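Continuing this example, a minimal sketch of how the returned series could be inspected, using standard TimeSeries attributes (nothing specific to the explainer):

>>> len(output)        # 3 explained timestamps, as described above
>>> output.components  # the lagged-feature component names listed above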
Methods
get_explanation(horizon[, component]): Returns one or several TimeSeries representing the explanations for a given horizon and component.
- get_explanation(horizon, component=None)[source]#
Returns one or several TimeSeries representing the explanations for a given horizon and component.
- Parameters:
  - horizon (int) – The horizon for which to return the explanation.
  - component (Optional[str]) – The component for which to return the explanation. Does not need to be specified for univariate series.
- Return type:
  Union[TimeSeries, list[TimeSeries]]
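A minimal usage sketch, reusing explain_results and the target component names from the example above:

>>> expl_t0 = explain_results.get_explanation(horizon=1, component="T_0")
>>> expl_t1 = explain_results.get_explanation(horizon=2, component="T_1")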
- class darts.explainability.explainability_result.ShapExplainabilityResult(explained_forecasts, feature_values, shap_explanation_object)[source]#
Bases: HorizonBasedExplainabilityResult

Stores the explainability results of a ShapExplainer with convenient access to the results. It extends the HorizonBasedExplainabilityResult and carries additional information specific to the Shap explainers. In particular, in addition to the explained_forecasts (which in the case of the ShapExplainer are the shap values), it also provides access to the corresponding feature_values and the underlying shap.Explanation object.

- get_explanation(): explained forecast for a given horizon (and target component).
- get_feature_values(): feature values for a given horizon (and target component).
- get_shap_explanation_object(): shap.Explanation object for a given horizon (and target component).
Examples
>>> explainer = ShapExplainer(model)  # requires `background` if model was trained on multiple series
>>> explain_results = explainer.explain()
>>> explained_fc = explain_results.get_explanation(horizon=1)
>>> feature_values = explain_results.get_feature_values(horizon=1)
>>> shap_objects = explain_results.get_shap_explanation_object(horizon=1)
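For additional context, a minimal end-to-end sketch of how such a result object could be obtained; the dataset and model choice here are purely illustrative:

>>> from darts.datasets import AirPassengersDataset
>>> from darts.explainability import ShapExplainer
>>> from darts.models import LinearRegressionModel
>>>
>>> series = AirPassengersDataset().load()
>>> model = LinearRegressionModel(lags=12)
>>> model.fit(series)
>>> explainer = ShapExplainer(model)       # background defaults to the training series
>>> explain_results = explainer.explain()  # a ShapExplainabilityResult
>>> shap_values = explain_results.get_explanation(horizon=1)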
Methods
get_explanation(horizon[, component]): Returns one or several TimeSeries representing the explanations for a given horizon and component.
get_feature_values(horizon[, component]): Returns one or several TimeSeries representing the feature values for a given horizon and component.
get_shap_explanation_object(horizon[, component]): Returns the underlying shap.Explanation object for a given horizon and component.
- get_explanation(horizon, component=None)#
Returns one or several TimeSeries representing the explanations for a given horizon and component.
- Parameters:
  - horizon (int) – The horizon for which to return the explanation.
  - component (Optional[str]) – The component for which to return the explanation. Does not need to be specified for univariate series.
- Return type:
  Union[TimeSeries, list[TimeSeries]]
- get_feature_values(horizon, component=None)[source]#
Returns one or several TimeSeries representing the feature values for a given horizon and component.
- Parameters:
  - horizon (int) – The horizon for which to return the feature values.
  - component (Optional[str]) – The component for which to return the feature values. Does not need to be specified for univariate series.
- Return type:
  Union[TimeSeries, list[TimeSeries]]
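A minimal sketch of how the feature values can be paired with the shap values (explain_results as in the class example above); both calls are expected to return TimeSeries over the same lagged-feature components:

>>> shap_values = explain_results.get_explanation(horizon=1)
>>> feature_values = explain_results.get_feature_values(horizon=1)
>>> feature_values.components  # lagged-feature component names
>>> feature_values.values()    # the underlying feature values as a numpy array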
- get_shap_explanation_object(horizon, component=None)[source]#
Returns the underlying shap.Explanation object for a given horizon and component.
- Parameters:
  - horizon (int) – The horizon for which to return the shap.Explanation object.
  - component (Optional[str]) – The component for which to return the shap.Explanation object. Does not need to be specified for univariate series.
- Return type:
  Union[Explanation, list[Explanation]]
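A minimal sketch of working with the returned object; values, base_values and data are standard attributes of shap.Explanation:

>>> shap_object = explain_results.get_shap_explanation_object(horizon=1)
>>> shap_object.values       # shap values as a numpy array
>>> shap_object.base_values  # the baseline (expected) model output
>>> shap_object.data         # the corresponding input feature values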
- class darts.explainability.explainability_result.TFTExplainabilityResult(explanations)[source]#
Bases: ComponentBasedExplainabilityResult

Stores the explainability results of a TFTExplainer with convenient access to the results. It extends the ComponentBasedExplainabilityResult and carries information specific to the TFT explainer.

- get_attention(): self attention over the encoder and decoder.
- get_encoder_importance(): encoder feature importances including past target, past covariates, and historic part of future covariates.
- get_decoder_importance(): decoder feature importances including future part of future covariates.
- get_static_covariates_importance(): static covariates importances.
- get_feature_importances(): get all feature importances at once.
Examples
>>> explainer = TFTExplainer(model)  # requires `background` if model was trained on multiple series
>>> explain_results = explainer.explain()
>>> attention = explain_results.get_attention()
>>> importances = explain_results.get_feature_importances()
>>> encoder_importance = explain_results.get_encoder_importance()
>>> decoder_importance = explain_results.get_decoder_importance()
>>> static_covariates_importance = explain_results.get_static_covariates_importance()
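For additional context, a minimal end-to-end sketch of how such a result object could be obtained; the dataset and hyperparameters here are purely illustrative:

>>> from darts.datasets import AirPassengersDataset
>>> from darts.explainability import TFTExplainer
>>> from darts.models import TFTModel
>>>
>>> series = AirPassengersDataset().load()
>>> model = TFTModel(
>>>     input_chunk_length=12,
>>>     output_chunk_length=6,
>>>     add_relative_index=True,  # adds a simple future covariate, since TFTModel requires one
>>>     n_epochs=5,
>>> )
>>> model.fit(series)
>>> explainer = TFTExplainer(model)
>>> explain_results = explainer.explain()  # a TFTExplainabilityResult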
Methods
get_attention(): Returns the time-dependent attention on the encoder and decoder for each horizon in (1, output_chunk_length).
get_decoder_importance(): Returns the time-dependent decoder importances as a pd.DataFrame.
get_encoder_importance(): Returns the time-dependent encoder importances as a pd.DataFrame.
get_explanation(component): Returns one or several explanations for a given component.
get_feature_importances(): Returns the feature importances for the encoder, decoder and static covariates as pd.DataFrames.
get_static_covariates_importance(): Returns the numeric and categorical static covariates importances as a pd.DataFrame.
- get_attention()[source]#
Returns the time-dependent attention on the encoder and decoder for each horizon in (1, output_chunk_length). The time index starts at the prediction series’ start time minus input_chunk_length and ends at the prediction series’ end time. If multiple series were used when calling TFTExplainer.explain(), returns a list of TimeSeries.
- Return type:
  Union[TimeSeries, list[TimeSeries]]
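A minimal sketch; the result is a regular TimeSeries, so it can be plotted directly to visualize the attention per horizon:

>>> attention = explain_results.get_attention()
>>> attention.plot()  # one component (curve) per horizon over the encoder/decoder time span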
- get_decoder_importance()[source]#
Returns the time-dependent decoder importances as a pd.DataFrame. If multiple series were used in TFTExplainer.explain(), returns a list of pd.DataFrames.
- Return type:
  Union[DataFrame, list[DataFrame]]
- get_encoder_importance()[source]#
Returns the time-dependent encoder importances as a pd.DataFrame. If multiple series were used in TFTExplainer.explain(), returns a list of pd.DataFrames.
- Return type:
  Union[DataFrame, list[DataFrame]]
- get_explanation(component)#
Returns one or several explanations for a given component.
- Parameters:
  - component – The component for which to return the explanation.
- Return type:
  Union[Any, list[Any]]
- get_feature_importances()[source]#
Returns the feature importances for the encoder, decoder and static covariates as pd.DataFrames. If multiple series were used in TFTExplainer.explain(), returns a list of pd.DataFrames per importance.
- Return type:
  dict[str, Union[DataFrame, list[DataFrame]]]
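A minimal sketch, iterating over the returned dict without relying on specific key names:

>>> importances = explain_results.get_feature_importances()
>>> for name, df in importances.items():
>>>     print(name, df.shape)  # with a single explained series, each value is a pd.DataFrame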
- get_static_covariates_importance()[source]#
Returns the numeric and categorical static covariates importances as a pd.DataFrame. If multiple series were used in TFTExplainer.explain(), returns a list of pd.DataFrames.
- Return type:
  Union[DataFrame, list[DataFrame]]