Scorers Base Classes

class darts.ad.scorers.scorers.AnomalyScorer(univariate_scorer, window)[source]

Bases: ABC

Base class for all anomaly scorers.

Attributes

is_probabilistic

Whether the scorer expects a probabilistic prediction for its first input.

Methods

eval_accuracy_from_prediction(...[, metric])

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

show_anomalies_from_prediction(...[, ...])

Plot the results of the scorer.

score_from_prediction

eval_accuracy_from_prediction(actual_anomalies, actual_series, pred_series, metric='AUC_ROC')[source]

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

Parameters
  • actual_anomalies (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

  • metric (str) – Optionally, metric function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

Returns

Score of an agnostic threshold metric for the computed anomaly score:
  • float, if actual_series and pred_series are univariate series (dimension=1).

  • Sequence[float],

    • if actual_series and pred_series are multivariate series (dimension>1), returns one value per dimension, or

    • if actual_series and pred_series are sequences of univariate series, returns one value per series.

  • Sequence[Sequence[float]], if actual_series and pred_series are sequences of multivariate series. The outer Sequence is over the sequence input and the inner Sequence is over the dimensions of each element in the sequence input.

Return type

Union[float, Sequence[float], Sequence[Sequence[float]]]
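To make "agnostic threshold metric" concrete: AUC_ROC does not require choosing a cutoff on the anomaly score; it measures the probability that a randomly chosen anomalous point receives a higher score than a randomly chosen normal point. The sketch below is a plain-Python illustration of that rank statistic (darts delegates the actual computation to scikit-learn; `auc_roc` here is an illustrative helper, not part of the darts API):

```python
def auc_roc(actual_anomalies, anomaly_scores):
    """Rank-based AUC_ROC for binary labels (1 = anomaly, 0 = normal)."""
    pairs = 0
    favourable = 0.0
    for label_a, score_a in zip(actual_anomalies, anomaly_scores):
        if label_a != 1:
            continue
        for label_n, score_n in zip(actual_anomalies, anomaly_scores):
            if label_n != 0:
                continue
            pairs += 1
            if score_a > score_n:      # anomaly ranked above normal point
                favourable += 1.0
            elif score_a == score_n:   # ties count half
                favourable += 0.5
    return favourable / pairs

labels = [0, 0, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.9]
print(auc_roc(labels, scores))  # → 0.6666666666666666
```

A score of 1.0 means every anomaly outranks every normal point; 0.5 is no better than chance. This is why the metric is threshold-agnostic: no detection cutoff appears anywhere in the computation.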

property is_probabilistic: bool

Whether the scorer expects a probabilistic prediction for its first input.

Return type

bool

abstract score_from_prediction(actual_series, pred_series)[source]
Return type

Any

show_anomalies_from_prediction(actual_series, pred_series, scorer_name=None, actual_anomalies=None, title=None, metric=None)[source]

Plot the results of the scorer.

Computes the anomaly score on the two series and plots the results.

The plot will be composed of the following:
  • the actual_series and the pred_series.

  • the anomaly score of the scorer.

  • the actual anomalies, if given.

It is possible to:
  • add a title to the figure with the parameter title

  • give a personalized name to the scorer with scorer_name

  • show the results of a metric for the anomaly score (AUC_ROC or AUC_PR), if the actual anomalies are provided.

Parameters
  • actual_series (TimeSeries) – The actual series to visualize anomalies from.

  • pred_series (TimeSeries) – The predicted series of actual_series.

  • actual_anomalies (Optional[TimeSeries]) – The ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • scorer_name (Optional[str]) – Name of the scorer.

  • title (Optional[str]) – Title of the figure

  • metric (Optional[str]) – Optionally, the scoring function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

class darts.ad.scorers.scorers.FittableAnomalyScorer(univariate_scorer, window, diff_fn='abs_diff')[source]

Bases: AnomalyScorer

Base class of scorers that do need training.

Attributes

is_probabilistic

Whether the scorer expects a probabilistic prediction for its first input.

Methods

check_if_fit_called()

Checks if the scorer has been fitted before calling its score() function.

eval_accuracy(actual_anomalies, series[, metric])

Computes the anomaly score of the given time series, and returns the score of an agnostic threshold metric.

eval_accuracy_from_prediction(...[, metric])

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

fit(series)

Fits the scorer on the given time series input.

fit_from_prediction(actual_series, pred_series)

Fits the scorer on the two (sequence of) series.

score(series)

Computes the anomaly score on the given series.

score_from_prediction(actual_series, pred_series)

Computes the anomaly score on the two (sequence of) series.

show_anomalies(series[, actual_anomalies, ...])

Plot the results of the scorer.

show_anomalies_from_prediction(...[, ...])

Plot the results of the scorer.

check_if_fit_called()[source]

Checks if the scorer has been fitted before calling its score() function.

eval_accuracy(actual_anomalies, series, metric='AUC_ROC')[source]

Computes the anomaly score of the given time series, and returns the score of an agnostic threshold metric.

Parameters
  • actual_anomalies (Union[TimeSeries, Sequence[TimeSeries]]) – The ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) series to detect anomalies from.

  • metric (str) – Optionally, metric function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

Returns

Score of an agnostic threshold metric for the computed anomaly score:
  • float, if series is a univariate series (dimension=1).

  • Sequence[float],

    • if series is a multivariate series (dimension>1), returns one value per dimension, or

    • if series is a sequence of univariate series, returns one value per series.

  • Sequence[Sequence[float]], if series is a sequence of multivariate series. The outer Sequence is over the sequence input and the inner Sequence is over the dimensions of each element in the sequence input.

Return type

Union[float, Sequence[float], Sequence[Sequence[float]]]

eval_accuracy_from_prediction(actual_anomalies, actual_series, pred_series, metric='AUC_ROC')

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

Parameters
  • actual_anomalies (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

  • metric (str) – Optionally, metric function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

Returns

Score of an agnostic threshold metric for the computed anomaly score:
  • float, if actual_series and pred_series are univariate series (dimension=1).

  • Sequence[float],

    • if actual_series and pred_series are multivariate series (dimension>1), returns one value per dimension, or

    • if actual_series and pred_series are sequences of univariate series, returns one value per series.

  • Sequence[Sequence[float]], if actual_series and pred_series are sequences of multivariate series. The outer Sequence is over the sequence input and the inner Sequence is over the dimensions of each element in the sequence input.

Return type

Union[float, Sequence[float], Sequence[Sequence[float]]]

fit(series)[source]

Fits the scorer on the given time series input.

If a sequence of series is given, the scorer will be fitted on the concatenation of the sequence.

The assumption is that the series used for training are generally anomaly-free.

Parameters

series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) series with no anomalies.

Returns

Fitted Scorer.

Return type

self

fit_from_prediction(actual_series, pred_series)[source]

Fits the scorer on the two (sequence of) series.

The function diff_fn, passed as a parameter to the scorer, transforms pred_series and actual_series into one series. By default, diff_fn computes the absolute difference (“abs_diff”). If pred_series and actual_series are sequences, diff_fn is applied to all pairwise elements of the sequences.

The scorer will then be fitted on this (sequence of) series. If a sequence of series is given, the scorer will be fitted on the concatenation of the sequence.

The scorer assumes that the (sequence of) actual_series is anomaly-free.

Parameters
  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

Returns

Fitted Scorer.

Return type

self
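The diff_fn mechanism described above can be illustrated with plain Python lists standing in for TimeSeries objects. This is not darts' internal code; `abs_diff` and `apply_diff_fn` are illustrative helpers showing how an "abs_diff" function collapses an actual/predicted pair into one residual series, applied pairwise when sequences are given:

```python
def abs_diff(actual, pred):
    """Element-wise absolute difference of two aligned series."""
    assert len(actual) == len(pred), "series must be aligned"
    return [abs(a - p) for a, p in zip(actual, pred)]

def apply_diff_fn(actual_series, pred_series, diff_fn=abs_diff):
    """Apply diff_fn to one pair of series, or pairwise over two sequences."""
    if isinstance(actual_series[0], list):  # sequence of series
        return [diff_fn(a, p) for a, p in zip(actual_series, pred_series)]
    return diff_fn(actual_series, pred_series)

print(apply_diff_fn([1.0, 2.0, 3.0], [1.5, 2.0, 1.0]))  # → [0.5, 0.0, 2.0]
```

The fittable scorer is then trained on the resulting residual series (or their concatenation, for a sequence), which is why the actual series are assumed anomaly-free: the residuals define what "normal" deviation looks like.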

property is_probabilistic: bool

Whether the scorer expects a probabilistic prediction for its first input.

Return type

bool

score(series)[source]

Computes the anomaly score on the given series.

If a sequence of series is given, the scorer will score each series independently and return an anomaly score for each series in the sequence.

Parameters

series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) series to detect anomalies from.

Returns

(Sequence of) anomaly score time series

Return type

Union[TimeSeries, Sequence[TimeSeries]]

score_from_prediction(actual_series, pred_series)[source]

Computes the anomaly score on the two (sequence of) series.

The function diff_fn, passed as a parameter to the scorer, transforms pred_series and actual_series into one “difference” series. By default, diff_fn computes the absolute difference (“abs_diff”). If actual_series and pred_series are sequences, diff_fn is applied to all pairwise elements of the sequences.

The scorer will then transform this series into an anomaly score. If a sequence of series is given, the scorer will score each series independently and return an anomaly score for each series in the sequence.

Parameters
  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

Returns

(Sequence of) anomaly score time series

Return type

Union[TimeSeries, Sequence[TimeSeries]]

show_anomalies(series, actual_anomalies=None, scorer_name=None, title=None, metric=None)[source]

Plot the results of the scorer.

Computes the score on the given series input and plots the results.

The plot will be composed of the following:
  • the series itself.

  • the anomaly score of the scorer.

  • the actual anomalies, if given.

It is possible to:
  • add a title to the figure with the parameter title

  • give a personalized name to the scorer with scorer_name

  • show the results of a metric for the anomaly score (AUC_ROC or AUC_PR), if the actual anomalies are provided.

Parameters
  • series (TimeSeries) – The series to visualize anomalies from.

  • actual_anomalies (Optional[TimeSeries]) – The ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • scorer_name (Optional[str]) – Name of the scorer.

  • title (Optional[str]) – Title of the figure

  • metric (Optional[str]) – Optionally, the scoring function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

show_anomalies_from_prediction(actual_series, pred_series, scorer_name=None, actual_anomalies=None, title=None, metric=None)

Plot the results of the scorer.

Computes the anomaly score on the two series and plots the results.

The plot will be composed of the following:
  • the actual_series and the pred_series.

  • the anomaly score of the scorer.

  • the actual anomalies, if given.

It is possible to:
  • add a title to the figure with the parameter title

  • give a personalized name to the scorer with scorer_name

  • show the results of a metric for the anomaly score (AUC_ROC or AUC_PR), if the actual anomalies are provided.

Parameters
  • actual_series (TimeSeries) – The actual series to visualize anomalies from.

  • pred_series (TimeSeries) – The predicted series of actual_series.

  • actual_anomalies (Optional[TimeSeries]) – The ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • scorer_name (Optional[str]) – Name of the scorer.

  • title (Optional[str]) – Title of the figure

  • metric (Optional[str]) – Optionally, the scoring function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

class darts.ad.scorers.scorers.NLLScorer(window)[source]

Bases: NonFittableAnomalyScorer

Parent class for all likelihood scorers.

Attributes

is_probabilistic

Whether the scorer expects a probabilistic prediction for its first input.

Methods

eval_accuracy_from_prediction(...[, metric])

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

score_from_prediction(actual_series, pred_series)

Computes the anomaly score on the two (sequence of) series.

show_anomalies_from_prediction(...[, ...])

Plot the results of the scorer.

eval_accuracy_from_prediction(actual_anomalies, actual_series, pred_series, metric='AUC_ROC')

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

Parameters
  • actual_anomalies (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

  • metric (str) – Optionally, metric function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

Returns

Score of an agnostic threshold metric for the computed anomaly score:
  • float, if actual_series and pred_series are univariate series (dimension=1).

  • Sequence[float],

    • if actual_series and pred_series are multivariate series (dimension>1), returns one value per dimension, or

    • if actual_series and pred_series are sequences of univariate series, returns one value per series.

  • Sequence[Sequence[float]], if actual_series and pred_series are sequences of multivariate series. The outer Sequence is over the sequence input and the inner Sequence is over the dimensions of each element in the sequence input.

Return type

Union[float, Sequence[float], Sequence[Sequence[float]]]

property is_probabilistic: bool

Whether the scorer expects a probabilistic prediction for its first input.

Return type

bool

score_from_prediction(actual_series, pred_series)

Computes the anomaly score on the two (sequence of) series.

If a pair of sequences is given, they must contain the same number of series. The scorer will score each pair of series independently and return an anomaly score for each pair.

Parameters
  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

Returns

(Sequence of) anomaly score time series

Return type

Union[TimeSeries, Sequence[TimeSeries]]
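The idea behind NLLScorer subclasses is that pred_series is probabilistic (many stochastic samples per time step), and the anomaly score at each step is the negative log-likelihood of the observed value under a distribution fitted to those samples. The sketch below assumes a Gaussian for simplicity; each concrete NLLScorer subclass assumes its own distribution, and `gaussian_nll` here is an illustrative function, not a darts API:

```python
import math

def gaussian_nll(samples, actual_value):
    """NLL of `actual_value` under a Gaussian fitted to prediction samples."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    # Negative log of the Gaussian density at the observed value.
    return 0.5 * math.log(2 * math.pi * var) + (actual_value - mu) ** 2 / (2 * var)

samples = [0.9, 1.0, 1.1, 1.0]  # stochastic prediction samples for one step
print(gaussian_nll(samples, 1.0))  # low score: the observed value is typical
print(gaussian_nll(samples, 5.0))  # high score: the observed value is anomalous
```

This is why is_probabilistic is True for these scorers: a deterministic prediction carries no distribution to evaluate the likelihood against. Note the NLL of a continuous density can be negative; only relative magnitudes matter for ranking anomalies.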

show_anomalies_from_prediction(actual_series, pred_series, scorer_name=None, actual_anomalies=None, title=None, metric=None)

Plot the results of the scorer.

Computes the anomaly score on the two series and plots the results.

The plot will be composed of the following:
  • the actual_series and the pred_series.

  • the anomaly score of the scorer.

  • the actual anomalies, if given.

It is possible to:
  • add a title to the figure with the parameter title

  • give a personalized name to the scorer with scorer_name

  • show the results of a metric for the anomaly score (AUC_ROC or AUC_PR), if the actual anomalies are provided.

Parameters
  • actual_series (TimeSeries) – The actual series to visualize anomalies from.

  • pred_series (TimeSeries) – The predicted series of actual_series.

  • actual_anomalies (Optional[TimeSeries]) – The ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • scorer_name (Optional[str]) – Name of the scorer.

  • title (Optional[str]) – Title of the figure

  • metric (Optional[str]) – Optionally, the scoring function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

class darts.ad.scorers.scorers.NonFittableAnomalyScorer(univariate_scorer, window)[source]

Bases: AnomalyScorer

Base class of anomaly scorers that do not need training.

Attributes

is_probabilistic

Whether the scorer expects a probabilistic prediction for its first input.

Methods

eval_accuracy_from_prediction(...[, metric])

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

score_from_prediction(actual_series, pred_series)

Computes the anomaly score on the two (sequence of) series.

show_anomalies_from_prediction(...[, ...])

Plot the results of the scorer.

eval_accuracy_from_prediction(actual_anomalies, actual_series, pred_series, metric='AUC_ROC')

Computes the anomaly score between actual_series and pred_series, and returns the score of an agnostic threshold metric.

Parameters
  • actual_anomalies (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

  • metric (str) – Optionally, metric function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”

Returns

Score of an agnostic threshold metric for the computed anomaly score:
  • float, if actual_series and pred_series are univariate series (dimension=1).

  • Sequence[float],

    • if actual_series and pred_series are multivariate series (dimension>1), returns one value per dimension, or

    • if actual_series and pred_series are sequences of univariate series, returns one value per series.

  • Sequence[Sequence[float]], if actual_series and pred_series are sequences of multivariate series. The outer Sequence is over the sequence input and the inner Sequence is over the dimensions of each element in the sequence input.

Return type

Union[float, Sequence[float], Sequence[Sequence[float]]]

property is_probabilistic: bool

Whether the scorer expects a probabilistic prediction for its first input.

Return type

bool

score_from_prediction(actual_series, pred_series)[source]

Computes the anomaly score on the two (sequence of) series.

If a pair of sequences is given, they must contain the same number of series. The scorer will score each pair of series independently and return an anomaly score for each pair.

Parameters
  • actual_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) actual series.

  • pred_series (Union[TimeSeries, Sequence[TimeSeries]]) – The (sequence of) predicted series.

Returns

(Sequence of) anomaly score time series

Return type

Union[TimeSeries, Sequence[TimeSeries]]

show_anomalies_from_prediction(actual_series, pred_series, scorer_name=None, actual_anomalies=None, title=None, metric=None)

Plot the results of the scorer.

Computes the anomaly score on the two series and plots the results.

The plot will be composed of the following:
  • the actual_series and the pred_series.

  • the anomaly score of the scorer.

  • the actual anomalies, if given.

It is possible to:
  • add a title to the figure with the parameter title

  • give a personalized name to the scorer with scorer_name

  • show the results of a metric for the anomaly score (AUC_ROC or AUC_PR), if the actual anomalies are provided.

Parameters
  • actual_series (TimeSeries) – The actual series to visualize anomalies from.

  • pred_series (TimeSeries) – The predicted series of actual_series.

  • actual_anomalies (Optional[TimeSeries]) – The ground truth of the anomalies (1 if it is an anomaly and 0 if not)

  • scorer_name (Optional[str]) – Name of the scorer.

  • title (Optional[str]) – Title of the figure

  • metric (Optional[str]) – Optionally, the scoring function to use. Must be one of “AUC_ROC” and “AUC_PR”. Default: “AUC_ROC”