About anomaly detection

This page applies to Apigee and Apigee hybrid.

What is an anomaly?

An anomaly is an unusual or unexpected API data pattern. For example, take a look at the graph of API error rate below:

Graph of an error-rate anomaly.

As you can see, the error rate suddenly jumps up at around 7 AM. Compared to the data leading up to that time, this increase is unusual enough to be classified as an anomaly.

However, not all variations in API data represent anomalies: most are simply random fluctuations. For example, you can see some relatively minor variations in error rate leading up to the anomaly, but these are not significant enough to be classified as true anomalies.

Anomaly versus random data variation.

Advanced API Ops (AAPI Ops) continually monitors API data and performs statistical analysis to distinguish true anomalies from random fluctuations in the data.

Without anomaly detection, you need to choose a threshold for detecting each anomaly yourself. (A threshold is the value that a quantity, such as error rate, must reach to trigger an anomaly.) You also need to keep the threshold values up to date, based on the latest data. By contrast, AAPI Ops chooses the best anomaly thresholds for you, based on recent data patterns.
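
Apigee doesn't publish the model it uses, but the idea of deriving a threshold from recent data, rather than hard-coding one, can be illustrated with a minimal Python sketch (all values here are made up):

    # Illustrative sketch only; not Apigee's actual model.
    import statistics

    # Error rates observed over the last few monitoring intervals (made-up data).
    recent_error_rates = [0.010, 0.012, 0.009, 0.011, 0.013, 0.010]

    mean = statistics.mean(recent_error_rates)
    stdev = statistics.pstdev(recent_error_rates)

    # A manually chosen threshold has to be revisited as traffic patterns change.
    manual_threshold = 0.05

    # A data-derived threshold adapts to whatever "normal" currently looks like.
    derived_threshold = mean + 3 * stdev

    print(f"manual={manual_threshold:.4f} derived={derived_threshold:.4f}")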

When AAPI Ops detects an anomaly like the one shown above, it displays the anomaly details in the Anomaly Events dashboard. At this point, you can investigate the anomaly in the API Monitoring dashboards and take appropriate action if necessary. You can also create an alert to notify you if similar events occur in the future.

A detected anomaly includes the following information:

  • The metric that caused the anomaly, such as proxy latency or an HTTP error code.
  • The severity of the anomaly. The severity can be slight, moderate, or severe, based on the model's confidence level in the detection: a low confidence level indicates a slight anomaly, while a high confidence level indicates a severe one (see the sketch after this list).
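
The documentation doesn't state which confidence levels correspond to each severity, so the cutoffs in the following sketch are hypothetical; it only illustrates the relationship between confidence and severity described above.

    # Hypothetical cutoffs; Apigee does not document the exact confidence
    # levels behind each severity.
    def severity_for_confidence(confidence: float) -> str:
        """Map a model confidence level (0.0 to 1.0) to an anomaly severity."""
        if confidence >= 0.99:
            return "severe"
        if confidence >= 0.95:
            return "moderate"
        return "slight"

    print(severity_for_confidence(0.97))  # moderate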

Anomaly types

Apigee automatically detects the following types of anomalies:

  • Increase in HTTP 503 errors at the organization, environment, and region level
  • Increase in HTTP 504 errors at the organization, environment, and region level
  • Increase in all HTTP 4xx or 5xx errors at the organization, environment, and region level
  • Increase in the total response latency for the 90th percentile (p90) at the organization, environment, and region level

How anomaly detection works

Anomaly detection involves the following stages:

Train models

Anomaly detection works by training a model of the behavior of your API proxies from historical time-series data. There is no action required on your part to train the model. Apigee automatically creates and trains models for you from the previous six hours of API data. Therefore, Apigee requires a minimum of six hours of data on an API proxy to train the model before it can log an anomaly.
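
Apigee doesn't expose the trained model, but the training stage can be pictured as summarizing the previous six hours of metric values into a baseline. The sketch below makes that concrete; the window size matches the six-hour requirement above, while the per-minute granularity and the statistics used are assumptions for illustration only.

    # Minimal sketch of the "train models" stage; not Apigee's actual model.
    import random
    import statistics

    MINUTES_IN_SIX_HOURS = 6 * 60

    # Stand-in for the historical per-minute error rates Apigee would collect.
    history = [random.gauss(0.01, 0.002) for _ in range(MINUTES_IN_SIX_HOURS)]

    # "Training" here just means capturing what normal currently looks like.
    baseline = {
        "mean": statistics.mean(history),
        "stdev": statistics.pstdev(history),
    }
    print(baseline)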

The goal of training is to improve the accuracy of the model, which can then be tested on historical data. The simplest way to test a model's accuracy is to calculate its error rate—the sum of false positives and false negatives, divided by the total number of predicted events.
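
As a worked example of that definition (the counts are made up): a model that produced 3 false positives and 2 false negatives over 500 predicted events has an error rate of 0.01.

    # Error rate = (false positives + false negatives) / total predicted events.
    false_positives = 3
    false_negatives = 2
    total_predicted_events = 500

    error_rate = (false_positives + false_negatives) / total_predicted_events
    print(error_rate)  # 0.01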

Log anomaly events

At runtime, Apigee anomaly detection compares the current behavior of your API proxies with the behavior predicted by the model. Anomaly detection can then determine, with a specific confidence level, when an operational metric exceeds the predicted value, for example, when the rate of 5xx errors rises above the rate the model predicts.
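
Continuing the training sketch above, the runtime check amounts to comparing the current value of a metric against what the baseline predicts. The test below uses a simple standard-deviation band as a stand-in for Apigee's confidence calculation, which is not published:

    # Minimal sketch of the "log anomaly events" stage; illustration only.
    def is_anomalous(current_value: float, mean: float, stdev: float,
                     num_stdevs: float = 3.0) -> bool:
        """Flag values that exceed the predicted baseline by more than num_stdevs."""
        return current_value > mean + num_stdevs * stdev

    # A 5xx error rate of 8% against a baseline of ~1% would be flagged.
    print(is_anomalous(current_value=0.08, mean=0.01, stdev=0.002))  # True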

When Apigee detects an anomaly, it automatically logs the event in the Anomaly Events dashboard. The list of events displayed in the dashboard includes all detected anomalies, as well as triggered alerts.