Class ARIMAPlus (1.7.0)

ARIMAPlus(
    *,
    horizon: int = 1000,
    auto_arima: bool = True,
    auto_arima_max_order: typing.Optional[int] = None,
    auto_arima_min_order: typing.Optional[int] = None,
    data_frequency: str = "auto_frequency",
    include_drift: bool = False,
    holiday_region: typing.Optional[str] = None,
    clean_spikes_and_dips: bool = True,
    adjust_step_changes: bool = True,
    time_series_length_fraction: typing.Optional[float] = None,
    min_time_series_length: typing.Optional[int] = None,
    max_time_series_length: typing.Optional[int] = None,
    trend_smoothing_window_size: typing.Optional[int] = None,
    decompose_time_series: bool = True
)

Time Series ARIMA Plus model.

Parameters

Name Description
horizon int, default 1,000

The number of time points to forecast. Defaults to 1,000; the maximum value is 10,000.

auto_arima bool, default True

Determines whether the training process uses auto.ARIMA or not. If True, training automatically finds the best non-seasonal order (that is, the p, d, q tuple) and decides whether or not to include a linear drift term when d is 1.

auto_arima_max_order int or None, default None

The maximum value for the sum of non-seasonal p and q.

auto_arima_min_order int or None, default None

The minimum value for the sum of non-seasonal p and q.

data_frequency str, default "auto_frequency"

The data frequency of the input time series. Possible values are "auto_frequency", "per_minute", "hourly", "daily", "weekly", "monthly", "quarterly", and "yearly".

include_drift bool, default False

Determines whether the model should include a linear drift term or not. The drift term is applicable when non-seasonal d is 1.

holiday_region str or None, default None

The geographical region whose holiday effect is applied in modeling. By default, holiday effect modeling isn't used. For possible values, see https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-create-time-series#holiday_region.

clean_spikes_and_dips bool, default True

Determines whether or not to perform automatic spike and dip detection and cleanup in the model training pipeline. Detected spikes and dips are replaced with locally interpolated linear values.

adjust_step_changes bool, default True

Determines whether or not to perform automatic step change detection and adjustment in the model training pipeline.

time_series_length_fraction float or None, default None

The fraction of the interpolated length of the time series that's used to model the time series trend component. All of the time points of the time series are used to model the non-trend component.

min_time_series_length int or None, default None

The minimum number of time points that are used in modeling the trend component of the time series.

max_time_series_length int or None, default None

The maximum number of time points in a time series that can be used in modeling the trend component of the time series.

trend_smoothing_window_size int or None, default None

The smoothing window size for the trend component.

decompose_time_series bool, default True

Determines whether the separate components of both the history and forecast parts of the time series (such as holiday effect and seasonal components) are saved in the model.
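To make the parameters concrete, here is a minimal construction sketch. The horizon, data frequency, and holiday region shown are illustrative choices, not defaults:

```python
from bigframes.ml.forecasting import ARIMAPlus

# Illustrative configuration: forecast 30 future points of daily data,
# model US holiday effects, and let auto.ARIMA choose the (p, d, q) order.
model = ARIMAPlus(
    horizon=30,
    data_frequency="daily",
    holiday_region="US",
    auto_arima=True,
)
```

The model performs no work at construction time; training happens when fit is called.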

Properties

coef_

Inspect the coefficients of the model.

Note:

Output matches that of the ML.ARIMA_COEFFICIENTS function. See
https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-arima-coefficients
for the outputs relevant to this model type.

Returns
Type Description
bigframes.dataframe.DataFrame A DataFrame with the coefficients for the model.

Methods

__repr__

__repr__()

Print the estimator's constructor with all non-default parameter values.

detect_anomalies

detect_anomalies(
    X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
    *,
    anomaly_prob_threshold: float = 0.95
) -> bigframes.dataframe.DataFrame

Detect the anomaly data points of the input.

Parameters
Name Description
X bigframes.dataframe.DataFrame or bigframes.series.Series

Series or DataFrame in which to detect anomalies.

anomaly_prob_threshold float, default 0.95

Identifies the custom threshold to use for anomaly detection. The value must be in the range [0, 1), with a default value of 0.95.

Returns
Type Description
bigframes.dataframe.DataFrame A DataFrame with the anomaly detection results.
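A hedged sketch of anomaly detection; the project, table, and column names are hypothetical, and the model is assumed to have been fitted first (see fit below):

```python
import bigframes.pandas as bpd
from bigframes.ml.forecasting import ARIMAPlus

# Hypothetical table with a timestamp column "date" and a metric column "sales".
df = bpd.read_gbq("my-project.my_dataset.daily_sales")

model = ARIMAPlus()
model.fit(df[["date"]], df[["sales"]])

# Flag points whose anomaly probability exceeds 0.9 instead of the default 0.95.
anomalies = model.detect_anomalies(df, anomaly_prob_threshold=0.9)
```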

fit

fit(
    X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
    y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
) -> bigframes.ml.base._T

Fit the time series model, where X contains the time point (timestamp) column and y contains the corresponding numeric values.
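As a sketch (the project, table, and column names are hypothetical), fitting typically looks like:

```python
import bigframes.pandas as bpd
from bigframes.ml.forecasting import ARIMAPlus

# Hypothetical table with a timestamp column "date" and a metric column "sales".
df = bpd.read_gbq("my-project.my_dataset.daily_sales")

model = ARIMAPlus()
model.fit(df[["date"]], df[["sales"]])  # X: time points, y: numeric values
```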

get_params

get_params(deep: bool = True) -> typing.Dict[str, typing.Any]

Get parameters for this estimator.

Parameter
Name Description
deep bool, default True

If True, returns the parameters for this estimator and contained subobjects that are estimators.

Returns
Type Description
Dictionary A dictionary of parameter names mapped to their values.

predict

predict(
    X=None, *, horizon: int = 3, confidence_level: float = 0.95
) -> bigframes.dataframe.DataFrame

Forecast time series at future horizon.

Parameters
Name Description
X default None

Ignored. Present for compatibility with other APIs.

horizon int, default 3

The number of time points to forecast.

confidence_level float, default 0.95

A float value that specifies the percentage of the future values that fall in the prediction interval. The valid input range is [0.0, 1.0).

Returns
Type Description
bigframes.dataframe.DataFrame The forecast DataFrame, which contains two columns: "forecast_timestamp" and "forecast_value".
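A forecasting sketch under the same hypothetical table and column names used throughout (a fitted model is required before calling predict):

```python
import bigframes.pandas as bpd
from bigframes.ml.forecasting import ARIMAPlus

df = bpd.read_gbq("my-project.my_dataset.daily_sales")  # hypothetical table
model = ARIMAPlus()
model.fit(df[["date"]], df[["sales"]])

# Forecast 30 future points with an 80% prediction interval.
forecast = model.predict(horizon=30, confidence_level=0.8)
forecast[["forecast_timestamp", "forecast_value"]]
```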

register

register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T

Register the model to Vertex AI.

After registering, you can manage the model in the Google Cloud console (https://console.cloud.google.com/vertex-ai/models). See https://cloud.google.com/vertex-ai/docs/model-registry/introduction for more options.

Parameter
Name Description
vertex_ai_model_id Optional[str], default None

Optional string ID to use as the model ID in Vertex AI. If not set, defaults to 'bigframes_{bq_model_id}'. The Vertex AI model ID is truncated to 63 characters due to its length limit.

score

score(
    X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
    y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
) -> bigframes.dataframe.DataFrame

Calculate evaluation metrics of the model.

Parameters
Name Description
X bigframes.dataframe.DataFrame or bigframes.series.Series

A BigQuery DataFrame that contains only one column, the evaluation timestamps. The timestamps must be within the horizon of the model, which by default is 1,000 data points.

y bigframes.dataframe.DataFrame or bigframes.series.Series

A BigQuery DataFrame that contains only one column, the numeric evaluation values.

Returns
Type Description
bigframes.dataframe.DataFrame A DataFrame with the evaluation results.
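An evaluation sketch; the training and test tables and their column names are hypothetical:

```python
import bigframes.pandas as bpd
from bigframes.ml.forecasting import ARIMAPlus

train_df = bpd.read_gbq("my-project.my_dataset.daily_sales")       # hypothetical
test_df = bpd.read_gbq("my-project.my_dataset.daily_sales_test")   # hypothetical

model = ARIMAPlus()
model.fit(train_df[["date"]], train_df[["sales"]])

# Evaluate against held-out timestamps and their actual values.
metrics = model.score(test_df[["date"]], test_df[["sales"]])
```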

summary

summary(show_all_candidate_models: bool = False) -> bigframes.dataframe.DataFrame

Summary of the evaluation metrics of the time series model.

Parameter
Name Description
show_all_candidate_models bool, default False

Whether to show evaluation metrics (or an error message) for all candidate models, or only for the best model with the lowest AIC.

Returns
Type Description
bigframes.dataframe.DataFrame A DataFrame with the evaluation results.
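A short sketch of inspecting the candidate models tried by auto.ARIMA, assuming a model fitted as shown under fit:

```python
import bigframes.pandas as bpd
from bigframes.ml.forecasting import ARIMAPlus

df = bpd.read_gbq("my-project.my_dataset.daily_sales")  # hypothetical table
model = ARIMAPlus()
model.fit(df[["date"]], df[["sales"]])

# Compare all candidate models, not just the best one by AIC.
model.summary(show_all_candidate_models=True)
```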

to_gbq

to_gbq(
    model_name: str, replace: bool = False
) -> bigframes.ml.forecasting.ARIMAPlus

Save the model to BigQuery.

Parameters
Name Description
model_name str

The name of the model.

replace bool, default False

Determines whether to replace the model if it already exists. Defaults to False.

Returns
Type Description
ARIMAPlus Saved model.
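A sketch of persisting a fitted model to BigQuery; the dataset, model name, and source table are hypothetical:

```python
import bigframes.pandas as bpd
from bigframes.ml.forecasting import ARIMAPlus

df = bpd.read_gbq("my-project.my_dataset.daily_sales")  # hypothetical table
model = ARIMAPlus()
model.fit(df[["date"]], df[["sales"]])

# Save as a BigQuery ML model, overwriting any existing model of that name.
saved = model.to_gbq("my_dataset.my_arima_model", replace=True)
```

The returned object wraps the saved BigQuery model, so it can be used for further predict or score calls.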