About Spanner Vertex AI integration


This page provides an overview of Spanner Vertex AI integration.

Spanner Vertex AI integration helps you access classification and regression ML models hosted on Vertex AI through the Google Standard SQL interface. This lets you seamlessly combine ML prediction serving with regular Cloud Spanner data access operations performed through DQL and DML queries.

Spanner Vertex AI integration shares its SQL syntax with BigQuery ML, although only a subset of the BigQuery ML syntax is supported.
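
For example, a prediction can be requested inline from a regular SQL query. The following is a minimal sketch that assumes a hypothetical model named FraudDetectionModel that is already registered in the database (see the CREATE MODEL sketch later on this page) and a hypothetical Transactions table whose columns match the model's inputs:

    -- Score transactions inline: the ML.PREDICT call sends the selected
    -- rows to the registered model's Vertex AI endpoint and returns the
    -- model output alongside the input columns.
    SELECT TransactionId, Amount, is_fraud
    FROM ML.PREDICT(
      MODEL FraudDetectionModel,
      (SELECT TransactionId, Amount, MerchantCategory FROM Transactions));

In this sketch, the result combines the selected transaction columns with the model's output column (is_fraud, as defined by the model's OUTPUT clause), so predictions can be filtered, joined, and aggregated like any other query result.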

Benefits of Spanner Vertex AI integration

Generating ML predictions with Spanner Vertex AI integration provides several benefits over accessing Cloud Spanner data and calling the Vertex AI prediction endpoint separately:

  • Performance:
    • Better latency: Because Spanner Vertex AI integration communicates with the Vertex AI service directly, it eliminates the extra round trips between the compute node running the Cloud Spanner client and the Vertex AI service.
    • Better throughput/parallelism: Spanner Vertex AI integration runs on top of Cloud Spanner's distributed query processing infrastructure, which supports highly parallelizable query execution.
  • User experience:
    • The ability to use a single, coherent, and familiar SQL interface for both data transformation and ML serving at Cloud Spanner's level of scale lowers the barrier to entry for ML and makes for a much smoother user experience.
  • Costs:
    • Spanner Vertex AI integration uses Cloud Spanner compute capacity to merge the results of ML computations and SQL query execution, which eliminates the need to provision additional compute resources (for example, in Compute Engine or Google Kubernetes Engine).

How does Spanner Vertex AI integration work?

Spanner Vertex AI integration doesn't host ML models; it relies on the Vertex AI service infrastructure instead. A model must already be trained and deployed to Vertex AI before you can use it with Spanner Vertex AI integration.

Spanner Vertex AI integration also doesn't provide any special ML training functionality. To train models on data stored in Cloud Spanner, you can use either of the following:

After a model is deployed to the Vertex AI service, a database owner can register it using the CREATE MODEL DDL statement. The model can then be referenced from the ML.PREDICT function to produce predictions.
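
As a sketch, registering a deployed model might look like the following DDL. The model name, the INPUT and OUTPUT column definitions, and the PROJECT_ID, LOCATION, and ENDPOINT_ID placeholders are all hypothetical and must match the schema and endpoint of the model as deployed in Vertex AI:

    -- Register a classification model that is already deployed to a
    -- Vertex AI endpoint so that it can be referenced from ML.PREDICT.
    CREATE MODEL FraudDetectionModel
    INPUT (Amount FLOAT64, MerchantCategory STRING(MAX))
    OUTPUT (is_fraud BOOL)
    REMOTE OPTIONS (
      endpoint = '//aiplatform.googleapis.com/projects/PROJECT_ID/locations/LOCATION/endpoints/ENDPOINT_ID'
    );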

See Generate ML predictions using SQL for a tutorial on using Spanner Vertex AI integration.

Predictions generated by the ML.PREDICT function result in calls to the Vertex AI APIs and are charged according to Vertex AI pricing.