- Pre-built containers for prediction: Lists the pre-built containers for prediction and describes how to use them with model artifacts that you created using Vertex AI's custom training functionality, or with model artifacts that you created outside of Vertex AI.
- Custom container requirements for prediction: Describes what you need to consider when designing a container image to use with Vertex AI.
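A custom serving container must run an HTTP server that answers the health and prediction routes Vertex AI passes in through environment variables. The following is a minimal sketch only; the `AIP_*` variable names follow Vertex AI's custom-container contract, while the fallback route and port values and the echo "model" are placeholders for illustration.

```python
# Minimal sketch of an HTTP server meeting the custom-container contract:
# answer the health route (GET) and prediction route (POST) that Vertex AI
# supplies via AIP_* environment variables. Fallback values are illustrative.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

HEALTH_ROUTE = os.environ.get("AIP_HEALTH_ROUTE", "/health")    # fallback is a placeholder
PREDICT_ROUTE = os.environ.get("AIP_PREDICT_ROUTE", "/predict")  # fallback is a placeholder
PORT = int(os.environ.get("AIP_HTTP_PORT", "8080"))

def handle_predict(body: dict) -> dict:
    # Placeholder model logic: echo each instance back as a "prediction".
    return {"predictions": [{"echo": inst} for inst in body.get("instances", [])]}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == HEALTH_ROUTE:
            self._reply(200, {"status": "ok"})
        else:
            self._reply(404, {"error": "not found"})

    def do_POST(self):
        if self.path == PREDICT_ROUTE:
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length) or b"{}")
            self._reply(200, handle_predict(body))
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, status: int, payload: dict):
        data = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

# To serve inside the container:
# HTTPServer(("", PORT), Handler).serve_forever()
```

The server must listen on the port Vertex AI provides; hard-coding a different port will cause health checks to fail.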
- Using a custom container for prediction: How to create a Model that uses a custom container.
- Deploying a model using the Cloud console: Describes how to use the Google Cloud console to deploy a model to an endpoint.
- Deploying a model using the Vertex AI API: How to use the Vertex AI API to deploy a model to an endpoint.
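Deploying through the API ultimately means calling the `endpoints.deployModel` REST method with a JSON body. As a hedged sketch (not the official client library), the helper below assembles such a body; field names follow the Vertex AI REST reference, while the model name, machine type, and replica counts are placeholders you would replace with your own values.

```python
# Illustrative sketch: assemble the JSON body for the REST
# endpoints.deployModel call. In trafficSplit, the key "0" refers to the
# model being deployed in this request.
def build_deploy_model_body(model_name: str,
                            machine_type: str = "n1-standard-4",
                            min_replicas: int = 1,
                            max_replicas: int = 1,
                            traffic_pct: int = 100) -> dict:
    return {
        "deployedModel": {
            "model": model_name,  # full resource name of an uploaded Model
            "dedicatedResources": {
                "machineSpec": {"machineType": machine_type},
                "minReplicaCount": min_replicas,
                "maxReplicaCount": max_replicas,
            },
        },
        "trafficSplit": {"0": traffic_pct},
    }

body = build_deploy_model_body(
    "projects/my-project/locations/us-central1/models/123")  # placeholder name
```

You would POST this body to the endpoint's `:deployModel` URL, or pass the equivalent arguments to the `google-cloud-aiplatform` SDK.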
- Online prediction logging: How to enable and disable online prediction logging.
- Configuring compute resources for prediction: How to customize the type of virtual machine and GPU that Vertex AI uses for prediction nodes.
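The machine type and any attached accelerators are specified together as a `machineSpec` inside `dedicatedResources` when you deploy. A hypothetical helper is sketched below; the field names follow the Vertex AI REST reference, and the accelerator type string follows Vertex AI's enum naming (for example, `NVIDIA_TESLA_T4`), but the specific machine type and counts are placeholder choices.

```python
# Hypothetical helper: a machineSpec with an attached GPU, as used inside
# dedicatedResources when deploying a model. Values shown are placeholders.
def gpu_machine_spec(machine_type: str = "n1-standard-8",
                     accelerator_type: str = "NVIDIA_TESLA_T4",
                     accelerator_count: int = 1) -> dict:
    return {
        "machineType": machine_type,
        "acceleratorType": accelerator_type,
        "acceleratorCount": accelerator_count,
    }
```

Omitting the accelerator fields gives a CPU-only machine spec.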
- Getting batch predictions: How to get batch predictions.
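A batch prediction job reads instances from Cloud Storage and writes results back to Cloud Storage. As a non-authoritative sketch, the helper below builds the JSON body for creating a `BatchPredictionJob` via the REST API; field names follow the Vertex AI REST reference, while the display name, model name, and URIs are placeholders.

```python
# Illustrative sketch: assemble the JSON body for creating a
# BatchPredictionJob over JSONL instances in Cloud Storage.
def build_batch_prediction_body(model_name: str,
                                input_uris: list,
                                output_uri_prefix: str) -> dict:
    return {
        "displayName": "example-batch-job",  # placeholder display name
        "model": model_name,                 # full resource name of the Model
        "inputConfig": {
            "instancesFormat": "jsonl",
            "gcsSource": {"uris": input_uris},
        },
        "outputConfig": {
            "predictionsFormat": "jsonl",
            "gcsDestination": {"outputUriPrefix": output_uri_prefix},
        },
    }
```

Other instance formats (such as CSV) are also supported; `jsonl` is used here for illustration.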
- Getting online predictions from AutoML models: How to get online predictions for AutoML models.
- Getting online predictions with custom-trained models: How to get online predictions for custom-trained models.
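An online prediction request sends a JSON body with an `"instances"` array (and optional `"parameters"`) to the endpoint's `:predict` method. The small sketch below builds such a body; the request shape follows the Vertex AI REST reference, while the instance contents are placeholders that must match whatever the deployed model expects.

```python
import json

# Illustrative sketch: build the request body for the endpoints.predict REST
# method. Each instance's shape must match the deployed model's input format.
def build_predict_body(instances: list, parameters=None) -> str:
    body = {"instances": instances}
    if parameters is not None:
        body["parameters"] = parameters
    return json.dumps(body)

# Example with a placeholder instance:
payload = build_predict_body([{"values": [1.0, 2.0, 3.0]}])
```

The response contains a `"predictions"` array with one entry per instance.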
- Interpreting prediction results from AutoML models: How to interpret prediction results from AutoML models.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.