AI Platform Prediction brings the power and flexibility of TensorFlow, scikit-learn, and XGBoost to the cloud. Use AI Platform Prediction to host your trained models so that you can send them prediction requests.
Getting started
- Introduction to AI Platform: An overview of AI Platform products.
- Prediction overview: An introduction to using AI Platform Prediction to host machine learning models and serve predictions.
- Development environment: Requirements for your local development environment.
- Online versus batch prediction: An overview of the differences between online and batch prediction.
- Getting started: training and prediction with TensorFlow Keras: Train a TensorFlow Keras model in AI Platform Training and deploy the model to AI Platform Prediction.
- Getting started: training and prediction with TensorFlow Estimator: Train a TensorFlow Estimator model in AI Platform Training and deploy the model to AI Platform Prediction.
- Getting started with scikit-learn and XGBoost: Deploy a scikit-learn or XGBoost model to serve predictions (see the sketch after this list).
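For a flavor of the scikit-learn path above, here is a minimal sketch of exporting a trained model as the model.joblib artifact that AI Platform Prediction expects for scikit-learn deployments and staging it in Cloud Storage. The bucket and object names are placeholders.

```python
# Minimal sketch: train a scikit-learn model, export it as model.joblib,
# and upload the artifact to Cloud Storage. Bucket and paths are placeholders.
import joblib
from google.cloud import storage
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a small model locally.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# AI Platform Prediction looks for a file named model.joblib (or model.pkl)
# in the Cloud Storage directory that the model version points to.
joblib.dump(model, "model.joblib")

# Stage the artifact so a model version can reference it as its deployment URI.
bucket = storage.Client().bucket("your-bucket")  # placeholder bucket name
bucket.blob("iris_model/model.joblib").upload_from_filename("model.joblib")
```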
Prediction workflow
- Exporting models for prediction: Write training code to export model artifacts that are ready for AI Platform Prediction.
- Exporting a SavedModel for prediction: Export a TensorFlow SavedModel so that it's compatible with AI Platform Prediction (see the sketch after this list).
- Deploying models: Deploy machine learning models by creating model and model version resources.
- Custom prediction routines: Customize how AI Platform Prediction processes prediction requests.
- Machine types for online prediction: Configure which types of virtual machines and GPUs AI Platform Prediction uses to handle requests.
- Getting online predictions: Send requests to your deployed machine learning model and receive predictions.
- Getting batch predictions: Perform prediction on a high volume of data instances using a TensorFlow model.
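As a companion to the SavedModel export guide above, here is a minimal sketch of saving a Keras model in the SavedModel format that AI Platform Prediction serves. The Cloud Storage path is a placeholder; saving to a directory path writes the SavedModel format, including a serving signature.

```python
# Minimal sketch: build a small Keras model and export it as a SavedModel
# directly to Cloud Storage. The gs:// path is a placeholder.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Saving to a directory (no .h5 extension) writes the SavedModel format,
# which includes the serving signature used to handle prediction requests.
model.save("gs://your-bucket/keras_model/")  # placeholder bucket
```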
Custom containers for online prediction
- Getting started: Serving PyTorch predictions with a custom container: Use a custom container to deploy a PyTorch machine learning model that serves online predictions.
- Custom container requirements: Learn the requirements for creating a custom Docker container image to use with AI Platform Prediction (a minimal server sketch follows this list).
- Using a custom container: Configure your model version to use a custom container.
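To illustrate the requirements page above, here is a minimal sketch of the HTTP server a custom container might run. It assumes the container receives the AIP_HTTP_PORT, AIP_HEALTH_ROUTE, and AIP_PREDICT_ROUTE environment variables described in the requirements documentation; the model logic is a placeholder.

```python
# Minimal sketch of a prediction server inside a custom container.
# Assumes AI Platform Prediction injects the AIP_* environment variables;
# the "model" here is a placeholder computation.
import os
from flask import Flask, jsonify, request

app = Flask(__name__)
HEALTH_ROUTE = os.environ.get("AIP_HEALTH_ROUTE", "/health")
PREDICT_ROUTE = os.environ.get("AIP_PREDICT_ROUTE", "/predict")

@app.route(HEALTH_ROUTE)
def health():
    # Return 200 once the model is loaded and ready to serve.
    return "ok", 200

@app.route(PREDICT_ROUTE, methods=["POST"])
def predict():
    # Requests arrive as {"instances": [...]}; run the real model here.
    instances = request.get_json()["instances"]
    predictions = [sum(instance) for instance in instances]  # placeholder
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("AIP_HTTP_PORT", 8080)))
```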
Integrating with tools and services
- Using the Python client library: Use the Google API Client Library for Python to send requests to the AI Platform Training and Prediction API (see the sketch after this list).
- Working with Cloud Storage: Set up Cloud Storage to work with AI Platform Prediction.
- Using the What-If Tool: Inspect your deployed models with an interactive dashboard.
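As a companion to the client library guide above, here is a minimal sketch of an online prediction request made with the Google API Client Library for Python. The project, model, and instance values are placeholders.

```python
# Minimal sketch: send an online prediction request to a deployed model
# version through the AI Platform Training and Prediction API.
import googleapiclient.discovery

service = googleapiclient.discovery.build("ml", "v1")
# Placeholders: substitute your project ID and model name. Omitting a
# version targets the model's default version.
name = "projects/{}/models/{}".format("your-project", "your_model")

response = service.projects().predict(
    name=name,
    body={"instances": [[1.0, 2.0, 3.0, 4.0]]},  # placeholder instance
).execute()

if "error" in response:
    raise RuntimeError(response["error"])
print(response["predictions"])
```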
Monitoring and security
- Monitoring models: Monitor the performance and behavior of deployed models.
- Viewing audit logs: Monitor admin activity and data access with Cloud Audit Logs.
- Access control: An overview of the permissions required to perform various actions in the AI Platform Training and Prediction API, and the IAM roles that provide those permissions.
- Using a custom service account: Configure a model version to use a custom service account to serve predictions.
- Using VPC Service Controls: Configure VPC Service Controls to mitigate the risk of data exfiltration.
AI Platform Prediction resources
- Projects, models, versions, and jobs: An overview of the resources that you create and interact with in AI Platform.
- Managing models and jobs: Manage the AI Platform resources that you have created (see the sketch after this list).
- Labeling resources: Organize your jobs, models, and model versions with custom labels.
- Sharing models: Share access to your AI Platform Prediction resources with other users, groups, or service accounts.
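As a companion to the resource management guide above, here is a minimal sketch that lists a project's models and one model's versions through the same client library. The project and model names are placeholders.

```python
# Minimal sketch: enumerate model and version resources in a project.
# Project and model names are placeholders.
import googleapiclient.discovery

service = googleapiclient.discovery.build("ml", "v1")
project = "projects/your-project"  # placeholder project ID

# List every model resource in the project.
models = service.projects().models().list(parent=project).execute()
for model in models.get("models", []):
    print(model["name"])

# List the version resources of one model.
versions = service.projects().models().versions().list(
    parent=project + "/models/your_model"  # placeholder model name
).execute()
for version in versions.get("versions", []):
    print(version["name"], version.get("state"))
```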
Tutorials
- Creating a custom prediction routine with Keras: Deploy a Keras model together with preprocessing and postprocessing code for handling requests.
- Getting online predictions with XGBoost: Deploy an XGBoost model and request predictions.
- Getting online predictions with scikit-learn: Deploy a scikit-learn model that uses a pipeline with many transformers.
- Predictions with scikit-learn pipelines: Deploy a scikit-learn model that uses a pipeline with a preprocessing step and a classification step.
- Using a scikit-learn pipeline with custom transformers: Deploy a scikit-learn pipeline with custom preprocessing.
- Creating a custom prediction routine with scikit-learn: Deploy a scikit-learn model together with preprocessing and postprocessing code for handling requests (the Predictor interface is sketched after this list).
- Using scikit-learn on Kaggle and AI Platform Prediction: Train a model in Kaggle and deploy it to AI Platform Prediction.
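Several of the tutorials above build custom prediction routines. Here is a minimal sketch of the Predictor interface they implement; the preprocessing logic and artifact filenames are placeholders.

```python
# Minimal sketch of a custom prediction routine's Predictor class.
# AI Platform Prediction instantiates it via from_path and calls predict
# for each request. Artifact names here are placeholders.
import os
import joblib

class MyPredictor(object):
    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor

    def predict(self, instances, **kwargs):
        # Apply preprocessing, then delegate to the trained model.
        inputs = self._preprocessor.transform(instances)
        return self._model.predict(inputs).tolist()

    @classmethod
    def from_path(cls, model_dir):
        # model_dir is the directory containing your deployed artifacts.
        model = joblib.load(os.path.join(model_dir, "model.joblib"))
        preprocessor = joblib.load(os.path.join(model_dir, "preprocessor.pkl"))
        return cls(model, preprocessor)
```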
Runtime versions
AI Explanations
- AI Explanations Overview: An introduction to using AI Explanations with AI Platform Prediction.
- Getting started with AI Explanations: Deploy TensorFlow models and make explanation requests.
- Limitations of AI Explanations: Considerations to keep in mind when using AI Explanations.
- Using feature attributions: Configure your machine learning model for AI Explanations and request explanations.
- Saving TensorFlow models: Save TensorFlow 2 and TensorFlow 1.15 models correctly for AI Explanations.
- Preparing metadata: Create the explanation metadata file required for AI Explanations, using the Explainable AI SDK.
- Visualizing explanations: Visualize explanations with AI Explanations.
- Understanding inputs and outputs for explanation: Find input and output tensors to create the explanation metadata file manually before deploying an existing TensorFlow 1.15 model to AI Explanations (see the sketch after this list).
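For the manual path described in the last item above, here is a minimal sketch of writing an explanation_metadata.json file by hand for a TensorFlow 1.15 model. The tensor names and baseline values are placeholders; the Explainable AI SDK can generate this file for you.

```python
# Minimal sketch: write the explanation metadata file that accompanies a
# SavedModel in Cloud Storage. Tensor names and baselines are placeholders
# that you would replace with the tensors found in your own graph.
import json

metadata = {
    "framework": "tensorflow",
    "inputs": {
        "features": {
            "input_tensor_name": "dense_input:0",        # placeholder
            "input_baselines": [[0.0, 0.0, 0.0, 0.0]],   # placeholder baseline
        }
    },
    "outputs": {
        "probabilities": {
            "output_tensor_name": "dense_2/Softmax:0",   # placeholder
        }
    },
}

# The file must sit alongside the SavedModel in its Cloud Storage directory.
with open("explanation_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```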
Continuous evaluation
- Continuous evaluation overview: An introduction to continuous evaluation.
- Before you begin continuous evaluation: Prepare your machine learning model to be compatible with continuous evaluation.
- Creating an evaluation job: Configure how you want your model version to be evaluated.
- Viewing evaluation metrics: View the metrics that continuous evaluation calculates for your model.
- Updating, pausing, or deleting an evaluation job: Update an existing continuous evaluation job.