Use AI Platform to train your machine learning models at scale, to host your trained model in the cloud, and to use your model to make predictions about new data.
A brief description of machine learning
Machine learning (ML) is a subfield of artificial intelligence (AI). The goal of ML is to make computers learn from the data that you give them. Instead of writing code that describes the action the computer should take, your code provides an algorithm that adapts based on examples of intended behavior. The resulting program, consisting of the algorithm and associated learned parameters, is called a trained model.
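The "algorithm plus learned parameters" idea can be made concrete with a toy example (plain Python, not AI Platform): the algorithm is least-squares fitting of y = w * x, and the single weight w is the learned parameter.

```python
# A trained "model" is an algorithm plus learned parameters. Here the
# algorithm is least-squares fitting of y = w * x, and the learned
# parameter is the single weight w.

def train(examples):
    """Learn w from (x, y) example pairs by least squares."""
    numerator = sum(x * y for x, y in examples)
    denominator = sum(x * x for x, _ in examples)
    return numerator / denominator

def predict(w, x):
    """Apply the trained model to new data."""
    return w * x

examples = [(1, 2.1), (2, 3.9), (3, 6.0)]  # roughly y = 2x
w = train(examples)
print(round(predict(w, 10), 1))
```

Real training replaces the closed-form formula with iterative optimization over many parameters, but the division of labor is the same: code defines the algorithm, data determines the parameters.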
Where AI Platform fits in the ML workflow
The diagram below gives a high-level overview of the stages in an ML workflow. The blue-filled boxes indicate where AI Platform provides managed services and APIs:
As the diagram indicates, you can use AI Platform to manage the following stages in the ML workflow:
- Train an ML model on your data:
  - Train model
  - Evaluate model accuracy
  - Tune hyperparameters
- Deploy your trained model.
- Send prediction requests to your model:
  - Online prediction
  - Batch prediction (for TensorFlow only)
- Monitor the predictions on an ongoing basis.
- Manage your models and model versions.
Components of AI Platform
This section describes the pieces that make up AI Platform and the primary purpose of each piece.
Training service
The AI Platform training service lets you train models with a wide range of customization options: you can choose from many machine types to power your training jobs, enable distributed training, use hyperparameter tuning, and accelerate training with GPUs and TPUs.
You can also select different ways to customize your training application. You can submit your input data for AI Platform to train using a built-in algorithm (beta). If the built-in algorithms do not fit your use case, you can submit your own training application to run on AI Platform, or build a custom container with your training application and its dependencies to run on AI Platform.
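As a sketch of what submitting your own training application involves, the `projects.jobs.create` REST method accepts a JSON job specification along these lines. The bucket paths, job ID, and Python module below are placeholders, and the version strings are examples rather than recommendations:

```python
import json

# Hypothetical job specification for the projects.jobs.create REST method.
# Bucket names, package paths, and the entry-point module are placeholders.
training_job = {
    "jobId": "my_training_job_001",
    "trainingInput": {
        "scaleTier": "BASIC_GPU",        # machine configuration for the job
        "region": "us-central1",
        "runtimeVersion": "1.14",
        "pythonVersion": "3.5",
        "packageUris": ["gs://my-bucket/packages/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",  # entry point of the training app
        "jobDir": "gs://my-bucket/job-output",
    },
}

print(json.dumps(training_job, indent=2))
```

In practice you would rarely assemble this body by hand; the gcloud command-line tool described later builds it for you from command-line flags.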
Prediction service
The AI Platform prediction service allows you to serve predictions based on a trained model, whether or not the model was trained on AI Platform.
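Serving an externally trained model comes down to creating a model version that points at the exported model artifacts. A sketch of the version body for the `projects.models.versions.create` REST method, with a placeholder deployment path:

```python
import json

# Sketch of a version resource for projects.models.versions.create.
# The deployment path is a placeholder; the framework value tells the
# service how to load artifacts that need not come from AI Platform.
version_body = {
    "name": "v1",
    "deploymentUri": "gs://my-bucket/exported-model/",  # exported artifacts
    "runtimeVersion": "1.14",
    "framework": "SCIKIT_LEARN",
}

print(json.dumps(version_body, indent=2))
```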
Notebooks
AI Platform Notebooks enables you to create and manage virtual machine (VM) instances that are pre-packaged with JupyterLab. AI Platform Notebooks instances have a pre-installed suite of deep learning packages, including support for the TensorFlow and PyTorch frameworks. You can configure either CPU-only or GPU-enabled instances to best suit your needs.
Your notebook instances are protected by Google Cloud Platform (GCP) authentication and authorization, and are available using a notebook instance URL. Notebook instances also integrate with GitHub so that you can easily sync your notebook with a GitHub repository.
Data labeling service (beta)
AI Platform Data Labeling Service (beta) lets you request human labeling for a dataset that you plan to use to train a custom machine learning model. You can submit a request to label your video, image, or text data.
To submit a labeling request, you provide a representative sample of labeled data, specify all the possible labels for your dataset, and provide some instructions for how to apply those labels. The human labelers follow your instructions, and when the labeling request is complete, you get your annotated dataset that you can use to train a machine learning model.
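The three inputs described above can be pictured as a single request. The sketch below is entirely illustrative; the field names are invented for clarity and do not reflect the actual Data Labeling Service API schema:

```python
import json

# Entirely illustrative: a hypothetical labeling request gathering the
# three inputs described in the text. Field names are invented and do
# not match the real Data Labeling Service API.
labeling_request = {
    "data": "gs://my-bucket/images/",                  # data to be labeled
    "labeledSample": "gs://my-bucket/labeled-sample/", # representative examples
    "possibleLabels": ["cat", "dog", "other"],
    "instructions": "Label each image with the single animal it shows.",
}

print(json.dumps(labeling_request, indent=2))
```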
Deep learning VM image
AI Platform Deep Learning VM Image lets you choose from a set of Debian 9-based Compute Engine virtual machine images optimized for data science and machine learning tasks. All images come with key ML frameworks and tools pre-installed, and can be used out of the box on instances with GPUs to accelerate your data processing tasks.
Tools to interact with AI Platform
This section describes the tools that you use to interact with AI Platform.
Google Cloud Platform Console
You can deploy models to the cloud and manage your models, versions, and jobs on the GCP Console. This option gives you a user interface for working with your machine learning resources. As part of GCP, your AI Platform resources are connected to useful tools like Stackdriver Logging and Stackdriver Monitoring.
gcloud command-line tool
You can manage your models and versions, submit jobs, and accomplish other AI Platform tasks at the command line with the gcloud ai-platform command-line tool. We recommend gcloud commands for most AI Platform tasks, and the REST API (see below) for online predictions.
REST API
The AI Platform REST API provides RESTful services for managing jobs, models, and versions, and for making predictions with hosted models on GCP.
You can use the Google APIs Client Library for Python to access the APIs. When using the client library, you use Python representations of the resources and objects used by the API. This is easier and requires less code than working directly with HTTP requests.
We recommend the REST API for serving online predictions in particular.
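A minimal sketch of an online prediction call through the Python client library follows. It assumes the google-api-python-client package is installed and application default credentials are configured; the project and model names are placeholders:

```python
def model_name(project, model, version=None):
    """Build the resource name used by the AI Platform REST API."""
    name = "projects/{}/models/{}".format(project, model)
    if version:
        name += "/versions/{}".format(version)
    return name

def predict_online(project, model, instances, version=None):
    """Send an online prediction request via the Python client library.

    Requires the google-api-python-client package and application
    default credentials; project and model names are placeholders.
    """
    from googleapiclient import discovery  # deferred: optional dependency

    service = discovery.build("ml", "v1")
    response = (
        service.projects()
        .predict(
            name=model_name(project, model, version),
            body={"instances": instances},
        )
        .execute()
    )
    if "error" in response:
        raise RuntimeError(response["error"])
    return response["predictions"]

print(model_name("my-project", "my_model", "v2"))
```

If no version is named, the request goes to the model's default version, so `model_name` makes the version segment optional.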
What's next
- Get started with AI Platform Training and AI Platform Prediction using Keras.
- Learn how to train with custom containers.
- Learn how to train TensorFlow and XGBoost models without writing code by using AI Platform built-in algorithms.
- Learn how to use custom prediction routines to add preprocessing and postprocessing for your online prediction requests.
- Add custom code and custom scikit-learn transformations to your online prediction pipeline.
- Learn more about AI Platform Training and AI Platform Prediction.