AI Platform (Unified) for AI Platform (Classic) users

AI Platform (Unified) brings together AI Platform and AutoML into a single interface. This page compares AI Platform (Unified) and AI Platform (Classic) for users who are familiar with AI Platform (Classic).

Custom training

With AI Platform (Unified), you can train models with AutoML, or you can do custom training, which is a workflow more similar to AI Platform Training.

Task: Select the machine learning framework version to use
  AI Platform Training: Runtime versions. When submitting a training job, specify the number of a runtime version that includes your desired framework and framework version. (Google Cloud Console users set the framework name and framework version.)
  AI Platform (Unified): Containers. When submitting a custom training job, specify the Container Registry URI of a pre-built container that corresponds to your framework and framework version.

Task: Set the Google Cloud region to use
  AI Platform Training: Specify the name of a region when submitting a training job to a global endpoint (ml.googleapis.com).
  AI Platform (Unified): Submit your custom training job to a regional endpoint, such as us-central1-aiplatform.googleapis.com. There is no global endpoint. Some regions that are available in AI Platform (Classic) are not available in AI Platform (Unified); see the list of supported regions on the Locations page.

Task: Specify machine configurations for distributed training
  AI Platform Training: Specify configurations named after specific roles in your training cluster (masterConfig, workerConfig, parameterServerConfig, and evaluatorConfig).
  AI Platform (Unified): The configuration is more generic and flexible: specify machine configurations as a list in CustomJobSpec.workerPoolSpecs[].

Task: Submit a training job using a Python package
  AI Platform Training: Fields related to your Python package are top-level within TrainingInput.
  AI Platform (Unified): Fields related to your Python package are organized within pythonPackageSpec.

Task: Submit a training job using a custom container
  Both products: Build your own custom container, host it on Container Registry, and use it to run your training app.

Task: Submit a hyperparameter tuning job
  AI Platform Training: Submit a training job with a hyperparameters configuration. Whether a training job is submitted with or without hyperparameter tuning, it creates a TrainingJob API resource.
  AI Platform (Unified): Submit a hyperparameter tuning job with a studySpec configuration. This creates a top-level API resource (HyperparameterTuningJob). Custom training jobs submitted without hyperparameter tuning create a top-level CustomJob API resource.

Task: Create a training pipeline to orchestrate training jobs with other operations
  AI Platform Training: There is no built-in API resource for orchestration; use AI Platform Pipelines, Kubeflow, or another orchestration tool.
  AI Platform (Unified): Create a TrainingPipeline resource to orchestrate a training job with model deployment.
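As a sketch of the more generic worker pool configuration, the following builds a CustomJob request body with two entries in workerPoolSpecs[], playing the roles that masterConfig and workerConfig played in AI Platform Training. The project, bucket, and container image URIs are placeholders, not values from this page:

```python
# Sketch of a CustomJob request body using workerPoolSpecs[], the
# AI Platform (Unified) replacement for masterConfig/workerConfig.
# All URIs and names below are placeholders.
custom_job = {
    "displayName": "example-custom-job",
    "jobSpec": {
        "workerPoolSpecs": [
            {   # First pool fills the role of the old masterConfig.
                "machineSpec": {"machineType": "n1-standard-4"},
                "replicaCount": 1,
                "pythonPackageSpec": {
                    "executorImageUri": "gcr.io/example-project/training-image:latest",
                    "packageUris": ["gs://example-bucket/trainer-0.1.tar.gz"],
                    "pythonModule": "trainer.task",
                },
            },
            {   # Second pool fills the role of the old workerConfig.
                "machineSpec": {"machineType": "n1-standard-4"},
                "replicaCount": 2,
                "pythonPackageSpec": {
                    "executorImageUri": "gcr.io/example-project/training-image:latest",
                    "packageUris": ["gs://example-bucket/trainer-0.1.tar.gz"],
                    "pythonModule": "trainer.task",
                },
            },
        ]
    },
}

# The job is submitted to a regional endpoint, for example:
# POST https://us-central1-aiplatform.googleapis.com/v1/
#      projects/PROJECT/locations/us-central1/customJobs
```

Because worker pools are a plain list, adding parameter servers or evaluators is just appending more entries rather than filling in role-specific fields.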

Machine types for training

Not all machine types supported by AI Platform (Classic) are supported by AI Platform (Unified).

Prediction

Task/Concept: Select the machine learning framework version to use
  AI Platform Prediction: Runtime versions. When deploying a model, specify the number of a runtime version that includes your desired framework and framework version. (Google Cloud Console users set the framework name and framework version.)
  AI Platform (Unified): Containers. When deploying a model, specify the Container Registry URI of a pre-built container that corresponds to your framework and framework version. Use the multi-regional option that matches your regional endpoint, for example, us.gcr.io for a us-central1 endpoint.

Task/Concept: Set the Google Cloud region to use
  AI Platform Prediction: Specify the name of a region when creating a model on a global API endpoint (ml.googleapis.com).
  AI Platform (Unified): Create your model on a regional endpoint, such as us-central1-aiplatform.googleapis.com. There is no global endpoint. Some regions that are available in AI Platform (Classic) are not available in AI Platform (Unified); see the list of supported regions on the Locations page.

Task/Concept: Store model artifacts
  AI Platform Prediction: Model artifacts are stored in Cloud Storage. There is no associated API resource for model artifacts.
  AI Platform (Unified): Managed model storage holds model artifacts and is associated with the Model resource.

Task/Concept: Model deployment
  AI Platform Prediction: You deploy a model directly to make it available for online predictions.
  AI Platform (Unified): You create an Endpoint object, which provides resources for serving online predictions, and then deploy the model to the endpoint. To request predictions, you call the predict() method on an endpoint reserved for online prediction. For example, if your deployment is in the us-central1 region, your online prediction endpoint is us-central1-prediction-aiplatform.googleapis.com.

Task/Concept: Request batch predictions
  AI Platform Prediction: You can request batch predictions on models stored in Cloud Storage and specify a runtime version in your request. Alternatively, you can request batch predictions on deployed models and use the runtime version you specified during model deployment.
  AI Platform (Unified): You upload your model to AI Platform (Unified), and then specify either a pre-built container or a custom container to serve the predictions.

Task/Concept: Online prediction requests
  AI Platform Prediction: The JSON structure includes a list of instances.
  AI Platform (Unified): The JSON structure includes a list of instances and a field for parameters.

Task/Concept: Specify machine types
  AI Platform Prediction: Specify any available machine type when creating a version.
  AI Platform (Unified): Legacy online prediction machine types from AI Platform (MLS1) are not supported. Only Compute Engine (N1) machine types are available.

Task/Concept: Deploy models
  AI Platform Prediction: Create a model resource, and then create a version resource.
  AI Platform (Unified): Create a model resource, create an endpoint resource, and deploy the model to the endpoint. Specify traffic splitting on the endpoint.

Task/Concept: Run custom code with prediction
  AI Platform Prediction: Use custom prediction routines.
  AI Platform (Unified): Custom prediction routines are not supported.
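To illustrate the new parameters field in online prediction requests, the following sketches a request body. The instance shape and the parameter names are placeholders; the parameters a model accepts depend on the model itself:

```python
import json

# Sketch of an online prediction request body for AI Platform (Unified).
# Unlike AI Platform Prediction, the body carries both "instances" and
# an optional "parameters" field. All values below are placeholders;
# which parameters are accepted depends on the deployed model.
request_body = {
    "instances": [
        {"values": [1.0, 2.0, 3.0]},
        {"values": [4.0, 5.0, 6.0]},
    ],
    "parameters": {"confidenceThreshold": 0.5},
}

# The request goes to a regional prediction endpoint, for example:
# POST https://us-central1-prediction-aiplatform.googleapis.com/v1/
#      projects/PROJECT/locations/us-central1/endpoints/ENDPOINT_ID:predict
body_json = json.dumps(request_body, indent=2)
```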

Data labeling

AI Platform Data Labeling Service is available with a few changes to the API:

Task/Concept: Submit instructions for data labelers
  AI Platform Data Labeling Service: Your instructions, as a PDF file, are stored in Cloud Storage and associated with an Instruction API resource.
  Data labeling in AI Platform (Unified): Your instructions, as a PDF file, are stored in Cloud Storage, but there is no API resource just for instructions. Specify the Cloud Storage URI of your instruction file when you create a DataLabelingJob API resource.

Task/Concept: Annotated datasets
  AI Platform Data Labeling Service: There is an AnnotatedDataset API resource.
  Data labeling in AI Platform (Unified): There is no AnnotatedDataset API resource.

Task/Concept: How AnnotationSpecs are organized
  AI Platform Data Labeling Service: AnnotationSpecs are organized under an AnnotationSpecSet API resource.
  Data labeling in AI Platform (Unified): There is no AnnotationSpecSet; all AnnotationSpecs are organized under Dataset.
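Passing the instruction file by Cloud Storage URI can be sketched as the following DataLabelingJob request body. The dataset name, bucket, and field values are placeholders, and this is not the complete set of fields a real job requires, so check the API reference before submitting:

```python
# Sketch of a DataLabelingJob request body in AI Platform (Unified).
# Instructions are referenced by Cloud Storage URI rather than through a
# separate Instruction API resource. All names and URIs are placeholders.
data_labeling_job = {
    "displayName": "example-labeling-job",
    "datasets": [
        "projects/example-project/locations/us-central1/datasets/1234567890"
    ],
    "labelerCount": 1,
    # Replaces the Instruction resource from the Data Labeling Service:
    "instructionUri": "gs://example-bucket/labeling-instructions.pdf",
}
```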

Additionally, there is a new data labeling feature on AI Platform (Unified):

Data labeling tasks are usually completed by Google's specialist labelers. As an alternative, you can create a specialist pool, which lets you manage data labeling tasks with your own workforce instead of Google's specialists. This feature is currently available only through an API request; it is not available in the Google Cloud Console.
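Since specialist pools are API-only, a creation request might look like the following sketch. The field names are my reading of the v1 API and the email address and display name are placeholders, so verify them against the API reference:

```python
# Sketch of a SpecialistPool request body, the API-only way to label
# with your own workforce. Field names are assumed from the v1 API;
# the display name and manager email are placeholders.
specialist_pool = {
    "displayName": "example-specialist-pool",
    "specialistManagerEmails": ["manager@example.com"],
}

# Created via a regional endpoint, for example:
# POST https://us-central1-aiplatform.googleapis.com/v1/
#      projects/PROJECT/locations/us-central1/specialistPools
```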