Deploying your model

Initial model deployment

After you have created (trained) a model, you must deploy it before you can make online (or synchronous) calls to the model.

You can also update a model's deployment later if you need additional online prediction capacity.

Web UI

  1. Navigate to the Test & Use tab below the title bar.
  2. Select the Deploy Model button. This opens a deployment options window.
  3. In the deployment options window, specify the number of nodes to deploy. Each node supports a certain number of prediction queries per second (QPS).

    One node is usually sufficient for most experimental traffic.

  4. Select Deploy to begin model deployment.

  5. You will receive an email after the model deployment operation finishes.

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: your GCP project ID.
  • MODEL_ID: the ID of your model, from the response when you created the model. The ID is the last element of your model's name. For example:
    • model name: projects/PROJECT_ID/locations/LOCATION_ID/models/IOD4412217016962778756
    • model ID: IOD4412217016962778756
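
Since the ID is just the last path segment, you can extract it from a full model name with a one-line split; here is an illustrative Python snippet using the example name above.

# Illustrative only: pull the model ID out of a full model resource name.
model_name = "projects/PROJECT_ID/locations/LOCATION_ID/models/IOD4412217016962778756"
model_id = model_name.split("/")[-1]
print(model_id)  # IOD4412217016962778756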

Field considerations:

  • nodeCount - The number of nodes to deploy the model on. The value must be between 1 and 100, inclusive. A node is an abstraction of a machine resource that can handle the number of online prediction queries per second (QPS) given in the model's qps_per_node value.
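
As a rough sizing sketch, you can estimate the node count to request by dividing your expected peak QPS by the model's per-node QPS and clamping the result to the allowed range of 1 to 100. The Python snippet below uses made-up numbers; read the real per-node QPS from your model's metadata.

import math

# Illustrative values only; the per-node QPS comes from your trained model.
expected_peak_qps = 50
qps_per_node = 3.2

node_count = min(max(math.ceil(expected_peak_qps / qps_per_node), 1), 100)
print(node_count)  # 16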

HTTP method and URL:

POST https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:deploy

Request JSON body:

{
  "imageClassificationModelDeploymentMetadata": {
    "nodeCount": 2
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: project-id" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:deploy"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:deploy" | Select-Object -Expand Content

You should see output similar to the following. You can use the operation ID to get the status of the task. For an example, see Working with long-running operations.

{
  "name": "projects/PROJECT_ID/locations/us-central1/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1.OperationMetadata",
    "createTime": "2019-08-07T22:00:20.692109Z",
    "updateTime": "2019-08-07T22:00:20.692109Z",
    "deployModelDetails": {}
  }
}

You can get the status of an operation with the following HTTP method and URL:

GET https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/operations/OPERATION_ID

The status of a finished operation will look similar to the following:

{
  "name": "projects/PROJECT_ID/locations/us-central1/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1.OperationMetadata",
    "createTime": "2019-06-21T16:47:21.704674Z",
    "updateTime": "2019-06-21T17:01:00.802505Z",
    "deployModelDetails": {}
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.protobuf.Empty"
  }
}
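
If you would rather poll that status from a script than check it manually, a minimal sketch using the Python google-auth library (one option among many; any HTTP client that can attach an OAuth 2.0 access token works) could look like this. The operation name is the "name" field returned by the :deploy call.

import time

import google.auth
from google.auth.transport.requests import AuthorizedSession

operation_name = "projects/PROJECT_ID/locations/us-central1/operations/OPERATION_ID"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# Poll until the long-running operation reports done.
while True:
    status = session.get(f"https://automl.googleapis.com/v1/{operation_name}").json()
    if status.get("done"):
        break
    time.sleep(30)  # deployment typically takes several minutes

print(status)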

Go

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

import (
	"context"
	"fmt"
	"io"

	automl "cloud.google.com/go/automl/apiv1"
	"cloud.google.com/go/automl/apiv1/automlpb"
)

// deployModel deploys a model.
func deployModel(w io.Writer, projectID string, location string, modelID string) error {
	// projectID := "my-project-id"
	// location := "us-central1"
	// modelID := "TRL123456789..."

	ctx := context.Background()
	client, err := automl.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("NewClient: %w", err)
	}
	defer client.Close()

	req := &automlpb.DeployModelRequest{
		Name: fmt.Sprintf("projects/%s/locations/%s/models/%s", projectID, location, modelID),
	}

	op, err := client.DeployModel(ctx, req)
	if err != nil {
		return fmt.Errorf("DeployModel: %w", err)
	}
	fmt.Fprintf(w, "Processing operation name: %q\n", op.Name())

	if err := op.Wait(ctx); err != nil {
		return fmt.Errorf("Wait: %w", err)
	}

	fmt.Fprintf(w, "Model deployed.\n")

	return nil
}

Java

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.automl.v1.AutoMlClient;
import com.google.cloud.automl.v1.DeployModelRequest;
import com.google.cloud.automl.v1.ModelName;
import com.google.cloud.automl.v1.OperationMetadata;
import com.google.protobuf.Empty;
import java.io.IOException;
import java.util.concurrent.ExecutionException;

class DeployModel {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    deployModel(projectId, modelId);
  }

  // Deploy a model for prediction
  static void deployModel(String projectId, String modelId)
      throws IOException, ExecutionException, InterruptedException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (AutoMlClient client = AutoMlClient.create()) {
      // Get the full path of the model.
      ModelName modelFullId = ModelName.of(projectId, "us-central1", modelId);
      DeployModelRequest request =
          DeployModelRequest.newBuilder().setName(modelFullId.toString()).build();
      OperationFuture<Empty, OperationMetadata> future = client.deployModelAsync(request);

      future.get();
      System.out.println("Model deployment finished");
    }
  }
}

Node.js

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const projectId = 'YOUR_PROJECT_ID';
// const location = 'us-central1';
// const modelId = 'YOUR_MODEL_ID';

// Imports the Google Cloud AutoML library
const {AutoMlClient} = require('@google-cloud/automl').v1;

// Instantiates a client
const client = new AutoMlClient();

async function deployModel() {
  // Construct request
  const request = {
    name: client.modelPath(projectId, location, modelId),
  };

  const [operation] = await client.deployModel(request);

  // Wait for operation to complete.
  const [response] = await operation.promise();
  console.log(`Model deployment finished. ${response}`);
}

deployModel();

Python

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

from google.cloud import automl

# TODO(developer): Uncomment and set the following variables
# project_id = "YOUR_PROJECT_ID"
# model_id = "YOUR_MODEL_ID"

client = automl.AutoMlClient()
# Get the full path of the model.
model_full_id = client.model_path(project_id, "us-central1", model_id)
response = client.deploy_model(name=model_full_id)

print(f"Model deployment finished. {response.result()}")

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the AutoML Vision reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the AutoML Vision reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the AutoML Vision reference documentation for Ruby.

Update a model's node number

Once you have a trained and deployed model, you can update the number of nodes the model is deployed on to match your traffic. For example, if you experience a higher number of queries per second (QPS) than expected, you can increase the deployed node count to handle that traffic.

You can change the node count without first undeploying the model; updating the deployment changes the node count without interrupting your served prediction traffic.

Web UI

  1. In the Vision Dashboard select the Models tab in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  3. Select the trained model that you have deployed.
  3. Select the Test & Use tab just below the title bar.
  4. A message in a box at the top of the page says "Your model is deployed and is available for online prediction requests". Select the Update deployment option next to this message.

  5. In the Update deployment window that opens, select the new number of nodes to deploy your model on from the list. Node numbers display their estimated prediction queries per second (QPS).
  6. After selecting a new node number from the list, select Update deployment to update the number of nodes the model is deployed on.

  7. You are returned to the Test & Use page, where the message box now displays "Deploying model...".
  8. After your model has successfully deployed on the new node number, you will receive an email at the address associated with your project.

REST

The same method you use to deploy a model initially is also used to change the deployed model's node number.

Before using any of the request data, make the following replacements:

  • PROJECT_ID: your GCP project ID.
  • MODEL_ID: the ID of your model, from the response when you created the model. The ID is the last element of your model's name. For example:
    • model name: projects/PROJECT_ID/locations/LOCATION_ID/models/IOD4412217016962778756
    • model ID: IOD4412217016962778756

Field considerations:

  • nodeCount - The number of nodes to deploy the model on. The value must be between 1 and 100, inclusive. A node is an abstraction of a machine resource that can handle the number of online prediction queries per second (QPS) given in the model's qps_per_node value.

HTTP method and URL:

POST https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:deploy

Request JSON body:

{
  "imageClassificationModelDeploymentMetadata": {
    "nodeCount": 2
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: project-id" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:deploy"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:deploy" | Select-Object -Expand Content

You should see output similar to the following. You can use the operation ID to get the status of the task. For an example, see Working with long-running operations.

{
  "name": "projects/PROJECT_ID/locations/us-central1/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1.OperationMetadata",
    "createTime": "2019-08-07T22:00:20.692109Z",
    "updateTime": "2019-08-07T22:00:20.692109Z",
    "deployModelDetails": {}
  }
}

You can get the status of an operation with the following HTTP method and URL:

GET https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/operations/OPERATION_ID

The status of a finished operation will look similar to the following:

{
  "name": "projects/PROJECT_ID/locations/us-central1/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1.OperationMetadata",
    "createTime": "2019-06-21T16:47:21.704674Z",
    "updateTime": "2019-06-21T17:01:00.802505Z",
    "deployModelDetails": {}
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.protobuf.Empty"
  }
}

Go

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

import (
	"context"
	"fmt"
	"io"

	automl "cloud.google.com/go/automl/apiv1"
	"cloud.google.com/go/automl/apiv1/automlpb"
)

// visionClassificationDeployModelWithNodeCount deploys a model with node count.
func visionClassificationDeployModelWithNodeCount(w io.Writer, projectID string, location string, modelID string) error {
	// projectID := "my-project-id"
	// location := "us-central1"
	// modelID := "ICN123456789..."

	ctx := context.Background()
	client, err := automl.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("NewClient: %w", err)
	}
	defer client.Close()

	req := &automlpb.DeployModelRequest{
		Name: fmt.Sprintf("projects/%s/locations/%s/models/%s", projectID, location, modelID),
		ModelDeploymentMetadata: &automlpb.DeployModelRequest_ImageClassificationModelDeploymentMetadata{
			ImageClassificationModelDeploymentMetadata: &automlpb.ImageClassificationModelDeploymentMetadata{
				NodeCount: 2,
			},
		},
	}

	op, err := client.DeployModel(ctx, req)
	if err != nil {
		return fmt.Errorf("DeployModel: %w", err)
	}
	fmt.Fprintf(w, "Processing operation name: %q\n", op.Name())

	if err := op.Wait(ctx); err != nil {
		return fmt.Errorf("Wait: %w", err)
	}

	fmt.Fprintf(w, "Model deployed.\n")

	return nil
}

Java

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.automl.v1.AutoMlClient;
import com.google.cloud.automl.v1.DeployModelRequest;
import com.google.cloud.automl.v1.ImageClassificationModelDeploymentMetadata;
import com.google.cloud.automl.v1.ModelName;
import com.google.cloud.automl.v1.OperationMetadata;
import com.google.protobuf.Empty;
import java.io.IOException;
import java.util.concurrent.ExecutionException;

class VisionClassificationDeployModelNodeCount {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    visionClassificationDeployModelNodeCount(projectId, modelId);
  }

  // Deploy a model for prediction with a specified node count (can be used to redeploy a model)
  static void visionClassificationDeployModelNodeCount(String projectId, String modelId)
      throws IOException, ExecutionException, InterruptedException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (AutoMlClient client = AutoMlClient.create()) {
      // Get the full path of the model.
      ModelName modelFullId = ModelName.of(projectId, "us-central1", modelId);
      ImageClassificationModelDeploymentMetadata metadata =
          ImageClassificationModelDeploymentMetadata.newBuilder().setNodeCount(2).build();
      DeployModelRequest request =
          DeployModelRequest.newBuilder()
              .setName(modelFullId.toString())
              .setImageClassificationModelDeploymentMetadata(metadata)
              .build();
      OperationFuture<Empty, OperationMetadata> future = client.deployModelAsync(request);

      future.get();
      System.out.println("Model deployment finished");
    }
  }
}

Node.js

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const projectId = 'YOUR_PROJECT_ID';
// const location = 'us-central1';
// const modelId = 'YOUR_MODEL_ID';

// Imports the Google Cloud AutoML library
const {AutoMlClient} = require('@google-cloud/automl').v1;

// Instantiates a client
const client = new AutoMlClient();

async function deployModelWithNodeCount() {
  // Construct request
  const request = {
    name: client.modelPath(projectId, location, modelId),
    imageClassificationModelDeploymentMetadata: {
      nodeCount: 2,
    },
  };

  const [operation] = await client.deployModel(request);

  // Wait for operation to complete.
  const [response] = await operation.promise();
  console.log(`Model deployment finished. ${response}`);
}

deployModelWithNodeCount();

Python

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

from google.cloud import automl

# TODO(developer): Uncomment and set the following variables
# project_id = "YOUR_PROJECT_ID"
# model_id = "YOUR_MODEL_ID"

client = automl.AutoMlClient()
# Get the full path of the model.
model_full_id = client.model_path(project_id, "us-central1", model_id)

# node count determines the number of nodes to deploy the model on.
# https://cloud.google.com/automl/docs/reference/rpc/google.cloud.automl.v1#imageclassificationmodeldeploymentmetadata
metadata = automl.ImageClassificationModelDeploymentMetadata(node_count=2)

request = automl.DeployModelRequest(
    name=model_full_id, image_classification_model_deployment_metadata=metadata
)
response = client.deploy_model(request=request)

print(f"Model deployment finished. {response.result()}")

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the AutoML Vision reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the AutoML Vision reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the AutoML Vision reference documentation for Ruby.