Deploying your model

Initial model deployment

After you have created (trained) a model, you must deploy it before you can make online (synchronous) prediction calls to it.

You can also update an existing model deployment if you need additional online prediction capacity.

Deploying a Classification model does not incur additional charges until General Availability (GA) later in 2019. Until then, pricing is based on the number of images sent for online prediction during your monthly billing cycle. At GA, new deployment-based pricing, similar to that of Object Detection, will be published on the pricing page.

Models created prior to the Beta Refresh (September 2019) are automatically deployed on an older prediction service, which will be excluded from our Service Level Agreement at GA. The UI and API show such a model's state as 'deployed' with 0 nodes.

If you want to move a pre-existing model to the latest prediction service, undeploy and then redeploy the model.
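If you script this migration, the undeploy and redeploy calls target the same model resource with different verbs on the v1beta1 REST API shown later on this page. A minimal sketch of building the two endpoint URLs (the helper name and IDs are illustrative, not part of the API):

```python
# Build the v1beta1 endpoint URLs for a model action such as deploy/undeploy.
# The helper name and the sample IDs below are illustrative only.
BASE = "https://automl.googleapis.com/v1beta1"

def model_action_url(project_id, model_id, action, location="us-central1"):
    """Return the POST URL for a model action ('deploy' or 'undeploy')."""
    return (f"{BASE}/projects/{project_id}/locations/{location}"
            f"/models/{model_id}:{action}")

# Migrating to the latest prediction service is two sequential POST calls:
undeploy_url = model_action_url("my-project", "ICN123", "undeploy")
deploy_url = model_action_url("my-project", "ICN123", "deploy")
```

You would POST to `undeploy_url` first, wait for that operation to finish, then POST to `deploy_url` with the request body described below.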

Web UI

This feature is not available in the original User Interface. Select the Integrated UI tab for instructions on how to deploy models in the updated UI.

Integrated UI

  1. Navigate to the Test & Use tab below the title bar.
  2. Select the Deploy Model button. This opens a deployment options window.
  3. In the deployment options window, specify the number of nodes to deploy with. Each node can support a certain number of prediction queries per second (QPS).

    One node is usually sufficient for most experimental traffic.

  4. Select Deploy to begin model deployment.

  5. You will receive an email after the model deployment operation finishes.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • project-id: your GCP project ID.
  • model-id: the ID of your model, from the response when you created the model. The ID is the last element of the name of your model. For example:
    • model name: projects/project-id/locations/location-id/models/IOD4412217016962778756
    • model id: IOD4412217016962778756

Field considerations:

  • nodeCount - The number of nodes to deploy the model on. The value must be between 1 and 100, inclusive on both ends. A node is an abstraction of a machine resource, which can handle online prediction queries per second (QPS) as given in the model's qps_per_node.
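Since each node handles at most the QPS given in the model's qps_per_node, a rough way to size nodeCount is to divide your expected peak QPS by that figure and clamp the result to the allowed 1–100 range. A sketch (the helper name and sample numbers are illustrative):

```python
import math

def node_count_for(expected_qps, qps_per_node):
    """Smallest nodeCount that covers expected_qps, clamped to [1, 100]."""
    needed = math.ceil(expected_qps / qps_per_node)
    return max(1, min(needed, 100))

# e.g. 25 QPS expected at 10 QPS per node -> 3 nodes
print(node_count_for(25, 10))
```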

HTTP method and URL:

POST https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:deploy

Request JSON body:

{
  "imageClassificationModelDeploymentMetadata": {
    "nodeCount": 2
  }
}
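If you generate the request body programmatically rather than editing request.json by hand, it can help to enforce the nodeCount range before serializing. A minimal sketch (the function name is illustrative):

```python
import json

def deploy_body(node_count):
    """Serialize the deploy request body, enforcing the 1-100 nodeCount range."""
    if not 1 <= node_count <= 100:
        raise ValueError("nodeCount must be between 1 and 100, inclusive")
    return json.dumps(
        {"imageClassificationModelDeploymentMetadata": {"nodeCount": node_count}},
        indent=2)

print(deploy_body(2))
```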

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:deploy

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:deploy" | Select-Object -Expand Content

You should see output similar to the following. You can use the operation ID to get the status of the task.

{
  "name": "projects/project-id/locations/us-central1/operations/operation-id",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2019-08-07T22:00:20.692109Z",
    "updateTime": "2019-08-07T22:00:20.692109Z",
    "deployModelDetails": {}
  }
}

You can get the status of an operation with the following HTTP method and URL:

GET https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/operations/operation-id

The status of a finished operation will look similar to the following:

{
  "name": "projects/project-id/locations/us-central1/operations/operation-id",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2019-06-21T16:47:21.704674Z",
    "updateTime": "2019-06-21T17:01:00.802505Z",
    "deployModelDetails": {}
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.protobuf.Empty"
  }
}
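Because deployment is a long-running operation, a poller only needs to check the done field and then distinguish a response from an error, per the google.longrunning.Operation format. A minimal sketch over dicts shaped like the JSON above (the function name is illustrative):

```python
def operation_status(op):
    """Classify an operation dict as 'running', 'succeeded', or 'failed'."""
    if not op.get("done"):
        return "running"
    return "failed" if "error" in op else "succeeded"

finished = {
    "name": "projects/p/locations/us-central1/operations/op-id",
    "done": True,
    "response": {"@type": "type.googleapis.com/google.protobuf.Empty"},
}
print(operation_status(finished))  # succeeded
```

In practice you would fetch the operation with the GET request above, parse the JSON, and repeat while the status is 'running'.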

Java

import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.automl.v1beta1.AutoMlClient;
import com.google.cloud.automl.v1beta1.DeployModelRequest;
import com.google.cloud.automl.v1beta1.ModelName;
import com.google.cloud.automl.v1beta1.OperationMetadata;
import com.google.protobuf.Empty;

import java.io.IOException;
import java.util.concurrent.ExecutionException;

class ClassificationDeployModel {

  // Deploy a model
  static void classificationDeployModel(String projectId, String modelId) {
    // String projectId = "YOUR_PROJECT_ID";
    // String modelId = "YOUR_MODEL_ID";

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (AutoMlClient client = AutoMlClient.create()) {

      // Get the full path of the model.
      ModelName modelFullId = ModelName.of(projectId, "us-central1", modelId);

      // Build deploy model request.
      DeployModelRequest deployModelRequest =
          DeployModelRequest.newBuilder().setName(modelFullId.toString()).build();

      // Deploy a model with the deploy model request.
      OperationFuture<Empty, OperationMetadata> future =
          client.deployModelAsync(deployModelRequest);

      future.get();

      // Display the deployment details of model.
      System.out.println("Model deployment finished");
    } catch (IOException | InterruptedException | ExecutionException e) {
      e.printStackTrace();
    }
  }
}

Python

from google.cloud import automl_v1beta1 as automl

# project_id = 'YOUR_PROJECT_ID'
# model_id = 'YOUR_MODEL_ID'

client = automl.AutoMlClient()

# The full path to your model
full_model_id = client.model_path(project_id, 'us-central1', model_id)

# Deploy the model
response = client.deploy_model(full_model_id)

# Wait for the long-running deploy operation to complete.
response.result()
print('Model deployment finished')

Update a model's node number

Once you have a trained, deployed model, you can update the number of nodes it is deployed on to match your traffic; for example, if you experience a higher rate of queries per second (QPS) than expected.

You can change the node number without first undeploying the model. Updating the deployment changes the node number without interrupting your served prediction traffic.
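When scaling an existing deployment, it is common to size the new nodeCount from observed traffic plus some headroom, rather than from an estimate. A sketch under those assumptions (the function name and the 20% headroom default are illustrative, not from the API):

```python
import math

def updated_node_count(observed_qps, qps_per_node, headroom=1.2):
    """New nodeCount covering observed QPS plus headroom, clamped to [1, 100]."""
    needed = math.ceil(observed_qps * headroom / qps_per_node)
    return max(1, min(needed, 100))

# e.g. observing 18 QPS at 10 QPS per node, with 20% headroom -> 3 nodes
print(updated_node_count(18, 10))
```

The resulting number goes into the same nodeCount field of the deploy request shown below.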

Web UI

This feature is not available in the original User Interface. Select the Integrated UI tab for instructions on how to update model deployment in the updated UI.

Integrated UI

  1. In the Vision Dashboard, select the Models tab (with the lightbulb icon) in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Select your trained model that has been deployed.
  3. Select the Test & Use tab just below the title bar.
  4. A message in a box at the top of the page says "Your model is deployed and is available for online prediction requests". Select the Update deployment option next to this message.

  5. In the Update deployment window that opens, select the new node number to deploy your model on from the list. Node numbers display their estimated prediction queries per second (QPS).
  6. After selecting a new node number from the list select Update deployment to update the node number the model is deployed on.

  7. You will be returned to the Test & Use page, where a text box now displays "Deploying model...".
  8. After your model has successfully deployed on the new node number, you will receive an email at the address associated with your project.

REST & CMD LINE

You use the same method to change a deployed model's node number as you used to initially deploy the model.

Before using any of the request data below, make the following replacements:

  • project-id: your GCP project ID.
  • model-id: the ID of your model, from the response when you created the model. The ID is the last element of the name of your model. For example:
    • model name: projects/project-id/locations/location-id/models/IOD4412217016962778756
    • model id: IOD4412217016962778756

Field considerations:

  • nodeCount - The number of nodes to deploy the model on. The value must be between 1 and 100, inclusive on both ends. A node is an abstraction of a machine resource, which can handle online prediction queries per second (QPS) as given in the model's qps_per_node.

HTTP method and URL:

POST https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:deploy

Request JSON body:

{
  "imageClassificationModelDeploymentMetadata": {
    "nodeCount": 2
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:deploy

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:deploy" | Select-Object -Expand Content

You should see output similar to the following. You can use the operation ID to get the status of the task.

{
  "name": "projects/project-id/locations/us-central1/operations/operation-id",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2019-08-07T22:00:20.692109Z",
    "updateTime": "2019-08-07T22:00:20.692109Z",
    "deployModelDetails": {}
  }
}

You can get the status of an operation with the following HTTP method and URL:

GET https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/operations/operation-id

The status of a finished operation will look similar to the following:

{
  "name": "projects/project-id/locations/us-central1/operations/operation-id",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2019-06-21T16:47:21.704674Z",
    "updateTime": "2019-06-21T17:01:00.802505Z",
    "deployModelDetails": {}
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.protobuf.Empty"
  }
}

Java

import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.automl.v1beta1.AutoMlClient;
import com.google.cloud.automl.v1beta1.DeployModelRequest;
import com.google.cloud.automl.v1beta1.ImageClassificationModelDeploymentMetadata;
import com.google.cloud.automl.v1beta1.ModelName;
import com.google.cloud.automl.v1beta1.OperationMetadata;
import com.google.protobuf.Empty;

import java.io.IOException;
import java.util.concurrent.ExecutionException;

class ClassificationDeployModelNodeCount {

  // Deploy a model with a specified node count
  static void classificationDeployModelNodeCount(String projectId, String modelId) {
    // String projectId = "YOUR_PROJECT_ID";
    // String modelId = "YOUR_MODEL_ID";

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (AutoMlClient client = AutoMlClient.create()) {
      // Get the full path of the model.
      ModelName modelFullId = ModelName.of(projectId, "us-central1", modelId);

      // Set how many nodes the model is deployed on
      ImageClassificationModelDeploymentMetadata deploymentMetadata =
              ImageClassificationModelDeploymentMetadata.newBuilder().setNodeCount(2).build();

      DeployModelRequest request = DeployModelRequest.newBuilder()
              .setName(modelFullId.toString())
              .setImageClassificationModelDeploymentMetadata(deploymentMetadata)
              .build();
      // Deploy the model
      OperationFuture<Empty, OperationMetadata> future = client.deployModelAsync(request);
      future.get();
      System.out.println("Model deployment on 2 nodes finished");
    } catch (IOException | InterruptedException | ExecutionException e) {
      e.printStackTrace();
    }
  }
}

Python

from google.cloud import automl_v1beta1 as automl

# project_id = 'YOUR_PROJECT_ID'
# model_id = 'YOUR_MODEL_ID'

client = automl.AutoMlClient()

# The full path to your model
full_model_id = client.model_path(project_id, 'us-central1', model_id)

# Set how many nodes the model is deployed on
metadata = (
    automl.types.ImageClassificationModelDeploymentMetadata(node_count=2))

# Deploy the model
response = client.deploy_model(
    full_model_id,
    image_classification_model_deployment_metadata=metadata)

# Wait for the long-running deploy operation to complete.
response.result()
print('Model deployment on 2 nodes finished')