Evaluating models

After training a model, Cloud AutoML Vision Object Detection uses images from the test set to evaluate the quality and accuracy of the new model.

Cloud AutoML Vision Object Detection provides an aggregate set of evaluation metrics (the evaluation process output) indicating how well the model performs overall, as well as evaluation metrics for each category label, indicating how well the model performs for that label.

Evaluation overview

Evaluation process input

  • IoU threshold: Intersection over Union, a value used in object detection to measure how much the predicted bounding box for an object overlaps with the actual bounding box. The closer the predicted bounding box values are to the actual bounding box values, the greater the intersection and the greater the IoU value.

  • Score threshold: The output metrics (below) are computed on the assumption that the model never returns predictions with a score lower than this value.

Evaluation process output

  • AuPRC: Area under the precision/recall curve, also referred to as "average precision." Generally between 0.5 and 1.0. Higher values indicate a more accurate model.

  • Confidence threshold curves: Show how different confidence thresholds would affect precision, recall, and the true and false positive rates. Read about the relationship between precision and recall.

  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you are looking for a balance between precision and recall. It is also useful when your training data has an uneven class distribution.

Use this data to evaluate your model's readiness. High confusion, low AUC scores, or low precision and recall scores can indicate that your model needs additional training data or has inconsistent labels. A very high AUC score with perfect precision and recall can indicate that the data is too easy and the model may not generalize well: a high AUC can indicate that the model was trained on idealized data that does not represent future inference well.
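
The following Python sketch only illustrates how these output metrics relate at a single score threshold. The counts are hypothetical and the function is not part of the AutoML API; it simply shows how precision, recall, and the F1 score are derived from true positive, false positive, and false negative counts.

# Hypothetical counts; AutoML computes these values on your test set.
def precision_recall_f1(true_positives, false_positives, false_negatives):
    """Return (precision, recall, f1) for the given detection counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# At some score threshold the model keeps 40 correct boxes and 10 spurious
# boxes, and misses 8 ground-truth boxes.
print(precision_recall_f1(40, 10, 8))  # (0.8, 0.833..., 0.816...)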

Managing model evaluations

Listing model evaluations

After training a model, you can list the evaluation metrics for that model.

Web UI

  1. Open the Cloud AutoML Vision Object Detection UI and click the Models tab (with the lightbulb icon) in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to evaluate.

  3. If necessary, click the Evaluate tab just below the title bar.

    If training has completed for the model, Cloud AutoML Vision Object Detection shows its evaluation metrics.

    Model evaluation page

  4. To view metrics for a specific label, select the label name from the list of labels in the lower part of the page.

    Label-specific model evaluation page

REST & command line

Before using any of the request data below, make the following replacements:

  • project-id: your GCP project ID.
  • model-id: the ID of your model, from the response returned when you created the model. The ID is the last element of the model name. For example:
    • model name: projects/project-id/locations/location-id/models/IOD4412217016962778756
    • model ID: IOD4412217016962778756

HTTP method and URL:

GET https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/models/model-id/modelEvaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/models/model-id/modelEvaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/models/model-id/modelEvaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following example. Key object detection-specific fields are shown in bold, and varying numbers of boundingBoxMetricsEntries entries are shown for clarity:



{
  "modelEvaluation": [
    {
      "name": "projects/project-id/locations/us-central1/models/model-id/modelEvaluations/model-eval-id",
      "annotationSpecId": "6342510834593300480",
      "createTime": "2019-07-26T22:28:56.890727Z",
      "evaluatedExampleCount": 18,
      "imageObjectDetectionEvaluationMetrics": {
        "evaluatedBoundingBoxCount": 96,
        "boundingBoxMetricsEntries": [
          {
            "iouThreshold": 0.15,
            "meanAveragePrecision": 0.6317751,
            "confidenceMetricsEntries": [
              {
                "confidenceThreshold": 0.101631254,
                "recall": 0.84375,
                "precision": 0.2555205,
                "f1Score": 0.3922518
              },
              {
                "confidenceThreshold": 0.10180253,
                "recall": 0.8333333,
                "precision": 0.25316456,
                "f1Score": 0.3883495
              },
              ...
              {
                "confidenceThreshold": 0.8791167,
                "recall": 0.020833334,
                "precision": 1,
                "f1Score": 0.040816326
              },
              {
                "confidenceThreshold": 0.8804436,
                "recall": 0.010416667,
                "precision": 1,
                "f1Score": 0.020618558
              }
            ]
          },
          {
            "iouThreshold": 0.8,
            "meanAveragePrecision": 0.15461995,
            "confidenceMetricsEntries": [
              {
                "confidenceThreshold": 0.101631254,
                "recall": 0.22916667,
                "precision": 0.06940063,
                "f1Score": 0.10653753
              },
              ...
              {
                "confidenceThreshold": 0.8804436,
                "recall": 0.010416667,
                "precision": 1,
                "f1Score": 0.020618558
              }
            ]
          },
          {
            "iouThreshold": 0.4,
            "meanAveragePrecision": 0.56170964,
            "confidenceMetricsEntries": [
              {
                "confidenceThreshold": 0.101631254,
                "recall": 0.7604167,
                "precision": 0.23028392,
                "f1Score": 0.3535109
              },
              ...
              {
                "confidenceThreshold": 0.8804436,
                "recall": 0.010416667,
                "precision": 1,
                "f1Score": 0.020618558
              }
            ]
          },
          ...
        ],
        "boundingBoxMeanAveragePrecision": 0.4306387
      },
      "displayName": "Tomato"
    },
    {
      "name": "projects/project-id/locations/us-central1/models/model-id/modelEvaluations/model-eval-id",
      "annotationSpecId": "1730824816165912576",
      "createTime": "2019-07-26T22:28:56.890727Z",
      "evaluatedExampleCount": 9,
      "imageObjectDetectionEvaluationMetrics": {
        "evaluatedBoundingBoxCount": 51,
        "boundingBoxMetricsEntries": [
          {
            ...
          }
        ],
        "boundingBoxMeanAveragePrecision": 0.29565892
      },
      "displayName": "Cheese"
    },
    {
      "name": "projects/project-id/locations/us-central1/models/model-id/modelEvaluations/model-eval-id",
      "annotationSpecId": "7495432339200147456",
      "createTime": "2019-07-26T22:28:56.890727Z",
      "evaluatedExampleCount": 4,
      "imageObjectDetectionEvaluationMetrics": {
        "evaluatedBoundingBoxCount": 22,
        "boundingBoxMetricsEntries": [
          {
            "iouThreshold": 0.2,
            "meanAveragePrecision": 0.104004614,
            "confidenceMetricsEntries": [
              {
                "confidenceThreshold": 0.1008248,
                "recall": 0.36363637,
                "precision": 0.08888889,
                "f1Score": 0.14285715
              },
              ...
              {
                "confidenceThreshold": 0.47585258,
                "recall": 0.045454547,
                "precision": 1,
                "f1Score": 0.08695653
              }
            ]
          },
          ...
        ],
        "boundingBoxMeanAveragePrecision": 0.057070773
      },
      "displayName": "Seafood"
    }
  ]
}
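
If you save a response like the one above to a file, a short script can help you compare thresholds. The following Python sketch assumes the list response was saved as evaluations.json (a hypothetical local filename) and uses only the field names shown above to report, for each label and IoU threshold, the confidence threshold with the highest F1 score.

import json

# Sketch: assumes the listModelEvaluations response above was saved to
# evaluations.json (hypothetical filename).
with open("evaluations.json") as f:
    data = json.load(f)

for evaluation in data.get("modelEvaluation", []):
    label = evaluation.get("displayName", "(no label)")
    metrics = evaluation.get("imageObjectDetectionEvaluationMetrics", {})
    for entry in metrics.get("boundingBoxMetricsEntries", []):
        # Pick the confidence threshold with the highest F1 at this IoU threshold.
        best = max(entry.get("confidenceMetricsEntries", []),
                   key=lambda c: c.get("f1Score", 0.0),
                   default=None)
        if best:
            print("%s: IoU %.2f -> best F1 %.3f at confidence %.3f" % (
                label, entry["iouThreshold"], best["f1Score"],
                best["confidenceThreshold"]))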

C#

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

/// <summary>
/// Demonstrates using the AutoML client to list model evaluations.
/// </summary>
/// <param name="projectId">GCP Project ID.</param>
/// <param name="modelId">the Id of the model.</param>
public static object ListModelEvaluations(string projectId = "YOUR-PROJECT-ID",
    string modelId = "YOUR-MODEL-ID")
{
    // Initialize the client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    AutoMlClient client = AutoMlClient.Create();

    // Get the full path of the model.
    string modelFullId = ModelName.Format(projectId, "us-central1", modelId);

    // Create list models request.
    ListModelEvaluationsRequest listModelEvaluationsRequest = new ListModelEvaluationsRequest
    {
        Parent = modelFullId
    };

    // List all the model evaluations in the model by applying filter.
    Console.WriteLine("List of model evaluations:");
    foreach (ModelEvaluation modelEvaluation in client.ListModelEvaluations(listModelEvaluationsRequest))
    {
        Console.WriteLine($"Model Evaluation Name: {modelEvaluation.Name}");
        Console.WriteLine($"Model Annotation Spec Id: {modelEvaluation.AnnotationSpecId}");
        Console.WriteLine("Create Time:");
        Console.WriteLine($"\tseconds: {modelEvaluation.CreateTime.Seconds}");
        Console.WriteLine($"\tnanos: {modelEvaluation.CreateTime.Nanos / 1e9}");
        Console.WriteLine(
            $"Evaluation Example Count: {modelEvaluation.EvaluatedExampleCount}");
        Console.WriteLine(
            $"Object Detection Model Evaluation Metrics: {modelEvaluation.ImageObjectDetectionEvaluationMetrics}");
    }
    return 0;
}

Go

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

import (
	"context"
	"fmt"
	"io"

	automl "cloud.google.com/go/automl/apiv1"
	"google.golang.org/api/iterator"
	automlpb "google.golang.org/genproto/googleapis/cloud/automl/v1"
)

// listModelEvaluation lists existing model evaluations.
func listModelEvaluations(w io.Writer, projectID string, location string, modelID string) error {
	// projectID := "my-project-id"
	// location := "us-central1"
	// modelID := "TRL123456789..."

	ctx := context.Background()
	client, err := automl.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("NewClient: %v", err)
	}
	defer client.Close()

	req := &automlpb.ListModelEvaluationsRequest{
		Parent: fmt.Sprintf("projects/%s/locations/%s/models/%s", projectID, location, modelID),
	}

	it := client.ListModelEvaluations(ctx, req)

	// Iterate over all results
	for {
		evaluation, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return fmt.Errorf("ListModelEvaluations.Next: %v", err)
		}

		fmt.Fprintf(w, "Model evaluation name: %v\n", evaluation.GetName())
		fmt.Fprintf(w, "Model annotation spec id: %v\n", evaluation.GetAnnotationSpecId())
		fmt.Fprintf(w, "Create Time:\n")
		fmt.Fprintf(w, "\tseconds: %v\n", evaluation.GetCreateTime().GetSeconds())
		fmt.Fprintf(w, "\tnanos: %v\n", evaluation.GetCreateTime().GetNanos())
		fmt.Fprintf(w, "Evaluation example count: %v\n", evaluation.GetEvaluatedExampleCount())
		fmt.Fprintf(w, "Object detection model evaluation metrics: %v\n", evaluation.GetImageObjectDetectionEvaluationMetrics())
	}

	return nil
}

Java

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.


import com.google.cloud.automl.v1.AutoMlClient;
import com.google.cloud.automl.v1.ListModelEvaluationsRequest;
import com.google.cloud.automl.v1.ModelEvaluation;
import com.google.cloud.automl.v1.ModelName;
import java.io.IOException;

class ListModelEvaluations {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    listModelEvaluations(projectId, modelId);
  }

  // List model evaluations
  static void listModelEvaluations(String projectId, String modelId) throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (AutoMlClient client = AutoMlClient.create()) {
      // Get the full path of the model.
      ModelName modelFullId = ModelName.of(projectId, "us-central1", modelId);
      ListModelEvaluationsRequest modelEvaluationsrequest =
          ListModelEvaluationsRequest.newBuilder().setParent(modelFullId.toString()).build();

      // List all the model evaluations in the model by applying filter.
      System.out.println("List of model evaluations:");
      for (ModelEvaluation modelEvaluation :
          client.listModelEvaluations(modelEvaluationsrequest).iterateAll()) {

        System.out.format("Model Evaluation Name: %s\n", modelEvaluation.getName());
        System.out.format("Model Annotation Spec Id: %s\n", modelEvaluation.getAnnotationSpecId());
        System.out.println("Create Time:");
        System.out.format("\tseconds: %s\n", modelEvaluation.getCreateTime().getSeconds());
        System.out.format("\tnanos: %s\n", modelEvaluation.getCreateTime().getNanos() / 1e9);
        System.out.format(
            "Evaluation Example Count: %d\n", modelEvaluation.getEvaluatedExampleCount());
        System.out.format(
            "Object Detection Model Evaluation Metrics: %s\n",
            modelEvaluation.getImageObjectDetectionEvaluationMetrics());
      }
    }
  }
}

Node.js

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const projectId = 'YOUR_PROJECT_ID';
// const location = 'us-central1';
// const modelId = 'YOUR_MODEL_ID';

// Imports the Google Cloud AutoML library
const {AutoMlClient} = require('@google-cloud/automl').v1;

// Instantiates a client
const client = new AutoMlClient();

async function listModelEvaluations() {
  // Construct request
  const request = {
    parent: client.modelPath(projectId, location, modelId),
    filter: '',
  };

  const [response] = await client.listModelEvaluations(request);

  console.log('List of model evaluations:');
  for (const evaluation of response) {
    console.log(`Model evaluation name: ${evaluation.name}`);
    console.log(`Model annotation spec id: ${evaluation.annotationSpecId}`);
    console.log(`Model display name: ${evaluation.displayName}`);
    console.log('Model create time');
    console.log(`\tseconds ${evaluation.createTime.seconds}`);
    console.log(`\tnanos ${evaluation.createTime.nanos / 1e9}`);
    console.log(
      `Evaluation example count: ${evaluation.evaluatedExampleCount}`
    );
    console.log(
      `Object detection model evaluation metrics: ${evaluation.imageObjectDetectionEvaluationMetrics}`
    );
  }
}

listModelEvaluations();

PHP

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

use Google\Cloud\AutoMl\V1\AutoMlClient;

/** Uncomment and populate these variables in your code */
// $projectId = '[Google Cloud Project ID]';
// $location = 'us-central1';
// $modelId = 'my_model_id_123';

$client = new AutoMlClient();

try {
    // get full path of model
    $formattedParent = $client->modelName(
        $projectId,
        $location,
        $modelId
    );

    // list all model evaluations
    $filter = '';
    $pagedResponse = $client->listModelEvaluations($formattedParent, $filter);

    print('List of model evaluations' . PHP_EOL);
    foreach ($pagedResponse->iteratePages() as $page) {
        foreach ($page as $modelEvaluation) {
            // display model evaluation information
            $splitName = explode('/', $modelEvaluation->getName());
            printf('Model evaluation name: %s' . PHP_EOL, $modelEvaluation->getName());
            printf('Model evaluation id: %s' . PHP_EOL, end($splitName));
            printf('Model annotation spec id: %s' . PHP_EOL, $modelEvaluation->getAnnotationSpecId());
            printf('Create time' . PHP_EOL);
            printf('seconds: %d' . PHP_EOL, $modelEvaluation->getCreateTime()->getSeconds());
            printf('nanos : %d' . PHP_EOL, $modelEvaluation->getCreateTime()->getNanos());
            printf('Evaluation example count: %s' . PHP_EOL, $modelEvaluation->getEvaluatedExampleCount());
            printf('Object detection model evaluation metrics: %s' . PHP_EOL, $modelEvaluation->getImageObjectDetectionEvaluationMetrics());
        }
    }
} finally {
    $client->close();
}

Python

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

from google.cloud import automl

# TODO(developer): Uncomment and set the following variables
# project_id = "YOUR_PROJECT_ID"
# model_id = "YOUR_MODEL_ID"

client = automl.AutoMlClient()
# Get the full path of the model.
model_full_id = client.model_path(project_id, "us-central1", model_id)

print("List of model evaluations:")
for evaluation in client.list_model_evaluations(model_full_id, ""):
    print("Model evaluation name: {}".format(evaluation.name))
    print(
        "Model annotation spec id: {}".format(
            evaluation.annotation_spec_id
        )
    )
    print("Create Time:")
    print("\tseconds: {}".format(evaluation.create_time.seconds))
    print("\tnanos: {}".format(evaluation.create_time.nanos / 1e9))
    print(
        "Evaluation example count: {}".format(
            evaluation.evaluated_example_count
        )
    )
    print(
        "Object detection model evaluation metrics: {}\n\n".format(
            evaluation.image_object_detection_evaluation_metrics
        )
    )

Ruby

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

require "google/cloud/automl"

project_id = "YOUR_PROJECT_ID"
model_id = "YOUR_MODEL_ID"

client = Google::Cloud::AutoML::AutoML.new

# Get the full path of the model.
model_full_id = client.class.model_path project_id, "us-central1", model_id

model_evaluations = client.list_model_evaluations model_full_id

puts "List of model evaluations:"

model_evaluations.each do |evaluation|
  puts "Model evaluation name: #{evaluation.name}"
  puts "Model annotation spec id: #{evaluation.annotation_spec_id}"
  puts "Create Time: #{evaluation.create_time.to_time}"
  puts "Evaluation example count: #{evaluation.evaluated_example_count}"
  puts "Object detection model evaluation metrics: #{evaluation.image_object_detection_evaluation_metrics}"
end

Getting a model evaluation

You can also get a specific model evaluation for a label (displayName) by using an evaluation ID.

Web UI

In the Cloud AutoML Vision Object Detection UI, the equivalent operation is to go to the Models page and select your model. After selecting your model, go to the Evaluate tab and select the label to view its label-specific evaluation.

Label-specific model evaluation page

REST & command line

Before using any of the request data below, make the following replacements:

  • project-id: your GCP project ID.
  • model-id: the ID of your model, from the response returned when you created the model. The ID is the last element of the model name. For example:
    • model name: projects/project-id/locations/location-id/models/IOD4412217016962778756
    • model ID: IOD4412217016962778756
  • model-evaluation-id: the ID value of the model evaluation. You can get model evaluation IDs from the list model evaluations operation.

HTTP method and URL:

GET https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/models/model-id/modelEvaluations/model-evaluation-id

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/models/model-id/modelEvaluations/model-evaluation-id

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/models/model-id/modelEvaluations/model-evaluation-id" | Select-Object -Expand Content

You should receive a JSON response similar to the following example. Key object detection-specific fields are shown in bold, and an abbreviated set of boundingBoxMetricsEntries entries is shown for clarity:



{
  "name": "projects/project-id/locations/us-central1/models/model-id/modelEvaluations/model-evaluation-id",
  "annotationSpecId": "6342510834593300480",
  "createTime": "2019-07-26T22:28:56.890727Z",
  "evaluatedExampleCount": 18,
  "imageObjectDetectionEvaluationMetrics": {
    "evaluatedBoundingBoxCount": 96,
    "boundingBoxMetricsEntries": [
      {
        "iouThreshold": 0.15,
        "meanAveragePrecision": 0.6317751,
        "confidenceMetricsEntries": [
          {
            "confidenceThreshold": 0.101631254,
            "recall": 0.84375,
            "precision": 0.2555205,
            "f1Score": 0.3922518
          },
          ...
          {
            "confidenceThreshold": 0.8804436,
            "recall": 0.010416667,
            "precision": 1,
            "f1Score": 0.020618558
          }
        ]
      },
      {
        "iouThreshold": 0.8,
        "meanAveragePrecision": 0.15461995,
        "confidenceMetricsEntries": [
          {
            "confidenceThreshold": 0.101631254,
            "recall": 0.22916667,
            "precision": 0.06940063,
            "f1Score": 0.10653753
          },
          ...
          {
            "confidenceThreshold": 0.8804436,
            "recall": 0.010416667,
            "precision": 1,
            "f1Score": 0.020618558
          }
        ]
      },
      {
        "iouThreshold": 0.4,
        "meanAveragePrecision": 0.56170964,
        "confidenceMetricsEntries": [
          {
            "confidenceThreshold": 0.101631254,
            "recall": 0.7604167,
            "precision": 0.23028392,
            "f1Score": 0.3535109
          },
          ...
          {
            "confidenceThreshold": 0.8804436,
            "recall": 0.010416667,
            "precision": 1,
            "f1Score": 0.020618558
          }
        ]
      },
      ...
    ],
    "boundingBoxMeanAveragePrecision": 0.4306387
  },
  "displayName": "Tomato"
}

C#

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

/// <summary>
/// Demonstrates using the AutoML client to get model evaluations.
/// </summary>
/// <param name="projectId">GCP Project ID.</param>
/// <param name="modelId">the Id of the model.</param>
/// <param name="modelEvaluationId">the Id of your model evaluation.</param>
public static object GetModelEvaluation(string projectId = "YOUR-PROJECT-ID",
    string modelId = "YOUR-MODEL-ID",
    string modelEvaluationId = "YOUR-MODEL-EVAL-ID")
{
    // Initialize the client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    AutoMlClient client = AutoMlClient.Create();

    // Get the full path of the model evaluation.
    string modelEvaluationFullId =
        ModelEvaluationName.Format(projectId, "us-central1", modelId, modelEvaluationId);

    // Get complete detail of the model evaluation.
    ModelEvaluation modelEvaluation = client.GetModelEvaluation(modelEvaluationFullId);

    Console.WriteLine($"Model Evaluation Name: {modelEvaluation.Name}");
    Console.WriteLine($"Model Annotation Spec Id: {modelEvaluation.AnnotationSpecId}");
    Console.WriteLine("Create Time:");
    Console.WriteLine($"\tseconds: {modelEvaluation.CreateTime.Seconds}");
    Console.WriteLine($"\tnanos: {modelEvaluation.CreateTime.Nanos / 1e9}");
    Console.WriteLine(
        $"Evaluation Example Count: {modelEvaluation.EvaluatedExampleCount}");

    Console.WriteLine(
        $"Object Detection Model Evaluation Metrics: {modelEvaluation.ImageObjectDetectionEvaluationMetrics}");
    return 0;
}

Go

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

import (
	"context"
	"fmt"
	"io"

	automl "cloud.google.com/go/automl/apiv1"
	automlpb "google.golang.org/genproto/googleapis/cloud/automl/v1"
)

// getModelEvaluation gets a model evaluation.
func getModelEvaluation(w io.Writer, projectID string, location string, modelID string, modelEvaluationID string) error {
	// projectID := "my-project-id"
	// location := "us-central1"
	// modelID := "TRL123456789..."
	// modelEvaluationID := "123456789..."

	ctx := context.Background()
	client, err := automl.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("NewClient: %v", err)
	}
	defer client.Close()

	req := &automlpb.GetModelEvaluationRequest{
		Name: fmt.Sprintf("projects/%s/locations/%s/models/%s/modelEvaluations/%s", projectID, location, modelID, modelEvaluationID),
	}

	evaluation, err := client.GetModelEvaluation(ctx, req)
	if err != nil {
		return fmt.Errorf("GetModelEvaluation: %v", err)
	}

	fmt.Fprintf(w, "Model evaluation name: %v\n", evaluation.GetName())
	fmt.Fprintf(w, "Model annotation spec id: %v\n", evaluation.GetAnnotationSpecId())
	fmt.Fprintf(w, "Create Time:\n")
	fmt.Fprintf(w, "\tseconds: %v\n", evaluation.GetCreateTime().GetSeconds())
	fmt.Fprintf(w, "\tnanos: %v\n", evaluation.GetCreateTime().GetNanos())
	fmt.Fprintf(w, "Evaluation example count: %v\n", evaluation.GetEvaluatedExampleCount())
	fmt.Fprintf(w, "Object detection model evaluation metrics: %v\n", evaluation.GetImageObjectDetectionEvaluationMetrics())

	return nil
}

Java

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.


import com.google.cloud.automl.v1.AutoMlClient;
import com.google.cloud.automl.v1.ModelEvaluation;
import com.google.cloud.automl.v1.ModelEvaluationName;
import java.io.IOException;

class GetModelEvaluation {

  static void getModelEvaluation() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String modelEvaluationId = "YOUR_MODEL_EVALUATION_ID";
    getModelEvaluation(projectId, modelId, modelEvaluationId);
  }

  // Get a model evaluation
  static void getModelEvaluation(String projectId, String modelId, String modelEvaluationId)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (AutoMlClient client = AutoMlClient.create()) {
      // Get the full path of the model evaluation.
      ModelEvaluationName modelEvaluationFullId =
          ModelEvaluationName.of(projectId, "us-central1", modelId, modelEvaluationId);

      // Get complete detail of the model evaluation.
      ModelEvaluation modelEvaluation = client.getModelEvaluation(modelEvaluationFullId);

      System.out.format("Model Evaluation Name: %s\n", modelEvaluation.getName());
      System.out.format("Model Annotation Spec Id: %s\n", modelEvaluation.getAnnotationSpecId());
      System.out.println("Create Time:");
      System.out.format("\tseconds: %s\n", modelEvaluation.getCreateTime().getSeconds());
      System.out.format("\tnanos: %s\n", modelEvaluation.getCreateTime().getNanos() / 1e9);
      System.out.format(
          "Evaluation Example Count: %d\n", modelEvaluation.getEvaluatedExampleCount());
      System.out.format(
          "Object Detection Model Evaluation Metrics: %s\n",
          modelEvaluation.getImageObjectDetectionEvaluationMetrics());
    }
  }
}

Node.js

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const projectId = 'YOUR_PROJECT_ID';
// const location = 'us-central1';
// const modelId = 'YOUR_MODEL_ID';
// const modelEvaluationId = 'YOUR_MODEL_EVALUATION_ID';

// Imports the Google Cloud AutoML library
const {AutoMlClient} = require('@google-cloud/automl').v1;

// Instantiates a client
const client = new AutoMlClient();

async function getModelEvaluation() {
  // Construct request
  const request = {
    name: client.modelEvaluationPath(
      projectId,
      location,
      modelId,
      modelEvaluationId
    ),
  };

  const [response] = await client.getModelEvaluation(request);

  console.log(`Model evaluation name: ${response.name}`);
  console.log(`Model annotation spec id: ${response.annotationSpecId}`);
  console.log(`Model display name: ${response.displayName}`);
  console.log('Model create time');
  console.log(`\tseconds ${response.createTime.seconds}`);
  console.log(`\tnanos ${response.createTime.nanos / 1e9}`);
  console.log(`Evaluation example count: ${response.evaluatedExampleCount}`);
  console.log(
    `Object detection model evaluation metrics: ${response.imageObjectDetectionEvaluationMetrics}`
  );
}

getModelEvaluation();

PHP

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

use Google\Cloud\AutoMl\V1\AutoMlClient;

/** Uncomment and populate these variables in your code */
// $projectId = '[Google Cloud Project ID]';
// $location = 'us-central1';
// $modelId = 'my_model_id_123';
// $modelEvaluationId = 'my_model_evaluation_id_123';

$client = new AutoMlClient();

try {
    // get full path of the model evaluation
    $formattedName = $client->modelEvaluationName(
        $projectId,
        $location,
        $modelId,
        $modelEvaluationId
    );

    $modelEvaluation = $client->getModelEvaluation($formattedName);

    // display model evaluation information
    $splitName = explode('/', $modelEvaluation->getName());
    printf('Model evaluation name: %s' . PHP_EOL, $modelEvaluation->getName());
    printf('Model evaluation id: %s' . PHP_EOL, end($splitName));
    printf('Model annotation spec id: %s' . PHP_EOL, $modelEvaluation->getAnnotationSpecId());
    printf('Create time' . PHP_EOL);
    printf('seconds: %d' . PHP_EOL, $modelEvaluation->getCreateTime()->getSeconds());
    printf('nanos : %d' . PHP_EOL, $modelEvaluation->getCreateTime()->getNanos());
    printf('Evaluation example count: %s' . PHP_EOL, $modelEvaluation->getEvaluatedExampleCount());
    printf('Model evaluation metrics: %s' . PHP_EOL, $modelEvaluation->getImageObjectDetectionEvaluationMetrics());

} finally {
    $client->close();
}

Python

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

from google.cloud import automl

# TODO(developer): Uncomment and set the following variables
# project_id = "YOUR_PROJECT_ID"
# model_id = "YOUR_MODEL_ID"
# model_evaluation_id = "YOUR_MODEL_EVALUATION_ID"

client = automl.AutoMlClient()
# Get the full path of the model evaluation.
model_evaluation_full_id = client.model_evaluation_path(
    project_id, "us-central1", model_id, model_evaluation_id
)

# Get complete detail of the model evaluation.
response = client.get_model_evaluation(model_evaluation_full_id)

print("Model evaluation name: {}".format(response.name))
print("Model annotation spec id: {}".format(response.annotation_spec_id))
print("Create Time:")
print("\tseconds: {}".format(response.create_time.seconds))
print("\tnanos: {}".format(response.create_time.nanos / 1e9))
print(
    "Evaluation example count: {}".format(response.evaluated_example_count)
)
print(
    "Object detection model evaluation metrics: {}".format(
        response.image_object_detection_evaluation_metrics
    )
)

Ruby

Before trying this sample, follow the setup instructions for this language on the Client Libraries page.

require "google/cloud/automl"

project_id = "YOUR_PROJECT_ID"
model_id = "YOUR_MODEL_ID"
model_evaluation_id = "YOUR_MODEL_EVALUATION_ID"

client = Google::Cloud::AutoML::AutoML.new

# Get the full path of the model evaluation.
model_evaluation_full_id = client.class.model_evaluation_path project_id, "us-central1", model_id, model_evaluation_id

# Get complete detail of the model evaluation.
model_evaluation = client.get_model_evaluation model_evaluation_full_id

puts "Model evaluation name: #{model_evaluation.name}"
puts "Model annotation spec id: #{model_evaluation.annotation_spec_id}"
puts "Create Time: #{model_evaluation.create_time.to_time}"
puts "Evaluation example count: #{model_evaluation.evaluated_example_count}"
puts "Object detection model evaluation metrics: #{model_evaluation.image_object_detection_evaluation_metrics}"

True positives, false negatives, and false positives (UI only)

In the UI you can inspect specific examples of model performance, namely true positive (TP), false negative (FN), and false positive (FP) instances from your training and validation sets.

Web UI

You can access the TP, FN, and FP views in the UI by selecting the Evaluate tab and then selecting any specific label.

By viewing the trends in these predictions, you can modify your training set to improve the model's performance.

True positive images are validation images provided to the trained model that the model annotated correctly:

True positives displayed

False negative images are likewise provided to the trained model, but the model failed to correctly annotate the object instances in them:

False negatives displayed

Finally, false positive images are those provided to the trained model in which the model annotated object instances outside of the specified regions:

False positives displayed

The model picks out interesting edge cases, which gives you an opportunity to refine your definitions and labels to help the model understand your label interpretations. For example, a stricter definition would help the model understand whether you consider a stuffed bell pepper a "salad." With repeated label, train, and evaluate loops, your model will surface other such ambiguous cases in your data.

You can also adjust the score threshold in this view of the UI; the TP, FN, and FP images shown will reflect the threshold change:

True positives with updated threshold

Interpreting evaluation metrics

An object detection model outputs many bounding boxes for an input image, each with a label and a score or confidence. The evaluation metrics help you answer some key performance questions about your model:

  • Am I getting the right number of boxes?
  • Does the model tend to give lower scores to edge cases?
  • How closely do the predicted boxes match the ground truth boxes?

Note that, as with the metrics for multi-label classification, these metrics do not point out any class confusion other than through generally lower scores.

When examining the model output for each image, you need a way to compare a pair of boxes (one ground truth box and one prediction) and determine how well they match. You have to consider:

  • Do the two boxes have the same label?
  • How much do the boxes overlap?
  • How confidently did the model predict the box?

To address the second requirement, we introduce a new measurement called Intersection over Union (IoU).

IoU and IoU threshold

Illustration of box intersection versus union

Intersection over Union determines how well two boxes match. IoU values range from 0 (no overlap) to 1 (identical boxes) and are computed by dividing the area common to both boxes by the area included in at least one of the boxes. The AutoML service lets you examine model performance at several different IoU thresholds.
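
The following Python sketch shows how IoU can be computed for two axis-aligned boxes; it illustrates the definition above and is not part of the AutoML API. Boxes are assumed to be (x_min, y_min, x_max, y_max) tuples.

def intersection_over_union(box_a, box_b):
    """Compute IoU for two boxes given as (x_min, y_min, x_max, y_max)."""
    # Coordinates of the intersection rectangle.
    x_left = max(box_a[0], box_b[0])
    y_top = max(box_a[1], box_b[1])
    x_right = min(box_a[2], box_b[2])
    y_bottom = min(box_a[3], box_b[3])
    if x_right <= x_left or y_bottom <= y_top:
        return 0.0  # The boxes do not overlap.

    intersection = (x_right - x_left) * (y_bottom - y_top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / (area_a + area_b - intersection)

# A predicted box that mostly overlaps a ground-truth box scores close to 1.
print(intersection_over_union((10, 10, 50, 50), (12, 12, 52, 52)))  # ~0.82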

Why would you want to change the IoU threshold?

Consider the use case of counting cars in a parking lot. Here it doesn't matter whether the box coordinates are highly accurate; you only care about getting the correct total number of boxes. A lower IoU threshold is appropriate for this case.

Image: low-threshold boxes around cars
Image credit: Nacho, "Smart" (CC BY 2.0), bounding boxes and text added.

Alternatively, consider measuring the size of a stain on fabric. In this case you need very accurate coordinates, so a much higher IoU threshold is appropriate.

Image: high-threshold box around a fabric stain
Image credit: Housing Works Thrift Shops, "Modern Sofa" (CC BY-SA 2.0), bounding box and text added.

Note that you don't have to retrain your model if you change your mind about which threshold fits your use case; you already have access to evaluation metrics across a range of IoU thresholds.

Score and score threshold

Like a classification model, an object detection model includes a score with its output (here, with each box). As with image classification, you can specify a score threshold after training to determine what counts as a "good enough" match. By changing the score threshold, you can tune the false positive and true positive rates to suit your particular model needs. A user who wants very high recall will generally apply a lower score threshold when processing model output.
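
As a sketch of how a score threshold acts on model output, the following Python snippet filters a list of hypothetical (label, score) predictions; the predictions shown are illustrative and not in the AutoML response format. A low threshold keeps nearly everything (higher recall, more false positives), while a high threshold keeps only confident boxes (higher precision, more misses).

# Illustrative predictions from one image: (label, score) pairs.
predictions = [
    ("Tomato", 0.93), ("Tomato", 0.61), ("Cheese", 0.42), ("Seafood", 0.12),
]

def apply_score_threshold(preds, threshold):
    """Keep only predictions whose score meets or exceeds the threshold."""
    return [p for p in preds if p[1] >= threshold]

print(apply_score_threshold(predictions, 0.1))  # keeps all four boxes
print(apply_score_threshold(predictions, 0.6))  # keeps only the two confident boxes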

Iterating on your model

If you're not happy with the quality level, you can go back to earlier steps to improve it:

  • Consider adding more images to any labels with low-quality bounding boxes.
  • You may need to add different types of images (for example, wider angles, higher or lower resolution, different points of view).
  • If you don't have enough training images, consider removing bounding box labels altogether.
  • Our training algorithms don't use your label names. If you have a label "door" and a label "door_with_knob", the algorithm has no way of determining the nuance between them other than from the images you provide.
  • Augment your data with more examples of true positives and true negatives. Especially important are examples that are close to the decision boundary (that is, likely to produce confusion, but still correctly labeled).
  • Specify your own training, test, and validation split. The tool assigns images randomly, but near-duplicate images can end up in the training and validation sets, which could lead to overfitting and then to poor performance on the test set.

Once you've made changes, train and evaluate a new model until you reach a high enough quality level.