Interpret prediction results from text classification models

After requesting a prediction, Vertex AI returns results based on your model's objective. Predictions from multi-label classification models return one or more labels for each document and a confidence score for each label. For single-label classification models, predictions return only one label and confidence score per document.

The confidence score communicates how strongly your model associates each class or label with a test item. The higher the number, the higher the model's confidence that the label applies to that item. You decide how high the confidence score must be before you accept the model's results.
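Accepting or rejecting labels based on a score cutoff can be sketched as a simple threshold check. The label names and confidence values below are illustrative, and `accept_labels` is a hypothetical helper, not part of the Vertex AI client library:

```python
def accept_labels(display_names, confidences, threshold=0.5):
    """Keep only the labels whose confidence meets the chosen threshold."""
    return [
        (name, score)
        for name, score in zip(display_names, confidences)
        if score >= threshold
    ]

# Hypothetical predictions for a single document.
labels = ["GreatService", "Suggestion", "InfoRequest"]
scores = [0.90, 0.82, 0.45]

# With a threshold of 0.7, only the first two labels are accepted.
print(accept_labels(labels, scores, threshold=0.7))
```

Raising the threshold makes the filter stricter, which is exactly the trade-off the score threshold slider (described below) lets you explore interactively.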

Score threshold slider

In the Google Cloud console, Vertex AI provides a slider to adjust the confidence threshold for all classes or labels, or for an individual class or label. The slider is available on a model's detail page in the Evaluate tab. The confidence threshold is the minimum confidence the model must have to assign a class or label to a test item. As you adjust the threshold, you can see how your model's precision and recall change. Higher thresholds typically increase precision and lower recall.
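The precision/recall trade-off behind the slider can be illustrated with a small calculation. This is a generic sketch using made-up scores and ground-truth flags, not Vertex AI evaluation code:

```python
def precision_recall_at(threshold, scores, truths):
    """Compute precision and recall when a label is assigned at or above threshold.

    scores: model confidences per item; truths: 1 if the label truly applies.
    """
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, t in zip(preds, truths) if p and t)
    fp = sum(1 for p, t in zip(preds, truths) if p and not t)
    fn = sum(1 for p, t in zip(preds, truths) if not p and t)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical confidences and ground truth for five test items.
scores = [0.95, 0.80, 0.60, 0.40, 0.20]
truths = [1, 1, 0, 1, 0]

for t in (0.3, 0.5, 0.9):
    p, r = precision_recall_at(t, scores, truths)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

With these numbers, moving the threshold from 0.3 to 0.9 raises precision from 0.75 to 1.00 while recall drops from 1.00 to 0.33, mirroring what the slider shows in the console.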

Example batch prediction output

The following sample shows the prediction results for a multi-label classification model. The model applied the GreatService, Suggestion, and InfoRequest labels to the submitted document. The confidence values correspond to the labels in the same order. In this example, the model predicted GreatService as the most relevant label.

{
  "instance": {"content": "gs://bucket/text.txt", "mimeType": "text/plain"},
  "predictions": [
    {
      "ids": [
        "1234567890123456789",
        "2234567890123456789",
        "3234567890123456789"
      ],
      "displayNames": [
        "GreatService",
        "Suggestion",
        "InfoRequest"
      ],
      "confidences": [
        0.8986392080783844,
        0.81984345316886902,
        0.7722353458404541
      ]
    }
  ]
}
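A record like the one above can be processed with standard JSON tooling. The sketch below parses a single result (embedded here as a string; in a real batch job you would read lines from the output files) and pairs each label with its confidence, sorted from most to least confident:

```python
import json

# One prediction record, as returned in the batch prediction output.
record = json.loads("""
{
  "instance": {"content": "gs://bucket/text.txt", "mimeType": "text/plain"},
  "predictions": [
    {
      "ids": ["1234567890123456789", "2234567890123456789", "3234567890123456789"],
      "displayNames": ["GreatService", "Suggestion", "InfoRequest"],
      "confidences": [0.8986392080783844, 0.8198434531688690, 0.7722353458404541]
    }
  ]
}
""")

prediction = record["predictions"][0]

# Pair each label with its confidence and sort by confidence, highest first.
pairs = sorted(
    zip(prediction["displayNames"], prediction["confidences"]),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in pairs:
    print(f"{name}: {score:.3f}")
```

Here GreatService sorts first, matching the example's most relevant label; applying a confidence threshold to `pairs` would then drop any labels below your chosen cutoff.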