REST Resource: projects.locations.models.modelEvaluations

Resource: ModelEvaluation

Evaluation results of a model.

JSON representation
{
  "name": string,
  "annotationSpecId": string,
  "displayName": string,
  "createTime": string,
  "evaluatedExampleCount": number,

  // Union field metrics can be only one of the following:
  "classificationEvaluationMetrics": {
    object (ClassificationEvaluationMetrics)
  },
  "regressionEvaluationMetrics": {
    object (RegressionEvaluationMetrics)
  },
  "tablesEvaluationMetrics": {
    object (TablesEvaluationMetrics)
  }
  // End of list of possible types for union field metrics.
}
Fields
name

string

Output only. Resource name of the model evaluation. Format:

projects/{project_id}/locations/{locationId}/models/{modelId}/modelEvaluations/{model_evaluation_id}
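
As an illustration, the resource name can be assembled from its components; every ID below is a placeholder:

project_id = "my-project"           # placeholder
location_id = "us-central1"         # placeholder
model_id = "TBL0000000000"          # placeholder
model_evaluation_id = "1234567890"  # placeholder

name = (
    f"projects/{project_id}/locations/{location_id}"
    f"/models/{model_id}/modelEvaluations/{model_evaluation_id}"
)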

annotationSpecId

string

Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. These are the distinct values of the target column at the moment of the evaluation; for this problem type, annotation specs do not exist in the dataset.

NOTE: Currently there is no way to obtain the displayName of the annotation spec from its ID. To see the display_names, review the model evaluations in the AutoML UI.

displayName

string

Output only. The value of displayName at the moment the model was trained. Because this field captures the value at training time, different models trained from the same dataset may return different values, since display names could have been changed between the two models' trainings. For the Tables CLASSIFICATION prediction type, the distinct values of the target column at the moment of the model evaluation are populated here. The displayName is empty for the overall model evaluation.

createTime

string (Timestamp format)

Output only. Timestamp when this model evaluation was created.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".
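
Python's datetime stores at most microsecond precision, so a sketch that parses this format might truncate the fractional seconds first (assuming the trailing "Z" and a dot-separated fraction):

from datetime import datetime, timezone

def parse_rfc3339(ts: str) -> datetime:
    # Drop the trailing "Z" and keep at most 6 fractional digits,
    # since datetime cannot represent nanoseconds.
    ts = ts.rstrip("Z")
    if "." in ts:
        whole, frac = ts.split(".")
        ts = f"{whole}.{frac[:6]}"
    return datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)

print(parse_rfc3339("2014-10-02T15:01:23.045123456Z"))
# 2014-10-02 15:01:23.045123+00:00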

evaluatedExampleCount

number

Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model. For the overall ModelEvaluation (i.e. with annotationSpecId not set), this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the annotationSpecId.

Union field metrics. Output only. Problem type specific evaluation metrics. metrics can be only one of the following:
classificationEvaluationMetrics

object (ClassificationEvaluationMetrics)

Evaluation metrics for classification models.

AutoML Tables classification applies when the target column has a data type of either CATEGORY or ARRAY(CATEGORY).

regressionEvaluationMetrics

object (RegressionEvaluationMetrics)

Model evaluation metrics for Tables regression. A Tables problem is considered regression when the target column has the FLOAT64 DataType.

tablesEvaluationMetrics

object (TablesEvaluationMetrics)

Evaluation metrics for Tables models.
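
As a sketch, an evaluation can be fetched and its metrics union inspected with the google-cloud-automl Python client; the client calls, the path helper, and all IDs below are assumptions for illustration:

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()

# Placeholder IDs, for illustration only.
name = client.model_evaluation_path(
    "my-project", "us-central1", "TBL0000000000", "1234567890"
)
evaluation = client.get_model_evaluation(name=name)

print(evaluation.display_name, evaluation.evaluated_example_count)
# Exactly one member of the metrics union is populated, e.g. for regression:
print(evaluation.regression_evaluation_metrics.root_mean_squared_error)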

ClassificationEvaluationMetrics

Model evaluation metrics for classification problems. For information on the prediction type, see BatchPredictRequest.params.

JSON representation
{
  "auPrc": number,
  "baseAuPrc": number,
  "auRoc": number,
  "logLoss": number,
  "confidenceMetricsEntry": [
    {
      object (ConfidenceMetricsEntry)
    }
  ],
  "confusionMatrix": {
    object (ConfusionMatrix)
  },
  "annotationSpecId": [
    string
  ]
}
Fields
auPrc

number

Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.

baseAuPrc
(deprecated)

number

Output only. The Area Under Precision-Recall Curve metric based on priors. Micro-averaged for the overall evaluation. Deprecated.

auRoc

number

Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.

logLoss

number

Output only. The Log Loss metric.

confidenceMetricsEntry[]

object (ConfidenceMetricsEntry)

Output only. Metrics for each confidenceThreshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and positionThreshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of positionThreshold, but from these no aggregated metrics are computed.
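
As a sketch over the JSON representation above, a precision-recall curve can be read off these entries by keeping only the aggregated ones (positionThreshold = INT32_MAX_VALUE) and sorting by confidence threshold:

INT32_MAX_VALUE = 2**31 - 1

def pr_curve(entries):
    # Keep only the aggregated entries and order them by threshold.
    aggregated = [e for e in entries
                  if e["positionThreshold"] == INT32_MAX_VALUE]
    aggregated.sort(key=lambda e: e["confidenceThreshold"])
    return [(e["recall"], e["precision"]) for e in aggregated]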

confusionMatrix

object (ConfusionMatrix)

Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for evaluation per label.

annotationSpecId[]

string

Output only. The annotation spec ids used for this evaluation.

ConfidenceMetricsEntry

Metrics for a single confidence threshold.

JSON representation
{
  "confidenceThreshold": number,
  "positionThreshold": number,
  "recall": number,
  "precision": number,
  "falsePositiveRate": number,
  "f1Score": number,
  "recallAt1": number,
  "precisionAt1": number,
  "falsePositiveRateAt1": number,
  "f1ScoreAt1": number,
  "truePositiveCount": string,
  "falsePositiveCount": string,
  "falseNegativeCount": string,
  "trueNegativeCount": string
}
Fields
confidenceThreshold

number

Output only. Metrics are computed with the assumption that the model never returns predictions with a score lower than this value.

positionThreshold

number

Output only. Metrics are computed with the assumption that the model always returns at most this many predictions (ordered by score, descending), all of which still need to meet the confidenceThreshold.

recall

number

Output only. Recall (True Positive Rate) for the given confidence threshold.

precision

number

Output only. Precision for the given confidence threshold.

falsePositiveRate

number

Output only. False Positive Rate for the given confidence threshold.

f1Score

number

Output only. The harmonic mean of recall and precision.

recallAt1

number

Output only. The Recall (True Positive Rate) when, for each example, only the label with the highest prediction score (and a score not below the confidence threshold) is considered.

precisionAt1

number

Output only. The precision when, for each example, only the label with the highest prediction score (and a score not below the confidence threshold) is considered.

falsePositiveRateAt1

number

Output only. The False Positive Rate when, for each example, only the label with the highest prediction score (and a score not below the confidence threshold) is considered.

f1ScoreAt1

number

Output only. The harmonic mean of recallAt1 and precisionAt1.

truePositiveCount

string (int64 format)

Output only. The number of model created labels that match a ground truth label.

falsePositiveCount

string (int64 format)

Output only. The number of model created labels that do not match a ground truth label.

falseNegativeCount

string (int64 format)

Output only. The number of ground truth labels that are not matched by a model created label.

trueNegativeCount

string (int64 format)

Output only. The number of labels that were not created by the model but that, had they been created, would not have matched a ground truth label.
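
The rates above follow from these four counts by the standard definitions; a minimal sketch (note that the counts arrive as int64-formatted strings in JSON):

def rates(entry):
    tp = int(entry["truePositiveCount"])
    fp = int(entry["falsePositiveCount"])
    fn = int(entry["falseNegativeCount"])
    tn = int(entry["trueNegativeCount"])

    recall = tp / (tp + fn)      # true positive rate
    precision = tp / (tp + fp)
    fpr = fp / (fp + tn)         # false positive rate
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, fpr, f1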

ConfusionMatrix

Confusion matrix of the model running the classification.

JSON representation
{
  "annotationSpecId": [
    string
  ],
  "row": [
    {
      object (Row)
    }
  ]
}
Fields
annotationSpecId[]

string

Output only. IDs of the annotation specs used in the confusion matrix.

For Tables, this is populated as a list of displayName values instead of IDs.

row[]

object (Row)

Output only. Rows in the confusion matrix. The number of rows is equal to the size of annotationSpecId. row[i].exampleCount[j] is the number of examples that have ground truth of the annotationSpecId[i] and are predicted as annotationSpecId[j] by the model being evaluated.

Row

Output only. A row in the confusion matrix.

JSON representation
{
  "exampleCount": [
    number
  ]
}
Fields
exampleCount[]

number

Output only. Value of the specific cell in the confusion matrix. The number of values each row has (i.e. the length of the row) is equal to the length of the annotationSpecId field or, if that one is not populated, the length of the displayName field.
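
For example, reading one cell of the matrix from its JSON representation (a sketch; the helper name is hypothetical):

def confusion_count(matrix, i, j):
    # Examples with ground truth annotationSpecId[i]
    # that the model predicted as annotationSpecId[j].
    return matrix["row"][i]["exampleCount"][j]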

RegressionEvaluationMetrics

Metrics for regression problems.

JSON representation
{
  "rootMeanSquaredError": number,
  "meanAbsoluteError": number,
  "meanAbsolutePercentageError": number,
  "rSquared": number,
  "rootMeanSquaredLogError": number
}
Fields
rootMeanSquaredError

number

Output only. Root Mean Squared Error (RMSE).

meanAbsoluteError

number

Output only. Mean Absolute Error (MAE).

meanAbsolutePercentageError

number

Output only. Mean absolute percentage error. Only set if all ground truth values are positive.

rSquared

number

Output only. R squared.

rootMeanSquaredLogError

number

Output only. Root mean squared log error.
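
For reference, these are the standard definitions computed over paired ground-truth and predicted values; a sketch in plain Python (MAPE and RMSLE assume positive values, and AutoML's exact computation may differ):

import math

def regression_metrics(truth, pred):
    n = len(truth)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(truth, pred)) / n
    mape = 100 * sum(abs((t - p) / t) for t, p in zip(truth, pred)) / n
    mean_t = sum(truth) / n
    r2 = 1 - (sum((t - p) ** 2 for t, p in zip(truth, pred))
              / sum((t - mean_t) ** 2 for t in truth))
    rmsle = math.sqrt(sum((math.log1p(t) - math.log1p(p)) ** 2
                          for t, p in zip(truth, pred)) / n)
    return rmse, mae, mape, r2, rmsle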

TablesEvaluationMetrics

Model evaluation metrics for Tables problems.

JSON representation
{

  // Union field metrics can be only one of the following:
  "classificationMetrics": {
    object (TablesClassificationMetrics)
  },
  "regressionMetrics": {
    object (TablesRegressionMetrics)
  }
  // End of list of possible types for union field metrics.
}
Fields
Union field metrics. Evaluation metrics specific to classification problem (if target column is of CATEGORY or ARRAY(CATEGORY) DataType), or regression problem (if target column is of FLOAT64 DataType). metrics can be only one of the following:
classificationMetrics

object (TablesClassificationMetrics)

Classification metrics.

regressionMetrics

object (TablesRegressionMetrics)

Regression metrics.

TablesClassificationMetrics

Metrics for Tables classification problems.

JSON representation
{
  "curveMetrics": [
    {
      object (CurveMetrics)
    }
  ]
}
Fields
curveMetrics[]

object (CurveMetrics)

Metrics building a curve.

CurveMetrics

Metrics curve data point for a single value.

JSON representation
{
  "value": string,
  "positionThreshold": number,
  "confidenceMetricsEntries": [
    {
      object (TablesConfidenceMetricsEntry)
    }
  ],
  "aucPr": number,
  "aucRoc": number,
  "logLoss": number
}
Fields
value

string

The CATEGORY row value (unnested, in the ARRAY case) that the curve metrics are for.

positionThreshold

number

The position threshold value used to compute the metrics.

confidenceMetricsEntries[]

object (TablesConfidenceMetricsEntry)

Metrics that have confidence thresholds. Precision-recall curve and ROC curve can be derived from them.

aucPr

number

The area under the precision-recall curve.

aucRoc

number

The area under receiver operating characteristic curve.

logLoss

number

The Log loss metric.
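
A sketch that picks out the curve metrics for one target value from a TablesClassificationMetrics payload (field names follow the JSON representations above; the helper name is hypothetical):

def curve_for_value(classification_metrics, value):
    for cm in classification_metrics["curveMetrics"]:
        if cm["value"] == value:
            return cm["aucPr"], cm["aucRoc"], cm["logLoss"]
    return None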

TablesConfidenceMetricsEntry

Metrics for a single confidence threshold.

JSON representation
{
  "confidenceThreshold": number,
  "falsePositiveRate": number,
  "truePositiveRate": number,
  "recall": number,
  "precision": number,
  "f1Score": number,
  "truePositiveCount": string,
  "falsePositiveCount": string,
  "trueNegativeCount": string,
  "falseNegativeCount": string
}
Fields
confidenceThreshold

number

The confidence threshold value used to compute the metrics.

falsePositiveRate

number

FPR = #false positives / (#false positives + #true negatives)

truePositiveRate

number

TPR = #true positives / (#true positives + #false negatives)

recall

number

Recall = #true positives / (#true positives + #false negatives).

precision

number

Precision = #true positives / (#true positives + #false positives).

f1Score

number

The harmonic mean of recall and precision. (2 * precision * recall) / (precision + recall)

truePositiveCount

string (int64 format)

True positive count.

falsePositiveCount

string (int64 format)

False positive count.

trueNegativeCount

string (int64 format)

True negative count.

falseNegativeCount

string (int64 format)

False negative count.

TablesRegressionMetrics

Metrics for Tables regression problems.

JSON representation
{
  "rootMeanSquaredError": number,
  "meanAbsoluteError": number,
  "meanAbsolutePercentageError": number,
  "rSquared": number,
  "rootMeanSquaredLogError": number
}
Fields
rootMeanSquaredError

number

Root mean squared error.

meanAbsoluteError

number

Mean absolute error.

meanAbsolutePercentageError

number

Mean absolute percentage error, only set if all of the target column's values are positive.

rSquared

number

R squared.

rootMeanSquaredLogError

number

Root mean squared log error.

Methods

get

Gets a model evaluation.

list

Lists model evaluations.
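
A minimal sketch that lists all evaluations for a model with the google-cloud-automl Python client; the client calls and all IDs below are assumptions for illustration:

from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()

# Placeholder IDs, for illustration only.
parent = client.model_path("my-project", "us-central1", "TBL0000000000")
for evaluation in client.list_model_evaluations(parent=parent):
    print(evaluation.name, evaluation.evaluated_example_count)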