Output only. The ID of the annotation spec that the model
evaluation applies to. The ID is empty for the overall
model evaluation.
Output only. The number of examples used for model evaluation,
i.e. examples for which the ground truth from the time of model
creation is compared against the annotations predicted by the
model. For the overall ModelEvaluation (i.e. with
annotation_spec_id not set), this is the total number of all
examples used for evaluation. Otherwise, this is the count of
examples that, according to the ground truth, were annotated with
the [annotation_spec_id][google.cloud.automl.v1beta1.ModelEvaluation.annotation_spec_id].
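For illustration, here is a minimal sketch of reading these two fields with the google-cloud-automl Python client. The project, location, and model IDs are placeholders, and the exact list_model_evaluations signature may vary across client library versions:

```python
from google.cloud import automl_v1beta1 as automl

client = automl.AutoMlClient()
# Hypothetical project, location, and model IDs.
model_name = client.model_path("my-project", "us-central1", "my-model-id")

for evaluation in client.list_model_evaluations(parent=model_name):
    if not evaluation.annotation_spec_id:
        # An empty annotation_spec_id marks the overall model evaluation:
        # evaluated_example_count covers every example used for evaluation.
        print(f"Overall: {evaluation.evaluated_example_count} examples")
    else:
        # Per-label evaluation: the count covers only the examples that
        # the ground truth annotated with this annotation spec.
        print(f"Spec {evaluation.annotation_spec_id}: "
              f"{evaluation.evaluated_example_count} examples")
```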
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-11-01 UTC."],[],[]]