Resource: EvaluationRun
EvaluationRun is a resource that represents a single evaluation run, which includes a set of prompts, model responses, evaluation configuration, and the resulting metrics.
name (string)
Identifier. The resource name of the EvaluationRun. This is a unique identifier. Format: projects/{project}/locations/{location}/evaluationRuns/{evaluationRun}
displayName (string)
Required. The display name of the Evaluation Run.
metadata (value)
Optional. Metadata about the evaluation run; can be used by the caller to store additional tracking information about the evaluation run.
labels (map (key: string, value: string))
Optional. Labels for the evaluation run.
dataSource (object (DataSource))
Required. The data source for the evaluation run.
inferenceConfigs (map (key: string, value: object (InferenceConfig)))
Optional. The candidate-to-inference-config map for the evaluation run. The candidate can be up to 128 characters long and can consist of any UTF-8 characters.
evaluationConfig (object (EvaluationConfig))
Required. The configuration used for the evaluation.
state (enum (State))
Output only. The state of the evaluation run.
error (object (Status))
Output only. Only populated when the evaluation run's state is FAILED or CANCELLED.
evaluationResults (object (EvaluationResults))
Output only. The results of the evaluation run. Only populated when the evaluation run's state is SUCCEEDED.
createTime (string (Timestamp format))
Output only. Time when the evaluation run was created.
Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
completionTime (string (Timestamp format))
Output only. Time when the evaluation run was completed.
Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
evaluationSetSnapshot (string)
Output only. The specific evaluation set of the evaluation run. For runs with an evaluation set input, this will be that same set. For runs with BigQuery input, it's the sampled BigQuery dataset.
JSON representation:
{
  "name": string,
  "displayName": string,
  "metadata": value,
  "labels": {
    string: string,
    ...
  },
  "dataSource": {
    object (DataSource)
  },
  "inferenceConfigs": {
    string: {
      object (InferenceConfig)
    },
    ...
  },
  "evaluationConfig": {
    object (EvaluationConfig)
  },
  "state": enum (State),
  "error": {
    object (Status)
  },
  "evaluationResults": {
    object (EvaluationResults)
  },
  "createTime": string,
  "completionTime": string,
  "evaluationSetSnapshot": string
}
DataSource
The data source for the evaluation run.
source (Union type)
source can be only one of the following:
evaluationSet (string)
The EvaluationSet resource name. Format: projects/{project}/locations/{location}/evaluationSets/{evaluationSet}
bigqueryRequestSet (object (BigQueryRequestSet))
Evaluation data in BigQuery.
JSON representation:
{
  // source
  "evaluationSet": string,
  "bigqueryRequestSet": {
    object (BigQueryRequestSet)
  }
  // Union type
}
BigQueryRequestSet
The request set for the evaluation run.
uri (string)
Required. The URI of a BigQuery table, e.g. bq://projectId.bqDatasetId.bqTableId.
promptColumn (string)
Optional. The name of the column that contains the requests to evaluate. This will be in evaluationItem.EvalPrompt format.
rubricsColumn (string)
Optional. The name of the column that contains the rubrics. This is in evaluation_rubric.RubricGroup format.
candidateResponseColumns (map (key: string, value: string))
Optional. Map of candidate name to candidate response column name. The column will be in evaluationItem.CandidateResponse format.
samplingConfig (object (SamplingConfig))
Optional. The sampling config for the BigQuery resource.
JSON representation:
{
  "uri": string,
  "promptColumn": string,
  "rubricsColumn": string,
  "candidateResponseColumns": {
    string: string,
    ...
  },
  "samplingConfig": {
    object (SamplingConfig)
  }
}
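As an illustration, a bigqueryRequestSet that compares two candidates against a logged request table might look like the following sketch; the table URI, candidate names, and column names are hypothetical.

```
{
  "uri": "bq://my-project.eval_logs.requests",
  "promptColumn": "request",
  "candidateResponseColumns": {
    "prod-model": "prod_response",
    "new-model": "new_response"
  },
  "samplingConfig": {
    "samplingCount": 500,
    "samplingMethod": "RANDOM"
  }
}
```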
SamplingConfig
The sampling config.
samplingCount (integer)
Optional. The total number of logged data points to import. If the available data is less than the sampling count, all data will be imported. Default is 100.
samplingMethod (enum (SamplingMethod))
Optional. The sampling method to use.
samplingDuration (string (Duration format))
Optional. How long to wait before sampling data from the BigQuery table. If not specified, defaults to 0.
A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
JSON representation:
{
  "samplingCount": integer,
  "samplingMethod": enum (SamplingMethod),
  "samplingDuration": string
}
SamplingMethod
The sampling method to use.
| Enums | |
|---|---|
| SAMPLING_METHOD_UNSPECIFIED | Unspecified sampling method. |
| RANDOM | Random sampling. |
InferenceConfig
An inference config used for model inference during the evaluation run.
model (string)
Optional. The fully qualified name of the publisher model or endpoint to use.
Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*
Endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}
model_config (Union type)
model_config can be only one of the following:
generationConfig (object (GenerationConfig))
Optional. Generation config.
JSON representation:
{
  "model": string,
  // model_config
  "generationConfig": {
    object (GenerationConfig)
  }
  // Union type
}
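A minimal InferenceConfig sketch, assuming a Gemini publisher model; the project, location, and model name are illustrative.

```
{
  "model": "projects/my-project/locations/us-central1/publishers/google/models/gemini-2.0-flash",
  "generationConfig": {
    "temperature": 0.2,
    "maxOutputTokens": 1024
  }
}
```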
GenerationConfig
Configuration for content generation.
This message contains all the parameters that control how the model generates content. It allows you to influence the randomness, length, and structure of the output.
stopSequences[] (string)
Optional. A list of character sequences that will stop the model from generating further tokens. If a stop sequence is generated, the output will end at that point. This is useful for controlling the length and structure of the output. For example, you can use ["\n", "###"] to stop generation at a new line or a specific marker.
responseMimeType (string)
Optional. The IANA standard MIME type of the response. The model will generate output that conforms to this MIME type. Supported values include 'text/plain' (default) and 'application/json'. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
responseModalities[] (enum (Modality))
Optional. The modalities of the response. The model will generate a response that includes all the specified modalities. For example, if this is set to [TEXT, IMAGE], the response will include both text and an image.
thinkingConfig (object (ThinkingConfig))
Optional. Configuration for thinking features. An error will be returned if this field is set for models that don't support thinking.
temperature (number)
Optional. Controls the randomness of the output. A higher temperature results in more creative and diverse responses, while a lower temperature makes the output more predictable and focused. The valid range is (0.0, 2.0].
topP (number)
Optional. Specifies the nucleus sampling threshold. The model considers only the smallest set of tokens whose cumulative probability is at least topP. This helps generate more diverse and less repetitive responses. For example, a topP of 0.9 means the model considers tokens until the cumulative probability of the tokens to select from reaches 0.9. It's recommended to adjust either temperature or topP, but not both.
topK (number)
Optional. Specifies the top-k sampling threshold. The model considers only the top k most probable tokens for the next token. This can be useful for generating more coherent and less random text. For example, a topK of 40 means the model will choose the next word from the 40 most likely words.
candidateCount (integer)
Optional. The number of candidate responses to generate. A higher candidateCount can provide more options to choose from, but it also consumes more resources. This can be useful for generating a variety of responses and selecting the best one.
maxOutputTokens (integer)
Optional. The maximum number of tokens to generate in the response. A token is approximately four characters. The default value varies by model. This parameter can be used to control the length of the generated text and prevent overly long responses.
responseLogprobs (boolean)
Optional. If set to true, the log probabilities of the output tokens are returned. Log probabilities are the logarithm of the probability of a token appearing in the output. A higher log probability means the token is more likely to be generated. This can be useful for analyzing the model's confidence in its own output and for debugging.
logprobs (integer)
Optional. The number of top log probabilities to return for each token. This can be used to see which other tokens were considered likely candidates for a given position. A higher value will return more options, but it will also increase the size of the response.
presencePenalty (number)
Optional. Penalizes tokens that have already appeared in the generated text. A positive value encourages the model to generate more diverse and less repetitive text. Valid values range over [-2.0, 2.0].
frequencyPenalty (number)
Optional. Penalizes tokens based on their frequency in the generated text. A positive value helps to reduce the repetition of words and phrases. Valid values range over [-2.0, 2.0].
seed (integer)
Optional. A seed for the random number generator. By setting a seed, you can make the model's output mostly deterministic. For a given prompt and parameters (like temperature, topP, etc.), the model will produce the same response every time, although absolute determinism is not guaranteed. This is different from parameters like temperature, which control the level of randomness. seed ensures that the "random" choices the model makes are the same on every run, making it essential for testing and ensuring reproducible results.
responseSchema (object (Schema))
Optional. Lets you specify a schema for the model's response, ensuring that the output conforms to a particular structure. This is useful for generating structured data such as JSON. The schema is a subset of the OpenAPI 3.0 Schema Object.
When this field is set, you must also set the responseMimeType to application/json.
responseJsonSchema (value)
Optional. An alternative to responseSchema that allows you to define the output schema using the JSON Schema standard. This can be useful for developers who are already familiar with JSON Schema.
While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported: $id, $defs, $ref, $anchor, type, format, title, description, enum (for strings and numbers), items, prefixItems, minItems, maxItems, minimum, maximum, anyOf, oneOf (interpreted the same as anyOf), properties, additionalProperties, and required.
The non-standard propertyOrdering property may also be set.
Cyclic references are supported to a limited degree and can only be used within non-required properties. When $ref is used in a sub-schema, no other properties are allowed, except for those starting with $. When this field is set, responseSchema must be omitted and responseMimeType must be set to application/json.
routingConfig (object (RoutingConfig))
Optional. Routing configuration.
audioTimestamp (boolean)
Optional. If enabled, audio timestamps will be included in the request to the model. This can be useful for synchronizing audio with other modalities in the response.
mediaResolution (enum (MediaResolution))
Optional. The token resolution at which input media content is sampled. This is used to control the trade-off between the quality of the response and the number of tokens used to represent the media. A higher resolution allows the model to perceive more detail, which can lead to a more nuanced response, but it will also use more tokens. This does not affect the image dimensions sent to the model.
speechConfig (object (SpeechConfig))
Optional. The speech generation config.
enableAffectiveDialog (boolean)
Optional. If enabled, the model will detect emotions and adapt its responses accordingly. For example, if the model detects that the user is frustrated, it may provide a more empathetic response.
imageConfig (object (ImageConfig))
Optional. Config for image generation features.
JSON representation:
{
  "stopSequences": [ string ],
  "responseMimeType": string,
  "responseModalities": [ enum (Modality) ],
  "thinkingConfig": {
    object (ThinkingConfig)
  },
  "temperature": number,
  "topP": number,
  "topK": number,
  "candidateCount": integer,
  "maxOutputTokens": integer,
  "responseLogprobs": boolean,
  "logprobs": integer,
  "presencePenalty": number,
  "frequencyPenalty": number,
  "seed": integer,
  "responseSchema": {
    object (Schema)
  },
  "responseJsonSchema": value,
  "routingConfig": {
    object (RoutingConfig)
  },
  "audioTimestamp": boolean,
  "mediaResolution": enum (MediaResolution),
  "speechConfig": {
    object (SpeechConfig)
  },
  "enableAffectiveDialog": boolean,
  "imageConfig": {
    object (ImageConfig)
  }
}
RoutingConfig
The configuration for routing the request to a specific model. This can be used to control which model is used for the generation, either automatically or by specifying a model name.
routing_config (Union type)
routing_config can be only one of the following:
autoMode (object (AutoRoutingMode))
In this mode, the model is selected automatically based on the content of the request.
manualMode (object (ManualRoutingMode))
In this mode, the model is specified manually.
JSON representation:
{
  // routing_config
  "autoMode": {
    object (AutoRoutingMode)
  },
  "manualMode": {
    object (ManualRoutingMode)
  }
  // Union type
}
AutoRoutingMode
The configuration for automated routing.
When automated routing is specified, the routing will be determined by the pretrained routing model and customer provided model routing preference.
modelRoutingPreference (enum (ModelRoutingPreference))
The model routing preference.
JSON representation:
{ "modelRoutingPreference": enum (ModelRoutingPreference) }
ModelRoutingPreference
The model routing preference.
| Enums | |
|---|---|
| UNKNOWN | Unspecified model routing preference. |
| PRIORITIZE_QUALITY | The model will be selected to prioritize the quality of the response. |
| BALANCED | The model will be selected to balance quality and cost. |
| PRIORITIZE_COST | The model will be selected to prioritize the cost of the request. |
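Hedged sketches of both routing modes; the model name in the manual variant is illustrative.

```
// Automated routing, letting the router balance quality and cost:
{ "autoMode": { "modelRoutingPreference": "BALANCED" } }

// Manual routing to a specific public model:
{ "manualMode": { "modelName": "gemini-2.0-flash" } }
```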
ManualRoutingMode
The configuration for manual routing.
When manual routing is specified, the model will be selected based on the model name provided.
modelName (string)
The name of the model to use. Only public LLM models are accepted.
JSON representation:
{ "modelName": string }
Modality
The modalities of the response.
| Enums | |
|---|---|
| MODALITY_UNSPECIFIED | Unspecified modality. Will be processed as text. |
| TEXT | Text modality. |
| IMAGE | Image modality. |
| AUDIO | Audio modality. |
MediaResolution
Media resolution for the input media.
| Enums | |
|---|---|
| MEDIA_RESOLUTION_UNSPECIFIED | Media resolution has not been set. |
| MEDIA_RESOLUTION_LOW | Media resolution set to low (64 tokens). |
| MEDIA_RESOLUTION_MEDIUM | Media resolution set to medium (256 tokens). |
| MEDIA_RESOLUTION_HIGH | Media resolution set to high (zoomed reframing with 256 tokens). |
SpeechConfig
Configuration for speech generation.
voiceConfig (object (VoiceConfig))
The configuration for the voice to use.
languageCode (string)
Optional. The language code (ISO 639-1) for the speech synthesis.
multiSpeakerVoiceConfig (object (MultiSpeakerVoiceConfig))
The configuration for a multi-speaker text-to-speech request. This field is mutually exclusive with voiceConfig.
JSON representation:
{
  "voiceConfig": {
    object (VoiceConfig)
  },
  "languageCode": string,
  "multiSpeakerVoiceConfig": {
    object (MultiSpeakerVoiceConfig)
  }
}
VoiceConfig
Configuration for a voice.
voice_config (Union type)
voice_config can be only one of the following:
prebuiltVoiceConfig (object (PrebuiltVoiceConfig))
The configuration for a prebuilt voice.
JSON representation:
{
  // voice_config
  "prebuiltVoiceConfig": {
    object (PrebuiltVoiceConfig)
  }
  // Union type
}
PrebuiltVoiceConfig
Configuration for a prebuilt voice.
voiceName (string)
The name of the prebuilt voice to use.
JSON representation:
{ "voiceName": string }
MultiSpeakerVoiceConfig
Configuration for a multi-speaker text-to-speech request.
speakerVoiceConfigs[] (object (SpeakerVoiceConfig))
Required. A list of configurations for the voices of the speakers. Exactly two speaker voice configurations must be provided.
JSON representation:
{
  "speakerVoiceConfigs": [
    {
      object (SpeakerVoiceConfig)
    }
  ]
}
SpeakerVoiceConfig
Configuration for a single speaker in a multi-speaker setup.
speaker (string)
Required. The name of the speaker. This should be the same as the speaker name used in the prompt.
voiceConfig (object (VoiceConfig))
Required. The configuration for the voice of this speaker.
JSON representation:
{
  "speaker": string,
  "voiceConfig": {
    object (VoiceConfig)
  }
}
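Putting the pieces together, a speechConfig for a two-speaker dialogue might look like the sketch below. It satisfies the exactly-two-speakers requirement and respects the mutual exclusion with voiceConfig; the speaker names and prebuilt voice names are illustrative.

```
{
  "multiSpeakerVoiceConfig": {
    "speakerVoiceConfigs": [
      {
        "speaker": "Alice",
        "voiceConfig": {
          "prebuiltVoiceConfig": { "voiceName": "Kore" }
        }
      },
      {
        "speaker": "Bob",
        "voiceConfig": {
          "prebuiltVoiceConfig": { "voiceName": "Puck" }
        }
      }
    ]
  },
  "languageCode": "en-US"
}
```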
ThinkingConfig
Configuration for the model's thinking features.
"Thinking" is a process where the model breaks down a complex task into smaller, manageable steps. This allows the model to reason about the task, plan its approach, and execute the plan to generate a high-quality response.
includeThoughts (boolean)
Optional. If true, the model will include its thoughts in the response. "Thoughts" are the intermediate steps the model takes to arrive at the final response. They can provide insights into the model's reasoning process and help with debugging. If this is true, thoughts are returned only when available.
thinkingBudget (integer)
Optional. The token budget for the model's thinking process. The model will make a best effort to stay within this budget. This can be used to control the trade-off between response quality and latency.
JSON representation:
{ "includeThoughts": boolean, "thinkingBudget": integer }
ImageConfig
Configuration for image generation.
This message allows you to control various aspects of image generation, such as the output format, aspect ratio, and whether the model can generate images of people.
imageOutputOptions (object (ImageOutputOptions))
Optional. The image output format for generated images.
aspectRatio (string)
Optional. The desired aspect ratio for the generated images. The following aspect ratios are supported: "1:1", "2:3", "3:2", "3:4", "4:3", "4:5", "5:4", "9:16", "16:9", "21:9".
personGeneration (enum (PersonGeneration))
Optional. Controls whether the model can generate people.
JSON representation:
{
  "imageOutputOptions": {
    object (ImageOutputOptions)
  },
  "aspectRatio": string,
  "personGeneration": enum (PersonGeneration)
}
ImageOutputOptions
The image output format for generated images.
mimeType (string)
Optional. The image format that the output should be saved as.
compressionQuality (integer)
Optional. The compression quality of the output image.
JSON representation:
{ "mimeType": string, "compressionQuality": integer }
PersonGeneration
Enum for controlling the generation of people in images.
| Enums | |
|---|---|
| PERSON_GENERATION_UNSPECIFIED | The default behavior is unspecified. The model will decide whether to generate images of people. |
| ALLOW_ALL | Allows the model to generate images of people, including adults and children. |
| ALLOW_ADULT | Allows the model to generate images of adults, but not children. |
| ALLOW_NONE | Prevents the model from generating images of people. |
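An illustrative imageConfig combining the fields above; all values are examples only.

```
{
  "imageOutputOptions": {
    "mimeType": "image/jpeg",
    "compressionQuality": 85
  },
  "aspectRatio": "16:9",
  "personGeneration": "ALLOW_ADULT"
}
```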
EvaluationConfig
The evaluation configuration used for the evaluation run.
metrics[] (object (EvaluationRunMetric))
Required. The metrics to be calculated in the evaluation run.
rubricConfigs[] (object (EvaluationRubricConfig))
Optional. The rubric configs for the evaluation run. They are used to generate rubrics which can be used by rubric-based metrics. Multiple rubric configs can be specified for rubric generation, but only one rubric config can be used for a rubric-based metric. If more than one rubric config is provided, the evaluation metric must specify a rubric group key. Note that if a generation spec is specified on both a rubric config and an evaluation metric, the rubrics generated for the metric will be used for evaluation.
outputConfig (object (OutputConfig))
Optional. The output config for the evaluation run.
autoraterConfig (object (AutoraterConfig))
Optional. The autorater config for the evaluation run.
promptTemplate (object (PromptTemplate))
The prompt template used for inference. The values for variables in the prompt template are defined in EvaluationItem.EvaluationPrompt.PromptTemplateData.values.
JSON representation:
{
  "metrics": [
    {
      object (EvaluationRunMetric)
    }
  ],
  "rubricConfigs": [
    {
      object (EvaluationRubricConfig)
    }
  ],
  "outputConfig": {
    object (OutputConfig)
  },
  "autoraterConfig": {
    object (AutoraterConfig)
  },
  "promptTemplate": {
    object (PromptTemplate)
  }
}
EvaluationRunMetric
The metric used for evaluation runs.
metric (string)
Required. The name of the metric.
metricConfig (object)
The metric config. It contains the union field metric_spec, which can be only one of the following:
rubricBasedMetricSpec (object (RubricBasedMetricSpec))
Spec for rubric based metric.
predefinedMetricSpec (object (PredefinedMetricSpec))
Spec for a pre-defined metric.
llmBasedMetricSpec (object (LLMBasedMetricSpec))
Spec for an LLM based metric.
JSON representation:
{
  "metric": string,
  "metricConfig": {
    // metric_spec
    "rubricBasedMetricSpec": {
      object (RubricBasedMetricSpec)
    },
    "predefinedMetricSpec": {
      object (PredefinedMetricSpec)
    },
    "llmBasedMetricSpec": {
      object (LLMBasedMetricSpec)
    }
    // Union type
  }
}
RubricBasedMetricSpec
Specification for a metric that is based on rubrics.
metricPromptTemplate (string)
Optional. Template for the prompt used by the judge model to evaluate against rubrics.
rubrics_source (Union type)
rubrics_source can be only one of the following:
inlineRubrics (object (RepeatedRubrics))
Use rubrics provided directly in the spec.
rubricGroupKey (string)
Use a pre-defined group of rubrics associated with the input content. This refers to a key in the rubricGroups map of RubricEnhancedContents.
rubricGenerationSpec (object (RubricGenerationSpec))
Dynamically generate rubrics for evaluation using this specification.
judgeAutoraterConfig (object (AutoraterConfig))
Optional. Configuration for the judge LLM (Autorater). The definition of AutoraterConfig needs to be provided.
JSON representation:
{
  "metricPromptTemplate": string,
  // rubrics_source
  "inlineRubrics": {
    object (RepeatedRubrics)
  },
  "rubricGroupKey": string,
  "rubricGenerationSpec": {
    object (RubricGenerationSpec)
  },
  // Union type
  "judgeAutoraterConfig": {
    object (AutoraterConfig)
  }
}
RepeatedRubrics
A set of rubrics provided directly in the spec.
RubricGenerationSpec
Specification for how rubrics should be generated.
promptTemplate (string)
Optional. Template for the prompt used to generate rubrics. The details should be updated based on the most recent recipe requirements.
rubricContentType (enum (RubricContentType))
Optional. The type of rubric content to be generated.
rubricTypeOntology[] (string)
Optional. A pre-defined list of allowed types for generated rubrics. If this field is provided, it implies include_rubric_type should be true, and the generated rubric types should be chosen from this ontology.
modelConfig (object (AutoraterConfig))
Optional. Configuration for the model used in rubric generation. Configs including sampling count and base model can be specified here. Flipping is not supported for rubric generation.
JSON representation:
{
  "promptTemplate": string,
  "rubricContentType": enum (RubricContentType),
  "rubricTypeOntology": [ string ],
  "modelConfig": {
    object (AutoraterConfig)
  }
}
AutoraterConfig
The autorater config used for the evaluation run.
autoraterModel (string)
Optional. The fully qualified name of the publisher model or tuned autorater endpoint to use.
Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*
Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}
generationConfig (object (GenerationConfig))
Optional. Configuration options for model generation and outputs.
sampleCount (integer)
Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
JSON representation:
{
  "autoraterModel": string,
  "generationConfig": {
    object (GenerationConfig)
  },
  "sampleCount": integer
}
RubricContentType
Specifies the type of rubric content to generate.
| Enums | |
|---|---|
| RUBRIC_CONTENT_TYPE_UNSPECIFIED | The content type to generate is not specified. |
| PROPERTY | Generate rubrics based on properties. |
| NL_QUESTION_ANSWER | Generate rubrics in an NL question answer format. |
| PYTHON_CODE_ASSERTION | Generate rubrics in a unit test format. |
PredefinedMetricSpec
Specification for a pre-defined metric.
metricSpecName (string)
Required. The name of a pre-defined metric, such as "instruction_following_v1" or "text_quality_v1".
parameters (object (Struct format))
Optional. The parameters needed to run the pre-defined metric.
JSON representation:
{ "metricSpecName": string, "parameters": { object } }
LLMBasedMetricSpec
Specification for an LLM based metric.
rubrics_source (Union type)
rubrics_source can be only one of the following:
rubricGroupKey (string)
Use a pre-defined group of rubrics associated with the input. Refers to a key in the rubricGroups map of EvaluationInstance.
rubricGenerationSpec (object (RubricGenerationSpec))
Dynamically generate rubrics using this specification.
predefinedRubricGenerationSpec (object (PredefinedMetricSpec))
Dynamically generate rubrics using a predefined spec.
metricPromptTemplate (string)
Required. Template for the prompt sent to the judge model.
systemInstruction (string)
Optional. System instructions for the judge model.
judgeAutoraterConfig (object (AutoraterConfig))
Optional. Configuration for the judge LLM (Autorater).
additionalConfig (object (Struct format))
Optional. Additional configuration for the metric.
JSON representation:
{
  // rubrics_source
  "rubricGroupKey": string,
  "rubricGenerationSpec": {
    object (RubricGenerationSpec)
  },
  "predefinedRubricGenerationSpec": {
    object (PredefinedMetricSpec)
  },
  // Union type
  "metricPromptTemplate": string,
  "systemInstruction": string,
  "judgeAutoraterConfig": {
    object (AutoraterConfig)
  },
  "additionalConfig": {
    object
  }
}
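A hedged sketch of an LLM-based metric spec that pulls rubrics from a pre-defined group; the group key, prompt wording, and system instruction are illustrative.

```
{
  "rubricGroupKey": "quality_rubrics",
  "metricPromptTemplate": "Evaluate whether the response satisfies each rubric, and give a verdict per rubric.",
  "systemInstruction": "You are an impartial grader."
}
```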
Metric
The metric used for running evaluations.
aggregationMetrics[] (enum (AggregationMetric))
Optional. The aggregation metrics to use.
metric_spec (Union type)
metric_spec can be only one of the following:
predefinedMetricSpec (object (PredefinedMetricSpec))
The spec for a pre-defined metric.
llmBasedMetricSpec (object (LLMBasedMetricSpec))
Spec for an LLM based metric.
pointwiseMetricSpec (object (PointwiseMetricSpec))
Spec for pointwise metric.
pairwiseMetricSpec (object (PairwiseMetricSpec))
Spec for pairwise metric.
exactMatchSpec (object (ExactMatchSpec))
Spec for exact match metric.
bleuSpec (object (BleuSpec))
Spec for bleu metric.
rougeSpec (object (RougeSpec))
Spec for rouge metric.
JSON representation:
{
  "aggregationMetrics": [ enum (AggregationMetric) ],
  // metric_spec
  "predefinedMetricSpec": {
    object (PredefinedMetricSpec)
  },
  "llmBasedMetricSpec": {
    object (LLMBasedMetricSpec)
  },
  "pointwiseMetricSpec": {
    object (PointwiseMetricSpec)
  },
  "pairwiseMetricSpec": {
    object (PairwiseMetricSpec)
  },
  "exactMatchSpec": {
    object (ExactMatchSpec)
  },
  "bleuSpec": {
    object (BleuSpec)
  },
  "rougeSpec": {
    object (RougeSpec)
  }
  // Union type
}
PredefinedMetricSpec
The spec for a pre-defined metric.
metricSpecName (string)
Required. The name of a pre-defined metric, such as "instruction_following_v1" or "text_quality_v1".
metricSpecParameters (object (Struct format))
Optional. The parameters needed to run the pre-defined metric.
JSON representation:
{ "metricSpecName": string, "metricSpecParameters": { object } }
LLMBasedMetricSpec
Specification for an LLM based metric.
rubrics_source (Union type)
rubrics_source can be only one of the following:
rubricGroupKey (string)
Use a pre-defined group of rubrics associated with the input. Refers to a key in the rubricGroups map of EvaluationInstance.
rubricGenerationSpec (object (RubricGenerationSpec))
Dynamically generate rubrics using this specification.
predefinedRubricGenerationSpec (object (PredefinedMetricSpec))
Dynamically generate rubrics using a predefined spec.
metricPromptTemplate (string)
Required. Template for the prompt sent to the judge model.
systemInstruction (string)
Optional. System instructions for the judge model.
judgeAutoraterConfig (object (AutoraterConfig))
Optional. Configuration for the judge LLM (Autorater).
additionalConfig (object (Struct format))
Optional. Additional configuration for the metric.
JSON representation:
{
  // rubrics_source
  "rubricGroupKey": string,
  "rubricGenerationSpec": {
    object (RubricGenerationSpec)
  },
  "predefinedRubricGenerationSpec": {
    object (PredefinedMetricSpec)
  },
  // Union type
  "metricPromptTemplate": string,
  "systemInstruction": string,
  "judgeAutoraterConfig": {
    object (AutoraterConfig)
  },
  "additionalConfig": {
    object
  }
}
RubricGenerationSpec
Specification for how rubrics should be generated.
promptTemplate (string)
Template for the prompt used to generate rubrics. The details should be updated based on the most recent recipe requirements.
rubricContentType (enum (RubricContentType))
The type of rubric content to be generated.
rubricTypeOntology[] (string)
Optional. A pre-defined list of allowed types for generated rubrics. If this field is provided, it implies include_rubric_type should be true, and the generated rubric types should be chosen from this ontology.
modelConfig (object (AutoraterConfig))
Configuration for the model used in rubric generation. Configs including sampling count and base model can be specified here. Flipping is not supported for rubric generation.
JSON representation:
{
  "promptTemplate": string,
  "rubricContentType": enum (RubricContentType),
  "rubricTypeOntology": [ string ],
  "modelConfig": {
    object (AutoraterConfig)
  }
}
AutoraterConfig
The configuration for the autorater. This is applicable to both EvaluateInstances and EvaluateDataset.
autoraterModel (string)
Optional. The fully qualified name of the publisher model or tuned autorater endpoint to use.
Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*
Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}
generationConfig (object (GenerationConfig))
Optional. Configuration options for model generation and outputs.
samplingCount (integer)
Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
flipEnabled (boolean)
Optional. Default is true. Whether to flip the candidate and baseline responses. This is only applicable to the pairwise metric. If enabled, also provide PairwiseMetricSpec.candidate_response_field_name and PairwiseMetricSpec.baseline_response_field_name. When rendering PairwiseMetricSpec.metric_prompt_template, the candidate and baseline fields will be flipped for half of the samples to reduce bias.
JSON representation:
{
  "autoraterModel": string,
  "generationConfig": {
    object (GenerationConfig)
  },
  "samplingCount": integer,
  "flipEnabled": boolean
}
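For instance, an autorater configured for pairwise evaluation with bias-reducing flipping might look like the following; the model name is illustrative.

```
{
  "autoraterModel": "projects/my-project/locations/us-central1/publishers/google/models/gemini-2.0-flash",
  "samplingCount": 8,
  "flipEnabled": true
}
```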
RubricContentType
Specifies the type of rubric content to generate.
| Enums | |
|---|---|
| RUBRIC_CONTENT_TYPE_UNSPECIFIED | The content type to generate is not specified. |
| PROPERTY | Generate rubrics based on properties. |
| NL_QUESTION_ANSWER | Generate rubrics in an NL question answer format. |
| PYTHON_CODE_ASSERTION | Generate rubrics in a unit test format. |
PointwiseMetricSpec
Spec for pointwise metric.
customOutputFormatConfig (object (CustomOutputFormatConfig))
Optional. CustomOutputFormatConfig allows customization of metric output. By default, metrics return a score and explanation. When this config is set, the default output is replaced with either the raw output string or a parsed output based on a user-defined schema. If a custom format is chosen, the score and explanation fields in the corresponding metric result will be empty.
metricPromptTemplate (string)
Required. Metric prompt template for pointwise metric.
systemInstruction (string)
Optional. System instructions for pointwise metric.
JSON representation:
{
  "customOutputFormatConfig": {
    object (CustomOutputFormatConfig)
  },
  "metricPromptTemplate": string,
  "systemInstruction": string
}
CustomOutputFormatConfig
Spec for custom output format configuration.
custom_output_format_config (Union type)
custom_output_format_config can be only one of the following:
returnRawOutput (boolean)
Optional. Whether to return raw output.
JSON representation:
{
  // custom_output_format_config
  "returnRawOutput": boolean
  // Union type
}
PairwiseMetricSpec
Spec for pairwise metric.
candidateResponseFieldName (string)
Optional. The field name of the candidate response.
baselineResponseFieldName (string)
Optional. The field name of the baseline response.
customOutputFormatConfig (object (CustomOutputFormatConfig))
Optional. CustomOutputFormatConfig allows customization of metric output. When this config is set, the default output is replaced with the raw output string. If a custom format is chosen, the pairwiseChoice and explanation fields in the corresponding metric result will be empty.
metricPromptTemplate (string)
Required. Metric prompt template for pairwise metric.
systemInstruction (string)
Optional. System instructions for pairwise metric.
JSON representation:
{
  "candidateResponseFieldName": string,
  "baselineResponseFieldName": string,
  "customOutputFormatConfig": {
    object (CustomOutputFormatConfig)
  },
  "metricPromptTemplate": string,
  "systemInstruction": string
}
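A sketch of a pairwise spec; the field names and the placeholder syntax inside the prompt template are illustrative and should mirror the fields in your evaluation data.

```
{
  "candidateResponseFieldName": "candidate_response",
  "baselineResponseFieldName": "baseline_response",
  "metricPromptTemplate": "Given the prompt, decide whether {candidate_response} or {baseline_response} better follows the instructions."
}
```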
ExactMatchSpec
This type has no fields.
Spec for exact match metric - returns 1 if prediction and reference exactly match, otherwise 0.
BleuSpec
Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to the reference - returns a score ranging between 0 and 1.
useEffectiveOrder (boolean)
Optional. Whether to use effective order when computing the bleu score.
JSON representation:
{ "useEffectiveOrder": boolean }
RougeSpec
Spec for rouge score metric - calculates the recall of n-grams in the prediction as compared to the reference - returns a score ranging between 0 and 1.
rougeType (string)
Optional. Supported rouge types are rougen[1-9], rougeL, and rougeLsum.
useStemmer (boolean)
Optional. Whether to use a stemmer to compute the rouge score.
splitSummaries (boolean)
Optional. Whether to split summaries when using rougeLsum.
JSON representation:
{ "rougeType": string, "useStemmer": boolean, "splitSummaries": boolean }
AggregationMetric
The aggregation metrics supported by EvaluationService.EvaluateDataset.
| Enums | |
|---|---|
| AGGREGATION_METRIC_UNSPECIFIED | Unspecified aggregation metric. |
| AVERAGE | Average aggregation metric. Not supported for pairwise metric. |
| MODE | Mode aggregation metric. |
| STANDARD_DEVIATION | Standard deviation aggregation metric. Not supported for pairwise metric. |
| VARIANCE | Variance aggregation metric. Not supported for pairwise metric. |
| MINIMUM | Minimum aggregation metric. Not supported for pairwise metric. |
| MAXIMUM | Maximum aggregation metric. Not supported for pairwise metric. |
| MEDIAN | Median aggregation metric. Not supported for pairwise metric. |
| PERCENTILE_P90 | 90th percentile aggregation metric. Not supported for pairwise metric. |
| PERCENTILE_P95 | 95th percentile aggregation metric. Not supported for pairwise metric. |
| PERCENTILE_P99 | 99th percentile aggregation metric. Not supported for pairwise metric. |
EvaluationRubricConfig
Configuration for a rubric group to be generated/saved for evaluation.
rubricGroupKey (string)
Required. The key used to save the generated rubrics. If a generation spec is provided, this key will be used for the name of the generated rubric group. Otherwise, this key will be used to look up the existing rubric group on the evaluation item. Note that if a rubric group key is specified on both a rubric config and an evaluation metric, the key from the metric will be used to select the rubrics for evaluation.
generation_config (Union type)
generation_config can be only one of the following:
rubricGenerationSpec (object (RubricGenerationSpec))
Dynamically generate rubrics using this specification.
predefinedRubricGenerationSpec (object (PredefinedMetricSpec))
Dynamically generate rubrics using a predefined spec.
JSON representation:
{
  "rubricGroupKey": string,
  // generation_config
  "rubricGenerationSpec": {
    object (RubricGenerationSpec)
  },
  "predefinedRubricGenerationSpec": {
    object (PredefinedMetricSpec)
  }
  // Union type
}
OutputConfig
The output config for the evaluation run.
bigqueryDestination (object (BigQueryDestination))
BigQuery destination for evaluation output.
gcsDestination (object (GcsDestination))
Cloud Storage destination for evaluation output.
JSON representation:
{
  "bigqueryDestination": {
    object (BigQueryDestination)
  },
  "gcsDestination": {
    object (GcsDestination)
  }
}
BigQueryDestination
The BigQuery location for the output content.
outputUri (string)
Required. BigQuery URI to a project or table, up to 2000 characters long.
When only the project is specified, the dataset and table are created. When the full table reference is specified, the dataset must exist and the table must not exist.
Accepted forms:
- BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
JSON representation:
{ "outputUri": string }
GcsDestination
The Google Cloud Storage location where the output is to be written to.
outputUriPrefix (string)
Required. Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
JSON representation:
{ "outputUriPrefix": string }
PromptTemplate
Prompt template used for inference.
source (Union type)
source can be only one of the following:
promptTemplate (string)
Inline prompt template. Template variables should be in the format "{var_name}". Example: "Translate the following from {source_lang} to {target_lang}: {text}"
gcsUri (string)
Prompt template stored in Cloud Storage. Format: "gs://my-bucket/file-name.txt".
JSON representation:
{
  // source
  "promptTemplate": string,
  "gcsUri": string
  // Union type
}
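Both source variants, reusing the inline example from the field description above; the bucket path is illustrative.

```
// Inline template:
{ "promptTemplate": "Translate the following from {source_lang} to {target_lang}: {text}" }

// Template stored in Cloud Storage:
{ "gcsUri": "gs://my-bucket/file-name.txt" }
```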
State
The state of the evaluation run.
| Enums | |
|---|---|
| STATE_UNSPECIFIED | Unspecified state. |
| PENDING | The evaluation run is pending. |
| RUNNING | The evaluation run is running. |
| SUCCEEDED | The evaluation run has succeeded. |
| FAILED | The evaluation run has failed. |
| CANCELLED | The evaluation run has been cancelled. |
| INFERENCE | The evaluation run is performing inference. |
| GENERATING_RUBRICS | The evaluation run is performing rubric generation. |
EvaluationResults
The results of the evaluation run.
summaryMetrics (object (SummaryMetrics))
Optional. The summary metrics for the evaluation run.
evaluationSet (string)
The evaluation set where item-level results are stored.
JSON representation:
{
  "summaryMetrics": {
    object (SummaryMetrics)
  },
  "evaluationSet": string
}
SummaryMetrics
The summary metrics for the evaluation run.
metrics (map (key: string, value: value))
Optional. Map of metric name to metric value.
totalItems (integer)
Optional. The total number of items that were evaluated.
failedItems (integer)
Optional. The number of items that failed to be evaluated.
JSON representation:
{ "metrics": { string: value, ... }, "totalItems": integer, "failedItems": integer }
| Methods | |
|---|---|
| cancel | Cancels an Evaluation Run. |
| create | Creates an Evaluation Run. |
| delete | Deletes an Evaluation Run. |
| get | Gets an Evaluation Run. |
| list | Lists Evaluation Runs. |