Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
The Gen AI evaluation service lets you evaluate
your large language models (LLMs) across several metrics with your own criteria.
You can provide inference-time inputs, LLM responses, and additional
parameters, and the Gen AI evaluation service returns metrics specific to the
evaluation task.
Metrics include model-based metrics, such as PointwiseMetric and PairwiseMetric, and in-memory
computed metrics, such as rouge, bleu, and tool function-call metrics.
PointwiseMetric and PairwiseMetric are generic model-based metrics that
you can customize with your own criteria.
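A custom criterion is expressed as a metric prompt template. The following is a minimal sketch of defining and running a custom PointwiseMetric with the Vertex AI SDK; the metric name and criterion text here are illustrative, not built into the service:

import pandas as pd

import vertexai
from vertexai.evaluation import EvalTask, PointwiseMetric

# Illustrative project values; replace with your own.
vertexai.init(project="your-project-id", location="us-central1")

# A custom pointwise metric: the judge model rates each response
# against the criterion described in the metric prompt template.
custom_clarity = PointwiseMetric(
    metric="custom_clarity",
    metric_prompt_template=(
        "Rate the clarity of the RESPONSE from 1 (unclear) to 5 (very clear), "
        "then briefly explain the rating.\n"
        "RESPONSE: {response}"
    ),
)

eval_dataset = pd.DataFrame(
    {"response": ["Water boils at 100 degrees Celsius at sea level."]}
)
result = EvalTask(dataset=eval_dataset, metrics=[custom_clarity]).evaluate()
print(result.summary_metrics)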
Because the service takes the prediction results directly from models as input,
the evaluation service can perform both inference and subsequent evaluation on
all models supported by
Vertex AI.
The following are limitations of the evaluation service:
The evaluation service may have a propagation delay on your first call.
Most model-based metrics consume gemini-2.0-flash quota, because the Gen AI evaluation service uses gemini-2.0-flash as the underlying judge model to compute them. Some model-based metrics, such as MetricX and COMET, use different machine learning models and therefore don't consume gemini-2.0-flash quota.
Acceptable values for metric_spec.rouge_type include the following (a local sketch after this parameter table illustrates the variants):
rougen[1-9]: computes ROUGE scores based on the overlap of n-grams between the prediction and the reference.
rougeL: computes ROUGE scores based on the Longest Common Subsequence (LCS) between the prediction and the reference.
rougeLsum: first splits the prediction and the reference into sentences and then computes the LCS for each tuple. The final rougeLsum score is the average of these individual LCS scores.
metric_spec.use_stemmer
Optional: bool
Whether the Porter stemmer should be used to strip word suffixes to improve matching.
metric_spec.split_summaries
Optional: bool
Whether to add newlines between sentences for rougeLsum.
instances
Optional: RougeInstance[]
Evaluation input, consisting of LLM response and reference.
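The service computes the ROUGE metrics in memory. For intuition about the rouge_type and use_stemmer options, the same scoring scheme can be reproduced locally with the open-source rouge-score package. This is a sketch with illustrative strings, not a call to the evaluation service:

from rouge_score import rouge_scorer

prediction = "The city plans to overhaul its public transportation system."
reference = "A major city revealed plans to overhaul its public transportation system."

# use_stemmer mirrors metric_spec.use_stemmer: suffixes are stripped
# ("plans" -> "plan") before n-gram and LCS matching.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, score in scorer.score(reference, prediction).items():
    print(name, round(score.fmeasure, 4))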
For the tool function-call metrics, the evaluation input also consists of an LLM response and a reference, in the following format:
instance.prediction
Optional: string
Candidate model LLM response, which is a JSON serialized string that contains content and tool_calls keys. The content value is the text output from the model. The tool_calls value is a JSON serialized list of tool calls. An example is:
{"content":"","tool_calls":[{"name":"book_tickets","arguments":{"movie":"Mission Impossible Dead Reckoning Part 1","theater":"Regal Edwards 14","location":"Mountain View CA","showtime":"7:30","date":"2024-03-30","num_tix":"2"}}]}
instance.reference
Optional: string
Golden model output in the same format as prediction.
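The JSON-serialized prediction string can be assembled with a few lines of Python. This sketch uses the movie-booking values from the example above and the standard-library json module; the field layout follows that example:

import json

# Tool calls emitted by the candidate model (values from the example above).
tool_calls = [
    {
        "name": "book_tickets",
        "arguments": {
            "movie": "Mission Impossible Dead Reckoning Part 1",
            "theater": "Regal Edwards 14",
            "location": "Mountain View CA",
            "showtime": "7:30",
            "date": "2024-03-30",
            "num_tix": "2",
        },
    }
]

# "content" holds the model's text output (empty here); "tool_calls"
# holds the list of tool calls. The reference uses the same format.
prediction = json.dumps({"content": "", "tool_calls": tool_calls})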
metric_spec.version
Optional: string
The MetricX version to use. Acceptable values include the following:
METRICX_24_REF: MetricX 24 for translation and reference. It evaluates the prediction (translation) by comparing it with the provided reference text input.
METRICX_24_SRC: MetricX 24 for translation and source. It evaluates the translation (prediction) by Quality Estimation (QE), without a reference text input.
METRICX_24_SRC_REF: MetricX 24 for translation, source, and reference. It evaluates the translation (prediction) using all three inputs.
metric_spec.source_language
Optional: string
Source language in
BCP-47 format.
For example, "es".
metric_spec.target_language
Optional: string
Target language in
BCP-47 format.
For example, "es".
instance
Optional: MetricxInstance
Evaluation input, consisting of LLM response and reference. The exact fields used for evaluation depend on the MetricX version.
instance.prediction
Optional: string
Candidate model LLM response. This is the output of the LLM being evaluated.
instance.source
Optional: string
Source text in the original language from which the prediction was translated.
instance.reference
Optional: string
Ground truth used to compare
against the prediction. It is in the same language as the prediction.
MetricxResult
{"metricx_result":{"score":float}}
Output
score
float: [0, 25], where 0 represents a
perfect translation.
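MetricX scores are requested through the same evaluateInstances method as the other metrics. The following minimal sketch uses the v1beta1 Python client; the MetricxInput, MetricxSpec, and MetricxInstance names follow the API reference, while the version choice and the Spanish-to-English strings are illustrative assumptions:

from google.cloud import aiplatform_v1beta1 as aiplatform

PROJECT_ID = "your-project-id"  # replace with your project
LOCATION = "us-central1"

client = aiplatform.EvaluationServiceClient(
    client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
)

# METRICX_24_SRC_REF scores the translation against both the source text
# and the reference; 0 is a perfect translation, 25 is the worst score.
request = aiplatform.EvaluateInstancesRequest(
    location=f"projects/{PROJECT_ID}/locations/{LOCATION}",
    metricx_input=aiplatform.MetricxInput(
        metric_spec=aiplatform.MetricxSpec(
            version=aiplatform.MetricxSpec.MetricxVersion.METRICX_24_SRC_REF,
            source_language="es",
            target_language="en",
        ),
        instance=aiplatform.MetricxInstance(
            prediction="The cat sleeps on the sofa.",
            source="El gato duerme en el sofá.",
            reference="The cat is sleeping on the sofa.",
        ),
    ),
)
response = client.evaluate_instances(request=request)
print(response.metricx_result.score)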
Examples
Evaluate an output
The following example demonstrates how to call the Gen AI Evaluation API to evaluate
the output of an LLM using a variety of evaluation metrics, including the following:
summarization_quality
groundedness
verbosity
instruction_following
Python
import pandas as pd

import vertexai
from vertexai.preview.evaluation import EvalTask, MetricPromptTemplateExamples

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

eval_dataset = pd.DataFrame(
    {
        "instruction": [
            "Summarize the text in one sentence.",
            "Summarize the text such that a five-year-old can understand.",
        ],
        "context": [
            """As part of a comprehensive initiative to tackle urban congestion and foster
sustainable urban living, a major city has revealed ambitious plans for an
extensive overhaul of its public transportation system. The project aims not
only to improve the efficiency and reliability of public transit but also to
reduce the city's carbon footprint and promote eco-friendly commuting options.
City officials anticipate that this strategic investment will enhance
accessibility for residents and visitors alike, ushering in a new era of
efficient, environmentally conscious urban transportation.""",
            """A team of archaeologists has unearthed ancient artifacts shedding light on a
previously unknown civilization. The findings challenge existing historical
narratives and provide valuable insights into human history.""",
        ],
        "response": [
            "A major city is revamping its public transportation system to fight congestion, reduce emissions, and make getting around greener and easier.",
            "Some people who dig for old things found some very special tools and objects that tell us about people who lived a long, long time ago! What they found is like a new puzzle piece that helps us understand how people used to live.",
        ],
    }
)

eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[
        MetricPromptTemplateExamples.Pointwise.SUMMARIZATION_QUALITY,
        MetricPromptTemplateExamples.Pointwise.GROUNDEDNESS,
        MetricPromptTemplateExamples.Pointwise.VERBOSITY,
        MetricPromptTemplateExamples.Pointwise.INSTRUCTION_FOLLOWING,
    ],
)
prompt_template = (
    "Instruction: {instruction}. Article: {context}. Summary: {response}"
)
result = eval_task.evaluate(prompt_template=prompt_template)

print("Summary Metrics:\n")
for key, value in result.summary_metrics.items():
    print(f"{key}: \t{value}")

print("\n\nMetrics Table:\n")
print(result.metrics_table)
# Example response:
# Summary Metrics:
# row_count: 2
# summarization_quality/mean: 3.5
# summarization_quality/std: 2.1213203435596424
# ...
Go
import (
	context_pkg "context"
	"fmt"
	"io"

	aiplatform "cloud.google.com/go/aiplatform/apiv1beta1"
	aiplatformpb "cloud.google.com/go/aiplatform/apiv1beta1/aiplatformpb"
	"google.golang.org/api/option"
)

// evaluateModelResponse evaluates the output of an LLM for groundedness, i.e., how well
// the model response connects with verifiable sources of information
func evaluateModelResponse(w io.Writer, projectID, location string) error {
	// location = "us-central1"
	ctx := context_pkg.Background()
	apiEndpoint := fmt.Sprintf("%s-aiplatform.googleapis.com:443", location)
	client, err := aiplatform.NewEvaluationClient(ctx, option.WithEndpoint(apiEndpoint))
	if err != nil {
		return fmt.Errorf("unable to create aiplatform client: %w", err)
	}
	defer client.Close()

	// evaluate the pre-generated model response against the reference (ground truth)
	responseToEvaluate := `
The city is undertaking a major project to revamp its public transportation system.
This initiative is designed to improve efficiency, reduce carbon emissions, and promote
eco-friendly commuting. The city expects that this investment will enhance accessibility
and usher in a new era of sustainable urban transportation.
`
	reference := `
As part of a comprehensive initiative to tackle urban congestion and foster
sustainable urban living, a major city has revealed ambitious plans for an
extensive overhaul of its public transportation system. The project aims not
only to improve the efficiency and reliability of public transit but also to
reduce the city's carbon footprint and promote eco-friendly commuting options.
City officials anticipate that this strategic investment will enhance
accessibility for residents and visitors alike, ushering in a new era of
efficient, environmentally conscious urban transportation.
`
	req := aiplatformpb.EvaluateInstancesRequest{
		Location: fmt.Sprintf("projects/%s/locations/%s", projectID, location),
		// Check the API reference for a full list of supported metric inputs:
		// https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#evaluateinstancesrequest
		MetricInputs: &aiplatformpb.EvaluateInstancesRequest_GroundednessInput{
			GroundednessInput: &aiplatformpb.GroundednessInput{
				MetricSpec: &aiplatformpb.GroundednessSpec{},
				Instance: &aiplatformpb.GroundednessInstance{
					Context:    &reference,
					Prediction: &responseToEvaluate,
				},
			},
		},
	}
	resp, err := client.EvaluateInstances(ctx, &req)
	if err != nil {
		return fmt.Errorf("evaluateInstances failed: %v", err)
	}

	results := resp.GetGroundednessResult()
	fmt.Fprintf(w, "score: %.2f\n", results.GetScore())
	fmt.Fprintf(w, "confidence: %.2f\n", results.GetConfidence())
	fmt.Fprintf(w, "explanation:\n%s\n", results.GetExplanation())
	// Example response:
	// score: 1.00
	// confidence: 1.00
	// explanation:
	// STEP 1: All aspects of the response are found in the context.
	// The response accurately summarizes the city's plan to overhaul its public transportation system, highlighting the goals of ...
	// STEP 2: According to the rubric, the response is scored 1 because all aspects of the response are attributable to the context.

	return nil
}
Evaluate an output: pairwise summarization quality
The following example demonstrates how to call the Gen AI evaluation service API to evaluate
the output of an LLM using a pairwise summarization quality comparison.
REST
Before using any of the request data,
make the following replacements:
PROJECT_ID: Your project ID.
LOCATION: The region to process the request.
PREDICTION: LLM response.
BASELINE_PREDICTION: Baseline model LLM response.
INSTRUCTION: The instruction used at inference time.
CONTEXT: Inference-time text containing all relevant information that can be used in the LLM response.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:evaluateInstances
Python
import pandas as pd

import vertexai
from vertexai.generative_models import GenerativeModel
from vertexai.evaluation import (
    EvalTask,
    PairwiseMetric,
    MetricPromptTemplateExamples,
)

# TODO(developer): Update & uncomment line below
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

prompt = """
Summarize the text such that a five-year-old can understand.

# Text

As part of a comprehensive initiative to tackle urban congestion and foster
sustainable urban living, a major city has revealed ambitious plans for an
extensive overhaul of its public transportation system. The project aims not
only to improve the efficiency and reliability of public transit but also to
reduce the city's carbon footprint and promote eco-friendly commuting options.
City officials anticipate that this strategic investment will enhance
accessibility for residents and visitors alike, ushering in a new era of
efficient, environmentally conscious urban transportation.
"""
eval_dataset = pd.DataFrame({"prompt": [prompt]})

# Baseline model for pairwise comparison
baseline_model = GenerativeModel("gemini-2.0-flash-lite-001")

# Candidate model for pairwise comparison
candidate_model = GenerativeModel(
    "gemini-2.0-flash-001",
    generation_config={"temperature": 0.4},
)

prompt_template = MetricPromptTemplateExamples.get_prompt_template(
    "pairwise_summarization_quality"
)

summarization_quality_metric = PairwiseMetric(
    metric="pairwise_summarization_quality",
    metric_prompt_template=prompt_template,
    baseline_model=baseline_model,
)

eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[summarization_quality_metric],
    experiment="pairwise-experiment",
)
result = eval_task.evaluate(model=candidate_model)

baseline_model_response = result.metrics_table["baseline_model_response"].iloc[0]
candidate_model_response = result.metrics_table["response"].iloc[0]
winner_model = result.metrics_table[
    "pairwise_summarization_quality/pairwise_choice"
].iloc[0]
explanation = result.metrics_table[
    "pairwise_summarization_quality/explanation"
].iloc[0]

print(f"Baseline's story:\n{baseline_model_response}")
print(f"Candidate's story:\n{candidate_model_response}")
print(f"Winner: {winner_model}")
print(f"Explanation: {explanation}")
# Example response:
# Baseline's story:
# A big city wants to make it easier for people to get around without using cars! They're going to make buses and trains ...
#
# Candidate's story:
# A big city wants to make it easier for people to get around without using cars! ... This will help keep the air clean ...
#
# Winner: CANDIDATE
# Explanation: Both responses adhere to the prompt's constraints, are grounded in the provided text, and ... However, Response B ...
Go
import (
	context_pkg "context"
	"fmt"
	"io"

	aiplatform "cloud.google.com/go/aiplatform/apiv1beta1"
	aiplatformpb "cloud.google.com/go/aiplatform/apiv1beta1/aiplatformpb"
	"google.golang.org/api/option"
)

// pairwiseEvaluation lets the judge model compare the responses of two models and pick the better one
func pairwiseEvaluation(w io.Writer, projectID, location string) error {
	// location = "us-central1"
	ctx := context_pkg.Background()
	apiEndpoint := fmt.Sprintf("%s-aiplatform.googleapis.com:443", location)
	client, err := aiplatform.NewEvaluationClient(ctx, option.WithEndpoint(apiEndpoint))
	if err != nil {
		return fmt.Errorf("unable to create aiplatform client: %w", err)
	}
	defer client.Close()

	context := `
As part of a comprehensive initiative to tackle urban congestion and foster
sustainable urban living, a major city has revealed ambitious plans for an
extensive overhaul of its public transportation system. The project aims not
only to improve the efficiency and reliability of public transit but also to
reduce the city's carbon footprint and promote eco-friendly commuting options.
City officials anticipate that this strategic investment will enhance
accessibility for residents and visitors alike, ushering in a new era of
efficient, environmentally conscious urban transportation.
`
	instruction := "Summarize the text such that a five-year-old can understand."
	baselineResponse := `
The city wants to make it easier for people to get around without using cars.
They're going to make the buses and trains better and faster, so people will want to
use them more. This will help the air be cleaner and make the city a better place to live.
`
	candidateResponse := `
The city is making big changes to how people get around. They want to make the buses and
trains work better and be easier for everyone to use. This will also help the environment
by getting people to use less gas. The city thinks these changes will make it easier for
everyone to get where they need to go.
`
	req := aiplatformpb.EvaluateInstancesRequest{
		Location: fmt.Sprintf("projects/%s/locations/%s", projectID, location),
		MetricInputs: &aiplatformpb.EvaluateInstancesRequest_PairwiseSummarizationQualityInput{
			PairwiseSummarizationQualityInput: &aiplatformpb.PairwiseSummarizationQualityInput{
				MetricSpec: &aiplatformpb.PairwiseSummarizationQualitySpec{},
				Instance: &aiplatformpb.PairwiseSummarizationQualityInstance{
					Context:            &context,
					Instruction:        &instruction,
					Prediction:         &candidateResponse,
					BaselinePrediction: &baselineResponse,
				},
			},
		},
	}
	resp, err := client.EvaluateInstances(ctx, &req)
	if err != nil {
		return fmt.Errorf("evaluateInstances failed: %v", err)
	}

	results := resp.GetPairwiseSummarizationQualityResult()
	fmt.Fprintf(w, "choice: %s\n", results.GetPairwiseChoice())
	fmt.Fprintf(w, "confidence: %.2f\n", results.GetConfidence())
	fmt.Fprintf(w, "explanation:\n%s\n", results.GetExplanation())
	// Example response:
	// choice: BASELINE
	// confidence: 0.50
	// explanation:
	// BASELINE response is easier to understand. For example, the phrase "..." is easier to understand than "...". Thus, BASELINE response is ...

	return nil
}
Get ROUGE score
The following example calls the Gen AI evaluation service API to get the ROUGE score of a prediction, which is computed from a number of inputs. The ROUGE inputs use metric_spec, which determines the metric's behavior.
REST
Before using any of the request data,
make the following replacements:
PROJECT_ID: Your project ID.
LOCATION: The region to process the request.
PREDICTION: LLM response.
REFERENCE: Golden LLM response for reference.
ROUGE_TYPE: The calculation used to determine the rouge score. See metric_spec.rouge_type for acceptable values.
USE_STEMMER: Determines whether the Porter stemmer is used to strip word suffixes to improve matching. For acceptable values, see metric_spec.use_stemmer.
SPLIT_SUMMARIES: Determines whether newlines are added between rougeLsum sentences. For acceptable values, see metric_spec.split_summaries.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION:evaluateInstances
Python
import pandas as pd

import vertexai
from vertexai.preview.evaluation import EvalTask

# TODO(developer): Update & uncomment line below
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

reference_summarization = """
The Great Barrier Reef, the world's largest coral reef system, is
located off the coast of Queensland, Australia. It's a vast
ecosystem spanning over 2,300 kilometers with thousands of reefs
and islands. While it harbors an incredible diversity of marine
life, including endangered species, it faces serious threats from
climate change, ocean acidification, and coral bleaching."""

# Compare pre-generated model responses against the reference (ground truth).
eval_dataset = pd.DataFrame(
    {
        "response": [
            """The Great Barrier Reef, the world's largest coral reef system located in
Australia, is a vast and diverse ecosystem. However, it faces serious threats
from climate change, ocean acidification, and coral bleaching, endangering
its rich marine life.""",
            """The Great Barrier Reef, a vast coral reef system off the coast of
Queensland, Australia, is the world's largest. It's a complex ecosystem
supporting diverse marine life, including endangered species. However,
climate change, ocean acidification, and coral bleaching are serious
threats to its survival.""",
            """The Great Barrier Reef, the world's largest coral reef system off the
coast of Australia, is a vast and diverse ecosystem with thousands of reefs
and islands. It is home to a multitude of marine life, including endangered
species, but faces serious threats from climate change, ocean acidification,
and coral bleaching.""",
        ],
        "reference": [reference_summarization] * 3,
    }
)
eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[
        "rouge_1",
        "rouge_2",
        "rouge_l",
        "rouge_l_sum",
    ],
)
result = eval_task.evaluate()

print("Summary Metrics:\n")
for key, value in result.summary_metrics.items():
    print(f"{key}: \t{value}")

print("\n\nMetrics Table:\n")
print(result.metrics_table)
# Example response:
#
# Summary Metrics:
#
# row_count: 3
# rouge_1/mean: 0.7191161666666667
# rouge_1/std: 0.06765143922270488
# rouge_2/mean: 0.5441118566666666
# ...
# Metrics Table:
#
#                                  response                            reference  ...  rouge_l/score  rouge_l_sum/score
# 0  The Great Barrier Reef, the world's ...  \n The Great Barrier Reef, the ...  ...       0.577320           0.639175
# 1  The Great Barrier Reef, a vast coral...  \n The Great Barrier Reef, the ...  ...       0.552381           0.666667
# 2  The Great Barrier Reef, the world's ...  \n The Great Barrier Reef, the ...  ...       0.774775           0.774775
import("context""fmt""io"aiplatform"cloud.google.com/go/aiplatform/apiv1beta1"aiplatformpb"cloud.google.com/go/aiplatform/apiv1beta1/aiplatformpb""google.golang.org/api/option")// getROUGEScore evaluates a model response against a reference (ground truth) using the ROUGE metricfuncgetROUGEScore(wio.Writer,projectID,locationstring)error{// location = "us-central1"ctx:=context.Background()apiEndpoint:=fmt.Sprintf("%s-aiplatform.googleapis.com:443",location)client,err:=aiplatform.NewEvaluationClient(ctx,option.WithEndpoint(apiEndpoint))iferr!=nil{returnfmt.Errorf("unable to create aiplatform client: %w",err)}deferclient.Close()modelResponse:=`The Great Barrier Reef, the world's largest coral reef system located in Australia,is a vast and diverse ecosystem. However, it faces serious threats from climate change,ocean acidification, and coral bleaching, endangering its rich marine life.`reference:=`The Great Barrier Reef, the world's largest coral reef system, islocated off the coast of Queensland, Australia. It's a vastecosystem spanning over 2,300 kilometers with thousands of reefsand islands. While it harbors an incredible diversity of marinelife, including endangered species, it faces serious threats fromclimate change, ocean acidification, and coral bleaching.`req:=aiplatformpb.EvaluateInstancesRequest{Location:fmt.Sprintf("projects/%s/locations/%s",projectID,location),MetricInputs:&aiplatformpb.EvaluateInstancesRequest_RougeInput{RougeInput:&aiplatformpb.RougeInput{// Check the API reference for the list of supported ROUGE metric types:// https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#rougespecMetricSpec:&aiplatformpb.RougeSpec{RougeType:"rouge1",},Instances:[]*aiplatformpb.RougeInstance{{Prediction:&modelResponse,Reference:&reference,},},},},}resp,err:=client.EvaluateInstances(ctx,&req)iferr!=nil{returnfmt.Errorf("evaluateInstances failed: %v",err)}fmt.Fprintln(w,"evaluation results:")fmt.Fprintln(w,resp.GetRougeResults().GetRougeMetricValues())// Example response:// [score:0.6597938]returnnil}
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-15 UTC."],[],[]]