Starting April 29, 2025, the Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have never used them, including new projects. For details, see Model versions and lifecycle.
Count tokens for Gemini
This code sample demonstrates how to use the Vertex AI Generative Models API to count the number of tokens in a prompt and to generate content with the Gemini model.
Code sample
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],[],[],[],null,["# Count tokens for Gemini\n\nThe code sample demonstrates how to use the Vertex AI Generative Models API to count the number of tokens in a prompt and generate content using the Gemini model.\n\nCode sample\n-----------\n\n### Go\n\n\nBefore trying this sample, follow the Go setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Go API\nreference documentation](/go/docs/reference/cloud.google.com/go/aiplatform/latest/apiv1).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import (\n \t\"context\"\n \t\"fmt\"\n \t\"io\"\n \t\"mime\"\n \t\"path/filepath\"\n\n \t\"cloud.google.com/go/vertexai/genai\"\n )\n\n // countTokensMultimodal finds the number of tokens for a multimodal prompt (video+text), and writes to w. Then,\n // it calls the model with the multimodal prompt and writes token counts from the response metadata to w.\n //\n // video is a Google Cloud Storage path starting with \"gs://\"\n func countTokensMultimodal(w io.Writer, projectID, location, modelName string) error {\n \t// location := \"us-central1\"\n \t// modelName := \"gemini-2.0-flash-001\"\n \tprompt := \"Provide a description of the video.\"\n \tvideo := \"gs://cloud-samples-data/generative-ai/video/pixel8.mp4\"\n\n \tctx := context.Background()\n\n \tclient, err := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Client_NewClient(ctx, projectID, location)\n \tif err != nil {\n \t\treturn fmt.Errorf(\"unable to create client: %w\", err)\n \t}\n \tdefer client.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Client_Close()\n\n \tmodel := client.GenerativeModel(modelName)\n\n \tpart1 := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Text(prompt)\n\n \t// Given a video file URL, prepare video file as genai.Part\n \tpart2 := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_FileData{\n \t\tMIMEType: mime.TypeByExtension(filepath.Ext(video)),\n \t\tFileURI: video,\n \t}\n\n \t// Finds the total number of tokens for the 2 parts (text, video) of the multimodal prompt,\n \t// before actually calling the model for inference.\n \tresp, err := model.CountTokens(ctx, part1, part2)\n \tif err != nil {\n \t\treturn err\n \t}\n\n \tfmt.Fprintf(w, \"Number of tokens for the multimodal video prompt: %d\\n\", resp.TotalTokens)\n\n \tres, err := model.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_GenerativeModel_GenerateContent(ctx, part1, part2)\n \tif err != nil {\n \t\treturn 
fmt.Errorf(\"unable to generate contents: %w\", err)\n \t}\n\n \t// The token counts are also provided in the model response metadata, after inference.\n \tfmt.Fprintln(w, \"\\nModel response\")\n \tmd := res.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_UsageMetadata\n \tfmt.Fprintf(w, \"Prompt Token Count: %d\\n\", md.PromptTokenCount)\n \tfmt.Fprintf(w, \"Candidates Token Count: %d\\n\", md.CandidatesTokenCount)\n \tfmt.Fprintf(w, \"Total Token Count: %d\\n\", md.TotalTokenCount)\n\n \treturn nil\n }\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai)."]]