Compter les jetons pour Gemini
Restez organisé à l'aide des collections
Enregistrez et classez les contenus selon vos préférences.
L'exemple de code montre comment utiliser l'API de modèles génératifs Vertex AI pour compter le nombre de jetons dans un prompt et générer du contenu à l'aide du modèle Gemini.
Exemple de code
Sauf indication contraire, le contenu de cette page est régi par une licence Creative Commons Attribution 4.0, et les échantillons de code sont régis par une licence Apache 2.0. Pour en savoir plus, consultez les Règles du site Google Developers. Java est une marque déposée d'Oracle et/ou de ses sociétés affiliées.
[[["Facile à comprendre","easyToUnderstand","thumb-up"],["J'ai pu résoudre mon problème","solvedMyProblem","thumb-up"],["Autre","otherUp","thumb-up"]],[["Difficile à comprendre","hardToUnderstand","thumb-down"],["Informations ou exemple de code incorrects","incorrectInformationOrSampleCode","thumb-down"],["Il n'y a pas l'information/les exemples dont j'ai besoin","missingTheInformationSamplesINeed","thumb-down"],["Problème de traduction","translationIssue","thumb-down"],["Autre","otherDown","thumb-down"]],[],[],[],null,["# Count tokens for Gemini\n\nThe code sample demonstrates how to use the Vertex AI Generative Models API to count the number of tokens in a prompt and generate content using the Gemini model.\n\nCode sample\n-----------\n\n### Go\n\n\nBefore trying this sample, follow the Go setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Go API\nreference documentation](/go/docs/reference/cloud.google.com/go/aiplatform/latest/apiv1).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import (\n \t\"context\"\n \t\"fmt\"\n \t\"io\"\n \t\"mime\"\n \t\"path/filepath\"\n\n \t\"cloud.google.com/go/vertexai/genai\"\n )\n\n // countTokensMultimodal finds the number of tokens for a multimodal prompt (video+text), and writes to w. Then,\n // it calls the model with the multimodal prompt and writes token counts from the response metadata to w.\n //\n // video is a Google Cloud Storage path starting with \"gs://\"\n func countTokensMultimodal(w io.Writer, projectID, location, modelName string) error {\n \t// location := \"us-central1\"\n \t// modelName := \"gemini-2.0-flash-001\"\n \tprompt := \"Provide a description of the video.\"\n \tvideo := \"gs://cloud-samples-data/generative-ai/video/pixel8.mp4\"\n\n \tctx := context.Background()\n\n \tclient, err := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Client_NewClient(ctx, projectID, location)\n \tif err != nil {\n \t\treturn fmt.Errorf(\"unable to create client: %w\", err)\n \t}\n \tdefer client.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Client_Close()\n\n \tmodel := client.GenerativeModel(modelName)\n\n \tpart1 := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Text(prompt)\n\n \t// Given a video file URL, prepare video file as genai.Part\n \tpart2 := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_FileData{\n \t\tMIMEType: mime.TypeByExtension(filepath.Ext(video)),\n \t\tFileURI: video,\n \t}\n\n \t// Finds the total number of tokens for the 2 parts (text, video) of the multimodal prompt,\n \t// before actually calling the model for inference.\n \tresp, err := model.CountTokens(ctx, part1, part2)\n \tif err != nil {\n \t\treturn err\n \t}\n\n \tfmt.Fprintf(w, \"Number of tokens for the multimodal video prompt: %d\\n\", resp.TotalTokens)\n\n \tres, err := model.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_GenerativeModel_GenerateContent(ctx, part1, part2)\n \tif err != nil {\n \t\treturn fmt.Errorf(\"unable to generate contents: %w\", err)\n \t}\n\n \t// The token counts are also provided in the model response metadata, after inference.\n \tfmt.Fprintln(w, \"\\nModel response\")\n \tmd := res.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_UsageMetadata\n \tfmt.Fprintf(w, \"Prompt Token Count: %d\\n\", md.PromptTokenCount)\n \tfmt.Fprintf(w, \"Candidates Token Count: %d\\n\", md.CandidatesTokenCount)\n \tfmt.Fprintf(w, \"Total Token Count: %d\\n\", md.TotalTokenCount)\n\n \treturn nil\n }\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai)."]]