# Count tokens for Gemini
This code sample demonstrates how to use the Vertex AI Generative Models API to count the number of tokens in a prompt and generate content with the Gemini model.
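Before the full multimodal sample, here is a minimal text-only sketch of the counting step, using the same `cloud.google.com/go/vertexai/genai` package and calls that appear in the sample below. The helper name, project ID, region, and model name are placeholders for illustration, not part of the official sample.

```go
import (
	"context"
	"fmt"

	"cloud.google.com/go/vertexai/genai"
)

// countTextTokens is a hypothetical helper that prints the token count for a
// plain-text prompt. The project ID, region, and model name are placeholders.
func countTextTokens() error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, "your-project-id", "us-central1")
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel("gemini-2.0-flash-001")

	// CountTokens returns the token count for the prompt without running inference.
	resp, err := model.CountTokens(ctx, genai.Text("Why is the sky blue?"))
	if err != nil {
		return err
	}

	fmt.Printf("Number of tokens for the text prompt: %d\n", resp.TotalTokens)
	return nil
}
```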
Code sample
-----------
[[["Fácil de comprender","easyToUnderstand","thumb-up"],["Resolvió mi problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Información o código de muestra incorrectos","incorrectInformationOrSampleCode","thumb-down"],["Faltan la información o los ejemplos que necesito","missingTheInformationSamplesINeed","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Otro","otherDown","thumb-down"]],[],[],[],null,["# Count tokens for Gemini\n\nThe code sample demonstrates how to use the Vertex AI Generative Models API to count the number of tokens in a prompt and generate content using the Gemini model.\n\nCode sample\n-----------\n\n### Go\n\n\nBefore trying this sample, follow the Go setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Go API\nreference documentation](/go/docs/reference/cloud.google.com/go/aiplatform/latest/apiv1).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import (\n \t\"context\"\n \t\"fmt\"\n \t\"io\"\n \t\"mime\"\n \t\"path/filepath\"\n\n \t\"cloud.google.com/go/vertexai/genai\"\n )\n\n // countTokensMultimodal finds the number of tokens for a multimodal prompt (video+text), and writes to w. Then,\n // it calls the model with the multimodal prompt and writes token counts from the response metadata to w.\n //\n // video is a Google Cloud Storage path starting with \"gs://\"\n func countTokensMultimodal(w io.Writer, projectID, location, modelName string) error {\n \t// location := \"us-central1\"\n \t// modelName := \"gemini-2.0-flash-001\"\n \tprompt := \"Provide a description of the video.\"\n \tvideo := \"gs://cloud-samples-data/generative-ai/video/pixel8.mp4\"\n\n \tctx := context.Background()\n\n \tclient, err := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Client_NewClient(ctx, projectID, location)\n \tif err != nil {\n \t\treturn fmt.Errorf(\"unable to create client: %w\", err)\n \t}\n \tdefer client.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Client_Close()\n\n \tmodel := client.GenerativeModel(modelName)\n\n \tpart1 := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_Text(prompt)\n\n \t// Given a video file URL, prepare video file as genai.Part\n \tpart2 := genai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_FileData{\n \t\tMIMEType: mime.TypeByExtension(filepath.Ext(video)),\n \t\tFileURI: video,\n \t}\n\n \t// Finds the total number of tokens for the 2 parts (text, video) of the multimodal prompt,\n \t// before actually calling the model for inference.\n \tresp, err := model.CountTokens(ctx, part1, part2)\n \tif err != nil {\n \t\treturn err\n \t}\n\n \tfmt.Fprintf(w, \"Number of tokens for the multimodal video prompt: %d\\n\", resp.TotalTokens)\n\n \tres, err := model.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_GenerativeModel_GenerateContent(ctx, part1, part2)\n \tif err != nil {\n \t\treturn 
fmt.Errorf(\"unable to generate contents: %w\", err)\n \t}\n\n \t// The token counts are also provided in the model response metadata, after inference.\n \tfmt.Fprintln(w, \"\\nModel response\")\n \tmd := res.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/go/latest/genai.html#cloud_google_com_go_vertexai_genai_UsageMetadata\n \tfmt.Fprintf(w, \"Prompt Token Count: %d\\n\", md.PromptTokenCount)\n \tfmt.Fprintf(w, \"Candidates Token Count: %d\\n\", md.CandidatesTokenCount)\n \tfmt.Fprintf(w, \"Total Token Count: %d\\n\", md.TotalTokenCount)\n\n \treturn nil\n }\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai)."]]