CountTokens API

The CountTokens API calculates the number of input tokens before sending a request to the Gemini API.

Use the CountTokens API to prevent requests from exceeding the model context window and to estimate potential costs based on billable characters.

The CountTokens API can use the same contents parameter as Gemini API inference requests.
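As a pre-flight check, you can compare the token count against your model's input limit before sending the request. The following Python sketch uses the same SDK calls shown in the examples later in this document; the token limit constant is an assumption for illustration, so check the documented input limit for your model.

import vertexai
from vertexai.generative_models import GenerativeModel

# TODO(developer): Replace with your project ID.
vertexai.init(project="your-project-id", location="us-central1")

# Assumed input token limit, for illustration only; check your model's documentation.
MODEL_INPUT_TOKEN_LIMIT = 1_048_576

model = GenerativeModel("gemini-1.5-flash-002")

# Count tokens before sending the prompt for inference.
response = model.count_tokens("Why is the sky blue?")
if response.total_tokens > MODEL_INPUT_TOKEN_LIMIT:
    raise ValueError("Prompt exceeds the model's input token limit.")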

Supported Models:

  • Gemini 1.5 Flash: gemini-1.5-flash-002, gemini-1.5-flash-001, gemini-1.5-flash-preview-0514
  • Gemini 1.5 Pro: gemini-1.5-pro-002, gemini-1.5-pro-001, gemini-1.5-pro-preview-0514
  • Gemini 1.0 Pro Vision: gemini-1.0-pro-vision, gemini-1.0-pro-vision-001
  • Gemini 1.0 Pro: gemini-1.0-pro, gemini-1.0-pro-001, gemini-1.0-pro-002
  • Gemini Experimental: gemini-experimental

Limitations:

gemini-1.0-pro-vision-001 and gemini-1.0-ultra-vision-001 use a fixed number of tokens for video inputs.

Example syntax

Syntax to send a count tokens request.

curl

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:countTokens" \
  -d '{
    "contents": [{
      ...
    }],
    "system_instruction": {
      "role": "...",
      "parts": [{
        ...
      }]
    },
    "tools": [{
      "function_declarations": [{
        ...
      }]
    }]
  }'

Python

gemini_model = GenerativeModel(MODEL_ID)
model_response = gemini_model.count_tokens([...])

Parameter list

Content

This class consists of two main properties: role and parts. The role property denotes the entity that produces the content, while the parts property contains one or more elements, each representing a segment of data within a message.

Parameters

role

Optional: string

The identity of the entity that creates the message. Set the string to one of the following:

  • user: This indicates that the message is sent by a real person. For example, a user-generated message.
  • model: This indicates that the message is generated by the model.

The model value is used to insert messages from the model into the conversation during multi-turn conversations.

For non-multi-turn conversations, this field can be left blank or unset.
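
For illustration, a multi-turn contents array alternates the two roles (the text values here are placeholders):

"contents": [
  {"role": "user", "parts": [{"text": "Why is the sky blue?"}]},
  {"role": "model", "parts": [{"text": "Because of Rayleigh scattering."}]},
  {"role": "user", "parts": [{"text": "Can you explain that in more detail?"}]}
]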

parts

Part

A list of ordered parts that make up a single message. Different parts may have different IANA MIME types.

Part

A data type containing media that is part of a multi-part Content message.

Parameters

text

Optional: string

A text prompt or code snippet.

inline_data

Optional: Blob

Inline data in raw bytes.

file_data

Optional: FileData

Data stored in a file.

Blob

Content blob. If possible, send the data as text rather than raw bytes.

Parameters

mime_type

string

IANA MIME type of the data.

data

bytes

Raw bytes.
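
For illustration, an inline_data part pairs base64-encoded bytes with their MIME type (the data value shown is a placeholder, not real image bytes):

{
  "inline_data": {
    "mime_type": "image/jpeg",
    "data": "BASE64_ENCODED_IMAGE_BYTES"
  }
}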

FileData

URI based data.

Parameters

mime_type

string

IANA MIME type of the data.

file_uri

string

The Cloud Storage URI to the file storing the data.

system_instruction

This field contains user-provided system instructions. It has the same structure as contents, but with limited support for content types.

Parameters

role

string

The role for the instructions. This field is ignored internally.

parts

Part

Text only. Instructions that users want to pass to the model.

FunctionDeclaration

A structured representation of a function declaration, as defined by the OpenAPI 3.0 specification, for a function that the model can generate JSON inputs for.

Parameters

name

string

The name of the function to call.

description

Optional: string

Description and purpose of the function.

parameters

Optional: Schema

Describes the parameters of the function in the OpenAPI JSON Schema Object format. See the OpenAPI 3.0 specification.

response

Optional: Schema

Describes the output of the function in the OpenAPI JSON Schema Object format. See the OpenAPI 3.0 specification.

Examples

Get token count from text prompt

This example counts the tokens of a single text prompt:

REST

To get the token count and the number of billable characters for a prompt by using the Vertex AI API, send a POST request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request. Available options include the following:

    • us-central1
    • us-west4
    • northamerica-northeast1
    • us-east4
    • us-west1
    • asia-northeast3
    • asia-southeast1
    • asia-northeast1
  • PROJECT_ID: Your project ID.
  • MODEL_ID: The model ID of the multimodal model that you want to use.
  • ROLE: The role in a conversation associated with the content. Specifying a role is required even in single-turn use cases. Acceptable values include the following:
    • USER: Specifies content that's sent by you.
  • TEXT: The text instructions to include in the prompt.
  • NAME: The name of the function to call.
  • DESCRIPTION: Description and purpose of the function.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:countTokens

Request JSON body:

{
  "contents": [{
    "role": "ROLE",
    "parts": [{
      "text": "TEXT"
    }]
  }],
  "system_instruction": {
    "role": "ROLE",
    "parts": [{
      "text": "TEXT"
    }]
  },
  "tools": [{
    "function_declarations": [
      {
        "name": "NAME",
        "description": "DESCRIPTION",
        "parameters": {
          "type": "OBJECT",
          "properties": {
            "location": {
              "type": "TYPE",
              "description": "DESCRIPTION"
            }
          },
          "required": [
            "location"
          ]
        }
      }
    ]
  }]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:countTokens"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:countTokens" | Select-Object -Expand Content

You should receive a JSON response similar to the following.
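
The exact values vary with the prompt, system instructions, tools, and model; a representative response has the following shape:

{
  "totalTokens": 31,
  "totalBillableCharacters": 96
}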

Python

import vertexai
from vertexai.generative_models import GenerativeModel

# TODO(developer): Update and uncomment the line below
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-002")

prompt = "Why is the sky blue?"
# Prompt tokens count
response = model.count_tokens(prompt)
print(f"Prompt Token Count: {response.total_tokens}")
print(f"Prompt Character Count: {response.total_billable_characters}")

# Send text to Gemini
response = model.generate_content(prompt)

# Response tokens count
usage_metadata = response.usage_metadata
print(f"Prompt Token Count: {usage_metadata.prompt_token_count}")
print(f"Candidates Token Count: {usage_metadata.candidates_token_count}")
print(f"Total Token Count: {usage_metadata.total_token_count}")
# Example response:
# Prompt Token Count: 6
# Prompt Character Count: 16
# Prompt Token Count: 6
# Candidates Token Count: 315
# Total Token Count: 321

NodeJS

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function countTokens(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  const req = {
    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
  };

  // Prompt tokens count
  const countTokensResp = await generativeModel.countTokens(req);
  console.log('Prompt tokens count: ', countTokensResp);

  // Send text to gemini
  const result = await generativeModel.generateContent(req);

  // Response tokens count
  const usageMetadata = result.response.usageMetadata;
  console.log('Response tokens count: ', usageMetadata);
}

Java

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.CountTokensResponse;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import java.io.IOException;

public class GetTokenCount {
  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    getTokenCount(projectId, location, modelName);
  }

  // Gets the number of tokens for the prompt and the model's response.
  public static int getTokenCount(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests.
    // This client only needs to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      GenerativeModel model = new GenerativeModel(modelName, vertexAI);

      String textPrompt = "Why is the sky blue?";
      CountTokensResponse response = model.countTokens(textPrompt);

      int promptTokenCount = response.getTotalTokens();
      int promptCharCount = response.getTotalBillableCharacters();

      System.out.println("Prompt token Count: " + promptTokenCount);
      System.out.println("Prompt billable character count: " + promptCharCount);

      GenerateContentResponse contentResponse = model.generateContent(textPrompt);

      int tokenCount = contentResponse.getUsageMetadata().getPromptTokenCount();
      int candidateTokenCount = contentResponse.getUsageMetadata().getCandidatesTokenCount();
      int totalTokenCount = contentResponse.getUsageMetadata().getTotalTokenCount();

      System.out.println("Prompt token Count: " + tokenCount);
      System.out.println("Candidate Token Count: " + candidateTokenCount);
      System.out.println("Total token Count: " + totalTokenCount);

      return promptTokenCount;
    }
  }
}

Go

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

// countTokens returns the number of tokens for this prompt.
func countTokens(w io.Writer, projectID, location, modelName string) error {
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"

	ctx := context.Background()
	prompt := genai.Text("Why is the sky blue?")

	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)

	resp, err := model.CountTokens(ctx, prompt)
	if err != nil {
		return err
	}

	fmt.Fprintf(w, "Number of tokens for the prompt: %d\n", resp.TotalTokens)

	resp2, err := model.GenerateContent(ctx, prompt)
	if err != nil {
		return err
	}
	fmt.Fprintf(w, "Number of tokens for the prompt: %d\n", resp2.UsageMetadata.PromptTokenCount)
	fmt.Fprintf(w, "Number of tokens for the candidates: %d\n", resp2.UsageMetadata.CandidatesTokenCount)
	fmt.Fprintf(w, "Total number of tokens: %d\n", resp2.UsageMetadata.TotalTokenCount)

	return nil
}

Get token count from media prompt

This example counts the tokens of a prompt that uses various media types.

REST

To get the token count and the number of billable characters for a prompt by using the Vertex AI API, send a POST request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request. Available options include the following:

    • us-central1
    • us-west4
    • northamerica-northeast1
    • us-east4
    • us-west1
    • asia-northeast3
    • asia-southeast1
    • asia-northeast1
  • PROJECT_ID: Your project ID.
  • MODEL_ID: The model ID of the multimodal model that you want to use.
  • ROLE: The role in a conversation associated with the content. Specifying a role is required even in single-turn use cases. Acceptable values include the following:
    • USER: Specifies content that's sent by you.
  • TEXT: The text instructions to include in the prompt.
  • FILE_URI: The URI or URL of the file to include in the prompt. Acceptable values include the following:
    • Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For gemini-1.5-pro and gemini-1.5-flash, the size limit is 2 GB. For gemini-1.0-pro-vision, the size limit is 20 MB.
    • HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
    • YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

    When specifying a file_uri, you must also specify the media type (mime_type) of the file.

  • MIME_TYPE: The media type of the file specified in the data or fileUri fields. Acceptable values include the following:

    • application/pdf
    • audio/mpeg
    • audio/mp3
    • audio/wav
    • image/png
    • image/jpeg
    • image/webp
    • text/plain
    • video/mov
    • video/mpeg
    • video/mp4
    • video/mpg
    • video/avi
    • video/wmv
    • video/mpegps
    • video/flv

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:countTokens

Request JSON body:

{
  "contents": [{
    "role": "ROLE",
    "parts": [
      {
        "file_data": {
          "file_uri": "FILE_URI",
          "mime_type": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT
      }
    ]
  }]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:countTokens"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:countTokens" | Select-Object -Expand Content

You should receive a JSON response similar to the following.
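
The response has the same shape as for text prompts; for media prompts, the token count reflects the media tokenization, so it is typically much larger than the billable character count, which covers only the text parts. For the video-plus-text prompt used in the samples below, a representative response is:

{
  "totalTokens": 16822,
  "totalBillableCharacters": 30
}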

Python

import vertexai
from vertexai.generative_models import GenerativeModel, Part

# TODO(developer): Update and uncomment the line below
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-002")

contents = [
    Part.from_uri(
        "gs://cloud-samples-data/generative-ai/video/pixel8.mp4",
        mime_type="video/mp4",
    ),
    "Provide a description of the video.",
]

# tokens count for user prompt
response = model.count_tokens(contents)
print(f"Prompt Token Count: {response.total_tokens}")
print(f"Prompt Character Count: {response.total_billable_characters}")
# Example response:
#     Prompt Token Count: 16822
#     Prompt Character Count: 30

# Send text to Gemini
response = model.generate_content(contents)
usage_metadata = response.usage_metadata

# tokens count for model response
print(f"Prompt Token Count: {usage_metadata.prompt_token_count}")
print(f"Candidates Token Count: {usage_metadata.candidates_token_count}")
print(f"Total Token Count: {usage_metadata.total_token_count}")
# Example response:
#     Prompt Token Count: 16822
#     Candidates Token Count: 71
#     Total Token Count: 16893

NodeJS

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function countTokens(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  const req = {
    contents: [
      {
        role: 'user',
        parts: [
          {
            file_data: {
              file_uri:
                'gs://cloud-samples-data/generative-ai/video/pixel8.mp4',
              mime_type: 'video/mp4',
            },
          },
          {text: 'Provide a description of the video.'},
        ],
      },
    ],
  };

  const countTokensResp = await generativeModel.countTokens(req);
  console.log('Prompt Token Count:', countTokensResp.totalTokens);
  console.log(
    'Prompt Character Count:',
    countTokensResp.totalBillableCharacters
  );

  // Send text to Gemini
  const result = await generativeModel.generateContent(req);
  const usageMetadata = result.response.usageMetadata;

  console.log('Prompt Token Count:', usageMetadata.promptTokenCount);
  console.log('Candidates Token Count:', usageMetadata.candidatesTokenCount);
  console.log('Total Token Count:', usageMetadata.totalTokenCount);
}

Java

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Content;
import com.google.cloud.vertexai.api.CountTokensResponse;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import java.io.IOException;

public class GetMediaTokenCount {
  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    getMediaTokenCount(projectId, location, modelName);
  }

  // Gets the number of tokens for the prompt with text and video and the model's response.
  public static int getMediaTokenCount(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests.
    // This client only needs to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      GenerativeModel model = new GenerativeModel(modelName, vertexAI);

      Content content = ContentMaker.fromMultiModalData(
          "Provide a description of the video.",
          PartMaker.fromMimeTypeAndData(
              "video/mp4", "gs://cloud-samples-data/generative-ai/video/pixel8.mp4")
      );

      CountTokensResponse response = model.countTokens(content);

      int tokenCount = response.getTotalTokens();
      System.out.println("Token count: " + tokenCount);

      return tokenCount;
    }
  }
}

Go

import (
	"context"
	"fmt"
	"io"
	"mime"
	"path/filepath"

	"cloud.google.com/go/vertexai/genai"
)

// countTokensMultimodal finds the number of tokens for a multimodal prompt (video+text), and writes to w. Then,
// it calls the model with the multimodal prompt and writes token counts from the response metadata to w.
//
// video is a Google Cloud Storage path starting with "gs://"
func countTokensMultimodal(w io.Writer, projectID, location, modelName string) error {
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"
	prompt := "Provide a description of the video."
	video := "gs://cloud-samples-data/generative-ai/video/pixel8.mp4"

	ctx := context.Background()

	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)

	part1 := genai.Text(prompt)

	// Given a video file URL, prepare video file as genai.Part
	part2 := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext(video)),
		FileURI:  video,
	}

	// Finds the total number of tokens for the 2 parts (text, video) of the multimodal prompt,
	// before actually calling the model for inference.
	resp, err := model.CountTokens(ctx, part1, part2)
	if err != nil {
		return err
	}

	fmt.Fprintf(w, "Number of tokens for the multimodal video prompt: %d\n", resp.TotalTokens)

	res, err := model.GenerateContent(ctx, part1, part2)
	if err != nil {
		return fmt.Errorf("unable to generate contents: %w", err)
	}

	// The token counts are also provided in the model response metadata, after inference.
	fmt.Fprintln(w, "\nModel response")
	md := res.UsageMetadata
	fmt.Fprintf(w, "Prompt Token Count: %d\n", md.PromptTokenCount)
	fmt.Fprintf(w, "Candidates Token Count: %d\n", md.CandidatesTokenCount)
	fmt.Fprintf(w, "Total Token Count: %d\n", md.TotalTokenCount)

	return nil
}
