Count the number of tokens in a prompt

This sample demonstrates how to count the number of tokens in a prompt by using the Vertex AI Gemini API.

Explore further

For detailed documentation that includes this code sample, see the following:

Code samples

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI C# API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Cloud.AIPlatform.V1;
using System;
using System.Threading.Tasks;

public class GetTokenCount
{
    public async Task<int> CountTokens(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001"
    )
    {
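        // Create the LlmUtilityService client, targeting the regional Vertex AI endpoint.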
        var client = new LlmUtilityServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

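        // Build the request: the full model resource name and the content whose tokens should be counted.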
        var request = new CountTokensRequest
        {
            Endpoint = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts = { new Part { Text = "Why is the sky blue?" } }
                }
            }
        };

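        // Call the API; the response contains the total number of tokens in the prompt.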
        var response = await client.CountTokensAsync(request);
        int tokenCount = response.TotalTokens;
        Console.WriteLine($"There are {tokenCount} tokens in the prompt.");
        return tokenCount;
    }
}

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

// countTokens counts the number of tokens in the prompt and writes the result to w.
func countTokens(w io.Writer, prompt, projectID, location, modelName string) error {
	// prompt := "why is the sky blue?"
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"

	ctx := context.Background()

	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)

	resp, err := model.CountTokens(ctx, genai.Text(prompt))
	if err != nil {
		return err
	}

	fmt.Fprintf(w, "Number of tokens for the prompt: %d\n", resp.TotalTokens)

	return nil
}
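
For reference, a minimal caller for the function above might look like the following sketch. It assumes countTokens is compiled into a main package; the project ID is a placeholder you must replace, and os.Stdout is passed as the writer. This wrapper is not part of the official sample.

package main

import (
	"log"
	"os"
)

func main() {
	// Placeholder values; replace the project ID with your own.
	prompt := "why is the sky blue?"
	err := countTokens(os.Stdout, prompt, "your-project-id", "us-central1", "gemini-1.5-flash-001")
	if err != nil {
		log.Fatal(err)
	}
}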

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.CountTokensResponse;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import java.io.IOException;

public class GetTokenCount {
  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    getTokenCount(projectId, location, modelName);
  }

  // Gets the number of tokens for the prompt and the model's response.
  public static int getTokenCount(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests.
    // This client only needs to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      GenerativeModel model = new GenerativeModel(modelName, vertexAI);

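      // Count the tokens in the prompt without sending it to the model.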
      String textPrompt = "Why is the sky blue?";
      CountTokensResponse response = model.countTokens(textPrompt);

      int promptTokenCount = response.getTotalTokens();
      int promptCharCount = response.getTotalBillableCharacters();

      System.out.println("Prompt token Count: " + promptTokenCount);
      System.out.println("Prompt billable character count: " + promptCharCount);

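      // Send the prompt to the model; the response's usage metadata reports prompt, candidate, and total token counts.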
      GenerateContentResponse contentResponse = model.generateContent(textPrompt);

      int tokenCount = contentResponse.getUsageMetadata().getPromptTokenCount();
      int candidateTokenCount = contentResponse.getUsageMetadata().getCandidatesTokenCount();
      int totalTokenCount = contentResponse.getUsageMetadata().getTotalTokenCount();

      System.out.println("Prompt token Count: " + tokenCount);
      System.out.println("Candidate Token Count: " + candidateTokenCount);
      System.out.println("Total token Count: " + totalTokenCount);

      return promptTokenCount;
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function countTokens(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  const req = {
    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
  };

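  // Count the tokens in the request without sending it to the model.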
  const countTokensResp = await generativeModel.countTokens(req);
  console.log('count tokens response: ', countTokensResp);
}

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.