View an Apache Kafka for BigQuery topic

To view detailed information about a single topic, you can use the Google Cloud console, the Google Cloud CLI, the client libraries, the Apache Kafka for BigQuery API, or the open source Apache Kafka APIs.

Required roles and permissions to view a topic

To get the permissions that you need to view a topic, ask your administrator to grant you the Managed Kafka Viewer (roles/managedkafka.viewer) IAM role on your project. For more information about granting roles, see Manage access.

This predefined role contains the permissions required to view a topic. The exact permissions are listed in the following section.

Required permissions

The following permissions are required to view a topic:

  • List topics: managedkafka.topics.list
  • Get topic: managedkafka.topics.get

You might also be able to get these permissions with custom roles or other predefined roles.

For more information about the Managed Kafka Viewer (roles/managedkafka.viewer) IAM role, see Apache Kafka for BigQuery predefined roles.
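
For example, an administrator can grant this role at the project level by using the Google Cloud CLI. The project ID and user email in the following command are placeholders:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=user:USER_EMAIL \
        --role=roles/managedkafka.viewer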

Topic properties in the console

In the console, you can view the following topic properties:

  • Configurations: This tab provides general configuration details about the topic, including the following:

    • Name: The unique identifier of the topic within the cluster.

    • Partitions: The number of partitions in the topic. Partitions divide the topic's data for scalability and parallelism.

    • Replicas: The number of copies (replicas) maintained for each partition to ensure data redundancy and availability.

    • Cluster: The name of the Apache Kafka for BigQuery cluster to which the topic belongs.

    • Region: The Google Cloud region where the cluster and the topic are located.

    • Non-default topic parameters: Any topic-level configuration overrides that have been set for the topic and that differ from the cluster-wide defaults.

  • Monitoring: This tab provides visual charts that display key metrics related to the topic's activity and performance. These charts include the following:

    • Byte count: A time-series chart showing the rate at which bytes are produced or sent to the topic. This indicates the volume of data published to the topic over time. The corresponding metric is managedkafka.googleapis.com/byte_in_count; the sketch after this list shows one way to query it programmatically.

    • Request count: A time-series chart representing the rate of requests made to the topic. It reflects the overall activity and usage of the topic. The related metric is managedkafka.googleapis.com/topic_request_count.

    • Log segments by partition: This chart displays the number of active log segments for each partition within the topic. Log segments are the physical files on disk where Kafka stores the topic data. The relevant metric is managedkafka.googleapis.com/log_segments.

  • Consumer groups: This section lists the consumer groups that are subscribed to the topic. A consumer group is a set of consumers that work together to read messages from the topic.
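
The charts on the Monitoring tab are backed by Cloud Monitoring metrics, so you can also read them programmatically. The following Go sketch is only an illustration: it lists recent time series for the byte count metric named above, using a placeholder project ID and a one-hour window. You can add resource-label filters to the filter string to narrow the results to a single topic.

import (
	"context"
	"fmt"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3/v2"
	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/types/known/timestamppb"
)

// listTopicByteInCount lists time series for the byte count metric over the
// last hour. projectID is a placeholder; extend the Filter string with
// resource-label conditions to limit the results to a single topic.
func listTopicByteInCount(projectID string) error {
	ctx := context.Background()
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		return fmt.Errorf("monitoring.NewMetricClient got err: %w", err)
	}
	defer client.Close()

	now := time.Now()
	req := &monitoringpb.ListTimeSeriesRequest{
		Name:   "projects/" + projectID,
		Filter: `metric.type = "managedkafka.googleapis.com/byte_in_count"`,
		Interval: &monitoringpb.TimeInterval{
			StartTime: timestamppb.New(now.Add(-1 * time.Hour)),
			EndTime:   timestamppb.New(now),
		},
		View: monitoringpb.ListTimeSeriesRequest_FULL,
	}
	it := client.ListTimeSeries(ctx, req)
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return fmt.Errorf("ListTimeSeries got err: %w", err)
		}
		fmt.Printf("%s: %d points\n", ts.GetMetric().GetType(), len(ts.GetPoints()))
	}
	return nil
}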

View a topic

Console

  1. In the Google Cloud console, go to the Clusters page.

    Go to Clusters

    The clusters that you created in a project are listed.

  2. Click the cluster for which you want to see the topics.

    The cluster details page is displayed. On the Resources tab of this page, the topics are listed.

  3. To view a specific topic, click the topic name.

    The topic details page is displayed.

gcloud

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Run the gcloud beta managed-kafka topics describe command:

    gcloud beta managed-kafka topics describe TOPIC_ID \
        --cluster=CLUSTER_ID --location=LOCATION_ID
    

    This command fetches and displays detailed information about the specified topic, including its configuration settings, such as the number of partitions, the replication factor, and any topic-level configuration overrides.

    Replace the following:

    • TOPIC_ID: The ID of the topic.

    • CLUSTER_ID: The ID of the cluster containing the topic.

    • LOCATION_ID: The location of the cluster.
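
    If you don't know the topic ID, you can first list the topics in the cluster with the companion list command, using the same placeholders:

    gcloud beta managed-kafka topics list \
        --cluster=CLUSTER_ID --location=LOCATION_ID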

Go

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/managedkafka/apiv1/managedkafkapb"
	"google.golang.org/api/option"

	managedkafka "cloud.google.com/go/managedkafka/apiv1"
)

func getTopic(w io.Writer, projectID, region, clusterID, topicID string, opts ...option.ClientOption) error {
	// projectID := "my-project-id"
	// region := "us-central1"
	// clusterID := "my-cluster"
	// topicID := "my-topic"
	ctx := context.Background()
	client, err := managedkafka.NewClient(ctx, opts...)
	if err != nil {
		return fmt.Errorf("managedkafka.NewClient got err: %w", err)
	}
	defer client.Close()

	clusterPath := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", projectID, region, clusterID)
	topicPath := fmt.Sprintf("%s/topics/%s", clusterPath, topicID)
	req := &managedkafkapb.GetTopicRequest{
		Name: topicPath,
	}
	topic, err := client.GetTopic(ctx, req)
	if err != nil {
		return fmt.Errorf("client.GetTopic got err: %w", err)
	}
	fmt.Fprintf(w, "Got topic: %#v\n", topic)
	return nil
}
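
The sample above only defines the getTopic helper. A minimal driver, shown here as a sketch with placeholder resource names, might call it like this:

import (
	"log"
	"os"
)

func main() {
	// Placeholder values; replace them with your own project, region,
	// cluster, and topic.
	if err := getTopic(os.Stdout, "my-project-id", "us-central1", "my-cluster", "my-topic"); err != nil {
		log.Fatalf("getTopic: %v", err)
	}
}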

Java

import com.google.api.gax.rpc.ApiException;
import com.google.cloud.managedkafka.v1.ManagedKafkaClient;
import com.google.cloud.managedkafka.v1.Topic;
import com.google.cloud.managedkafka.v1.TopicName;
import java.io.IOException;

public class GetTopic {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the example.
    String projectId = "my-project-id";
    String region = "my-region"; // e.g. us-east1
    String clusterId = "my-cluster";
    String topicId = "my-topic";
    getTopic(projectId, region, clusterId, topicId);
  }

  public static void getTopic(String projectId, String region, String clusterId, String topicId)
      throws Exception {
    try (ManagedKafkaClient managedKafkaClient = ManagedKafkaClient.create()) {
      // This operation is being handled synchronously.
      Topic topic =
          managedKafkaClient.getTopic(TopicName.of(projectId, region, clusterId, topicId));
      System.out.println(topic.getAllFields());
    } catch (IOException | ApiException e) {
      System.err.printf("managedKafkaClient.getTopic got err: %s", e.getMessage());
    }
  }
}

What's next