*Last updated: 2025-08-17 (UTC).*

# Publish message overview

To publish a message with Pub/Sub, a publisher application creates and sends messages to a topic.

This document provides an overview of the publish workflow, including the concepts of topics and messages.

About topics
------------

A Pub/Sub *topic* is a named resource that represents a feed of messages. When a publisher sends a message, it targets a specific topic. The Pub/Sub service uses this topic name to route the message to all subscriptions attached to the topic. If a subscription has multiple subscribers, only one subscriber in the subscription receives each message.

Publishers don't have to know how many subscribers exist. They focus only on the topic, which ensures the separation of concerns between sending and receiving messages.

Pub/Sub supports two kinds of topics: standard topics and import topics.

### Properties of a topic

When you create or update a topic, you can specify the topic properties.

For more information, see [Properties of a topic](/pubsub/docs/create-topic#properties_of_a_topic).

### About import topics

An import topic lets Pub/Sub ingest streaming data from another source and act as the publisher application that sends the data to the topic. You can enable ingestion on a topic by using the console, Google Cloud CLI, REST calls, or the client libraries.
As part of managing the import topic, Google Cloud provides monitoring and scaling of the ingestion pipeline.

Without an import topic, streaming data into Pub/Sub from a data source requires an additional service that pulls data from the original source and publishes it to Pub/Sub. This additional service can be a streaming engine such as Apache Spark or a custom-written service, and you must also configure, deploy, run, scale, and monitor it.

The following is a list of important information regarding import topics:

- Similar to a standard topic, you can still manually publish to an import topic.

- You can attach only a single ingestion source to an import topic.

We recommend import topics for streaming data. If you are considering batch data ingestion into BigQuery instead of streaming data ingestion, you can try the [BigQuery Data Transfer Service](/bigquery/docs/dts-introduction). If you want to ingest data into Cloud Storage, [Storage Transfer Service](/storage-transfer/docs/overview) (STS) is a good option.

Pub/Sub supports the following sources for import topics:

- [Amazon Kinesis Data Streams](/pubsub/docs/create-aws-kinesis-import-topic)
- [Cloud Storage](/pubsub/docs/create-cloud-storage-import-topic)
- [Azure Event Hubs](/pubsub/docs/create-azure-event-hub-import-topic)
- [Amazon MSK](/pubsub/docs/create-amazon-msk-import-topic)
- [Confluent Cloud](/pubsub/docs/create-confluent-cloud-import-topic)

### Data replication in a topic

A Pub/Sub topic uses three zones to store data. The service supports synchronous replication to at least two zones and best-effort replication to an additional third zone. Pub/Sub replication occurs within a single region only.

About messages
--------------

A Pub/Sub *message* is the data that moves through the service.

A message consists of fields with the message data and metadata.
Specify at least one of the following in a message:

- **The message data**: This is the core content of the message and can be any text or binary data. It represents the actual information that you want to communicate between publishers and subscribers. If you're using the [REST API](/pubsub/docs/reference/rest/v1/PubsubMessage) directly, the message data must be base64-encoded. See the example in the REST tab of the [Publish messages](/pubsub/docs/publisher#rest) section.

- **An ordering key**: This is an identifier that represents the entity for which messages must be ordered. Messages with the same ordering key are expected to be delivered to a subscriber in the order in which they were published. An ordering key is required only if you want ordered delivery of your messages. For more information about ordering keys, see [Order messages](/pubsub/docs/ordering).

- **Attributes**: These are optional key-value pairs that provide additional context and information about the message. You can use them for routing, filtering, or enriching the message content. For example, you could add attributes such as timestamps or transaction IDs. For more information about publishing messages with attributes, see [Use attributes to publish a message](/pubsub/docs/publisher#using-attributes).

The Pub/Sub service adds the following fields to the message:

- A message ID that is unique to the topic
- A timestamp for when the Pub/Sub service receives the message

For example, here is a [message format in JSON](/pubsub/docs/reference/rest/v1/PubsubMessage):

    {
      "data": "This is the core message content.",
      "attributes": {
        "category": "notification",
        "user_id": "12345",
        "priority": "medium"
      },
      "orderingKey": "12345"
    }

When publishing messages using the Pub/Sub client libraries, provide the message `data` as a byte array, such as a Node.js `Buffer`.
If your data is a string, you must first encode it to bytes, for example by using UTF-8 encoding, before passing it to the client library.

If you're using the [REST API](/pubsub/docs/reference/rest/v1/PubsubMessage) directly, the message data must be base64-encoded and sent as a string.

Publish message workflow
------------------------

To publish a message with Pub/Sub, a publisher application creates and sends messages to a **topic**:

1. Create a message containing your data.
2. Select any optional publishing attributes.
3. Send a request to the Pub/Sub server to publish the message to a specified topic.
4. The Pub/Sub service receives the message and processes it as follows:

   - The message is stored for distribution.

   - The message is replicated across multiple zones for durability and high availability.

   - Pub/Sub identifies the subscriptions attached to the message's topic and delivers a copy of the message to each subscription.

Pub/Sub offers at-least-once message delivery and best-effort ordering to existing subscribers.

For more information about the Pub/Sub system, see [Overview of the Pub/Sub service](/pubsub/docs/pubsub-basics).

For more information about how Pub/Sub works, see [Architectural overview of Pub/Sub](/pubsub/architecture).

What's next
-----------

- [Create a standard topic](/pubsub/docs/create-topic)

- [Publish messages to a topic](/pubsub/docs/publisher)

- [Learn about schemas](/pubsub/docs/schemas)

*Apache Kafka® is a registered trademark of The Apache Software Foundation or its affiliates in the United States and/or other countries.*
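The base64 encoding requirement described in the "About messages" section can be sketched in Python. The `build_pubsub_message` helper below is hypothetical (it is not part of any Google client library); it only illustrates how a REST publish request body with message data, attributes, and an ordering key might be assembled:

```python
import base64
import json

def build_pubsub_message(data: str, attributes=None, ordering_key=None):
    """Assemble the JSON body for a hypothetical Pub/Sub REST publish call.

    The REST API requires the `data` field to be base64-encoded and sent
    as a string; attributes and the ordering key are optional.
    """
    message = {"data": base64.b64encode(data.encode("utf-8")).decode("ascii")}
    if attributes:
        message["attributes"] = attributes
    if ordering_key:
        message["orderingKey"] = ordering_key
    # A publish request carries a list of messages.
    return {"messages": [message]}

body = build_pubsub_message(
    "This is the core message content.",
    attributes={"category": "notification", "user_id": "12345", "priority": "medium"},
    ordering_key="12345",
)
print(json.dumps(body, indent=2))
```

Note that the client libraries handle this encoding for you; only direct REST callers need to base64-encode the data themselves.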