Subscription properties

Pub/Sub subscription properties are the characteristics of a subscription. You can set subscription properties when you create or update a subscription.

This document describes the different subscription properties you can set for a subscription.

Common subscription properties

When you create a subscription, you specify a number of properties to set up the subscription. Some of these properties are common to all types of subscriptions and are discussed in the following sections.

Message retention duration

The Message retention duration option specifies how long Pub/Sub retains messages after publication. After the message retention duration passes, Pub/Sub might discard the message, regardless of its acknowledgment state. To retain acknowledged messages for the message retention duration, see Replaying and discarding messages.

The following are the values for the Message retention duration option:

  • Default value = 7 days
  • Minimum value = 10 minutes
  • Maximum value = 7 days
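
For illustration, here is a minimal sketch using the Python client library (google-cloud-pubsub) that sets the retention duration at creation time; the project, topic, and subscription names are placeholders:

from google.cloud import pubsub_v1
from google.protobuf import duration_pb2

subscriber = pubsub_v1.SubscriberClient()

# Retain unacknowledged messages for 1 hour (minimum 10 minutes, maximum 7 days).
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "message_retention_duration": duration_pb2.Duration(seconds=3600),
    }
)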

Unacknowledged messages can result from idle subscriptions, backup needs, or slow processing. If you process the messages within 24 hours, you don't incur additional charges. You can avoid new charges by managing these scenarios as follows:

  • Idle subscriptions. Delete idle subscriptions to avoid incurring subscription message retention charges.

  • Backup storage. If you are using subscription retention as backup storage, consider switching to another storage option, such as topic message retention or retaining acknowledged messages. Topic message retention stores messages once at the topic level, where they remain available for all subscriptions to consume when needed.

  • Processing delays. Add more subscribers (if possible) to process the messages within a day.

Retain acknowledged messages

If you specify Message retention duration, you can also specify whether to retain acknowledged messages.

The Retain acknowledged messages option lets you retain acknowledged messages for the specified message retention duration. This option increases message storage fees. For more information, see storage costs.
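
As a sketch of enabling this on an existing subscription with the Python client library, using an update with a field mask (names are placeholders):

from google.cloud import pubsub_v1
from google.protobuf import field_mask_pb2

subscriber = pubsub_v1.SubscriberClient()

# Keep acknowledged messages for the configured retention duration.
subscription = pubsub_v1.types.Subscription(
    name=subscriber.subscription_path("my-project", "my-sub"),
    retain_acked_messages=True,
)
subscriber.update_subscription(
    request={
        "subscription": subscription,
        "update_mask": field_mask_pb2.FieldMask(paths=["retain_acked_messages"]),
    }
)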

Expiration period

The Expiration period option specifies how long a subscription remains active without subscriber activity or changes to the subscription properties before Pub/Sub deletes it.

Subscriptions with no subscriber activity and no changes to the subscription properties expire. If Pub/Sub detects subscriber activity or if you update any of the subscription properties, the subscription deletion clock restarts. Examples of subscriber activity include open connections, active pulls, and successful pushes.

If you specify the expiration period, the value must be longer than the message retention duration specified in the Message retention duration option.

The following are the values for the Expiration period option:

  • Default value = 31 days
  • Minimum value = 1 day
  • Maximum value = 365 days

To prevent a subscription from expiring, set the expiration period to never expire.
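
In the v1 API, the expiration period maps to the subscription's expiration_policy field. A minimal sketch with the Python client library (placeholder names):

from google.cloud import pubsub_v1
from google.protobuf import duration_pb2

subscriber = pubsub_v1.SubscriberClient()

# Setting ttl controls when an inactive subscription is deleted; an
# ExpirationPolicy with no ttl set means the subscription never expires.
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "expiration_policy": pubsub_v1.types.ExpirationPolicy(
            ttl=duration_pb2.Duration(seconds=90 * 24 * 60 * 60)  # 90 days
        ),
    }
)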

Acknowledgment deadline

The Acknowledgment deadline option specifies the initial deadline after which an unacknowledged message is sent again. You can extend the acknowledgment deadline on a per-message basis by sending subsequent ModifyAckDeadline requests.

The following are the values for the Acknowledgment deadline option:

  • Default value = 10 seconds
  • Minimum value = 10 seconds
  • Maximum value = 600 seconds

In some cases, Pub/Sub client libraries can control the rate of delivery and dynamically modify the acknowledgment deadline. In doing so, the message might get re-delivered before the acknowledgement deadline that you set. To override this behaviour, use minDurationPerAckExtension and maxDurationPerAckExtension. For more information on using these values, see Exactly-once delivery support in client libraries.
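
A sketch of setting the initial deadline and then extending it for an in-flight message with the Python client library; the ack ID would come from a pull response, and all names are placeholders:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-sub")

# Create the subscription with a 60-second initial acknowledgment deadline.
subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": "projects/my-project/topics/my-topic",
        "ack_deadline_seconds": 60,
    }
)

# Later, extend the deadline for a specific in-flight message.
subscriber.modify_ack_deadline(
    request={
        "subscription": subscription_path,
        "ack_ids": ["ACK_ID_FROM_PULL_RESPONSE"],
        "ack_deadline_seconds": 120,
    }
)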

Subscription filter

Use the Subscription filter option to specify a string with a filtering expression. If a subscription has a filter, the subscription only delivers the messages that match the filter. The Pub/Sub service automatically acknowledges the messages that don't match the filter.

  • You can filter messages by their attributes, but not by the data in the message.

  • If unspecified, the subscription doesn't filter messages and subscribers receive all messages.

  • Filters cannot be changed or removed after you apply them.

When you receive messages from a subscription with a filter, you don't incur egress fees for the messages that Pub/Sub automatically acknowledges. You do, however, incur message delivery fees and seek-related storage fees for these messages.
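
For example, here is a sketch with the Python client library that attaches a filter at creation time; the attribute name and values are hypothetical:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()

# Deliver only messages whose "event_type" attribute equals "order";
# Pub/Sub automatically acknowledges everything else.
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "filter": 'attributes.event_type = "order"',
    }
)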

For more information, see Filter messages from a subscription.

Message ordering

When a subscription has Message ordering enabled, the subscriber clients receive messages published in the same region with the same ordering key in the order in which the messages were received by the service.

When using ordered delivery, acknowledgments for later messages are not processed until acknowledgments for earlier messages are processed.

Publishers must send messages with an ordering key so that Pub/Sub can deliver the messages in order.

If the Message ordering property is not set, Pub/Sub might not deliver messages in order, even if they have an ordering key.
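
A sketch of both sides with the Python client library (placeholder names): the subscription enables ordering, and the publisher opts in and sets an ordering key per message.

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "enable_message_ordering": True,
    }
)

# Publishers must also opt in to ordering and set an ordering key.
publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True)
)
topic_path = publisher.topic_path("my-project", "my-topic")
publisher.publish(topic_path, b"order created", ordering_key="customer-123").result()
publisher.publish(topic_path, b"order shipped", ordering_key="customer-123").result()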

Dead letter topic

If Pub/Sub can't deliver a message after a set number of delivery attempts, or a subscriber can't acknowledge the message, you can configure a dead-letter topic to which Pub/Sub republishes these messages.

If you set a dead-letter topic, you can also specify the maximum number of delivery attempts. The following are the values for maximum number of delivery attempts for the dead-letter topic:

  • Default value = 5 delivery attempts
  • Minimum value = 5 delivery attempts
  • Maximum value = 100 delivery attempts

If the dead-letter topic is in a different project than the subscription, you must also specify the project ID with the dead-letter topic.
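
A sketch of attaching a dead-letter policy with the Python client library; the dead-letter topic must already exist, and all names are placeholders:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()

# Messages still unacknowledged after 10 delivery attempts are
# republished to the dead-letter topic.
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "dead_letter_policy": pubsub_v1.types.DeadLetterPolicy(
            dead_letter_topic="projects/my-project/topics/my-dead-letter-topic",
            max_delivery_attempts=10,
        ),
    }
)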

For more information, see Forwarding to dead-letter topics.

Retry policy

If the acknowledgment deadline expires or a subscriber responds with a negative acknowledgment, Pub/Sub can send the message again. How these redelivery attempts are made is controlled by the subscription's Retry policy.

By default, the retry policy for a subscription is set to use Retry immediately. With this option, Pub/Sub resends the message when the acknowledgment deadline expires or a subscriber responds with a negative acknowledgment.

You can also set the value to Retry after exponential backoff delay. In this case, you must specify the maximum and minimum backoff values.

Use the following guidelines to set the minimum and maximum backoff values:

  • If you set the maximum value for the backoff duration, the default value for the minimum backoff duration is 10 seconds.

  • If you set the minimum value for the backoff duration, the default value for the maximum backoff duration is 600 seconds.

  • The longest backoff duration that you can specify is 600 seconds.
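
A sketch of configuring exponential backoff with the Python client library (placeholder names); both bounds map to Duration fields on the retry policy:

from google.cloud import pubsub_v1
from google.protobuf import duration_pb2

subscriber = pubsub_v1.SubscriberClient()

# Retry with exponential backoff between 10 and 600 seconds instead of
# redelivering immediately.
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "retry_policy": pubsub_v1.types.RetryPolicy(
            minimum_backoff=duration_pb2.Duration(seconds=10),
            maximum_backoff=duration_pb2.Duration(seconds=600),
        ),
    }
)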

Retry policy and batched messages

If messages are in a batch, Pub/Sub starts the exponential backoff when one of the following occurs:

  • The subscriber sends a negative acknowledgment for every message in the batch.

  • The acknowledgment deadline expires.

Retry policy and push subscription

If you receive messages from a push subscription, Pub/Sub might redeliver messages after the push backoff instead of the exponential backoff duration. When the push backoff is longer than the exponential backoff duration, Pub/Sub redelivers unacknowledged messages after the push backoff.

Pull subscription properties

When you configure a pull subscription, you can specify the following properties.

Exactly-once delivery

Exactly-once delivery. If enabled, Pub/Sub provides exactly-once delivery guarantees. If not enabled, the subscription supports at-least-once delivery for each message.
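
As a sketch, assuming a recent version of the Python client library (which exposes ack_with_response for exactly-once subscriptions); names are placeholders:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-sub")

subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": "projects/my-project/topics/my-topic",
        "enable_exactly_once_delivery": True,
    }
)

def callback(message):
    # ack_with_response returns a future that resolves once the service
    # has durably recorded the acknowledgment.
    message.ack_with_response().result()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)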

Push subscription properties

When you configure a push subscription, you can specify the following properties.

Endpoints

Endpoint URL (required). A publicly accessible HTTPS address. The server for the push endpoint must have a valid SSL certificate signed by a certificate authority. Pub/Sub delivers messages to push endpoints from the same Google Cloud region in which it stores the messages, on a best-effort basis.

Pub/Sub no longer requires proof of ownership for push subscription URL domains. If your domain receives unexpected POST requests from Pub/Sub, you can report suspected abuse.

Authentication

Enable authentication. When enabled, messages delivered by Pub/Sub to the push endpoint include an authorization header to allow the endpoint to authenticate the request. Automatic authentication and authorization mechanisms are available for App Engine Standard and Cloud Functions endpoints hosted in the same project as the subscription.

The authentication configuration for an authenticated push subscription consists of a user-managed service account and the audience parameters that are specified in a create, patch, or ModifyPushConfig call. You must also grant a special Google-managed service account a specific role, as described in the following list.

  • User-managed service account (required). The service account associated with the push subscription. Pub/Sub uses this account as the email claim of the generated JSON Web Token (JWT).

  • Audience. A single, case-insensitive string that the webhook uses to validate the intended audience of this particular token.

  • Google-managed service account (required).

    • Pub/Sub automatically creates a service account for you with the format service-{PROJECT_NUMBER}@gcp-sa-pubsub.iam.gserviceaccount.com.

    • This service account must be granted the iam.serviceAccounts.getOpenIdToken permission (included in the roles/iam.serviceAccountTokenCreator role) to allow Pub/Sub to create JWTs for authenticated push requests.
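
A sketch of creating an authenticated push subscription with the Python client library; the endpoint URL, service account email, and other names are placeholders:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()

# Pub/Sub attaches a JWT for this service account to each push request;
# the endpoint validates the token and checks the audience claim.
push_config = pubsub_v1.types.PushConfig(
    push_endpoint="https://example.com/push",
    oidc_token=pubsub_v1.types.PushConfig.OidcToken(
        service_account_email="push-auth@my-project.iam.gserviceaccount.com",
        audience="https://example.com/push",
    ),
)
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "push_config": push_config,
    }
)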

Payload unwrapping

The Enable payload unwrapping option strips Pub/Sub messages of all message metadata, except for the message data. With payload unwrapping, the message data is delivered directly as the HTTP body.

  • Write metadata. Adds the previously removed message metadata back into the request headers.
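
In the v1 API, payload unwrapping corresponds to the push config's no_wrapper field, available in recent client library versions. A sketch with placeholder names:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()

# Deliver the raw message data as the HTTP body; write_metadata=True adds
# the stripped metadata back as request headers.
push_config = pubsub_v1.types.PushConfig(
    push_endpoint="https://example.com/push",
    no_wrapper=pubsub_v1.types.PushConfig.NoWrapper(write_metadata=True),
)
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "push_config": push_config,
    }
)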

BigQuery properties

When you select Write to BigQuery as the subscription delivery type, you can specify the following additional properties.

Use topic schema

This option lets Pub/Sub use the schema of the Pub/Sub topic to which the subscription is attached. In addition, Pub/Sub writes the fields in messages to the corresponding columns in the BigQuery table.

When you use this option, remember to check the following additional requirements:

  • The fields in the topic schema and the BigQuery schema must have the same names and their types must be compatible with each other.

  • Any optional field in the topic schema must also be optional in the BigQuery schema.

  • Required fields in the topic schema don't need to be required in the BigQuery schema.

  • If there are BigQuery fields that are not present in the topic schema, these BigQuery fields must be in mode NULLABLE.

  • If the topic schema has additional fields that are not present in the BigQuery schema and these fields can be dropped, select the option Drop unknown fields.

  • You can select only one of the subscription properties, Use topic schema or Use table schema.

If you don't select the Use topic schema or Use table schema option, ensure that the BigQuery table has a column called data of type BYTES, STRING, or JSON. Pub/Sub writes the message to this BigQuery column.

You might not see changes to the Pub/Sub topic schema or the BigQuery table schema take effect immediately with messages written to the BigQuery table. For example, if the Drop unknown fields option is enabled and a field is present in the Pub/Sub schema but not in the BigQuery schema, messages written to the BigQuery table might still not contain the field after you add it to the BigQuery schema. Eventually, the schemas synchronize and subsequent messages include the field.

When you use the Use topic schema option for your BigQuery subscription, you can also take advantage of BigQuery change data capture (CDC). CDC updates your BigQuery tables by processing and applying changes to existing rows.

To learn more about this feature, see Stream table updates with change data capture.

To learn how to use this feature with BigQuery subscriptions, see BigQuery change data capture.

Use table schema

This option lets Pub/Sub use the schema of the BigQuery table to write the fields of a JSON message to the corresponding columns. When you use this option, remember to check the following additional requirements:

  • Published messages must be in JSON format.

  • If the subscription's topic has a schema associated with it, then the message encoding property must be set to JSON.

  • If there are BigQuery fields that are not present in the messages, these BigQuery fields must be in mode NULLABLE.

  • If the messages have additional fields that are not present in the BigQuery schema and these fields can be dropped, select the option Drop unknown fields.

  • In the JSON message, DATE, DATETIME, TIME, and TIMESTAMP values must be integers that adhere to the supported representations.

  • In the JSON message, NUMERIC and BIGNUMERIC values must be bytes encoded using the BigDecimalByteStringEncoder.

  • You can select only one of the subscription properties, Use topic schema or Use table schema.

If you don't select the Use topic schema or Use table schema option, ensure that the BigQuery table has a column called data of type BYTES, STRING, or JSON. Pub/Sub writes the message to this BigQuery column.

You might not see changes to the BigQuery table schema take effect immediately with messages written to the BigQuery table. For example, if the Drop unknown fields option is enabled and a field is present in the messages, but not in the BigQuery schema, messages written to the BigQuery table might still not contain the field after adding it to the BigQuery schema. Eventually, the schema synchronizes and subsequent messages include the field.

When you use the Use table schema option for your BigQuery subscription, you can also take advantage of BigQuery change data capture (CDC). CDC updates your BigQuery tables by processing and applying changes to existing rows.

To learn more about this feature, see Stream table updates with change data capture.

To learn how to use this feature with BigQuery subscriptions, see BigQuery change data capture.

Drop unknown fields

This option is used with the Use topic schema or Use table schema option. It lets Pub/Sub drop any field that is present in the topic schema or message but not in the BigQuery schema. If Drop unknown fields is not set, messages with extra fields are not written to BigQuery; they remain in the subscription backlog, and the subscription enters an error state.

Write metadata

This option lets Pub/Sub write the metadata of each message to additional columns in the BigQuery table. Otherwise, the metadata is not written to the BigQuery table.

If you select the Write metadata option, ensure that the BigQuery table has the fields described in the following table.

If you don't select the Write metadata option, then the destination BigQuery table only requires the data field unless use_topic_schema is true. If you select both the Write metadata and Use topic schema options, then the schema of the topic must not contain any fields with names that match those of the metadata parameters. This limitation includes camelCase versions of these snake_case parameters.

Parameters

  • subscription_name (STRING). The name of the subscription.

  • message_id (STRING). The ID of the message.

  • publish_time (TIMESTAMP). The time at which the message was published.

  • data (BYTES, STRING, or JSON). The message body. The data field is required for all destination BigQuery tables that don't select Use topic schema. If the field is of type JSON, the message body must be valid JSON.

  • attributes (STRING or JSON). A JSON object containing all message attributes. It also contains additional fields that are part of the Pub/Sub message, including the ordering key, if present.
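
A sketch that ties these properties together with the Python client library; the table name follows the {project}.{dataset}.{table} form, and all names are placeholders:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()

# Write messages to a BigQuery table using the topic schema, dropping
# fields missing from the table and adding the metadata columns above.
bigquery_config = pubsub_v1.types.BigQueryConfig(
    table="my-project.my_dataset.my_table",
    use_topic_schema=True,
    drop_unknown_fields=True,
    write_metadata=True,
)
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "bigquery_config": bigquery_config,
    }
)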

Cloud Storage properties

When you select Write to Cloud Storage as the subscription delivery type, you can specify the following additional properties.

Bucket name

A Cloud Storage bucket must already exist before you create a Cloud Storage subscription.

The messages are sent as batches and stored in the Cloud Storage bucket. A single batch or file is stored as an object in the bucket.

The Cloud Storage bucket must have Requester Pays disabled.

To create a Cloud Storage bucket, see Create buckets.

Filename prefix, suffix, and datetime

The output Cloud Storage files generated by the Cloud Storage subscription are stored as objects in the Cloud Storage bucket. The name of the object stored in the Cloud Storage bucket is of the following format: <file-prefix><UTC-date-time>_<uuid><file-suffix>.

The following list includes details of the file format and the fields that you can customize:

  • <file-prefix> is the custom filename prefix. This is an optional field.

  • <UTC-date-time> is a customizable auto-generated string based on the time the object is created.

  • <uuid> is an auto-generated random string for the object.

  • <file-suffix> is the custom filename suffix. This is an optional field. The filename suffix cannot end in "/".

  • You can change the filename prefix and suffix:

    • For example, if the value of the filename prefix is prod_ and the value of the filename suffix is _archive, a sample object name is prod_2023-09-25T04:10:00+00:00_uN1QuE_archive.

    • If you don't specify the filename prefix and suffix, the object name stored in the Cloud Storage bucket is of the format: <UTC-date-time>_<uuid>.

    • Cloud Storage object naming requirements also apply to the filename prefix and suffix. For more information, see About Cloud Storage objects.

  • You can change how the date and time are displayed in the filename:

    • Required datetime matchers that you can use only once: year (YYYY or YY), month (MM), day (DD), hour (hh), minute (mm), second (ss). For example, YY-YYYY or MMM is invalid.

    • Optional matchers that you can use only once: datetime separator (T) and timezone offset (Z or +00:00).

    • Optional elements that you can use multiple times: hyphen (-), underscore (_), colon (:), and forward slash (/).

    • For example, if the value of the filename datetime format is YYYY-MM-DD/hh_mm_ssZ, a sample object name is prod_2023-09-25/04_10_00Z_uNiQuE_archive.

    • If the filename datetime format ends in a character which is not a matcher, that character will replace the separator between <UTC-date-time> and <uuid>. For example, if the value of the filename datetime format is YYYY-MM-DDThh_mm_ss-, a sample object name is prod_2023-09-25T04_10_00-uNiQuE_archive.

File batching

Cloud Storage subscriptions let you decide when to create a new output file that is stored as an object in the Cloud Storage bucket. Pub/Sub writes an output file when one of the specified batching conditions is met. The following are the Cloud Storage batching conditions:

  • Storage batch max duration. This is a required setting. The Cloud Storage subscription writes a new output file if the specified value of max duration is exceeded. If you don't specify the value, a default value of 5 minutes is applied. The following are the applicable values for max duration:

    • Minimum value = 1 minute
    • Default value = 5 minutes
    • Maximum value = 10 minutes
  • Storage batch max bytes. This is an optional setting. The Cloud Storage subscription writes a new output file if the specified value of max bytes is exceeded. The following are the applicable values for max bytes:

    • Minimum value = 1 KB
    • Maximum value = 10 GiB

For example, you can configure max duration as 6 minutes and max bytes as 2 GB. If the output file reaches a size of 2 GB at the 4th minute, Pub/Sub finalizes that file and starts writing to a new one.

A Cloud Storage subscription might write to multiple files in a Cloud Storage bucket simultaneously. If you have configured your subscription to create a new file every 6 minutes, you might observe multiple Cloud Storage files being created every 6 minutes.

In some situations, Pub/Sub might start writing to a new file earlier than the time configured by the file batching conditions. A file might also exceed the Max bytes value if the subscription receives messages larger than the Max bytes value.
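
A sketch that combines the bucket, filename, and batching properties with the Python client library; the bucket, prefix, suffix, and datetime format are placeholders, and the filename_datetime_format field assumes a recent client library version:

from google.cloud import pubsub_v1
from google.protobuf import duration_pb2

subscriber = pubsub_v1.SubscriberClient()

# Batch messages into objects named like prod_<datetime>_<uuid>_archive,
# finalizing a file after 6 minutes or 2 GB, whichever comes first.
cloud_storage_config = pubsub_v1.types.CloudStorageConfig(
    bucket="my-bucket",
    filename_prefix="prod_",
    filename_suffix="_archive",
    filename_datetime_format="YYYY-MM-DD/hh_mm_ssZ",
    max_duration=duration_pb2.Duration(seconds=360),
    max_bytes=2 * 10**9,  # 2 GB
)
subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "my-sub"),
        "topic": "projects/my-project/topics/my-topic",
        "cloud_storage_config": cloud_storage_config,
    }
)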

File format

When you create a Cloud Storage subscription, you can specify the format of the output files that are to be stored in a Cloud Storage bucket as Text or Avro.

  • Text: The messages are stored as plain text. A newline character separates a message from the previous message in the file. Only message payloads are stored, not attributes or other metadata.

  • Avro: The messages are stored in Apache Avro binary format.

    When you select Avro, you can additionally enable the write metadata option. This option lets you store the message metadata along with the message.

    The subscription_name, message_id, publish_time, and attributes fields are written to top-level fields in the output Avro object. All other message properties other than data (for example, an ordering_key, if present) are added as entries in the attributes map.

    If write metadata is disabled, only the message payload is written to the output Avro object.

    Here is the Avro schema for the output messages without write metadata enabled:

    {
      "type": "record",
      "namespace": "com.google.pubsub",
      "name": "PubsubMessage",
      "fields": [
        { "name": "data", "type": "bytes" }
      ]
    }
    

    Here is the Avro schema for the output messages with write metadata enabled:

    {
      "type": "record",
      "namespace": "com.google.pubsub",
      "name": "PubsubMessageWithMetadata",
      "fields": [
        { "name": "subscription_name", "type": "string" },
        { "name": "message_id", "type": "string"  },
        { "name": "publish_time", "type": {
            "type": "long",
            "logicalType": "timestamp-micros"
          }
        },
        { "name": "attributes", "type": { "type": "map", "values": "string" } },
        { "name": "data", "type": "bytes" }
      ]
    }
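
As an illustration (not part of the Pub/Sub API), here is one way to read these records back from the bucket with write metadata enabled, using the google-cloud-storage client and the third-party fastavro package; the bucket name and prefix are placeholders:

import io

import fastavro
from google.cloud import storage

client = storage.Client()

# Iterate over the output objects written by the subscription.
for blob in client.bucket("my-bucket").list_blobs(prefix="prod_"):
    payload = io.BytesIO(blob.download_as_bytes())
    for record in fastavro.reader(payload):
        # Each record has top-level metadata fields plus raw bytes in "data".
        print(record["message_id"], record["publish_time"], record["data"])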
    
