Restrict a credential's Cloud Storage permissions

This page explains how to use Credential Access Boundaries to downscope, or restrict, the Identity and Access Management (IAM) permissions that a short-lived credential can use.

You can use Credential Access Boundaries to generate OAuth 2.0 access tokens that represent a service account but have fewer permissions than the service account. For example, if one of your customers needs to access Cloud Storage data that you control, you can do the following:

  1. Create a service account that can access every Cloud Storage bucket that you own.
  2. Generate an OAuth 2.0 access token for the service account.
  3. Apply a Credential Access Boundary that only allows access to the bucket that contains your customer's data.

How Credential Access Boundaries work

To downscope permissions, you define a Credential Access Boundary that specifies which resources the short-lived credential can access, as well as an upper bound on the permissions that are available on each resource. You can then create a short-lived credential and exchange it for a new credential that respects the Credential Access Boundary.

If you need to give principals a distinct set of permissions for each session, using Credential Access Boundaries can be more efficient than creating many different service accounts and granting each service account a different set of roles.

Examples of Credential Access Boundaries

The following sections show examples of Credential Access Boundaries for common use cases. You use the Credential Access Boundary when you exchange an OAuth 2.0 access token for a downscoped token.

Limit permissions for a bucket

The following example shows a simple Credential Access Boundary. It applies to the Cloud Storage bucket example-bucket, and it sets the upper bound to the permissions included in the Storage Object Viewer role (roles/storage.objectViewer):

{
  "accessBoundary": {
    "accessBoundaryRules": [
      {
        "availablePermissions": [
          "inRole:roles/storage.objectViewer"
        ],
        "availableResource": "//storage.googleapis.com/projects/_/buckets/example-bucket"
      }
    ]
  }
}

Limit permissions for multiple buckets

The following example shows a Credential Access Boundary that includes rules for multiple buckets:

  • The Cloud Storage bucket example-bucket-1: For this bucket, only the permissions in the Storage Object Viewer role (roles/storage.objectViewer) are available.
  • The Cloud Storage bucket example-bucket-2: For this bucket, only the permissions in the Storage Object Creator role (roles/storage.objectCreator) are available.

{
  "accessBoundary": {
    "accessBoundaryRules": [
      {
        "availablePermissions": [
          "inRole:roles/storage.objectViewer"
        ],
        "availableResource": "//storage.googleapis.com/projects/_/buckets/example-bucket-1"
      },
      {
        "availablePermissions": [
          "inRole:roles/storage.objectCreator"
        ],
        "availableResource": "//storage.googleapis.com/projects/_/buckets/example-bucket-2"
      }
    ]
  }
}

Limit permissions for specific objects

You can also use IAM Conditions to specify which Cloud Storage objects a principal can access. For example, you can add a condition that makes permissions available for objects whose name starts with customer-a:

{
  "accessBoundary": {
    "accessBoundaryRules": [
      {
        "availablePermissions": [
          "inRole:roles/storage.objectViewer"
        ],
        "availableResource": "//storage.googleapis.com/projects/_/buckets/example-bucket",
        "availabilityCondition": {
          "expression" : "resource.name.startsWith('projects/_/buckets/example-bucket/objects/customer-a')"
        }
      }
    ]
  }
}

Limit permissions when listing objects

When you list the objects in a Cloud Storage bucket, you are calling a method on a bucket resource, not an object resource. As a result, if a condition is evaluated for a list request, and the condition refers to the resource name, then the resource name identifies the bucket, not an object within the bucket. For example, when you list objects in example-bucket, the resource name is projects/_/buckets/example-bucket.

This naming convention can lead to unexpected behavior when you list objects. For example, suppose you want a Credential Access Boundary that allows view access to objects in example-bucket with the prefix customer-a/invoices/. You might try to use the following condition in the Credential Access Boundary:

Incomplete: Condition that checks only the resource name

resource.name.startsWith('projects/_/buckets/example-bucket/objects/customer-a/invoices/')

This condition works for reading objects, but not for listing objects:

  • When a principal tries to read an object in example-bucket with the prefix customer-a/invoices/, the condition evaluates to true.
  • When a principal tries to list objects with that prefix, the condition evaluates to false. The value of resource.name is projects/_/buckets/example-bucket, which does not start with projects/_/buckets/example-bucket/objects/customer-a/invoices/.
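A minimal way to see the difference is with plain string matching, which mirrors how the startsWith() check behaves for each request type (the object name march.pdf is an arbitrary placeholder):

```python
# The resource name a condition sees depends on the request type.
prefix = "projects/_/buckets/example-bucket/objects/customer-a/invoices/"

# Reading an object: the resource name identifies the object itself.
read_resource = "projects/_/buckets/example-bucket/objects/customer-a/invoices/march.pdf"
print(read_resource.startswith(prefix))  # True: the condition allows the read

# Listing objects: the resource name identifies the bucket, not an object.
list_resource = "projects/_/buckets/example-bucket"
print(list_resource.startswith(prefix))  # False: the condition denies the list
```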

To prevent this issue, in addition to using resource.name.startsWith(), your condition can check an API attribute named storage.googleapis.com/objectListPrefix. This attribute contains the value of the prefix parameter that was used to filter the list of objects. As a result, you can write a condition that refers to the value of the prefix parameter.

The following example shows how to use the API attribute in a condition. It allows reading and listing objects in example-bucket with the prefix customer-a/invoices/:

Complete: Condition that checks the resource name and the prefix

resource.name.startsWith('projects/_/buckets/example-bucket/objects/customer-a/invoices/') ||
    api.getAttribute('storage.googleapis.com/objectListPrefix', '')
                     .startsWith('customer-a/invoices/')

You can now use this condition in a Credential Access Boundary:

{
  "accessBoundary": {
    "accessBoundaryRules": [
      {
        "availablePermissions": [
          "inRole:roles/storage.objectViewer"
        ],
        "availableResource": "//storage.googleapis.com/projects/_/buckets/example-bucket",
        "availabilityCondition": {
          "expression":
            "resource.name.startsWith('projects/_/buckets/example-bucket/objects/customer-a/invoices/') || api.getAttribute('storage.googleapis.com/objectListPrefix', '').startsWith('customer-a/invoices/')"
        }
      }
    ]
  }
}

Before you begin

Before you use Credential Access Boundaries, make sure you meet the following requirements:

  • You need to downscope permissions only for Cloud Storage, not for other Google Cloud services.

    If you need to downscope permissions for additional Google Cloud services, you can create multiple service accounts and grant a different set of roles to each service account.

  • You can use OAuth 2.0 access tokens for authentication. Other types of short-lived credentials do not support Credential Access Boundaries.

Also, you must enable the required APIs:

  • Enable the IAM and Security Token Service APIs.

Create a downscoped short-lived credential

To create an OAuth 2.0 access token with downscoped permissions, follow these steps:

  1. Grant the appropriate IAM roles to a user or service account.
  2. Define a Credential Access Boundary that sets an upper bound on the permissions that are available to the user or service account.
  3. Create an OAuth 2.0 access token for the user or service account.
  4. Exchange the OAuth 2.0 access token for a new token that respects the Credential Access Boundary.

You can then use the new, downscoped OAuth 2.0 access token to authenticate requests to Cloud Storage.

Grant IAM roles

A Credential Access Boundary sets an upper bound on the available permissions for a resource. It can subtract permissions from a principal, but it cannot add permissions that the principal does not already have.

As a result, you must also grant roles to the principal that provide the permissions they need, either on a Cloud Storage bucket or on a higher-level resource, such as the project.

For example, suppose you need to create a downscoped short-lived credential that allows a service account to create objects in a bucket:

  • At a minimum, you must grant a role to the service account that includes the storage.objects.create permission, such as the Storage Object Creator role (roles/storage.objectCreator). The Credential Access Boundary must also include this permission.
  • You can also grant a role that includes more permissions, such as the Storage Object Admin role (roles/storage.objectAdmin). The service account can use only the permissions that appear in both the role grant and the Credential Access Boundary.
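The rule above can be pictured as a set intersection. A schematic sketch, with hand-picked permission subsets standing in for the roles (the permission names are real Cloud Storage permissions, but this is an illustration, not how IAM evaluates access):

```python
# Illustrative only: effective permissions are bounded by both the role grant
# and the Credential Access Boundary.
granted = {  # e.g. permissions from roles/storage.objectAdmin (subset shown)
    "storage.objects.create",
    "storage.objects.get",
    "storage.objects.delete",
    "storage.objects.list",
}
boundary = {  # e.g. permissions allowed by the boundary (subset shown)
    "storage.objects.create",
    "storage.objects.list",
}

# The credential can use only permissions in both sets.
effective = granted & boundary
print(sorted(effective))  # ['storage.objects.create', 'storage.objects.list']
```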

To learn about predefined roles for Cloud Storage, see Cloud Storage roles.

Components of a Credential Access Boundary

A Credential Access Boundary is an object that contains a list of access boundary rules. Each rule contains the following information:

  • The resource that the rule applies to.
  • The upper bound of the permissions that are available on that resource.
  • Optional: A condition that further restricts permissions. A condition includes the following:
    • A condition expression that evaluates to true or false. If it evaluates to true, access is allowed; otherwise, access is denied.
    • Optional: A title that identifies the condition.
    • Optional: A description with more information about the condition.

If you apply a Credential Access Boundary to a short-lived credential, then the credential can access only the resources in the Credential Access Boundary. No permissions are available on other resources.

A Credential Access Boundary can contain up to 10 access boundary rules. You can apply only one Credential Access Boundary to each short-lived credential.

When represented as a JSON object, a Credential Access Boundary contains the following fields:

Fields
accessBoundary

object

A wrapper for the Credential Access Boundary.

accessBoundary.accessBoundaryRules[]

object

A list of access boundary rules to apply to a short-lived credential.

accessBoundary.accessBoundaryRules[].availablePermissions[]

string

A list that defines the upper bound on the available permissions for the resource.

Each value is the identifier for an IAM predefined role or custom role, with the prefix inRole:. For example: inRole:roles/storage.objectViewer. Only the permissions in these roles will be available.

accessBoundary.accessBoundaryRules[].availableResource

string

The full resource name of the Cloud Storage bucket that the rule applies to. Use the format //storage.googleapis.com/projects/_/buckets/bucket-name.

accessBoundary.accessBoundaryRules[].availabilityCondition

object

Optional. A condition that restricts the availability of permissions to specific Cloud Storage objects.

Use this field if you want to make permissions available for specific objects, rather than all objects in a Cloud Storage bucket.

accessBoundary.accessBoundaryRules[].availabilityCondition.expression

string

A condition expression that specifies the Cloud Storage objects where permissions are available.

To learn how to refer to specific objects in a condition expression, see resource.name attribute.

accessBoundary.accessBoundaryRules[].availabilityCondition.title

string

Optional. A short string that identifies the purpose of the condition.

accessBoundary.accessBoundaryRules[].availabilityCondition.description

string

Optional. Details about the purpose of the condition.

For examples in JSON format, see Examples of Credential Access Boundaries on this page.
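Putting the fields together, a boundary that sets every documented field could be assembled programmatically as follows (the title and description values are illustrative placeholders):

```python
import json

# One access boundary rule with all documented fields set.
rule = {
    "availablePermissions": ["inRole:roles/storage.objectViewer"],
    "availableResource": "//storage.googleapis.com/projects/_/buckets/example-bucket",
    "availabilityCondition": {
        "expression": (
            "resource.name.startsWith("
            "'projects/_/buckets/example-bucket/objects/customer-a')"
        ),
        # The title and description fields are optional.
        "title": "customer-a objects only",
        "description": "Limits access to objects whose name starts with customer-a.",
    },
}

credential_access_boundary = {"accessBoundary": {"accessBoundaryRules": [rule]}}

# A Credential Access Boundary can contain at most 10 rules.
assert len(credential_access_boundary["accessBoundary"]["accessBoundaryRules"]) <= 10
print(json.dumps(credential_access_boundary, indent=2))
```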

Create an OAuth 2.0 access token

Before you create a downscoped short-lived credential, you must create a normal OAuth 2.0 access token. You can then exchange the normal credential for a downscoped credential. When you create the access token, use the OAuth 2.0 scope https://www.googleapis.com/auth/cloud-platform.

To create an access token for a service account, you can complete the server-to-server OAuth 2.0 flow, or you can use the Service Account Credentials API to generate an OAuth 2.0 access token.

To create an access token for a user, see Obtaining OAuth 2.0 access tokens. You can also use the OAuth 2.0 Playground to create an access token for your own Google Account.
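For instance, with the Service Account Credentials API, a broker calls the generateAccessToken method. This sketch only assembles the request; the service account email is a placeholder, and in practice you would send the request with an authenticated client:

```python
import json

SA_EMAIL = "example-sa@example-project.iam.gserviceaccount.com"  # placeholder

# generateAccessToken endpoint of the Service Account Credentials API.
url = (
    "https://iamcredentials.googleapis.com/v1/"
    f"projects/-/serviceAccounts/{SA_EMAIL}:generateAccessToken"
)
body = {
    # The cloud-platform scope is required for later downscoping.
    "scope": ["https://www.googleapis.com/auth/cloud-platform"],
    "lifetime": "3600s",  # optional; defaults to 1 hour
}
print(url)
print(json.dumps(body))
```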

Exchange the OAuth 2.0 access token

After you create an OAuth 2.0 access token, you can exchange the access token for a downscoped token that respects the Credential Access Boundary. This process typically involves a token broker and a token consumer:

  • The token broker is responsible for defining the Credential Access Boundary and exchanging an access token for a downscoped token.

    The token broker can use a supported authentication library to exchange access tokens automatically, or it can call the Security Token Service to exchange tokens manually.

  • The token consumer requests a downscoped access token from the token broker, then uses the downscoped access token to perform another action.

    The token consumer can use a supported authentication library to automatically refresh access tokens before they expire. Alternatively, it can refresh tokens manually, or it can allow tokens to expire without refreshing them.

Exchange and refresh the access token automatically

If you create the token broker and token consumer with one of the following languages, you can use Google's authentication library to exchange and refresh tokens automatically:

Go

For Go, you can exchange and refresh tokens automatically with version v0.0.0-20210819190943-2bc19b11175f or later of the golang.org/x/oauth2 package.

To check which version of this package you are using, run the following command in your application directory:

go list -m golang.org/x/oauth2

The following example shows how a token broker can generate downscoped tokens:


import (
	"context"
	"fmt"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	"golang.org/x/oauth2/google/downscope"
)

// createDownscopedToken would be run on the token broker in order to generate
// a downscoped access token that only grants access to objects whose name begins with prefix.
// The token broker would then pass the newly created token to the requesting token consumer for use.
func createDownscopedToken(bucketName string, prefix string) error {
	// bucketName := "foo"
	// prefix := "profile-picture-"

	ctx := context.Background()
	// A condition can optionally be provided to further restrict access permissions.
	condition := downscope.AvailabilityCondition{
		Expression:  "resource.name.startsWith('projects/_/buckets/" + bucketName + "/objects/" + prefix + "')",
		Title:       prefix + " Only",
		Description: "Restricts a token to only be able to access objects that start with `" + prefix + "`",
	}
	// Initializes an accessBoundary with one Rule which restricts the downscoped
	// token to only be able to access the bucket "bucketName" and only grants it the
	// permission "storage.objectViewer".
	accessBoundary := []downscope.AccessBoundaryRule{
		{
			AvailableResource:    "//storage.googleapis.com/projects/_/buckets/" + bucketName,
			AvailablePermissions: []string{"inRole:roles/storage.objectViewer"},
			Condition:            &condition, // Optional
		},
	}

	// This Source can be initialized in multiple ways; the following example uses
	// Application Default Credentials.
	var rootSource oauth2.TokenSource

	// You must provide the "https://www.googleapis.com/auth/cloud-platform" scope.
	rootSource, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		return fmt.Errorf("failed to generate rootSource: %w", err)
	}

	// downscope.NewTokenSource constructs the token source with the configuration provided.
	dts, err := downscope.NewTokenSource(ctx, downscope.DownscopingConfig{RootSource: rootSource, Rules: accessBoundary})
	if err != nil {
		return fmt.Errorf("failed to generate downscoped token source: %w", err)
	}
	// Token() uses the previously declared TokenSource to generate a downscoped token.
	tok, err := dts.Token()
	if err != nil {
		return fmt.Errorf("failed to generate token: %w", err)
	}
	// Pass this token back to the token consumer.
	_ = tok
	return nil
}

The following example shows how a token consumer can use a refresh handler to automatically obtain and refresh downscoped tokens:


import (
	"context"
	"fmt"
	"io"
	"io/ioutil"

	"golang.org/x/oauth2/google"
	"golang.org/x/oauth2/google/downscope"

	"cloud.google.com/go/storage"
	"golang.org/x/oauth2"
	"google.golang.org/api/option"
)

// A token consumer should define their own tokenSource. In the Token() method,
// it should send a query to a token broker requesting a downscoped token.
// The token broker holds the root credential that is used to generate the
// downscoped token.
type localTokenSource struct {
	ctx        context.Context
	bucketName string
	brokerURL  string
}

func (lts localTokenSource) Token() (*oauth2.Token, error) {
	var remoteToken *oauth2.Token
	// Usually you would now retrieve remoteToken, an oauth2.Token, from the token broker.
	// This snippet performs the same functionality locally.
	accessBoundary := []downscope.AccessBoundaryRule{
		{
			AvailableResource:    "//storage.googleapis.com/projects/_/buckets/" + lts.bucketName,
			AvailablePermissions: []string{"inRole:roles/storage.objectViewer"},
		},
	}
	rootSource, err := google.DefaultTokenSource(lts.ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		return nil, fmt.Errorf("failed to generate rootSource: %w", err)
	}
	dts, err := downscope.NewTokenSource(lts.ctx, downscope.DownscopingConfig{RootSource: rootSource, Rules: accessBoundary})
	if err != nil {
		return nil, fmt.Errorf("failed to generate downscoped token source: %w", err)
	}
	// Token() uses the previously declared TokenSource to generate a downscoped token.
	remoteToken, err = dts.Token()
	if err != nil {
		return nil, fmt.Errorf("failed to generate token: %w", err)
	}

	return remoteToken, nil
}

// getObjectContents reads the contents of the object named objectName in the
// Cloud Storage bucket bucketName.
func getObjectContents(output io.Writer, bucketName string, objectName string) error {
	// bucketName := "foo"
	// objectName := "profile-picture-12345"

	ctx := context.Background()

	thisTokenSource := localTokenSource{
		ctx:        ctx,
		bucketName: bucketName,
		brokerURL:  "yourURL.com/internal/broker",
	}

	// Wrap the TokenSource in an oauth2.ReuseTokenSource to enable automatic refreshing.
	refreshableTS := oauth2.ReuseTokenSource(nil, thisTokenSource)
	// You can now use the token source to access Google Cloud Storage resources as follows.
	storageClient, err := storage.NewClient(ctx, option.WithTokenSource(refreshableTS))
	if err != nil {
		return fmt.Errorf("failed to create the storage client: %w", err)
	}
	defer storageClient.Close()
	bkt := storageClient.Bucket(bucketName)
	obj := bkt.Object(objectName)
	rc, err := obj.NewReader(ctx)
	if err != nil {
		return fmt.Errorf("failed to retrieve the object: %w", err)
	}
	defer rc.Close()
	data, err := ioutil.ReadAll(rc)
	if err != nil {
		return fmt.Errorf("could not read the object's contents: %w", err)
	}
	// Data now contains the contents of the requested object.
	output.Write(data)
	return nil
}

Java

For Java, you can exchange and refresh tokens automatically with version 1.1.0 or later of the com.google.auth:google-auth-library-oauth2-http artifact.

To check which version of this artifact you are using, run the following Maven command in your application directory:

mvn dependency:list -DincludeArtifactIds=google-auth-library-oauth2-http

The following example shows how a token broker can generate downscoped tokens:

public static AccessToken getTokenFromBroker(String bucketName, String objectPrefix)
    throws IOException {
  // Retrieve the source credentials from ADC.
  GoogleCredentials sourceCredentials =
      GoogleCredentials.getApplicationDefault()
          .createScoped("https://www.googleapis.com/auth/cloud-platform");

  // Initialize the Credential Access Boundary rules.
  String availableResource = "//storage.googleapis.com/projects/_/buckets/" + bucketName;

  // Downscoped credentials will have readonly access to the resource.
  String availablePermission = "inRole:roles/storage.objectViewer";

  // Only objects starting with the specified prefix string in the object name will be allowed
  // read access.
  String expression =
      "resource.name.startsWith('projects/_/buckets/"
          + bucketName
          + "/objects/"
          + objectPrefix
          + "')";

  // Build the AvailabilityCondition.
  CredentialAccessBoundary.AccessBoundaryRule.AvailabilityCondition availabilityCondition =
      CredentialAccessBoundary.AccessBoundaryRule.AvailabilityCondition.newBuilder()
          .setExpression(expression)
          .build();

  // Define the single access boundary rule using the above properties.
  CredentialAccessBoundary.AccessBoundaryRule rule =
      CredentialAccessBoundary.AccessBoundaryRule.newBuilder()
          .setAvailableResource(availableResource)
          .addAvailablePermission(availablePermission)
          .setAvailabilityCondition(availabilityCondition)
          .build();

  // Define the Credential Access Boundary with all the relevant rules.
  CredentialAccessBoundary credentialAccessBoundary =
      CredentialAccessBoundary.newBuilder().addRule(rule).build();

  // Create the downscoped credentials.
  DownscopedCredentials downscopedCredentials =
      DownscopedCredentials.newBuilder()
          .setSourceCredential(sourceCredentials)
          .setCredentialAccessBoundary(credentialAccessBoundary)
          .build();

  // Retrieve the token.
  // This will need to be passed to the Token Consumer.
  AccessToken accessToken = downscopedCredentials.refreshAccessToken();
  return accessToken;
}

The following example shows how a token consumer can use a refresh handler to automatically obtain and refresh downscoped tokens:

public static void tokenConsumer(final String bucketName, final String objectName)
    throws IOException {
  // You can pass an `OAuth2RefreshHandler` to `OAuth2CredentialsWithRefresh` which will allow the
  // library to seamlessly handle downscoped token refreshes on expiration.
  OAuth2CredentialsWithRefresh.OAuth2RefreshHandler handler =
      new OAuth2CredentialsWithRefresh.OAuth2RefreshHandler() {
        @Override
        public AccessToken refreshAccessToken() throws IOException {
          // The common pattern of usage is to have a token broker pass the downscoped short-lived
          // access tokens to a token consumer via some secure authenticated channel.
          // For illustration purposes, we are generating the downscoped token locally.
          // We want to test the ability to limit access to objects with a certain prefix string
          // in the resource bucket. objectName.substring(0, 3) is the prefix here. This field is
          // not required if access to all bucket resources is allowed. If access to limited
          // resources in the bucket is needed, this mechanism can be used.
          return getTokenFromBroker(bucketName, objectName.substring(0, 3));
        }
      };

  // Downscoped token retrieved from token broker.
  AccessToken downscopedToken = handler.refreshAccessToken();

  // Create the OAuth2CredentialsWithRefresh from the downscoped token and pass a refresh handler
  // which will handle token expiration.
  // This will allow the consumer to seamlessly obtain new downscoped tokens on demand every time
  // token expires.
  OAuth2CredentialsWithRefresh credentials =
      OAuth2CredentialsWithRefresh.newBuilder()
          .setAccessToken(downscopedToken)
          .setRefreshHandler(handler)
          .build();

  // Use the credentials with the Cloud Storage SDK.
  StorageOptions options = StorageOptions.newBuilder().setCredentials(credentials).build();
  Storage storage = options.getService();

  // Call Cloud Storage APIs.
  Blob blob = storage.get(bucketName, objectName);
  String content = new String(blob.getContent());
  System.out.println(
      "Retrieved object, "
          + objectName
          + ", from bucket, "
          + bucketName
          + ", with content: "
          + content);
}

Node.js

For Node.js, you can exchange and refresh tokens automatically with version 7.9.0 or later of the google-auth-library package.

To check which version of this package you are using, run the following command in your application directory:

npm list google-auth-library

The following example shows how a token broker can generate downscoped tokens:

// Imports the Google Auth libraries.
const {GoogleAuth, DownscopedClient} = require('google-auth-library');
/**
 * Simulates token broker generating downscoped tokens for specified bucket.
 *
 * @param bucketName The name of the Cloud Storage bucket.
 * @param objectPrefix The prefix string of the object name. This is used
 *        to ensure access is restricted to only objects starting with this
 *        prefix string.
 */
async function getTokenFromBroker(bucketName, objectPrefix) {
  const googleAuth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform',
  });

  // Define the Credential Access Boundary object.
  const cab = {
    // Define the access boundary.
    accessBoundary: {
      // Define the single access boundary rule.
      accessBoundaryRules: [
        {
          availableResource: `//storage.googleapis.com/projects/_/buckets/${bucketName}`,
          // Downscoped credentials will have readonly access to the resource.
          availablePermissions: ['inRole:roles/storage.objectViewer'],
          // Only objects starting with the specified prefix string in the object name
          // will be allowed read access.
          availabilityCondition: {
            expression:
              "resource.name.startsWith('projects/_/buckets/" +
              `${bucketName}/objects/${objectPrefix}')`,
          },
        },
      ],
    },
  };

  // Obtain an authenticated client via ADC.
  const client = await googleAuth.getClient();

  // Use the client to create a DownscopedClient.
  const cabClient = new DownscopedClient(client, cab);

  // Refresh the tokens.
  const refreshedAccessToken = await cabClient.getAccessToken();

  // This will need to be passed to the token consumer.
  return refreshedAccessToken;
}

The following example shows how a token consumer can provide a refresh handler that automatically obtains and refreshes downscoped tokens:

// Imports the Google Auth and Google Cloud libraries.
const {OAuth2Client} = require('google-auth-library');
const {Storage} = require('@google-cloud/storage');
/**
 * Simulates a token consumer calling Cloud Storage APIs using generated
 * downscoped tokens for the specified bucket.
 *
 * @param bucketName The name of the Cloud Storage bucket.
 * @param objectName The name of the object in the Cloud Storage bucket
 *        to read.
 */
async function tokenConsumer(bucketName, objectName) {
  // Create the OAuth credentials (the consumer).
  const oauth2Client = new OAuth2Client();
  // We are defining a refresh handler instead of a one-time access
  // token/expiry pair.
  // This will allow the consumer to obtain new downscoped tokens on
  // demand every time a token is expired, without any additional code
  // changes.
  oauth2Client.refreshHandler = async () => {
    // The common pattern of usage is to have a token broker pass the
    // downscoped short-lived access tokens to a token consumer via some
    // secure authenticated channel. For illustration purposes, we are
    // generating the downscoped token locally. We want to test the ability
    // to limit access to objects with a certain prefix string in the
    // resource bucket. objectName.substring(0, 3) is the prefix here. This
    // field is not required if access to all bucket resources is allowed.
    // If access to limited resources in the bucket is needed, this mechanism
    // can be used.
    const refreshedAccessToken = await getTokenFromBroker(
      bucketName,
      objectName.substring(0, 3)
    );
    return {
      access_token: refreshedAccessToken.token,
      expiry_date: refreshedAccessToken.expirationTime,
    };
  };

  const storageOptions = {
    projectId: process.env.GOOGLE_CLOUD_PROJECT,
    authClient: oauth2Client,
  };

  const storage = new Storage(storageOptions);
  const downloadFile = await storage
    .bucket(bucketName)
    .file(objectName)
    .download();
  console.log(downloadFile.toString('utf8'));
}

Python

For Python, you can exchange and refresh tokens automatically with version 2.0.0 or later of the google-auth package.

To check which version of this package you are using, run the following command in the environment where the package is installed:

pip show google-auth

The following example shows how a token broker can generate downscoped tokens:

import google.auth

from google.auth import downscoped
from google.auth.transport import requests

def get_token_from_broker(bucket_name, object_prefix):
    """Simulates token broker generating downscoped tokens for specified bucket.

    Args:
        bucket_name (str): The name of the Cloud Storage bucket.
        object_prefix (str): The prefix string of the object name. This is used
            to ensure access is restricted to only objects starting with this
            prefix string.

    Returns:
        Tuple[str, datetime.datetime]: The downscoped access token and its expiry date.
    """
    # Initialize the Credential Access Boundary rules.
    available_resource = f"//storage.googleapis.com/projects/_/buckets/{bucket_name}"
    # Downscoped credentials will have readonly access to the resource.
    available_permissions = ["inRole:roles/storage.objectViewer"]
    # Only objects starting with the specified prefix string in the object name
    # will be allowed read access.
    availability_expression = (
        "resource.name.startsWith('projects/_/buckets/{}/objects/{}')".format(
            bucket_name, object_prefix
        )
    )
    availability_condition = downscoped.AvailabilityCondition(availability_expression)
    # Define the single access boundary rule using the above properties.
    rule = downscoped.AccessBoundaryRule(
        available_resource=available_resource,
        available_permissions=available_permissions,
        availability_condition=availability_condition,
    )
    # Define the Credential Access Boundary with all the relevant rules.
    credential_access_boundary = downscoped.CredentialAccessBoundary(rules=[rule])

    # Retrieve the source credentials via ADC.
    source_credentials, _ = google.auth.default()
    if source_credentials.requires_scopes:
        source_credentials = source_credentials.with_scopes(
            ["https://www.googleapis.com/auth/cloud-platform"]
        )

    # Create the downscoped credentials.
    downscoped_credentials = downscoped.Credentials(
        source_credentials=source_credentials,
        credential_access_boundary=credential_access_boundary,
    )

    # Refresh the tokens.
    downscoped_credentials.refresh(requests.Request())

    # These values will need to be passed to the token consumer.
    access_token = downscoped_credentials.token
    expiry = downscoped_credentials.expiry
    return (access_token, expiry)

The following example shows how a token consumer can provide a refresh handler that automatically obtains and refreshes downscoped tokens:

from google.cloud import storage
from google.oauth2 import credentials

def token_consumer(bucket_name, object_name):
    """Tests token consumer readonly access to the specified object.

    Args:
        bucket_name (str): The name of the Cloud Storage bucket.
        object_name (str): The name of the object in the Cloud Storage bucket
            to read.
    """

    # Create the OAuth credentials from the downscoped token, and pass a
    # refresh handler to deal with token expiration. Passing a
    # refresh_handler instead of a one-time access token/expiry pair
    # lets the consumer obtain new downscoped tokens on demand whenever
    # a token expires, without any additional code changes.
    def refresh_handler(request, scopes=None):
        # The common pattern of usage is to have a token broker pass the
        # downscoped short-lived access tokens to a token consumer via some
        # secure authenticated channel.
        # For illustration purposes, we are generating the downscoped token
        # locally.
        # Here we test the ability to limit access to objects whose names
        # start with a certain prefix; object_name[0:3] is the prefix. This
        # field is not required if access to all bucket resources is
        # allowed; use it only when access must be limited to a subset of
        # objects in the bucket.
        return get_token_from_broker(bucket_name, object_prefix=object_name[0:3])

    creds = credentials.Credentials(
        None,
        scopes=["https://www.googleapis.com/auth/cloud-platform"],
        refresh_handler=refresh_handler,
    )

    # Initialize a Cloud Storage client with the oauth2 credentials.
    storage_client = storage.Client(credentials=creds)
    # The downscoped token from the broker grants read-only access to the
    # specified object.
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(object_name)
    print(blob.download_as_bytes().decode("utf-8"))

Exchange and refresh the access token manually

A token broker can use the Security Token Service API to exchange an access token for a downscoped access token. It can then provide the downscoped token to a token consumer.

To exchange the access token, use the following HTTP method and URL:

POST https://sts.googleapis.com/v1/token

Set the Content-Type header in the request to application/x-www-form-urlencoded. Include the following fields in the request body:

Fields

grant_type (string)
    Use the value urn:ietf:params:oauth:grant-type:token-exchange.

options (string)
    A JSON-format Credential Access Boundary, encoded with percent encoding.

requested_token_type (string)
    Use the value urn:ietf:params:oauth:token-type:access_token.

subject_token (string)
    The OAuth 2.0 access token that you want to exchange.

subject_token_type (string)
    Use the value urn:ietf:params:oauth:token-type:access_token.

The response is a JSON object that contains the following fields:

Fields

access_token (string)
    A downscoped OAuth 2.0 access token that respects the Credential Access Boundary.

expires_in (number)
    The amount of time until the downscoped token expires, in seconds. This field is present only if the original access token represents a service account. When this field is absent, the downscoped token expires at the same time as the original access token.

issued_token_type (string)
    Contains the value urn:ietf:params:oauth:token-type:access_token.

token_type (string)
    Contains the value Bearer.

For example, if a JSON-format Credential Access Boundary is stored in the file ./access-boundary.json, you can use the following curl command to exchange the access token. Replace original-token with the original access token:

curl -H "Content-Type:application/x-www-form-urlencoded" \
    -X POST \
    https://sts.googleapis.com/v1/token \
    -d "grant_type=urn:ietf:params:oauth:grant-type:token-exchange&subject_token_type=urn:ietf:params:oauth:token-type:access_token&requested_token_type=urn:ietf:params:oauth:token-type:access_token&subject_token=original-token" \
    --data-urlencode "options=$(cat ./access-boundary.json)"

The response is similar to the following example:

{
  "access_token": "ya29.dr.AbCDeFg-123456...",
  "issued_token_type": "urn:ietf:params:oauth:token-type:access_token",
  "token_type": "Bearer",
  "expires_in": 3600
}
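The same request body can also be built programmatically. The following is a minimal sketch using only the Python standard library; the helper name build_exchange_request and the sample boundary are illustrative, not part of an official client library:

```python
import json
import urllib.parse

STS_URL = "https://sts.googleapis.com/v1/token"

def build_exchange_request(subject_token, access_boundary):
    """Return the form-encoded body for the STS token-exchange POST."""
    return urllib.parse.urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "subject_token": subject_token,
        # urlencode percent-encodes the JSON-format Credential Access Boundary.
        "options": json.dumps(access_boundary),
    })

boundary = {
    "accessBoundary": {
        "accessBoundaryRules": [
            {
                "availablePermissions": ["inRole:roles/storage.objectViewer"],
                "availableResource": "//storage.googleapis.com/projects/_/buckets/example-bucket",
            }
        ]
    }
}
body = build_exchange_request("original-token", boundary)
```

The resulting string is POSTed to STS_URL with the Content-Type header set to application/x-www-form-urlencoded, exactly as in the curl command above.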

When a token consumer requests a downscoped token, the token broker should respond with both the downscoped token and the number of seconds until it expires. To refresh the downscoped token, the consumer can request a downscoped token from the broker before the existing token expires.
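The consumer-side refresh logic described above can be sketched as follows. This is an illustrative pattern, not an official client-library class; fetch_from_broker stands in for whatever secure, authenticated channel the consumer uses to reach the broker, and it is assumed to return a (token, expires_in) pair:

```python
import time

class DownscopedTokenCache:
    """Caches a downscoped token and refreshes it shortly before expiry."""

    def __init__(self, fetch_from_broker, refresh_margin=60):
        self._fetch = fetch_from_broker
        self._margin = refresh_margin  # seconds before expiry to refresh
        self._token = None
        self._expires_at = 0.0

    def token(self):
        # Request a new token if none is cached or the cached one is
        # within the refresh margin of expiring.
        if self._token is None or time.time() >= self._expires_at - self._margin:
            token, expires_in = self._fetch()
            self._token = token
            self._expires_at = time.time() + expires_in
        return self._token
```

Each call to token() returns the cached downscoped token until it nears expiry, at which point the broker is asked for a fresh one.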

What's next