Service perimeter details and configuration

This page describes service perimeters and includes the high-level steps for configuring perimeters.

About service perimeters

This section describes how service perimeters function and explains the differences between enforced and dry run perimeters.

Service perimeters are an Organization-level control for protecting Google Cloud services in your projects and mitigating the risk of data exfiltration. For a detailed look at the benefits of service perimeters, refer to the Overview of VPC Service Controls.

Additionally, you can use the VPC accessible services feature to restrict which services are accessible inside a perimeter, for example from VMs in a VPC network hosted inside the perimeter.

VPC Service Controls perimeters can be configured in two modes: enforced and dry run. In general, the same configuration steps apply to both. The critical difference is that dry run perimeters do not prevent access to protected services; they only log violations as though the perimeters were enforced. This guide explicitly calls out the differences between enforced and dry run service perimeters where necessary.

Enforced mode

Enforced mode is the default mode for service perimeters. When a service perimeter is enforced, requests that violate the perimeter policy, such as requests to protected services from outside a perimeter, are denied.

When a service is protected by an enforced perimeter:

  • That service cannot transmit data out of the perimeter.

    Protected services function as normal inside the perimeter, but cannot take actions to send resources and data out of the perimeter. This helps prevent malicious insiders who may have access to projects in the perimeter from exfiltrating data.

  • Requests from outside the perimeter to the protected service are honored only if the requests meet the criteria of access levels assigned to the perimeter.

  • That service can be made accessible to projects in other perimeters using perimeter bridges.
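
For example, a minimal sketch of creating an enforced perimeter with the gcloud command-line tool might look like the following. The perimeter name, project number, restricted services, and access policy ID are illustrative placeholders, not values from this guide:

    # Create an enforced perimeter (the default type) that protects the
    # Cloud Storage and BigQuery APIs for one project.
    # Placeholder values: example_enforced_perimeter, projects/111111111111,
    # and the access policy ID 123456789.
    gcloud access-context-manager perimeters create example_enforced_perimeter \
        --title="Example enforced perimeter" \
        --resources=projects/111111111111 \
        --restricted-services=storage.googleapis.com,bigquery.googleapis.com \
        --policy=123456789

Once the perimeter is enforced, calls to the restricted services for that project are denied unless they originate inside the perimeter or satisfy an access level assigned to the perimeter.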

Dry run mode

In dry run mode, requests that violate the perimeter policy are not denied, only logged. Dry run service perimeters are used to test perimeter configuration and to monitor usage of services without preventing access to resources. Common use cases include:

  • Determining the impact that changes to existing service perimeters will have.

  • Previewing the impact that new service perimeters will have.

  • Monitoring requests to protected services that originate from outside a service perimeter. For example, seeing where requests to a given service are coming from, or identifying unexpected service usage in your organization.

  • In your development environments, creating a perimeter architecture analogous to your production environment. This lets you identify and mitigate any issues that your service perimeters would cause before you push changes to your production environment.

For more information, see Dry run mode.
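
As a hedged sketch, the same kind of perimeter can be created in dry run mode with the gcloud dry-run command group; the perimeter name, project number, and policy ID below are placeholders:

    # Create a dry run perimeter: violations are logged, not blocked.
    # Placeholder values: example_dry_run_perimeter, projects/111111111111,
    # and the access policy ID 123456789.
    gcloud access-context-manager perimeters dry-run create example_dry_run_perimeter \
        --perimeter-title="Example dry run perimeter" \
        --perimeter-type=regular \
        --perimeter-resources=projects/111111111111 \
        --perimeter-restricted-services=storage.googleapis.com \
        --policy=123456789

    # Inspect the dry run configuration.
    gcloud access-context-manager perimeters dry-run describe example_dry_run_perimeter \
        --policy=123456789

    # After reviewing the logged violations, promote the dry run
    # configuration to an enforced configuration.
    gcloud access-context-manager perimeters dry-run enforce example_dry_run_perimeter \
        --policy=123456789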

Service perimeter configuration stages

VPC Service Controls can be configured using the Google Cloud Console, the gcloud command-line tool, or the Access Context Manager APIs.

To configure VPC Service Controls:

  1. If you want to use the gcloud command-line tool or the Access Context Manager APIs to create your service perimeters, create an access policy.

  2. Secure GCP resources with service perimeters.

  3. Set up VPC accessible services to further restrict how services can be used inside your perimeters (optional).

  4. Set up private connectivity from a VPC network (optional).

  5. Grant access from outside a service perimeter using access levels (optional).

  6. Set up resource sharing between perimeters using service perimeter bridges (optional).

Create an access policy

An access policy collects the service perimeters and access levels that you create for your Organization. An Organization can have only one access policy.

When service perimeters are created and managed using the VPC Service Controls page of the Cloud Console, you do not need to create an access policy.

However, when using the gcloud command-line tool or the Access Context Manager APIs to create and configure your service perimeters, you must first create an access policy.
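
For example, a minimal sketch of creating an access policy with the gcloud command-line tool follows; the organization ID and policy title are placeholders:

    # Create the Organization's access policy (an Organization can have only one).
    # Placeholder values: the organization ID 123456789012 and the title.
    gcloud access-context-manager policies create \
        --organization=123456789012 \
        --title="Example access policy"

    # List policies to find the numeric policy ID that later commands reference.
    gcloud access-context-manager policies list --organization=123456789012

To avoid passing --policy to every subsequent command, you can set the policy ID as a default with gcloud config set access_context_manager/policy POLICY_ID.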

To learn more about Access Context Manager and access policies, read the overview of Access Context Manager.

Secure GCP resources with service perimeters

Service perimeters protect services used by projects in your Organization. After identifying the projects and services that you want to protect, create one or more service perimeters.
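
As an illustrative sketch (the perimeter name, project number, service, and policy ID are the same placeholders used earlier), you can also add projects and services to an existing perimeter instead of recreating it:

    # Add a project and an additional restricted service to an existing perimeter.
    gcloud access-context-manager perimeters update example_enforced_perimeter \
        --add-resources=projects/222222222222 \
        --add-restricted-services=bigtable.googleapis.com \
        --policy=123456789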

To learn more about how service perimeters work and what services VPC Service Controls can be used to secure, read the Overview of VPC Service Controls.

Some services have limitations with how they can be used with VPC Service Controls. If you encounter issues with your projects after setting up your service perimeters, read Troubleshooting.

Set up VPC accessible services

When you enable VPC accessible services for a perimeter, access from network endpoints inside your perimeter is limited to a set of services that you specify.
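
For example, the following sketch limits network endpoints inside a perimeter to calling only the Cloud Storage API; the perimeter name and policy ID are placeholders carried over from the earlier examples:

    # Enable VPC accessible services and allow only Cloud Storage to be
    # called from network endpoints inside the perimeter.
    gcloud access-context-manager perimeters update example_enforced_perimeter \
        --enable-vpc-accessible-services \
        --add-vpc-allowed-services=storage.googleapis.com \
        --policy=123456789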

To learn more about how to limit access inside your perimeter to only a specific set of services, read about VPC accessible services.

Set up private connectivity from a VPC network

To provide additional security for VPC networks that are protected by a service perimeter, we recommend using Private Google Access. This recommendation includes private connectivity from on-premises networks.

To learn about configuring private connectivity, read Setting up private connectivity to Google APIs and services.
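
As one hedged sketch of that setup, the restricted.googleapis.com virtual IP range (199.36.153.4/30) serves only services that VPC Service Controls supports, and a private Cloud DNS zone can direct googleapis.com traffic to it. The zone and network names below are placeholders, and the full procedure (including routes and firewall rules) is described in the linked guide:

    # Create a private DNS zone for googleapis.com that is visible to the VPC network.
    # Placeholder values: the zone name restricted-apis and the network example-vpc.
    gcloud dns managed-zones create restricted-apis \
        --visibility=private \
        --networks=example-vpc \
        --dns-name=googleapis.com. \
        --description="Route Google API traffic to the restricted VIP"

    # Point restricted.googleapis.com at the restricted VIP addresses and
    # alias all other googleapis.com names to it.
    gcloud dns record-sets transaction start --zone=restricted-apis
    gcloud dns record-sets transaction add --zone=restricted-apis \
        --name=restricted.googleapis.com. --type=A --ttl=300 \
        199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7
    gcloud dns record-sets transaction add --zone=restricted-apis \
        --name="*.googleapis.com." --type=CNAME --ttl=300 \
        restricted.googleapis.com.
    gcloud dns record-sets transaction execute --zone=restricted-apis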

If you restrict access to Google Cloud resources to private access from VPC networks only, access through web interfaces such as the Cloud Console and the Cloud Monitoring console is denied. You can continue to use the gcloud command-line tool or API clients from VPC networks that share a service perimeter or perimeter bridge with the restricted resources.

Grant access from outside a service perimeter using access levels

Access levels can be used to allow requests from outside a service perimeter to resources protected by that perimeter.

Using access levels, you can specify public IPv4 and IPv6 CIDR blocks, and individual user and service accounts that you want to permit to access resources protected by VPC Service Controls.

If you are restricting resources to private connectivity from VPC networks, you can re-enable Cloud Console access to protected services by adding to an access level a CIDR block that includes the public IP address of the host where the Cloud Console is used. If you want to re-enable the Cloud Console for a specific user regardless of IP address, add that user account as a member of the access level.
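
As a sketch of both cases (the CIDR block, user account, level name, perimeter name, and policy ID are placeholders), first write a basic level spec file, for example conditions.yaml:

    - ipSubnetworks:
      - 203.0.113.0/24
    - members:
      - user:alice@example.com

Then create the access level from the spec and assign it to the perimeter:

    # Create the access level. The "or" combine function means a request
    # needs to satisfy only one of the conditions in conditions.yaml.
    gcloud access-context-manager levels create example_corp_access \
        --title="Example corp access" \
        --basic-level-spec=conditions.yaml \
        --combine-function=or \
        --policy=123456789

    # Assign the access level to the perimeter so that matching requests
    # from outside the perimeter are allowed.
    gcloud access-context-manager perimeters update example_enforced_perimeter \
        --add-access-levels=example_corp_access \
        --policy=123456789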

To learn about using access levels, read Creating an access level.

Sharing data across service perimeters

A project can be included in only one service perimeter. If you want to allow communication between two perimeters, create a service perimeter bridge.

Perimeter bridges can be used to enable communication between projects in different service perimeters. A project can belong to more than one perimeter bridge.
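
For instance, a bridge between projects that belong to two different perimeters can be sketched as follows; the bridge name, project numbers, and policy ID are placeholders:

    # Create a perimeter bridge so the two projects, each in its own service
    # perimeter, can share data for the services protected by their perimeters.
    gcloud access-context-manager perimeters create example_bridge \
        --title="Example perimeter bridge" \
        --perimeter-type=bridge \
        --resources=projects/111111111111,projects/222222222222 \
        --policy=123456789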

To learn more about perimeter bridges, read Sharing across perimeters with bridges.