Apigee integrates with VPC Service Controls, which lets you isolate the resources of your Google Cloud projects. This helps prevent data leaks and exfiltration.
This section describes how to use VPC Service Controls with Apigee.
VPC Service Controls defines a service perimeter that acts as a boundary between a project and other services. Service perimeters are an organization-level method to protect Google Cloud services in your projects in order to mitigate the risk of data exfiltration.
VPC Service Controls can also ensure that clients within a perimeter that have private access to resources do not have access to unauthorized resources outside the perimeter.
For a detailed look at the benefits of service perimeters, refer to the Overview of VPC Service Controls.
When using VPC Service Controls, note that:
- Both the Google Cloud project and its associated runtime are included within that project's VPC Service Controls perimeter.
- Interaction among services inside a perimeter can be restricted using the VPC network accessible services feature.
Both Apigee and Apigee hybrid integrate with VPC Service Controls. For a complete list of products that integrate with VPC Service Controls, see Supported products.
Setting up VPC Service Controls with Apigee
The general process for setting up VPC Service Controls with Apigee is as follows:
- Enable VPC Service Controls.
- Create a new service perimeter.
- Configure the service perimeter.
These steps are described in more detail below.
To set up VPC Service Controls with Apigee:
Enable VPC Service Controls for the peered connection from your network to Apigee by executing the following command:
```
gcloud services vpc-peerings enable-vpc-service-controls \
  --network=NETWORK_NAME --project=PROJECT_ID
```

- NETWORK_NAME is the name of your VPC peering network. If you used the default values during Apigee setup, the name of the network is "default". In production environments, however, this is the name of your custom peering network.
- PROJECT_ID is the name of the project that you created during the Apigee setup process.
This command enables VPC Service Controls for your project. You can execute this command multiple times to enable VPC Service Controls for more than one project.
Create a new perimeter as described in the VPC Service Controls Quickstart. When you create a perimeter, you choose which projects to add within that perimeter as well as which services to secure.
For Apigee and Apigee hybrid, Google recommends that you secure all services when you create a perimeter, including the Apigee APIs.
For more information, see Creating a service perimeter.
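Perimeters can also be created from the command line. The following is an illustrative sketch, not the authoritative procedure: the perimeter name, title, PROJECT_NUMBER, and POLICY_ID are hypothetical placeholders, and the restricted-services list shows only the two Apigee services:

```shell
# Hypothetical example: create a perimeter around the Apigee project and
# restrict the Apigee services inside it. All values are placeholders.
gcloud access-context-manager perimeters create apigee_perimeter \
    --title="Apigee perimeter" \
    --resources=projects/PROJECT_NUMBER \
    --restricted-services=apigee.googleapis.com,apigeeconnect.googleapis.com \
    --policy=POLICY_ID
```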
- Configure the service perimeter, as described in Service perimeter details and configuration.
To add an integrated portal within your perimeter, see Adding an integrated portal to the perimeter.
Setting up VPC Service Controls with Apigee hybrid
Apigee hybrid supports VPC Service Controls, but there are additional steps that you must perform. The general process for integrating Apigee hybrid with VPC Service Controls is as follows:
- Set up private connectivity.
- Secure additional services within the perimeter.
- Set up a private repository. (A private repository is one that is within the perimeter; it does not necessarily need to be a local repository as long as it is inside the perimeter.)
- Push the Apigee images to your private repository.
- Update overrides to use the private repository during the hybrid installation and configuration process.
Each of these steps is described in more detail in the following procedure.
To set up VPC Service Controls with Apigee hybrid:
- Set up private IP addresses for your hybrid network hosts, as described in Setting up private connectivity to Google APIs and services. This involves configuring routes, firewall rules, and DNS entries to let the Google APIs access those private IPs.
Follow the steps in Setting up VPC Service Controls with Apigee.
During this process, be sure to secure the following services within your perimeter, in addition to those specified for Apigee:
- Anthos Service Mesh
- Cloud Monitoring (Stackdriver)
- Google Kubernetes Engine (if you are running on GKE)
- Google Container Registry (if you are using this as your local repository)
To add these services to your perimeter, follow the instructions in Service perimeter details and configuration.
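For example, the services can be added to an existing perimeter from the command line. This is a sketch under stated assumptions: `--add-restricted-services` is the standard flag for appending services, the endpoints for Cloud Monitoring, GKE, and Container Registry are the usual ones, and meshca.googleapis.com is an assumed endpoint for Anthos Service Mesh:

```shell
# Sketch: append the hybrid-related services to an existing perimeter.
# PERIMETER_ID and POLICY_ID are placeholders; meshca.googleapis.com is
# an assumption for Anthos Service Mesh.
gcloud access-context-manager perimeters update PERIMETER_ID \
    --add-restricted-services=meshca.googleapis.com,monitoring.googleapis.com,container.googleapis.com,containerregistry.googleapis.com \
    --policy=POLICY_ID
```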
- Copy the Apigee images into your private repository:
Download the signed Apigee images from Docker Hub as described here. Be sure to specify the latest version numbers.
```
docker pull google/apigee-installer:1.3.3
docker pull google/apigee-authn-authz:1.3.3
docker pull google/apigee-mart-server:1.3.3
docker pull google/apigee-synchronizer:1.3.3
docker pull google/apigee-runtime:1.3.3
docker pull google/apigee-hybrid-cassandra-client:1.3.3
docker pull google/apigee-hybrid-cassandra:1.3.3
docker pull google/apigee-cassandra-backup-utility:1.3.3
docker pull google/apigee-udca:1.3.3
docker pull google/apigee-stackdriver-logging-agent:1.6.8
docker pull google/apigee-prom-prometheus:v2.9.2
docker pull google/apigee-stackdriver-prometheus-sidecar:0.7.5
docker pull google/apigee-connect-agent:1.3.3
docker pull google/apigee-watcher:1.3.3
docker pull google/apigee-operators:1.3.3
docker pull google/apigee-kube-rbac-proxy:v0.4.1
```
Tag the images.
The following example tags the images in a US-based GCR repo:
```
docker tag google/apigee-installer:1.3.3 us.gcr.io/project_ID/apigee-installer:1.3.3
docker tag google/apigee-authn-authz:1.3.3 us.gcr.io/project_ID/apigee-authn-authz:1.3.3
docker tag google/apigee-mart-server:1.3.3 us.gcr.io/project_ID/apigee-mart-server:1.3.3
docker tag google/apigee-synchronizer:1.3.3 us.gcr.io/project_ID/apigee-synchronizer:1.3.3
docker tag google/apigee-runtime:1.3.3 us.gcr.io/project_ID/apigee-runtime:1.3.3
docker tag google/apigee-hybrid-cassandra-client:1.3.3 us.gcr.io/project_ID/apigee-hybrid-cassandra-client:1.3.3
docker tag google/apigee-hybrid-cassandra:1.3.3 us.gcr.io/project_ID/apigee-hybrid-cassandra:1.3.3
docker tag google/apigee-cassandra-backup-utility:1.3.3 us.gcr.io/project_ID/apigee-cassandra-backup-utility:1.3.3
docker tag google/apigee-udca:1.3.3 us.gcr.io/project_ID/apigee-udca:1.3.3
docker tag google/apigee-stackdriver-logging-agent:1.6.8 us.gcr.io/project_ID/apigee-stackdriver-logging-agent:1.6.8
docker tag google/apigee-prom-prometheus:v2.9.2 us.gcr.io/project_ID/apigee-prom-prometheus:v2.9.2
docker tag google/apigee-stackdriver-prometheus-sidecar:0.7.5 us.gcr.io/project_ID/apigee-stackdriver-prometheus-sidecar:0.7.5
docker tag google/apigee-connect-agent:1.3.3 us.gcr.io/project_ID/apigee-connect-agent:1.3.3
docker tag google/apigee-watcher:1.3.3 us.gcr.io/project_ID/apigee-watcher:1.3.3
docker tag google/apigee-operators:1.3.3 us.gcr.io/project_ID/apigee-operators:1.3.3
docker tag google/apigee-kube-rbac-proxy:v0.4.1 us.gcr.io/project_ID/apigee-kube-rbac-proxy:v0.4.1
```
While not required, Google recommends that you include the project ID or other identifying value in the repo path for each image.
Push the images to your private repository.
The following example pushes the images to a US-based GCR repo:
```
docker push us.gcr.io/project_ID/apigee-installer:1.3.3
docker push us.gcr.io/project_ID/apigee-authn-authz:1.3.3
docker push us.gcr.io/project_ID/apigee-mart-server:1.3.3
docker push us.gcr.io/project_ID/apigee-synchronizer:1.3.3
docker push us.gcr.io/project_ID/apigee-runtime:1.3.3
docker push us.gcr.io/project_ID/apigee-hybrid-cassandra-client:1.3.3
docker push us.gcr.io/project_ID/apigee-hybrid-cassandra:1.3.3
docker push us.gcr.io/project_ID/apigee-cassandra-backup-utility:1.3.3
docker push us.gcr.io/project_ID/apigee-udca:1.3.3
docker push us.gcr.io/project_ID/apigee-stackdriver-logging-agent:1.6.8
docker push us.gcr.io/project_ID/apigee-prom-prometheus:v2.9.2
docker push us.gcr.io/project_ID/apigee-stackdriver-prometheus-sidecar:0.7.5
docker push us.gcr.io/project_ID/apigee-connect-agent:1.3.3
docker push us.gcr.io/project_ID/apigee-watcher:1.3.3
docker push us.gcr.io/project_ID/apigee-operators:1.3.3
docker push us.gcr.io/project_ID/apigee-kube-rbac-proxy:v0.4.1
```
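The pull, tag, and push steps above follow the same pattern for every image, so they can be scripted. The following is a minimal sketch, assuming Docker is installed and you are authenticated to your private repo; the `REPO` value is a placeholder, and the script only prints the commands (a dry run) so you can review them before executing:

```shell
#!/bin/sh
# Dry run: print the docker pull/tag/push commands for every Apigee image.
# REPO is a placeholder for your private repository path.
REPO="${REPO:-us.gcr.io/project_ID}"

print_mirror_commands() {
  # image:tag pairs taken from the lists above
  for img in \
      apigee-installer:1.3.3 \
      apigee-authn-authz:1.3.3 \
      apigee-mart-server:1.3.3 \
      apigee-synchronizer:1.3.3 \
      apigee-runtime:1.3.3 \
      apigee-hybrid-cassandra-client:1.3.3 \
      apigee-hybrid-cassandra:1.3.3 \
      apigee-cassandra-backup-utility:1.3.3 \
      apigee-udca:1.3.3 \
      apigee-stackdriver-logging-agent:1.6.8 \
      apigee-prom-prometheus:v2.9.2 \
      apigee-stackdriver-prometheus-sidecar:0.7.5 \
      apigee-connect-agent:1.3.3 \
      apigee-watcher:1.3.3 \
      apigee-operators:1.3.3 \
      apigee-kube-rbac-proxy:v0.4.1
  do
    echo "docker pull google/${img}"
    echo "docker tag google/${img} ${REPO}/${img}"
    echo "docker push ${REPO}/${img}"
  done
}

print_mirror_commands
```

After reviewing the printed commands, pipe the script's output to `sh` to execute them.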
Update your overrides file to point the image URLs to your private repository, as described in Specify configuration overrides. You must change the image URL for each Apigee component listed in the overrides file so that it points to your private repository.
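As an illustration (a sketch of one entry, not the full list of components): in the hybrid overrides file, each component's image is specified with `image.url` and `image.tag` fields, so pointing the MART component at the private repo might look like the following. The `us.gcr.io/project_ID` path matches the example repo used above:

```yaml
# Hypothetical overrides.yaml fragment: MART pulls from the private repo.
mart:
  image:
    url: "us.gcr.io/project_ID/apigee-mart-server"
    tag: "1.3.3"
```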
- Apply your changes using the new images in GCR, as described in Apply the configuration to the cluster.
Granting integrated portals access to the perimeter
Apigee supports granting VPC-SC access levels to integrated portals, but this process requires additional steps, as described in this section.
If you do not grant an access level to your integrated portals, they are unavailable for VPC-SC-enabled Apigee organizations.
Granting an access level to portals:
- Does not put integrated portals within the perimeter.
- Allows integrated portals to be accessed from outside the perimeter.
- Allows VPC-SC-protected Apigee data (such as application data) to be exposed to portal users outside the VPC-SC perimeter.
For more information, see Allowing access to protected resources from outside a perimeter.
Before granting perimeter access to an integrated portal, you must enable the Access Context Manager API for your project, if it is not already enabled. You can do this in the Google Cloud Console or by using the `gcloud services enable` command.
In addition, you must have the service account email address for the project that the portal is used in. To get this, you need the GCP project ID and project number. The following steps describe how to get these values:
- Get GCP project details by using the `projects list` command, as the following example shows:

```
gcloud projects list
```

This command returns the project ID (in the PROJECT_ID column) and the project number (in the PROJECT_NUMBER column) for each project in your GCP organization.
- Identify the Apigee service account email address. This is the same account that the Apigee installer created when you provisioned your organization in Step 3: Create an organization.

To get this email address, use the `iam service-accounts list` command, which uses the following syntax:

```
gcloud iam service-accounts list --project GCP_PROJECT_ID
```

For example:

```
gcloud iam service-accounts list --project my-project
DISPLAY NAME                             EMAIL                                    DISABLED
Apigee default service account           service-firstname.lastname@example.org   False
Compute Engine default service account   ...
```

The service account that you want is the one whose display name is "Apigee default service account".
- Get the policy (or perimeter) ID by using the `access-context-manager policies list` command. Pass the organization ID to this command, as the following example shows:

```
gcloud access-context-manager policies list --organization=organizations/GCP_ORG_ID
```

gcloud responds with a list of policies associated with the specified organization; for example:

```
gcloud access-context-manager policies list --organization=organizations/2244340
NAME           ORGANIZATION   TITLE          ETAG
```

The policy ID (also known as the perimeter ID) is the ID of the VPC-SC service perimeter that acts as a boundary between your project and other services. It is the value in the NAME column.
Steps to grant perimeter access to integrated portals
To grant perimeter access to an integrated portal:
- Gather the service account email address and the VPC-SC policy ID, as described in Prerequisites.
Create a conditions file on your admin machine that specifies the service account that will be granted portal access through the perimeter. The file can have any name you want, but it must have a `.yaml` extension.

In the conditions file, add a `members` section that specifies the Apigee service account, as the following example shows:

```
- members:
  - serviceAccount:SERVICE_ACCOUNT_EMAIL
```

Note that adding a `members` section is sufficient; you do not need to add an access level section. For more information on creating a conditions file, see Limit access by user or service account.
- Create an access level with the `access-context-manager levels create` command:

```
gcloud access-context-manager levels create ACCESS_LEVEL_ID \
  --title ACCESS_LEVEL_TITLE \
  --basic-level-spec PATH/TO/CONDITIONS_FILE.yaml \
  --policy=POLICY_ID
```

- ACCESS_LEVEL_ID is an identifier for the new access level that is being granted; for example, my-portal-access-level.
- ACCESS_LEVEL_TITLE is a title for the access level. The title can be anything you want, but Google recommends that you give it a meaningful value so that you and other administrators will know what it applies to. For example, My Portal Access Level.
- CONDITIONS_FILE is the path to the YAML file that you created in the previous step.
- POLICY_ID is the policy or perimeter ID.

For example:

```
gcloud access-context-manager levels create my-portal-access-level \
  --title "My Portal Access Level" \
  --basic-level-spec PATH/TO/CONDITIONS_FILE.yaml \
  --policy=POLICY_ID
```
- Update the perimeter with the new access level by using the `access-context-manager perimeters update` command:

```
gcloud access-context-manager perimeters update POLICY_ID \
  --add-access-levels=ACCESS_LEVEL_ID \
  --policy=POLICY_ID
```
Check the following:
- If the Access Context Manager API is not enabled for your GCP project, gcloud prompts you to enable it when you try to list or set policies.
- Make sure that you use the GCP organization ID and not the Apigee organization ID when you get details about the organization.
- Some commands described in this section require elevated permissions; for example, to get details about service accounts for a project, you must be an owner of that project.
To verify that the service account exists, execute the `iam service-accounts describe` command, as the following example shows:

```
gcloud iam service-accounts describe SERVICE_ACCOUNT_EMAIL
```

gcloud responds with information about the service account, including the display name and the project ID to which it belongs. If the service account doesn't exist, gcloud responds with an error.
Apigee integrations with VPC Service Controls have the following limitations:
- Integrated portals require additional steps to configure.
- You must deploy Drupal portals within the service perimeter.