This page applies to Apigee and Apigee hybrid.
Apigee integrates with VPC Service Controls, which lets you isolate the resources of your Google Cloud projects and thereby helps prevent data leaks and exfiltration.
This section describes how to use VPC Service Controls with Apigee.
Overview
VPC Service Controls defines a service perimeter that acts as a boundary between a project and other services. Service perimeters are an organization-level method to protect Google Cloud services in your projects in order to mitigate the risk of data exfiltration.
VPC Service Controls can also ensure that clients within a perimeter that have private access to resources do not have access to unauthorized resources outside the perimeter.
For a detailed look at the benefits of service perimeters, refer to the Overview of VPC Service Controls.
When using VPC Service Controls, note that:
- Both the Google Cloud project and its associated runtime are included within that project's VPC Service Controls perimeter.
- Interaction among services inside a perimeter can be restricted using the VPC network accessible services feature.
Both Apigee and Apigee hybrid integrate with VPC Service Controls. For a complete list of products that integrate with VPC Service Controls, see Supported products.
Impact on internet connectivity
When VPC Service Controls is enabled, access to the internet is disabled: the Apigee runtime can no longer communicate with any public internet target. Instead, you must route traffic through your VPC by establishing custom routes. See Importing and exporting custom routes.
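For example, a minimal sketch of exporting custom routes over the Apigee service networking peering (the peering name servicenetworking-googleapis-com is the one that service networking typically creates; the network and project names are placeholders):

    # Export custom routes over the service networking peering so that
    # traffic from the Apigee runtime follows routes defined in your VPC.
    gcloud compute networks peerings update servicenetworking-googleapis-com \
      --network=my-shared-vpc \
      --export-custom-routes \
      --project=my-host-project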
Setting up VPC Service Controls with Apigee
The general process for setting up VPC Service Controls with Apigee is as follows:
- Enable VPC Service Controls.
- Create a new service perimeter.
- Configure the service perimeter.
These steps are described in more detail below.
To set up VPC Service Controls with Apigee:
- Enable VPC Service Controls for the peered connection from your network to Apigee by executing the following command:
    gcloud services vpc-peerings enable-vpc-service-controls \
      --network=SHARED_VPC_NETWORK --project=PROJECT_ID
Where:
- SHARED_VPC_NETWORK is the name of your shared VPC network.
- PROJECT_ID is the name of the project hosting the shared VPC network; it is not the project used to create the Apigee organization.
This command enables VPC Service Controls for your project. You can execute this command multiple times to enable VPC Service Controls for more than one project.
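For example, a sketch of enabling the control for two hypothetical host projects (all names are placeholders):

    # Placeholder names; run once per project that hosts a peered network.
    gcloud services vpc-peerings enable-vpc-service-controls \
      --network=shared-vpc-1 --project=host-project-1
    gcloud services vpc-peerings enable-vpc-service-controls \
      --network=shared-vpc-2 --project=host-project-2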
- Create a new perimeter as described in the VPC Service Controls Quickstart. When you create a perimeter, you choose which projects to add within that perimeter as well as which services to secure.
For Apigee and Apigee hybrid, Google recommends that you secure all services when you create a perimeter, including the Apigee API.
For more information, see Creating a service perimeter.
- Configure the service perimeter, as described in Service perimeter details and configuration.
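If you prefer the command line, the following sketch creates a perimeter that secures the Apigee API (the perimeter name, project number, policy ID, and service list are placeholders; extend --restricted-services to cover all services you use):

    # Sketch: create a perimeter around one project, securing the Apigee API.
    gcloud access-context-manager perimeters create my-apigee-perimeter \
      --title="Apigee perimeter" \
      --resources=projects/PROJECT_NUMBER \
      --restricted-services=apigee.googleapis.com \
      --policy=POLICY_ID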
To grant an integrated portal access to your perimeter, see Granting integrated portals access to the perimeter.
Setting up VPC Service Controls with Apigee hybrid
Apigee hybrid supports VPC Service Controls, but there are additional steps that you must perform. The general process for integrating Apigee hybrid with VPC Service Controls is as follows:
- Set up private connectivity.
- Secure additional services within the perimeter.
- Set up a private repository. (A private repository is simply one inside the perimeter; it does not need to be a local repository.)
- Push the Apigee images to your private repository.
- Update overrides to use the private repository during the hybrid installation and configuration process.
Each of these steps is described in more detail in the following procedure.
To set up VPC Service Controls with Apigee hybrid:
- Set up private IP addresses for your hybrid network hosts, as described in Setting up private connectivity to Google APIs and services. This involves configuring routes, firewall rules, and DNS entries so that your hosts reach Google APIs through those private IPs.
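As a hedged sketch of the routing part of that step (the route and network names are placeholders; 199.36.153.4/30 is the documented restricted.googleapis.com range used with VPC Service Controls):

    # Route the restricted Google APIs VIP range through the default
    # internet gateway; this traffic stays on Google's network.
    gcloud compute routes create restricted-vip-route \
      --network=my-hybrid-network \
      --destination-range=199.36.153.4/30 \
      --next-hop-gateway=default-internet-gateway

You would pair this with DNS records that resolve *.googleapis.com to restricted.googleapis.com addresses, as the linked guide describes.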
- Follow the steps in Setting up VPC Service Controls with Apigee.
During this process, be sure to secure the following services within your perimeter, in addition to those specified for Apigee:
- Anthos Service Mesh
- Cloud Monitoring (Stackdriver)
- Google Kubernetes Engine (if you are running on GKE)
- Google Container Registry (if you are using this as your local repository)
To add these services to your perimeter, follow the instructions in Service perimeter details and configuration.
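For example, a sketch of adding some of these services with gcloud (the perimeter name and policy ID are placeholders, and the exact service names to add are an assumption; confirm them against the supported products list):

    # Sketch: add hybrid-related services to the existing perimeter.
    gcloud access-context-manager perimeters update my-apigee-perimeter \
      --add-restricted-services=monitoring.googleapis.com,container.googleapis.com,containerregistry.googleapis.com \
      --policy=POLICY_ID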
- Copy the Apigee images into your private repository:
- Download the signed Apigee images from Docker Hub as described here. Be sure to specify the latest version numbers.
For example:
    docker pull google/apigee-installer:1.3.3
    docker pull google/apigee-authn-authz:1.3.3
    docker pull google/apigee-mart-server:1.3.3
    docker pull google/apigee-synchronizer:1.3.3
    docker pull google/apigee-runtime:1.3.3
    docker pull google/apigee-hybrid-cassandra-client:1.3.3
    docker pull google/apigee-hybrid-cassandra:1.3.3
    docker pull google/apigee-cassandra-backup-utility:1.3.3
    docker pull google/apigee-udca:1.3.3
    docker pull google/apigee-stackdriver-logging-agent:1.6.8
    docker pull google/apigee-prom-prometheus:v2.9.2
    docker pull google/apigee-stackdriver-prometheus-sidecar:0.7.5
    docker pull google/apigee-connect-agent:1.3.3
    docker pull google/apigee-watcher:1.3.3
    docker pull google/apigee-operators:1.3.3
    docker pull google/apigee-kube-rbac-proxy:v0.4.1
- Tag the images.
The following example tags the images in a US-based GCR repo:
    docker tag google/apigee-installer:1.3.3 us.gcr.io/project_ID/apigee-installer:1.3.3
    docker tag google/apigee-authn-authz:1.3.3 us.gcr.io/project_ID/apigee-authn-authz:1.3.3
    docker tag google/apigee-mart-server:1.3.3 us.gcr.io/project_ID/apigee-mart-server:1.3.3
    docker tag google/apigee-synchronizer:1.3.3 us.gcr.io/project_ID/apigee-synchronizer:1.3.3
    docker tag google/apigee-runtime:1.3.3 us.gcr.io/project_ID/apigee-runtime:1.3.3
    docker tag google/apigee-hybrid-cassandra-client:1.3.3 us.gcr.io/project_ID/apigee-hybrid-cassandra-client:1.3.3
    docker tag google/apigee-hybrid-cassandra:1.3.3 us.gcr.io/project_ID/apigee-hybrid-cassandra:1.3.3
    docker tag google/apigee-cassandra-backup-utility:1.3.3 us.gcr.io/project_ID/apigee-cassandra-backup-utility:1.3.3
    docker tag google/apigee-udca:1.3.3 us.gcr.io/project_ID/apigee-udca:1.3.3
    docker tag google/apigee-stackdriver-logging-agent:1.6.8 us.gcr.io/project_ID/apigee-stackdriver-logging-agent:1.6.8
    docker tag google/apigee-prom-prometheus:v2.9.2 us.gcr.io/project_ID/apigee-prom-prometheus:v2.9.2
    docker tag google/apigee-stackdriver-prometheus-sidecar:0.7.5 us.gcr.io/project_ID/apigee-stackdriver-prometheus-sidecar:0.7.5
    docker tag google/apigee-connect-agent:1.3.3 us.gcr.io/project_ID/apigee-connect-agent:1.3.3
    docker tag google/apigee-watcher:1.3.3 us.gcr.io/project_ID/apigee-watcher:1.3.3
    docker tag google/apigee-operators:1.3.3 us.gcr.io/project_ID/apigee-operators:1.3.3
    docker tag google/apigee-kube-rbac-proxy:v0.4.1 us.gcr.io/project_ID/apigee-kube-rbac-proxy:v0.4.1
While not required, Google recommends that you include the project ID or other identifying value in the repo path for each image.
- Push the images to your private repository.
The following example pushes the images to a US-based GCR repo:
    docker push us.gcr.io/project_ID/apigee-installer:1.3.3
    docker push us.gcr.io/project_ID/apigee-authn-authz:1.3.3
    docker push us.gcr.io/project_ID/apigee-mart-server:1.3.3
    docker push us.gcr.io/project_ID/apigee-synchronizer:1.3.3
    docker push us.gcr.io/project_ID/apigee-runtime:1.3.3
    docker push us.gcr.io/project_ID/apigee-hybrid-cassandra-client:1.3.3
    docker push us.gcr.io/project_ID/apigee-hybrid-cassandra:1.3.3
    docker push us.gcr.io/project_ID/apigee-cassandra-backup-utility:1.3.3
    docker push us.gcr.io/project_ID/apigee-udca:1.3.3
    docker push us.gcr.io/project_ID/apigee-stackdriver-logging-agent:1.6.8
    docker push us.gcr.io/project_ID/apigee-prom-prometheus:v2.9.2
    docker push us.gcr.io/project_ID/apigee-stackdriver-prometheus-sidecar:0.7.5
    docker push us.gcr.io/project_ID/apigee-connect-agent:1.3.3
    docker push us.gcr.io/project_ID/apigee-watcher:1.3.3
    docker push us.gcr.io/project_ID/apigee-operators:1.3.3
    docker push us.gcr.io/project_ID/apigee-kube-rbac-proxy:v0.4.1
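To confirm that the pushes succeeded, one option (a sketch; the repository path is a placeholder) is to list the images now in the registry:

    # List the images stored in the private GCR repository.
    gcloud container images list --repository=us.gcr.io/project_ID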
- Update your overrides file to point the image URLs to your private repository, as described in Specify configuration overrides.
You must change the image URLs for the following components:
Component Name (in overrides file) | Image URL
---------------------------------- | ---------
ao | your_private_repo/apigee-operators
authz | your_private_repo/apigee-authn-authz
cassandra | your_private_repo/apigee-hybrid-cassandra
cassandra (auth) | your_private_repo/apigee-hybrid-cassandra-client
cassandra (backup) | your_private_repo/apigee-cassandra-backup-utility
cassandra (restore) | your_private_repo/apigee-cassandra-backup-utility
connectAgent | your_private_repo/apigee-connect-agent
installer | your_private_repo/apigee-installer
kubeRBACProxy | your_private_repo/apigee-kube-rbac-proxy
logger | your_private_repo/apigee-stackdriver-logging-agent
mart | your_private_repo/apigee-mart-server
metrics | your_private_repo/apigee-prom-prometheus
metrics (sdSidecar) | your_private_repo/apigee-stackdriver-prometheus-sidecar
runtime | your_private_repo/apigee-runtime
synchronizer | your_private_repo/apigee-synchronizer
udca | your_private_repo/apigee-udca
udca (fluentd) | your_private_repo/apigee-stackdriver-logging-agent
watcher | your_private_repo/apigee-watcher
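As a hedged sketch of what such an override might look like in your overrides file (this assumes the hybrid image.url and image.tag override properties; the repository path and tag are placeholders):

    # Sketch: point components at the private repository (values assumed).
    mart:
      image:
        url: "us.gcr.io/project_ID/apigee-mart-server"
        tag: "1.3.3"
    runtime:
      image:
        url: "us.gcr.io/project_ID/apigee-runtime"
        tag: "1.3.3"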
- Apply your changes using the new images in GCR, as described in Apply the configuration to the cluster.
Granting integrated portals access to the perimeter
VPC Service Controls supports granting access levels to integrated portals, but this process requires additional steps, as described in this section.
If you do not grant an access level to integrated portals, then integrated portals are unavailable for VPC-SC-enabled Apigee organizations.
Granting an access level to portals:
- Does not put integrated portals within the perimeter.
- Allows integrated portals to be accessed from outside the perimeter.
- Exposes VPC-SC-protected Apigee data (such as application data) to portal users outside the VPC-SC perimeter.
For more information, see Allowing access to protected resources from outside a perimeter.
Prerequisites
Before granting perimeter access to an integrated portal, you must enable the Access Context Manager API for your project, if it is not already enabled. You can do this in the Cloud console or by using the gcloud services enable command.
To check whether the API is enabled, examine the output of the gcloud services list command, as described in Step 2: Enable Apigee APIs.
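For example (the service name accesscontextmanager.googleapis.com is the standard one for this API; the project ID is a placeholder):

    # Enable the Access Context Manager API for the project.
    gcloud services enable accesscontextmanager.googleapis.com --project=my-project

    # Check whether it is already enabled.
    gcloud services list --enabled --project=my-project | grep accesscontextmanager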
In addition, you must have the service account email address for the project that the portal is used in. To get this, you need the GCP project ID and project number. The following steps describe how to get these values:
- Get GCP project details by using the gcloud projects list command, as the following example shows:

      gcloud projects list

  This command returns the project ID (in the PROJECT_ID column) and the project number (in the PROJECT_NUMBER column) for each project in your GCP organization.
- Identify the Apigee service account email address. This is the same account that the Apigee installer created when you provisioned your organization in Step 3: Create an organization.

  To get this email address, use the iam service-accounts list command, which uses the following syntax:

      gcloud iam service-accounts list --project GCP_PROJECT_ID

  For example:

      gcloud iam service-accounts list --project my-project
      DISPLAY NAME                             EMAIL                                                   DISABLED
      Apigee default service account           service-8675309@gcp-sa-apigee.iam.gserviceaccount.com   False
      Compute Engine default service account   8675309-compute@developer.gserviceaccount.com           False

  The service account that you want is the one whose email address matches the following format:

      service-GCP_PROJECT_NUMBER@gcp-sa-apigee.iam.gserviceaccount.com

  For example:

      service-8675309@gcp-sa-apigee.iam.gserviceaccount.com
- Get the policy (or perimeter) ID by using the access-context-manager policies list command. Pass the organization ID to this command, as the following example shows:

      gcloud access-context-manager policies list --organization=organizations/GCP_ORG_ID

  gcloud responds with a list of policies associated with the specified organization; for example:

      gcloud access-context-manager policies list --organization=organizations/2244340
      NAME       ORGANIZATION   TITLE            ETAG
      04081981   2244340        Default policy   421924c5a97c0Icu8

  The policy ID (also known as the perimeter ID) is the ID of the VPC-SC service perimeter that acts as a boundary between your project and other services. It is the value in the NAME column.
Steps to grant perimeter access to integrated portals
To grant perimeter access to an integrated portal:
- Gather the service account email address and the policy ID, as described in Prerequisites.
- Create a conditions file on your admin machine that specifies the service account address used to grant the portal access through the perimeter.
The file can have any name, but it must have a .yaml extension; for example, my-portal-access-rules.yaml.
. -
In the conditions file, add a
members
section that specifies the Apigee service account, as the following example shows:- members: - serviceAccount:
service-
8675309
@gcp-sa-apigee.iam.gserviceaccount.comNote that adding a
members
section is sufficient; you do not need to add an access level section. For more information on creating a conditions file, see Limit access by user or service account. - Create an access level with the
access-context-manager levels create
command; for example:gcloud access-context-manager levels create ACCESS_LEVEL_ID \ --title ACCESS_LEVEL_TITLE \ --basic-level-spec PATH/TO/CONDITIONS_FILE.yaml \ --policy=POLICY_ID
Where:
- ACCESS_LEVEL_ID is an identifier for the new access level that is being granted; for example, my-portal-access-level.
. - ACCESS_LEVEL_TITLE is a title for the access level. The title can be anything you want, but Google recommends that you give it a meaningful value so that you and other administrators will know what it applies to. For example, My Portal Access Level.
- CONDITIONS_FILE is the path to the YAML file that you created in the previous step.
- POLICY_ID is the policy or perimeter ID.
For example:
    gcloud access-context-manager levels create my-portal-access-level \
      --title "My Portal Access Level" \
      --basic-level-spec ~/my-portal-access-rules.yaml \
      --policy=04081981
- Update the perimeter with the new access level by using the access-context-manager perimeters update command:

      gcloud access-context-manager perimeters update POLICY_ID \
        --add-access-levels=ACCESS_LEVEL_ID \
        --policy=POLICY_ID
For example:
    gcloud access-context-manager perimeters update 04081981 \
      --add-access-levels=my-portal-access-level \
      --policy=04081981
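To confirm that the access level was attached, one option (a sketch, reusing the example values above) is to describe the perimeter and inspect its access levels:

    # Describe the perimeter; the output lists the attached access levels.
    gcloud access-context-manager perimeters describe 04081981 \
      --policy=04081981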
Troubleshooting
Check the following:
- If the Access Context Manager API is not enabled for your GCP project, gcloud prompts you to enable it when you try to list or set policies.
- Make sure that you use the GCP organization ID and not the Apigee organization ID when you get details about the organization.
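For example, to look up the GCP organization ID, you can list your organizations (a sketch using a standard command):

    # Lists each GCP organization's display name and numeric ID.
    gcloud organizations list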
- Some commands described in this section require elevated permissions; for example, to get details about service accounts for a project, you must be an owner of that project.
- To verify that the service account exists, execute the iam service-accounts describe command, as the following example shows:

      gcloud iam service-accounts describe service-8675309@gcp-sa-apigee.iam.gserviceaccount.com

  gcloud responds with information about the service account, including the display name and the project ID to which it belongs. If the service account doesn't exist, gcloud responds with a NOT_FOUND error.
Limitations
Apigee integrations with VPC Service Controls have the following limitations:
- Integrated portals require additional steps to configure.
- You must deploy Drupal portals within the service perimeter.