Using VPC Service Controls with Apigee and Apigee hybrid

Apigee integrates with VPC Service Controls, which lets you isolate the resources of your Google Cloud projects. This isolation helps prevent data leaks and data exfiltration.

This section describes how to use VPC Service Controls with Apigee.

Overview

VPC Service Controls defines a service perimeter that acts as a boundary between a project and other services. Service perimeters are an organization-level method to protect Google Cloud services in your projects in order to mitigate the risk of data exfiltration.

VPC Service Controls can also ensure that clients within a perimeter that have private access to resources do not have access to unauthorized resources outside the perimeter.

For a detailed look at the benefits of service perimeters, refer to the Overview of VPC Service Controls.

When using VPC Service Controls, note that:

  • Both the Google Cloud project and its associated runtime are included within that project's VPC Service Controls perimeter.
  • Interaction among services inside a perimeter can be restricted by using the VPC accessible services feature, as shown in the example after this list.
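
For example, you can restrict which services clients inside the perimeter may call by updating the perimeter with the gcloud CLI. The following is a minimal sketch; the perimeter name (apigee-perimeter) and POLICY_ID are placeholders for your own values:

    # Allow only the Apigee APIs to be called from VPC networks inside the perimeter.
    gcloud access-context-manager perimeters update apigee-perimeter \
      --policy=POLICY_ID \
      --enable-vpc-accessible-services \
      --add-vpc-allowed-services=apigee.googleapis.com,apigeeconnect.googleapis.com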

Both Apigee and Apigee hybrid integrate with VPC Service Controls. For a complete list of products that integrate with VPC Service Controls, see Supported products.

Setting up VPC Service Controls with Apigee

The general process for setting up VPC Service Controls with Apigee is as follows:

  1. Enable VPC Service Controls.
  2. Create a new service perimeter.
  3. Configure the service perimeter.

These steps are described in more detail below.

To set up VPC Service Controls with Apigee:

  1. Enable VPC Service Controls for the peered connection from your network to Apigee by executing the following command:

    gcloud services vpc-peerings enable-vpc-service-controls \
      --network=NETWORK_NAME --project=PROJECT_ID

    Where:

    • NETWORK_NAME is the name of your VPC peering network.

      If you used the default values during Apigee setup, the name of the network is default. In production environments, however, this is the name of your custom peering network.

    • PROJECT_ID is the ID of the Google Cloud project that you created during the Apigee setup process.

    This command enables VPC Service Controls for your project. You can execute this command multiple times to enable VPC Service Controls for more than one project.
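
    To confirm the peering connection and its status, you can list the peering connections for your network (same placeholder values as above):

      gcloud services vpc-peerings list \
        --network=NETWORK_NAME --project=PROJECT_ID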

  2. Create a new perimeter as described in the VPC Service Controls Quickstart. When you create a perimeter, you choose which projects to add within that perimeter as well as which services to secure.

    For Apigee and Apigee hybrid, Google recommends that you secure all services when you create a perimeter, including the Apigee API and the Apigee Connect API.

    For more information, see Creating a service perimeter.
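
    For example, the following sketch creates a perimeter that secures both APIs. The perimeter name (apigee-perimeter), PROJECT_NUMBER, and POLICY_ID are placeholder values that you must replace with your own:

      gcloud access-context-manager perimeters create apigee-perimeter \
        --title="Apigee perimeter" \
        --resources=projects/PROJECT_NUMBER \
        --restricted-services=apigee.googleapis.com,apigeeconnect.googleapis.com \
        --policy=POLICY_ID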

  3. Configure the service perimeter, as described in Service perimeter details and configuration.

Setting up VPC Service Controls with Apigee hybrid

Apigee hybrid supports VPC Service Controls, but there are additional steps that you must perform. The general process for integrating Apigee hybrid with VPC Service Controls is as follows:

  1. Set up private connectivity.
  2. Secure additional services within the perimeter.
  3. Set up a private repository. (A private repository is any repository inside the perimeter; it does not need to be a local repository.)
  4. Push the Apigee images to your private repository.
  5. Update overrides to use the private repository during the hybrid installation and configuration process.

Each of these steps is described in more detail in the following procedure.

To set up VPC Service Controls with Apigee hybrid:

  1. Set up private IP addresses for your hybrid network hosts, as described in Setting up private connectivity to Google APIs and services. This involves configuring routes, firewall rules, and DNS entries to let the Google APIs access those private IPs.
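
    For example, one common piece of this setup is a static route that sends traffic for the restricted.googleapis.com VIP (199.36.153.4/30) through the VPC network's default internet gateway. This is a sketch only; the route name (apigee-restricted-apis) is a placeholder, and you still need the matching firewall rules and DNS records described in that guide:

      gcloud compute routes create apigee-restricted-apis \
        --network=NETWORK_NAME \
        --destination-range=199.36.153.4/30 \
        --next-hop-gateway=default-internet-gateway
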
  2. Follow the steps in Setting up VPC Service Controls with Apigee.

    During this process, be sure to secure the following services within your perimeter, in addition to those specified for Apigee:

    • Anthos Service Mesh
    • Cloud Monitoring (Stackdriver)
    • Google Kubernetes Engine (if you are running on GKE)
    • Google Container Registry (if you are using this as your local repository)

    To add these services to your perimeter, follow the instructions in Service perimeter details and configuration.
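
    For example, you can add services to an existing perimeter in one update. The service IDs below are the commonly used API names for these products; treat them as assumptions and verify them against the supported products list before running this sketch:

      gcloud access-context-manager perimeters update apigee-perimeter \
        --policy=POLICY_ID \
        --add-restricted-services=meshca.googleapis.com,monitoring.googleapis.com,container.googleapis.com,containerregistry.googleapis.com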

  3. Copy the Apigee images into your private repository:
    1. Download the signed Apigee images from Docker Hub as described here. Be sure to specify the latest version numbers.

      For example:

      docker pull google/apigee-installer:1.3.3
      docker pull google/apigee-authn-authz:1.3.3
      docker pull google/apigee-mart-server:1.3.3
      docker pull google/apigee-synchronizer:1.3.3
      docker pull google/apigee-runtime:1.3.3
      docker pull google/apigee-hybrid-cassandra-client:1.3.3
      docker pull google/apigee-hybrid-cassandra:1.3.3
      docker pull google/apigee-cassandra-backup-utility:1.3.3
      docker pull google/apigee-udca:1.3.3
      docker pull google/apigee-stackdriver-logging-agent:1.6.8
      docker pull google/apigee-prom-prometheus:v2.9.2
      docker pull google/apigee-stackdriver-prometheus-sidecar:0.7.5
      docker pull google/apigee-connect-agent:1.3.3
      docker pull google/apigee-watcher:1.3.3
      docker pull google/apigee-operators:1.3.3
      docker pull google/apigee-kube-rbac-proxy:v0.4.1
    2. Tag the images.

      The following example tags the images in a US-based GCR repo:

      docker tag google/apigee-installer:1.3.3 us.gcr.io/project_ID/apigee-installer:1.3.3
      docker tag google/apigee-authn-authz:1.3.3 us.gcr.io/project_ID/apigee-authn-authz:1.3.3
      docker tag google/apigee-mart-server:1.3.3 us.gcr.io/project_ID/apigee-mart-server:1.3.3
      docker tag google/apigee-synchronizer:1.3.3 us.gcr.io/project_ID/apigee-synchronizer:1.3.3
      docker tag google/apigee-runtime:1.3.3 us.gcr.io/project_ID/apigee-runtime:1.3.3
      docker tag google/apigee-hybrid-cassandra-client:1.3.3 us.gcr.io/project_ID/apigee-hybrid-cassandra-client:1.3.3
      docker tag google/apigee-hybrid-cassandra:1.3.3 us.gcr.io/project_ID/apigee-hybrid-cassandra:1.3.3
      docker tag google/apigee-cassandra-backup-utility:1.3.3 us.gcr.io/project_ID/apigee-cassandra-backup-utility:1.3.3
      docker tag google/apigee-udca:1.3.3 us.gcr.io/project_ID/apigee-udca:1.3.3
      docker tag google/apigee-stackdriver-logging-agent:1.6.8 us.gcr.io/project_ID/apigee-stackdriver-logging-agent:1.6.8
      docker tag google/apigee-prom-prometheus:v2.9.2 us.gcr.io/project_ID/apigee-prom-prometheus:v2.9.2
      docker tag google/apigee-stackdriver-prometheus-sidecar:0.7.5 us.gcr.io/project_ID/apigee-stackdriver-prometheus-sidecar:0.7.5
      docker tag google/apigee-connect-agent:1.3.3 us.gcr.io/project_ID/apigee-connect-agent:1.3.3
      docker tag google/apigee-watcher:1.3.3 us.gcr.io/project_ID/apigee-watcher:1.3.3
      docker tag google/apigee-operators:1.3.3 us.gcr.io/project_ID/apigee-operators:1.3.3
      docker tag google/apigee-kube-rbac-proxy:v0.4.1 us.gcr.io/project_ID/apigee-kube-rbac-proxy:v0.4.1

      While not required, Google recommends that you include the project ID or other identifying value in the repo path for each image.

    3. Push the images to your private repository.

      The following example pushes the images to a US-based GCR repo:

      docker push us.gcr.io/project_ID/apigee-installer:1.3.3
      docker push us.gcr.io/project_ID/apigee-authn-authz:1.3.3
      docker push us.gcr.io/project_ID/apigee-mart-server:1.3.3
      docker push us.gcr.io/project_ID/apigee-synchronizer:1.3.3
      docker push us.gcr.io/project_ID/apigee-runtime:1.3.3
      docker push us.gcr.io/project_ID/apigee-hybrid-cassandra-client:1.3.3
      docker push us.gcr.io/project_ID/apigee-hybrid-cassandra:1.3.3
      docker push us.gcr.io/project_ID/apigee-cassandra-backup-utility:1.3.3
      docker push us.gcr.io/project_ID/apigee-udca:1.3.3
      docker push us.gcr.io/project_ID/apigee-stackdriver-logging-agent:1.6.8
      docker push us.gcr.io/project_ID/apigee-prom-prometheus:v2.9.2
      docker push us.gcr.io/project_ID/apigee-stackdriver-prometheus-sidecar:0.7.5
      docker push us.gcr.io/project_ID/apigee-connect-agent:1.3.3
      docker push us.gcr.io/project_ID/apigee-watcher:1.3.3
      docker push us.gcr.io/project_ID/apigee-operators:1.3.3
      docker push us.gcr.io/project_ID/apigee-kube-rbac-proxy:v0.4.1

  4. Update your overrides file to point the image URLs to your private repository, as described in Specify configuration overrides.

    You must change the image URLs for the following components:

    Component (key in overrides file)       Image URL
    ao                                      your_private_repo/apigee-operators
    authz                                   your_private_repo/apigee-authn-authz
    cassandra                               your_private_repo/apigee-hybrid-cassandra
      cassandra.auth                        your_private_repo/apigee-hybrid-cassandra-client
      cassandra.backup                      your_private_repo/apigee-cassandra-backup-utility
      cassandra.restore                     your_private_repo/apigee-cassandra-backup-utility
    connectAgent                            your_private_repo/apigee-connect-agent
    installer                               your_private_repo/apigee-installer
    kubeRBACProxy                           your_private_repo/apigee-kube-rbac-proxy
    logger                                  your_private_repo/apigee-stackdriver-logging-agent
    mart                                    your_private_repo/apigee-mart-server
    metrics                                 your_private_repo/apigee-prom-prometheus
      metrics.sdSidecar                     your_private_repo/apigee-stackdriver-prometheus-sidecar
    runtime                                 your_private_repo/apigee-runtime
    synchronizer                            your_private_repo/apigee-synchronizer
    udca                                    your_private_repo/apigee-udca
      udca.fluentd                          your_private_repo/apigee-stackdriver-logging-agent
    watcher                                 your_private_repo/apigee-watcher
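
    For example, the mart entry in your overrides file might look like the following sketch, assuming the image.url/image.tag structure used by hybrid overrides; substitute your own repository path and version:

      mart:
        image:
          url: "us.gcr.io/project_ID/apigee-mart-server"  # assumption: your private repo path
          tag: "1.3.3"
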
  5. Apply your changes using the new images in GCR, as described in Apply the configuration to the cluster.
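
    For example, if you manage the cluster with apigeectl and your edited file is named overrides.yaml, the apply step might look like this:

      apigeectl apply -f overrides.yaml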

Limitations

Apigee integrations with VPC Service Controls have the following limitations:

  • You must use Drupal-based portals; integrated portals are not supported.
  • You must deploy Drupal portals within the service perimeter.