Connecting GKE workloads to services in different Google Cloud projects using Shared VPC

Author: Romulo Santos (@soeirosantos), Published: 2020-11-05

Contributed by the Google Cloud community. Not official Google documentation.

In this tutorial, you configure Shared VPC and connect two service projects. One service project contains a GKE cluster, and the other service project contains managed services that are accessible from applications deployed to the GKE cluster.

Before you begin

For this tutorial, you need three Google Cloud projects to configure the Shared VPC network and the service projects. To make cleanup easiest, create new projects for this tutorial, so you can delete the projects when you're done. For details, see the "Cleaning up" section at the end of the tutorial.

  1. Create three Google Cloud projects.
  2. Enable billing for your projects.
  3. Make sure that you have the required administrative roles to configure Shared VPC:

    • Organization Admin: resourcemanager.organizationAdmin
    • Shared VPC Admin: compute.xpnAdmin and resourcemanager.projectIamAdmin
    • Service Project Admin: compute.networkUser

    For more information about the purposes of these roles, see the Shared VPC IAM documentation.

  4. The commands in this tutorial use the gcloud command-line interface. To install the Cloud SDK, which includes the gcloud tool, follow the instructions in the Cloud SDK documentation.
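
A note on the administrative roles from step 3: if you have sufficient permissions at the organization level, granting them could look roughly like the following sketch. ORGANIZATION_ID and the user email are placeholders; substitute your own values. The compute.networkUser grants for Service Project Admins are made at the subnet level later in this tutorial.

    gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
        --member user:shared-vpc-admin@example.com \
        --role roles/compute.xpnAdmin

    gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
        --member user:shared-vpc-admin@example.com \
        --role roles/resourcemanager.projectIamAdmin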

Costs

This tutorial uses billable components of Google Cloud, including the following:

  • Google Kubernetes Engine (GKE)
  • Compute Engine
  • Memorystore for Redis

Use the pricing calculator to generate a cost estimate based on your projected usage.

Configure the Shared VPC network

This tutorial refers to the host project using the environment variable SHARED_VPC_PROJECT, and to the two service projects using the SERVICE_PROJECT and GKE_CLUSTER_PROJECT environment variables.

  1. Define the environment variables for your projects and Google Cloud region:

    export SHARED_VPC_PROJECT=your-project-name-1
    export GKE_CLUSTER_PROJECT=your-project-name-2
    export SERVICE_PROJECT=your-project-name-3
    export GCP_REGION=us-central1
    
  2. Enable the container.googleapis.com service API for the host project and the service project used for the GKE cluster, and the compute.googleapis.com API for the other service project:

    gcloud services enable container.googleapis.com --project $SHARED_VPC_PROJECT
    
    gcloud services enable container.googleapis.com --project $GKE_CLUSTER_PROJECT
    
    gcloud services enable compute.googleapis.com --project $SERVICE_PROJECT
    
  3. Create a VPC network in the SHARED_VPC_PROJECT project and subnets to be used by the service projects:

    gcloud compute networks create shared-net \
        --subnet-mode custom \
        --project $SHARED_VPC_PROJECT
    
    gcloud compute networks subnets create k8s-subnet \
        --network shared-net \
        --range 10.0.4.0/22 \
        --region $GCP_REGION \
        --secondary-range k8s-services=10.0.32.0/20,k8s-pods=10.4.0.0/14 \
        --project $SHARED_VPC_PROJECT
    
    gcloud compute networks subnets create service-subnet \
        --network shared-net \
        --range 172.16.4.0/22 \
        --region $GCP_REGION \
        --project $SHARED_VPC_PROJECT
    

    The k8s-subnet subnet also includes secondary ranges for the pods and services that are used by the GKE cluster in the GKE_CLUSTER_PROJECT project.

    The region and ranges used here are arbitrary. You can use what works best for you.

  4. Enable the Shared VPC network and associate the service projects:

    gcloud compute shared-vpc enable $SHARED_VPC_PROJECT
    
    gcloud compute shared-vpc associated-projects add $GKE_CLUSTER_PROJECT \
        --host-project $SHARED_VPC_PROJECT
    
    gcloud compute shared-vpc associated-projects add $SERVICE_PROJECT \
        --host-project $SHARED_VPC_PROJECT
    
  5. Verify the service project configuration:

    gcloud compute shared-vpc get-host-project $GKE_CLUSTER_PROJECT
    
    gcloud compute shared-vpc get-host-project $SERVICE_PROJECT
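
In addition to the get-host-project checks above, you can optionally verify the configuration from the host project side by listing the subnets in the shared network and the associated service projects. This is only a verification sketch; the output should reflect the two subnets and two service projects created above:

    gcloud compute networks subnets list \
        --network shared-net \
        --project $SHARED_VPC_PROJECT

    gcloud compute shared-vpc list-associated-resources $SHARED_VPC_PROJECT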
    

Authorize the service projects in the Shared VPC network

In this section, you configure an IAM member from a service project to access only specific subnets in the host project. This provides a granular way to define service project admins by granting them the compute.networkUser role for only the subnets that they really need access to.
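
For example, if you later want to grant an individual user the Service Project Admin role scoped to a single subnet, an additive binding is enough. The following is a minimal sketch; the user email is a hypothetical placeholder, so substitute your own:

    gcloud compute networks subnets add-iam-policy-binding k8s-subnet \
        --region $GCP_REGION \
        --member user:service-project-admin@example.com \
        --role roles/compute.networkUser \
        --project $SHARED_VPC_PROJECT

The steps below use set-iam-policy with a policy file instead, which replaces the whole subnet policy in one operation.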

  1. Review the members of the GKE_CLUSTER_PROJECT project and get the name of the service account that you'll use in later steps:

    gcloud projects get-iam-policy $GKE_CLUSTER_PROJECT
    

    The output should look something like this:

    bindings:
    - members:
      - serviceAccount:service-48977974920@compute-system.iam.gserviceaccount.com
      role: roles/compute.serviceAgent
    - members:
      - serviceAccount:service-48977974920@container-engine-robot.iam.gserviceaccount.com
      role: roles/container.serviceAgent
    - members:
      - serviceAccount:48977974920-compute@developer.gserviceaccount.com
      - serviceAccount:48977974920@cloudservices.gserviceaccount.com
      - serviceAccount:service-48977974920@containerregistry.iam.gserviceaccount.com
      role: roles/editor
    
  2. Review the members of the SERVICE_PROJECT project and get the name of the service account that you'll use in later steps:

    gcloud projects get-iam-policy $SERVICE_PROJECT
    

    The output should look something like this:

    bindings:
    - members:
      - serviceAccount:service-507534582923@compute-system.iam.gserviceaccount.com
      role: roles/compute.serviceAgent
    - members:
      - serviceAccount:507534582923-compute@developer.gserviceaccount.com
      - serviceAccount:507534582923@cloudservices.gserviceaccount.com
      role: roles/editor
    
  3. Review the IAM policies for the subnets in the SHARED_VPC_PROJECT project and retrieve the etag values, shown as ACAB in the example output below. You use your own etag values in a later step to update the policies.

    gcloud compute networks subnets get-iam-policy k8s-subnet \
        --region $GCP_REGION \
        --project $SHARED_VPC_PROJECT
    etag: ACAB
    
    gcloud compute networks subnets get-iam-policy service-subnet \
        --region $GCP_REGION \
        --project $SHARED_VPC_PROJECT
    etag: ACAB
    
  4. Create a k8s-subnet-policy.yaml file with the service accounts from the GKE_CLUSTER_PROJECT project:

    # k8s-subnet-policy.yaml
    bindings:
    - members:
      - serviceAccount:48977974920@cloudservices.gserviceaccount.com
      - serviceAccount:service-48977974920@container-engine-robot.iam.gserviceaccount.com
      role: roles/compute.networkUser
    etag: ACAB
    
  5. Update the policy to bind the members to the k8s-subnet subnet:

    gcloud compute networks subnets set-iam-policy k8s-subnet \
        k8s-subnet-policy.yaml \
        --region $GCP_REGION \
        --project $SHARED_VPC_PROJECT
    
  6. Create a service-subnet-policy.yaml file with the service accounts from the SERVICE_PROJECT project:

    # service-subnet-policy.yaml
    bindings:
    - members:
      - serviceAccount:507534582923@cloudservices.gserviceaccount.com
      role: roles/compute.networkUser
    etag: ACAB
    
  7. Update the policy to bind the members to the service-subnet subnet:

    gcloud compute networks subnets set-iam-policy service-subnet \
        service-subnet-policy.yaml \
        --region $GCP_REGION \
        --project $SHARED_VPC_PROJECT
    
  8. Add the GKE service account from the GKE_CLUSTER_PROJECT project to the host project with the roles/container.hostServiceAgentUser role:

    gcloud projects add-iam-policy-binding $SHARED_VPC_PROJECT \
      --member serviceAccount:service-48977974920@container-engine-robot.iam.gserviceaccount.com \
      --role roles/container.hostServiceAgentUser
    
  9. List the subnets that are usable for GKE clusters to validate the configuration that you have done (a troubleshooting sketch appears at the end of this section):

    gcloud container subnets list-usable \
        --network-project $SHARED_VPC_PROJECT \
        --project $GKE_CLUSTER_PROJECT
    

Your Shared VPC network is configured and the service projects are authorized to manage their subnets in the Shared VPC network.
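
If the list-usable command in step 9 returns an error or an empty list, re-check the bindings from the previous steps. For example, the following sketch confirms that the GKE service agent holds the Host Service Agent User role on the host project; the --flatten, --filter, and --format flags are just one way to narrow the output:

    gcloud projects get-iam-policy $SHARED_VPC_PROJECT \
        --flatten "bindings[].members" \
        --filter "bindings.role:roles/container.hostServiceAgentUser" \
        --format "value(bindings.members)"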

Connect to a Memorystore instance

In this section, you create some resources and experiment with them to validate network connectivity.

  1. Create a GKE cluster:

    gcloud container clusters create gke-cluster \
        --zone=$GCP_REGION-a \
        --enable-ip-alias \
        --network projects/$SHARED_VPC_PROJECT/global/networks/shared-net \
        --subnetwork projects/$SHARED_VPC_PROJECT/regions/$GCP_REGION/subnetworks/k8s-subnet \
        --cluster-secondary-range-name k8s-pods \
        --services-secondary-range-name k8s-services \
        --project $GKE_CLUSTER_PROJECT
    
  2. Before you create a Memorystore instance in the SERVICE_PROJECT project that is reachable from the GKE cluster, you need to establish a private service access connection in the SHARED_VPC_PROJECT project.

    Check the Memorystore documentation for more details about these steps.

    gcloud services enable servicenetworking.googleapis.com --project $SHARED_VPC_PROJECT
    
    gcloud beta compute addresses create \
    memorystore-pvt-svc --global --prefix-length=24 \
    --description "memorystore private service range" --network shared-net \
    --purpose vpc_peering --project $SHARED_VPC_PROJECT
    
    gcloud services vpc-peerings connect \
    --service servicenetworking.googleapis.com --ranges memorystore-pvt-svc \
    --network shared-net --project $SHARED_VPC_PROJECT
    
  3. Enable the necessary APIs and create the Memorystore instance:

    gcloud services enable redis.googleapis.com --project $SERVICE_PROJECT
    gcloud services enable servicenetworking.googleapis.com --project $SERVICE_PROJECT
    
    gcloud redis instances create my-redis --size 5 --region $GCP_REGION \
    --network=projects/$SHARED_VPC_PROJECT/global/networks/shared-net \
    --connect-mode=private-service-access --project $SERVICE_PROJECT
    
  4. Verify that the instance was created in the Shared VPC network, and retrieve the instance IP to use in a later step (a sketch for capturing the IP with gcloud appears after step 5):

    gcloud redis instances list --region $GCP_REGION --project $SERVICE_PROJECT
    
  5. Check whether you can access this Redis instance from within the cluster:

    gcloud container clusters get-credentials gke-cluster --zone $GCP_REGION-a --project $GKE_CLUSTER_PROJECT
    
    kubectl run -it --rm --image gcr.io/google_containers/redis:v1 r2 --restart=Never -- sh
    
    redis-cli -h 10.192.163.3 info
    
    redis-benchmark -c 100 -n 100000 -d 1024 -r 100000 -t PING,SET,GET,INCR,LPUSH,RPUSH,LPOP,RPOP,SADD,SPOP,MSET -h 10.192.163.3
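
The redis-cli and redis-benchmark commands above use 10.192.163.3 as an example address. One way to look up the address of your own instance from your workstation is a describe call; the --format expression below is just one option for extracting the host field:

    gcloud redis instances describe my-redis \
        --region $GCP_REGION \
        --project $SERVICE_PROJECT \
        --format "value(host)"

Substitute the returned IP address for the example address when you run the commands inside the pod.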
    

Connect to a Compute Engine instance

In this section, you access a service running inside a Compute Engine VM instance.

  1. Create a VM instance:

    gcloud compute instances create my-instance --zone $GCP_REGION-a \
    --subnet projects/$SHARED_VPC_PROJECT/regions/$GCP_REGION/subnetworks/service-subnet \
    --project $SERVICE_PROJECT
    
  2. Try to connect to the instance using SSH (which won't work yet):

    gcloud compute ssh my-instance \
        --zone $GCP_REGION-a \
        --project $SERVICE_PROJECT
    

    You can't access the VM instance with SSH yet, because the Shared VPC network doesn't have a firewall rule that allows SSH ingress.

  3. Add a firewall rule in the Shared VPC network to enable SSH access:

    gcloud compute firewall-rules create shared-net-ssh \
        --network shared-net \
        --direction INGRESS \
        --allow tcp:22,icmp \
        --project $SHARED_VPC_PROJECT
    
  4. Try again to connect to the instance using SSH (which should work this time):

    gcloud compute ssh my-instance \
        --zone $GCP_REGION-a \
        --project $SERVICE_PROJECT
    
  5. From inside the my-instance VM instance, run the following command to install the Nginx server, and then exit the SSH session:

    sudo apt install -y nginx
    
  6. Retrieve the internal IP address for the instance just created:

    gcloud compute instances list --project $SERVICE_PROJECT
    

    You should have only one instance in this project.

  7. Try to access this VM from the GKE cluster:

    kubectl run -it --rm --image busybox bb8 --restart=Never
    
    wget -qO- 172.16.4.3 # vm internal IP
    

    The wget command should hang and not return.

  8. To enable ingress access from the GKE cluster to the Compute Engine instance created in the service-subnet subnet, add the following firewall rule for the Kubernetes pod, service, and node ranges (a tighter variant is sketched after this list):

    gcloud compute firewall-rules create k8s-access \
        --network shared-net \
        --allow tcp,udp \
        --direction INGRESS \
        --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20 \
        --project $SHARED_VPC_PROJECT
    
  9. Try again to access the VM from the GKE cluster:

    kubectl run -it --rm --image busybox bb8 --restart=Never
    
    wget -qO- 172.16.4.3 # vm internal IP
    

    This time, you should see the Welcome to nginx! page response.
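
The k8s-access rule above allows all TCP and UDP traffic from the GKE node, pod, and service ranges to every instance in the network. As a tighter alternative sketch, you could scope a rule to port 80 and to instances carrying a network tag; the tag name web used here is arbitrary:

    gcloud compute instances add-tags my-instance \
        --tags web \
        --zone $GCP_REGION-a \
        --project $SERVICE_PROJECT

    gcloud compute firewall-rules create k8s-to-web \
        --network shared-net \
        --allow tcp:80 \
        --direction INGRESS \
        --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20 \
        --target-tags web \
        --project $SHARED_VPC_PROJECT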

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, perform the following actions in the Cloud Console (a gcloud equivalent is sketched after this list):

  1. In the GKE_CLUSTER_PROJECT project, delete the GKE cluster.
  2. In the SERVICE_PROJECT project, delete the Compute Engine instance.
  3. Detach the two service projects on the Shared VPC page of the Shared VPC project.
  4. Disable the Shared VPC network on the Shared VPC page of the Shared VPC project.
  5. Shut down the three projects using the Resource Manager.
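
If you prefer the gcloud CLI, the following sketch is a rough equivalent of the console steps above; it also removes the Memorystore instance created earlier. Shutting down a project schedules everything in it for deletion, so the last three commands are sufficient on their own if you created dedicated projects for this tutorial:

    gcloud container clusters delete gke-cluster --zone $GCP_REGION-a --project $GKE_CLUSTER_PROJECT

    gcloud compute instances delete my-instance --zone $GCP_REGION-a --project $SERVICE_PROJECT

    gcloud redis instances delete my-redis --region $GCP_REGION --project $SERVICE_PROJECT

    gcloud compute shared-vpc associated-projects remove $GKE_CLUSTER_PROJECT --host-project $SHARED_VPC_PROJECT

    gcloud compute shared-vpc associated-projects remove $SERVICE_PROJECT --host-project $SHARED_VPC_PROJECT

    gcloud compute shared-vpc disable $SHARED_VPC_PROJECT

    gcloud projects delete $GKE_CLUSTER_PROJECT
    gcloud projects delete $SERVICE_PROJECT
    gcloud projects delete $SHARED_VPC_PROJECT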
