Using Google Cloud services from Google Kubernetes Engine


This document shows you how to use Google Cloud services from Google Kubernetes Engine (GKE). When you use Google Cloud services such as Cloud Storage or Cloud SQL from apps that run in GKE, you must configure your environment for the services that you use. This document explains common architectural patterns and their associated tasks, and provides links to documentation that explains example configurations.

Objectives


  • Configure Workload Identity to use Google Cloud services.
  • Configure the Cloud SQL Proxy Docker image to use a Cloud SQL database.
  • Configure internal load balancing to use custom services that run on Compute Engine VMs.
  • Configure a NAT gateway to use external services that require a fixed IP address.
  • Use Cloud Logging to record application logs.

Understanding common tasks

The following diagram shows common architectural patterns of using other services from GKE.

Diagram of common architectural patterns for using Google Cloud services from GKE

You can configure these patterns by completing the following tasks:

  • To use Google Cloud services such as Cloud Storage through the Cloud APIs, you configure Workload Identity.
  • To use Cloud SQL, you assign an appropriate role to the service account, and add the Cloud SQL Proxy to your pod by using the sidecar pod pattern.
  • To use custom services that run on Compute Engine VMs in a scalable manner, you configure internal load balancing.
  • To use external services that require a fixed IP address, you configure a NAT gateway.
  • To record application logs in Logging, you have your app write log messages to standard output (stdout) and standard error (stderr).

The following sections have links to configuration steps.

Using Google Cloud services through Cloud APIs

You can use Google Cloud services through the Cloud APIs by configuring Workload Identity. Workload Identity is the recommended way to access Google Cloud services from applications that run in GKE because of its improved security properties and manageability. With Workload Identity, you map a Kubernetes service account in a namespace to a Google service account; Pods that run as the Kubernetes service account automatically authenticate as the Google service account when they call the Cloud APIs. For more information about Workload Identity, see Using Workload Identity.
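The mapping between a Kubernetes service account and a Google service account can be sketched with gcloud and kubectl as follows. This is a minimal example, not a complete setup: the project, namespace, and account names (`my-project`, `app-ns`, `app-ksa`, `app-gsa`) and the Cloud Storage role are illustrative placeholders.

```shell
# Create a Google service account (GSA) and grant it access to a
# Google Cloud service, here Cloud Storage as an example.
gcloud iam service-accounts create app-gsa --project=my-project

gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:app-gsa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

# Allow the Kubernetes service account (KSA) to impersonate the GSA.
gcloud iam service-accounts add-iam-policy-binding \
    app-gsa@my-project.iam.gserviceaccount.com \
    --member="serviceAccount:my-project.svc.id.goog[app-ns/app-ksa]" \
    --role="roles/iam.workloadIdentityUser"

# Annotate the KSA so that Pods running as it authenticate as the GSA.
kubectl annotate serviceaccount app-ksa --namespace app-ns \
    iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com
```

After the annotation is in place, any Pod whose spec sets `serviceAccountName: app-ksa` obtains Google service account credentials without a key file.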

One exception to this approach is Cloud SQL. That service requires a different approach, because Cloud SQL requires the Cloud SQL Proxy client to securely access a database, as explained in the next section.

Using Cloud SQL with Cloud SQL Proxy

To access a Cloud SQL instance from an app that runs in GKE, you use the Cloud SQL Proxy Docker image. You attach the image to your application pod so that your app can use the Cloud SQL Proxy client in the same pod. The Cloud SQL Proxy client securely transfers the data between your app and the Cloud SQL instance, as shown in the following diagram:

Diagram showing an application communicating with Cloud SQL Proxy in a container, which in turn uses a secure connection to communicate with Cloud SQL

For information about how to attach the Cloud SQL Proxy image to your application pod, see Cloud SQL: Connecting from GKE.
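As a sketch of the sidecar pattern described above, the Pod spec below runs the Cloud SQL Proxy image next to the application container, so the app reaches the database over localhost inside the Pod. The app image name, project, region, instance name, proxy image tag, and port are illustrative placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cloudsql
spec:
  containers:
  - name: app
    image: gcr.io/my-project/my-app   # placeholder application image
    env:
    # The app connects to the proxy over localhost inside the Pod.
    - name: DB_HOST
      value: "127.0.0.1"
    - name: DB_PORT
      value: "5432"
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.33.2   # Cloud SQL Proxy sidecar
    command: ["/cloud_sql_proxy",
              "-instances=my-project:us-central1:my-instance=tcp:5432"]
```

The proxy opens the local TCP port and forwards traffic over an encrypted connection to the Cloud SQL instance, so the app needs no TLS configuration of its own.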

Using external services through internal load balancing

To access external services from an app that runs in GKE, you use internal or external name resolution so that the app can discover the service endpoint.

If the external service runs on Compute Engine instances, you might want to use internal load balancing to make the external service redundant and scalable. The following diagram illustrates this approach.

Diagram showing an application in a container communicating with Cloud Load Balancing, which in turn communicates with multiple instances of Compute Engine that run different services

For information on how to set up internal load balancing for backend services that run on Compute Engine instances, see Setting Up Internal Load Balancing.
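The internal load balancing setup can be sketched with gcloud roughly as follows. This is a hedged outline, assuming the Compute Engine VMs are already grouped in an instance group; the names, region, zone, and port (`my-ilb-hc`, `my-service-ig`, `us-central1`, `8080`, and so on) are placeholders.

```shell
# Health check that the load balancer uses to probe the backend VMs.
gcloud compute health-checks create tcp my-ilb-hc --port=8080

# Internal backend service that groups the Compute Engine instances.
gcloud compute backend-services create my-ilb-backend \
    --load-balancing-scheme=internal \
    --protocol=tcp \
    --health-checks=my-ilb-hc \
    --region=us-central1

gcloud compute backend-services add-backend my-ilb-backend \
    --instance-group=my-service-ig \
    --instance-group-zone=us-central1-a \
    --region=us-central1

# Forwarding rule: the internal IP address that GKE Pods connect to.
gcloud compute forwarding-rules create my-ilb-fr \
    --load-balancing-scheme=internal \
    --ports=8080 \
    --backend-service=my-ilb-backend \
    --region=us-central1
```

Pods in the cluster then address the service through the forwarding rule's internal IP address, and the load balancer distributes traffic across the healthy VMs.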

Using external services through a NAT gateway

Egress packets from apps that run in GKE are sent by the VM nodes that host the application Pods. The VM nodes have ephemeral IP addresses that are used as the source IP addresses of egress packets. Because of this, the source IP address of the app's traffic can change depending on which VM node sends the packets. As a result, external services receive packets from multiple source IP addresses even though the packets are sent from the same app. Under normal circumstances, this is not a problem. However, some external services are configured to accept packets from only a single source, so you might need to send packets from a fixed IP address.

In this scenario, you can use a Compute Engine instance that works as a NAT gateway. By creating custom routing rules for the NAT gateway, you can send packets to external services with a fixed IP address, as shown in the following diagram:

Diagram showing GKE using custom routing to communicate with a NAT gateway that's in front of external services

For more information, see Using a NAT Gateway with GKE and Cloud NAT.

You can apply the same architecture when you use a private cluster where VM nodes have only private IP addresses. In that case, the NAT gateway receives packets from a private subnet and transfers them to external services using a single public IP address.
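The custom-route part of this pattern can be sketched with gcloud as follows. This is an illustrative fragment only: it assumes a NAT gateway VM named `nat-gateway` already exists with IP forwarding and masquerading configured, the GKE nodes carry the network tag `my-gke-node`, and `203.0.113.10` stands in for the external service's address.

```shell
# Reserve a static external IP address for the NAT gateway VM
# (gateway creation and masquerade configuration not shown).
gcloud compute addresses create nat-ip --region=us-central1

# Route traffic from tagged GKE nodes through the NAT gateway instance.
gcloud compute routes create gke-via-nat \
    --destination-range=203.0.113.10/32 \
    --next-hop-instance=nat-gateway \
    --next-hop-instance-zone=us-central1-a \
    --tags=my-gke-node \
    --priority=800
```

Because the route applies only to instances with the matching tag, other traffic in the network keeps its default route, while the tagged nodes reach the external service through the gateway's fixed IP address.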

Monitoring clusters using Cloud Operations for GKE

Cloud Operations for GKE is designed to monitor Google Kubernetes Engine clusters. It manages Cloud Monitoring and Cloud Logging together, and it features a console that provides a dashboard customized for GKE clusters.
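As noted earlier, to record application logs in Logging, your app writes to standard output and standard error. As a minimal sketch, the shell function below stands in for hypothetical application code; on GKE, both streams are collected automatically, and lines on stderr are typically flagged at a higher severity. The function name and messages are illustrative.

```shell
# Hypothetical app code: write normal logs to stdout and errors to stderr.
log_request() {
  echo "handling request /healthz"          # stdout: collected by Cloud Logging
  echo "backend unreachable, retrying" >&2  # stderr: typically marked as an error
}

log_request
```

No logging library or agent configuration is needed in the app itself; the node's logging agent picks up the container's output streams.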

For more information about Cloud Operations for GKE, see the overview of Cloud Operations for GKE.

What's next