Using Google Cloud Platform Services from Google Kubernetes Engine

This document shows you how to use Google Cloud Platform (GCP) services from Google Kubernetes Engine (GKE). When you use GCP services such as Cloud Storage or Cloud SQL from apps that run in GKE, you must configure your environment for the services that you use. This document explains common architectural patterns and their associated tasks, and provides links to documentation that explains example configurations.

Objectives

  • Configure a service account and a secret key to use GCP services.
  • Configure the Cloud SQL Proxy Docker image to use a Cloud SQL database.
  • Configure internal load balancing to use custom services that run on Compute Engine VMs.
  • Configure a NAT gateway to use external services that require a fixed IP address.
  • Use Stackdriver Logging to record application logs.

Understanding common tasks

The following diagram shows common architectural patterns for using GCP and external services from GKE.

Diagram of common architectural patterns for using GCP services from GKE

You can configure these patterns by completing the following tasks:

  • To use GCP services such as Cloud Storage through the Cloud APIs, you assign an appropriate role to the service account and provide the associated secret key to your app by using the Kubernetes secret object.
  • To use Cloud SQL, you assign an appropriate role to the service account, and add the Cloud SQL Proxy to your pod by using the sidecar pod pattern.
  • To use custom services that run on Compute Engine VMs in a scalable manner, you configure internal load balancing.
  • To use external services that require a fixed IP address, you configure a NAT gateway.
  • To record application logs in Logging, you have your app write log messages to standard output (stdout) and standard error (stderr).

The following sections have links to configuration steps.

Using GCP services through Cloud APIs

You can use GCP services through Cloud APIs by using a service account and a secret key. Kubernetes offers the secret resource type to store credentials inside the cluster and attach them to application pods, as shown in the following diagram:

Diagram showing a secret key in a Kubernetes "secret" object, accessed by multiple pods

For an example that shows how to use Cloud Pub/Sub from apps that run in GKE, see Authenticating to Cloud Platform with Service Accounts. You can apply the same steps for other GCP services such as Cloud Storage, BigQuery, Cloud Datastore, and Cloud Spanner. However, you must choose an appropriate role for the service account and service, and you might need to perform steps that are specific to each service.
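As a sketch of these steps, the following commands create a service account, grant it a role, download a secret key, and store the key in the cluster as a Kubernetes secret. The project ID, service account name, role, and secret name are placeholders; substitute values for your own environment and service.

```shell
# Create a service account (the project ID and account name are placeholders).
gcloud iam service-accounts create my-gke-app --project my-project

# Grant the service account a role appropriate for the service you use
# (here, read access to Cloud Storage objects).
gcloud projects add-iam-policy-binding my-project \
    --member serviceAccount:my-gke-app@my-project.iam.gserviceaccount.com \
    --role roles/storage.objectViewer

# Download a secret key for the service account.
gcloud iam service-accounts keys create key.json \
    --iam-account my-gke-app@my-project.iam.gserviceaccount.com

# Store the key in the cluster as a Kubernetes secret.
kubectl create secret generic app-sa-key --from-file=key.json
```

You can then mount the secret into your application pod as a volume and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at the mounted key file so that client libraries pick it up automatically.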

One exception to this approach is Cloud SQL. That service requires a different approach because you must use the Cloud SQL Proxy client to securely access a database, as explained in the next section.

Using Cloud SQL with Cloud SQL Proxy

To access a Cloud SQL instance from an app that runs in GKE, you use the Cloud SQL Proxy Docker image. You attach the image to your application pod so that your app can use the Cloud SQL Proxy client in the same pod. The Cloud SQL Proxy client securely transfers the data between your app and the Cloud SQL instance, as shown in the following diagram:

Diagram showing an application communicating with Cloud SQL Proxy in a container, which in turn uses a secure connection to communicate with Cloud SQL

For information about how to attach the Cloud SQL Proxy image to your application pod, see Cloud SQL: Connecting from GKE.
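The following manifest is a minimal sketch of the sidecar pattern described above. The project, region, instance name, image names, and the secret that holds the service account key are placeholders; the app connects to the proxy on localhost.

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app
        # The app connects to the database at 127.0.0.1:3306.
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:us-central1:my-instance=tcp:3306",
                  "-credential_file=/secrets/cloudsql/key.json"]
        volumeMounts:
        - name: cloudsql-sa-key
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-sa-key
        secret:
          secretName: cloudsql-sa-key
EOF
```

Because the proxy runs in the same pod, the app reaches the database through the pod-local network without exposing the database connection outside the pod.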

Using external services through internal load balancing

To access external services from an app that runs in GKE, you use internal or external name services so that the app can discover the service endpoint. For an explanation of three ways to configure name services, see Connecting from inside a cluster to external services.
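One of those approaches can be sketched as follows: a Kubernetes Service of type ExternalName gives an external endpoint a stable in-cluster DNS name. The service name and hostname are placeholders.

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
EOF
```

Pods in the cluster can then address the external endpoint as external-db, and you can repoint the name later without changing application code.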

If the external service runs on Compute Engine instances, you might want to use internal load balancing to make the external service redundant and scalable. The following diagram illustrates this approach.

Diagram showing an application in a container communicating with Cloud Load Balancing, which in turn communicates with multiple instances of Compute Engine that run different services

For information on how to set up internal load balancing for backend services that run on Compute Engine instances, see Setting Up Internal Load Balancing.
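The setup can be sketched with the following commands, which put an internal TCP load balancer in front of an existing managed instance group. The names, region, zone, and port are placeholders for your own environment.

```shell
# Health check used to decide which backends receive traffic.
gcloud compute health-checks create tcp my-health-check --port 8080

# Regional backend service with the internal load-balancing scheme.
gcloud compute backend-services create my-internal-lb \
    --load-balancing-scheme internal \
    --protocol tcp \
    --health-checks my-health-check \
    --region us-central1

# Attach the instance group that runs the custom service.
gcloud compute backend-services add-backend my-internal-lb \
    --instance-group my-instance-group \
    --instance-group-zone us-central1-a \
    --region us-central1

# Internal forwarding rule; its IP address is what GKE pods connect to.
gcloud compute forwarding-rules create my-internal-lb-rule \
    --load-balancing-scheme internal \
    --ports 8080 \
    --network default \
    --backend-service my-internal-lb \
    --region us-central1
```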

Using external services through a NAT gateway

Egress packets from apps that run in GKE are sent by the VM nodes that host the application pods. Each VM node has an ephemeral IP address that is used as the source IP address of its egress packets, so the source IP address of traffic from an app can change depending on which VM node sends the packets. As a result, external services receive packets from multiple source IP addresses even though the packets come from the same app. Under normal circumstances, this is not a problem. However, some external services are configured to accept packets from only a single source, in which case you must send packets from a fixed IP address.

In this scenario, you can use a Compute Engine instance that works as a NAT gateway. By creating custom routing rules for the NAT gateway, you can send packets to external services with a fixed IP address, as shown in the following diagram:

Diagram showing GKE using custom routing to communicate with a NAT gateway that's in front of external services

For more information, see Using a NAT Gateway with GKE and Build high availability and high bandwidth NAT gateways. These articles explain how to deploy the NAT gateway instance and create custom routing rules.

You can apply the same architecture when you use a private cluster where VM nodes have only private IP addresses. In that case, the NAT gateway receives packets from a private subnet and transfers them to external services using a single public IP address.
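As a rough sketch of the routing side of this setup, the following commands create a gateway instance that can forward packets and a custom route that sends external traffic from tagged GKE nodes through it. The names, zone, network, tag, and priority are placeholders; the gateway instance itself must also be configured to masquerade traffic (for example, with iptables), as the linked articles describe.

```shell
# The NAT gateway instance must be allowed to forward packets.
gcloud compute instances create nat-gateway \
    --zone us-central1-a \
    --can-ip-forward \
    --tags nat-gateway

# Route all external traffic from instances tagged "use-nat"
# through the gateway instance.
gcloud compute routes create nat-route \
    --network default \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance nat-gateway \
    --next-hop-instance-zone us-central1-a \
    --tags use-nat \
    --priority 800
```

Applying the use-nat tag to the GKE node pool's instances causes their egress traffic to leave through the gateway's single IP address.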

Sending application logs to Stackdriver Logging

Stackdriver Logging is enabled by default on GKE, so GKE automatically collects, processes, and stores your container and system logs in a dedicated, persistent datastore. GKE deploys a per-node logging agent that reads the log messages that pods write to stdout and stderr, as shown in the following diagram:

Diagram showing multiple application pods writing to "stdout" and "stderr", whose contents are written to a logging agent and then to Logging

Therefore, your apps must write log messages to stdout and stderr. For information on how to use Logging to collect, query, and analyze logs from an app that runs in GKE, see GKE - Logging.
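The flow can be demonstrated with a minimal pod that writes to stdout; the pod name and query are illustrative placeholders.

```shell
# A pod that periodically writes a log line to stdout; the per-node
# logging agent picks it up and forwards it to Logging.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: logging-demo
spec:
  containers:
  - name: logging-demo
    image: busybox
    args: [sh, -c, 'while true; do echo "hello from logging-demo"; sleep 60; done']
EOF

# The same lines are visible locally with kubectl...
kubectl logs logging-demo

# ...and in Logging, filtered by the pod name.
gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.pod_name="logging-demo"' \
    --limit 5
```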
