Service architecture

Last reviewed 2024-04-19 UTC

A Kubernetes service is an abstraction that lets you expose a set of pods as a single entity. Services are fundamental building blocks for exposing and managing containerized applications in a Kubernetes cluster. Services in this blueprint follow a standardized architecture with respect to namespaces, identity, service exposure, and service-to-service communication.

Namespaces

Each namespace has its own set of resources, such as pods, services, and deployments. Namespaces let you organize your applications and isolate them from each other. The blueprint uses namespaces to group services by their purpose. For example, you can create a namespace for all your frontend services and a namespace for your backend services. This grouping makes it easier to manage your services and to control access to them.
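For example, the following manifest is a minimal sketch of that grouping; the namespace names, labels, and the checkout service are illustrative placeholders rather than resources defined by the blueprint:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: frontend            # groups user-facing services
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: backend             # groups internal services
    ---
    # A service in the frontend namespace is reachable inside the cluster as
    # checkout.frontend.svc.cluster.local, so the namespace is part of the
    # service's address and of any access-control rules that target it.
    apiVersion: v1
    kind: Service
    metadata:
      name: checkout
      namespace: frontend
    spec:
      selector:
        app: checkout
      ports:
      - port: 80
        targetPort: 8080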

Service exposure

A service is exposed to the internet through the GKE Gateway controller. The GKE Gateway controller creates a load balancer by using Cloud Load Balancing in a multi-cluster, multi-region configuration. Cloud Load Balancing uses Google's network infrastructure to provide the service with an anycast IP address that enables low-latency access to the service. Clients access the service over HTTPS, and client HTTP requests are redirected to HTTPS. The load balancer uses Certificate Manager to manage public certificates. Services are further protected by Google Cloud Armor, and Cloud CDN caches content closer to users. The following diagram shows how services are exposed to the internet.

Blueprint services that are exposed to the internet.
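As an illustration of this pattern, the following Gateway manifest is a sketch of a multi-cluster external gateway; the resource name, namespace, and certificate map are assumed placeholders, not values taken from the blueprint:

    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: external-gateway
      namespace: frontend
      annotations:
        # Points the load balancer at a Certificate Manager certificate map
        # that holds the public TLS certificates.
        networking.gke.io/certmap: public-cert-map
    spec:
      # GKE gateway class for a global external Application Load Balancer that
      # can route to services in multiple clusters.
      gatewayClassName: gke-l7-global-external-managed-mc
      listeners:
      - name: https
        protocol: HTTPS
        port: 443

With the Gateway API, the routing rules and the HTTP-to-HTTPS redirect are typically expressed in HTTPRoute resources that attach to this Gateway.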

Cloud Service Mesh

The blueprint uses Cloud Service Mesh for mutual authentication and authorization for all communications between services. For this deployment, Cloud Service Mesh uses Certificate Authority Service (CA Service) to issue the TLS certificates that authenticate peers and to help ensure that only authorized clients can access a service. Using mutual TLS (mTLS) for authentication also helps ensure that all TCP communications between services are encrypted in transit. For ingress traffic into the service mesh, the blueprint uses the GKE Gateway controller.
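As a sketch of how this enforcement can be expressed, a mesh-wide PeerAuthentication policy can require mTLS for every workload; the policy name is a placeholder, and the istio-system root namespace is an assumption about how the mesh is installed:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: mesh-wide-mtls
      namespace: istio-system   # the mesh root namespace makes the policy mesh-wide
    spec:
      mtls:
        # STRICT rejects plain-text connections between workloads in the mesh.
        mode: STRICT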

Distributed services

A distributed service is an abstraction of a Kubernetes service that runs in the same namespace across multiple clusters. A distributed service remains available even if one or more GKE clusters are unavailable, as long as the remaining healthy clusters can serve the load. To create a distributed service across clusters, Cloud Service Mesh provides Layer 4 and Layer 7 connectivity between an application's services on all clusters in the environment. This connectivity enables the Kubernetes services on multiple clusters to act as a single logical service. Traffic between clusters is routed to another region only if traffic can't be served within the region, such as during a regional failure.
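A minimal sketch of the pattern: the same Service manifest is applied, unchanged, to every cluster, and the mesh merges the endpoints from all clusters behind one logical service. The name, namespace, and ports below are illustrative placeholders:

    # Applied to each GKE cluster, for example with
    # kubectl apply --context CLUSTER_CONTEXT -f orders-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: orders
      namespace: backend        # same name and namespace in every cluster
    spec:
      selector:
        app: orders
      ports:
      - name: http
        port: 80
        targetPort: 8080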

Service identity

Services running on GKE have identities associated with them. The blueprint configures Workload Identity Federation for GKE to let a Kubernetes service account act as a Google Cloud service account. Each instance of a distributed service within the same environment has a common identity, which simplifies permission management. When services that run as the Kubernetes service account access Google Cloud APIs, they automatically authenticate as the Google Cloud service account. Each service has only the minimal permissions necessary for the service to operate.
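The following sketch shows one common way this binding is expressed; the service account names, namespace, and PROJECT_ID are placeholders rather than blueprint values:

    # Kubernetes service account that acts as a Google Cloud service account.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: orders-ksa
      namespace: backend
      annotations:
        # The Google Cloud service account to impersonate when calling Google Cloud APIs.
        iam.gke.io/gcp-service-account: orders-gsa@PROJECT_ID.iam.gserviceaccount.com

For the impersonation to work, the Google Cloud service account must also grant the roles/iam.workloadIdentityUser role to the Kubernetes service account principal (serviceAccount:PROJECT_ID.svc.id.goog[backend/orders-ksa]).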
