Service security use cases
This document describes common Cloud Service Mesh security use cases. Use this information to help you determine which security model best suits your needs. This document also provides a high-level overview of what you need to configure for each use case.
For an overview of service security, see Cloud Service Mesh service security.
Enabling mutual TLS for services in the mesh
In a service mesh, you can enable mutual TLS (mTLS) so that both the client and the server in a communication must prove their identities and encrypt communications.
The following section omits discussion of the Mesh, Gateway, and Route resources. These API resources are required to create your mesh and route traffic, but you don't need to update them to enable mTLS.
You can achieve the preceding pattern by configuring the following Compute Engine API resources. The diagrams in this section show sidecar proxies, but configuring a proxyless gRPC application with mTLS uses the same resources.
To create this model, do the following (a sketch of the client and server TLS policies follows this list):
- Create a client transport layer security (TLS) policy.
- Create a server TLS policy.
- Update the securitySettings field in your existing global backend services to reference the new client TLS policy.
- Create an endpoint policy:
  - Reference the server TLS policy in the server_tls_policy field.
  - Define an EndpointMatcher to select the Cloud Service Mesh clients that should enforce authentication on inbound traffic. Selection is based on labels specified in the Cloud Service Mesh client's bootstrap configuration. These labels can be supplied manually or populated automatically based on labels supplied to your Google Kubernetes Engine (GKE) deployments. In the preceding diagram, the labels "mesh-service":"true" are configured on the endpoint policy and the Cloud Service Mesh clients. You can choose labels that suit your deployment.
  - Optionally, define a TrafficPortSelector that applies the policies only when inbound requests are made to the specified port on the data plane entity.
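As an illustration, the client and server TLS policies might resemble the following sketch. This is a minimal sketch, not a definitive configuration: the policy names are placeholders, and it assumes that workload certificates are issued through a certificate provider instance named google_cloud_private_spiffe, as in a typical Certificate Authority Service setup.

```yaml
# client_tls_policy.yaml (sketch): the client presents a workload certificate
# and validates the server certificate against the same certificate provider.
name: client_mtls_policy
clientCertificate:
  certificateProviderInstance:
    pluginInstance: google_cloud_private_spiffe
serverValidationCa:
- certificateProviderInstance:
    pluginInstance: google_cloud_private_spiffe
---
# server_tls_policy.yaml (sketch): the server presents a certificate and,
# because mtlsPolicy is set, requires and validates client certificates.
name: server_mtls_policy
serverCertificate:
  certificateProviderInstance:
    pluginInstance: google_cloud_private_spiffe
mtlsPolicy:
  clientValidationCa:
  - certificateProviderInstance:
      pluginInstance: google_cloud_private_spiffe
```

You would typically import definitions like these with the gcloud network-security client-tls-policies import and gcloud network-security server-tls-policies import commands; see the setup guides listed in What's next for the exact steps.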
The following diagram shows the Compute Engine resources that you configure for mTLS, regardless of whether you use Envoy or a proxyless gRPC application.
The following diagram shows traffic flow and lists the Compute Engine API resources that you configure to enable mTLS. The local sidecar proxy that sits alongside Service B's GKE Pod is the endpoint in the communication.
The endpoint policy does the following:
- Selects a set of endpoints by using an endpoint matcher and, optionally, ports on those endpoints.
The endpoint matcher lets you specify rules that determine whether a Cloud Service Mesh client receives the configuration. These rules are based on the xDS metadata that the data plane entity provides to the control plane—in this case, Cloud Service Mesh.
You can add labels to the Cloud Service Mesh client as follows:
- You can manually specify this metadata in your Cloud Service Mesh client's bootstrap file (see the bootstrap excerpt after this list).
- Alternatively, the metadata can be populated automatically when you use GKE by adding the key-value pairs to the env section of the demo_server.yaml or demo_client.yaml files. These values are provided in the Envoy setup guide and the proxyless gRPC setup guide. For example, with Envoy only, you can prefix the key with ISTIO_META_. Proxy environment variable names that start with ISTIO_META_ are included in the generated bootstrap and sent to the xDS server:

```yaml
- name: ISTIO_META_app
  value: 'review'
- name: ISTIO_META_version
  value: 'canary'
```
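If you maintain the bootstrap file manually instead, the labels are supplied as Envoy node metadata. The following excerpt is a sketch that assumes an Envoy bootstrap file and reuses the label from the earlier example.

```yaml
# Sketch of a manually maintained Envoy bootstrap excerpt. The key-value pairs
# under node.metadata become the xDS metadata that the endpoint matcher evaluates.
node:
  metadata:
    mesh-service: "true"
```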
If you specify a port, the policies referenced in the endpoint policy are enforced only on inbound requests that specify the same port. If you don't specify a port, the policies are enforced on inbound requests that specify a port that is also in the TRAFFICDIRECTOR_INBOUND_BACKEND_PORTS field, which is provided to the Cloud Service Mesh client in its bootstrap information.
- References client TLS, server TLS, and authorization policies that configure the endpoints to which requests resolve (see the endpoint policy sketch that follows).
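Putting these pieces together, an endpoint policy for the mTLS example might look like the following sketch. The resource name, project ID, and port are placeholders; the field names follow the REST representation of the EndpointPolicy resource, and the label matches the mesh-service example used earlier.

```yaml
# ep-mtls.yaml (sketch): selects sidecar proxies that carry the mesh-service
# label and applies the server TLS policy to inbound traffic on port 8080.
name: ep-mtls
type: SIDECAR_PROXY
serverTlsPolicy: projects/PROJECT_ID/locations/global/serverTlsPolicies/server_mtls_policy
trafficPortSelector:
  ports:
  - "8080"
endpointMatcher:
  metadataLabelMatcher:
    metadataLabelMatchCriteria: MATCH_ALL
    metadataLabels:
    - labelName: mesh-service
      labelValue: "true"
```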
Configuring incompatible TLS modes might result in a disruption of communications. For example, setting OPEN on the global backend service or leaving the client TLS policy field empty, and setting MTLS as the value of the server TLS policy on the endpoint policy, results in failed communication attempts. This is because endpoints that are configured to accept only mTLS reject attempts to establish unauthenticated communication channels.
Note the distinction between a client TLS policy, which is attached to a global backend service, and a server TLS policy, which is attached to an endpoint policy (an example of the backend service side follows this list):
- The client TLS policy is applied to the global backend service. It tells the Envoy proxy or proxyless client which TLS mode, identity, and peer validation approach to use when addressing the service.
- The server TLS policy is attached to the endpoint policy. It tells the server which TLS mode, identity, and peer validation approach to use for incoming connections.
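For example, the client side of this pairing appears in the securitySettings field of the global backend service. The following excerpt is a sketch; the backend service name, project ID, and the SPIFFE identity in subjectAltNames are placeholders that you replace with values from your own deployment.

```yaml
# Excerpt (sketch) of a global backend service that references a client TLS
# policy and pins the expected server identity through subjectAltNames.
name: service-b-backend-service
loadBalancingScheme: INTERNAL_SELF_MANAGED
protocol: HTTP2
securitySettings:
  clientTlsPolicy: projects/PROJECT_ID/locations/global/clientTlsPolicies/client_mtls_policy
  subjectAltNames:
  - "spiffe://PROJECT_ID.svc.id.goog/ns/default/sa/service-b-sa"
```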
Enabling TLS for an ingress gateway
After you set up mTLS for in-mesh communications, you might want to secure traffic that is entering your mesh, known as ingress traffic. Cloud Service Mesh can configure your data plane to require ingress traffic to use TLS-encrypted communications channels.
To achieve this goal, choose one of the following architecture options:
- Services in the mesh terminate TLS for traffic from a load balancer. In this model, each service in the mesh is configured as a backend in the load balancer's configuration—specifically, in the load balancer's URL map.
- An ingress gateway terminates TLS for traffic from a load balancer before forwarding traffic to services in the mesh. In this model, a dedicated service in the mesh, the ingress gateway, is configured as a backend in the load balancer's configuration—specifically, in the load balancer's URL map.
Both options are explained in this section.
Services in the mesh terminate TLS for traffic from a load balancer
If you want to make your services available to clients outside of Google Cloud, you might use an external Application Load Balancer. Clients send traffic to the load balancer's global Anycast virtual IP address (VIP), which then forwards that traffic to services in your mesh. This means that there are two connections when an external client needs to reach a service in the mesh.
The same pattern applies when you use an internal Application Load Balancer. Traffic from internal clients first reaches the load balancer, which then establishes a connection to the backend.
To secure both connections, do the following:
1. Secure the connection between the client and the load balancer by using an external Application Load Balancer.
2. Configure the load balancer to use the HTTPS or HTTP/2 protocols when it attempts to establish a connection with services in the mesh.
3. Configure Cloud Service Mesh so that your Cloud Service Mesh clients terminate HTTPS and present certificates to the client, which, in this case, is the load balancer.
For more information about steps 1 and 2, see Setting up a multi-region, content-based external HTTPS load balancer.
When you set up Cloud Service Mesh security, you configure various Compute Engine API resources. These resources are separate from the resources that you configure for the load balancer. You create a set of Compute Engine API resources (global forwarding rule, target proxy, URL map, and global backend services) for the load balancer and configure Cloud Service Mesh with the service routing APIs. In addition, in the backend service resource, the load balancer uses the load balancing scheme INTERNAL_MANAGED and Cloud Service Mesh uses the load balancing scheme INTERNAL_SELF_MANAGED.
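To illustrate the separation, the two backend service resources might differ as in the following sketch; the names are placeholders, and only the fields relevant to the distinction are shown.

```yaml
# Backend service used by the load balancer (sketch; placeholder name).
name: lb-backend-service
loadBalancingScheme: INTERNAL_MANAGED
protocol: HTTP2
---
# Backend service used by Cloud Service Mesh (sketch; placeholder name).
name: mesh-backend-service
loadBalancingScheme: INTERNAL_SELF_MANAGED
protocol: HTTP2
```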
In step 3, you configure Cloud Service Mesh so that your Cloud Service Mesh clients terminate HTTPS and present certificates to clients.
In this model, you do the following:
- Create a serverTlsPolicy: configure serverCertificate on the serverTlsPolicy resource.
- Create an endpoint policy:
  - Reference the server TLS policy in the authentication field.
  - Define an EndpointMatcher to select the xDS data plane entities that should enforce authentication on inbound traffic.
  - Optionally, define a TrafficPortSelector that applies the policies only when inbound requests are made to the specified port on the Cloud Service Mesh client.
Because the external Application Load Balancer is already configured to initiate TLS connections to services in your mesh, Cloud Service Mesh only needs to configure your Cloud Service Mesh clients to terminate TLS connections.
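In contrast to the mTLS policy shown earlier, the server TLS policy for this use case only needs a server certificate, because the load balancer is not required to present a client certificate. The following is a minimal sketch; the policy name and certificate provider instance are placeholders.

```yaml
# server_tls_policy.yaml (sketch): TLS termination only. Because mtlsPolicy is
# omitted, clients such as the load balancer don't present certificates.
name: server_tls_policy
serverCertificate:
  certificateProviderInstance:
    pluginInstance: google_cloud_private_spiffe
```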
Ingress gateway terminates TLS for traffic from a load balancer before forwarding traffic to services in the mesh
If you only want to expose an ingress gateway to your load balancer, you can use the ingress gateway deployment pattern. In this pattern, the load balancer does not directly address services in your mesh. Instead, a middle proxy sits at the edge of your mesh and routes traffic to services inside the mesh, based on the configuration that it receives from Cloud Service Mesh. The middle proxy can be an Envoy proxy that you deployed on virtual machine (VM) instances in a Compute Engine managed instance group.
From a security perspective, you configure the ingress gateway to terminate TLS, and then optionally configure connections within your mesh so that they are protected by mTLS. These include connections between the ingress gateway and your in-mesh services, and connections among your in-mesh services.
From a configuration perspective, you do the following:
1. Configure your service mesh and enable mTLS for communications within the mesh (as explained earlier).
2. Configure your load balancer to route traffic to the ingress gateway and initiate connections by using the HTTPS protocol (as explained earlier).
3. Create a set of Compute Engine API resources that represent the ingress gateway and its server TLS policy.
For the third step, configure Cloud Service Mesh to terminate HTTPS and present certificates as follows (a sketch of the Gateway resource follows this list):
- Create a Mesh resource to represent the mesh.
- Create a Route resource that points to the correct global backend services, and attach the Route resource to the Mesh resource.
- Create a server TLS policy: configure serverCertificate.
- Create a Gateway resource to represent the Cloud Service Mesh-managed ingress gateway.
- Attach the server TLS policy resource to the Gateway resource.
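For example, the Gateway resource might resemble the following sketch. This assumes the networkservices Gateway REST representation; the gateway name, type, scope, and port shown are assumptions based on a typical setup, and the serverTlsPolicy reference points at the policy created in the previous step.

```yaml
# gateway.yaml (sketch): an ingress gateway definition with a server TLS policy
# attached. The Envoy proxies that you deploy as the gateway request their
# configuration by using the same scope value.
name: in-gateway
type: OPEN_MESH
scope: gateway-proxies
ports:
- 443
serverTlsPolicy: projects/PROJECT_ID/locations/global/serverTlsPolicies/server_tls_policy
```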
The ingress gateway pattern is especially useful in large organizations that use Shared VPC. In such a setting, a team might only allow access to its services through an ingress gateway. In the preceding diagram, when you configure the global forwarding rule for the load balancer, you supply a different IP address (in this example, 10.0.0.2) than the one supplied when you configure the mesh (in this example, the mesh address is 10.0.0.1). Clients that communicate through a Cloud Service Mesh-configured xDS data plane entity can use this address to access the ingress gateway.
As an example, assume the following:
- Two service projects (1 and 2), both attached to the same Shared VPC network.
- Service project 1 contains a service mesh configured by Cloud Service Mesh. Service project 1 has configured a mesh and an ingress gateway. This ingress gateway is reachable on the 10.0.0.2 address/VIP.
- Service project 2 contains a service mesh configured by Cloud Service Mesh. Service project 2 might or might not have its own ingress gateway.
- Cloud Service Mesh configures the Cloud Service Mesh clients in each service project. The clients are bootstrapped to use the same network.
Given this configuration, clients in service project 2's mesh can communicate with the ingress gateway in service project 1 by using the 10.0.0.2 VIP. This enables the owners of service project 1 to configure routing, security, and other policies that are specific to traffic entering the mesh. In effect, the owners of service project 1 can tell others, "Clients in your mesh can reach my services on 10.0.0.2."
Limitations
Cloud Service Mesh service security is supported only with GKE. You cannot deploy service security with Compute Engine.
What's next
- To configure Cloud Service Mesh service security with Envoy proxies, see Setting up Cloud Service Mesh service security with Envoy.
- To configure Cloud Service Mesh service security with proxyless gRPC applications, see Setting up Cloud Service Mesh service security with proxyless gRPC.