Service discovery

Cloud Service Mesh provides service and endpoint discovery. These features let you group the virtual machine (VM) instances and container instances that run your code as endpoints of your services.

Cloud Service Mesh monitors these services so that it can share up-to-date health check information with its clients. Therefore, when one of your applications uses its Cloud Service Mesh client (such as an Envoy sidecar proxy or a proxyless gRPC application) to send a request, it benefits from up-to-date information about your services.

In the context of Cloud Service Mesh, a client is application code, running on a VM or in a container, that formulates requests to send to a server. A server is application code that receives such requests. A Cloud Service Mesh client is an Envoy proxy, a gRPC library, or another xDS client that is connected to Cloud Service Mesh and is part of the data plane.

In the data plane, Envoy or gRPC does the following:

  1. It examines a request and matches the request to a backend service, a resource that you configure during deployment.
  2. After the request is matched, Envoy or gRPC applies any previously configured traffic or security policies, chooses a backend or endpoint, and directs the request to that backend or endpoint.
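
The backend service that a request is matched to is a regular global backend service that you create when you set up Cloud Service Mesh. The following is a minimal sketch of that step using the gcloud CLI; the names, port, and protocol are illustrative values (the payments name matches the example later in this section), and the exact flags depend on your deployment.

    # Create a health check that Cloud Service Mesh uses to monitor the
    # service's endpoints. The name and port are illustrative.
    gcloud compute health-checks create http payments-health-check \
        --port=8080

    # Create a global backend service for the mesh data plane.
    # INTERNAL_SELF_MANAGED is the load-balancing scheme typically used for
    # Cloud Service Mesh backend services.
    gcloud compute backend-services create payments \
        --global \
        --load-balancing-scheme=INTERNAL_SELF_MANAGED \
        --protocol=HTTP \
        --health-checks=payments-health-check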

Service discovery

Cloud Service Mesh provides service discovery. When you configure Cloud Service Mesh, you create backend services. You also define routing rules that specify how an outbound request (a request sent by your application code and handled by a Cloud Service Mesh client) is matched to a particular service. When a Cloud Service Mesh client handles a request that matches a rule, it can choose the service that should receive the request.

For example:

  • You have a VM running your application. This VM has an Envoy sidecar proxy that is connected to Cloud Service Mesh and handles outbound requests on behalf of the application.
  • You configured a backend service named payments. This backend service has two network endpoint group (NEG) backends that point to various container instances that run the code for your payments service.
  • You configured a Mesh resource defining a mesh called sidecar-mesh.
  • You configured a Route resource that sends traffic addressed to the hostname payments.example.com to the backend service payments.

With this configuration, when your application (on the VM) sends an HTTP request to payments.example.com, the Cloud Service Mesh client knows that this request is destined for the payments service.
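
As a rough sketch, the Mesh and Route resources in this example could be created with the service routing APIs as follows. The resource contents and names are illustrative, and PROJECT_ID is a placeholder for your project ID.

    # Create the Mesh resource named sidecar-mesh.
    cat <<EOF > mesh.yaml
    name: sidecar-mesh
    EOF
    gcloud network-services meshes import sidecar-mesh \
        --source=mesh.yaml --location=global

    # Create an HTTPRoute resource that sends traffic for payments.example.com
    # to the payments backend service.
    cat <<EOF > http_route.yaml
    name: payments-route
    hostnames:
    - payments.example.com
    meshes:
    - projects/PROJECT_ID/locations/global/meshes/sidecar-mesh
    rules:
    - action:
        destinations:
        - serviceName: projects/PROJECT_ID/locations/global/backendServices/payments
    EOF
    gcloud network-services http-routes import payments-route \
        --source=http_route.yaml --location=global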

When you use Cloud Service Mesh with proxyless gRPC services, service discovery works similarly. However, a gRPC library acting as a Cloud Service Mesh client only gets information about the services for which you specify an xDS resolver. By default, Envoy gets information about all services configured on the Virtual Private Cloud (VPC) network specified in the Envoy bootstrap file.
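
As a sketch of what this looks like from the application's side: the gRPC library reads an xDS bootstrap file to find Cloud Service Mesh, and the application opens its channel against an xds: target whose hostname matches a configured route rule. The file path, target name, and client binary below are illustrative; Google's td-grpc-bootstrap tool can generate the bootstrap file for you.

    # Point the gRPC library at its xDS bootstrap file (the path is
    # illustrative; the td-grpc-bootstrap tool can generate this file).
    export GRPC_XDS_BOOTSTRAP=/etc/xds/bootstrap.json

    # Start the gRPC application (payments-client is a hypothetical binary).
    # Inside it, channels are created against an xds: target that matches a
    # route rule hostname, for example:
    #   xds:///payments.example.com
    ./payments-client

Only the services that the application names through the xds resolver in this way are resolved by Cloud Service Mesh.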

Endpoint discovery

Service discovery enables clients to know about your services. Endpoint discovery enables clients to know about the instances that are running your code.

When you create a service, you also specify the backends for that service. These backends are either VMs in managed instance groups (MIGs) or containers in NEGs. Cloud Service Mesh monitors these MIGs and NEGs so that it knows when instances and endpoints are created and removed.
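
For example, a zonal NEG that contains your container endpoints might be attached to the payments backend service from the earlier sketch as follows. The NEG name, zone, and rate settings are illustrative; VM backends in MIGs are attached similarly with the --instance-group and --instance-group-zone flags.

    # Attach a zonal NEG whose endpoints run the payments containers.
    # Cloud Service Mesh then tracks endpoints as they are added to or
    # removed from this NEG.
    gcloud compute backend-services add-backend payments \
        --global \
        --network-endpoint-group=payments-neg \
        --network-endpoint-group-zone=us-central1-a \
        --balancing-mode=RATE \
        --max-rate-per-endpoint=100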

Cloud Service Mesh continuously shares up-to-date information about these services with its clients. This information enables clients to avoid sending traffic to endpoints that no longer exist. It also enables clients to learn about new endpoints and take advantage of the additional capacity that these endpoints provide.

Beyond adding instances to your MIGs or endpoints to your NEGs and setting up Cloud Service Mesh, you don't need any additional configuration to enable service and endpoint discovery.

Sidecar proxy traffic interception in Cloud Service Mesh

Cloud Service Mesh supports the sidecar proxy model. Under this model, when an application sends traffic to its destination, the traffic is intercepted and redirected to a port on the sidecar proxy on the host where the application is running. The sidecar proxy decides how to load balance the traffic and then sends it on to its destination.
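
As an illustration of what interception typically involves, the following sketch shows the kind of netfilter rule that redirects an application's outbound traffic to a local proxy listener. The port 15001 is an assumed value, and a real setup also excludes the proxy's own traffic (for example, by user ID) so that redirected traffic doesn't loop; as noted next, Cloud Service Mesh manages this for you.

    # Redirect outbound TCP traffic on this host to the sidecar proxy's
    # listener port (15001 is an assumed value). Real configurations also
    # exempt the proxy's own traffic to avoid a redirect loop.
    sudo iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports 15001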

With Cloud Service Mesh and the service routing APIs, traffic interception is managed automatically.

What's next