This page shows you how to deploy an Ingress that serves an application across multiple GKE clusters.
Deployment tutorial
In the tasks below, you will deploy a fictional app named zoneprinter and Multi-cluster Ingress in two clusters. The Ingress provides a shared virtual IP (VIP) address for the app deployments.
This page builds upon the work done in Setting up Multi-cluster Ingress, where you created and registered two clusters. At this point, you should have two clusters registered to an environ:
$ gcloud container clusters list
NAME    LOCATION        MASTER_VERSION  MASTER_IP  MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
gke-eu  europe-west1-c  1.16.8-gke.9    ***        e2-medium     1.16.8-gke.9   2          RUNNING
gke-us  us-central1-a   1.16.8-gke.9    ***        e2-medium     1.16.6-gke.13  2          RUNNING
Creating the Namespace
Because environs have the property of namespace sameness, we recommend that you coordinate Namespace creation and management across clusters so identical Namespaces are owned and managed by the same group. You can create Namespaces per team, per environment, per application, or per application component. Namespaces can be as granular as necessary, as long as a Namespace ns1 in one cluster has the same meaning and usage as ns1 in another cluster.

In this example, you create a zoneprinter Namespace for the app in each cluster.
Create namespace.yaml from the following manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: zoneprinter
Switch to the gke-us context:
kubectl config use-context gke-us
Create the Namespace:
kubectl apply -f namespace.yaml
Switch to the gke-eu context:
kubectl config use-context gke-eu
Create the Namespace:
kubectl apply -f namespace.yaml
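Optionally, you can confirm the Namespace now exists in each cluster (a quick check, not part of the original steps):
kubectl get namespace zoneprinter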
Deploying the app
Create deploy.yaml from the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-ingress
  namespace: zoneprinter
  labels:
    app: zoneprinter
spec:
  selector:
    matchLabels:
      app: zoneprinter
  template:
    metadata:
      labels:
        app: zoneprinter
    spec:
      containers:
      - name: frontend
        image: gcr.io/google-samples/zone-printer:0.2
        ports:
        - containerPort: 8080
Switch to the gke-us context:
kubectl config use-context gke-us
Deploy the zoneprinter app:
kubectl apply -f deploy.yaml
Switch to the gke-eu context:
kubectl config use-context gke-eu
Deploy the zoneprinter app:
kubectl apply -f deploy.yaml
Verify that the zoneprinter app has successfully deployed in each cluster:
kubectl get deployment --namespace zoneprinter
The output should be similar to this in both clusters:
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
zone-ingress   2/2     2            2           12m
Deploying through the config cluster
Now that the application is deployed across gke-us and gke-eu, you will deploy a load balancer by deploying MultiClusterIngress (MCI) and MultiClusterService (MCS) resources in the config cluster. MCI and MCS are custom resources (CRDs) that are the multi-cluster equivalents of Ingress and Service resources.
In the setup guide, you configured the gke-us cluster as the config cluster. The config cluster is used to deploy and configure Ingress across all clusters.
Set the context to the config cluster:
kubectl config use-context gke-us
MultiClusterService
Create a file named mcs.yaml from the following manifest:

apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: zone-mcs
  namespace: zoneprinter
spec:
  template:
    spec:
      selector:
        app: zoneprinter
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080
Deploy the MultiClusterService resource that matches the zoneprinter app:
kubectl apply -f mcs.yaml
Verify that the zone-mcs resource has successfully deployed in the config cluster:
kubectl get mcs -n zoneprinter
The output is similar to this:
NAME       AGE
zone-mcs   9m26s
This MCS creates a derived headless Service in every cluster that matches Pods with app: zoneprinter. You can see that one exists in the gke-us cluster:
kubectl get service -n zoneprinter
The output is similar to this:
NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
mci-zone-mcs-svc-lgq966x5mxwwvvum   ClusterIP   None         <none>        8080/TCP   4m59s
A similar headless Service also exists in gke-eu. These local Services are used to dynamically select Pod endpoints to program the global Ingress load balancer with backends.
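If you want to see this for yourself, you can switch to the gke-eu context and list Services the same way:
kubectl config use-context gke-eu
kubectl get service -n zoneprinter
Remember to switch back to the config cluster context (gke-us) before continuing.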
MultiClusterIngress
Create a file named mci.yaml from the following manifest:

apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: zone-ingress
  namespace: zoneprinter
spec:
  template:
    spec:
      backend:
        serviceName: zone-mcs
        servicePort: 8080
Note that this configuration routes all traffic to the MultiClusterService named zone-mcs that exists in the zoneprinter namespace.

Deploy the MultiClusterIngress resource that references zone-mcs as a backend:
kubectl apply -f mci.yaml
The output is similar to this:
multiclusteringress.networking.gke.io/zone-ingress created
Note that MultiClusterIngress has the same schema as the Kubernetes Ingress. The Ingress resource semantics are also the same, with the exception of the backend.serviceName field.

The backend.serviceName field in a MultiClusterIngress references a MultiClusterService in the environ API rather than a Service in a Kubernetes cluster. This means that any of the settings for Ingress, such as TLS termination, can be configured in the same way.
Validating a successful deployment status
New Google Cloud load balancers may take several minutes to deploy. Updating existing load balancers completes faster because new resources do not need to be deployed. The MCI resource details the underlying Compute Engine resources that have been created on behalf of the MCI.
Verify that deployment has succeeded:
kubectl describe mci zone-ingress -n zoneprinter
The output is similar to this:
Name:         zone-ingress
Namespace:    zoneprinter
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1","kind":"MultiClusterIngress","metadata":{"annotations":{},"name":"zone-ingress","namespace":"zon...
API Version:  networking.gke.io/v1
Kind:         MultiClusterIngress
Metadata:
  Creation Timestamp:  2020-04-10T23:35:10Z
  Finalizers:
    mci.finalizer.networking.gke.io
  Generation:        2
  Resource Version:  26458887
  Self Link:         /apis/networking.gke.io/v1/namespaces/zoneprinter/multiclusteringresses/zone-ingress
  UID:               62bec0a4-8a08-4cd8-86b2-d60bc2bda63d
Spec:
  Template:
    Spec:
      Backend:
        Service Name:  zone-mcs
        Service Port:  8080
Status:
  Cloud Resources:
    Backend Services:
      mci-8se3df-8080-zoneprinter-zone-mcs
    Firewalls:
      mci-8se3df-default-l7
    Forwarding Rules:
      mci-8se3df-fw-zoneprinter-zone-ingress
    Health Checks:
      mci-8se3df-8080-zoneprinter-zone-mcs
    Network Endpoint Groups:
      zones/europe-west1-c/networkEndpointGroups/k8s1-e4adffe6-zoneprint-mci-zone-mcs-svc-lgq966x5m-808-88670678
      zones/us-central1-a/networkEndpointGroups/k8s1-a6b112b6-zoneprint-mci-zone-mcs-svc-lgq966x5m-808-609ab6c6
    Target Proxies:
      mci-8se3df-zoneprinter-zone-ingress
    URL Map:  mci-8se3df-zoneprinter-zone-ingress
  VIP:        ingress-vip
Events:
  Type    Reason  Age                    From                              Message
  ----    ------  ----                   ----                              -------
  Normal  ADD     3m35s                  multi-cluster-ingress-controller  zoneprinter/zone-ingress
  Normal  UPDATE  3m10s (x2 over 3m34s)  multi-cluster-ingress-controller  zoneprinter/zone-ingress
There are several fields that indicate the status of this Ingress deployment:
- Events is the first place to look. If an error has occurred, it is listed here.
- Cloud Resources lists the Compute Engine resources like forwarding rules, backend services, and firewall rules that have been created by the Anthos Ingress Controller. If these are not listed, they have not been created yet. You can inspect individual Compute Engine resources with the Console or the gcloud command to get their status.
- VIP lists an IP address when one has been allocated. Note that the load balancer may not yet be processing traffic even though the VIP exists. If you do not see a VIP after a couple of minutes, or if the load balancer is not serving a 200 response within 10 minutes, see Troubleshooting and operations.
If the output events are Normal, then the MCI deployment is likely successful, but the only way to determine that the full traffic path is functional is to test it.
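For example, you can pull the VIP into a shell variable before testing. This is a sketch that assumes the status.VIP field layout shown in the describe output above:
VIP=$(kubectl get mci zone-ingress -n zoneprinter -o jsonpath="{.status.VIP}")
echo ${VIP}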
Validate that the application is serving on the VIP with the /ping endpoint:
curl ingress-vip/ping
The output is similar to this:
{"Hostname":"34.98.102.37","Version":"1.0","GCPZone":"us-central1-a","Backend":"zone-ingress-5554bb95b4-svv5d"}
The output should indicate the region and backend of the application.
You can also go to the http://ingress-vip URL in your browser to see a graphical version of the application that shows the region it's being served from. The cluster that the traffic is forwarded to depends on your location. The GCLB is designed to forward client traffic to the closest available backend with capacity.
Resource specs
MultiClusterService spec
The MultiClusterService definition consists of two pieces:
- A template section that defines the Service to be created in the Kubernetes clusters. Note that while the template section contains fields supported in a typical Service, only two fields are supported in a MultiClusterService: selector and ports. The other fields are ignored.
- An optional clusters section that defines which clusters receive traffic and the load balancing properties for each cluster. If no clusters section is specified, or if no clusters are listed, all clusters are used by default.
The following manifest describes a standard MCS:
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
name: name
namespace: namespace
spec:
template:
spec:
selector:
app: pod-label
ports:
- name: web
protocol: TCP
port: port
targetPort: target-port
where:
- name is the name of the MCS. This name is referenced by the serviceName field in the MCI resources.
- namespace is the Kubernetes Namespace that the MCS is deployed in. It must be in the same Namespace as the MCI and the Pods across all clusters in the environ.
- pod-label is the label that determines which pods are selected as backends for this MCS across all clusters in the environ.
- port must match the port referenced by the MCI that references this MCS.
- target-port is the port that is used to send traffic to the Pod from the GCLB. A NEG is created in each cluster with this port as its serving port.
MultiClusterIngress spec
The following mci.yaml describes the load balancer frontend:
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
name: name
namespace: namespace
spec:
template:
spec:
backend:
serviceName: default-service
servicePort: port
rules:
- host: host-header
http:
paths:
- path: path
backend:
serviceName: service
servicePort: port
where:
- name is the name of the MCI resource.
- namespace is the Kubernetes Namespace that the MCI is deployed in. It must be in the same Namespace as the MCS and the Pods across all clusters in the environ.
- default-service acts as the default backend for all traffic that does not match any host or path rules. This is a required field and a default backend must be specified in the MCI even if there are other host or path matches configured.
- port is any valid port number. This must match the port field of the MCS resources.
- host-header matches traffic by the HTTP host header field. The host field is optional.
- path matches traffic by the path of the HTTP URL. The path field is optional.
- service is the name of an MCS that is deployed in the same Namespace and config cluster as this MCI.
Multi-cluster Ingress features
This section shows you how to configure additional Multi-cluster Ingress features.
Cluster selection
By default, Services derived from Multi-cluster Ingress are scheduled on every member cluster. However, you may want to apply ingress rules to specific clusters. Some use-cases include:
- Applying Multi-cluster Ingress to all clusters but the config cluster for isolation of the config cluster.
- Migrating workloads between clusters in a blue-green fashion.
- Routing to application backends that only exist in a subset of clusters.
- Using a single L7 VIP for host/path routing to backends that live on different clusters.
Cluster selection allows you to select clusters by region/name in the MultiClusterService object. This controls which clusters your Multi-cluster Ingress is pointing to and where the derived Services are scheduled. Clusters within the same environ and region should not have the same name so that clusters can be referenced uniquely.
Open mcs.yaml:

apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: zone-mcs
  namespace: zoneprinter
spec:
  template:
    spec:
      selector:
        app: zoneprinter
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080
This specification currently creates Derived Services in all clusters, which is the default behavior.
Append a clusters section with the following lines:

apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: zone-mcs
  namespace: zoneprinter
spec:
  template:
    spec:
      selector:
        app: zoneprinter
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080
  clusters:
  - link: "us-central1-a/gke-us"
  - link: "europe-west1-c/gke-eu"
This example creates Derived Service resources only in the gke-us and gke-eu clusters. You must select clusters to selectively apply Ingress rules. If the clusters section of the MultiClusterService is not specified, or if no clusters are listed, it is interpreted as the default "all" clusters.
HTTPS support
Multi-cluster Ingress supports HTTPS through Kubernetes Secrets. Before enabling HTTPS support, you must create a static IP address. This static IP allows HTTP and HTTPS to share the same IP address. For more information, see Creating a static IP.
Once you have created a static IP address, you can create a Secret.
Create a Secret:
kubectl -n prod create secret tls secret-name --key /path/to/keyfile --cert /path/to/certfile
where secret-name is the name of your Secret.
Update the mci.yaml file with the static IP and Secret:

apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: zone-ingress
  namespace: zoneprinter
  annotations:
    networking.gke.io/static-ip: static-ip-address
spec:
  template:
    spec:
      backend:
        serviceName: zone-mcs
        servicePort: 8080
      tls:
      - secretName: secret-name
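Then re-apply the updated manifest the same way as before:
kubectl apply -f mci.yaml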
BackendConfig support
The BackendConfig CRD allows you to customize settings on the Compute Engine BackendService resource. The supported specification is below:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: zone-health-check-cfg
namespace: zoneprinter
spec:
healthCheck:
checkIntervalSec: [int]
timeoutSec: [int]
healthyThreshold: [int]
unhealthyThreshold: [int]
type: [HTTP | HTTPS | HTTP2 | TCP]
port: [int]
requestPath: [string]
timeoutSec: [int]
connectionDraining:
drainingTimeoutSec: [int]
sessionAffinity:
affinityType: [CLIENT_IP | CLIENT_IP_PORT_PROTO | CLIENT_IP_PROTO | GENERATED_COOKIE | HEADER_FIELD | HTTP_COOKIE | NONE]
affinityCookieTtlSec: [int]
cdn:
enabled: [bool]
cachePolicy:
includeHost: [bool]
includeQueryString: [bool]
includeProtocol: [bool]
queryStringBlacklist: [string list]
queryStringWhitelist: [string list]
securityPolicy:
name: ca-how-to-security-policy
logging:
enable: [bool]
sampleRate: [float]
iap:
enabled: [bool]
oauthclientCredentials:
secretName: [string]
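As a concrete illustration, here is a hypothetical BackendConfig that points the load balancer health check at the /ping endpoint used earlier in this guide (all values are illustrative, not required):

apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: zone-health-check-cfg
  namespace: zoneprinter
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    port: 8080
    requestPath: /ping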
To use BackendConfig, attach it to your MultiClusterService resource using an annotation:
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
name: zone-mcs
namespace: zoneprinter
annotations:
beta.cloud.google.com/backend-config: '{"ports": {"8080":"zone-health-check-cfg"}}'
spec:
template:
spec:
selector:
app: zoneprinter
ports:
- name: web
protocol: TCP
port: 8080
targetPort: 8080
For more information about BackendConfig semantics, see Associating a service port with a BackendConfig.
gRPC support
Configuring gRPC applications on Multi-cluster Ingress requires very specific setup. Here are some tips to make sure your load balancer is configured properly:
- Make sure that the traffic from the load balancer to your application is HTTP/2. Use application protocols to configure this.
- Make sure that your application is properly configured for SSL since this is a requirement of HTTP/2. Note that using self-signed certs is acceptable.
- You must turn off mTLS on your application because mTLS is not supported for L7 external load balancers.
Resource lifecycle
Configuration changes
MultiClusterIngress and MultiClusterService resources behave as standard Kubernetes objects, so changes to the objects are asynchronously reflected in the system. Any changes that result in an invalid configuration cause associated Google Cloud objects to remain unchanged and raise an error in the object event stream. Errors associated with the configuration are reported as events.
Managing Kubernetes resources
Deleting the Ingress object tears down the HTTP(S) load balancer so traffic is no longer forwarded to any defined MultiClusterService.

Deleting the MultiClusterService removes the associated derived Services in each of the clusters.
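For example, to tear down everything deployed in this guide (assuming the names used above), run the following from the config cluster:
kubectl delete mci zone-ingress -n zoneprinter
kubectl delete mcs zone-mcs -n zoneprinter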
Managing clusters
The set of clusters targeted by the load balancer can be changed by adding or removing a Membership.
For example, to remove the gke-eu cluster as a backend for an ingress, run:
gcloud container hub memberships unregister cluster-name \
--gke-uri=uri
where
- cluster-name is the name of your cluster.
- uri is the URI of the GKE cluster.
To add a cluster in Europe, run:
gcloud container hub memberships register europe-cluster \
--context=europe-cluster --service-account-key-file=/path/to/service-account-key-file
Note that registering or unregistering a cluster changes its status as a backend for all Ingresses. In the above case, unregistering the gke-eu cluster removes it as an available backend for all Ingresses you create. The reverse is true for registering a new cluster.
Disabling Multi-cluster Ingress
In Beta, disabling Multi-cluster Ingress results in orphaned networking resources. To avoid this, delete your MultiClusterIngress and MultiClusterService resources and verify that any associated networking resources are deleted.
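One way to check for leftover resources is to filter Compute Engine resources on the mci- naming prefix described in the known issues section below. This is a sketch, and it assumes no unrelated resources in your project share the prefix:
gcloud compute forwarding-rules list --filter="name ~ ^mci-"
gcloud compute backend-services list --filter="name ~ ^mci-"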
Disable Multi-cluster Ingress:
gcloud alpha container hub ingress disable
Annotations
Creating a static IP
Allocate a static IP:
gcloud compute addresses create address-name --global
where address-name is the name of the address to create.
The output is similar to this:
Created [https://www.googleapis.com/compute/v1/projects/project-id/global/addresses/address-name].
View the IP address you just created:
gcloud compute addresses list
The output is similar to this:
NAME          ADDRESS/RANGE  TYPE      STATUS
address-name  34.102.201.47  EXTERNAL  RESERVED
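If you need only the literal IP address, for example to create a DNS record, this one-liner may help (a sketch using standard gcloud output formatting):
gcloud compute addresses describe address-name --global --format="value(address)"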
Apply the static IP by updating mci.yaml with the networking.gke.io/static-ip annotation, as in the following example:

kind: MultiClusterIngress
metadata:
  name: shopping-service
  namespace: prod
  annotations:
    networking.gke.io/static-ip: static-ip-address
spec:
  template:
    spec:
      backend:
        serviceName: shopping-service
        servicePort: 80
Pre-shared certificates
Pre-shared certificates are certificates uploaded to Google Cloud that can be used by the load balancer for TLS termination instead of certificates stored in Kubernetes Secrets. These certificates are uploaded out of band from GKE to Google Cloud and referenced by a Multi-cluster Ingress object. Multiple certificates, either through pre-shared certs or Kubernetes Secrets, are also supported.
Using the certificates in Multi-cluster Ingress requires the networking.gke.io/pre-shared-certs annotation and the names of the certs. When multiple certificates are specified for a given Multi-cluster Ingress object, a predetermined order governs which cert is presented to the client.
You can list the available SSL certificates by running:
gcloud compute ssl-certificates list
In the example below, client traffic to one of the specified hosts is matched against the Common Name of the pre-shared certs, and the certificate that matches the domain name is presented to the client.
kind: MultiClusterIngress
metadata:
name: shopping-service
namespace: prod
annotations:
networking.gke.io/pre-shared-certs: "domain1-cert, domain2-cert"
spec:
template:
spec:
rules:
- host: my-domain1.gcp.com
http:
paths:
- backend:
serviceName: domain1-svc
servicePort: 443
- host: my-domain2.gcp.com
http:
paths:
- backend:
serviceName: domain2-svc
servicePort: 443
Google-managed Certificates
Google-managed Certificates are supported on MCI through the networking.gke.io/pre-shared-certs annotation. MCI supports the attachment of Google-managed certificates to a MultiClusterIngress resource; however, unlike single-cluster Ingress, the declarative generation of a Kubernetes ManagedCertificate resource is not supported on MCI. The original creation of the Google-managed certificate must be done directly through the compute ssl-certificates create API before you can attach it to a MultiClusterIngress. You can do that by following these steps:
Create a Google-managed Certificate as in step 1 here. Do not move to step 2 as MCI will attach this certificate for you.
gcloud compute ssl-certificates create my-google-managed-cert \
    --domains=my-domain.gcp.com \
    --global
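Managed certificate provisioning can take a while. One way to watch its state is to inspect the certificate resource (a sketch assuming the managed.status field on Google-managed certificates):
gcloud compute ssl-certificates describe my-google-managed-cert --global --format="get(managed.status)"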
Reference the name of the certificate in your MultiClusterIngress using the networking.gke.io/pre-shared-certs annotation:

kind: MultiClusterIngress
metadata:
  name: shopping-service
  namespace: prod
  annotations:
    networking.gke.io/pre-shared-certs: "my-google-managed-cert"
spec:
  template:
    spec:
      rules:
      - host: my-domain.gcp.com
        http:
          paths:
          - backend:
              serviceName: my-domain-svc
              servicePort: 8080
The preceding manifest attaches the certificate to your MultiClusterIngress so that it can terminate traffic for your backend GKE clusters. Google Cloud automatically renews your certificate prior to expiry. Renewals occur transparently and do not require any updates to MCI.
Application protocols
The connection from the load balancer proxy to your application uses HTTP by
default. Using networking.gke.io/app-protocols
annotation, you can configure
the load balancer to use HTTPS or HTTP/2 when it forwards requests to your
application.
kind: MultiClusterService
metadata:
name: shopping-service
namespace: prod
annotations:
networking.gke.io/app-protocols: '{"http2":"HTTP2"}'
spec:
template:
spec:
ports:
- port: 443
name: http2
BackendConfig
Refer to the section above on how to configure the annotation.
Known issues, product limits, and guidance
The following are important limitations or notices about Multi-cluster Ingress behavior that dictate safe and acceptable usage. Contact your account team or gke-mci-feedback@google.com if you have questions.
- Compute Engine load balancer resources are created with a name containing a prefix of mci-[6 char hash]. All Multi-cluster Ingress managed resources in a project use the same prefix. This prefix is used to manage and garbage collect Compute Engine resources that Multi-cluster Ingress deploys. Since this prefix contains a generated hash, it is unlikely that a project will contain Compute Engine resources outside the realm of Multi-cluster Ingress with this same prefix. However, Compute Engine load balancers in the same project that are not managed by Multi-cluster Ingress should not use this prefix or else they will be deleted.
- Multi-cluster Ingress only supports clusters within the same project. Only one instance of Multi-cluster Ingress can be deployed per project, though the scoping of clusters can be controlled with cluster selection. This allows MultiClusterService resources to be deployed only to specific subsets of clusters within a project.
- Multi-cluster Ingress and environs have a pre-configured quota of a maximum of 15 clusters per project. This quota can be increased if necessary. Contact your account team to request a higher per-project cluster limit if you have requirements for registering more clusters.
- Multi-cluster Ingress has a pre-configured quota of 50 MultiClusterIngress and 100 MultiClusterService resources per project. This allows up to 50 MCI and 100 MCS resources to be created in a config cluster for any number of backend clusters up to the per-project cluster maximum. This quota can be increased if necessary. Contact your account team to request higher per-project MCI and MCS quotas if you have requirements for higher scale.
- Configuration of HTTPS requires a pre-allocated static IP address. HTTPS is not currently supported with ephemeral IP addresses.
What's next
- Read the GKE network overview.
- Learn more about setting up HTTP Load Balancing with Ingress.