This deployment shows how to combine Cloud Load Balancing with Anthos Service Mesh to expose applications in a service mesh to internet clients. You can expose an application to clients in many ways, depending on where the clients are; this deployment integrates Google Cloud load balancers with a service mesh to serve internet clients. It is intended for advanced practitioners who run Anthos Service Mesh, but it also works for Istio on Google Kubernetes Engine.
Architecture
The following diagram shows how you can use mesh ingress gateways to integrate load balancers with a service mesh:
In the topology of the preceding diagram, the cloud ingress layer sources traffic from outside of the service mesh and directs that traffic to the mesh ingress layer. The mesh ingress layer then directs traffic to the mesh-hosted application backends.
The preceding topology has the following considerations:

- Cloud ingress: In this reference architecture, you configure the Google Cloud load balancer through Ingress to check the health of the mesh ingress proxies on their exposed health check ports. If a mesh proxy is down, or if the cluster, mesh, or region is unavailable, the Google Cloud load balancer detects this condition and doesn't send traffic to the mesh proxy.
- Mesh ingress: In the mesh application, you perform health checks on the backends directly so that you can run load balancing and traffic management locally.
The preceding diagram illustrates HTTPS encryption from the client to the Google Cloud load balancer, from the load balancer to the mesh ingress proxy, and from the ingress proxy to the sidecar proxy.
Objectives
- Deploy a Google Kubernetes Engine (GKE) cluster on Google Cloud.
- Deploy an Istio-based Anthos Service Mesh on your GKE cluster.
- Configure GKE Ingress to terminate public HTTPS traffic and direct that traffic to service mesh-hosted applications.
- Deploy the Online Boutique sample application on the GKE cluster, and expose it to clients on the internet.
Costs
In this document, you use the following billable components of Google Cloud:
- Google Kubernetes Engine
- Compute Engine
- Cloud Load Balancing
- Anthos Service Mesh
- Google Cloud Armor
- Cloud Endpoints
Before you begin
1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
2. Make sure that billing is enabled for your Google Cloud project.
3. In the Google Cloud console, activate Cloud Shell.
   You run all of the terminal commands for this deployment from Cloud Shell.
Upgrade to the latest version of the Google Cloud CLI:
gcloud components update
Set your default Google Cloud project:
export PROJECT=PROJECT
export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT} --format="value(projectNumber)")
gcloud config set project ${PROJECT}
Replace PROJECT with the project ID that you want to use for this deployment.

Create a working directory:
mkdir -p ${HOME}/edge-to-mesh
cd ${HOME}/edge-to-mesh
export WORKDIR=`pwd`
After you finish this deployment, you can delete the working directory.
Create GKE clusters
The features that are described in this deployment require a GKE cluster version 1.16 or later.
In Cloud Shell, create a new kubeconfig file. This step ensures that you don't create a conflict with your existing (default) kubeconfig file.

touch edge2mesh_kubeconfig
export KUBECONFIG=${WORKDIR}/edge2mesh_kubeconfig
Define environment variables for the GKE cluster:
export CLUSTER_NAME=edge-to-mesh
export CLUSTER_LOCATION=us-west1-a
Enable the Google Kubernetes Engine API.
gcloud
gcloud services enable container.googleapis.com
Config Connector
This deployment includes Config Connector resources. You can use these resources to complete the same tasks that you complete in the gcloud tab. To use these resources, install Config Connector and apply the resources in the way that works best for your environment.

Use the following Service manifest:

apiVersion: serviceusage.cnrm.cloud.google.com/v1beta1
kind: Service
metadata:
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
    cnrm.cloud.google.com/disable-dependent-services: "false"
  name: container.googleapis.com
spec:
  resourceID: container.googleapis.com
  projectRef:
    external: PROJECT
Create a GKE cluster.
gcloud
gcloud container clusters create ${CLUSTER_NAME} \
  --machine-type=e2-standard-4 \
  --num-nodes=4 \
  --zone ${CLUSTER_LOCATION} \
  --enable-ip-alias \
  --workload-pool=${PROJECT}.svc.id.goog \
  --release-channel rapid \
  --addons HttpLoadBalancing \
  --labels mesh_id=proj-${PROJECT_NUMBER}
Config Connector
Use the following ContainerCluster and ContainerNodePool manifests:

apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerNodePool
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: PROJECT
  name: edge-to-mesh
spec:
  clusterRef:
    name: edge-to-mesh
  location: us-west1-a
  nodeConfig:
    machineType: e2-standard-4
  nodeCount: 4
---
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerCluster
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: PROJECT
    cnrm.cloud.google.com/remove-default-node-pool: "true"
  labels:
    mesh_id: proj-PROJECT_NUMBER
  name: edge-to-mesh
spec:
  addonsConfig:
    httpLoadBalancing:
      disabled: false
  location: us-west1-a
  initialNodeCount: 1
  releaseChannel:
    channel: RAPID
  workloadIdentityConfig:
    workloadPool: PROJECT.svc.id.goog
Replace PROJECT_NUMBER with the value of the PROJECT_NUMBER environment variable that you retrieved earlier.

To use cloud ingress, you must have the HTTP load balancing add-on enabled. GKE clusters have HTTP load balancing enabled by default; don't disable it.
To use the managed Anthos Service Mesh, you must apply the mesh_id label on the cluster.

Ensure that the cluster is running:
gcloud container clusters list
The output is similar to the following:
NAME          LOCATION    MASTER_VERSION   MASTER_IP      MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
edge-to-mesh  us-west1-a  v1.22.6-gke.300  35.233.195.59  e2-standard-4  v1.22.6-gke.300  4          RUNNING
Connect to the cluster:
gcloud container clusters get-credentials ${CLUSTER_NAME} \
  --zone ${CLUSTER_LOCATION} \
  --project ${PROJECT}
Install a service mesh
In this section, you configure the managed Anthos Service Mesh by using the fleet API.
Enable the required APIs:
gcloud
gcloud services enable mesh.googleapis.com
Config Connector
Use the following Service manifest:

apiVersion: serviceusage.cnrm.cloud.google.com/v1beta1
kind: Service
metadata:
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
    cnrm.cloud.google.com/disable-dependent-services: "false"
  name: mesh.googleapis.com
spec:
  resourceID: mesh.googleapis.com
  projectRef:
    external: PROJECT
Enable Anthos Service Mesh on the fleet:
gcloud
gcloud container fleet mesh enable
Config Connector
Use the following GKEHubFeature manifest:

apiVersion: gkehub.cnrm.cloud.google.com/v1beta1
kind: GKEHubFeature
metadata:
  name: servicemesh
spec:
  projectRef:
    external: PROJECT
  location: global
  resourceID: servicemesh
Register the cluster to the fleet:
gcloud
gcloud container fleet memberships register ${CLUSTER_NAME} \
  --gke-cluster ${CLUSTER_LOCATION}/${CLUSTER_NAME} \
  --enable-workload-identity
Config Connector
Use the following GKEHubMembership manifest:

apiVersion: gkehub.cnrm.cloud.google.com/v1beta1
kind: GKEHubMembership
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: PROJECT
  name: edge-to-mesh
spec:
  location: global
  authority:
    issuer: https://container.googleapis.com/v1/projects/PROJECT/locations/us-west1-a/clusters/edge-to-mesh
  endpoint:
    gkeCluster:
      resourceRef:
        name: edge-to-mesh
Enable automatic control plane management and managed data plane:
gcloud
gcloud container fleet mesh update \
  --management automatic \
  --memberships ${CLUSTER_NAME}
Config Connector
Use the following GKEHubFeatureMembership manifest:

apiVersion: gkehub.cnrm.cloud.google.com/v1beta1
kind: GKEHubFeatureMembership
metadata:
  name: servicemesh-membership
spec:
  projectRef:
    external: PROJECT_ID
  location: global
  membershipRef:
    name: edge-to-mesh
  featureRef:
    name: servicemesh
  mesh:
    management: MANAGEMENT_AUTOMATIC
After a few minutes, verify that the control plane status is ACTIVE:

gcloud container fleet mesh describe
The output is similar to the following:
...
membershipSpecs:
  projects/841956571429/locations/global/memberships/edge-to-mesh:
    mesh:
      management: MANAGEMENT_AUTOMATIC
membershipStates:
  projects/841956571429/locations/global/memberships/edge-to-mesh:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed-rapid'
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: 'Revision(s) ready for use: asm-managed-rapid.'
      updateTime: '2022-09-29T05:30:28.320896186Z'
name: projects/your-project/locations/global/features/servicemesh
resourceState:
  state: ACTIVE
...
Deploy GKE Ingress
In the following steps, you deploy the external Application Load Balancer through GKE's Ingress controller. The Ingress resource automates the provisioning of the load balancer, its TLS certificates, and backend health checking. Additionally, you use Cloud Endpoints to automatically provision a public DNS name for the application.
Install an ingress gateway
As a security best practice, we recommend that you deploy the ingress gateway in a different namespace from the control plane.
In Cloud Shell, create a dedicated asm-ingress namespace:

kubectl create namespace asm-ingress
Add a namespace label to the asm-ingress namespace:

kubectl label namespace asm-ingress istio-injection=enabled
The output is similar to the following:
namespace/asm-ingress labeled
Labeling the asm-ingress namespace with istio-injection=enabled instructs Anthos Service Mesh to automatically inject Envoy sidecar proxies when an application is deployed.

Run the following command to create the Deployment manifest as ingress-deployment.yaml:

cat <<EOF > ingress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  selector:
    matchLabels:
      asm: ingressgateway
  template:
    metadata:
      annotations:
        # This is required to tell Anthos Service Mesh to inject the gateway with the
        # required configuration.
        inject.istio.io/templates: gateway
      labels:
        asm: ingressgateway
    spec:
      securityContext:
        fsGroup: 1337
        runAsGroup: 1337
        runAsNonRoot: true
        runAsUser: 1337
      containers:
      - name: istio-proxy
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          privileged: false
          readOnlyRootFilesystem: true
        image: auto # The image will automatically update each time the pod starts.
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 128Mi
      serviceAccountName: asm-ingressgateway
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  maxReplicas: 5
  minReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: asm-ingressgateway
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: asm-ingressgateway
subjects:
- kind: ServiceAccount
  name: asm-ingressgateway
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
EOF
This Deployment has its own ServiceAccount with an associated Role and RoleBinding that allow the gateway to access certificates.

Deploy ingress-deployment.yaml in your cluster to create the Deployment resource:

kubectl apply -f ingress-deployment.yaml
The output is similar to the following:
deployment.apps/asm-ingressgateway configured
role.rbac.authorization.k8s.io/asm-ingressgateway configured
rolebinding.rbac.authorization.k8s.io/asm-ingressgateway configured
serviceaccount/asm-ingressgateway created
Ensure that all deployments are up and running:
kubectl wait --for=condition=available --timeout=600s deployment --all -n asm-ingress
The output is similar to the following:
deployment.apps/asm-ingressgateway condition met
Run the following command to create the Service manifest as ingress-service.yaml:

cat <<EOF > ingress-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "ingress-backendconfig"}'
    cloud.google.com/app-protocols: '{"https":"HTTP2"}' # HTTP/2 with TLS encryption
  labels:
    asm: ingressgateway
spec:
  ports:
  # status-port exposes a /healthz/ready endpoint that can be used with GKE Ingress health checks
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  # Any ports exposed in Gateway resources should be exposed here.
  - name: http2
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  selector:
    asm: ingressgateway
  type: ClusterIP
EOF
This Service has the following annotations that set parameters for the Ingress load balancer when it's deployed:

- cloud.google.com/backend-config refers to the name of a custom resource called BackendConfig. The Ingress controller uses BackendConfig to set parameters on the Google Cloud BackendService resource. You use this resource in the next step to define custom parameters of the Google Cloud health check.
- cloud.google.com/neg: '{"ingress": true}' enables the Ingress backends (the mesh ingress proxies in this case) for container-native load balancing. For more efficient and stable load balancing, these backends use network endpoint groups (NEGs) instead of instance groups.
- cloud.google.com/app-protocols: '{"https":"HTTP2"}' directs the GFE to connect to the service mesh's ingress gateway by using HTTP/2 with TLS, as described in Ingress for external HTTP(S) load balancers and External HTTP(S) load balancer overview, for an additional layer of encryption.
Deploy ingress-service.yaml in your cluster to create the Service resource:

kubectl apply -f ingress-service.yaml
The output is similar to the following:
service/asm-ingressgateway created
Apply backend Service settings
In Cloud Shell, run the following command to create the BackendConfig manifest as ingress-backendconfig.yaml:

cat <<EOF > ingress-backendconfig.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ingress-backendconfig
  namespace: asm-ingress
spec:
  healthCheck:
    requestPath: /healthz/ready
    port: 15021
    type: HTTP
  securityPolicy:
    name: edge-fw-policy
EOF
BackendConfig is a custom resource definition (CRD) that defines backend parameters for Ingress load balancing. For a complete list of the backend and frontend parameters that you can configure through GKE Ingress, see Ingress features.

In this deployment, the BackendConfig manifest specifies custom health checks for the mesh ingress proxies. Anthos Service Mesh and Istio expose their sidecar proxy health checks on port 15021 at the /healthz/ready path. Custom health check parameters are required because the serving port (443) of the mesh ingress proxies is different from their health check port (15021). GKE Ingress uses the following health check parameters in BackendConfig to configure the Google Cloud load balancer health checks. The manifest also references a security policy that helps protect load-balancing traffic from different kinds of network attacks:

- healthCheck.port defines the port that receives a health check from the Google Cloud load balancer on each Pod's IP address.
- healthCheck.requestPath defines the HTTP path that receives a health check on the specified port.
- type defines the protocol of the health check (in this case, HTTP).
- securityPolicy.name refers to the name of a Cloud Armor security policy.
Deploy ingress-backendconfig.yaml in your cluster to create the BackendConfig resource:

kubectl apply -f ingress-backendconfig.yaml
The output is similar to the following:
backendconfig.cloud.google.com/ingress-backendconfig created
The BackendConfig parameters and the asm-ingressgateway Service annotations aren't applied to a Google Cloud load balancer until the Ingress resource is deployed. The Ingress deployment ties all of these resources together.
Define security policies
Google Cloud Armor provides DDoS defense and customizable security policies that you can attach to a load balancer through Ingress resources. In the following steps, you create a security policy that uses preconfigured rules to block cross-site scripting (XSS) attacks. This rule helps block traffic that matches known attack signatures but allows all other traffic. Your environment might use different rules depending on your workload.
gcloud
In Cloud Shell, create a security policy that is called edge-fw-policy:

gcloud compute security-policies create edge-fw-policy \
  --description "Block XSS attacks"
Create a security policy rule that uses the preconfigured XSS filters:
gcloud compute security-policies rules create 1000 \
  --security-policy edge-fw-policy \
  --expression "evaluatePreconfiguredExpr('xss-stable')" \
  --action "deny-403" \
  --description "XSS attack filtering"
Config Connector
Use the following ComputeSecurityPolicy manifest:
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeSecurityPolicy
metadata:
annotations:
cnrm.cloud.google.com/project-id: PROJECT_ID
name: edge-fw-policy
spec:
rule:
- action: allow
description: "Default rule"
match:
versionedExpr: SRC_IPS_V1
config:
srcIpRanges:
- "*"
priority: 2147483647
- action: deny-403
description: "XSS attack filtering"
match:
expr:
expression: "evaluatePreconfiguredExpr('xss-stable')"
priority: 1000
The edge-fw-policy security policy is referenced by ingress-backendconfig in the previous section. When the Ingress resource is deployed, it binds this security policy to the load balancer to help protect any backends of the asm-ingressgateway Service.
Configure IP addressing and DNS
In Cloud Shell, create a global static IP address for the Google Cloud load balancer:
gcloud
gcloud compute addresses create ingress-ip --global
Config Connector
Use the following ComputeAddress manifest:

apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: PROJECT_ID
  name: ingress-ip
spec:
  location: global
This static IP address is used by the Ingress resource and allows the IP address to remain the same, even if the external load balancer changes.
Get the static IP address:
export GCLB_IP=$(gcloud compute addresses describe ingress-ip --global --format "value(address)")
echo ${GCLB_IP}
To create a stable, human-friendly mapping to your Ingress IP address, you must have a public DNS record. You can use any DNS provider and automation that you want. This deployment uses Endpoints instead of creating a managed DNS zone. Endpoints provides a free Google-managed DNS record for a public IP address.
Run the following command to create the YAML specification file named dns-spec.yaml:

cat <<EOF > dns-spec.yaml
swagger: "2.0"
info:
  description: "Cloud Endpoints DNS"
  title: "Cloud Endpoints DNS"
  version: "1.0.0"
paths: {}
host: "frontend.endpoints.${PROJECT}.cloud.goog"
x-google-endpoints:
- name: "frontend.endpoints.${PROJECT}.cloud.goog"
  target: "${GCLB_IP}"
EOF
The YAML specification defines the public DNS record in the form of frontend.endpoints.${PROJECT}.cloud.goog, where ${PROJECT} is your unique project ID.

Deploy the dns-spec.yaml file in your Google Cloud project:

gcloud endpoints services deploy dns-spec.yaml
The output is similar to the following:
Operation finished successfully. The following command can describe the Operation details:
gcloud endpoints operations describe operations/rollouts.frontend.endpoints.edge2mesh.cloud.goog:442b2b38-4aee-4c60-b9fc-28731657ee08
Service Configuration [2021-11-14r0] uploaded for service [frontend.endpoints.edge2mesh.cloud.goog]
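The Endpoints hostname used throughout this deployment follows a fixed scheme based on your project ID. The following sketch shows how the name is composed; edge2mesh is an example project ID, not a value taken from your environment:

```shell
# Compose the Endpoints hostname from a project ID (example value, not yours).
PROJECT=edge2mesh
ENDPOINTS_HOST="frontend.endpoints.${PROJECT}.cloud.goog"
echo "${ENDPOINTS_HOST}"  # prints frontend.endpoints.edge2mesh.cloud.goog
```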
Now that the IP address and DNS are configured, you can generate a public certificate to secure the Ingress frontend. GKE Ingress supports Google-managed certificates as Kubernetes resources, which lets you provision them through declarative means.
Provision a TLS certificate
In Cloud Shell, run the following command to create the ManagedCertificate manifest as managed-cert.yaml:

cat <<EOF > managed-cert.yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: gke-ingress-cert
  namespace: asm-ingress
spec:
  domains:
  - "frontend.endpoints.${PROJECT}.cloud.goog"
EOF
This YAML file specifies that the DNS name created through Endpoints is used to provision a public certificate. Because Google fully manages the lifecycle of these public certificates, they are automatically generated and rotated on a regular basis without direct user intervention.
Deploy the managed-cert.yaml file in your GKE cluster:

kubectl apply -f managed-cert.yaml
The output is similar to the following:
managedcertificate.networking.gke.io/gke-ingress-cert created
Inspect the ManagedCertificate resource to check the progress of certificate generation:

kubectl describe managedcertificate gke-ingress-cert -n asm-ingress
The output is similar to the following:
Name:         gke-ingress-cert
Namespace:    asm-ingress
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"gke-ingress-cert","namespace":"...
API Version:  networking.gke.io/v1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-08-05T20:44:49Z
  Generation:          2
  Resource Version:    1389781
  Self Link:           /apis/networking.gke.io/v1/namespaces/asm-ingress/managedcertificates/gke-ingress-cert
  UID:                 d74ec346-ced9-47a8-988a-6e6e9ddc4019
Spec:
  Domains:
    frontend.endpoints.edge2mesh.cloud.goog
Status:
  Certificate Name:    mcrt-306c779e-8439-408a-9634-163664ca6ced
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  frontend.endpoints.edge2mesh.cloud.goog
    Status:  Provisioning
Events:
  Type    Reason  Age  From                            Message
  ----    ------  ---- ----                            -------
  Normal  Create  44s  managed-certificate-controller  Create SslCertificate mcrt-306c779e-8439-408a-9634-163664ca6ced
When the certificate is ready, the Certificate Status is Active.
Deploy the Ingress resource
In Cloud Shell, run the following command to create the Ingress manifest as ingress.yaml:

cat <<EOF > ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gke-ingress
  namespace: asm-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
    networking.gke.io/managed-certificates: "gke-ingress-cert"
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: asm-ingressgateway
      port:
        number: 443
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: asm-ingressgateway
            port:
              number: 443
EOF
This manifest defines an Ingress resource that ties all of the previous resources together. The manifest specifies the following fields:
- kubernetes.io/ingress.allow-http: "false" disables HTTP traffic on port 80 of the Google Cloud load balancer. This setting prevents clients from connecting with unencrypted traffic because port 443 listens only for HTTPS, and port 80 is disabled.
- kubernetes.io/ingress.global-static-ip-name: "ingress-ip" links the previously created IP address with the load balancer. This link allows the IP address to be created separately from the load balancer so that it can be reused independently of the load balancer's lifecycle.
- networking.gke.io/managed-certificates: "gke-ingress-cert" links this load balancer with the previously created Google-managed SSL certificate resource.
Deploy ingress.yaml in your cluster:

kubectl apply -f ingress.yaml
Inspect the Ingress resource to check the progress of the load balancer deployment:
kubectl describe ingress gke-ingress -n asm-ingress
The output is similar to the following:
...
Annotations:
  ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-fq3ng2uk-asm-ingress-gke-ingress-qm3qqdor
  ingress.kubernetes.io/ssl-cert: mcrt-306c779e-8439-408a-9634-163664ca6ced
  networking.gke.io/managed-certificates: gke-ingress-cert
  kubernetes.io/ingress.global-static-ip-name: ingress-ip
  ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-306c779e-8439-408a-9634-163664ca6ced
  ingress.kubernetes.io/backends: {"k8s-be-31610--07bdde06b914144a":"HEALTHY","k8s1-07bdde06-asm-ingress-asm-ingressgateway-443-228c1881":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule: k8s2-fr-fq3ng2uk-asm-ingress-gke-ingress-qm3qqdor
  ingress.kubernetes.io/https-target-proxy: k8s2-ts-fq3ng2uk-asm-ingress-gke-ingress-qm3qqdor
  ingress.kubernetes.io/target-proxy: k8s2-tp-fq3ng2uk-asm-ingress-gke-ingress-qm3qqdor
  ingress.kubernetes.io/url-map: k8s2-um-fq3ng2uk-asm-ingress-gke-ingress-qm3qqdor
...
The Ingress resource is ready when the ingress.kubernetes.io/backends annotation indicates that the backends are HEALTHY. The annotations also show the names of different Google Cloud resources that are provisioned, including backend services, SSL certificates, and HTTPS target proxies.
Install the self-signed ingress gateway certificate
In the following steps, you generate and install a certificate (as a Kubernetes Secret resource) that enables the GFE to establish a TLS connection to the service mesh's ingress gateway. For more details about the requirements of the ingress gateway certificate, see the secure backend protocol considerations guide.
In Cloud Shell, create the private key and certificate by using openssl:

openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \
  -subj "/CN=frontend.endpoints.${PROJECT}.cloud.goog/O=Edge2Mesh Inc" \
  -keyout frontend.endpoints.${PROJECT}.cloud.goog.key \
  -out frontend.endpoints.${PROJECT}.cloud.goog.crt
Create the Secret in the asm-ingress namespace:

kubectl -n asm-ingress create secret tls edge2mesh-credential \
  --key=frontend.endpoints.${PROJECT}.cloud.goog.key \
  --cert=frontend.endpoints.${PROJECT}.cloud.goog.crt
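Before or after creating the Secret, you can confirm that the generated certificate's subject matches the Endpoints hostname. This is an optional check, sketched under the assumption that openssl is on your PATH and that the certificate file from the previous step exists:

```shell
# Print the subject and expiry date of the self-signed certificate created above.
# Assumes PROJECT is set and the .crt file exists in the current directory.
openssl x509 -in frontend.endpoints.${PROJECT}.cloud.goog.crt \
  -noout -subject -enddate
```

The subject's CN should match the domain that clients use, because the GFE validates the hostname when connecting to the backend.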
Configure the ingress gateway for external load balancing
In the following steps, you create a shared Gateway resource in the asm-ingress namespace. Gateways are generally owned by the platform admin or network admin team. Therefore, the Gateway resource is created in the asm-ingress namespace, which is owned by the platform admin, and can be used in other namespaces through their own VirtualService entries.
In Cloud Shell, run the following command to create the Gateway manifest as ingress-gateway.yaml:

cat <<EOF > ingress-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  selector:
    asm: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*" # IMPORTANT: Must use wildcard here when using SSL, see note below
    tls:
      mode: SIMPLE
      credentialName: edge2mesh-credential
EOF
You must use the wildcard * entry in the hosts field in the Gateway. GCLB doesn't use the SNI extension to the backends. Using the wildcard entry sends the encrypted packet (from GCLB) to the ASM ingress gateway. The ASM ingress gateway decrypts the packet and uses the HTTP Host header (in the decrypted packet) to make routing decisions based on VirtualService entries.

Deploy ingress-gateway.yaml in your cluster:

kubectl apply -f ingress-gateway.yaml
The output is similar to the following:
gateway.networking.istio.io/asm-ingressgateway created
Install the Online Boutique sample app
In Cloud Shell, create a dedicated onlineboutique namespace:

kubectl create namespace onlineboutique
Add a namespace label to the onlineboutique namespace:

kubectl label namespace onlineboutique istio-injection=enabled
The output is similar to the following:
namespace/onlineboutique labeled
Labeling the onlineboutique namespace with istio-injection=enabled instructs Anthos Service Mesh to automatically inject Envoy sidecar proxies when an application is deployed.

Download the Kubernetes YAML files for the Online Boutique sample app:

curl -LO \
  https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml
Deploy the Online Boutique app:
kubectl apply -f kubernetes-manifests.yaml -n onlineboutique
The output is similar to the following:
deployment.apps/frontend created
service/frontend created
service/frontend-external created
...
Ensure that all deployments are up and running:
kubectl get pods -n onlineboutique
The output is similar to the following:
NAME                               READY   STATUS    RESTARTS   AGE
adservice-d854d8786-fjb7q          2/2     Running   0          3m
cartservice-85b5d5b4ff-8qn7g       2/2     Running   0          2m59s
checkoutservice-5f9bf659b8-sxhsq   2/2     Running   0          3m1s
...
Run the following command to create the VirtualService manifest as frontend-virtualservice.yaml:

cat <<EOF > frontend-virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-ingress
  namespace: onlineboutique
spec:
  hosts:
  - "frontend.endpoints.${PROJECT}.cloud.goog"
  gateways:
  - asm-ingress/asm-ingressgateway
  http:
  - route:
    - destination:
        host: frontend
        port:
          number: 80
EOF
Note that the VirtualService is created in the application namespace (onlineboutique). Typically, the application owner decides and configures how and what traffic gets routed to the frontend application, so the VirtualService is deployed by the app owner.

Deploy frontend-virtualservice.yaml in your cluster:

kubectl apply -f frontend-virtualservice.yaml
The output is similar to the following:
virtualservice.networking.istio.io/frontend-virtualservice created
Run the following command to get the URL for the application, and then open the URL in a browser:
echo "https://frontend.endpoints.${PROJECT}.cloud.goog"
Your Online Boutique frontend is displayed.
To display the details of your certificate, click View site information in your browser's address bar, and then click Certificate (Valid). The certificate viewer displays details for the managed certificate, including the expiration date and the certificate issuer.
You now have a global HTTPS load balancer serving as a frontend to your service mesh-hosted application.
Clean up
After you've finished the deployment, you can clean up the resources you created on Google Cloud so you won't be billed for them in the future. You can either delete the project entirely or delete cluster resources and then delete the cluster.
Delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Delete the individual resources
If you want to keep the Google Cloud project you used in this deployment, delete the individual resources:
Delete the Ingress resource:
kubectl delete -f ingress.yaml
Delete the managed certificate:
kubectl delete -f managed-cert.yaml
Delete the Endpoints DNS entry:
gcloud endpoints services delete "frontend.endpoints.${PROJECT}.cloud.goog"
The output is similar to the following:
Are you sure? This will set the service configuration to be deleted, along with all of the associated consumer information. Note: This does not immediately delete the service configuration or data and can be undone using the undelete command for 30 days. Only after 30 days will the service be purged from the system.
When you are prompted to continue, enter Y.
The output is similar to the following:
Waiting for async operation operations/services.frontend.endpoints.edge2mesh.cloud.goog-5 to complete...
Operation finished successfully. The following command can describe the Operation details:
gcloud endpoints operations describe operations/services.frontend.endpoints.edge2mesh.cloud.goog-5
Delete the static IP address:
gcloud compute addresses delete ingress-ip --global
The output is similar to the following:
The following global addresses will be deleted:
 - [ingress-ip]
When you are prompted to continue, enter Y.
The output is similar to the following:
Deleted [https://www.googleapis.com/compute/v1/projects/edge2mesh/global/addresses/ingress-ip].
Delete the GKE cluster:
gcloud container clusters delete $CLUSTER_NAME --zone $CLUSTER_LOCATION
What's next
- Learn about more features offered by GKE Ingress that you can use with your service mesh.
- Learn about the different types of cloud load balancing available for GKE.
- Learn about the features and functionality offered by Anthos Service Mesh.
- See how to deploy Ingress across multiple GKE clusters for multi-regional load balancing.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.