# Using HTTP/2 for load balancing with Ingress

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

This page shows how to use Kubernetes [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) and [Service](https://kubernetes.io/docs/concepts/services-networking/service/) objects to configure an external Application Load Balancer to use [HTTP/2](https://http2.github.io/) for communication with backend services.
Overview
--------
An Application Load Balancer acts as a proxy between your clients and your application. Clients can use HTTP/1.1 or HTTP/2 to communicate with the load balancer proxy. However, the connection from the load balancer proxy to your application uses HTTP/1.1 by default. If your application, running in a Google Kubernetes Engine (GKE) Pod, is capable of receiving HTTP/2 requests, you can configure the external load balancer to use HTTP/2 when it forwards requests to your application.
In this exercise, you create a Deployment, a Service, and an Ingress. To specify that the load balancer should use HTTP/2 to communicate with your application, you add a `cloud.google.com/app-protocols` annotation to your Service manifest. Then you call your Service and verify that your application received an HTTP/2 request.
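The annotation value is a JSON object that maps Service port names to protocols. As a quick illustration only (this is an excerpt; the complete Service manifest appears in the Create the Service section below), the port named `my-port` used in this exercise is marked as HTTP/2 like this:

    metadata:
      annotations:
        # Tells the load balancer to use HTTP/2 for the Service port named "my-port".
        cloud.google.com/app-protocols: '{"my-port":"HTTP2"}'
    spec:
      ports:
      - name: my-port
        port: 443
        targetPort: 8443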
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2024-11-21(UTC)"],[],[],null,["# Using HTTP/2 for load balancing with Ingress\n\n[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\nThis page shows how to use Kubernetes\n[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)\nand\n[Service](https://kubernetes.io/docs/concepts/services-networking/service/)\nobjects to configure an external Application Load Balancer to use\n[HTTP/2](https://http2.github.io/)\nfor communication with backend services.\n\nOverview\n--------\n\nAn Application Load Balancer acts as a proxy between your clients and your\napplication. Clients can use HTTP/1.1 or HTTP/2 to communicate with the load\nbalancer proxy. However, the connection from the load balancer proxy to your\napplication uses HTTP/1.1 by default. If your application, running in a\nGoogle Kubernetes Engine (GKE) Pod, is capable of receiving HTTP/2 requests, you\nconfigure the external load balancer to use HTTP/2 when it forwards requests to\nyour application.\n\nIn this exercise, you create a Deployment, a Service, and an Ingress. You put a\n`cloud.google.com/app-protocols` annotation in your Service manifest to specify\nthat the load balancer should use HTTP/2 to communicate with your application.\nThen you call your service and verify that your application received an HTTP/2\nrequest.\n\nBefore you begin\n----------------\n\nBefore you start, make sure that you have performed the following tasks:\n\n- Enable the Google Kubernetes Engine API.\n[Enable Google Kubernetes Engine API](https://console.cloud.google.com/flows/enableapi?apiid=container.googleapis.com)\n- If you want to use the Google Cloud CLI for this task, [install](/sdk/docs/install) and then [initialize](/sdk/docs/initializing) the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running `gcloud components update`. **Note:** For existing gcloud CLI installations, make sure to set the `compute/region` [property](/sdk/docs/properties#setting_properties). If you use primarily zonal clusters, set the `compute/zone` instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: `One of [--zone, --region] must be supplied: Please specify location`. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.\n\n\u003c!-- --\u003e\n\n- Read about the Kubernetes [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) and [Service](https://kubernetes.io/docs/concepts/services-networking/service/) resources.\n- Read about the [HTTP/2 limitations for an external Application Load Balancer](/load-balancing/docs/https#HTTP2-limitations).\n\nCreate the Deployment\n---------------------\n\n1. 
Create the Deployment
---------------------

1. Copy the following manifest to a file named `my-deployment.yaml`:

       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: echoheaders
       spec:
         replicas: 2
         selector:
           matchLabels:
             app: echoheaders
         template:
           metadata:
             labels:
               app: echoheaders
           spec:
             containers:
             - name: echoheaders
               image: registry.k8s.io/echoserver:1.10
               ports:
               - containerPort: 8443

   This manifest describes a Deployment with two replicas of the `echoheaders` web application.

2. Apply the manifest to your cluster:

       kubectl apply -f my-deployment.yaml

**Note:** To ensure that the load balancer can make a correct HTTP/2 request to your backend, your backend must be configured with SSL. For more information on what types of certificates are accepted, see [Encryption from the load balancer to the backends](/load-balancing/docs/ssl-certificates#backend-encryption).

Create the Service
------------------

1. Copy the following manifest to a file named `my-service.yaml`:

       apiVersion: v1
       kind: Service
       metadata:
         annotations:
           cloud.google.com/app-protocols: '{"my-port":"HTTP2"}'
         name: echoheaders
         labels:
           app: echoheaders
       spec:
         type: NodePort
         ports:
         - port: 443
           targetPort: 8443
           protocol: TCP
           name: my-port
         selector:
           app: echoheaders

   This manifest describes a Service with the following properties:

   - `type: NodePort`: Specifies that this is a Service of type [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types).
   - `app: echoheaders`: Specifies that any Pod that has this label is a member of the Service.
   - `cloud.google.com/app-protocols`: Specifies that `my-port` should use the HTTP/2 protocol.
   - `port: 443`, `protocol: TCP`, and `targetPort: 8443`: Specify that traffic directed to the Service on TCP port 443 should be routed to TCP port 8443 on one of the member Pods.

2. Apply the manifest to your cluster:

       kubectl apply -f my-service.yaml

3. View the Service:

       kubectl get service echoheaders --output yaml

   The output is similar to the following:

       apiVersion: v1
       kind: Service
       metadata:
         annotations:
           cloud.google.com/app-protocols: '{"my-port":"HTTP2"}'
         ...
         labels:
           app: echoheaders
         name: echoheaders
         ...
       spec:
         clusterIP: 10.39.251.148
         ...
         ports:
         - name: my-port
           nodePort: 30647
           port: 443
           protocol: TCP
           targetPort: 8443
         selector:
           app: echoheaders
         ...
         type: NodePort
         ...

Create the Ingress
------------------

1. Copy the following manifest to a file named `my-ingress.yaml`:

       apiVersion: networking.k8s.io/v1
       kind: Ingress
       metadata:
         name: echomap
       spec:
         defaultBackend:
           service:
             name: echoheaders
             port:
               number: 443

   This manifest describes an Ingress that specifies that incoming requests are sent to a Pod that is a member of the `echoheaders` Service. Requests are routed to the Pod on the `targetPort` that is specified in the `echoheaders` Service manifest. In this exercise, the Pod `targetPort` is `8443`.

2. Apply the manifest to your cluster:

       kubectl apply -f my-ingress.yaml

   This command can take several minutes to complete while the Kubernetes Ingress controller configures the Application Load Balancer.
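   While you wait, you can optionally watch the Ingress until the controller assigns it an external IP address. This is a sketch rather than part of the original steps; `echomap` is the Ingress name from the manifest above:

       # Prints the Ingress and updates the line when its external IP is assigned.
       kubectl get ingress echomap --watch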
3. View the Ingress:

       kubectl get ingress echomap --output yaml

   The output is similar to the following:

       kind: Ingress
       metadata:
         ...
         name: echomap
         ...
       spec:
         backend:
           serviceName: echoheaders
           servicePort: 443
       status:
         loadBalancer:
           ingress:
           - ip: 203.0.113.2

   In this output, the IP address of the Ingress is `203.0.113.2`.

Test the load balancer
----------------------

### gcloud

1. List your backend services:

       gcloud compute backend-services list

2. Describe your backend service:

       gcloud beta compute backend-services describe BACKEND_SERVICE_NAME --global

   Replace BACKEND_SERVICE_NAME with the name of your backend service.

   The output specifies that the `protocol` is `HTTP2`:

       backends:
       ...
       description: '{...,"kubernetes.io/service-port":"443","x-features":["HTTP2"]}'
       ...
       kind: compute#backendService
       loadBalancingScheme: EXTERNAL
       protocol: HTTP2
       ...

### Console

1. Go to the **Load balancing** page in the Google Cloud console.

   [Go to Load balancing](https://console.cloud.google.com/networking/loadbalancing/loadBalancers/list)

2. Under **Name**, locate your load balancer.

3. Click the name of your load balancer to view your backend service.

4. Verify that the **Endpoint protocol** for your backend service is **HTTP/2**.

Call your Service
-----------------

Wait a few minutes for GKE to configure the load balancer and backend service, and then enter the external IP address of your load balancer in your browser's address bar.

The output is similar to the following:

    Hostname: echoheaders-7886d5bc68-xnrwj
    ...
    Request Information:
        ...
        method=GET
        real path=/
        query=
        request_version=2
        request_scheme=https
        ...

    Request Headers:
        ...
        x-forwarded-for=[YOUR_IP_ADDRESS], 203.0.113.2
        x-forwarded-proto=http
        ...

This output shows information about the request from the load balancer to the Pod:

- `request_version=2`: Indicates that the request between the load balancer and the Pod used HTTP/2.
- `x-forwarded-proto=http`: Indicates that the request between the browser and the load balancer used HTTP/1.1, not HTTP/2.

What's next
-----------

- Set up an [external Application Load Balancer with Ingress](/kubernetes-engine/docs/tutorials/http-balancer).
- Configure a [static IP address and domain name](/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip) for your application using Ingress.
- Configure [SSL certificates](/kubernetes-engine/docs/how-to/ingress-multi-ssl) for your Ingress load balancer.