# Fail over a primary or secondary instance manually

This document describes how to manually fail over a primary or secondary instance.

High availability on primary and secondary instances
----------------------------------------------------

AlloyDB for PostgreSQL supports high availability on both primary and secondary instances.
### High availability on primary instances
To help ensure high availability (HA), every AlloyDB primary instance has both an active node and a standby node, which are located in different zones. If the active node becomes unavailable, AlloyDB automatically *fails over* the primary instance to its standby node, making it the new active node.
You can manually fail over your primary instance to its standby node at any time, even if the active node is working as expected. When you initiate a manual failover, AlloyDB does the following:
1. Takes the primary node offline.

2. Turns the standby node into the new active node.

3. Reactivates the previous active node as the new standby node.
A manual failover swaps the active and standby roles assigned to the nodes of your primary instance. You can trigger a manual failover at any time that you want this exchange to occur.
For example, imagine that you have a primary instance whose active and standby nodes reside in the `us-central1-a` and `us-central1-b` zones, respectively. An outage in `us-central1-a` triggers an automatic failover, after which the `us-central1-b` zone hosts the active node. If you prefer to keep the active node in the `us-central1-a` zone, you can initiate a manual failover to make AlloyDB swap the primary instance's nodes back to their pre-outage locations.
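In concrete terms, swapping the nodes back in that scenario could look like the following sketch. The cluster and instance names are hypothetical placeholders; the `gcloud alloydb instances failover` command itself is described in detail later in this document.

```sh
# A minimal sketch of returning the active node to its pre-outage zone
# after an automatic failover. "my-cluster" and "my-primary" are
# hypothetical names; substitute your own cluster and instance IDs.
gcloud alloydb instances failover my-primary \
    --cluster=my-cluster \
    --region=us-central1
```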
During maintenance operations, an HA primary instance and a basic instance typically experience minimal maintenance downtime of less than a second.
Because a manual failover is an intentional and controlled procedure, it isn't intended for simulating unexpected hardware or network faults. Instead, you can [test high availability for your primary instance by using fault injection](/alloydb/docs/test-high-availability).
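As a rough sketch of what such a test could look like, the `gcloud alloydb instances inject-fault` command stops the VM that backs the active node, which exercises the automatic failover path. The instance and cluster names here are hypothetical, and the linked fault-injection guide is the authoritative reference:

```sh
# A fault-injection sketch, assuming a hypothetical instance "my-primary"
# in cluster "my-cluster". Stopping the active node's VM should trigger
# an automatic failover to the standby node.
gcloud alloydb instances inject-fault my-primary \
    --fault-type=stop-vm \
    --cluster=my-cluster \
    --region=us-central1
```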
### High availability on secondary instances
AlloyDB offers HA on secondary instances to support disaster recovery and to reduce downtime when a secondary instance becomes unavailable.
By default, HA is configured on a secondary instance.
An AlloyDB secondary instance includes the following nodes:
- An active secondary node, which responds to requests
- A standby secondary node
The active and standby nodes are located in two different zones of a region. If AlloyDB detects that the active node is unavailable, the active node fails over to the standby node, which acts as the new active node. Your data is then rerouted to the new active node. This process is called a *failover*.
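One way to check how the instances in a secondary cluster are configured is to list them, as in the following sketch. The cluster name and region are hypothetical, and the `instanceType` and `availabilityType` projections assume the field names exposed by the AlloyDB API:

```sh
# A sketch of inspecting a secondary cluster's instances, assuming a
# hypothetical cluster "my-secondary-cluster" in us-east1. A REGIONAL
# availability type indicates an HA setup with active and standby nodes.
gcloud alloydb instances list \
    --cluster=my-secondary-cluster \
    --region=us-east1 \
    --format="table(name.basename(), instanceType, availabilityType)"
```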
Before you begin
----------------

- The Google Cloud project you are using must have been [enabled to access AlloyDB](/alloydb/docs/project-enable-access).
- You must have one of these IAM roles in the Google Cloud project you are using:

  - `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
  - `roles/owner` (the Owner basic IAM role)
  - `roles/editor` (the Editor basic IAM role)

  If you don't have any of these roles, contact your Organization Administrator to request access.

Perform a manual failover on a primary instance
-----------------------------------------------

**Note:** This procedure doesn't apply to [basic instances](/alloydb/docs/basic-instance), which don't have a standby node to fail over to.

### Console

1. Go to the **Clusters** page.

   [Go to Clusters](https://console.cloud.google.com/alloydb/clusters)

2. In the **Resource Name** column, click a cluster name.

3. In the **Instances in your cluster** section, open your primary instance's **Instance actions** menu.

4. Click **Failover**.

5. In the dialog that appears, enter the instance's ID.

6. Click **Trigger failover**.

### gcloud

Run the [`gcloud alloydb instances failover`](/sdk/gcloud/reference/alloydb/instances/failover) command:

```sh
gcloud alloydb instances failover INSTANCE_ID \
    --region=REGION_ID \
    --cluster=CLUSTER_ID \
    --project=PROJECT_ID
```

Replace the following:

- `INSTANCE_ID`: The ID of the instance.
- `REGION_ID`: The region where the instance is placed.
- `CLUSTER_ID`: The ID of the cluster where the instance is placed.
- `PROJECT_ID`: The ID of the project where the cluster is placed.

To confirm that the failover worked, follow these steps:

1. Before performing the failover, [note the zones of the primary instance's nodes](/alloydb/docs/instance-view#zones).

2. After running the failover, note the two nodes' new zones.

3. Confirm that the zones of the active and standby nodes have switched places.
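As a sketch of how you might capture the node zones for this comparison from the command line, assuming the instance resource returned by `describe` includes a `nodes` list with zone details (the names below are hypothetical):

```sh
# Record the node zones before the failover, then run the same command
# again afterward and compare the output. "my-primary" and "my-cluster"
# are hypothetical placeholders.
gcloud alloydb instances describe my-primary \
    --cluster=my-cluster \
    --region=us-central1 \
    --format="yaml(nodes)"
```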
Perform a manual failover on a secondary instance
-------------------------------------------------

Failing over a secondary instance manually is similar to the steps followed for [failing over the primary instance manually](#manual-failover-primary).

To fail over a secondary cluster manually, follow these steps:

### Console

1. In the Google Cloud console, go to the **Clusters** page.

   [Go to Clusters](https://console.cloud.google.com/alloydb/clusters)

2. Click the name of a secondary cluster in the **Resource Name** column.

3. On the **Overview** page, go to the **Instances in your cluster** section, choose the secondary instance, and click **Failover**.

4. In the dialog that appears, enter the instance's ID, and click **Trigger failover**.

### gcloud

To use the gcloud CLI, you can [install and initialize](/sdk/docs/install) the Google Cloud CLI, or you can use [Cloud Shell](/shell/docs/using-cloud-shell).

Use the [`gcloud alloydb instances failover`](/sdk/gcloud/reference/alloydb/instances/failover) command to force a secondary instance to fail over to its standby:

```sh
gcloud alloydb instances failover SECONDARY_INSTANCE_ID \
    --cluster=SECONDARY_CLUSTER_ID \
    --region=REGION_ID \
    --project=PROJECT_ID
```

Replace the following:

- `SECONDARY_INSTANCE_ID`: The ID of the secondary instance that you want to fail over.
- `SECONDARY_CLUSTER_ID`: The ID of the secondary cluster that the secondary instance is associated with.
- `REGION_ID`: The ID of the secondary instance's region, for example, `us-central1`.
- `PROJECT_ID`: The ID of the secondary cluster's project.

What's next
-----------

- [Work with cross-region replication](/alloydb/docs/cross-region-replication/work-with-cross-region-replication)