# Test a primary instance for high availability

AlloyDB for PostgreSQL offers regional and zonal instance types. To help ensure high availability (HA), every regional AlloyDB primary instance has both an active node and a standby node, located in two different zones. If the active node becomes unavailable for any reason, then AlloyDB automatically promotes the standby node to become the new active node.

You can test this automatic HA feature by using *fault injection* to abruptly force your primary instance's active node offline. AlloyDB then activates the emergency HA procedure that checks the primary instance's health and then reassigns the standby node into the active-node role.

Fault injection also initiates a long-running operation that brings the former active node back online after a brief interval. That node becomes the new standby node of the primary instance.

For a faster method of swapping the active and standby roles of your primary instance's nodes, see [Fail over a primary instance manually](/alloydb/docs/instance-primary-secondary-failover).
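To watch the failover and recovery from the command line, you can check the instance's state with `gcloud alloydb instances describe`. This is a minimal sketch, not part of the official procedure; the wrapper function and all resource IDs are placeholders:

```shell
#!/usr/bin/env bash
# Minimal sketch: print the primary instance's state (for example, READY)
# so you can watch it recover after a fault injection.
# Assumptions: gcloud is installed and authenticated; all IDs below are
# placeholders, and describe_state is a hypothetical helper.
describe_state() {
  local instance=$1 region=$2 cluster=$3 project=$4
  gcloud alloydb instances describe "$instance" \
      --region="$region" \
      --cluster="$cluster" \
      --project="$project" \
      --format="value(state)"
}

# Example usage (placeholders):
# describe_state my-primary us-central1 my-cluster my-project
```

Running the helper repeatedly (for example, in a `watch` loop) lets you observe the instance leave and return to the `READY` state around the injected fault.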
Before you begin
----------------

- The Google Cloud project you are using must have been [enabled to access AlloyDB](/alloydb/docs/project-enable-access).
- You must have one of these IAM roles in the Google Cloud project you are using:
  - `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
  - `roles/owner` (the Owner basic IAM role)
  - `roles/editor` (the Editor basic IAM role)

  If you don't have any of these roles, contact your Organization Administrator to request access.

Simulate an outage with a fault injection
-----------------------------------------

> **Note:** This procedure does not apply to [basic instances](/alloydb/docs/basic-instance), which do not have a standby node to fail over to.

To test your primary instance's HA resiliency by abruptly shutting down its active node, use the [`gcloud alloydb instances inject-fault`](/sdk/gcloud/reference/alloydb/instances/inject-fault) command. After a long-running operation completes, AlloyDB reinstates the node.

    gcloud alloydb instances inject-fault INSTANCE_ID \
        --fault-type=stop-vm \
        --region=REGION_ID \
        --cluster=CLUSTER_ID \
        --project=PROJECT_ID

Replace the following:

- `INSTANCE_ID`: The ID of the instance.
- `REGION_ID`: The region where the instance is placed.
- `CLUSTER_ID`: The ID of the cluster where the instance is placed.
- `PROJECT_ID`: The ID of the project where the cluster is placed.

Last updated: 2025-09-04 (UTC)
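Because fault injection starts a long-running operation, you can track its progress with `gcloud alloydb operations list`. The following is a hedged sketch; the wrapper function and all resource IDs are placeholders, not part of the official procedure:

```shell
#!/usr/bin/env bash
# Minimal sketch: list the most recent AlloyDB operations for a cluster so
# you can watch the fault-injection and node-reinstatement operations finish.
# Assumptions: gcloud is installed and authenticated; IDs are placeholders,
# and list_recent_operations is a hypothetical helper.
list_recent_operations() {
  local cluster=$1 region=$2 project=$3
  gcloud alloydb operations list \
      --cluster="$cluster" \
      --region="$region" \
      --project="$project" \
      --limit=5
}

# Example usage (placeholders):
# list_recent_operations my-cluster us-central1 my-project
```

Once the listed operation reaches a done state, the former active node has been reinstated as the new standby.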