[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-08-25 (世界標準時間)。"],[[["\u003cp\u003eAlloyDB instances, both primary and read pools, can be scaled vertically by changing their machine type.\u003c/p\u003e\n"],["\u003cp\u003eRead pool instances in AlloyDB can also be scaled horizontally by adjusting the number of nodes within the instance.\u003c/p\u003e\n"],["\u003cp\u003eScaling actions for instances can be performed using the Google Cloud console or the \u003ccode\u003egcloud\u003c/code\u003e CLI tool, via the command \u003ccode\u003egcloud alloydb instances update\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eScaling operations are temporarily unavailable when the cluster status is set to \u003ccode\u003eMAINTENANCE\u003c/code\u003e, and will only become available again after the status returns to \u003ccode\u003eREADY\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eWhen decreasing the node count in a read pool, clients connected to a removed node can reconnect to other active nodes.\u003c/p\u003e\n"]]],[],null,["This page shows how to scale an AlloyDB instance. You can\nscale both primary and read pool instances vertically by changing the\ninstance's machine type, and you can scale read pool instances\nhorizontally by changing the number of nodes in the instance.\n\n\nBefore you begin\n\n- The Google Cloud project you are using must have been [enabled to access AlloyDB](/alloydb/docs/project-enable-access).\n- You must have one of these IAM roles in the Google Cloud project you are using:\n - `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)\n - `roles/owner` (the Owner basic IAM role)\n - `roles/editor` (the Editor basic IAM role)\n\n If you don't have any of these roles, contact your Organization Administrator to request\n access.\n\n\u003cbr /\u003e\n\nScale an instance's machine type \n\nConsole\n\n1. In the Google Cloud console, go to the **Clusters** page.\n\n [Go to Clusters](https://console.cloud.google.com/alloydb/clusters)\n2. Click a cluster in the **Resource Name** column.\n\n3. On the **Overview** page, go to the **Instances in your cluster**\n section, and click **Edit primary** or **Edit read pool**.\n\n Note that this action is not available if the page reports a cluster\n **Status** of **Maintenance** . The action becomes available again\n after **Status** changes to **Ready**.\n4. Select one of the following machine series:\n\n - C4A (Google Axion-based machine series) ([Preview](https://cloud.google.com/products?e=48754805product-launch-stages))\n - N2 (x86-based machine series). This is the default machine series.\n5. Select a machine type.\n\n - C4A supports 1, 4, 8, 16, 32, 48, 64, and 72 machine types or shapes.\n - N2 supports 2,4,8,16,32,64,96, and 128 machine types or shapes.\n\n For more information about using the C4A Axion-based machine series, including\n the 1 vCPU machine type, see [Considerations when using the C4A Axion-based machine series](/alloydb/docs/cluster-create#considerations-c4a).\n6. 
## Scale an instance's machine type

### Console

1. In the Google Cloud console, go to the **Clusters** page.

   [Go to Clusters](https://console.cloud.google.com/alloydb/clusters)
2. Click a cluster in the **Resource Name** column.
3. On the **Overview** page, go to the **Instances in your cluster** section, and click **Edit primary** or **Edit read pool**.

   Note that this action is not available if the page reports a cluster **Status** of **Maintenance**. The action becomes available again after **Status** changes to **Ready**.
4. Select one of the following machine series:

   - C4A (Google Axion-based machine series) ([Preview](https://cloud.google.com/products?e=48754805#product-launch-stages))
   - N2 (x86-based machine series). This is the default machine series.
5. Select a machine type.

   - C4A supports machine types (shapes) with 1, 4, 8, 16, 32, 48, 64, and 72 vCPUs.
   - N2 supports machine types (shapes) with 2, 4, 8, 16, 32, 64, 96, and 128 vCPUs.

   For more information about using the C4A Axion-based machine series, including the 1 vCPU machine type, see [Considerations when using the C4A Axion-based machine series](/alloydb/docs/cluster-create#considerations-c4a).
6. Click **Update instance** or **Update read pool**.

### gcloud

To use the gcloud CLI, you can [install and initialize](/sdk/docs/install) the Google Cloud CLI, or you can use [Cloud Shell](/shell/docs/using-cloud-shell).

Use the [`gcloud alloydb instances update`](/sdk/gcloud/reference/alloydb/instances/update) command to change the machine type of the primary instance.

    gcloud alloydb instances update INSTANCE_ID \
        --cpu-count=CPU_COUNT \
        --machine-type=MACHINE_TYPE \
        --region=REGION_ID \
        --cluster=CLUSTER_ID \
        --project=PROJECT_ID

Replace the following:

- `INSTANCE_ID`: The ID of the instance that you are updating.
- `CPU_COUNT`: The number of N2 vCPUs that you want for the instance. N2 is the default. Valid values include the following:

  - `2`: 2 vCPUs, 16 GB RAM
  - `4`: 4 vCPUs, 32 GB RAM
  - `8`: 8 vCPUs, 64 GB RAM
  - `16`: 16 vCPUs, 128 GB RAM
  - `32`: 32 vCPUs, 256 GB RAM
  - `64`: 64 vCPUs, 512 GB RAM
  - `96`: 96 vCPUs, 768 GB RAM
  - `128`: 128 vCPUs, 864 GB RAM
- `MACHINE_TYPE`: This parameter is optional when you deploy N2 machines. To deploy the C4A Axion-based machine series ([Preview](https://cloud.google.com/products?e=48754805#product-launch-stages)), or to migrate between C4A and N2 machines, set this parameter using the following values.

  **Note:** Deploy the C4A 1 vCPU machine type for development and sandbox workloads only. The 1 vCPU machine type doesn't have the `lssd` suffix. For a C4A 1 vCPU deployment, you can also choose to use only `CPU_COUNT`.

  When you use `MACHINE_TYPE` and `CPU_COUNT` together, the values in `CPU_COUNT` and `MACHINE_TYPE` must match; otherwise, you get an error.

  For the C4A Axion-based machine series, choose one of the following machine types:

  - `c4a-highmem-1`
  - `c4a-highmem-4-lssd`
  - `c4a-highmem-8-lssd`
  - `c4a-highmem-16-lssd`
  - `c4a-highmem-32-lssd`
  - `c4a-highmem-48-lssd`
  - `c4a-highmem-64-lssd`
  - `c4a-highmem-72-lssd`

  To deploy C4A machine types with 4 vCPUs or more, use the `lssd` suffix to enable the ultra-fast cache.

  For more information about using the C4A Axion-based machine series, including the 1 vCPU machine type, see [Considerations when using the C4A Axion-based machine series](/alloydb/docs/cluster-create#considerations-c4a).

  For the N2 x86-based machine series, use the following values:

  - `n2-highmem-2`
  - `n2-highmem-4`
  - `n2-highmem-8`
  - `n2-highmem-16`
  - `n2-highmem-32`
  - `n2-highmem-64`
  - `n2-highmem-96`
  - `n2-highmem-128`

  **Note:** For N2 deployments, you can choose to use only the `CPU_COUNT` parameter.
- `REGION_ID`: The region where the instance is placed.
- `CLUSTER_ID`: The ID of the cluster where the instance is placed.
- `PROJECT_ID`: The ID of the project where the cluster is placed.
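For example, a hypothetical invocation that scales an instance named `my-primary` in cluster `my-cluster` (region `us-central1`, project `my-project`) to 16 vCPUs might look like the following. All of these resource names are placeholders, and `--machine-type` is omitted because it is optional for N2 machines.

    # Scale the instance to the 16 vCPU (128 GB RAM) N2 machine type.
    # The instance, cluster, region, and project names are placeholders.
    gcloud alloydb instances update my-primary \
        --cpu-count=16 \
        --region=us-central1 \
        --cluster=my-cluster \
        --project=my-project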
If the command returns an error message that includes the phrase `invalid cluster state MAINTENANCE`, then the cluster is undergoing routine maintenance. This temporarily disallows instance reconfiguration. Run the command again after the cluster returns to a `READY` state. To check on the cluster's status, see [View cluster details](/alloydb/docs/cluster-view).

### Accelerate machine type updates

To update the machine type faster, use the `FORCE_APPLY` option with the `gcloud beta alloydb instances update` command.

**Note:** This approach is best suited for development environments because it lets you change the machine type more quickly, but it can result in longer instance downtime.

    gcloud beta alloydb instances update INSTANCE_ID \
        --cpu-count=CPU_COUNT \
        --machine-type=MACHINE_TYPE \
        --region=REGION_ID \
        --cluster=CLUSTER_ID \
        --project=PROJECT_ID \
        --update-mode=FORCE_APPLY

- The instance experiences approximately one minute of downtime.
- The machine type of the instance changes after 10 to 15 minutes.
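To confirm that the new machine type has taken effect, one option is to describe the instance and inspect its machine configuration. The following is a sketch only; it assumes the instance resource reports its vCPU count under a `machineConfig.cpuCount` field, so verify the field path against the full `describe` output in your environment.

    # Print the instance's current vCPU count (the field path is an assumption;
    # inspect the full describe output if it differs in your environment).
    gcloud alloydb instances describe INSTANCE_ID \
        --cluster=CLUSTER_ID \
        --region=REGION_ID \
        --project=PROJECT_ID \
        --format="value(machineConfig.cpuCount)"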
## Scale the node count of a read pool instance

AlloyDB lets you scale the number of nodes in a read pool instance without any downtime at the instance level. When you increase the node count, client connections remain unaffected.

When you decrease the node count, any clients connected to a node that's being shut down can reconnect to the other nodes using the instance endpoint.

### Console

1. In the Google Cloud console, go to the **Clusters** page.

   [Go to Clusters](https://console.cloud.google.com/alloydb/clusters)
2. Click a cluster in the **Resource Name** column.
3. On the **Overview** page, go to the **Instances in your cluster** section, and click **Edit read pool**.

   Note that this action is not available if the page reports a cluster **Status** of **Maintenance**. The action becomes available again after **Status** changes to **Ready**.
4. In the **Node count** field, enter a node count.

   **Note:** You can have a maximum of 20 nodes across all the read pool instances in a cluster.
5. Click **Update read pool**.

### gcloud

To use the gcloud CLI, you can [install and initialize](/sdk/docs/install) the Google Cloud CLI, or you can use [Cloud Shell](/shell/docs/using-cloud-shell).

Use the [`gcloud alloydb instances update`](/sdk/gcloud/reference/alloydb/instances/update) command to change the number of nodes in a read pool instance.

    gcloud alloydb instances update INSTANCE_ID \
        --read-pool-node-count=NODE_COUNT \
        --region=REGION_ID \
        --cluster=CLUSTER_ID \
        --project=PROJECT_ID

Replace the following:

- `INSTANCE_ID`: The ID of the read pool instance.
- `NODE_COUNT`: The number of nodes in the read pool instance. Specify a number from `1` through `20`, inclusive. Note that you cannot have more than 20 nodes across all read pool instances in a cluster.
- `REGION_ID`: The region where the instance is placed.
- `CLUSTER_ID`: The ID of the cluster where the instance is placed.
- `PROJECT_ID`: The ID of the project where the cluster is placed.

If the command returns an error message that includes the phrase `invalid cluster state MAINTENANCE`, then the cluster is undergoing routine maintenance. This temporarily disallows instance reconfiguration. Run the command again after the cluster returns to a `READY` state. To check on the cluster's status, see [View cluster details](/alloydb/docs/cluster-view).
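If you prefer to check the cluster's state from the gcloud CLI before retrying, the following is a minimal sketch. It assumes the cluster resource exposes its state (for example, `READY` or `MAINTENANCE`) in a `state` field; all IDs are placeholders.

    # Print the cluster's current state; retry the update once it reports READY.
    # The state field name is an assumption; inspect the full describe output if needed.
    gcloud alloydb clusters describe CLUSTER_ID \
        --region=REGION_ID \
        --project=PROJECT_ID \
        --format="value(state)"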