Modify an instance
After you create a Bigtable instance, you can update the following settings without any downtime:
- The autoscaling settings for the instance's clusters. You can enable or disable autoscaling for an instance's clusters or configure the settings for clusters that already have autoscaling enabled.
- The number of nodes in manually scaled clusters. After you add or remove nodes, it typically takes a few minutes under load for Bigtable to optimize the cluster's performance.
- The number of clusters in the instance. After you add a cluster, it takes time for Bigtable to replicate your data to the new cluster. New clusters are replicated from the geographically nearest cluster in the instance. In general, the greater the distance, the longer replication takes.
- The application profiles for the instance, which contain replication settings
- The labels for the instance, which provide metadata about the instance
- The display name for the instance
You can change a cluster ID only by deleting and recreating the cluster.
To change any of the following, you must create a new instance with your preferred settings, export your data from the old instance, import your data into the new instance, and then delete the old instance.
- Instance ID
- Storage type (SSD or HDD)
- Customer-managed encryption key (CMEK) configuration
Before you begin
If you want to use the command-line interfaces for Bigtable, install the Google Cloud CLI and the cbt CLI if you haven't already.
Configure autoscaling
You can enable or disable autoscaling for any existing cluster. You can also change the CPU utilization target, minimum number of nodes, and maximum number of nodes for a cluster. For guidance on choosing your autoscaling settings, see Autoscaling. You can't use the cbt CLI to configure autoscaling.
Enable autoscaling
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
Select Autoscaling.
Enter values for the following:
- Minimum number of nodes
- Maximum number of nodes
- CPU utilization target
- Storage utilization target
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:
gcloud bigtable instances list
If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:
gcloud bigtable clusters list --instances=INSTANCE_ID
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the bigtable clusters update command to enable autoscaling:
gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --autoscaling-max-nodes=AUTOSCALING_MAX_NODES \
    --autoscaling-min-nodes=AUTOSCALING_MIN_NODES \
    --autoscaling-cpu-target=AUTOSCALING_CPU_TARGET \
    --autoscaling-storage-target=AUTOSCALING_STORAGE_TARGET
Provide the following:
CLUSTER_ID: The permanent identifier for the cluster.
INSTANCE_ID: The permanent identifier for the instance.
AUTOSCALING_MAX_NODES: The maximum number of nodes.
AUTOSCALING_MIN_NODES: The minimum number of nodes.
AUTOSCALING_CPU_TARGET: The CPU utilization target percentage that Bigtable maintains by adding or removing nodes. This value must be from 10 to 80.
AUTOSCALING_STORAGE_TARGET: The storage utilization target in GiB per node that Bigtable maintains by adding or removing nodes.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
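For example, to enable autoscaling on a hypothetical cluster named my-cluster in an instance named my-instance, scaling between 1 and 5 nodes at a 60% CPU target, you might run the following (the IDs and target values are illustrative, not defaults):

```shell
# Enable autoscaling on an example cluster; IDs and targets are illustrative.
gcloud bigtable clusters update my-cluster \
    --instance=my-instance \
    --autoscaling-min-nodes=1 \
    --autoscaling-max-nodes=5 \
    --autoscaling-cpu-target=60 \
    --autoscaling-storage-target=2560
```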
Disable autoscaling
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
Select Manual node allocation.
Enter the number of nodes for the cluster in the Quantity field.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:
gcloud bigtable instances list
If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:
gcloud bigtable clusters list --instances=INSTANCE_ID
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the bigtable clusters update command to disable autoscaling and configure a constant number of nodes:
gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --num-nodes=NUM_NODES \
    --disable-autoscaling
Provide the following:
CLUSTER_ID: The permanent identifier for the cluster.
INSTANCE_ID: The permanent identifier for the instance.
NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a non-zero value.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
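For example, to switch a hypothetical cluster named my-cluster back to manual scaling with a constant 3 nodes (the IDs and node count are illustrative):

```shell
# Disable autoscaling and pin the example cluster at 3 nodes.
gcloud bigtable clusters update my-cluster \
    --instance=my-instance \
    --num-nodes=3 \
    --disable-autoscaling
```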
Change autoscaling settings
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
Enter new values for any of the following that you want to change:
- Minimum number of nodes
- Maximum number of nodes
- CPU utilization target
- Storage utilization target
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:
gcloud bigtable instances list
If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:
gcloud bigtable clusters list --instances=INSTANCE_ID
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the bigtable clusters update command to update the settings for autoscaling:
gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --autoscaling-max-nodes=AUTOSCALING_MAX_NODES \
    --autoscaling-min-nodes=AUTOSCALING_MIN_NODES \
    --autoscaling-cpu-target=AUTOSCALING_CPU_TARGET \
    --autoscaling-storage-target=AUTOSCALING_STORAGE_TARGET
Provide the following:
CLUSTER_ID: The permanent identifier for the cluster.
INSTANCE_ID: The permanent identifier for the instance.
The command accepts optional autoscaling flags. You can use all of the flags or just the flags for the values that you want to change.
AUTOSCALING_MAX_NODES: The maximum number of nodes.
AUTOSCALING_MIN_NODES: The minimum number of nodes.
AUTOSCALING_CPU_TARGET: The CPU utilization target percentage that Bigtable maintains by adding or removing nodes. This value must be from 10 to 80.
AUTOSCALING_STORAGE_TARGET: The storage utilization target in GiB per node that Bigtable maintains by adding or removing nodes.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
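Because the autoscaling flags are optional, you can change a single setting without restating the others. For example, to raise only the CPU target on a hypothetical cluster named my-cluster (the IDs and value are illustrative):

```shell
# Change only the CPU utilization target; omitted autoscaling flags
# keep their current values.
gcloud bigtable clusters update my-cluster \
    --instance=my-instance \
    --autoscaling-cpu-target=70
```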
Add or remove nodes manually
In most cases, we recommend that you enable autoscaling. If you choose not to, and your cluster's node scaling mode is manual, you can add or remove nodes, and the number of nodes remains constant until you change it again. To review the default node quotas per zone per Google Cloud project, see Node quotas. If you need to provision more nodes than the default, you can request more.
To change the number of nodes in a cluster that uses manual scaling:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
In the Manual node allocation section, enter the number of nodes for the cluster in the Quantity field.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:
gcloud bigtable instances list
If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:
gcloud bigtable clusters list --instances=INSTANCE_ID
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the bigtable clusters update command to change the number of nodes:
gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --num-nodes=NUM_NODES
Provide the following:
CLUSTER_ID: The permanent identifier for the cluster.
INSTANCE_ID: The permanent identifier for the instance.
NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a non-zero value.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
cbt
If you don't know the instance ID, use the listinstances command to view a list of your project's instances:
cbt listinstances
If you don't know the instance's cluster IDs, use the listclusters command to view a list of clusters in the instance:
cbt -instance=INSTANCE_ID listclusters
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the updatecluster command to change the number of nodes:
cbt -instance=INSTANCE_ID updatecluster CLUSTER_ID NUM_NODES
Provide the following:
INSTANCE_ID: The permanent identifier for the instance.
CLUSTER_ID: The permanent identifier for the cluster.
NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a non-zero value.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
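For example, to set a hypothetical cluster named my-cluster in the instance my-instance to 4 nodes with the cbt CLI (the IDs and node count are illustrative):

```shell
# Manually scale the example cluster to 4 nodes with the cbt CLI.
cbt -instance=my-instance updatecluster my-cluster 4
```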
Add a cluster
You can add clusters to an existing instance. An instance can have clusters in up to 8 regions where Bigtable is available. Each zone in a region can contain only one cluster. The ideal locations for additional clusters depend on your use case.
If your instance is protected by CMEK, each new cluster must use a CMEK key that is in the same region as the cluster. Before you add a new cluster to a CMEK-protected instance, identify or create a CMEK key in the region where you plan to locate the cluster.
Before you add clusters to a single-cluster instance, read about the restrictions that apply when you change garbage collection policies on replicated tables. Then see examples of replication settings for recommendations.
To add a cluster to an instance:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Add cluster.
If this button is disabled, the instance already has the maximum number of clusters.
Enter a cluster ID and select the cluster's region and zone.
Enter the number of nodes for the cluster.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
If the instance is CMEK-protected, select or enter a customer-managed key. The CMEK key must be in the same region as the cluster.
Click Add.
Repeat these steps for each additional cluster, then click Save. Bigtable creates the cluster and starts replicating your data to the new cluster. You may see CPU utilization increase as replication begins.
Review the replication settings in the default app profile to see if they make sense for your replication use case. You might need to update the default app profile or create custom app profiles.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:
gcloud bigtable instances list
If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:
gcloud bigtable clusters list --instances=INSTANCE_ID
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the bigtable clusters create command to add a cluster:
gcloud bigtable clusters create CLUSTER_ID \
    --async \
    --instance=INSTANCE_ID \
    --zone=ZONE \
    [--num-nodes=NUM_NODES] \
    [--autoscaling-min-nodes=AUTOSCALING_MIN_NODES \
    --autoscaling-max-nodes=AUTOSCALING_MAX_NODES \
    --autoscaling-cpu-target=AUTOSCALING_CPU_TARGET \
    --autoscaling-storage-target=AUTOSCALING_STORAGE_TARGET] \
    [--kms-key=KMS_KEY --kms-keyring=KMS_KEYRING \
    --kms-location=KMS_LOCATION --kms-project=KMS_PROJECT]
Provide the following:
CLUSTER_ID: The permanent identifier for the cluster.
INSTANCE_ID: The permanent identifier for the instance.
ZONE: The zone where the cluster runs.
Each zone in a region can contain only one cluster. For example, if an instance has a cluster in us-east1-b, you can add a cluster in a different zone in the same region, such as us-east1-c, or a zone in a separate region, such as europe-west2-a. View the zone list.
The --async flag is not required but is strongly recommended. Without this flag, the command might time out before the operation is complete. Bigtable will continue to create the cluster in the background.
The command accepts the following optional flags:
--kms-key=KMS_KEY: The CMEK key in use by the cluster. You can add CMEK clusters only to instances that are already CMEK-protected.
--kms-keyring=KMS_KEYRING: The KMS keyring ID for the key.
--kms-location=KMS_LOCATION: The Google Cloud location for the key.
--kms-project=KMS_PROJECT: The Google Cloud project ID for the key.
--storage-type=STORAGE_TYPE: The type of storage to use for the cluster. Each cluster in an instance must use the same storage type. Accepts the values SSD and HDD. The default value is SSD.
If no value is set for the --num-nodes option, Bigtable allocates nodes to the cluster automatically based on your data footprint and optimizes for 50% storage utilization. This automatic allocation of nodes has a pricing impact. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a non-zero value. In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
For autoscaling, provide all autoscaling- flags and do not use num-nodes. See Autoscaling for guidance on choosing the values for your autoscaling settings. Replace the following:
AUTOSCALING_MIN_NODES: The minimum number of nodes for the cluster.
AUTOSCALING_MAX_NODES: The maximum number of nodes for the cluster.
AUTOSCALING_CPU_TARGET: The target CPU utilization for the cluster. This value must be from 10 to 80.
AUTOSCALING_STORAGE_TARGET: The storage utilization target in GiB per node that Bigtable maintains by adding or removing nodes.
Review the replication settings in the default app profile to see if they make sense for your replication use case. You might need to update the default app profile or create custom app profiles.
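As a concrete illustration, the following adds a hypothetical manually scaled cluster named my-cluster-c2 in us-east1-c to an instance named my-instance (all IDs and values here are illustrative):

```shell
# Add an example manually scaled cluster in another zone of the same region.
# --async returns immediately; Bigtable finishes creating the cluster
# in the background.
gcloud bigtable clusters create my-cluster-c2 \
    --async \
    --instance=my-instance \
    --zone=us-east1-c \
    --num-nodes=3
```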
cbt
If you don't know the instance ID, use the listinstances command to view a list of your project's instances:
cbt listinstances
If you don't know the instance's cluster IDs, use the listclusters command to view a list of clusters in the instance:
cbt -instance=INSTANCE_ID listclusters
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the createcluster command to add a cluster:
cbt -instance=INSTANCE_ID \
    createcluster CLUSTER_ID \
    ZONE \
    NUM_NODES \
    STORAGE_TYPE
Provide the following:
INSTANCE_ID: The permanent identifier for the instance.
CLUSTER_ID: The permanent identifier for the cluster.
ZONE: The zone where the cluster runs.
Each zone in a region can contain only one cluster. For example, if an instance has a cluster in us-east1-b, you can add a cluster in a different zone in the same region, such as us-east1-c, or a zone in a separate region, such as europe-west2-a. View the zone list.
NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a non-zero value.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
STORAGE_TYPE: The type of storage to use for the cluster. Each cluster in an instance must use the same storage type. Accepts the values SSD and HDD.
Review the replication settings in the default app profile to see if they make sense for your replication use case. You might need to update the default app profile or create custom app profiles.
Delete a cluster
If an instance has multiple clusters, you can delete all but 1 of the clusters. Deleting all but 1 cluster automatically disables replication.
In some cases, Bigtable does not allow you to delete a cluster:
- If one of your application profiles routes all traffic to a single cluster, Bigtable will not allow you to delete that cluster. You must edit or delete the application profile before you can remove the cluster.
- If you add new clusters to an existing instance, you cannot delete clusters in that instance until the initial data copy to the new clusters is complete.
To delete a cluster from an instance:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Delete cluster for the cluster that you want to delete.
To cancel the delete operation, click Undo, which is available until you click Save. Otherwise, click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:
gcloud bigtable instances list
If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:
gcloud bigtable clusters list --instances=INSTANCE_ID
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the bigtable clusters delete command to delete the cluster:
gcloud bigtable clusters delete CLUSTER_ID \
    --instance=INSTANCE_ID
Provide the following:
CLUSTER_ID: The permanent identifier for the cluster.
INSTANCE_ID: The permanent identifier for the instance.
cbt
If you don't know the instance ID, use the listinstances command to view a list of your project's instances:
cbt listinstances
If you don't know the instance's cluster IDs, use the listclusters command to view a list of clusters in the instance:
cbt -instance=INSTANCE_ID listclusters
Replace INSTANCE_ID with the permanent identifier for the instance.
Use the deletecluster command to delete the cluster:
cbt -instance=INSTANCE_ID deletecluster CLUSTER_ID
Provide the following:
INSTANCE_ID: The permanent identifier for the instance.
CLUSTER_ID: The permanent identifier for the cluster.
Move data to a new location
To move the data in a Bigtable instance to a new zone or region, add a new cluster in the location that you want to move to, and then delete the cluster in the location you want to move from. The deleted cluster remains available until data has been replicated to the new cluster, so you don't have to worry about any requests failing. Bigtable replicates all data to the new cluster automatically.
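The two-step move described above might look like the following for a hypothetical instance named my-instance moving from us-east1-b to europe-west2-a (all IDs, zones, and node counts are illustrative):

```shell
# Step 1: add a cluster in the destination zone.
gcloud bigtable clusters create my-cluster-eu \
    --async \
    --instance=my-instance \
    --zone=europe-west2-a \
    --num-nodes=3

# Step 2: after replication to the new cluster completes,
# delete the cluster in the old location.
gcloud bigtable clusters delete my-cluster-us \
    --instance=my-instance
```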
Manage app profiles
Application profiles, or app profiles, control how your applications connect to an instance that uses replication. Every instance with more than 1 cluster has its own default app profile. You can also create many different custom app profiles for each instance, using a different app profile for each kind of application that you run.
To learn how to set up an instance's app profiles, see Configuring App Profiles. For examples of settings you can use to implement common use cases, see Examples of replication configurations.
Manage labels
Labels are key-value pairs that you can use to group related instances and store metadata about an instance.
To learn how to manage labels, see Adding or updating an instance's labels and Removing a label from an instance.
Change an instance's display name
To change an instance's display name, which the Google Cloud console uses to identify the instance:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Edit the instance name, then click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:
gcloud bigtable instances list
Use the bigtable instances update command to update the display name:
gcloud bigtable instances update INSTANCE_ID \
    --display-name=DISPLAY_NAME
Provide the following:
INSTANCE_ID: The permanent identifier for the instance.
DISPLAY_NAME: A human-readable name that identifies the instance in the Google Cloud console.
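For example, to rename a hypothetical instance named my-instance (the ID and display name are illustrative):

```shell
# Change only the display name; the instance ID is permanent.
gcloud bigtable instances update my-instance \
    --display-name="My production instance"
```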
cbt
This feature is not available in the cbt CLI.
What's next
- Learn how to add, update, and remove labels for an instance.
- Find out how to create and update an instance's app profiles, which contain settings for replication.