Create Hyperdisk Storage Pools


Hyperdisk Storage Pools are a block storage resource that helps you manage your Hyperdisk block storage in aggregate. Hyperdisk Storage Pools are available in two variants: Hyperdisk Throughput Storage Pools and Hyperdisk Balanced Storage Pools.

You must specify the following properties when creating a storage pool:

  • Zone
  • Storage pool type
  • Capacity provisioning type
  • Pool provisioned capacity
  • Performance provisioning type
  • Pool provisioned IOPS and throughput

You can use Standard capacity, Advanced capacity, Standard performance, or Advanced performance provisioning types with Hyperdisk Storage Pools. The example after the following list illustrates the difference:

  • Standard capacity: The capacity provisioned for each disk created in the storage pool is deducted from the total provisioned capacity of the storage pool.
  • Advanced capacity: The storage pool benefits from thin-provisioning and data reduction. Only the amount of actual written data is deducted from the total provisioned capacity of the storage pool.
  • Standard performance: The performance provisioned for each disk created in the storage pool is deducted from the total provisioned performance of the storage pool.
  • Advanced performance: The performance provisioned for each disk benefits from thin-provisioning. Only the amount of performance used by a disk is deducted from the total provisioned performance of the storage pool.
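For example, suppose you create a storage pool with 10 TiB of provisioned capacity and create three disks in it, each with 2 TiB of provisioned disk capacity, but with only 1 TiB of data written across them in total (these numbers are illustrative, not from this page). With Standard capacity, the full 6 TiB of provisioned disk capacity is deducted from the pool's provisioned capacity. With Advanced capacity, only the roughly 1 TiB of written data (or less, after data reduction) is deducted. Advanced performance works analogously: only the performance the disks actually use is deducted, rather than the performance provisioned for each disk.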

    Before you begin

    • If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine as follows.

      Select the tab for how you plan to use the samples on this page:

      Console

      When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

      gcloud

      1. Install the Google Cloud CLI, then initialize it by running the following command:

        gcloud init
      2. Set a default region and zone.

      Go

      To use the Go samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

      1. Install the Google Cloud CLI.
      2. To initialize the gcloud CLI, run the following command:

        gcloud init
      3. If you're using a local shell, then create local authentication credentials for your user account:

        gcloud auth application-default login

        You don't need to do this if you're using Cloud Shell.

      For more information, see Set up authentication for a local development environment.

      Java

      To use the Java samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

      1. Install the Google Cloud CLI.
      2. To initialize the gcloud CLI, run the following command:

        gcloud init
      3. If you're using a local shell, then create local authentication credentials for your user account:

        gcloud auth application-default login

        You don't need to do this if you're using Cloud Shell.

      For more information, see Set up authentication for a local development environment.

      Node.js

      To use the Node.js samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

      1. Install the Google Cloud CLI.
      2. To initialize the gcloud CLI, run the following command:

        gcloud init
      3. If you're using a local shell, then create local authentication credentials for your user account:

        gcloud auth application-default login

        You don't need to do this if you're using Cloud Shell.

      For more information, see Set up authentication for a local development environment.

      REST

      To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

        Install the Google Cloud CLI, then initialize it by running the following command:

        gcloud init

      For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

    Required roles and permissions

    To get the permissions that you need to create a storage pool, ask your administrator to grant you the following IAM roles on the project:

    • Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)
    • To connect to a VM instance that can run as a service account: Service Account User (roles/iam.serviceAccountUser)

    For more information about granting roles, see Manage access to projects, folders, and organizations.
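
    For example, an administrator might grant the Compute Instance Admin (v1) role by using the gcloud CLI, as sketched in the following hypothetical command, where PROJECT_ID and USER_EMAIL are placeholders:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:USER_EMAIL" \
        --role="roles/compute.instanceAdmin.v1"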

    These predefined roles contain the permissions required to create a storage pool. The exact permissions that are required are listed in the following section.

    Required permissions

    The following permissions are required to create a storage pool:

    • compute.storagePools.create on the project
    • compute.storagePools.setLabels on the project

    You might also be able to get these permissions with custom roles or other predefined roles.

    Limitations

    Take note of the following limitations when creating Hyperdisk Storage Pools:

    Resource limits:

    • You can create a Hyperdisk Storage Pool with up to 1 PiB of provisioned capacity.
    • You can create a maximum of 5 storage pools per hour.
    • You can create a maximum of 10 storage pools per day.
    • You can create at most 10 storage pools per project.
    • You can't change the provisioning type of a storage pool after you create it. For example, you can't change a Standard capacity storage pool to an Advanced capacity storage pool, or an Advanced performance storage pool to a Standard performance storage pool.
    • Storage pools are a zonal resource.
    • You can create up to 1,000 disks in a storage pool.
    • You can use Hyperdisk Storage Pools with only Compute Engine. Cloud SQL instances cannot use Hyperdisk Storage Pools.
    • You can change the provisioned capacity or performance of a storage pool at most twice in a 24-hour period.

    Limits for disks in a storage pool:

    • Only new disks in the same project and zone can be created in a storage pool.
    • You can't move existing disks into or out of a storage pool. Instead, you must recreate the disk from a snapshot, as sketched in the example after this list. For more information, see Change the disk type.
    • To create boot disks in a storage pool, you must use a Hyperdisk Balanced Storage Pool.
    • Storage pools don't support regional disks.
    • You can't clone, create instant snapshots of, or configure Persistent Disk Asynchronous Replication for disks in a storage pool.
    • Hyperdisk Balanced disks in a storage pool can't be attached to multiple compute instances.
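
    As noted in the list above, moving an existing disk into a storage pool requires recreating it from a snapshot. A hypothetical gcloud CLI sketch of that workflow follows; the disk, snapshot, and pool names, the zone, and the disk type are placeholders, and the flags you need depend on your disk configuration:

    gcloud compute snapshots create example-snapshot \
        --source-disk=example-disk \
        --source-disk-zone=us-central1-a

    gcloud compute disks create example-disk-in-pool \
        --zone=us-central1-a \
        --type=hyperdisk-balanced \
        --source-snapshot=example-snapshot \
        --storage-pool=example-pool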

    Capacity ranges and provisioned performance limits

    When creating a storage pool, the provisioned capacity, IOPS, and throughput are subject to the limits described in Limits for storage pools.

    Create a Hyperdisk Storage Pool

    To create a new Hyperdisk Storage Pool, use the Google Cloud console, the Google Cloud CLI, REST, or the Cloud Client Libraries for Go, Java, or Node.js.

    Console

    1. Go to the Create a storage pool page in the Google Cloud console.
      Go to the Create Storage Pool page
    2. In the Name field, enter a unique name for the storage pool.
    3. Optional: In the Description field, enter a description for the storage pool.
    4. Select the Region and Zone in which to create the storage pool.
    5. Choose a value for the Storage pool type.
    6. Choose a provisioning type in the Capacity type field and specify the capacity to provision for the storage pool in the Storage pool capacity field. You can specify a size from 10 TiB to 1 PiB.

      To create a storage pool with large capacity, you might have to request a higher quota.

    7. Choose a provisioning type in the Performance type field.

    8. For Hyperdisk Balanced Storage Pools, in the Provisioned IOPS field, enter the IOPS to provision for the storage pool.

    9. For a Hyperdisk Throughput Storage Pool or Hyperdisk Balanced Storage Pool, in the Provisioned throughput field, enter the throughput to provision for the storage pool.

    10. Click Submit to create the storage pool.

    gcloud

    To create a Hyperdisk Storage Pool, use the gcloud compute storage-pools create command.

    gcloud compute storage-pools create NAME \
        --zone=ZONE \
        --storage-pool-type=STORAGE_POOL_TYPE \
        --capacity-provisioning-type=CAPACITY_TYPE \
        --provisioned-capacity=POOL_CAPACITY \
        --performance-provisioning-type=PERFORMANCE_TYPE \
        --provisioned-iops=IOPS \
        --provisioned-throughput=THROUGHPUT \
        --description=DESCRIPTION

    Replace the following:

    • NAME: the unique storage pool name.
    • ZONE: the zone in which to create the storage pool, for example, us-central1-a.
    • STORAGE_POOL_TYPE: the type of disk to store in the storage pool. The allowed values are hyperdisk-throughput and hyperdisk-balanced.
    • CAPACITY_TYPE: Optional: the capacity provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
    • POOL_CAPACITY: the total capacity to provision for the new storage pool, specified in GiB by default.
    • PERFORMANCE_TYPE: Optional: the performance provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
    • IOPS: the IOPS to provision for the storage pool. You can use this flag only with Hyperdisk Balanced Storage Pools.
    • THROUGHPUT: the throughput in MBps to provision for the storage pool.
    • DESCRIPTION: Optional: a text string that describes the storage pool.
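
    For example, the following command creates a Hyperdisk Balanced Storage Pool named example-pool with 10 TiB (10,240 GiB) of provisioned capacity, 10,000 provisioned IOPS, 1,024 MBps of provisioned throughput, and Advanced capacity and performance provisioning. The name, zone, and values are illustrative placeholders, not recommendations:

    gcloud compute storage-pools create example-pool \
        --zone=us-central1-a \
        --storage-pool-type=hyperdisk-balanced \
        --capacity-provisioning-type=advanced \
        --provisioned-capacity=10240 \
        --performance-provisioning-type=advanced \
        --provisioned-iops=10000 \
        --provisioned-throughput=1024 \
        --description="Example Hyperdisk Balanced Storage Pool"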

    REST

    Construct a POST request to create a Hyperdisk Storage Pool by using the storagePools.insert method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/storagePools
    
    {
        "name": "NAME",
        "description": "DESCRIPTION",
        "poolProvisionedCapacityGb": "POOL_CAPACITY",
        "storagePoolType": "projects/PROJECT_ID/zones/ZONE/storagePoolTypes/STORAGE_POOL_TYPE",
        "poolProvisionedIops": "IOPS",
        "poolProvisionedThroughput": "THROUGHPUT",
        "capacityProvisioningType": "CAPACITY_TYPE",
        "performanceProvisioningType": "PERFORMANCE_TYPE"
    }
    

    Replace the following:

    • PROJECT_ID: the project ID
    • ZONE: the zone in which to create the storage pool, for example, us-central1-a.
    • NAME: a unique name for the storage pool.
    • DESCRIPTION: Optional: a text string that describes the storage pool.
    • POOL_CAPACITY: the total capacity to provision for the new storage pool, specified in GiB by default.
    • STORAGE_POOL_TYPE: the type of disk to store in the storage pool. The allowed values are hyperdisk-throughput and hyperdisk-balanced.
    • IOPS: Optional: the IOPS to provision for the storage pool. You can use this flag only with Hyperdisk Balanced Storage Pools.
    • THROUGHPUT: Optional: the throughput in MBps to provision for the storage pool.
    • CAPACITY_TYPE: Optional: the capacity provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
    • PERFORMANCE_TYPE: Optional: the performance provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
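
    For example, the following request creates a Hyperdisk Balanced Storage Pool with Advanced capacity and performance provisioning. The project ID, zone, name, and values are illustrative placeholders:

    POST https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/storagePools

    {
        "name": "example-pool",
        "description": "Example Hyperdisk Balanced Storage Pool",
        "poolProvisionedCapacityGb": "10240",
        "storagePoolType": "projects/my-project/zones/us-central1-a/storagePoolTypes/hyperdisk-balanced",
        "poolProvisionedIops": "10000",
        "poolProvisionedThroughput": "1024",
        "capacityProvisioningType": "advanced",
        "performanceProvisioningType": "advanced"
    }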

    Go

    
    import (
    	"context"
    	"fmt"
    	"io"

    	compute "cloud.google.com/go/compute/apiv1"
    	computepb "cloud.google.com/go/compute/apiv1/computepb"
    	"google.golang.org/protobuf/proto"
    )

    // createHyperdiskStoragePool creates a new Hyperdisk Storage Pool in the specified project and zone.
    func createHyperdiskStoragePool(w io.Writer, projectId, zone, storagePoolName, storagePoolType string) error {
    	// projectId := "your_project_id"
    	// zone := "europe-west4-b"
    	// storagePoolName := "your_storage_pool_name"
    	// storagePoolType := "projects/your_project_id/zones/europe-west4-b/storagePoolTypes/hyperdisk-balanced"
    
    	ctx := context.Background()
    	client, err := compute.NewStoragePoolsRESTClient(ctx)
    	if err != nil {
    		return fmt.Errorf("NewStoragePoolsRESTClient: %v", err)
    	}
    	defer client.Close()
    
    	// Create the storage pool resource
    	resource := &computepb.StoragePool{
    		Name:                        proto.String(storagePoolName),
    		Zone:                        proto.String(zone),
    		StoragePoolType:             proto.String(storagePoolType),
    		CapacityProvisioningType:    proto.String("advanced"),
    		PerformanceProvisioningType: proto.String("advanced"),
    		PoolProvisionedCapacityGb:   proto.Int64(10240),
    		PoolProvisionedIops:         proto.Int64(10000),
    		PoolProvisionedThroughput:   proto.Int64(1024),
    	}
    
    	// Create the insert storage pool request
    	req := &computepb.InsertStoragePoolRequest{
    		Project:             projectId,
    		Zone:                zone,
    		StoragePoolResource: resource,
    	}
    
    	// Send the insert storage pool request
    	op, err := client.Insert(ctx, req)
    	if err != nil {
    		return fmt.Errorf("Insert storage pool request failed: %v", err)
    	}
    
    	// Wait for the insert storage pool operation to complete
    	if err = op.Wait(ctx); err != nil {
    		return fmt.Errorf("unable to wait for the operation: %w", err)
    	}
    
    	// Retrieve and return the created storage pool
    	storagePool, err := client.Get(ctx, &computepb.GetStoragePoolRequest{
    		Project:     projectId,
    		Zone:        zone,
    		StoragePool: storagePoolName,
    	})
    	if err != nil {
    		return fmt.Errorf("Get storage pool request failed: %v", err)
    	}
    
    	fmt.Fprintf(w, "Hyperdisk Storage Pool created: %v\n", storagePool.GetName())
    	return nil
    }
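
    A minimal, hypothetical way to call this function, assuming it is part of a runnable package with the imports shown above plus "log" and "os", might look like the following; the project ID, zone, and pool name are placeholders:

    func main() {
    	// Placeholder values; replace with your own project, zone, and pool name.
    	storagePoolType := "projects/your_project_id/zones/europe-west4-b/storagePoolTypes/hyperdisk-balanced"
    	if err := createHyperdiskStoragePool(
    		os.Stdout, "your_project_id", "europe-west4-b",
    		"your-storage-pool-name", storagePoolType,
    	); err != nil {
    		log.Fatal(err)
    	}
    }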
    

    Java

    
    import com.google.cloud.compute.v1.InsertStoragePoolRequest;
    import com.google.cloud.compute.v1.Operation;
    import com.google.cloud.compute.v1.StoragePool;
    import com.google.cloud.compute.v1.StoragePoolsClient;
    import java.io.IOException;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    
    public class CreateHyperdiskStoragePool {
      public static void main(String[] args)
              throws IOException, ExecutionException, InterruptedException, TimeoutException {
        // TODO(developer): Replace these variables before running the sample.
        // Project ID or project number of the Google Cloud project you want to use.
        String projectId = "YOUR_PROJECT_ID";
        // Name of the zone in which you want to create the storagePool.
        String zone = "europe-central2-b";
        // Name of the storagePool you want to create.
        String storagePoolName = "YOUR_STORAGE_POOL_NAME";
        // The type of storage pool you want to create. This value uses the following format:
        // "projects/%s/zones/%s/storagePoolTypes/(hyperdisk-throughput|hyperdisk-balanced)"
        String storagePoolType = String.format(
                "projects/%s/zones/%s/storagePoolTypes/hyperdisk-balanced", projectId, zone);
        // Optional: the capacity provisioning type of the storage pool.
        // The allowed values are advanced and standard. If not specified, the value advanced is used.
        String capacityProvisioningType = "advanced";
        // The total capacity to provision for the new storage pool, specified in GiB by default.
        long provisionedCapacity = 128;
        // the IOPS to provision for the storage pool.
        // You can use this flag only with Hyperdisk Balanced Storage Pools.
        long provisionedIops = 3000;
        // the throughput in MBps to provision for the storage pool.
        long provisionedThroughput = 140;
    
        createHyperdiskStoragePool(projectId, zone, storagePoolName, storagePoolType,
                capacityProvisioningType, provisionedCapacity, provisionedIops, provisionedThroughput);
      }
    
      // Creates a hyperdisk storagePool in a project
      public static StoragePool createHyperdiskStoragePool(String projectId, String zone,
                                                    String storagePoolName, String storagePoolType,
                                                    String capacityProvisioningType, long capacity,
                                                    long iops, long throughput)
              throws IOException, ExecutionException, InterruptedException, TimeoutException {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests.
        try (StoragePoolsClient client = StoragePoolsClient.create()) {
          // Create a storagePool.
          StoragePool resource = StoragePool.newBuilder()
                  .setZone(zone)
                  .setName(storagePoolName)
                  .setStoragePoolType(storagePoolType)
                  .setCapacityProvisioningType(capacityProvisioningType)
                  .setPoolProvisionedCapacityGb(capacity)
                  .setPoolProvisionedIops(iops)
                  .setPoolProvisionedThroughput(throughput)
                  .build();
    
          InsertStoragePoolRequest request = InsertStoragePoolRequest.newBuilder()
                  .setProject(projectId)
                  .setZone(zone)
                  .setStoragePoolResource(resource)
                  .build();
    
          // Wait for the insert storage pool operation to complete.
          Operation operation = client.insertAsync(request).get(1, TimeUnit.MINUTES);
    
          if (operation.hasError()) {
            System.out.println("StoragePool creation failed!");
            throw new Error(operation.getError().toString());
          }
    
          // Wait for server update
          TimeUnit.SECONDS.sleep(10);
    
          StoragePool storagePool = client.get(projectId, zone, storagePoolName);
    
          System.out.printf("Storage pool '%s' has been created successfully", storagePool.getName());
    
          return storagePool;
        }
      }
    }

    Node.js

    // Import the Compute library
    const computeLib = require('@google-cloud/compute');
    const compute = computeLib.protos.google.cloud.compute.v1;
    
    // Instantiate a storagePoolClient
    const storagePoolClient = new computeLib.StoragePoolsClient();
    // Instantiate a zoneOperationsClient
    const zoneOperationsClient = new computeLib.ZoneOperationsClient();
    
    /**
     * TODO(developer): Update/uncomment these variables before running the sample.
     */
    // Project ID or project number of the Google Cloud project you want to use.
    const projectId = await storagePoolClient.getProjectId();
    // Name of the zone in which you want to create the storagePool.
    const zone = 'us-central1-a';
    // Name of the storagePool you want to create.
    const storagePoolName = 'storage-pool-name';
    // The type of disk you want to create. This value uses the following format:
    // "projects/{projectId}/zones/{zone}/storagePoolTypes/(hyperdisk-throughput|hyperdisk-balanced)"
    const storagePoolType = `projects/${projectId}/zones/${zone}/storagePoolTypes/hyperdisk-balanced`;
    // Optional: The capacity provisioning type of the storage pool.
    // The allowed values are advanced and standard. If not specified, the value advanced is used.
    const capacityProvisioningType = 'advanced';
    // The total capacity to provision for the new storage pool, specified in GiB by default.
    const provisionedCapacity = 10240;
    // The IOPS to provision for the storage pool.
    // You can use this flag only with Hyperdisk Balanced Storage Pools.
    const provisionedIops = 10000;
    // The throughput in MBps to provision for the storage pool.
    const provisionedThroughput = 1024;
    // Optional: The performance provisioning type of the storage pool.
    // The allowed values are advanced and standard. If not specified, the value advanced is used.
    const performanceProvisioningType = 'advanced';
    
    async function callCreateComputeHyperdiskPool() {
      // Create a storagePool.
      const storagePool = new compute.StoragePool({
        name: storagePoolName,
        poolProvisionedCapacityGb: provisionedCapacity,
        poolProvisionedIops: provisionedIops,
        poolProvisionedThroughput: provisionedThroughput,
        storagePoolType,
        performanceProvisioningType,
        capacityProvisioningType,
        zone,
      });
    
      const [response] = await storagePoolClient.insert({
        project: projectId,
        storagePoolResource: storagePool,
        zone,
      });
    
      let operation = response.latestResponse;
    
      // Wait for the create storage pool operation to complete.
      while (operation.status !== 'DONE') {
        [operation] = await zoneOperationsClient.wait({
          operation: operation.name,
          project: projectId,
          zone: operation.zone.split('/').pop(),
        });
      }
    
      console.log(`Storage pool: ${storagePoolName} created.`);
    }
    
    await callCreateComputeHyperdiskPool();

    What's next?