{% include "_shared/_delete_tutorial_resources.html" with name="Managing workloads and apps with NetApp Cloud Volumes Service" %}

Managing workloads and apps with NetApp Cloud Volumes Service

By Alim Karim, Product Manager, NetApp

NetApp and Google Cloud Platform (GCP) have partnered to offer Cloud Volumes Service, a fully managed, cloud-native data service that provides advanced data management capabilities and performance.

Whether you are looking to migrate existing enterprise and industry-specific apps to GCP, or to build new machine learning (ML) and Kubernetes-based apps that require persistent storage, you can accelerate the deployment time and create rapid clones in a matter of minutes.

NetApp Cloud Volumes Service for GCP

Cloud Volumes Service helps you manage your workloads and apps. Migrate your workloads to the cloud without sacrificing performance. Cloud Volumes Service removes obstacles so you can move more of your file-based apps to GCP, with support for Network File System version 3 (NFSv3) and Server Message Block (SMB). You don't have to re-architect your apps and you get persistent storage for your apps without complexity.

Key features:

  • Fully managed service with NoOps, integrated with the Google Cloud Platform Console.
  • Migrate data between on-premises and GCP.
  • Provision volumes from 1 TB to 100 TB in seconds.
  • Multiprotocol support (NFS and SMB).
  • Protect data with automated, efficient snapshots.
  • Accelerate app development with rapid cloning.
  • Consume cloud services such as analytics, AI, and ML.

Cloud Volumes Service is integrated with the GCP Console and available through GCP Marketplace.

This solution enables you to quickly add multi-protocol workloads as well as build and deploy both Windows-based and UNIX-based apps. You can schedule snapshots of your Cloud Volumes Service and restore snapshots to help keep your data protected. You can also create clones and then migrate them to keep your datasets continuously in sync. Cloud Volumes Service enables you to stay productive across your file services–based workloads, such as analytics, DevOps, and database apps.

Service levels

With three service levels—standard, premium, and extreme—that you can change on demand, Cloud Volumes Service delivers the right performance fit for your workload that you can adjust as the nature of your app changes. The cloud volume scales with the amount of allocated capacity, so performance isn't limited as your dataset expands. You can also increase or decrease the allocated capacity on the fly without having to worry about adding or deleting underlying nodes.

Service level    Throughput per TB    Workload types
Standard         16 MB/s              General purpose, file shares, email, and web
Premium          64 MB/s              Databases and apps
Extreme          128 MB/s             High-performance apps
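Because throughput scales linearly with allocated capacity at the per-TB rates in the table, you can estimate a volume's expected throughput with a quick calculation. The following sketch is illustrative only, not an official sizing tool:

```python
# Per-TB throughput rates from the service-level table above.
SERVICE_LEVELS_MBPS_PER_TB = {
    "standard": 16,
    "premium": 64,
    "extreme": 128,
}

def expected_throughput_mbps(service_level: str, allocated_tb: float) -> float:
    """Return the expected throughput in MB/s for a volume,
    assuming linear scaling with allocated capacity."""
    return SERVICE_LEVELS_MBPS_PER_TB[service_level] * allocated_tb

# A 10 TB premium volume would be expected to deliver 640 MB/s.
print(expected_throughput_mbps("premium", 10))  # 640
```

For example, if a database workload needs roughly 500 MB/s, a 10 TB premium volume or a 4 TB extreme volume would both meet the target.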

Use cases

Cloud Volumes Service supports and expedites the deployment of various cloud-based systems through rapid delivery of shared file systems and a rich set of storage management features. The primary use cases for Cloud Volumes Service include file services, analytics, DevOps, and databases.

The following diagram illustrates the typical architecture of Cloud Volumes Service, combined with a shared VPC topology in GCP:

Architectural diagram of Cloud Volumes Service on GCP

File services

Cloud Volumes Service is a fault-tolerant, scalable platform for creating cloud-native NFS and SMB file systems. As a result of NetApp's long experience delivering enterprise, on-premises, network-attached storage solutions, Cloud Volumes Service comes with a complete range of supporting features:

  • Read-only and read-write client access control.
  • Connections over both NFSv3 and NFSv4 (coming soon).
  • Active Directory (AD) integration for SMB file systems.

This range of features helps you migrate existing apps to GCP and provides you with a platform to develop and maintain a file storage solution in the cloud, saving you time and money by reducing spending on hardware, maintenance, power, cooling, and physical space.

Enterprise apps

You can easily rehost your traditional apps, currently deployed on-premises, to Cloud Volumes Service. This includes the subset of enterprise apps that typically don't require refactoring and whose core functionality for unstructured data storage workflows you want to preserve.

By using Cloud Volumes Service, you can create NFS shares for Linux-based apps and SMB shares for Windows-based apps in seconds. These shares are fully managed. You can scale them up or down for capacity and performance without impacting your workflows or users.

You can preserve app service delivery lifecycles with quick snapshots and copies for development, testing, and staging environments. This capability further accelerates production releases and minimizes the lead time in a true no-ops fashion.

Stateful apps on Google Kubernetes Engine

Google Kubernetes Engine (GKE) enables app teams to containerize existing apps as well as to deploy new app clusters that are location independent.

Stateful app sets for use cases such as analytics, DevOps CI/CD pipelines, and databases often require data persistence. Cloud Volumes Service takes a cloud-native approach to providing persistent, performant endpoints.

For environments of large-scale composite apps, Cloud Volumes Service has the enterprise data management capabilities to help you overcome storage management challenges in provisioning, copying, and protecting the datasets.

Databases

Open source databases are often at the heart of online transaction processing, which can include banking, retail sales, and online purchases. Slow response times often send your customers looking elsewhere. Most of your customers won't wait for your app or web pages to load. High-performance storage is where Cloud Volumes Service can help. Whether you're accessing the primary database or a snapshot copy, you can expect excellent, consistent performance from Cloud Volumes Service.

Cloud Volumes Service supports different levels of performance for each file system. Because database administrators can allocate individual storage pools for hot or cold data, they have fine-grained control over the use of high-performance storage or more cost-effective, high-capacity storage. Cloud Volumes Service helps ensure that file systems are available and resilient against system failures, which simplifies the setup for reliable database services in the cloud.

Getting started with Cloud Volumes Service

Cloud Volumes Service is a fully managed cloud file storage service that provides access to cloud volumes through the NFS or SMB protocol. In this tutorial, you create either an NFS or an SMB volume and learn how to manage it.

Objectives

  • Create an NFS or SMB volume.
  • Mount NFS exports to Compute Engine instances.
  • Map SMB shares from Compute Engine instances.

Costs

This tutorial uses the following billable components of Google Cloud Platform:

  • Compute Engine
  • Cloud Volumes Service, depending on the service level you choose:
    • Standard: $0.10 per GB allocated, 16 MB/s of throughput per TB
    • Premium: $0.20 per GB allocated, 64 MB/s of throughput per TB
    • Extreme: $0.30 per GB allocated, 128 MB/s of throughput per TB

You can use the pricing calculator to generate a cost estimate based on your projected usage. New GCP users might be eligible for a free trial.
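For a rough sense of how the per-GB prices translate into a monthly bill, the following sketch multiplies allocated capacity by the listed rate. It is an illustration only; use the pricing calculator for real estimates:

```python
# Per-GB monthly prices from the Costs list above (illustrative only).
PRICE_PER_GB = {"standard": 0.10, "premium": 0.20, "extreme": 0.30}

def monthly_cost_usd(service_level: str, allocated_gb: int) -> float:
    """Estimated monthly cost: allocated capacity times the per-GB rate."""
    return PRICE_PER_GB[service_level] * allocated_gb

# The minimum volume size (1,024 GiB) at the premium level:
print(monthly_cost_usd("premium", 1024))  # 204.8
```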

Understanding the Cloud Volumes Service architecture

Cloud Volumes Service for GCP leverages the GCP Private Google Access framework. In this framework, you can connect to the Cloud Volumes Service from your VPCs by using private (RFC 1918) addresses. This framework uses Service Networking and VPC peering constructs similar to other GCP services like Cloud SQL.

This design provides enhanced security and complete isolation between tenants, and the setup requires no manual intervention.

The following diagram describes the high-level flow of elements in the control plane (top) and the data plane (bottom) of the service:

Control plane and data plane

Before you begin

You must have the Network Admin role and complete a series of one-time tasks per project and VPC to create a connection from your environment to Cloud Volumes Service.

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a Google Cloud Platform project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your Google Cloud Platform project.

    Learn how to enable billing

  4. In the GCP Console, open Cloud Shell.

    Open Cloud Shell
  5. Enable the Cloud Volumes Service API:

    gcloud --project=my-cvs-prj services enable cloudvolumesgcp-api.netapp.com
  6. Enable the Service Networking API in your project:

    gcloud --project=my-cvs-prj services enable servicenetworking.googleapis.com
  7. Enable the Service Consumer Management API in your project:

    gcloud --project=my-cvs-prj services enable serviceconsumermanagement.googleapis.com

Setting up private services access for Cloud Volumes Service

Cloud Volumes Service uses private services access to create a high-throughput and low-latency data-path connection. You need to perform the following steps once for each project. However, if you are using shared VPC, you need to perform the steps only for each host project.

  1. Create an allocated IP range within your VPC for the Cloud Volumes Service mount points.

    Cloud Volumes Service uses a /28 CIDR range for each consumer service project. If you pass a /24 CIDR range, then 15 consumer service projects can be supported. If you pass a /20 CIDR range, then 255 consumer service projects can be supported. In the case of shared VPC, allocate the range from the host project.

    gcloud --project=my-cvs-prj beta compute addresses create cloudvolumes-reserved-range \
        --global --purpose=VPC_PEERING --prefix-length=24 --network=production-vpc
  2. Create a private service connection to the Cloud Volumes Service endpoint:

    gcloud --project=my-cvs-prj beta compute vpc-peerings connect \
        --service=cloudvolumesgcp-api.netapp.com --ranges=cloudvolumes-reserved-range \
        --network=production-vpc
  3. Check that the connection is established:

    gcloud --project=my-cvs-prj services vpc-peerings list
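The sizing guidance in step 1 comes down to counting /28 blocks inside the allocated range. The following sketch shows that arithmetic with Python's standard ipaddress module. Note that the supported-project counts quoted above (15 for a /24, 255 for a /20) are one lower than the raw block counts, which suggests one /28 may be held in reserve (an inference, not documented behavior):

```python
import ipaddress

def num_28_blocks(cidr: str) -> int:
    """Number of /28 subnets that fit in the given allocated range.
    Cloud Volumes Service carves one /28 per consumer service project."""
    return len(list(ipaddress.ip_network(cidr).subnets(new_prefix=28)))

print(num_28_blocks("10.100.0.0/24"))  # 16 /28 blocks fit in a /24
print(num_28_blocks("10.100.0.0/20"))  # 256 /28 blocks fit in a /20
```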

Regional availability for Cloud Volumes Service on GCP

Cloud Volumes Service for GCP is currently available in the following regions:

  • us-east4
  • us-central1

Workflow for managing cloud volumes

NetApp Cloud Volumes Service for GCP supports both NFS and SMB protocols. The following diagram illustrates the key tasks for managing cloud volumes that use NFS or SMB.

Workflow for managing cloud volumes

Creating and managing NFS volumes

After you create an NFS volume, you mount your NFS exports to Compute Engine instances.

Create an NFS volume

  1. In the GCP Console go to the Volumes page.

    GO TO VOLUMES PAGE

  2. Click Create.

    Cloud Volumes in the GCP console

  3. In the Create File System page, complete the following fields:

    Create File System page

    1. In the Name field, enter a display name for the volume.

    2. From the Region drop-down list, select a GCP region for your volume. For more information about region selection, see Best practices for Compute Engine region selection.

    3. The Volume path must be unique across all your cloud volumes in a project. The system automatically generates a recommended volume path.

    4. For Service level, click the level of performance for the volume. It scales with capacity. For more information, see service levels.

    5. If you want to create a volume based on a snapshot, select the snapshot from the Snapshot drop-down list.

    6. In the Allocated capacity field, enter the capacity for the volume. The minimum size of a cloud volume is 1,024 GiB (1 TiB).

    7. In the Protocol type field, select NFSv3.

    8. The VPC network can be part of a host project in a shared VPC, or it can belong to a standalone project. Select Shared VPC configuration if you have a host project and shared VPC topology. For standalone projects, leave the box cleared.

      Shared VPC configuration box

    9. Under Authorized Network:

      • Select the VPC Network Name from which the volume will be accessible.

      • Optionally, you can specify your custom CIDR range by selecting the Use Address Range box.

        IP address range

    10. To manage export policy rules for the volume, expand Show export policy.

      • Click Add Rule to specify the allowed clients and their access type.
      • In the Allowed clients field, enter the IP address or range of addresses that have access to the cloud volume.
      • Select Read & Write or Read Only to specify the type of access these IP addresses have to the cloud volume.
      • Add rules as needed.

      Export policy page

    11. To manage snapshot policy for the volume, expand Show snapshot policy.

    12. Select Allow automatic snapshots, specify the snapshot schedules, and specify the number of snapshots to keep.

      Snapshot policy page

  4. Click Save to create the volume.

    The new volume appears in the Volumes list.

    Volumes list

Mount NFS exports to Compute Engine instances

  1. In the GCP Console, go to the Volumes page.

    GO TO VOLUMES PAGE

  2. Click the NFS volume for which you want to mount NFS exports.

  3. Scroll to the right, click More, and then click Mount Instructions.

    Create NFS volume

  4. Follow the instructions in the Mount Instructions for NFS window.

    NFS mount instructions

Creating and managing SMB volumes

Before you can create and manage SMB volumes, you must add an Active Directory (AD) connection. Currently, Cloud Volumes Service supports only one AD connection per GCP region.

Creating an AD connection

  1. In the GCP Console, go to Cloud Volumes.

    GO TO CLOUD VOLUMES PAGE

  2. Click Active Directory connections, and then click Create.

  3. In the Create Active Directory Connection window, enter the following information, and then click Save.

    1. In the Username and Password fields, enter credentials associated with an account that has privileges to create a computer account in AD.
    2. In the Domain field, enter the name of the AD domain.
    3. In the DNS server field, enter the DNS server address of the AD domain.
    4. In the NetBIOS field, enter the NetBIOS name of the server.
    5. From the Region drop-down list, select a region associated with your AD credentials.

      Active Directory window

Creating an SMB volume

  1. In the GCP Console, go to the Volumes page.

    GO TO VOLUMES PAGE

  2. Click Create.

    Volumes page

  3. In the Create File System page, complete the following fields:

    Create file systems page

    1. In the Name field, enter a display name for the volume.
    2. From the Region drop-down list, select a GCP region for your volume. For more information about region selection, see Best Practices for Compute Engine region selection.
    3. The Volume path must be unique across all your cloud volumes. The system automatically generates a recommended volume path.
    4. For Service level, click the level of performance for the volume. It scales with capacity.
    5. If you want to create a volume based on a snapshot, select the snapshot from the Snapshot drop-down list.
    6. In the Allocated capacity field, enter the capacity for the volume. The minimum size of a cloud volume is 1,024 GiB (1 TiB).
    7. In the Protocol type field, select SMB.
    8. The VPC network can be part of a host project in a shared VPC, or it can belong to a standalone project. Select Shared VPC configuration if you have a host project and shared VPC topology. For standalone projects, leave the box unchecked.

      Shared VPC configuration box

    9. Under Authorized Network:

      • Select the VPC Network Name from which the volume will be accessible.

      • Optionally, you can specify your custom CIDR range by selecting the Use Address Range box.

      IP address range

    10. To manage export policy rules for the volume, expand Show export policy.

      • Click Add Rule to specify the allowed clients and their access type.
      • In the Allowed clients field, enter the IP address or range of addresses that have access to the cloud volume.
      • Select Read & Write or Read Only to specify the type of access these IP addresses have to the cloud volume.
      • Add rules as needed.

      Export policy page

    11. To manage snapshot policy for the volume, expand Show snapshot policy.

    12. Select Allow automatic snapshots, specify the snapshot schedules, and specify the number of snapshots to keep.

      Snapshot policy page

  4. Click Save to create the volume.

    The new volume appears in the Volumes list.

    Volumes list

Mapping SMB shares from Compute Engine instances

  1. In the GCP Console, go to the Volumes page.

    GO TO VOLUMES PAGE

  2. Click the SMB volume for which you want to map an SMB share.

  3. Scroll to the right, click More, and then click Mount Instructions.

    Mount SMB

  4. Follow the instructions in the Mount Instructions for SMB window that appears.

    Create SMB instructions

Deleting a cloud volume

Whether you create an NFS or SMB volume, you can delete a cloud volume that is no longer needed.

  1. In the GCP Console, go to the Volumes page.

    GO TO VOLUMES PAGE

  2. In the Volumes view, select the volume that you want to delete, and then click Delete.

    Volumes view

  3. In the Delete Volume dialog, click Confirm to delete the volume.

    Delete volume dialog

Security considerations

You should familiarize yourself with a few security considerations for NFS and SMB access to Cloud Volumes Service.

NFS access

The NFS considerations in this section pertain only to NFSv3, which is currently the only supported version.

GCP has strict inbound firewall rules that are categorized as default and implied. Every VPC network has two implied firewall rules. Understanding the implied rules helps you manage access to the cloud volumes.

  • The implied allow egress rule: The rule's action is to allow, the destination IP range is 0.0.0.0/0, and the priority is the lowest possible (65535). It lets any instance send traffic to any destination. You can restrict outbound access with a firewall rule that has a higher priority. Internet access is permitted if no other firewall rules deny the outbound traffic and if the instance has an external IP address or uses a NAT instance. See Internet access requirements for more details.
  • The implied deny ingress rule: The rule's action is to deny, the source is 0.0.0.0/0, and the priority is the lowest possible (65535). It protects all instances by blocking incoming traffic to them. You can permit incoming access with a firewall rule that has a higher priority. Note that the default network includes some additional rules that override this rule to permit certain types of incoming traffic.

NFS uses various ports to communicate between the initiator and a target. To ensure proper communication and successful volume mount, you must enable these ports on the VPC firewalls. If you have a local firewall enabled, you must also enable these ports on the compute instance. The required ports are as follows:

  • 111 TCP/UDP portmapper
  • 2049 TCP/UDP nfsd
  • 635 TCP/UDP mountd
  • 4045 TCP/UDP nlockmgr
  • 4046 TCP/UDP status
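As a quick sanity check before mounting, you can compare the ports your firewall rules allow against the required list above. This illustrative helper (not a gcloud feature) does that comparison:

```python
# NFSv3 ports from the list above: portmapper, nfsd, mountd, nlockmgr, status.
NFS_REQUIRED_PORTS = {111, 2049, 635, 4045, 4046}

def missing_nfs_ports(allowed_ports) -> list:
    """Return the required NFS ports not covered by the allowed set.
    An empty list means the firewall covers everything NFSv3 needs."""
    return sorted(NFS_REQUIRED_PORTS - set(allowed_ports))

# A firewall that only opens portmapper and nfsd is still missing three ports:
print(missing_nfs_ports({111, 2049}))  # [635, 4045, 4046]
```

Remember that the same ports must be open both on the VPC firewall and on any local firewall running on the compute instance.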

SMB access

AD integration

In the Cloud Volumes Service implementation of SMB, workgroups aren't supported. Cloud Volumes Service has an inherent dependency on a directory service. The following directories are supported:

  • A self-managed ("roll your own") AD server running Windows Server 2008 R2 or later in the tenant VPC.
  • A third-party AD as a service in GCP.

Communication between cloud volumes and AD

GCP has strict inbound firewall rules that are categorized as default and implied. Every VPC network has two implied firewall rules. Understanding the implied rules helps you manage access to the cloud volumes.

  • The implied allow egress rule: The rule's action is to allow, the destination IP range is 0.0.0.0/0, and the priority is the lowest possible (65535). It lets any instance send traffic to any destination. You can restrict outbound access with a firewall rule that has a higher priority. Internet access is permitted if no other firewall rules deny the outbound traffic and if the instance has an external IP address or uses a NAT instance. See Internet access requirements for more details.
  • The implied deny ingress rule: The rule's action is to deny, the source is 0.0.0.0/0, and the priority is the lowest possible (65535). It protects all instances by blocking incoming traffic to them. You can permit incoming access with a firewall rule that has a higher priority. Note that the default network includes some additional rules that override this rule to permit certain types of incoming traffic.

You must create a set of inbound rules to enable Cloud Volumes Service to initiate communication with the AD domain controllers. You must add these rules to the security groups that are attached to each AD instance to enable inbound communication from the storage subnet CIDR or the specific IP address. The required ports are as follows:

  • ICMPV4
  • DNS 53 TCP
  • DNS 53 UDP
  • LDAP 389 TCP
  • LDAP 389 UDP
  • LDAP (GC) 3268 TCP
  • NetBIOS Name 138 UDP
  • SAM/LSA 445 TCP
  • SAM/LSA 445 UDP
  • Secure LDAP 636 TCP
  • Secure LDAP 3269 TCP
  • W32Time 123 UDP
  • AD Web Svc 9389 TCP
  • Kerberos 464 TCP
  • Kerberos 464 UDP
  • Kerberos 88 TCP
  • Kerberos 88 UDP

Permissions for Cloud Volumes Service

Cloud Volumes Service supports a granular set of permissions. Currently these granular permissions are combined into two predefined roles. The granular permissions are listed below:

  • netappcloudvolumes.volumes.create
  • netappcloudvolumes.volumes.update
  • netappcloudvolumes.volumes.delete
  • netappcloudvolumes.volumes.list
  • netappcloudvolumes.volumes.get
  • netappcloudvolumes.snapshots.create
  • netappcloudvolumes.snapshots.update
  • netappcloudvolumes.snapshots.delete
  • netappcloudvolumes.snapshots.list
  • netappcloudvolumes.snapshots.get
  • netappcloudvolumes.activeDirectories.create
  • netappcloudvolumes.activeDirectories.update
  • netappcloudvolumes.activeDirectories.delete
  • netappcloudvolumes.activeDirectories.list

The two predefined roles are netappcloudvolumes.admin and netappcloudvolumes.viewer. You can assign these roles to specific users or service accounts.

The netappcloudvolumes.admin role contains the full permission set listed above, while the netappcloudvolumes.viewer role contains the list and get permissions on specific objects.
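To make the role composition concrete, the following sketch builds the permission sets. The exact makeup of each role is an assumption based on the description above (admin holds all 14 permissions; viewer holds the list and get permissions):

```python
# Reconstruct the granular permission list above: volumes and snapshots get
# all five verbs; activeDirectories has no .get permission in the list.
GRANULAR_PERMISSIONS = {
    f"netappcloudvolumes.{obj}.{verb}"
    for obj in ("volumes", "snapshots")
    for verb in ("create", "update", "delete", "list", "get")
} | {
    f"netappcloudvolumes.activeDirectories.{verb}"
    for verb in ("create", "update", "delete", "list")
}

# Assumed role composition: admin = everything, viewer = read-only verbs.
ADMIN_PERMISSIONS = set(GRANULAR_PERMISSIONS)
VIEWER_PERMISSIONS = {
    p for p in GRANULAR_PERMISSIONS if p.rsplit(".", 1)[1] in ("list", "get")
}

print(len(GRANULAR_PERMISSIONS))                # 14, matching the list above
print(VIEWER_PERMISSIONS <= ADMIN_PERMISSIONS)  # True
```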

Adding Cloud Volumes Service roles to a user

To grant a user the netappcloudvolumes.admin role, use the following command, substituting the appropriate user name and project ID for myuser@myorg.com and my-project.

gcloud projects add-iam-policy-binding my-project \
    --member='user:myuser@myorg.com' \
    --role='roles/netappcloudvolumes.admin'

To grant a user the netappcloudvolumes.viewer role, use the following command, substituting the appropriate user name and project ID for myuser@myorg.com and my-project.

gcloud projects add-iam-policy-binding my-project \
    --member='user:myuser@myorg.com' \
    --role='roles/netappcloudvolumes.viewer'

Cloud Volumes APIs

The cloud volumes capabilities that are available through the web UI are also available through RESTful APIs. The APIs enable you to programmatically create and manage cloud volumes, and to develop scripts and tools for provisioning and other service workflows.

Creating your service account and private key

  1. In Cloud Shell, create a service account in your project:

    gcloud beta iam service-accounts create serviceaccountname \
        --description "Admin SA for CVS API access" \
        --display-name "cloudvolumes-admin-sa"
    
  2. Assign the NetApp cloud volumes admin role to the service account. Replace projectname and the service account email with your project name and with the service account you just created.

    gcloud projects add-iam-policy-binding projectname \
        --member='serviceAccount:serviceaccount@projectname.iam.gserviceaccount.com' \
        --role='roles/netappcloudvolumes.admin'
    
  3. Confirm the role bindings for the service account and project:

    gcloud projects get-iam-policy projectname
    

    The output looks something like this:

    output from get-iam-policy

  4. Create and download your private key file.

    gcloud iam service-accounts keys create key_file_name \
        --iam-account serviceaccount@projectname.iam.gserviceaccount.com
    

You have now created the private key file that you can use to generate an authorization token for making API calls securely. For more information, see Creating and managing service accounts.

Accessing the NetApp Cloud Volumes API Swagger spec

Here is the NetApp Cloud Volumes API Swagger spec.

Examples of using the Cloud Volumes APIs

The following example uses Python to interact with the Cloud Volumes APIs. In this example, the script makes a call to get all volumes in a given project and prints their details. The private key file belongs to a service account that has the netappcloudvolumes.admin role assigned to it. Note how the authentication token is obtained and used in the request headers.

import google.auth.transport.requests
import requests
from google.auth import jwt
from google.oauth2 import service_account

# Small utility function to convert bytes to gibibytes
def convert_to_gib(num_bytes):
    return num_bytes / 1024 / 1024 / 1024

# Set common variables
audience = 'https://cloudvolumesgcp-api.netapp.com'
server = 'https://cv-prod.us-east4.gcp.netapp.com'
service_account_file = '/home/username/keys/prj-10-31e8f6d0c17e.json'
project_number = 1234567890

# Get all volumes from all regions
get_url = server + "/v2/projects/" + str(project_number) + "/locations/-/Volumes"

# Create credential object from private key file
svc_creds = service_account.Credentials.from_service_account_file(
    service_account_file)

# Create a JWT signed with the service account key
jwt_creds = jwt.Credentials.from_signing_credentials(
    svc_creds, audience=audience)

# Issue request to get auth token
request = google.auth.transport.requests.Request()
jwt_creds.refresh(request)

# Extract the token and decode it for the Authorization header
auth_token = jwt_creds.token.decode('utf-8')

# Construct GET request headers
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + auth_token
}

# Issue the request to the server
r = requests.get(get_url, headers=headers)

# Load the JSON response into a list of volume dicts
volumes = r.json()

# Print out all volumes
print("Response to GET request: " + get_url)
for vol in volumes:
    # Get volume attributes
    volname = vol["name"]
    volsize_gib = convert_to_gib(vol["quotaInBytes"])
    region = vol["region"]
    print("\tvolname: " + volname + ", size: " + str(volsize_gib) + "GiB, region: " + region)

The result of running this script is output similar to the following (it will vary based on what volumes exist in your project):

Response to GET request: https://cloudvolumesgcp-api.netapp.com/v2/projects/33699670086/locations/-/Volumes
     volname: launch, size: 1024.0GiB, region: us-east4
     volname: centraltest, size: 1024.0GiB, region: us-central1
     volname: betalaunch, size: 1024.0GiB, region: us-central1
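The same pattern extends to write operations. The following hypothetical sketch builds a request body for creating a volume, reusing only field names that appear in the GET response above (name, quotaInBytes, region); any other fields the real API may require are not shown, and the endpoint path mirrors the GET URL in the script:

```python
import json

def make_volume_payload(name: str, size_gib: int, region: str) -> dict:
    """Build a hypothetical volume-creation body; field names are taken
    from the GET response above, not from an official POST schema."""
    return {
        "name": name,
        "region": region,
        "quotaInBytes": size_gib * 1024**3,  # API sizes are in bytes
    }

payload = make_volume_payload("demo-vol", 1024, "us-east4")
print(json.dumps(payload))
# A POST with this body would target the same
# /v2/projects/<project_number>/locations/-/Volumes path, using the same
# Authorization header as the GET example.
```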

Performance expectations

For information about Cloud Volumes Service benchmarks, see NetApp Cloud Volumes Service for GCP: Benchmarks.

For information about Cloud Volumes Service performance, see Enterprise Applications on Google Cloud: An Introduction to Cloud Volumes Service For GCP.

Support and troubleshooting

If you encounter issues setting up or managing Cloud Volumes Service, you can create a case for support.

  1. In the GCP Console, hold the pointer over Support and then click Cases.

    GO TO THE CASES PAGE

    Cases menu

  2. In the Role-Based Support dialog, click Open Support Center.

    Open Support Center button

  3. Click My Account and then next to the support package you want to use, click New Case.

  4. In the New Case section, complete the following fields:

    1. From the Issue Type drop-down list, select Networking.
    2. From the Component drop-down list, select NetApp Cloud Volumes.
    3. From the Subcomponent drop-down list, select the option that most closely describes your issue.
    4. Complete the other required fields such as Project ID, Subject, and Description of the problem.

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Unsubscribing from Cloud Volumes Service

  1. Follow the steps for deleting a cloud volume to delete all volumes.
  2. Remove the service account created for Cloud Volumes Service:

    1. Go to IAM and admin > Service accounts.
    2. From the Actions menu, select Delete.
    3. Confirm account deletion by typing the service account email, and then click Delete.

      Delete an account

Deleting the project

  1. In the GCP Console, go to the Projects page.

    Go to the Projects page

  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Deleting Compute Engine instances

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click the checkbox for the instance you want to delete.
  3. Click Delete to delete the instance.

See the Notice file for information about third-party copyright and licenses used in the Cloud Volumes Service software.

What's next

  • Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.