The cloud volumes capabilities that are available through the web UI are also available through RESTful APIs. The APIs enable you to programmatically create and manage cloud volumes, and to develop scripts and tools for provisioning and for supporting other service workflows.
View the NetApp Cloud Volumes API Swagger specification
To view the NetApp Cloud Volumes API Swagger specification with Swagger Editor, do the following:
- Go to Swagger Editor.
- Select File > Import URL.
- Enter the following URL:
  https://cloudvolumesgcp-api.netapp.com/swagger.json
- Click OK.
The Cloud Volumes APIs (CVS-GCP) are displayed.
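If you prefer to inspect the raw specification, the following minimal sketch (assuming the requests package; the info.title and info.version fields are mandated by the Swagger format) downloads it for offline use:

import requests

# Download the Cloud Volumes API specification for offline inspection.
spec = requests.get("https://cloudvolumesgcp-api.netapp.com/swagger.json").json()
print(spec["info"]["title"], spec["info"]["version"])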
Create your service account and private key
In Cloud Shell, create a service account in your project:
gcloud iam service-accounts create serviceaccountname \
    --description "Admin SA for CVS API access" \
    --display-name "cloudvolumes-admin-sa"
Assign the NetApp cloud volumes admin role to the service account. Replace projectid and serviceaccount@projectid with your project ID and with the service account you just created:
gcloud projects add-iam-policy-binding projectid \
    --member='serviceAccount:serviceaccount@projectid.iam.gserviceaccount.com' \
    --role='roles/netappcloudvolumes.admin'
Confirm the role bindings for the service account and project:
gcloud projects get-iam-policy projectid
The output looks something like the following abbreviated, illustrative policy (your project has additional bindings, and the etag value differs):
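bindings:
- members:
  - serviceAccount:serviceaccountname@projectid.iam.gserviceaccount.com
  role: roles/netappcloudvolumes.admin
etag: BwXabc123xyz=
version: 1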
Manage API authentication
Cloud Volumes Service APIs use bearer authentication. Before you can make any API calls, you must fetch a valid JSON web token (JWT) from Identity and Access Management.
There are two ways to obtain valid tokens from Identity and Access Management: by using service account impersonation or by using a service account key.
Authenticate using service account impersonation
You can use service account impersonation to allow principals and resources to act as an IAM service account. This method of authentication is more secure than using a service account key for this purpose. For more information, see Service account impersonation.
When you use service account impersonation, the code runs with Application Default Credentials (ADC).
Examples of Application Default Credentials include the following:
- The identity or principal used for gcloud auth application-default login, such as your Google user access credentials
- The service account attached to a Compute Engine virtual machine
- The service account attached to a Cloud Run function
- The service account attached to a Cloud Build job
- The IAM service account on a GKE cluster using Workload Identity Federation for GKE
For more information, see Attaching a service account to a resource.
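To confirm that ADC resolve in your environment, you can run a minimal check such as the following sketch (assuming the google-auth package is installed):

import google.auth

# Resolves Application Default Credentials from the current environment
# (gcloud user credentials, an attached service account, and so on).
# Raises DefaultCredentialsError if no credentials are found.
credentials, project_id = google.auth.default()
print(f"ADC resolved; associated project: {project_id}")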
When you use service account impersonation, use the Fetch a JSON web token using service account impersonation example code to generate JSON web tokens.
To grant the iam.serviceAccountTokenCreator role to your ADC user on the service account that you created in the previous section, follow the instructions in Allowing a principal to impersonate a single service account. Example:
gcloud iam service-accounts add-iam-policy-binding \
    serviceaccount@projectid.iam.gserviceaccount.com \
    --member=user:my-gcloud-user@example.com \
    --role=roles/iam.serviceAccountTokenCreator
This binding grants the my-gcloud-user@example.com user the permissions to impersonate the service account serviceaccount@projectid.iam.gserviceaccount.com. Only the serviceaccount@projectid.iam.gserviceaccount.com account needs the permissions of the roles/netappcloudvolumes.admin role.
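To verify the binding, you can inspect the IAM policy of the service account itself (an optional check; the output format matches the project-level policy shown earlier):

gcloud iam service-accounts get-iam-policy \
    serviceaccount@projectid.iam.gserviceaccount.com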
Authenticate using a service account key
You can create a JSON key for the service account created in the previous section and use the key to obtain a JSON web token. This method of authentication is less secure than using service account impersonation. Google recommends against using service account keys for this purpose. For more information, see Best practices for working with service accounts.
To create and download a private JSON key file, run the following command:
gcloud iam service-accounts keys create key_file_name \
    --iam-account serviceaccount@projectid.iam.gserviceaccount.com
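As an optional sanity check (a minimal sketch; key_file_name is the file you just created), you can confirm that the downloaded key parses and identifies the expected account:

import json

with open("key_file_name") as f:
    key = json.load(f)

# Service account JSON keys include the account email and the owning project.
print(key["client_email"], key["project_id"])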
Examples of using the Cloud Volumes APIs
The examples in this section use Python 3.6 or later to interact with the Cloud Volumes Service APIs.
To use these examples, install the required Python modules:
pip3 install requests google google-auth \
    google-api-python-client google-cloud-iam
Fetch a JSON web token by using service account impersonation
The following example fetches a JSON web token (JWT) by using service account impersonation. This example also defines the get_headers helper function, which is used in other examples on this page.

This example requires that the iamcredentials.googleapis.com API is enabled.
def get_headers(token: str) -> dict:
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + token
    }
    return headers

def get_token(service_account_name) -> str:
    import datetime
    import json
    from google.cloud import iam_credentials_v1

    token_expiration_time_seconds = 30*60  # 30 minutes lifetime
    audience = 'https://cloudvolumesgcp-api.netapp.com/'

    client = iam_credentials_v1.IAMCredentialsClient()
    service_account_path = client.service_account_path('-', service_account_name)

    # Build the claims set
    curr_time = datetime.datetime.now()
    expiration = curr_time + datetime.timedelta(seconds=token_expiration_time_seconds)
    claims = {
        "iss": service_account_name,
        "aud": audience,
        "iat": int(curr_time.timestamp()),
        "exp": int(expiration.timestamp()),
        "sub": service_account_name,
    }

    response = client.sign_jwt(request={"name": service_account_path, "payload": json.dumps(claims)})
    return response.signed_jwt

token = get_token("serviceaccount@projectid.iam.gserviceaccount.com")
Fetch a JSON web token by using a service account key
The following example fetches a JSON web token (JWT) by using a service account JSON key. This example also defines the get_headers helper function, which is used in other examples on this page.
def get_headers(token: str) -> dict:
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + token
    }
    return headers

def get_token(service_account_file: str) -> str:
    import google.auth.transport.requests
    from google.oauth2 import service_account
    from google.auth import jwt

    audience = 'https://cloudvolumesgcp-api.netapp.com'

    # Create credential object from private key file
    svc_creds = service_account.Credentials.from_service_account_file(service_account_file)

    # Create JWT
    jwt_creds = jwt.Credentials.from_signing_credentials(svc_creds, audience=audience)

    # Issue request to get auth token
    request = google.auth.transport.requests.Request()
    jwt_creds.refresh(request)

    # Extract token
    return jwt_creds.token.decode('utf-8')

token = get_token("key.json")
Determine the project number
Cloud Volumes Service APIs use automatically generated Google Cloud project numbers to identify projects, but users often use human-readable, customizable project IDs. You can look up project numbers in the Google Cloud console, or you can use the function in the following example to get the project number associated with the project ID in your key file. This get_google_project_number function is used in other examples in this section.

To use this function, the user must have the resourcemanager.projects.get permission, and the Cloud Resource Manager API (cloudresourcemanager.googleapis.com) must be enabled.
def get_google_project_number(service_account_identifier: str) -> str:
    import re, json
    from google.auth import default
    from googleapiclient import discovery, errors

    # Is the string passed a service account name?
    user_managed_sa_regex = r"^[a-z]([-a-z0-9]*[a-z0-9])@[a-z0-9-]+\.iam\.gserviceaccount\.com$"
    if re.match(user_managed_sa_regex, service_account_identifier):
        project_id = service_account_identifier.split('@')[1].split('.')[0]
    else:
        with open(service_account_identifier) as json_file:
            content = json.load(json_file)
            project_id = content['project_id']

    credentials, _ = default()
    service = discovery.build('cloudresourcemanager', 'v1', credentials=credentials)
    request = service.projects().get(projectId=project_id)
    try:
        response = request.execute()
        return response["projectNumber"]
    except errors.HttpError:
        print("Unable to resolve JSON keyfile to project number. Missing resourcemanager.projects.get permissions?")
        return ""

# Call using a key file
project_number = get_google_project_number("key.json")

# Call using a service account name
project_number = get_google_project_number("serviceaccount@projectid.iam.gserviceaccount.com")
Create a storage pool
def create_pool(token: str, project_number: str, region: str, payload: dict):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    post_url = f"{server}/v2/projects/{project_number}/locations/{region}/Pools"

    # POST request to create the pool
    r = requests.post(post_url, json=payload, headers=get_headers(token))
    r.raise_for_status()
    if not (r.status_code == 201 or r.status_code == 202):
        print(f"ERROR: HTTP code: {r.status_code} {r.reason} for url: {r.url}")
        return

    pool = r.json()['response']['AnyValue']

    # Get pool attributes.
    # Note that the process might take some minutes and some
    # attributes are only available after it is finished.
    poolname = pool["name"]
    sizeGiB = int(pool["sizeInBytes"] / 1024**3)
    region = pool["region"]
    numvols = pool["numberOfVolumes"]
    print(f"poolname: {poolname:30} size: {sizeGiB:>7} GiB region: {region} # of Vols: {numvols}")

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

poolname = "ok-pooltest"
region = "europe-west1"
network = "ncv-vpc"
regionalHA = False

payload = {
    "name": poolname,
    "region": region,
    "serviceLevel": "ZoneRedundantStandardSW" if regionalHA else "StandardSW",  # "StandardSW" or "ZoneRedundantStandardSW"
    "storageClass": "software",
    "zone": f"{region}-b",  # use zone b in the desired region
    # "secondaryZone": f"{region}-c",  # omit for a zonal pool
    "regionalHA": regionalHA,  # set True for multi-zone and specify secondaryZone
    "sizeInBytes": 1024*1024**3,  # 1024 GiB
    "network": f"projects/{project_number}/global/networks/{network}",
}

create_pool(token, project_number, region, payload)
Output
The output should be similar to the following:
poolname: ok-pooltest size: 1024 GiB region: europe-west1 # of Vols: 0
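Pool creation is asynchronous. The following sketch is a hypothetical convenience, not part of the API: it reuses the Pools listing endpoint shown in the next example and polls until the new pool appears or a timeout expires. A pool can be listed before provisioning fully finishes, so treat this as a coarse readiness check.

import time
import requests

def wait_for_pool(token: str, project_number: str, region: str, poolname: str, timeout_s: int = 600) -> bool:
    # Poll the Pools listing until the new pool shows up or we give up.
    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Pools"
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        r = requests.get(get_url, headers=get_headers(token))
        r.raise_for_status()
        if any(pool["name"] == poolname for pool in r.json()):
            return True
        time.sleep(15)
    return False

wait_for_pool(token, project_number, region, poolname)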
Print information about storage pools
In this example, the script prints the details of all storage pools in a given project:
def print_pools(token: str, project_number: str, region: str = '-'):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Pools"

    r = requests.get(get_url, headers=get_headers(token))
    r.raise_for_status()

    print(f"Pools in region: {region}")
    for pool in r.json():
        # Get pool attributes
        poolname = pool["name"]
        sizeGiB = int(pool["sizeInBytes"] / 1024**3)
        region = pool["region"]
        numvols = pool["numberOfVolumes"]
        print(f"poolname: {poolname:30} size: {sizeGiB:>7} GiB region: {region} #ofVolumes: {numvols}")

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

print_pools(token, project_number, "-")
Output
The result of running this script varies based on what pools exist in your project. The output should be similar to the following:
Pools in region: -
poolname: okpool3                        size:    2000 GiB region: europe-west1 #ofVolumes: 1
Print all volumes
In this example, the script makes a call to get all volumes in a given project and print their details:
def print_volumes(token: str, project_number: str, region: str = '-'):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Volumes"

    r = requests.get(get_url, headers=get_headers(token))
    r.raise_for_status()

    print(f"Volume in region: {region}")
    for vol in r.json():
        # Get volume attributes
        volname = vol["name"]
        volsizeGiB = int(vol["quotaInBytes"] / 1024**3)
        region = vol["region"]
        print(f"volname: {volname:30} size: {volsizeGiB:>7} GiB region: {region}")

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

print_volumes(token, project_number, "-")
Output
The result of running this script varies based on what volumes exist in your project. The output should be similar to the following:
Volume in region: -
volname: smbvolume                      size:    1024 GiB region: us-east4
volname: datalake                       size:    1024 GiB region: us-east4
volname: sapshared                      size:    1024 GiB region: us-central1
volname: catiarepo                      size:    1024 GiB region: europe-west2
Create a volume
The following create_volume helper function is used by other examples in this section for creating volumes:
def create_volume(token: str, project_number: str, region: str, payload: dict):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    post_url = f"{server}/v2/projects/{project_number}/locations/{region}/Volumes"

    # POST request to create the volume
    r = requests.post(post_url, json=payload, headers=get_headers(token))
    r.raise_for_status()
    if not (r.status_code == 201 or r.status_code == 202):
        print(f"ERROR: HTTP code: {r.status_code} {r.reason} for url: {r.url}")
        return

    vol = r.json()['response']['AnyValue']

    # Get volume attributes.
    # The process can take several minutes. Some
    # attributes are only available after it is finished.
    volname = vol["name"]
    volsizeGiB = int(vol["quotaInBytes"] / 1024**3)
    region = vol["region"]
    volume_id = vol["volumeId"]
    print(f"Created volume: {volname:30} size: {volsizeGiB:>7} GiB region: {region} UUID: {volume_id}")
Create a CVS-Performance volume with NFSv3
def create_volume_nfsv3(token: str, project_number: str, region: str, network: str, volume_name: str):
    payload = {
        "name": volume_name,
        "creationToken": volume_name,  # mount path
        "region": region,
        "serviceLevel": "low",  # low/medium/high = standard/premium/extreme
        "storageClass": "hardware",  # hardware for CVS-Performance, software for CVS
        "quotaInBytes": 1024*1024**3,  # 1024 GiB
        "network": f"projects/{project_number}/global/networks/{network}",
        "protocolTypes": [
            "NFSv3"  # NFSv3, NFSv4, CIFS
        ],
        "snapshotPolicy": {
            "dailySchedule": {
                "hour": 1,
                "minute": 10,
                "snapshotsToKeep": 5
            }
        },
        "exportPolicy": {
            "rules": [
                {
                    "access": "ReadWrite",
                    "allowedClients": "0.0.0.0/0",
                    "nfsv3": {
                        "checked": True
                    }
                }
            ]
        }
    }

    create_volume(token, project_number, region, payload)

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

create_volume_nfsv3(token, project_number, "us-east4", "my-vpc", "nfsv3-volume")
Output
The output should be similar to the following:
Created volume: nfsv3-volume size: 1024 GiB region: us-east4 UUID: d85f6c26-1604-cdc6-1213-b1d6468e6980
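Volume creation is also asynchronous. A sketch like the following hypothetical helper can poll until the volume is ready; it reuses the per-volume GET endpoint and the lifeCycleState field shown in the Get volume details example later on this page:

import time
import requests

def wait_for_volume_available(token: str, project_number: str, region: str, volume_id: str, timeout_s: int = 600) -> bool:
    # Poll the volume until it reports lifeCycleState == "available".
    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Volumes/{volume_id}"
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        r = requests.get(get_url, headers=get_headers(token))
        r.raise_for_status()
        if r.json()["lifeCycleState"] == "available":
            return True
        time.sleep(15)
    return False

wait_for_volume_available(token, project_number, "us-east4", "d85f6c26-1604-cdc6-1213-b1d6468e6980")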
Create a CVS Standard-SW volume with NFSv3
def create_volume_cvs(token: str, project_number: str, region: str, network: str, volume_name: str, pool_id: str):
    payload = {
        "name": volume_name,
        "creationToken": volume_name,  # mount path
        "quotaInBytes": 1024*1024**3,  # 1024 GiB
        "region": region,
        "storageClass": "software",  # software for CVS
        "poolId": pool_id,  # UUID of storage pool to create volume within
        "serviceLevel": "basic",
        "regionalHA": False,
        "zone": f"{region}-b",
        "network": f"projects/{project_number}/global/networks/{network}",
        "protocolTypes": [
            "NFSv3"  # NFSv3, NFSv4, CIFS
        ],
        "exportPolicy": {
            "rules": [
                {
                    "access": "ReadWrite",
                    "allowedClients": "0.0.0.0/0",
                    "nfsv3": {
                        "checked": True
                    }
                }
            ]
        }
    }

    create_volume(token, project_number, region, payload)

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

create_volume_cvs(token, project_number, "europe-west1", "ncv-vpc", "nfsv3-volume", "9760acf5-4638-11e7-9bdb-020073ca7773")
Output
The output should be similar to the following:
Created volume: nfsv3-volume size: 1024 GiB region: europe-west1 UUID: e1d9afb6-d727-2643-6c04-bc544d7ad765
Create a volume with NFSv4
def create_volume_nfsv4(token: str, project_number: str, region: str, network: str, volume_name: str):
    payload = {
        "name": volume_name,
        "creationToken": volume_name,  # mount path
        "region": region,
        "serviceLevel": "low",  # low/medium/high = standard/premium/extreme
        "storageClass": "hardware",  # hardware for CVS-Performance, software for CVS
        "quotaInBytes": 1024*1024**3,  # 1024 GiB
        "network": f"projects/{project_number}/global/networks/{network}",
        "protocolTypes": [
            "NFSv4"  # NFSv3, NFSv4, CIFS
        ],
        "exportPolicy": {
            "rules": [
                {
                    "access": "ReadWrite",
                    "allowedClients": "0.0.0.0/0",
                    "nfsv3": {
                        "checked": False
                    },
                    "nfsv4": {
                        "checked": True
                    }
                }
            ]
        }
    }

    create_volume(token, project_number, region, payload)

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

create_volume_nfsv4(token, project_number, "us-east4", "ncv-vpc", "nfsv4-volume")
Output
The output should be similar to the following:
Created volume: nfsv4-volume size: 1024 GiB region: us-east4 UUID: 2222c128-1772-c89f-540a-0ff48d519f75
Create a volume with SMB (continuously available, non-browsable, with encryption enabled)
def create_volume_smb(token: str, project_number: str, region: str, network: str, volume_name: str):
    payload = {
        "name": volume_name,
        "creationToken": volume_name,  # mount path
        "region": region,
        "serviceLevel": "medium",  # low/medium/high = standard/premium/extreme
        "storageClass": "hardware",  # hardware for CVS-Performance, software for CVS
        "quotaInBytes": 1024*1024**3,  # 1024 GiB
        "network": f"projects/{project_number}/global/networks/{network}",
        "protocolTypes": [
            "CIFS"  # NFSv3, NFSv4, CIFS
        ],
        "smbShareSettings": [
            "continuously_available",
            "encrypt_data",
            "non_browsable"
        ]
    }

    create_volume(token, project_number, region, payload)

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

create_volume_smb(token, project_number, "us-east4", "ncv-vpc", "smb-volume")
Output
The output should be similar to the following:
Created volume: smb-volume size: 1024 GiB region: us-east4 UUID: 6327df5e-1b75-3d4a-8d59-0093c9423d57
Get volume details
def get_volume_details(token: str, project_number: str, region: str, volume_id: str):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Volumes/{volume_id}"

    r = requests.get(get_url, headers=get_headers(token))
    r.raise_for_status()
    return r.json()

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

result = get_volume_details(token, project_number, "us-central1", "1bc88bc6-cc7d-5fe3-3737-8e635fe2f996")

import json
print(json.dumps(result, indent=4))
Output
The output should be similar to the following:
{ "created": "2020-04-09T04:20:12.000Z", "lifeCycleState": "available", "lifeCycleStateDetails": "Available for use", "name": "data-volume1", "ownerId": "031d5c87-af29-11e9-b98e-0a580aff0248", "region": "us-central1", "volumeId": "1bc88bc6-cc7d-5fe3-3737-8e635fe2f996", "zone": "us-central1-zone1", "billingLabels": [ { "key": "department", "value": "csa-gcp" } ], "creationToken": "thirsty-amazing-varahamihira", "encryptionType": "ServiceManaged", "exportPolicy": { "rules": [ { "access": "ReadWrite", "allowedClients": "10.10.0.0/16,192.168.6.10,192.168.6.12,192.168.100.0/24", "hasRootAccess": "true", "kerberos5ReadOnly": { "checked": false }, "kerberos5ReadWrite": { "checked": false }, "kerberos5iReadOnly": { "checked": false }, "kerberos5iReadWrite": { "checked": false }, "kerberos5pReadOnly": { "checked": false }, "kerberos5pReadWrite": { "checked": false }, "nfsv3": { "checked": true }, "nfsv4": { "checked": false } } ] }, "inReplication": false, "isDataProtection": false, "jobs": [], "kerberosEnabled": false, "labels": null, "ldapEnabled": false, "mountPoints": [ { "export": "/thirsty-amazing-varahamihira", "exportFull": "10.194.0.20:/thirsty-amazing-varahamihira", "instructions": "Setting up your instance\nOpen an SSH client and connect to your instance.\nInstall the nfs client on your instance.\nOn Red Hat Enterprise Linux or SuSE Linux instance:\nsudo yum install -y nfs-utils\nOn an Ubuntu or Debian instance:\nsudo apt-get install nfs-common\n\nMounting your volume\nCreate a new directory on your instance, such as \"/thirsty-amazing-varahamihira\":\nsudo mkdir /thirsty-amazing-varahamihira\nMount your volume using the example command below:\nsudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.194.0.20:/thirsty-amazing-varahamihira /thirsty-amazing-varahamihira\nNote. Please use mount options appropriate for your specific workloads when known.", "protocolType": "NFSv3", "server": "10.194.0.20", "vlanId": 1033 } ], "network": "projects/823997568320/global/networks/ncv-vpc", "protocolTypes": [ "NFSv3" ], "quotaInBytes": 1099511627776, "regionalHA": false, "securityStyle": "unix", "serviceLevel": "basic", "snapReserve": 0, "snapshotDirectory": true, "snapshotPolicy": { "dailySchedule": { "hour": 0, "minute": 0, "snapshotsToKeep": 0 }, "enabled": false, "hourlySchedule": { "minute": 0, "snapshotsToKeep": 0 }, "monthlySchedule": { "daysOfMonth": "1", "hour": 0, "minute": 0, "snapshotsToKeep": 0 }, "weeklySchedule": { "day": "Sunday", "hour": 0, "minute": 0, "snapshotsToKeep": 0 } }, "storageClass": "hardware", "timezone": "CST", "unixPermissions": "0770", "usedBytes": 564580352 }
Update volume details
def modify_volume(token: str, project_number: str, region: str, volume_id: str, payload: dict):
    import requests
    import json

    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Volumes/{volume_id}"
    put_url = get_url

    # Read attributes of existing volume
    r = requests.get(get_url, headers=get_headers(token))
    if r.status_code != 200:
        print(f"Error: {r.url} returned: {r.text}")
        return {}

    # Merge changes
    payload = {**r.json(), **payload}

    # Update volume
    r = requests.put(put_url, data=json.dumps(payload), headers=get_headers(token))
    if r.status_code not in [200, 202]:
        print(f"Error: {r.url} returned: {r.text}")
    r.raise_for_status()
    return r.json()

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

payload = {
    "unixPermissions": "0755",
    "billingLabels": [
        {"key": "department", "value": "engineering"}
    ]
}

result = modify_volume(token, project_number, "us-central1", "1bc88bc6-cc7d-5fe3-3737-8e635fe2f996", payload)
Enable Google Cloud VMware Engine datastore deletion protection
This example reuses the modify_volume function introduced in the Update volume details section. You can disable deletion protection by omitting the "volumeDelete" string.
keyfile = "key.json" project_number = get_google_project_number(keyfile) token = get_token(keyfile) payload = { "restrictedActions": [ "volumeDelete" ] } result = modify_volume(token, project_number, "us-central1", "1bc88bc6-cc7d-5fe3-3737-8e635fe2f996", payload)
Get service level details
import requests
import json

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)
region = "us-east4"

server = 'https://cloudvolumesgcp-api.netapp.com'
get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Storage/ServiceLevels"

r = requests.get(get_url, headers=get_headers(token))
print(json.dumps(r.json(), indent=4))
Output
The output should be similar to the following:
[ { "name": "basic", "performance": "low" }, { "name": "standard", "performance": "medium" }, { "name": "extreme", "performance": "high" } ]
Update the service level
keyfile = "key.json" project_number = get_google_project_number(keyfile) token = get_token(keyfile) payload = {"serviceLevel": "extreme"} result = modify_volume(token, project_number, "us-central1", "1bc88bc6-cc7d-5fe3-3737-8e635fe2f996", payload)
Create an Active Directory connection
def create_activedirectory(token: str, project_number: str, region: str):
    import requests
    import json

    server = 'https://cloudvolumesgcp-api.netapp.com'
    post_url = f"{server}/v2/projects/{project_number}/locations/{region}/Storage/ActiveDirectory"

    payload = {
        "username": "example_username",
        "password": "example_password",
        "DNS": "101.102.103.104",
        "aesEncryption": True,
        "allowLocalNFSUsersWithLdap": False,
        "domain": "example.com",
        "label": "hardware",  # "hardware" for CVS-Performance, "software" for CVS
        "netBIOS": "cvserver",
        "organizationalUnit": "CN=Computers",  # or specify an OU like "OU=myOU,DC=example,DC=com"
        "region": region,
        "site": "Default-First-Site-Name",
        # More optional parameters are available. See the Swagger definition.
    }

    # POST request to create the AD connection
    r = requests.post(post_url, data=json.dumps(payload), headers=get_headers(token))
    r.raise_for_status()

    ad_uuid = r.json()['UUID']
    print(f"Created Active Directory entry: {ad_uuid}")

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

create_activedirectory(token, project_number, "us-east4")
Output
The output should be similar to the following:
Created Active Directory entry: c18c569d-0920-805f-9918-a4b04df758f5
Update Active Directory with Kerberos and backup operators
def update_activedirectory(token: str, project_number: str, region: str, uuid: str, changes: dict):
    import requests
    import json

    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Storage/ActiveDirectory/{uuid}"

    # Read parameters from existing entry
    r = requests.get(get_url, headers=get_headers(token))
    r.raise_for_status()

    # Merge old entry with changes
    payload = {**r.json(), **changes}

    # PUT request to update the AD connection
    put_url = get_url
    r = requests.put(put_url, data=json.dumps(payload), headers=get_headers(token))
    r.raise_for_status()

    ad_uuid = r.json()['UUID']
    print(f"Updated Active Directory entry: {ad_uuid}")

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

# Add parameters to change here.
changes = {
    "backupOperators": ["backup1", "backup2"],
    "kdcIP": "10.3.1.15",
    "aesEncryption": True,
    "adName": "2BOVAEKB44B"
}

update_activedirectory(token, project_number, "us-east4", 'c18c569d-0920-805f-9918-a4b04df758f5', changes)
Output
The output should be similar to the following:
Updated Active Directory entry: 5cf34bae-74c7-e539-aad9-4dcfec84d8fd
Create a backup
def create_backup(token: str, project_number: str, region: str, volume_id: str, backup_name: str):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    post_url = f"{server}/v2/projects/{project_number}/locations/{region}/Backups"

    payload = {
        "name": backup_name,
        "volumeId": volume_id
    }

    r = requests.post(post_url, json=payload, headers=get_headers(token))
    if not (r.status_code == 201 or r.status_code == 202):
        print(f"ERROR: HTTP code: {r.status_code} {r.reason} for url: {r.url}")
        r.raise_for_status()
        return

    print("Backup created/creating")

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

create_backup(token, project_number, "europe-west1", "9760acf5-4638-11e7-9bdb-020073ca7773", "mybackup")
Output
The output should be similar to the following:
Backup created/creating
List backups
def get_backups(token: str, project_number: str, region: str):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Backups"

    r = requests.get(get_url, headers=get_headers(token))
    r.raise_for_status()
    return r.json()

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

backups = get_backups(token, project_number, "-")
for backup in backups:
    print(f"region: {backup['region']:22} name: {backup['name']:20} size [GiB]: {int(backup['bytesTransferred']/1024**3):>7} UUID: {backup['backupId']}")
Output
The output should be similar to the following:
region: europe-west1           name: oktest               size [GiB]:     748 UUID: fbb4df43-c715-6a82-cced-8b5bbdb98b02
region: europe-west1           name: oktest2              size [GiB]:       0 UUID: 7d11c58f-f6af-c356-f58e-939713ce0d65
region: europe-west1           name: saw-testbackup       size [GiB]:       0 UUID: 248d900d-dd92-ca87-bca1-3f74f831f04f
region: australia-southeast2   name: test-backup          size [GiB]:       0 UUID: 97e0b72d-0735-c1af-b6a5-8e4133699095
Restore from a backup
Restoring from a backup is like creating a new volume, except that you must also specify a backupId value. The new volume is initialized with the data from the backup identified by the backupId value.
keyfile = "key.json" project_number = get_google_project_number(keyfile) token = get_token(keyfile) region = "europe-west1" pool_id = "9760acf5-4638-11e7-9bdb-020073ca7773" network = "ncv-vpc" backup_id = "fbb4df43-c715-6a82-cced-8b5bbdb98b02" payload = { "name": "restored_volume", "creationToken": "restored_volume", # mount path "region": region, "storageClass": "software", # software for CVS "poolId": pool_id, # UUID of Storage Pool to create volume within "quotaInBytes": 1024*1024**3, # 1024 GiB "network": f"projects/{project_number}/global/networks/{network}", "backupId": backup_id, "protocolTypes": [ "NFSv3" # NFSv3, NFSv4, CIFS ], "exportPolicy": { "rules": [ { "access": "ReadWrite", "allowedClients": "0.0.0.0/0", "nfsv3": { "checked": True } } ] } } create_volume(token, project_number, region, payload)
Output
The output should be similar to the following:
Created volume: restored_volume size: 1024 GiB region: europe-west1 UUID: d2e9bcb5-e728-3623-7e06-ba5e4d8ad738
List KMS configurations
The following example lists all KMS configurations in your project, including the configuration ID.
def print_kmsconfigs(token: str, project_number: str, region: str):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    get_url = f"{server}/v2/projects/{project_number}/locations/{region}/Storage/KmsConfig"

    r = requests.get(get_url, headers=get_headers(token))
    r.raise_for_status()

    for kms in r.json():
        print(f"Key: {kms['keyRingLocation']}/{kms['keyRing']}/{kms['keyName']}, Region: {kms['region']}, kmsId: {kms['uuid']}")

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

print_kmsconfigs(token, project_number, "-")
Output
The result of running this script varies based on what KMS entries exist in your project.
The output should be similar to the following:
Key: global/OneRing/Frankfurt, Region: europe-west3, kmsId: 12759064-04e9-5d50-9262-bdca39f13cd0
Create KMS configurations
The following example creates a KMS configuration in your project:
def create_kmsconfig(token: str, project_number: str, region: str, kms_resource_name: str, network_resource_name: str):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    post_url = f"{server}/v2/projects/{project_number}/locations/{region}/Storage/KmsConfig"

    # Key resource names can be fetched from KMS. The format is
    # projects/<project>/locations/<location>/keyRings/<key_ring>/cryptoKeys/<key_name>
    key_components = kms_resource_name.split('/')
    if len(key_components) != 8:
        print("Invalid key resource name passed.")
        return

    payload = {
        'keyName': key_components[7],
        'keyProjectID': key_components[1],
        'keyRing': key_components[5],
        'keyRingLocation': key_components[3],
        # network: For standalone networks, it is okay to put in the VPC name only.
        # For shared VPCs, put in the full VPC resource name in the format below.
        'network': network_resource_name,
        'region': region,
    }

    r = requests.post(post_url, json=payload, headers=get_headers(token))
    if not (r.status_code == 201 or r.status_code == 202):
        print(f"ERROR: HTTP code: {r.status_code} {r.reason} for url: {r.url}")
        r.raise_for_status()
        return

    print("KMS config created")
    print(r.json())

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

create_kmsconfig(token, project_number, "us-west1",
                 "projects/my-kms-project/locations/global/keyRings/TheOneRing/cryptoKeys/Mordor-Key",
                 "projects/1234567890/global/networks/my-shared-vpc")
Output
The output should be similar to the following:
KMS config created {'jobs': [{'action': 'create', 'created': '2022-09-05T10:39:21.000Z', 'jobId': '441b89a7-0460-8b0a-d251-048ede81c42a', 'objectId': '03cdaf89-311d-bdee-6d99-b900bbe3612d', 'objectType': 'GcpKmsConfig', 'state': 'ongoing', 'stateDetails': 'Job is in progress', 'workerId': 'f7435545-c052-4ec5-5e3f-227c51a20f36'}], 'keyName': 'Mordor-Key', 'keyProjectID': 'my-kms-project', 'keyRing': 'TheOneRing', 'keyRingLocation': 'global', 'network': 'projects/1234567890/global/networks/my-shared-vpc', 'region': 'us-west1', 'state': 'creating', 'uuid': '03cdaf89-311d-bdee-6d99-b900bbe3612d'}
After you create a KMS configuration, you must grant key access permissions. You can do this manually by using the VIEW COMMANDS button in the user interface, or you can automate it by using Google APIs.
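For example, the manual grant typically looks like the following sketch. The exact principal to authorize is shown by the VIEW COMMANDS button (the placeholder below must be replaced with it); the command and role are standard Cloud KMS IAM operations, and the key names reuse the example above:

gcloud kms keys add-iam-policy-binding Mordor-Key \
    --project my-kms-project \
    --location global \
    --keyring TheOneRing \
    --member "serviceAccount:PRINCIPAL_SHOWN_BY_VIEW_COMMANDS" \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter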
Migrate KMS configurations
The following example migrates the volumes of a given region from the NetApp encryption type (keys managed by Cloud Volumes Service) to the Cloud KMS encryption type (customer-managed encryption keys):
def migrate_kmsconfig(token: str, project_number: str, region: str, kms_config_id: str):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    post_url = f"{server}/v2/projects/{project_number}/locations/{region}/Storage/KmsConfig/{kms_config_id}/Migrate"

    payload = {
        "toOntapKeyManager": False
    }

    r = requests.post(post_url, json=payload, headers=get_headers(token))
    r.raise_for_status()
    if r.status_code == 202:
        print("Migration successful")
    else:
        print("ERROR:")
        print(r.json())

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

migrate_kmsconfig(token, project_number, "us-east4", "12759064-04e9-5d50-9262-bdca39f13cd0")
Output
The output is either Migration successful or ERROR: followed by a JSON string with status information.
Delete KMS configurations
The following example deletes a KMS configuration for a given configuration ID (kmsId).
def delete_kmsconfig(token: str, project_number: str, region: str, kms_config_id: str):
    import requests

    server = 'https://cloudvolumesgcp-api.netapp.com'
    delete_url = f"{server}/v2/projects/{project_number}/locations/{region}/Storage/KmsConfig/{kms_config_id}"

    r = requests.delete(delete_url, headers=get_headers(token))
    r.raise_for_status()

keyfile = "key.json"
project_number = get_google_project_number(keyfile)
token = get_token(keyfile)

delete_kmsconfig(token, project_number, "us-east4", "12759064-04e9-5d50-9262-bdca39f13cd0")
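The function returns nothing on success. To confirm the deletion, you can list the remaining configurations again by using the print_kmsconfigs function from the List KMS configurations example; the deleted entry should no longer appear:

# Optional check: the deleted configuration should no longer be listed.
print_kmsconfigs(token, project_number, "-")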