Using the Cloud Client Libraries for Python

This document demonstrates how to use the Cloud Client Libraries for Python for Compute Engine. It describes how to authorize requests and how to create, list, and delete instances by using the google-api-python-client library to access Compute Engine resources. You can run this sample from your local machine or on a VM instance, provided that you have authorized the sample correctly.

For a full list of available client libraries, including other Google client libraries and third-party open source libraries, see the client libraries page.

To skip the exercise and view the full code example, visit the GoogleCloudPlatform/python-docs-samples GitHub page.

Objectives

  • Perform OAuth 2.0 authorization by using application default credentials
  • Create an instance using the google-api-python-client library
  • List instances using the google-api-python-client library
  • Delete an instance using the google-api-python-client library

Costs

This tutorial uses billable components of Google Cloud Platform including Compute Engine.

New Google Cloud Platform users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud Platform project. Learn how to confirm billing is enabled for your project.

  4. Install the Cloud SDK.
  5. After the SDK is installed, run gcloud auth application-default login.
  6. Install the google-api-python-client library. Typically, you can run:
    $ pip install --upgrade google-api-python-client

    For more information about how to install this library, see the installation instructions. You also need Python 2.7 or 3.3+ to run the Cloud Client Libraries for Python.

  7. Enable the Compute Engine and Cloud Storage APIs.
  8. Create a Cloud Storage bucket and note the bucket name for later.

Authorizing requests

This sample uses OAuth 2.0 authorization. There are many ways to authorize requests with OAuth 2.0, but this example uses application default credentials. These let you reuse credentials from the gcloud tool when you run the sample on a local workstation, or from a service account when you run it on Compute Engine or App Engine. You installed and authorized the gcloud tool in the Before you begin section.

The Google API Client Libraries pick up application default credentials automatically; you only need to build and initialize the API client:

import googleapiclient.discovery

compute = googleapiclient.discovery.build('compute', 'v1')

For example, the following snippet is the main method of this sample, which builds and initializes the API and then makes some calls to create, list, and delete an instance:

import argparse
import os
import time

import googleapiclient.discovery


def main(project, bucket, zone, instance_name, wait=True):
    compute = googleapiclient.discovery.build('compute', 'v1')

    print('Creating instance.')

    operation = create_instance(compute, project, zone, instance_name, bucket)
    wait_for_operation(compute, project, zone, operation['name'])

    instances = list_instances(compute, project, zone)

    print('Instances in project %s and zone %s:' % (project, zone))
    for instance in instances:
        print(' - ' + instance['name'])

    print("""
Instance created.
It will take a minute or two for the instance to complete work.
Check this URL: http://storage.googleapis.com/{}/output.png
Once the image is uploaded press enter to delete the instance.
""".format(bucket))

    if wait:
        input()

    print('Deleting instance.')

    operation = delete_instance(compute, project, zone, instance_name)
    wait_for_operation(compute, project, zone, operation['name'])


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('project_id', help='Your Google Cloud project ID.')
    parser.add_argument(
        'bucket_name', help='Your Google Cloud Storage bucket name.')
    parser.add_argument(
        '--zone',
        default='us-central1-f',
        help='Compute Engine zone to deploy to.')
    parser.add_argument(
        '--name', default='demo-instance', help='New instance name.')

    args = parser.parse_args()

    main(args.project_id, args.bucket_name, args.zone, args.name)

Listing instances

With the Cloud Client Libraries for Python, you can list instances by calling the compute.instances().list method. You must provide the project ID and the zone for which you want to list instances. For example:

def list_instances(compute, project, zone):
    result = compute.instances().list(project=project, zone=zone).execute()
    return result['items'] if 'items' in result else None
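The list response contains an items key only when the zone has at least one instance, which is why the helper above guards the lookup. The following is a small sketch with simulated response bodies; the dictionary shapes mirror a list response, but the instance names are made up:

```python
def extract_instance_names(result):
    # Pull instance names out of an instances().list() response body.
    # The 'items' key is absent entirely when the zone has no instances.
    return [instance['name'] for instance in result.get('items', [])]


# Simulated response bodies (shape only; names are illustrative):
populated = {'items': [{'name': 'demo-instance'}, {'name': 'web-1'}]}
empty = {}

print(extract_instance_names(populated))  # ['demo-instance', 'web-1']
print(extract_instance_names(empty))      # []
```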

Adding an instance

To add an instance, use the instances().insert method and specify the properties of the new instance. These properties are specified in the request body; for details about each property see the API reference for instances.insert.

At a minimum, your request must provide values for the following properties when you create a new instance:

  • Instance name
  • Root persistent disk
  • Machine type
  • Zone
  • Network interfaces
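As a rough sketch, a minimal request body covering these properties might look like the following. The zone is supplied as a request parameter rather than in the body, and the specific machine type, image family, and network used here are illustrative placeholders, not the only valid choices:

```python
ZONE = 'us-central1-f'  # illustrative zone

minimal_config = {
    # Instance name.
    'name': 'example-instance',
    # Machine type, as a partial URL scoped to the zone.
    'machineType': 'zones/%s/machineTypes/n1-standard-1' % ZONE,
    # Root persistent disk, created from a source image.
    'disks': [{
        'boot': True,
        'initializeParams': {
            'sourceImage':
                'projects/debian-cloud/global/images/family/debian-9',
        },
    }],
    # Network interface on the default network.
    'networkInterfaces': [{
        'network': 'global/networks/default',
    }],
}

# The zone itself goes into the request, not the body:
# compute.instances().insert(project=..., zone=ZONE, body=minimal_config)
```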

This sample starts an instance with the following properties in a zone of your choice:

  • Machine type: n1-standard-1
  • Root persistent disk: a new persistent disk based on the latest Debian 9 image
  • The Compute Engine default service account with the following scopes:

    • https://www.googleapis.com/auth/devstorage.read_write, so the instance can read and write files in Cloud Storage
    • https://www.googleapis.com/auth/logging.write, so the instance's logs can be uploaded to Stackdriver Logging
  • Metadata to specify commands that the instance should execute upon startup

def create_instance(compute, project, zone, name, bucket):
    # Get the latest image from the Debian 9 image family.
    image_response = compute.images().getFromFamily(
        project='debian-cloud', family='debian-9').execute()
    source_disk_image = image_response['selfLink']

    # Configure the machine
    machine_type = "zones/%s/machineTypes/n1-standard-1" % zone
    with open(
            os.path.join(
                os.path.dirname(__file__), 'startup-script.sh'), 'r') as f:
        startup_script = f.read()
    image_url = "http://storage.googleapis.com/gce-demo-input/photo.jpg"
    image_caption = "Ready for dessert?"

    config = {
        'name': name,
        'machineType': machine_type,

        # Specify the boot disk and the image to use as a source.
        'disks': [
            {
                'boot': True,
                'autoDelete': True,
                'initializeParams': {
                    'sourceImage': source_disk_image,
                }
            }
        ],

        # Specify a network interface with NAT to access the public
        # internet.
        'networkInterfaces': [{
            'network': 'global/networks/default',
            'accessConfigs': [
                {'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}
            ]
        }],

        # Allow the instance to access cloud storage and logging.
        'serviceAccounts': [{
            'email': 'default',
            'scopes': [
                'https://www.googleapis.com/auth/devstorage.read_write',
                'https://www.googleapis.com/auth/logging.write'
            ]
        }],

        # Metadata is readable from the instance and allows you to
        # pass configuration from deployment scripts to instances.
        'metadata': {
            'items': [{
                # Startup script is automatically executed by the
                # instance upon startup.
                'key': 'startup-script',
                'value': startup_script
            }, {
                'key': 'url',
                'value': image_url
            }, {
                'key': 'text',
                'value': image_caption
            }, {
                'key': 'bucket',
                'value': bucket
            }]
        }
    }

    return compute.instances().insert(
        project=project,
        zone=zone,
        body=config).execute()

The following sections describe the instance creation parameters.

Root persistent disks

All instances must boot from a root persistent disk. The root persistent disk contains all of the files required to start an instance. When you create a root persistent disk, you must specify the source OS image to apply to the disk. In the example above, you created a new root persistent disk based on Debian 9 at the same time as the instance. However, you can also create a disk beforehand and attach it to the instance.

Instance metadata

When you create your instance, you might want to include instance metadata such as a startup script, configuration variables, and SSH keys. In the example above, you used the metadata field in your request body to specify a startup script for the instance and some configuration variables as key/value pairs. The startup script, listed below, shows how to read those variables and use them to apply text to an image and upload it to Cloud Storage.

apt-get update
apt-get -y install imagemagick

# Use the metadata server to get the configuration specified during
# instance creation. Read more about metadata here:
# https://cloud.google.com/compute/docs/metadata#querying
IMAGE_URL=$(curl http://metadata/computeMetadata/v1/instance/attributes/url -H "Metadata-Flavor: Google")
TEXT=$(curl http://metadata/computeMetadata/v1/instance/attributes/text -H "Metadata-Flavor: Google")
CS_BUCKET=$(curl http://metadata/computeMetadata/v1/instance/attributes/bucket -H "Metadata-Flavor: Google")

mkdir image-output
cd image-output
wget $IMAGE_URL
convert * -pointsize 30 -fill white -stroke black -gravity center -annotate +10+40 "$TEXT" output.png

# Create a Google Cloud Storage bucket.
gsutil mb gs://$CS_BUCKET

# Store the image in the Google Cloud Storage bucket and allow all users
# to read it.
gsutil cp -a public-read output.png gs://$CS_BUCKET/output.png
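You can mirror the curl calls above from Python: the attribute name goes in the URL path, and the Metadata-Flavor header is mandatory. The following sketch only builds the request; the actual fetch succeeds only from inside a Compute Engine instance, and metadata.google.internal is the fully qualified name for the host the script abbreviates as metadata:

```python
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2

METADATA_ROOT = 'http://metadata.google.internal/computeMetadata/v1'


def metadata_request(attribute):
    # Build a request for a custom instance attribute. The
    # Metadata-Flavor header is required, or the server returns 403.
    url = '%s/instance/attributes/%s' % (METADATA_ROOT, attribute)
    return Request(url, headers={'Metadata-Flavor': 'Google'})


request = metadata_request('bucket')
# On a VM: bucket_name = urlopen(request).read()
```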

Deleting an Instance

To delete an instance, call the instances().delete method and provide the name, zone, and project ID of the instance to delete. Because you set the autoDelete parameter for the boot disk, the disk is deleted along with the instance. This setting is off by default, but it is useful when your use case calls for disks and instances to be deleted together.

def delete_instance(compute, project, zone, name):
    return compute.instances().delete(
        project=project,
        zone=zone,
        instance=name).execute()

Running the sample

You can run the full sample by downloading the code and running it on the command line. Make sure to download the create_instance.py file and the startup-script.sh file. To run the sample:

python create_instance.py --name [INSTANCE_NAME] --zone [ZONE] [PROJECT_ID] [CLOUD_STORAGE_BUCKET]

where:

  • [INSTANCE_NAME] is the name of the instance to create.
  • [ZONE] is the desired zone for this request.
  • [PROJECT_ID] is your project ID.
  • [CLOUD_STORAGE_BUCKET] is the name of the bucket you initially set up but without the gs:// prefix.

For example:

python create_instance.py --name example-instance --zone us-central1-a example-project my-gcs-bucket

Waiting for operations to complete

Requests to the Compute Engine API that modify resources such as instances immediately return a response acknowledging your request. The acknowledgement lets you check the status of the requested operation. Operations can take a few minutes to complete, so it's often easier to wait for the operation to complete before continuing. This helper method waits until the operation completes before returning:

def wait_for_operation(compute, project, zone, operation):
    print('Waiting for operation to finish...')
    while True:
        result = compute.zoneOperations().get(
            project=project,
            zone=zone,
            operation=operation).execute()

        if result['status'] == 'DONE':
            print("done.")
            if 'error' in result:
                raise Exception(result['error'])
            return result

        time.sleep(1)
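Polling once per second is fine for a short demo, but for longer-running operations you might add a timeout and back off between polls. Here is a hedged sketch, with a generic poll callable standing in for the zoneOperations().get(...).execute() call; the function and parameter names are illustrative:

```python
import time


def wait_with_backoff(poll, timeout=300.0, initial_delay=1.0,
                      factor=2.0, max_delay=16.0):
    # Call poll() until it reports DONE, sleeping with exponential
    # backoff between attempts; raise if the deadline passes first.
    deadline = time.time() + timeout
    delay = initial_delay
    while True:
        result = poll()
        if result['status'] == 'DONE':
            if 'error' in result:
                raise RuntimeError(result['error'])
            return result
        if time.time() >= deadline:
            raise RuntimeError('timed out waiting for operation')
        time.sleep(delay)
        delay = min(delay * factor, max_delay)
```

In this sample, poll would be `lambda: compute.zoneOperations().get(project=project, zone=zone, operation=operation).execute()`.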

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Delete your Cloud Storage bucket

To delete a Cloud Storage bucket:

  1. In the GCP Console, go to the Cloud Storage Browser page.

    Go to the Cloud Storage Browser page

  2. Click the checkbox for the bucket you want to delete.
  3. Click Delete to delete the bucket.

What's next

  • Download and view the full code sample. The full sample includes a small example of using all of these methods together. Feel free to download it, change it, and run it to suit your needs.
  • Review the API reference to learn how to perform other tasks with the API.
