Using the Python Client Library

This tutorial describes how to use the Google APIs Client Library for Python to call the AI Platform REST APIs in your Python applications. The code snippets and examples in the rest of this documentation use this Python client library.

In this tutorial, you create a model in your Google Cloud Platform project: a simple task that fits easily in a small example.

Objectives

This is a basic tutorial designed to familiarize you with this Python client library. When you're finished you should be able to:

  • Get a Python representation of the AI Platform services.
  • Use that representation to create a model in your project, which should help you understand how to call the other model and job management APIs.

Costs

You will not be charged for the operations in this tutorial. Refer to the pricing page for more information.

Before you begin

Set up your GCP project

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the Project selector page

  3. Make sure that billing is enabled for your Google Cloud Platform project.

    Learn how to enable billing

  4. Enable the AI Platform ("Cloud Machine Learning Engine") and Compute Engine APIs.

    Enable the APIs

  5. Install and initialize the Cloud SDK.

Set up authentication

To set up authentication, you need to create a service account key and set an environment variable for the file path to the service account key.

  1. Create a service account key for authentication:
    1. In the GCP Console, go to the Create service account key page.

      Go to the Create Service Account Key page
    2. From the Service account drop-down list, select New service account.
    3. In the Service account name field, enter a name.
    4. From the Role drop-down list, select Machine Learning Engine > ML Engine Admin and Storage > Storage Object Admin.

      Note: The Role field authorizes your service account to access resources. You can view and change this field later by using GCP Console. If you are developing a production app, you may need to specify more granular permissions than Machine Learning Engine > ML Engine Admin and Storage > Storage Object Admin. For more information, see access control for AI Platform.
    5. Click Create. A JSON file that contains your key downloads to your computer.
  2. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. This variable only applies to your current shell session, so if you open a new session, set the variable again.
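    If you prefer to set the variable from Python rather than in your shell (for example, in a notebook), you can assign it through os.environ before building the client. The key path below is a placeholder; replace it with the location of the JSON file you downloaded:

    ```python
    import os

    # Point the client library at your service account key.
    # This path is a placeholder; use the actual location of your key file.
    os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/user/keys/my-key.json'
    ```

    Like the shell variable, this assignment only lasts for the life of the process.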

Set up a Python development environment

Choose one of the options below to set up your environment locally on macOS or in a remote environment on Cloud Shell.

For macOS users, we recommend that you set up your environment using the MACOS tab below. Cloud Shell, shown on the CLOUD SHELL tab, is available on macOS, Linux, and Windows. Cloud Shell provides a quick way to try AI Platform, but isn't suitable for ongoing development work.

macOS

  1. Check Python installation
    Confirm that you have Python installed and, if necessary, install it.

    python -V
  2. Check pip installation
    pip is Python's package manager, included with current versions of Python. Check if you already have pip installed by running pip --version. If not, see how to install pip.

    You can upgrade pip using the following command:

    pip install -U pip

    See the pip documentation for more details.

  3. Install virtualenv
    virtualenv is a tool to create isolated Python environments. Check if you already have virtualenv installed by running virtualenv --version. If not, install virtualenv:

    pip install --user --upgrade virtualenv

    To create an isolated development environment for this guide, create a new virtual environment in virtualenv. For example, the following commands create and activate an environment named cmle-env:

    virtualenv cmle-env
    source cmle-env/bin/activate
  4. For the purposes of this tutorial, run the rest of the commands within your virtual environment.

    See more information about using virtualenv. To exit virtualenv, run deactivate.

Cloud Shell

  1. Open the Google Cloud Platform Console.

    Google Cloud Platform Console

  2. Click the Activate Google Cloud Shell button at the top of the console window.

    Activate Google Cloud Shell

    A Cloud Shell session opens inside a new frame at the bottom of the console and displays a command-line prompt. It can take a few seconds for the shell session to be initialized.

    Cloud Shell session

    Your Cloud Shell session is ready to use.

  3. Configure the gcloud command-line tool to use your selected project.

    gcloud config set project [selected-project-id]

    where [selected-project-id] is your project ID. (Omit the enclosing brackets.)

Install the Google APIs Client Library for Python

Install the Google APIs Client Library for Python.


Importing the required modules

To use the Google APIs Client Library for Python to call the AI Platform REST APIs in your code, import its package and the OAuth2 package. For this tutorial (and for most standard uses of AI Platform) you only need a few specific modules; refer to the documentation for those packages to learn about the other available modules.

Create a new Python file using your favorite editor, and add these lines:

from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors

Building a Python representation of the API

Get your Python representation of the REST API. The method you call is build because the API client library uses service discovery to dynamically set up connections to the services as they exist when you make the call. Name the object that encapsulates the services ml:

ml = discovery.build('ml', 'v1')

Configuring your parameters and request body

To make a call to a service, you must create the parameters and request body that will be passed to the REST API. You pass parameters as regular Python parameters to the method that represents the call. The body is a JSON resource just as you would use if calling the API with an HTTP request directly.

Take a look at the REST API for creating a model in a new browser tab, projects.models.create:

  • Notice the path parameter parent, which is the part of the URI of the request that identifies the project. If you were making the HTTP POST request directly, you would use the following URI:

    https://ml.googleapis.com/v1/projects/your_project_ID/models
    

     When using the API client library, the variable part of the URI is represented as a string-typed parameter to the API call. You'll set it to 'projects/your_project_ID'. Store your project in a variable to make API calls cleaner:

    project_id = 'projects/{}'.format('your_project_ID')
    
  • The body of the request is a JSON resource representing the model information. You can see in the model resource definition that it has two values for input: name and (optionally) description. You can pass a Python dictionary in the place of JSON and the API client library will perform the necessary conversion.

    Create your Python dictionary:

    request_dict = {'name': 'your-model-name',
                   'description': 'This is a machine learning model entry.'}
    

Creating your request

Making calls to APIs with the Python client library has two steps: first you create a request, then you make the call using that request.

Create the request

Use the built client objects that you created earlier (if you followed the code snippet exactly, it's called ml) as the root of the API hierarchy and specify the API you want to use. Each collection in the API path behaves like a function that returns a list of the collections and methods within it. For example, the root of all the AI Platform APIs is projects, so your call begins with ml.projects().

Use this code to form your request:

request = ml.projects().models().create(parent=project_id, body=request_dict)

Send the request

The request that you constructed in the last step exposes an execute method that you call to send the request to the service:

response = request.execute()

It's common for developers to combine this step with the last one:

response = ml.projects().models().create(parent=project_id,
                                         body=request_dict).execute()

Handle simple errors

A lot of things can go wrong when you make API calls over the Internet. It's a good idea to handle common errors. The simplest way to deal with errors is to put your request in a try block and catch likely errors. Most of the errors you're likely to get from the service are HTTP errors, which are encapsulated in the HttpError class. To catch these errors, you'll use the errors module from the googleapiclient package.

Wrap your request.execute() call in a try block. Also put a print statement in the block, so that you will try to print the response only if the call succeeds:

try:
    response = request.execute()
    print(response)

Add a catch block to handle HTTP errors. You can use HttpError._get_reason() to get the reason text fields from the response:

except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the model. Check the details:')
    print(err._get_reason())

Of course, a simple print statement might not be the right approach for your application.
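For instance, a production application might log the failure instead of printing it. The sketch below wraps the call in a hypothetical helper; it catches a broad Exception only to stay self-contained, but with the client library installed you would catch errors.HttpError as shown above:

```python
import logging

logger = logging.getLogger(__name__)

def create_model_safely(request):
    """Execute an API request, logging failures instead of printing them.

    `request` is any object with an execute() method, such as the one
    returned by ml.projects().models().create(...).
    """
    try:
        return request.execute()
    except Exception as err:  # with the client library: errors.HttpError
        logger.error('There was an error creating the model: %s', err)
        return None
```

Returning None lets the caller decide how to recover, rather than burying that decision in a print statement.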

Putting it all together

Here is the complete example:

from googleapiclient import discovery
from googleapiclient import errors

# Store your full project ID in a variable in the format the API needs.
project_id = 'projects/{}'.format('your_project_ID')

# Build a representation of the Cloud ML API.
ml = discovery.build('ml', 'v1')

# Create a dictionary with the fields from the request body.
request_dict = {'name': 'your_model_name',
                'description': 'your_model_description'}

# Create a request to call projects.models.create.
request = ml.projects().models().create(
              parent=project_id, body=request_dict)

# Make the call.
try:
    response = request.execute()
    print(response)
except errors.HttpError as err:
    # Something went wrong, print out some information.
    print('There was an error creating the model. Check the details:')
    print(err._get_reason())

Generalizing to other methods

You can use the procedure you learned here to make any of the other REST API calls. Some of the APIs require much more complicated JSON resources than creating a model does, but the principles are the same:

  1. Import googleapiclient.discovery and googleapiclient.errors.

  2. Use the discovery module to build a Python representation of the API.

  3. Use the API representation as a series of nested objects to get to the API you want and create a request. For example,

    request = ml.projects().models().versions().delete(
        name='projects/myproject/models/mymodel/versions/myversion')
    
  4. Call request.execute() to send the request, handling exceptions in an appropriate way for your application.

  5. When there is a response body, you can treat it like a Python dictionary to get at the JSON objects specified in the API reference. Note that many of the objects in responses have fields that are present only in some circumstances. You should always check to avoid key errors:

    response = request.execute()
    
    some_value = response.get('some_key') or 'default_value'
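    As a quick illustration, here is that pattern with a plain dictionary standing in for a parsed response (the keys are made up; real response fields vary by method):

    ```python
    # A stand-in for a parsed API response body.
    response = {'name': 'projects/myproject/models/mymodel'}

    # Present key: the value is returned directly.
    model_name = response.get('name') or 'unknown'

    # Absent key: .get avoids a KeyError and the fallback is used instead.
    description = response.get('description') or 'no description'
    ```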
    

What's next
