Running the Python Bookshelf on Compute Engine

This tutorial shows how to run the Python Bookshelf app on Compute Engine and, more generally, how to deploy an existing Python web app to Compute Engine. We recommend working through the Bookshelf app tutorial for the App Engine standard environment before starting this tutorial.

Objectives

  • Deploy the Bookshelf sample app to a single Compute Engine instance.
  • Scale the app horizontally by using a managed instance group.
  • Serve traffic by using HTTP load balancing.
  • Respond to traffic changes by using autoscaling.

Costs

This tutorial uses billable components of Google Cloud Platform (GCP), including:

  • Compute Engine
  • Cloud Storage
  • Cloud Datastore
  • Stackdriver Logging
  • Cloud Pub/Sub

Use the Pricing Calculator to generate a cost estimate based on your projected usage. New GCP users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Enable the Cloud Datastore, Cloud Storage, and Cloud Pub/Sub APIs.

    Enable the APIs

  5. Install and initialize the Cloud SDK.
  6. Install Python, pip, and virtualenv on your system. For instructions, refer to Setting Up a Python Development Environment for Google Cloud Platform.

Initializing Cloud Datastore

The Bookshelf app uses Cloud Datastore to store the books. To initialize Cloud Datastore in your project for the first time:

  1. Open Cloud Datastore on the GCP Console.

  2. Select a region for your datastore and click Continue. When you reach the Create an Entity page, close the window. The Bookshelf app is ready to create entities on Cloud Datastore.
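The Bookshelf app stores each book as an entity in Cloud Datastore through the google-cloud-datastore client library. The following is only a minimal sketch of that pattern, not the sample's actual code; the Book kind and the fields shown are illustrative.

    # Minimal sketch (not the sample's code): create and query Book entities.
    from google.cloud import datastore

    client = datastore.Client(project='[YOUR_PROJECT_ID]')

    # Save a book. The incomplete key lets Datastore assign the numeric ID.
    book = datastore.Entity(client.key('Book'))
    book.update({'title': 'Moby Dick', 'author': 'Herman Melville'})
    client.put(book)

    # List books ordered by title.
    query = client.query(kind='Book')
    query.order = ['title']
    for entity in query.fetch(limit=10):
        print(entity['title'])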

Creating a Cloud Storage bucket

The following instructions show how to create a Cloud Storage bucket. Buckets are the basic containers that hold your data in Cloud Storage.

  1. In a terminal window, enter the following command:

    gsutil mb gs://[YOUR-BUCKET-NAME]

    Where:

    • [YOUR-BUCKET-NAME] represents the name of your Cloud Storage bucket.
  2. To view uploaded images in the bookshelf app, set the bucket's default access control list (ACL) to public-read.

    gsutil defacl set public-read gs://[YOUR-BUCKET-NAME]
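Because the bucket's default ACL is public-read, any cover image the app uploads is readable over a public URL. The snippet below is a minimal sketch of such an upload using the google-cloud-storage client library; it is not the sample's code, and the object and file names are placeholders.

    # Minimal sketch (not the sample's code): upload an image and read its
    # public URL. The public-read default ACL set above makes every new
    # object readable without signed URLs.
    from google.cloud import storage

    client = storage.Client(project='[YOUR_PROJECT_ID]')
    bucket = client.bucket('[YOUR-BUCKET-NAME]')

    blob = bucket.blob('covers/moby-dick.jpg')      # object name (placeholder)
    blob.upload_from_filename('moby-dick.jpg')      # local file (placeholder)

    print(blob.public_url)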

    Cloning the sample app

    The sample app is available on GitHub at GoogleCloudPlatform/getting-started-python.

    1. Clone the repository.

      git clone https://github.com/GoogleCloudPlatform/getting-started-python.git
      
    2. Go to the sample directory.

      cd getting-started-python/7-gce
      

    Configuring the app

    1. Open config.py for editing.

    2. Set the value of PROJECT_ID to your project ID.

    3. Set the value of CLOUD_STORAGE_BUCKET to your Cloud Storage bucket name.

    4. Save and close config.py.
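    After editing, the two settings look roughly like the following sketch. The real config.py may contain additional settings; only the two values named above are shown here, with placeholder strings.

      # Hypothetical excerpt of config.py after editing; values are placeholders.
      PROJECT_ID = 'your-project-id'                # your GCP project ID
      CLOUD_STORAGE_BUCKET = 'your-bucket-name'     # bucket created earlier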

    Running the app on your local computer

    1. Create an isolated Python environment, and install dependencies:

      Linux/macOS

      virtualenv -p python3 env
      source env/bin/activate
      pip install -r requirements.txt
      

      Windows

      virtualenv -p python3 env
      env\scripts\activate
      pip install -r requirements.txt
      

    2. Use Honcho to run both the application and the task worker locally (a generic sketch of such a worker follows these steps). Learn more about using Honcho in the Cloud Pub/Sub part of the tutorial.

      honcho start -f ./procfile worker bookshelf

    3. In your web browser, enter this address:

      http://localhost:8080

    To stop the local web server and worker, press Control+C.
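    The worker that Honcho starts handles the app's background tasks through Cloud Pub/Sub. The snippet below is not the sample's worker; it is only a generic sketch of a Pub/Sub-driven worker process, assuming the google-cloud-pubsub library and a hypothetical subscription name.

      # Generic illustration only (not the sample's worker): pull messages
      # from a hypothetical subscription and acknowledge them after handling.
      from google.cloud import pubsub_v1

      subscriber = pubsub_v1.SubscriberClient()
      subscription = subscriber.subscription_path(
          '[YOUR_PROJECT_ID]', 'bookshelf-worker-sub')   # hypothetical name

      def callback(message):
          print('Handling task: {}'.format(message.data))
          message.ack()   # acknowledge so the message is not redelivered

      # Blocks while messages are streamed in and dispatched to the callback.
      subscriber.subscribe(subscription, callback=callback).result()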

    Deploying to a single instance

    Single-instance deployment

    This section walks you through running a single instance of your app on Compute Engine.

    Push your code to a repository

    You can use Cloud Source Repositories to easily create a Git repository in your project and upload your app code there. Your instances can then pull the latest version of your app code from the repository during startup. This is convenient because updating your app doesn't require configuring new images or instances; all you need to do is restart an existing instance or create a new one.

    If this is your first time using Git, use git config --global to set up your identity.

    1. In the GCP Console, create a repository:

      Create Repository

      Or use gcloud to create the repository from the command line:

      gcloud source repos create [YOUR_REPO]
      

      Where:

      • [YOUR_REPO] is the name of the repository to create.
    2. Push your app's code to your project's repository. In the following commands, [YOUR_PROJECT_ID] is your project ID and [YOUR_REPO] is the repository you just created.

      git commit -am "Updating configuration"
      git config credential.helper gcloud.sh
      git remote add cloud https://source.developers.google.com/p/[YOUR_PROJECT_ID]/r/[YOUR_REPO]
      git push cloud master
      

    Use a startup script to initialize an instance

    Now that your code is accessible by Compute Engine instances, you need a way to instruct your instance to download and run your code. An instance can have a startup script that is executed whenever the instance is started or restarted.

    Here is the startup script that is included in the Bookshelf sample app:

    #!/bin/bash
    set -v
    
    # Talk to the metadata server to get the project id
    PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
    
    # Install logging monitor. The monitor will automatically pickup logs sent to
    # syslog.
    curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
    service google-fluentd restart &
    
    # Install dependencies from apt
    apt-get update
    apt-get install -yq \
        git build-essential supervisor python python-dev python-pip libffi-dev \
        libssl-dev
    
    # Create a pythonapp user. The application will run as this user.
    useradd -m -d /home/pythonapp pythonapp
    
    # pip from apt is out of date, so make it update itself and install virtualenv.
    pip install --upgrade pip virtualenv
    
    # Get the source code from the Google Cloud Repository
    # git requires $HOME and it's not set during the startup script.
    export HOME=/root
    git config --global credential.helper gcloud.sh
    git clone https://source.developers.google.com/p/$PROJECTID/r/[YOUR_REPO_NAME] /opt/app
    
    # Install app dependencies
    virtualenv -p python3 /opt/app/7-gce/env
    source /opt/app/7-gce/env/bin/activate
    /opt/app/7-gce/env/bin/pip install -r /opt/app/7-gce/requirements.txt
    
    # Make sure the pythonapp user owns the application code
    chown -R pythonapp:pythonapp /opt/app
    
    # Configure supervisor to start gunicorn inside of our virtualenv and run the
    # application.
    cat >/etc/supervisor/conf.d/python-app.conf << EOF
    [program:pythonapp]
    directory=/opt/app/7-gce
    command=/opt/app/7-gce/env/bin/honcho start -f ./procfile worker bookshelf
    autostart=true
    autorestart=true
    user=pythonapp
    # Environment variables ensure that the application runs inside of the
    # configured virtualenv.
    environment=VIRTUAL_ENV="/opt/app/7-gce/env",PATH="/opt/app/7-gce/env/bin",\
        HOME="/home/pythonapp",USER="pythonapp"
    stdout_logfile=syslog
    stderr_logfile=syslog
    EOF
    
    supervisorctl reread
    supervisorctl update
    
    # Application should now be running under supervisor

    To use this script, replace [YOUR_REPO_NAME] with the name of your repository.

    The startup script performs the following tasks:

    • Installs the Stackdriver Logging agent. The agent automatically collects logs from syslog.

    • Installs Python and Supervisor. Supervisor runs the app as a daemon.

    • Clones the app's source code from the Cloud Source Repositories and installs dependencies.

    • Configures Supervisor to run the app. Supervisor makes sure the app is restarted if it exits unexpectedly or is killed by an admin or other process. It also sends the app's stdout and stderr to syslog to be collected by the Logging agent.

    Create and configure a Compute Engine instance

    1. Create a Compute Engine instance. The following command creates a new instance, allows it to access GCP services, and runs your startup script. The instance name is my-app-instance.

      Linux/macOS

      gcloud compute instances create my-app-instance \
          --image-family=debian-9 \
          --image-project=debian-cloud \
          --machine-type=g1-small \
          --scopes userinfo-email,cloud-platform \
          --metadata-from-file startup-script=gce/startup-script.sh \
          --zone us-central1-f \
          --tags http-server
      

      Windows

      gcloud compute instances create my-app-instance ^
          --image-family=debian-9 ^
          --image-project=debian-cloud ^
          --machine-type=g1-small ^
          --scopes userinfo-email,cloud-platform ^
          --metadata-from-file startup-script=gce/startup-script.sh ^
          --zone us-central1-f ^
          --tags http-server
      

    2. Check the progress of the instance creation.

      gcloud compute instances get-serial-port-output my-app-instance --zone us-central1-f
      

      If the startup script has completed, Finished running startup script is displayed near the end of the command output.

    3. Create a firewall rule to allow traffic to your instance.

      Linux/macOS

      gcloud compute firewall-rules create default-allow-http-8080 \
          --allow tcp:8080 \
          --source-ranges 0.0.0.0/0 \
          --target-tags http-server \
          --description "Allow port 8080 access to http-server"
      

      Windows

      gcloud compute firewall-rules create default-allow-http-8080 ^
          --allow tcp:8080 ^
          --source-ranges 0.0.0.0/0 ^
          --target-tags http-server ^
          --description "Allow port 8080 access to http-server"
      

    4. Get the external IP address of your instance.

      gcloud compute instances list
      
    5. To see the app running, go to http://[YOUR_INSTANCE_IP]:8080,

      where [YOUR_INSTANCE_IP] is the external IP address of your instance.

    Manage and monitor an instance

    You can use the Google Cloud Platform Console to monitor and manage your instance.

    To view the running instance and connect to it by using ssh, go to Compute > Compute Engine.

    To view all of the logs generated by your Compute Engine resources, go to Monitoring > Logs. Stackdriver Logging is automatically configured to gather logs from various common services, including syslog.
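    Because Supervisor sends the app's stdout and stderr to syslog (see the startup script above), anything the app writes with Python's standard logging module reaches Stackdriver Logging without extra configuration. A minimal sketch:

      # Minimal sketch: log to stdout; Supervisor forwards stdout/stderr to
      # syslog, and the Logging agent ships syslog to Stackdriver Logging.
      import logging
      import sys

      logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                          format='%(levelname)s %(name)s: %(message)s')

      logging.getLogger(__name__).info('Bookshelf app started')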

    Horizontal scaling with multiple instances

    Multiple-instance deployment with managed instances

    Compute Engine can easily scale horizontally. By using a managed instance group and the Compute Engine Autoscaler, Compute Engine can automatically create new instances of your app when needed and shut down instances when demand is low. You can set up an HTTP load balancer to distribute traffic to the instances in a managed instance group.

    Deployment script

    The sample app includes a script that automates the following deployment steps. The script named deploy.sh deploys the resources for a complete, autoscaled, load-balanced app as described in Horizontal scaling with multiple instances.

    You can run each of the following steps yourself, or run gce/deploy.sh from the gce directory.

    Default values for environment variables including $IMAGE_FAMILY, $IMAGE_PROJECT, $MACHINE_TYPE, and $SCOPES can be reviewed in the initialization section of gce/deploy.sh.

    Create a managed instance group

    A managed instance group is a group of homogeneous instances based on the same instance template. An instance template defines the configuration of your instance, including source image, disk size, scopes, and metadata, including startup scripts.

    1. First, create a template.

      gcloud compute instance-templates create $TEMPLATE \
        --image-family $IMAGE_FAMILY \
        --image-project $IMAGE_PROJECT \
        --machine-type $MACHINE_TYPE \
        --scopes $SCOPES \
        --metadata-from-file startup-script=$STARTUP_SCRIPT \
        --tags $TAGS
    2. Create an instance group.

      gcloud compute instance-groups managed \
        create $GROUP \
        --base-instance-name $GROUP \
        --size $MIN_INSTANCES \
        --template $TEMPLATE \
        --zone $ZONE

      The --size parameter specifies the number of instances in the group. After all of the instances have finished running their startup scripts, the instances can be accessed individually by using their external IP addresses and port 8080. To find the external IP addresses of the instances, enter gcloud compute instances list. The managed instances have names that start with the same prefix, my-app, which you specified in the --base-instance-name parameter.

    Create a load balancer

    An individual instance is fine for testing or debugging, but for serving web traffic it's better to use a load balancer to automatically direct traffic to available instances. To create a load balancer, follow these steps.

    1. Create a health check. The load balancer uses a health check to determine which instances are capable of serving traffic (a minimal sketch of the app-side endpoint follows this list).

      gcloud compute http-health-checks create ah-health-check \
        --request-path /_ah/health \
        --port 8080
    2. Create a named port. The HTTP load balancer looks for the http service to know which port to direct traffic to. In your existing instance group, give port 8080 the name http.

      gcloud compute instance-groups managed set-named-ports \
          $GROUP \
          --named-ports http:8080 \
          --zone $ZONE
    3. Create a backend service. The backend service is the target for load-balanced traffic. It defines which instance group the traffic should be directed to and which health check to use.

      gcloud compute backend-services create $SERVICE \
        --http-health-checks ah-health-check \
        --global
    4. Add your instance group to the backend service.

      gcloud compute backend-services add-backend $SERVICE \
        --instance-group $GROUP \
        --instance-group-zone $ZONE \
        --global
    5. Create a URL map and proxy. The URL map defines which URLs should be directed to which backend services. In this sample, all traffic is served by one backend service. If you want to load balance requests between multiple regions or groups, you can create multiple backend services. A proxy receives traffic and forwards it to backend services using URL maps.

      1. Create the URL map.

        gcloud compute url-maps create $SERVICE-map \
          --default-service $SERVICE
      2. Create the proxy.

        gcloud compute target-http-proxies create $SERVICE-proxy \
          --url-map $SERVICE-map
    6. Create a global forwarding rule. The global forwarding rule ties a public IP address and port to a proxy.

      gcloud compute forwarding-rules create $SERVICE-http-rule \
        --global \
        --target-http-proxy $SERVICE-proxy \
        --ports=80
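    The health check in step 1 expects each instance to answer on /_ah/health with an HTTP 200 response, which the deployed Bookshelf app must serve. The snippet below is only a minimal Flask sketch of what such an endpoint looks like, not the sample's code.

      # Minimal Flask sketch (not the sample's code): respond to the load
      # balancer's HTTP health check with a 200 status.
      from flask import Flask

      app = Flask(__name__)

      @app.route('/_ah/health')
      def health_check():
          return 'ok', 200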

    Configure the autoscaler

    The load balancer ensures that traffic is distributed across all of your healthy instances. But what happens if there is too much traffic for your instances to handle? You could manually add more instances. But a better solution is to configure a Compute Engine autoscaler to automatically create and delete instances in response to traffic demands.

    1. Create an autoscaler.

      gcloud compute instance-groups managed set-autoscaling \
        $GROUP \
        --max-num-replicas $MAX_INSTANCES \
        --target-load-balancing-utilization $TARGET_UTILIZATION \
        --zone $ZONE

      The preceding command creates an autoscaler on the managed instance group. New instances are added when load-balancing utilization rises above $TARGET_UTILIZATION and are removed when it falls below that target, up to a maximum of $MAX_INSTANCES instances.

    2. Create a firewall rule.

      # Check if the firewall rule has been created in previous steps of the documentation
      if gcloud compute firewall-rules list --filter="name~'default-allow-http-8080'" \
        --format="table(name)" | grep -q 'NAME'; then
        echo "Firewall rule default-allow-http-8080 already exists."
      else
        gcloud compute firewall-rules create default-allow-http-8080 \
          --allow tcp:8080 \
          --source-ranges 0.0.0.0/0 \
          --target-tags http-server \
          --description "Allow port 8080 access to http-server"
      fi
      

    3. Check progress until at least one of your instances reports HEALTHY.

      gcloud compute backend-services get-health my-app-service --global
      

    View your app

    1. Get the forwarding IP address for the load balancer.

      gcloud compute forwarding-rules list --global
      

      Your forwarding-rules IP address is in the IP_ADDRESS column.

    2. In a browser, enter the IP address from the list. Your load-balanced and autoscaled app is now running on Compute Engine!

    Manage and monitor your deployment

    Managing multiple instances is as easy as managing a single instance. You can use the GCP Console to monitor load balancing, autoscaling, and your managed instance group.

    You can manage your instance group and autoscaling configuration by using the Compute Engine > Instance groups section.

    You can manage your load balancing configuration, including URL maps and backend services, by using the Network services > Load balancing section.

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Run the teardown script

If you ran the deploy.sh script, run the teardown.sh script to remove all of the resources that deploy.sh created. This returns your project to its state before you ran deploy.sh and helps you avoid further billing. To remove the single instance and the storage bucket created at the beginning of the tutorial, follow the instructions in the next section.

Delete resources manually

If you followed the steps in this tutorial manually, you can manually delete the cloud resources you created.

Delete your load balancer

  1. In the GCP Console, go to the Load Balancing page.

    Go to the Load Balancing page

  2. Click the checkbox next to the load balancer you want to delete.

  3. Click the Delete button at the top of the page to delete the load balancer.

  4. In the Delete load balancer dialog, select the associated backend service and health check resources.

  5. Click the Delete Load Balancer button to delete the load balancer and its associated resources.

Delete your Compute Engine managed instance group

To delete a Compute Engine instance group:

  1. In the GCP Console, go to the Instance Groups page.

    Go to the Instance Groups page

  2. Click the checkbox next to the instance group you want to delete.
  3. Click the Delete button at the top of the page to delete the instance group.

Delete your single Compute Engine instance

To delete a Compute Engine instance:

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click the checkbox next to the instance you want to delete.
  3. Click the Delete button at the top of the page to delete the instance.

Delete your Cloud Storage bucket

To delete a Cloud Storage bucket:

  1. In the GCP Console, go to the Cloud Storage browser.

    Go to the Cloud Storage browser

  2. Click the checkbox next to the bucket you want to delete.
  3. Click the Delete button at the top of the page to delete the bucket.
