This tutorial shows how to run the Python Bookshelf sample app on Compute Engine. Follow this tutorial to deploy an existing Python web app to Compute Engine. We recommend that you work through the Bookshelf app documentation for the App Engine standard environment before beginning this tutorial. In this tutorial, you do the following:
- Deploy the Bookshelf sample app to a single Compute Engine instance.
- Scale the app horizontally by using a managed instance group.
- Serve traffic by using HTTP load balancing.
- Respond to traffic changes by using autoscaling.
Costs
This tutorial uses the following billable components of Google Cloud Platform.
When you finish this tutorial, you can avoid continued billing by deleting the resources that you created. For more information, see Cleaning up.
Before you begin
Sign in to your Google Account.
If you don't already have one, sign up for a new account.
Select or create a Google Cloud Platform project.
Make sure that billing is enabled for your Google Cloud Platform project.
- Enable the Cloud Datastore, Cloud Storage, and Cloud Pub/Sub APIs.
- Install and initialize the Cloud SDK.
Install virtualenv on your system. For instructions, see Setting up a Python development environment for Google Cloud Platform.
Initializing Cloud Datastore
The Bookshelf app uses Cloud Datastore to store book data. To initialize Cloud Datastore in your project for the first time, follow these steps:
In the Google Cloud Platform Console, open the Datastore page.
On the Settings and utilities menu, select Preferences.
Under User preferences, set the Language & region for your datastore, and then click Save.
When you reach the Create an Entity page, close the window.
The Bookshelf app is ready to create entities on Cloud Datastore.
Creating a Cloud Storage bucket
The following instructions detail how to create a Cloud Storage bucket. Buckets are the basic containers that hold your data in Cloud Storage.
In your terminal window, create a Cloud Storage bucket, where [YOUR_BUCKET_NAME] represents the name of your bucket:
gsutil mb gs://[YOUR_BUCKET_NAME]
To view uploaded images in the Bookshelf app, set the bucket's default access control list (ACL) to public-read:
gsutil defacl set public-read gs://[YOUR_BUCKET_NAME]
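With the default ACL set to public-read, each object uploaded to the bucket becomes readable at a predictable URL. A minimal Python sketch of building that URL (the bucket and object names below are made-up examples):

```python
# Build the public URL for an object in a bucket whose default ACL is
# public-read. Objects in such a bucket are served from
# storage.googleapis.com/<bucket>/<object>.
def public_url(bucket_name, object_name):
    return "https://storage.googleapis.com/{}/{}".format(bucket_name, object_name)

print(public_url("my-bookshelf-bucket", "covers/book1.jpg"))
# https://storage.googleapis.com/my-bookshelf-bucket/covers/book1.jpg
```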
Cloning the sample app
The sample app is available on GitHub in the GoogleCloudPlatform/getting-started-python repository.
Clone the repository:
git clone https://github.com/GoogleCloudPlatform/getting-started-python.git
Go to the sample directory:
cd getting-started-python/7-gce
Configuring the app
Open the config.py file for editing and replace the following values:
Set the value of [PROJECT_ID] to your project ID.
Set the value of [CLOUD_STORAGE_BUCKET] to the name of your Cloud Storage bucket.
Save and close the config.py file.
Running the app on your local computer
Create an isolated Python environment, and install dependencies:
Linux/macOS:
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt

Windows:
virtualenv -p python3 env
env\scripts\activate
pip install -r requirements.txt
Run both the app and the task worker locally, using Honcho. Learn more about using Honcho in the Cloud Pub/Sub part of the tutorial.
honcho start -f ./procfile worker bookshelf
In your browser, enter the following address:
http://localhost:8080
To stop the local tasks, press Control+C.
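Honcho reads its process definitions from the procfile in the sample directory: one entry for the web app and one for the background worker. The exact contents can differ between releases of the sample; a representative sketch, assuming the web entry point is main:app served by gunicorn and the worker is started by tasks.py:

```
bookshelf: gunicorn -b 0.0.0.0:8080 main:app
worker: python tasks.py
```

The names `bookshelf` and `worker` are what the `honcho start -f ./procfile worker bookshelf` command selects.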
Deploying to a single instance
This section walks you through running a single instance of your app on Compute Engine.
Push your code to a repository
You can use Cloud Source Repositories to create a Git repository in your project and upload your app's code there. Your instances can then pull the latest version of your app's code from the repository during startup. Using a Git repository is convenient because updating your app doesn't require configuring new images or instances; just restart an existing instance or create one.
If this is your first time using Git, use git config --global to set up your user name and email address.
In the GCP Console, create a repository:
Push your app's code to your project's repository, where [YOUR_PROJECT_ID] is your GCP project ID and [YOUR_REPO] is the name of your repository:
git commit -am "Updating configuration"
git config credential.helper gcloud.sh
git remote add cloud https://source.developers.google.com/p/[YOUR_PROJECT_ID]/r/[YOUR_REPO]
git push cloud master
Initialize an instance by using a startup script
Now that Compute Engine instances can access your code, you need a way to instruct your instance to download and run your code. An instance can have a startup script that runs whenever the instance is started or restarted.
Here is the startup script included in the Bookshelf sample app. To use this script, replace [YOUR_REPO_NAME] with the name of your repository.
set -v

# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")

# Install logging monitor. The monitor will automatically pickup logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &

# Install dependencies from apt
apt-get update
apt-get install -yq \
    git build-essential supervisor python python-dev python-pip libffi-dev \
    libssl-dev

# Create a pythonapp user. The application will run as this user.
useradd -m -d /home/pythonapp pythonapp

# pip from apt is out of date, so make it update itself and install virtualenv.
pip install --upgrade pip virtualenv

# Get the source code from the Google Cloud Repository
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/$PROJECTID/r/[YOUR_REPO_NAME] /opt/app

# Install app dependencies
virtualenv -p python3 /opt/app/7-gce/env
source /opt/app/7-gce/env/bin/activate
/opt/app/7-gce/env/bin/pip install -r /opt/app/7-gce/requirements.txt

# Make sure the pythonapp user owns the application code
chown -R pythonapp:pythonapp /opt/app

# Configure supervisor to start gunicorn inside of our virtualenv and run the
# application.
cat >/etc/supervisor/conf.d/python-app.conf << EOF
[program:pythonapp]
directory=/opt/app/7-gce
command=/opt/app/7-gce/env/bin/honcho start -f ./procfile worker bookshelf
autostart=true
autorestart=true
user=pythonapp
# Environment variables ensure that the application runs inside of the
# configured virtualenv.
environment=VIRTUAL_ENV="/opt/app/7-gce/env",PATH="/opt/app/7-gce/env/bin",\
    HOME="/home/pythonapp",USER="pythonapp"
stdout_logfile=syslog
stderr_logfile=syslog
EOF

supervisorctl reread
supervisorctl update

# Application should now be running under supervisor
The startup script performs the following tasks:
Installs the Logging agent. The agent automatically collects logs from syslog.
Installs Python and Supervisor. Supervisor runs the app as a daemon.
Clones the app's source code from the Cloud Source Repositories and installs dependencies.
Configures Supervisor to run the app. Supervisor makes sure the app is restarted if it exits unexpectedly or is stopped by an admin or process. It also sends the app's stdout and stderr to syslog for the Logging agent to collect.
Create and configure a Compute Engine instance
Create a Compute Engine instance. This command creates an instance, allows it to access GCP services, and runs your startup script. The instance name is my-app-instance.
Linux/macOS:
gcloud compute instances create my-app-instance \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --machine-type=g1-small \
    --scopes userinfo-email,cloud-platform \
    --metadata-from-file startup-script=gce/startup-script.sh \
    --zone us-central1-f \
    --tags http-server

Windows:
gcloud compute instances create my-app-instance ^
    --image-family=debian-9 ^
    --image-project=debian-cloud ^
    --machine-type=g1-small ^
    --scopes userinfo-email,cloud-platform ^
    --metadata-from-file startup-script=gce/startup-script.sh ^
    --zone us-central1-f ^
    --tags http-server
Check the progress of the instance creation.
gcloud compute instances get-serial-port-output my-app-instance --zone us-central1-f
When the startup script completes, the output displays Finished running startup script.
Create a firewall rule to allow traffic to your instance.
Linux/macOS:
gcloud compute firewall-rules create default-allow-http-8080 \
    --allow tcp:8080 \
    --source-ranges 0.0.0.0/0 \
    --target-tags http-server \
    --description "Allow port 8080 access to http-server"

Windows:
gcloud compute firewall-rules create default-allow-http-8080 ^
    --allow tcp:8080 ^
    --source-ranges 0.0.0.0/0 ^
    --target-tags http-server ^
    --description "Allow port 8080 access to http-server"
Get the external IP address of your instance.
gcloud compute instances list
To see the app running, go to http://[YOUR_INSTANCE_IP]:8080, where [YOUR_INSTANCE_IP] is the external IP address of your instance.
Manage and monitor your instance
To manage and monitor your instance, use the GCP Console.
To view the running instance and connect to it by using ssh, in the GCP Console, go to the VM instances page.
To view the logs generated by your Compute Engine resources, in the GCP Console, go to the Logs page.
Logging is automatically configured to gather logs from various common services, including syslog.
Horizontal scaling with multiple instances
Compute Engine can scale horizontally. By using a managed instance group and the Compute Engine autoscaler, Compute Engine can automatically create instances of your app when needed and shut down instances when demand is low. You can set up an HTTP load balancer to distribute traffic to the instances in a managed instance group.
The sample app includes a script that automates the following deployment steps. The script, named deploy.sh, deploys the resources for a complete, autoscaled, load-balanced app as described in Horizontal scaling with multiple instances. You can run each of the following steps yourself, or run the deploy.sh script instead.
Create a managed instance group
A managed instance group is a group of homogeneous instances based on the same instance template. An instance template defines the configuration of your instance, including source image, disk size, scopes, and metadata (including startup scripts).
Create a template.
gcloud compute instance-templates create $TEMPLATE \
    --image-family $IMAGE_FAMILY \
    --image-project $IMAGE_PROJECT \
    --machine-type $MACHINE_TYPE \
    --scopes $SCOPES \
    --metadata-from-file startup-script=$STARTUP_SCRIPT \
    --tags $TAGS
Create an instance group.
gcloud compute instance-groups managed \
    create $GROUP \
    --base-instance-name $GROUP \
    --size $MIN_INSTANCES \
    --template $TEMPLATE \
    --zone $ZONE
The --size parameter specifies the number of instances in the group. When all of the instances finish running their startup scripts, you can access the instances individually by using their external IP addresses and port 8080. To find the external IP addresses of the instances, enter gcloud compute instances list. The managed instances have names that start with the same prefix, my-app, which you specified in the --base-instance-name parameter.
Create a load balancer
An individual instance is fine for testing or debugging, but for serving web traffic it's better to use a load balancer to automatically direct traffic to available instances.
Create a health check.
The load balancer uses a health check to determine which instances are capable of serving traffic.
gcloud compute http-health-checks create ah-health-check \
    --request-path /_ah/health \
    --port 8080
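The health check succeeds when an instance answers GET /_ah/health on port 8080 with HTTP 200. The Bookshelf app serves this route from its web framework; purely as an illustration of what the load balancer expects, here is a self-contained sketch of such an endpoint using only the Python standard library:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustration only: a server that returns 200 on /_ah/health, the
# path the HTTP health check created above probes.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/_ah/health":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port so the demo does not collide with anything.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

status = urllib.request.urlopen("http://127.0.0.1:%d/_ah/health" % port).getcode()
print(status)  # 200
server.shutdown()
```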
Create a named port.
The HTTP load balancer looks for the http service to know which port to direct traffic to. In your existing instance group, give port 8080 the name http:
gcloud compute instance-groups managed set-named-ports \
    $GROUP \
    --named-ports http:8080 \
    --zone $ZONE
Create a backend service.
The backend service is the target for load-balanced traffic. It defines which instance group the traffic is directed to and which health check to use.
gcloud compute backend-services create $SERVICE \
    --http-health-checks ah-health-check \
    --global
Add the backend service.
gcloud compute backend-services add-backend $SERVICE \
    --instance-group $GROUP \
    --instance-group-zone $ZONE \
    --global
Create a URL map and proxy.
The URL map defines which URLs are directed to which backend services. In this sample, all traffic is served by one backend service. If you want to load balance requests between multiple regions or groups, you can create multiple backend services. A proxy receives traffic and forwards it to backend services by using URL maps.
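Conceptually, a URL map is a table from path patterns to backend services, with a default service for anything that doesn't match. A toy Python sketch of that routing idea (the api and static service names are hypothetical; only the default matches a service this tutorial actually creates):

```python
# Toy illustration of URL-map routing: the longest matching path
# prefix wins, and unmatched paths fall through to the default
# backend service. In this tutorial all traffic uses the default.
url_map = {
    "/api/": "api-backend-service",        # hypothetical
    "/static/": "static-backend-service",  # hypothetical
}
DEFAULT_SERVICE = "frontend-web-service"

def route(path):
    matches = [prefix for prefix in url_map if path.startswith(prefix)]
    if not matches:
        return DEFAULT_SERVICE
    return url_map[max(matches, key=len)]

print(route("/api/books"))  # api-backend-service
print(route("/"))           # frontend-web-service
```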
Create the URL map.
gcloud compute url-maps create $SERVICE-map \
    --default-service $SERVICE
Create the proxy.
gcloud compute target-http-proxies create $SERVICE-proxy \
    --url-map $SERVICE-map
Create a global forwarding rule. The global forwarding rule ties a public IP address and port to a proxy.
gcloud compute forwarding-rules create $SERVICE-http-rule \
    --global \
    --target-http-proxy $SERVICE-proxy \
    --ports=80
Configure the autoscaler
The load balancer ensures that traffic is distributed across all of your healthy instances. But what happens if there is too much traffic for your instances to handle? You could manually add more instances. But a better solution is to configure a Compute Engine autoscaler to automatically create and delete instances in response to traffic demands.
Create an autoscaler.
gcloud compute instance-groups managed set-autoscaling \
    $GROUP \
    --max-num-replicas $MAX_INSTANCES \
    --target-load-balancing-utilization $TARGET_UTILIZATION \
    --zone $ZONE
The preceding command creates an autoscaler on the managed instance group that automatically scales up to 10 instances. Instances are added when the load balancer is above 50% utilization and are removed when utilization falls below 50%.
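The sizing behavior can be sketched as simple arithmetic: the autoscaler aims to keep the total load divided by the instance count at or below the target utilization, within the configured bounds. A simplified Python illustration of that rule (not the actual autoscaler algorithm, which also smooths over time):

```python
import math

# Simplified model: choose enough instances so that per-instance load
# stays at or below the target utilization, clamped to [min, max].
def desired_instances(total_load, target_utilization, min_instances, max_instances):
    needed = math.ceil(total_load / target_utilization)
    return max(min_instances, min(needed, max_instances))

# With a 0.5 target, a total load of 4.2 "instance-loads" needs 9
# instances; a load of 20 is capped at the 10-instance maximum.
print(desired_instances(4.2, 0.5, 1, 10))  # 9
print(desired_instances(20, 0.5, 1, 10))   # 10
```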
Create a firewall rule.
# Check if the firewall rule has been created in previous steps of the documentation
if gcloud compute firewall-rules list --filter="name~'default-allow-http-8080'" \
    --format="table(name)" | grep -q 'NAME'; then
  echo "Firewall rule default-allow-http-8080 already exists."
else
  gcloud compute firewall-rules create default-allow-http-8080 \
    --allow tcp:8080 \
    --source-ranges 0.0.0.0/0 \
    --target-tags http-server \
    --description "Allow port 8080 access to http-server"
fi
Check progress until at least one of your instances reports HEALTHY.
gcloud compute backend-services get-health frontend-web-service --global
View your app
Get the forwarding IP address for the load balancer.
gcloud compute forwarding-rules list --global
Your forwarding-rules IP address is in the IP_ADDRESS column.
In a browser, enter the IP address from the list.
Your load-balanced and autoscaled app is now running on Compute Engine.
Manage and monitor your deployment
To manage and monitor your deployment, use the GCP Console.
To manage and monitor your load balancing configuration (including URL maps and backend services), in the GCP Console, go to the Load balancing page.
To manage and monitor your managed instance group and autoscaling configuration, in the GCP Console, go to the Instance groups page.
Cleaning up
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:
Run the teardown script
If you ran the deploy.sh script, run the teardown.sh script to remove all resources created by the deploy.sh script. This returns your project to the state it was in before running the deploy.sh script and helps you avoid further billing.
To remove the single instance and the storage bucket created at the beginning of
the tutorial, follow the instructions in the next section.
Delete resources manually
If you followed the steps in this tutorial manually, you can manually delete the cloud resources that you created.
Delete your load balancer
In the GCP Console, go to the Load Balancing page.
Select the checkbox next to the load balancer that you want to delete, and then click Delete.
In the Delete load balancer dialog, select the associated backend service and health check resources, and then click Delete. The load balancer and its associated resources are deleted.
Delete your Compute Engine managed instance group
- In the GCP Console, go to the Instance groups page.
- Select the checkbox next to the instance group that you want to delete.
- Click the Delete button at the top of the page to delete the instance group.
Delete your single Compute Engine instance
- In the GCP Console, go to the VM instances page.
- Select the checkbox next to the instance that you want to delete.
- Click the Delete button at the top of the page to delete the instance.
Delete your Cloud Storage bucket
- In the GCP Console, go to the Cloud Storage browser.
- Select the checkbox next to the bucket that you want to delete.
- Click the Delete button at the top of the page to delete the bucket.
What's next
- Learn how to deploy the Python Bookshelf sample app on Compute Engine using Cloud Deployment Manager.
- Learn how to run the Python Bookshelf sample app on GKE.
Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.
Explore other GCP services.