This tutorial shows how to run the Ruby Bookshelf app on Compute Engine by deploying an existing Ruby web app to Compute Engine instances. Before you start, you should work through the Bookshelf app documentation for the App Engine standard environment. In this tutorial, you do the following:
- Deploy the Bookshelf sample app to a single Compute Engine instance.
- Scale the app horizontally by using a managed instance group.
- Serve traffic by using HTTP load balancing.
- Respond to traffic changes by using autoscaling.
This tutorial uses billable components of Google Cloud Platform (GCP), including:
- Compute Engine
- Cloud Storage
- Cloud Datastore
- Stackdriver Logging
- Cloud Pub/Sub
Before you begin
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- Select or create a Google Cloud Platform project.
- Make sure that billing is enabled for your Google Cloud Platform project.
- Enable the Cloud Datastore, Cloud Storage, and Cloud Pub/Sub APIs.
- Install and initialize the Cloud SDK.
- Install Ruby 2.3 or newer and Ruby on Rails. You also need RubyGems, which is included with Ruby.
Initializing Cloud Datastore
The Bookshelf app uses Cloud Datastore to store the books. To initialize Cloud Datastore in your project for the first time:
Open Cloud Datastore on the GCP Console.
Select a region for your datastore and click Continue. When you reach the Create an Entity page, close the window. The Bookshelf app is ready to create entities on Cloud Datastore.
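Conceptually, each book the app saves becomes a Datastore entity: a kind, a key, and a set of properties. The following plain-Ruby sketch shows that shape; the property names and key ID are illustrative, not the app's exact schema.

```ruby
# Conceptual shape of a Datastore entity: a kind, a key, and properties.
# The property names and key ID below are illustrative placeholders.
book = {
  kind: "Book",
  key_id: 5_629_499_534_213_120,   # Datastore assigns a numeric ID
  properties: {
    "title"  => "A Tale of Two Cities",
    "author" => "Charles Dickens",
  },
}

puts "#{book[:properties]['title']} by #{book[:properties]['author']}"
```

In the app itself, entities of kind Book are created and queried through the Cloud Datastore API rather than built by hand like this.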
Creating a Cloud Storage bucket
The following instructions show how to create a Cloud Storage bucket. Buckets are the basic containers that hold your data in Cloud Storage.
In a terminal window, enter the following command:
gsutil mb gs://[YOUR-BUCKET-NAME]
where [YOUR-BUCKET-NAME] represents the name of your Cloud Storage bucket.
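Bucket names are globally unique and must follow the Cloud Storage naming rules: 3 to 63 characters of lowercase letters, digits, dashes, underscores, and dots, starting and ending with a letter or digit. A rough client-side sanity check, sketched in Ruby; the service enforces the full rules, including uniqueness:

```ruby
# A simplified subset of the Cloud Storage bucket naming rules; the
# service enforces the full rules (including global uniqueness).
def plausible_bucket_name?(name)
  !!(name =~ /\A[a-z0-9][a-z0-9\-_.]{1,61}[a-z0-9]\z/)
end

puts plausible_bucket_name?("my-bookshelf-bucket")  # true
puts plausible_bucket_name?("My_Bucket!")           # false
```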
To view uploaded images in the bookshelf app, set the bucket's default access control list (ACL) to public-read:
gsutil defacl set public-read gs://[YOUR-BUCKET-NAME]
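With a public-read default ACL, each uploaded object can be fetched anonymously from storage.googleapis.com. A small Ruby sketch of how such a URL is formed (the bucket and object names are placeholders):

```ruby
require "erb"

# Objects in a public-read bucket are served anonymously from
# storage.googleapis.com/[BUCKET]/[OBJECT]. Names here are placeholders.
def public_object_url(bucket, object)
  "https://storage.googleapis.com/#{bucket}/#{ERB::Util.url_encode(object)}"
end

puts public_object_url("my-bookshelf-bucket", "cover 1.png")
# => https://storage.googleapis.com/my-bookshelf-bucket/cover%201.png
```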
Cloning the sample app
The sample app is available on GitHub at GoogleCloudPlatform/getting-started-ruby.
Clone the repository.
git clone https://github.com/GoogleCloudPlatform/getting-started-ruby.git
Go to the sample directory.
Configuring the app
Create configuration files by copying the provided examples. The configuration file paths are listed in .gitignore and aren't committed to version control.
cp config/database.example.yml config/database.yml
cp config/settings.example.yml config/settings.yml
Open config/settings.yml for editing and replace the following values:
- @@PROJECT_ID@@ with your project ID.
- @@BUCKET_NAME@@ with the name of the storage bucket you created in the previous step.
- @@CLIENT_ID@@ with your OAuth client ID.
- @@CLIENT_SECRET@@ with your OAuth client secret.
Save and close config/settings.yml.
Open config/database.yml for editing and replace @@PROJECT_ID@@ with your project ID.
Save and close config/database.yml.
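Once the placeholders are replaced, the app can read these files with Ruby's standard YAML library. A minimal sketch, with illustrative key names rather than the app's exact schema:

```ruby
require "yaml"

# An illustrative settings.yml after the placeholders have been replaced.
# The key names are assumptions for demonstration.
settings = YAML.safe_load(<<~YAML)
  project_id: your-project-id
  bucket_name: your-bucket-name
  oauth2:
    client_id: your-client-id
    client_secret: your-client-secret
YAML

puts settings["project_id"]               # value that replaced @@PROJECT_ID@@
puts settings["oauth2"]["client_secret"]  # value that replaced @@CLIENT_SECRET@@
```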
Running the app on your local computer
Start a local web server.
In your web browser, go to the address that the local web server reports. By default, Rails serves on http://localhost:3000.
To stop the local web server, press Control+C.
Run the worker on your local computer
The worker uses Resque, which requires a local Redis server. After Redis is running, in a new command window, enter the following command:
TERM_CHILD=1 QUEUE=* rake environment resque:work
Now add some books to the Bookshelf app. If you have both the app and worker instance running locally, you can watch the worker update the book information in the background.
Resque is configured to serve a web interface from the app, which you can use to check the status of your worker.
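The background work is handled by Resque, where a job is a plain class with a queue name and a self.perform method. A minimal sketch follows; the class name, queue, and behavior are illustrative, not the Bookshelf app's actual job:

```ruby
# A minimal Resque-style job. Resque jobs are plain Ruby classes with a
# @queue variable and a self.perform class method; the worker started
# with QUEUE=* picks up jobs from every queue.
class UpdateBookInfoJob
  @queue = :books

  def self.perform(book_id)
    # The real app would look up book details and update the record here.
    "updated book #{book_id}"
  end
end

# Because jobs are plain classes, you can invoke them directly:
puts UpdateBookInfoJob.perform(42)  # => updated book 42
```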
Deploying to a single instance
This section walks you through running a single instance of your app on Compute Engine.
Push your code to a repository
You can use Cloud Source Repositories to create a Git repository in your project and upload your app code there. Your instances can then pull the latest version of your app code from the repository during startup. This is convenient because updating your app doesn't require configuring new images or instances; all you need to do is restart an existing instance or create a new one.
If this is your first time using Git, use git config --global to set the user.name and user.email that Git records with your commits.
In your Google Cloud Platform Console, create a repository, or use gcloud to create one:
gcloud source repos create [YOUR_REPO]
where:
- [YOUR_PROJECT_ID] is your project ID.
- [YOUR_REPO] is the name of the repository you just created.
Push your app's code to your project's repository.
git commit -am "Updating configuration"
git config credential.helper gcloud.sh
git remote add cloud https://source.developers.google.com/p/[YOUR_PROJECT_ID]/r/[YOUR_REPO]
git push cloud master
Use a startup script to initialize an instance
Now that your code is accessible by Compute Engine instances, you need a way to instruct your instance to download and run your code. An instance can have a startup script that is executed whenever the instance is started or restarted.
Here are the two scripts included in the Bookshelf sample app: the startup script, gce/startup-script.sh, and the gce/configure.sh script that it invokes:
set -e

# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPO_NAME="[YOUR_REPO_NAME]"

# Get the source code
export HOME=/root
git config --global credential.helper gcloud.sh
# Change branch from master if not using master
git clone https://source.developers.google.com/p/$PROJECTID/r/$REPO_NAME /opt/app -b master

pushd /opt/app/7-compute-engine
pushd config
cp database.example.yml database.yml
chmod go-rwx database.yml
cp settings.example.yml settings.yml
chmod go-rwx settings.yml

# Add your GCP project ID here
sed -i -e 's/@@PROJECT_ID@@/[YOUR_PROJECT_ID]/' settings.yml
sed -i -e 's/@@PROJECT_ID@@/[YOUR_PROJECT_ID]/' database.yml

# Add your cloud storage config here
sed -i -e 's/@@BUCKET_NAME@@/[YOUR_BUCKET_NAME]/' settings.yml

# Add your OAuth config here
sed -i -e 's/@@CLIENT_ID@@/[YOUR_CLIENT_ID]/' settings.yml
sed -i -e 's/@@CLIENT_SECRET@@/[YOUR_CLIENT_SECRET]/' settings.yml
popd # config

./gce/configure.sh
popd # /opt/app
set -e

curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
cat >/etc/google-fluentd/config.d/railsapp.conf << EOF
<source>
  type tail
  format none
  path /opt/app/7-compute-engine/log/*.log
  pos_file /var/tmp/fluentd.railsapp.pos
  read_from_head true
  tag railsapp
</source>
EOF
service google-fluentd restart &

# Install dependencies from apt
apt-get update
apt-get install -y git ruby ruby-dev build-essential libxml2-dev zlib1g-dev nginx libmysqlclient-dev libsqlite3-dev redis-server
gem install bundler --no-ri --no-rdoc

useradd -m railsapp
chown -R railsapp:railsapp /opt/app
mkdir /opt/gem
chown -R railsapp:railsapp /opt/gem
sudo -u railsapp -H bundle install --path /opt/gem
sudo -u railsapp -H bundle exec rake assets:precompile

systemctl enable redis-server.service
systemctl start redis-server.service

cat gce/default-nginx > /etc/nginx/sites-available/default
systemctl restart nginx.service

cat gce/railsapp.service > /lib/systemd/system/railsapp.service
systemctl enable railsapp.service
systemctl start railsapp.service

cat gce/resqworker.service > /lib/systemd/system/resqworker.service
systemctl enable resqworker.service
systemctl start resqworker.service
This startup script performs these tasks:
Clones the app's source code from Cloud Source Repositories and sets up your configuration files with your secrets.
Installs the Stackdriver Logging agent and configures it to monitor the app's logs. This means that the logs configured in the previous tutorial are uploaded into the logging section of the Google Cloud Platform Console just as if you were using App Engine.
Installs and configures Ruby, Rails, and NGINX.
Customizing the startup script
The sample app provides a startup script template that you can modify to contain your secrets. Note that secrets shouldn't be checked in to source control.
Copy the startup script template to one that you modify and don't check in:
cp gce/startup-script.sh gce/my-startup.sh
Open gce/my-startup.sh for editing. Add your secrets for your database and Cloud Storage: the same ones that you previously added to config/settings.yml and config/database.yml. Replace the [YOUR_*] placeholders with your values.
Here's an example:
# Add your GCP project ID here
sed -i -e 's/@@PROJECT_ID@@/your-project-id/' settings.yml
sed -i -e 's/@@PROJECT_ID@@/your-project-id/' database.yml

# Add your cloud storage config here
sed -i -e 's/@@BUCKET_NAME@@/your-bucket-name/' settings.yml

# Add your OAuth config here
sed -i -e 's/@@CLIENT_ID@@/1234/' settings.yml
sed -i -e 's/@@CLIENT_SECRET@@/1234/' settings.yml
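The sed substitutions above can also be read as a simple token replacement. This Ruby sketch shows the same transformation that the startup script applies to the config files (the values are illustrative):

```ruby
# Replace each @@PLACEHOLDER@@ token in the config text with a concrete
# value, mirroring the sed commands in the startup script.
template = <<~YAML
  project_id: @@PROJECT_ID@@
  bucket_name: @@BUCKET_NAME@@
YAML

values = {
  "@@PROJECT_ID@@"  => "your-project-id",
  "@@BUCKET_NAME@@" => "your-bucket-name",
}

# Unknown tokens are left untouched, like a sed pattern that never matches.
configured = template.gsub(/@@\w+@@/) { |token| values.fetch(token, token) }
puts configured
```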
Create and configure a Compute Engine instance
Create a Compute Engine instance. The following command creates a new instance named my-app-instance, allows it to access GCP services, and runs your startup script.
For Linux or macOS:

gcloud compute instances create my-app-instance \
    --machine-type=g1-small \
    --scopes logging-write,storage-rw,datastore,https://www.googleapis.com/auth/projecthosting \
    --metadata-from-file startup-script=gce/my-startup.sh \
    --zone us-central1-f \
    --tags http-server \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud

For Windows (cmd.exe):

gcloud compute instances create my-app-instance ^
    --machine-type=g1-small ^
    --scopes logging-write,storage-rw,datastore,https://www.googleapis.com/auth/projecthosting ^
    --metadata-from-file startup-script=gce/my-startup.sh ^
    --zone us-central1-f ^
    --tags http-server ^
    --image-family ubuntu-1604-lts ^
    --image-project ubuntu-os-cloud
Check the progress of the instance creation.
gcloud compute instances get-serial-port-output my-app-instance --zone us-central1-f
If the startup script has completed, Finished running startup script is displayed near the end of the command output.
Create a firewall rule to allow traffic to your instance.
For Linux or macOS:

gcloud compute firewall-rules create default-allow-http-80 \
    --allow tcp:80 \
    --source-ranges 0.0.0.0/0 \
    --target-tags http-server \
    --description "Allow port 80 access to http-server"

For Windows (cmd.exe):

gcloud compute firewall-rules create default-allow-http-80 ^
    --allow tcp:80 ^
    --source-ranges 0.0.0.0/0 ^
    --target-tags http-server ^
    --description "Allow port 80 access to http-server"
Get the external IP address of your instance.
gcloud compute instances list
To see the app running, go to http://[YOUR_INSTANCE_IP], where [YOUR_INSTANCE_IP] is the external IP address of your instance.
Manage and monitor an instance
You can use the Google Cloud Platform Console to monitor and manage your instance.
To view the running instance and connect to it by using ssh, go to Compute > Compute Engine.
To view all of the logs generated by your Compute Engine resources, go to Monitoring > Logs. Stackdriver Logging is automatically configured to gather logs from various common services, including syslog.
Horizontal scaling with multiple instances
Compute Engine can easily scale horizontally. By using a managed instance group and the Compute Engine Autoscaler, Compute Engine can automatically create new instances of your app when needed and shut down instances when demand is low. You can set up an HTTP load balancer to distribute traffic to the instances in a managed instance group.
The sample app includes a script named deploy.sh that automates the following deployment steps. The script deploys the resources for a complete, autoscaled, load-balanced app, as described in this section. You can run each of the following steps yourself, or run deploy.sh to perform them for you.
Create a managed instance group
A managed instance group is a group of homogeneous instances based on the same instance template. An instance template defines the configuration of your instance, including source image, disk size, scopes, and metadata, including startup scripts.
First, create a template.
gcloud compute instance-templates create my-app-tmpl \
    --machine-type=g1-small \
    --scopes logging-write,storage-rw,datastore,https://www.googleapis.com/auth/projecthosting \
    --metadata-from-file startup-script=gce/my-startup.sh \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --tags http-server
Create an instance group.
gcloud compute instance-groups managed create my-app-group \
    --base-instance-name my-app \
    --size 2 \
    --template my-app-tmpl \
    --zone us-central1-f
The --size parameter specifies the number of instances in the group. After all of the instances have finished running their startup scripts, the instances can be accessed individually by using their external IP addresses and port 8080. To find the external IP addresses of the instances, enter gcloud compute instances list. The managed instances have names that start with the same prefix, my-app, which you specified in the --base-instance-name parameter.
Create a load balancer
An individual instance is fine for testing or debugging, but for serving web traffic it's better to use a load balancer to automatically direct traffic to available instances. To create a load balancer, follow these steps.
Create a health check. The load balancer uses a health check to determine which instances are capable of serving traffic.
gcloud compute http-health-checks create bookshelf-health-check \
    --request-path / \
    --port 8080
Create a named port. The HTTP load balancer looks for the service named http to know which port to direct traffic to. In your existing instance group, give port 8080 the name http.
gcloud compute instance-groups managed set-named-ports my-app-group \
    --named-ports http:8080 \
    --zone us-central1-f
Create a backend service. The backend service is the target for load-balanced traffic. It defines which instance group the traffic should be directed to and which health check to use.
gcloud compute backend-services create my-app-service \
    --http-health-check bookshelf-health-check
Add your instance group to the backend service.
gcloud compute backend-services add-backend my-app-service \
    --group my-app-group \
    --zone us-central1-f
Create a URL map and proxy. The URL map defines which URLs should be directed to which backend services. In this sample, all traffic is served by one backend service. If you want to load balance requests between multiple regions or groups, you can create multiple backend services. A proxy receives traffic and forwards it to backend services using URL maps.
Create the URL map.
gcloud compute url-maps create my-app-service-map \
    --default-service my-app-service
Create the proxy.
gcloud compute target-http-proxies create my-app-service-proxy \
    --url-map my-app-service-map
Create a global forwarding rule. The global forwarding rule ties a public IP address and port to a proxy.
gcloud compute forwarding-rules create my-app-service-http-rule \
    --global \
    --target-http-proxy my-app-service-proxy \
    --port-range 80
Configure the autoscaler
The load balancer ensures that traffic is distributed across all of your healthy instances. But what happens if there is too much traffic for your instances to handle? You could manually add more instances. But a better solution is to configure a Compute Engine autoscaler to automatically create and delete instances in response to traffic demands.
Create an autoscaler.
gcloud compute instance-groups managed set-autoscaling my-app-group \
    --max-num-replicas 10 \
    --target-load-balancing-utilization 0.5 \
    --zone us-central1-f
The preceding command creates an autoscaler on the managed instance group that automatically scales up to 10 instances. New instances are added when the load balancer is above 50% utilization and are removed when utilization falls below 50%.
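The scaling policy above can be approximated as: grow or shrink the group so that per-instance utilization moves toward the 0.5 target, never exceeding the replica cap. This Ruby sketch is a conceptual model only; the real autoscaler's algorithm is more sophisticated:

```ruby
# Conceptual model of target-utilization autoscaling: size the group so
# per-instance load approaches the target, capped at max replicas.
def desired_replicas(current, utilization, target: 0.5, max: 10, min: 1)
  (current * utilization / target).ceil.clamp(min, max)
end

puts desired_replicas(2, 0.9)  # over target: grow to 4
puts desired_replicas(4, 0.2)  # under target: shrink to 2
puts desired_replicas(8, 1.0)  # would need 16, capped at the max of 10
```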
Check progress until at least one of your instances reports HEALTHY.
gcloud compute backend-services get-health my-app-service --global
View your app
Get the forwarding IP address for the load balancer.
gcloud compute forwarding-rules list --global
Your forwarding-rules IP address is in the IP_ADDRESS column of the output.
In a browser, enter the IP address from the list. Your load-balanced and autoscaled app is now running on Compute Engine!
Manage and monitor your deployment
Managing multiple instances is as easy as managing a single instance. You can use the GCP Console to monitor load balancing, autoscaling, and your managed instance group.
You can manage your instance group and autoscaling configuration by using the Compute Engine > Instance groups section.
You can manage your load balancing configuration, including URL maps and backend services, by using the Network services > Load balancing section.
Cleaning up
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:
Run the teardown script
If you ran the deploy.sh script, run the teardown.sh script to remove all resources created by the deploy.sh script. This returns your project to the state it was in before you ran the deploy.sh script and helps to avoid further billing. To remove the single instance and the storage bucket created at the beginning of the tutorial, follow the instructions in the next section.
Delete resources manually
If you followed the steps in this tutorial manually, you can manually delete the cloud resources you created.
Delete your load balancer
In the GCP Console, go to the Load Balancing page.
Click the checkbox next to the load balancer you want to delete.
Click the Delete button at the top of the page to delete the load balancer.
In the Delete load balancer dialog, select the associated backend service and health check resources.
Click the Delete Load Balancer button to delete the load balancer and its associated resources.
Delete your Compute Engine managed instance group
To delete a Compute Engine instance group:
- In the GCP Console, go to the Instance groups page.
- Click the checkbox next to the instance group you want to delete.
- Click Delete at the top of the page to delete the instance group.
Delete your single Compute Engine instance
To delete a Compute Engine instance:
- In the GCP Console, go to the VM Instances page.
- Click the checkbox next to the instance you want to delete.
- Click Delete at the top of the page to delete the instance.
Delete your Cloud Storage bucket
To delete a Cloud Storage bucket:
- In the GCP Console, go to the Cloud Storage Browser page.
- Click the checkbox next to the bucket you want to delete.
- Click Delete at the top of the page to delete the bucket.
Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.
Explore other GCP services.