Running the Ruby Bookshelf app on Compute Engine

This tutorial shows how to run the Ruby Bookshelf sample app on Compute Engine. Follow this tutorial to deploy an existing Ruby web app to Compute Engine. This tutorial is part of the Bookshelf app documentation; work through the Bookshelf app tutorial for the App Engine standard environment before you begin.

Objectives

  • Deploy the Bookshelf sample app to a single Compute Engine instance.
  • Scale the app horizontally by using a managed instance group.
  • Serve traffic by using HTTP load balancing.
  • Respond to traffic changes by using autoscaling.

Costs

This tutorial uses the following billable components of Google Cloud Platform:

  • Compute Engine
  • Cloud Storage
  • Cloud Datastore

To generate a cost estimate based on your projected usage, use the pricing calculator. New GCP users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud Platform project. Learn how to enable billing.

  4. Enable the Cloud Datastore, Cloud Storage, and Cloud Pub/Sub APIs.

    Enable the APIs

  5. Install and initialize the Cloud SDK.
  6. Install Ruby 2.3 or newer and Ruby on Rails. You also need RubyGems, which is included in Ruby.
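
To confirm that your local Ruby toolchain meets these requirements, you can run the following checks; the exact versions on your machine will vary:

# Check the installed Ruby, RubyGems, and Rails versions
ruby --version
gem --version
rails --version

# If Rails isn't installed yet, install it with RubyGems
gem install rails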

Initializing Cloud Datastore

The Bookshelf app uses Cloud Datastore to store book data. To initialize Cloud Datastore in your project for the first time, follow these steps:

  1. In the Google Cloud Platform Console, open the Datastore page.

    Open Cloud Datastore

  2. On the Settings and utilities menu, select Preferences.

  3. Under User preferences, set the Language & region for your datastore, and then click Save.

  4. When you reach the Create an Entity page, close the window.

The Bookshelf app is ready to create entities on Cloud Datastore.

Creating a Cloud Storage bucket

The following instructions detail how to create a Cloud Storage bucket. Buckets are the basic containers that hold your data in Cloud Storage.

  1. In your terminal window, create a Cloud Storage bucket, where [YOUR_BUCKET_NAME] represents the name of your bucket:

    gsutil mb gs://[YOUR_BUCKET_NAME]
  2. To view uploaded images in the Bookshelf app, set the bucket's default access control list (ACL) to public-read:

    gsutil defacl set public-read gs://[YOUR_BUCKET_NAME]
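
Optionally, you can verify that the bucket exists and that the default ACL was applied; both checks below use standard gsutil commands:

# List the bucket's metadata, including its default object ACL
gsutil ls -L -b gs://[YOUR_BUCKET_NAME]

# Or print just the default object ACL
gsutil defacl get gs://[YOUR_BUCKET_NAME]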

Cloning the sample app

The sample app is available on GitHub at GoogleCloudPlatform/getting-started-ruby.

  1. Clone the repository:

    git clone -b steps https://github.com/GoogleCloudPlatform/getting-started-ruby.git
    
  2. Go to the sample directory:

    cd getting-started-ruby/7-compute-engine
    

Configuring the app

  1. Install dependencies.

    bundle install
    
  2. Create configuration files by copying the provided examples. The configuration file paths are in .gitignore and aren't committed to version control.

    cp config/database.example.yml config/database.yml
    cp config/settings.example.yml config/settings.yml
    
  3. Open config/settings.yml for editing and replace the following values:

    • @@PROJECT_ID@@ with your project ID.
    • @@BUCKET_NAME@@ with the name of the storage bucket you created in the previous step.
    • @@CLIENT_ID@@ with your OAuth client ID
    • @@CLIENT_SECRET@@ with your OAuth client secret.
  4. Save and close settings.yml.

  5. Open config/database.yml for editing and replace @@PROJECT_ID@@ with your project ID.

  6. Save and close database.yml.

Running the app on your local computer

  1. Start a local web server.

    rails server
    
  2. In your web browser, enter http://localhost:3000.

    To stop the local web server, press Control+C.

Run the worker on your local computer

The worker uses the Resque queuing service. Resque is backed by Redis, so you need a Redis server running locally on port 6379 for the worker to connect to. See the Redis quickstart to get started.
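
For a quick check that Redis is up and listening on the default port, you can ping it with redis-cli:

# A running Redis server replies with PONG
redis-cli -p 6379 ping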

After Redis is running, in a new command window, enter the following command:

TERM_CHILD=1 QUEUE=* rake environment resque:work

Now add some books to the Bookshelf app. If you have both the app and worker instance running locally, you can watch the worker update the book information in the background.

Resque's interface is configured to be available at the following address, which you can use to check the status of your worker:

http://localhost:3000/resque

Deploying to a single instance

This section walks you through running a single instance of your app on Compute Engine.

Single-instance deployment

Push your code to a repository

You can use Cloud Source Repositories to create a Git repository in your project and upload your app's code there. Your instances can then pull the latest version of your app's code from the repository during startup. Using a Git repository is convenient because updating your app doesn't require configuring new images or instances; you can just restart an existing instance or create a new one.

If this is your first time using Git, use git config --global to set up your identity.
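
For example, a minimal Git identity setup looks like this (substitute your own name and email address):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"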

  1. In the GCP Console, create a repository:

    Create repository

  2. Push your app's code to your project's repository, where [YOUR_PROJECT_ID] is your GCP project ID and [YOUR_REPO] is the name of your repository:

    git commit -am "Updating configuration"
    git config credential.helper gcloud.sh
    git remote add cloud https://source.developers.google.com/p/[YOUR_PROJECT_ID]/r/[YOUR_REPO]
    git push cloud master
    
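If you prefer the command line, you can also create the repository with the Cloud SDK instead of the GCP Console; this assumes the Cloud Source Repositories API is enabled in your project:

gcloud source repos create [YOUR_REPO]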

Initialize an instance by using a startup script

Now that Compute Engine instances can access your code, you need a way to instruct your instance to download and run your code. An instance can have a startup script that runs whenever the instance is started or restarted.

Here are the startup scripts that are included in the Bookshelf sample app: the main startup script, followed by the gce/configure.sh script that it calls.

set -e

# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPO_NAME="[YOUR_REPO_NAME]"

# Get the source code
export HOME=/root
git config --global credential.helper gcloud.sh
# Change branch from master if not using master
git clone https://source.developers.google.com/p/$PROJECTID/r/$REPO_NAME /opt/app -b master

pushd /opt/app/7-compute-engine

pushd config

cp database.example.yml database.yml
chmod go-rwx database.yml
cp settings.example.yml settings.yml
chmod go-rwx settings.yml

# Add your GCP project ID here
sed -i -e 's/@@PROJECT_ID@@/[YOUR_PROJECT_ID]/' settings.yml
sed -i -e 's/@@PROJECT_ID@@/[YOUR_PROJECT_ID]/' database.yml

# Add your cloud storage config here
sed -i -e 's/@@BUCKET_NAME@@/[YOUR_BUCKET_NAME]/' settings.yml

# Add your OAuth config here
sed -i -e 's/@@CLIENT_ID@@/[YOUR_CLIENT_ID]/' settings.yml
sed -i -e 's/@@CLIENT_SECRET@@/[YOUR_CLIENT_SECRET]/' settings.yml
popd # config

./gce/configure.sh

popd # /opt/app
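
# What follows is the gce/configure.sh script, which the startup script
# invokes above with ./gce/configure.sh.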
set -e

curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
cat >/etc/google-fluentd/config.d/railsapp.conf << EOF
<source>
  type tail
  format none
  path /opt/app/7-compute-engine/log/*.log
  pos_file /var/tmp/fluentd.railsapp.pos
  read_from_head true
  tag railsapp
</source>
EOF
service google-fluentd restart &

# Install dependencies from apt
apt-get update
apt-get install -y git ruby ruby-dev build-essential libxml2-dev zlib1g-dev nginx libmysqlclient-dev libsqlite3-dev redis-server

gem install bundler --no-ri --no-rdoc

useradd -m railsapp
chown -R railsapp:railsapp /opt/app

mkdir /opt/gem
chown -R railsapp:railsapp /opt/gem

sudo -u railsapp -H bundle install --path /opt/gem
sudo -u railsapp -H bundle exec rake assets:precompile

systemctl enable redis-server.service
systemctl start redis-server.service

cat gce/default-nginx > /etc/nginx/sites-available/default
systemctl restart nginx.service

cat gce/railsapp.service > /lib/systemd/system/railsapp.service
systemctl enable railsapp.service
systemctl start railsapp.service

cat gce/resqworker.service > /lib/systemd/system/resqworker.service
systemctl enable resqworker.service
systemctl start resqworker.service

This startup script performs these tasks:

  • Clones the app's source code from Cloud Source Repositories and sets up your configuration files with your secrets.

  • Installs the Logging agent and configures it to monitor the app's logs. This means that the logs configured in the previous tutorial are uploaded into the logging section of the GCP Console just as if you were using App Engine.

  • Installs and configures Ruby, Rails, and NGINX.

Customize the startup script

The sample app provides a startup script template that you can modify to contain your secrets. Note that secrets shouldn't be checked in to source control.

  1. Copy the startup script template to one that you modify and don't check in:

    cp gce/startup-script.sh gce/my-startup.sh
    
  2. Edit the my-startup.sh file.

    Add your secrets for your database and Cloud Storage. These secrets are the same ones that you previously added to database.yml and settings.yml. Replace the [YOUR_*] placeholders with your values.

    Here's an example:

    # Add your GCP project ID here
    sed -i -e 's/@@PROJECT_ID@@/your-project-id/' settings.yml
    sed -i -e 's/@@PROJECT_ID@@/your-project-id/' database.yml
    
    # Add your Cloud Storage config here
    sed -i -e 's/@@BUCKET_NAME@@/your-bucket-name/' settings.yml
    
    # Add your OAuth config here
    sed -i -e 's/@@CLIENT_ID@@/1234/' settings.yml
    sed -i -e 's/@@CLIENT_SECRET@@/1234/' settings.yml
    

Create and configure a Compute Engine instance

  1. Create a Compute Engine instance. This command creates an instance, allows it to access GCP services, and runs your startup script. The instance name is my-app-instance.

    Linux/macOS

    gcloud compute instances create my-app-instance \
        --machine-type=g1-small \
        --scopes logging-write,storage-rw,datastore,https://www.googleapis.com/auth/projecthosting \
        --metadata-from-file startup-script=gce/my-startup.sh \
        --zone us-central1-f \
        --tags http-server \
        --image-family ubuntu-1604-lts \
        --image-project ubuntu-os-cloud
    

    Windows

    gcloud compute instances create my-app-instance ^
        --machine-type=g1-small ^
        --scopes logging-write,storage-rw,datastore,https://www.googleapis.com/auth/projecthosting ^
        --metadata-from-file startup-script=gce/my-startup.sh ^
        --zone us-central1-f ^
        --tags http-server ^
        --image-family ubuntu-1604-lts ^
        --image-project ubuntu-os-cloud
    

  2. Check the progress of the instance creation.

    gcloud compute instances get-serial-port-output my-app-instance --zone us-central1-f
    

    When the startup script completes, the output displays Finished running startup script.

  3. Create a firewall rule to allow traffic to your instance.

    Linux/macOS

    gcloud compute firewall-rules create default-allow-http-80 \
        --allow tcp:80 \
        --source-ranges 0.0.0.0/0 \
        --target-tags http-server \
        --description "Allow port 80 access to http-server"
    

    Windows

    gcloud compute firewall-rules create default-allow-http-80 ^
        --allow tcp:80 ^
        --source-ranges 0.0.0.0/0 ^
        --target-tags http-server ^
        --description "Allow port 80 access to http-server"
    

  4. Get the external IP address of your instance.

    gcloud compute instances list
    
  5. To see the app running, go to http://[YOUR_INSTANCE_IP], where [YOUR_INSTANCE_IP] is the external IP address of your instance.
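
You can also confirm from your terminal that the instance is serving HTTP traffic; for example:

# A successful HTTP response (such as 200 OK) means the app is up
curl -I http://[YOUR_INSTANCE_IP]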

Manage and monitor your instance

To manage and monitor your instance, use the GCP Console.

  • To view the running instance and connect to it by using ssh, in the GCP Console, go to the VM instances page.

    Go to the VM instances page

  • To view the logs generated by your Compute Engine resources, in the GCP Console, go to the Logs page.

    Go to the Logs page

    Logging is automatically configured to gather logs from various common services, including syslog.
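
Both tasks are also possible from the command line; for example, using the instance name and zone from earlier in this tutorial:

# Connect to the instance over SSH
gcloud compute ssh my-app-instance --zone us-central1-f

# Read recent Compute Engine instance logs from Cloud Logging
gcloud logging read "resource.type=gce_instance" --limit 10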

Horizontal scaling with multiple instances

Multiple-instance deployment with managed instances

Compute Engine can scale horizontally. By using a managed instance group and the Compute Engine autoscaler, Compute Engine can automatically create instances of your app when needed and shut down instances when demand is low. You can set up an HTTP load balancer to distribute traffic to the instances in a managed instance group.

Deployment script

The sample app includes a script, deploy.sh, that automates the following deployment steps. It creates the resources for a complete, autoscaled, load-balanced app as described in Horizontal scaling with multiple instances.

You can run each of the following steps yourself, or run the gce/deploy.sh script from the sample directory.

Create a managed instance group

A managed instance group is a group of homogeneous instances based on the same instance template. An instance template defines the configuration of your instance, including source image, disk size, scopes, and metadata (including startup scripts).

  1. Create a template.

    gcloud compute instance-templates create my-app-tmpl \
        --machine-type=g1-small \
        --scopes logging-write,storage-rw,datastore,https://www.googleapis.com/auth/projecthosting \
        --metadata-from-file startup-script=gce/my-startup.sh \
        --image-family ubuntu-1604-lts \
        --image-project ubuntu-os-cloud \
        --tags http-server
    
  2. Create an instance group.

    gcloud compute instance-groups managed create my-app-group \
        --base-instance-name my-app \
        --size 2 \
        --template my-app-tmpl \
        --zone us-central1-f
    

    The --size parameter specifies the number of instances in the group. When all of the instances finish running their startup scripts, you can access the instances individually by using their external IP addresses and port 8080. To find the external IP addresses of the instances, enter gcloud compute instances list. The managed instances have names that start with the same prefix, my-app, which you specified in the --base-instance-name parameter.
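
To watch the group come up, you can list the instances that the managed instance group created, along with their current status:

gcloud compute instance-groups managed list-instances my-app-group \
    --zone us-central1-f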

Create a load balancer

An individual instance is fine for testing or debugging, but for serving web traffic it's better to use a load balancer to automatically direct traffic to available instances.

  1. Create a health check.

    The load balancer uses a health check to determine which instances are capable of serving traffic.

    gcloud compute http-health-checks create bookshelf-health-check \
        --request-path / \
        --port 8080
    
  2. Create a named port.

    The HTTP load balancer looks for the http service to know which port to direct traffic to. In your existing instance group, give port 8080 the name http.

    gcloud compute instance-groups managed set-named-ports my-app-group \
        --named-ports http:8080 \
        --zone us-central1-f
    
  3. Create a backend service.

    The backend service is the target for load-balanced traffic. It defines which instance group the traffic is directed to and which health check to use.

    gcloud compute backend-services create my-app-service \
        --http-health-check bookshelf-health-check
    
  4. Add your instance group to the backend service.

    gcloud compute backend-services add-backend my-app-service \
        --group my-app-group \
        --zone us-central1-f
    
  5. Create a URL map and proxy.

    The URL map defines which URLs are directed to which backend services. In this sample, all traffic is served by one backend service. If you want to load balance requests between multiple regions or groups, you can create multiple backend services. A proxy receives traffic and forwards it to backend services by using URL maps.

    1. Create the URL map.

      gcloud compute url-maps create my-app-service-map \
          --default-service my-app-service
      
    2. Create the proxy.

      gcloud compute target-http-proxies create my-app-service-proxy \
          --url-map my-app-service-map
      
  6. Create a global forwarding rule. The global forwarding rule ties a public IP address and port to a proxy.

    gcloud compute forwarding-rules create my-app-service-http-rule \
        --global \
        --target-http-proxy my-app-service-proxy \
        --port-range 80
    

Configure the autoscaler

The load balancer ensures that traffic is distributed across all of your healthy instances. But what happens if there is too much traffic for your instances to handle? You could manually add more instances. But a better solution is to configure a Compute Engine autoscaler to automatically create and delete instances in response to traffic demands.

  1. Create an autoscaler.

    gcloud compute instance-groups managed set-autoscaling my-app-group \
        --max-num-replicas 10 \
        --target-load-balancing-utilization 0.5 \
        --zone us-central1-f
    

    The preceding command creates an autoscaler on the managed instance group that automatically scales up to 10 instances. Instances are added when the load balancer is above 50% utilization and are removed when utilization falls below 50%.

  2. Check progress until at least one of your instances reports HEALTHY.

    gcloud compute backend-services get-health my-app-service --global
    
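You can also confirm that the autoscaler was created by listing the autoscalers in your project:

# The output includes the autoscaler attached to my-app-group and its
# target load-balancing utilization
gcloud compute autoscalers list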

View your app

  1. Get the forwarding IP address for the load balancer.

    gcloud compute forwarding-rules list --global
    

    Your forwarding-rules IP address is in the IP_ADDRESS column.

  2. In a browser, enter the IP address from the list.

    Your load-balanced and autoscaled app is now running on Compute Engine.

Manage and monitor your deployment

To manage and monitor your deployment, use the GCP Console.

  • To manage and monitor your load balancing configuration (including URL maps and backend services), in the GCP Console, go to the Load balancing page.

    Go to the Load balancing page

  • To manage and monitor your managed instance group and autoscaling configuration, in the GCP Console, go to the Instance groups page.

    Go to the Instance groups page

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Run the teardown script

If you ran the deploy.sh script, run the teardown.sh script to remove all resources created by the deploy.sh script. This returns your project to its state before running the deploy.sh script and helps to avoid further billing. To remove the single instance and the storage bucket created at the beginning of the tutorial, follow the instructions in the next section.
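
Alternatively, if you prefer the command line, you can delete the single instance and the bucket with the following commands, using the instance name, zone, and bucket name from earlier in this tutorial:

# Delete the single Compute Engine instance
gcloud compute instances delete my-app-instance --zone us-central1-f

# Delete the Cloud Storage bucket and all of its contents
gsutil rm -r gs://[YOUR_BUCKET_NAME]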

Delete resources manually

If you followed the steps in this tutorial manually, you can manually delete the cloud resources that you created.

Delete your load balancer

  1. In the GCP Console, go to the Load Balancing page.

    Go to the Load Balancing page

  2. Select the checkbox next to the load balancer that you want to delete, and then click Delete.

  3. In the Delete load balancer dialog, select the associated backend service and health check resources, and then click Delete. The load balancer and its associated resources are deleted.

Delete your Compute Engine managed instance group

  1. In the GCP Console, go to the Instance groups page.

    Go to the Instance groups page

  2. Click the checkbox for the instance group you want to delete.
  3. Click Delete to delete the instance group.

Delete your single Compute Engine instance

  1. In the GCP Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click the checkbox for the instance you want to delete.
  3. Click Delete to delete the instance.

Delete your Cloud Storage bucket

  1. In the GCP Console, go to the Cloud Storage Browser page.

    Go to the Cloud Storage Browser page

  2. Click the checkbox for the bucket you want to delete.
  3. Click Delete to delete the bucket.

What's next
