Running the Node.js Bookshelf on Compute Engine

This tutorial shows how to run the Node.js Bookshelf app on Google Compute Engine. Follow this tutorial to deploy an existing Node.js web app to Compute Engine. You don't have to be familiar with the Bookshelf app to follow this tutorial, but if you would like to learn about the Bookshelf app, see the tutorial for the App Engine flexible environment.

Objectives

  • Deploy the Bookshelf sample app to a single Compute Engine instance.
  • Scale the app horizontally by using a managed instance group.
  • Serve traffic by using HTTP load balancing.
  • Respond to traffic changes by using autoscaling.

Costs

This tutorial uses billable components of Cloud Platform, including:

  • Google Compute Engine
  • Google Cloud Storage
  • Google Cloud Datastore
  • Google Cloud Logging
  • Google Cloud Pub/Sub

Use the Pricing Calculator to generate a cost estimate based on your projected usage. New Cloud Platform users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google account.

    If you don't already have one, sign up for a new account.

  2. Select or create a Cloud Platform project.

    Go to the Manage resources page

  3. Enable billing for your project.

    Enable billing

  4. Enable the Cloud Datastore, Cloud Storage, and Cloud Pub/Sub APIs.

    Enable the APIs

  5. Install and initialize the Cloud SDK.
  6. Install Node.js and npm using the official installer.
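
Before continuing, you can confirm that the SDK points at your project and that Node.js and npm are on your path (a quick, optional check; replace [YOUR_PROJECT_ID] with your project ID):

# Make sure gcloud uses the project you selected above.
gcloud config set project [YOUR_PROJECT_ID]
gcloud config list project

# Verify the Node.js and npm installations.
node --version
npm --version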

Creating a Cloud Storage bucket

The following instructions show how to create a Cloud Storage bucket.

To create a bucket:

  1. Invoke the following command in a terminal window:

    gsutil mb gs://[YOUR-BUCKET-NAME]

  2. Set the bucket's default ACL to public-read, which enables users to see their uploaded images:

    gsutil defacl set public-read gs://[YOUR-BUCKET-NAME]
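
You can optionally verify the bucket and its default ACL with gsutil:

# Confirm the bucket exists.
gsutil ls -b gs://[YOUR-BUCKET-NAME]

# The default object ACL should now include public read access.
gsutil defacl get gs://[YOUR-BUCKET-NAME]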

Creating a web application client ID

A web application client ID allows your application to authorize users and access Google APIs on behalf of your users.

  1. Go to the credentials section in the Google Cloud Platform Console.

  2. Click OAuth consent screen. For the product name, enter Node.js Bookshelf App. Fill in any relevant optional fields. Click Save.

  3. Click Create credentials > OAuth client ID.

  4. Under Application type, select Web Application.

  5. Under Name, enter Node.js Bookshelf Client.

  6. Under Authorized redirect URIs, enter the following URLs, one at a time. Replace [YOUR_PROJECT_ID] with your project ID:

    http://localhost:8080/auth/google/callback
    http://[YOUR_PROJECT_ID].appspot.com/auth/google/callback
    https://[YOUR_PROJECT_ID].appspot.com/auth/google/callback
    http://[YOUR_PROJECT_ID].appspot-preview.com/auth/google/callback
    https://[YOUR_PROJECT_ID].appspot-preview.com/auth/google/callback

  7. Click Create.

  8. Copy the client ID and client secret and save them for later use.

Cloning the sample app

The sample application is available on GitHub at GoogleCloudPlatform/nodejs-getting-started.

  1. Clone the repository:

    git clone https://github.com/GoogleCloudPlatform/nodejs-getting-started.git
    
  2. Go to the sample directory:

    cd nodejs-getting-started/7-gce
    

Configuring the app

In the sample directory, create a config.json file with this content:

{
  "GCLOUD_PROJECT": "[YOUR_PROJECT_ID]",
  "CLOUD_BUCKET": "[YOUR_CLOUD_BUCKET]",
  "DATA_BACKEND": "datastore"
  "OAUTH2_CLIENT_ID": "[YOUR_OAUTH2_CLIENT_ID]",
  "OAUTH2_CLIENT_SECRET": "[YOUR_OAUTH2_CLIENT_SECRET]",
  "OAUTH2_CALLBACK": "http://localhost:8080/auth/google/callback"
}

where:

  • [YOUR_PROJECT_ID] is your project ID.
  • [YOUR_CLOUD_BUCKET] is the name of your Cloud Storage Bucket.
  • [YOUR_OAUTH2_CLIENT_ID] is the application client ID you created previously.
  • [YOUR_OAUTH2_CLIENT_SECRET] is the client secret you created previously.
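
Because a missing comma or quote prevents the app from reading the file, you can quickly confirm that it parses as valid JSON (an optional check, run from the sample directory):

node -e "JSON.parse(require('fs').readFileSync('config.json', 'utf8')); console.log('config.json parses OK')"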

Running the app on your local computer

  1. Install dependencies:

    npm install
    
  2. Run the app:

    npm start
    
  3. In your web browser, enter this address:

    http://localhost:8080

To stop the local web server, press Control+C.
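
While the server is running, you can also check it from a second terminal (optional):

# Prints the HTTP response headers; any response confirms the server is listening.
curl -I http://localhost:8080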

Deploying to a single instance

Figure: Single-instance deployment

This section walks you through running a single instance of your application on Compute Engine.

Pushing your code to a repository

There are several ways to get your code onto a running Compute Engine instance. One way is to use Cloud Source Repositories. Every project includes a Git repository that can easily be made available to Compute Engine instances. Your instances can then pull the latest version of your application code during startup. This is convenient because updating your application does not require configuring new images or instances; all you need to do is restart an existing instance or create a new one.

Push your application code to your project's repository:

git commit -am "Updating configuration"
git config credential.helper gcloud.sh
git remote add cloud https://source.developers.google.com/p/[YOUR_PROJECT_ID]/
git push cloud

where [YOUR_PROJECT_ID] is your project ID.
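
To confirm that the push succeeded, you can list the branches on the remote you just added (an optional check using plain git):

# Lists the refs that now exist in your project's Cloud Source Repository.
git ls-remote cloud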

Using a startup script to initialize an instance

Now that your code is accessible by Compute Engine instances, you need a way to instruct your instance to download and run your code. An instance can have a startup script that is executed whenever the instance is started or restarted.

Here is the startup script that is included in the Bookshelf sample app:

#!/bin/bash
set -v

# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")

# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &

# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git nodejs build-essential supervisor

# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v4.2.2/node-v4.2.2-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm

# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/$PROJECTID /opt/app

# Install app dependencies
cd /opt/app/7-gce
npm install

# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app

# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/7-gce
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF

supervisorctl reread
supervisorctl update

# Application should now be running under supervisor

The startup script performs these tasks:

  • Install the Google Cloud Logging agent. The agent automatically collects logs from syslog.

  • Install Node.js and Supervisor. Supervisor is used to run the application as a daemon.

  • Clone the application source code from Cloud Source Repositories and install dependencies.

  • Configure Supervisor to run the application. Supervisor makes sure the application is restarted if it exits unexpectedly or is killed by an administrator or other process. It also sends the application's stdout and stderr to syslog to be collected by the Cloud Logging agent.

Creating and configuring a Compute Engine instance

  1. Create a Compute Engine instance:

    Linux/Mac OS X

    gcloud compute instances create my-app-instance \
        --image=debian-8 \
        --machine-type=g1-small \
        --scopes userinfo-email,cloud-platform \
        --metadata-from-file startup-script=gce/startup-script.sh \
        --zone us-central1-f \
        --tags http-server
    

    Windows

    gcloud compute instances create my-app-instance ^
        --image=debian-8 ^
        --machine-type=g1-small ^
        --scopes userinfo-email,cloud-platform ^
        --metadata-from-file startup-script=gce/startup-script.sh ^
        --zone us-central1-f ^
        --tags http-server
    

    This command creates a new instance, allows it to access Cloud Platform services, and runs your startup script. The instance name is my-app-instance.

  2. Check the progress of the instance creation:

    gcloud compute instances get-serial-port-output my-app-instance --zone us-central1-f
    

    If the startup script has completed, you will see Finished running startup script at the end of the command output.

  3. Create a firewall rule to allow traffic to your instance:

    Linux/Mac OS X

    gcloud compute firewall-rules create default-allow-http-8080 \
        --allow tcp:8080 \
        --source-ranges 0.0.0.0/0 \
        --target-tags http-server \
        --description "Allow port 8080 access to http-server"
    

    Windows

    gcloud compute firewall-rules create default-allow-http-8080 ^
        --allow tcp:8080 ^
        --source-ranges 0.0.0.0/0 ^
        --target-tags http-server ^
        --description "Allow port 8080 access to http-server"
    

  4. Get the external IP address of your instance:

    gcloud compute instances list
    
  5. To see the application running, go to http://[YOUR_INSTANCE_IP]:8080,

    where [YOUR_INSTANCE_IP] is the external IP address of your instance.
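
You can also check the app from the command line; a successful response confirms both the firewall rule and the app itself (optional):

curl -I http://[YOUR_INSTANCE_IP]:8080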

Managing and monitoring an instance

You can use the Google Cloud Platform Console to monitor and manage your instance. In the Compute > Compute Engine section, you can view the running instance and connect to it using SSH. In the Monitoring > Logs section, you can view all of the logs generated by your Compute Engine resources. Google Cloud Logging is automatically configured to gather logs from various common services, including syslog.
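
From the command line, you can connect to the instance over SSH and inspect the Supervisor-managed process and its logs directly (nodeapp is the program name configured by the startup script above):

# Open an SSH session on the instance.
gcloud compute ssh my-app-instance --zone us-central1-f

# On the instance: confirm that Supervisor is running the app ...
sudo supervisorctl status nodeapp

# ... and follow its output, which is forwarded to syslog.
sudo tail -f /var/log/syslog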

Horizontal scaling with multiple instances

Figure: Multiple-instance deployment with managed instances

Compute Engine can easily scale horizontally. By using a managed instance group and the Compute Engine Autoscaler, Compute Engine can automatically create new instances of your application when needed and shut down instances when demand is low. You can set up an HTTP load balancer to distribute traffic to the instances in a managed instance group.

Creating a managed instance group

A managed instance group is a group of homogeneous instances based on the same instance template. An instance template defines the configuration of your instance, including the source image, disk size, scopes, and metadata such as startup scripts.
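
The commands in this section and the next use shell variables such as $TEMPLATE, $GROUP, $SERVICE, and $ZONE. The sample's deploy.sh script defines its own values for these; if you run the commands by hand, set them first. The following values are only an illustrative example, chosen to match the single-instance settings used earlier in this tutorial:

# Example values only -- adjust to your project. The sample's deploy.sh
# script defines its own equivalents.
ZONE=us-central1-f
MACHINE_TYPE=g1-small
IMAGE=debian-8
STARTUP_SCRIPT=gce/startup-script.sh
SCOPES="userinfo-email,cloud-platform"
TAGS=http-server
TEMPLATE=my-app-tmpl          # hypothetical instance template name
GROUP=my-app-group            # hypothetical managed instance group name
SERVICE=my-app-service        # hypothetical backend service name
MIN_INSTANCES=2
MAX_INSTANCES=10
TARGET_UTILIZATION=0.5        # 50% load-balancing utilization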

  1. First, create a template:

    gcloud compute instance-templates create $TEMPLATE \
      --image $IMAGE \
      --machine-type $MACHINE_TYPE \
      --scopes $SCOPES \
      --metadata-from-file startup-script=$STARTUP_SCRIPT \
      --tags $TAGS

  2. Next, create an instance group:

    gcloud compute instance-groups managed \
      create $GROUP \
      --base-instance-name $GROUP \
      --size $MIN_INSTANCES \
      --template $TEMPLATE \
      --zone $ZONE

    The --size parameter specifies the number of instances in the group. After all of the instances have finished running their startup scripts, you can access them individually by using their external IP addresses and port 8080. To find the external IP addresses of the instances, enter gcloud compute instances list. The managed instances share a common name prefix, which is set by the --base-instance-name parameter (here, the value of $GROUP).
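
You can watch the group come up from the command line (optional):

# Lists the instances in the group and their current status.
gcloud compute instance-groups managed list-instances $GROUP --zone $ZONE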

Creating a load balancer

An individual instance is fine for testing or debugging, but for serving web traffic it's better to use a load balancer to automatically direct traffic to available instances. To create a load balancer, follow these steps.

  1. Create a health check. The load balancer uses a health check to determine which instances are capable of serving traffic:

    gcloud compute http-health-checks create ah-health-check \
      --request-path /_ah/health \
      --port 8080

  2. Create a named port. The HTTP load balancer uses the named port called http to determine which port to direct traffic to. In your existing instance group, give port 8080 the name http:

    gcloud compute instance-groups managed set-named-ports \
        $GROUP \
        --named-ports http:8080 \
        --zone $ZONE

  3. Create a backend service. The backend service is the target for load-balanced traffic. It defines which instance group the traffic should be directed to and which health check to use.

    gcloud compute backend-services create $SERVICE \
      --http-health-checks ah-health-check \
      --port 8080

  4. Add your instance group to the backend service:

    gcloud compute backend-services add-backend $SERVICE \
      --instance-group $GROUP \
      --zone $ZONE

  5. Create a URL map and proxy:

    The URL map defines which URLs should be directed to which backend services. In this sample, all traffic is served by one backend service. If you want to load balance requests between multiple regions or groups, you can create multiple backend services. A proxy receives traffic and forwards it to backend services using URL maps.

    1. Create the URL map:

      gcloud compute url-maps create $SERVICE-map \
        --default-service $SERVICE

    2. Create the proxy:

      gcloud compute target-http-proxies create $SERVICE-proxy \
        --url-map $SERVICE-map

  6. Create a global forwarding rule. The global forwarding rule ties a public IP address and port to a proxy:

    gcloud compute forwarding-rules create $SERVICE-http-rule \
      --global \
      --target-http-proxy $SERVICE-proxy \
      --port-range 80
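
After completing these steps, you can optionally confirm that each load-balancing resource exists and that the health check path answers on an instance, where [YOUR_INSTANCE_IP] is the external IP address of any instance in the group:

# List the load-balancing resources created above.
gcloud compute backend-services list
gcloud compute url-maps list
gcloud compute target-http-proxies list
gcloud compute forwarding-rules list --global

# The health check polls /_ah/health on port 8080 on each instance.
curl -i http://[YOUR_INSTANCE_IP]:8080/_ah/health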

Configuring the autoscaler

The load balancer ensures that traffic is distributed across all of your healthy instances. But what happens if there is too much traffic for your instances to handle? You could add more instances manually, but a better solution is to configure a Compute Engine Autoscaler to automatically create and delete instances in response to traffic demands.

  1. Create an autoscaler:

    gcloud compute instance-groups managed set-autoscaling \
      $GROUP \
      --max-num-replicas $MAX_INSTANCES \
      --target-load-balancing-utilization $TARGET_UTILIZATION \
      --zone $ZONE

    The preceding command creates an autoscaler on the managed instance group that scales up to $MAX_INSTANCES instances. New instances are added when the load-balancing utilization rises above $TARGET_UTILIZATION and are removed when it falls below that value (for example, a maximum of 10 instances and a 50% utilization target).

  2. Create a firewall rule if you haven't already created one:

    gcloud compute firewall-rules create default-allow-http-8080 \
        --allow tcp:8080 \
        --source-ranges 0.0.0.0/0 \
        --target-tags http-server \
        --description "Allow port 8080 access to http-server"

  3. Check progress:

    gcloud compute backend-services get-health $SERVICE
    

    Continue checking until at least one of your instances reports HEALTHY.

Viewing your application

  1. Get the forwarding IP address for the load balancer:

    gcloud compute forwarding-rules list --global
    

    Your forwarding-rules IP address is in the IP_ADDRESS column.

  2. In a browser, enter the IP address from the list. Your load-balanced and autoscaled app is now running on Google Compute Engine!
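
You can also check the load balancer from the command line, where [LB_IP_ADDRESS] is the IP address from the previous step (it can take several minutes after setup before the load balancer starts serving traffic):

curl -I http://[LB_IP_ADDRESS]/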

Managing and monitoring your deployment

Managing multiple instances is just as easy as managing a single instance. You can use the Cloud Platform Console to monitor load balancing, autoscaling, and your managed instance group.

You can manage your instance group and autoscaling configuration by using the Compute > Compute Engine > Instance groups section.

You can manage your load balancing configuration, including URL maps and backend services, by using the Compute > Compute Engine > HTTP load balancing section.

Deployment script

The sample application includes a script that helps demonstrate deployment to Compute Engine. The script named deploy.sh performs a complete, autoscaled, load-balanced deployment of the application as described in Horizontal scaling with multiple instances.
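
If you want to try the script, the following sketch assumes it sits next to the startup script in the sample's gce directory; adjust the path if your copy of the sample differs:

# Run the full multiple-instance deployment (path assumed; see above).
bash gce/deploy.sh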

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Running the teardown script

If you ran the deploy.sh script, run the teardown.sh script to remove all resources created by the deploy.sh script. This returns your project to its original state and helps to avoid further billing.
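
As with deploy.sh, the path below is an assumption; adjust it if the script is located elsewhere in your copy of the sample:

# Remove the resources created by deploy.sh (path assumed; see above).
bash gce/teardown.sh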

Deleting resources manually

If you followed the steps in this tutorial manually, you can delete your Compute Engine instances and Cloud Storage bucket manually.

Deleting your Compute Engine instance

To delete a Compute Engine instance:

  1. In the Cloud Platform Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click the checkbox next to the instance you want to delete.
  3. Click the Delete button at the top of the page to delete the instance.

Deleting your Cloud Storage bucket

To delete a Cloud Storage bucket:

  1. In the Cloud Platform Console, go to the Cloud Storage browser.

    Go to the Cloud Storage browser

  2. Click the checkbox next to the bucket you want to delete.
  3. Click the Delete button at the top of the page to delete the bucket.
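
Alternatively, you can delete these resources from the command line (a sketch using the names from this tutorial; it covers only the single instance and the bucket, so also remove any instance group, template, and load-balancing resources you created by hand, or use the teardown.sh script):

# Delete the single Compute Engine instance.
gcloud compute instances delete my-app-instance --zone us-central1-f

# Delete the Cloud Storage bucket and all of the objects in it.
gsutil rm -r gs://[YOUR-BUCKET-NAME]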

What's next

  • Monitor your resources on the go: get the Google Cloud Console app to help you manage your projects.
