Developers & Practitioners

Deploying a Cloud Spanner-based Node.js application

Cloud Spanner is Google’s fully managed, horizontally scalable relational database service. Customers in financial services, gaming, retail and many other industries trust it to run their most demanding workloads, where consistency and availability at scale are critical. In this blog post, we illustrate how to build and deploy a Node.js application on Cloud Spanner using a sample stock chart visualization tool called OmegaTrade. This application stores the stock prices in Cloud Spanner and renders visualizations using Google Charts. You will learn how to set up a Cloud Spanner instance and how to deploy a Node.js application to Cloud Run, along with a few important Cloud Spanner concepts.

We begin by describing the steps to deploy the application on Cloud Run, and end with a discussion of best practices around tuning sessions, connection pooling & timeouts for applications using Cloud Spanner in general, which were adopted in OmegaTrade as well.

Deployment steps

We’re going to deploy the application completely serverless - with the frontend and backend services deployed on Cloud Run and Cloud Spanner as the data store. We chose Cloud Run because it abstracts away infrastructure management and scales up or down automatically almost instantaneously depending on traffic.


The backend service uses the Node.js Express framework and connects to Cloud Spanner with default connection pooling, session, and timeout capabilities. 

As prerequisites, please ensure that you have:

  • Access to a new or existing GCP project with one of the sets of permissions below:

    • Owner

    • Editor + Cloud Run Admin + Storage Admin 

    • Cloud Run Admin + Service Usage Admin + Cloud Spanner Admin + Storage Admin

  • Enabled billing on the above GCP project

  • Installed and initialized the Google Cloud SDK

  • Installed and configured Docker on your machine 

  • Git installed and set up on your machine

Note - Please ensure that your permissions are not restricted by any organizational policies, or you may run into an issue at the Deployment stage later on.

Let’s begin!

First, let's set up our gcloud configuration as default and set a GCP project to this configuration. gcloud is the command-line interface for GCP services.

  gcloud init


  Pick configuration to use:
 [1] Re-initialize this configuration [default] with new settings
 [2] Create a new configuration
 [3] Switch to and re-initialize existing configuration: [emulator]
Please enter your numeric choice:  1

Choose your Google account with access to the required GCP project and enter the Project ID when prompted. 

Next, we need to ensure the default gcloud configuration is set correctly. Below we are enabling authentication, unsetting any API endpoint URL set previously, and setting the GCP project we intend to use in the default gcloud configuration.

  gcloud config unset auth/disable_credentials
gcloud config unset api_endpoint_overrides/spanner
gcloud config set project [Your-Project-ID]

Now let’s enable Google Cloud APIs for Cloud Spanner, Container Registry, and Cloud Run.

  gcloud services enable spanner.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable run.googleapis.com

Provision Cloud Spanner: Instance, database & tables

Let’s create a Spanner instance and a database using gcloud commands.

  gcloud spanner instances create omegatrade-instance --config=regional-us-west1 \
    --description="omegatrade-instance" --nodes=1

gcloud spanner databases create omegatrade-db --instance omegatrade-instance

We will also create 4 tables that are required by the OmegaTrade application:

  • Users
  • Companies
  • CompanyStocks (tracks the stock values)
  • Simulations (tracks the state of each simulation)

  gcloud spanner databases ddl update omegatrade-db --instance=omegatrade-instance --ddl='CREATE TABLE users (userId STRING(36) NOT NULL, businessEmail STRING(50), fullName STRING(36), password STRING(100), photoUrl STRING(250), provider STRING(20),
forceChangePassword BOOL) PRIMARY KEY(userId);
CREATE UNIQUE NULL_FILTERED INDEX usersByBusinessEmail ON users (businessEmail);'

gcloud spanner databases ddl update omegatrade-db --instance=omegatrade-instance --ddl='CREATE TABLE companies (companyId STRING(36) NOT NULL, companyName STRING(30), companyShortCode STRING(15), created_at TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp=true)) PRIMARY KEY(companyId);
CREATE UNIQUE NULL_FILTERED INDEX companiesByCompanyName ON companies (companyName);
CREATE UNIQUE NULL_FILTERED INDEX companiesByShortCode ON companies (companyShortCode);' 

gcloud spanner databases ddl update omegatrade-db --instance=omegatrade-instance --ddl='CREATE TABLE companyStocks (companyStockId STRING(36) NOT NULL, companyId STRING(36) NOT NULL, open NUMERIC, volume NUMERIC, currentValue NUMERIC, date FLOAT64, close NUMERIC,
dayHigh NUMERIC, dayLow NUMERIC, timestamp TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp=true), CONSTRAINT FK_CompanyStocks FOREIGN KEY (companyId) REFERENCES companies (companyId)) PRIMARY KEY(companyStockId);' 

gcloud spanner databases ddl update omegatrade-db --instance=omegatrade-instance --ddl='CREATE TABLE simulations (sId STRING(36) NOT NULL, companyId STRING(36) NOT NULL,
status STRING(36), createdAt TIMESTAMP NOT NULL OPTIONS (allow_commit_timestamp=true),
CONSTRAINT FK_CompanySimulation FOREIGN KEY (companyId) REFERENCES companies (companyId)) PRIMARY KEY(sId);'

Verify that these tables were successfully created by querying INFORMATION_SCHEMA in the Cloud Spanner database. INFORMATION_SCHEMA, as defined in the SQL spec, is the standard way to query metadata about database objects.

  gcloud spanner databases execute-sql omegatrade-db  --instance=omegatrade-instance  --sql='SELECT * FROM information_schema.tables WHERE table_schema <> "INFORMATION_SCHEMA"  AND table_schema <> "SPANNER_SYS"'

Now that the Cloud Spanner instance, database, and tables are created from the above step, let’s build and deploy OmegaTrade.

Deploy app backend to Cloud Run

We will now walk through the steps to deploy the omegatrade/frontend and omegatrade/backend services to Cloud Run. We will first deploy the backend and then use the backend service URL to deploy the frontend. 

First, we’ll clone the repository:

  git clone [omegatrade-repository-URL] &&
cd omegatrade/backend

Let’s edit some env variables we need for the app to work. Add your project ID in the placeholder [Your-Project-ID].

  vi .env

PROJECTID = [Your-Project-ID]
INSTANCE = omegatrade-instance
DATABASE = omegatrade-db
JWT_KEY = w54p3Y?4dj%8Xqa2jjVC84narhe5Pk

Now, let’s build the image from the dockerfile and push it to GCR. As above, we will need to change the command to reflect our GCP project ID.

  docker build -t gcr.io/[Your-Project-ID]/omega-trade/backend:v1 -f dockerfile .
docker push gcr.io/[Your-Project-ID]/omega-trade/backend:v1

Note - In case you face issues with authentication, configure Docker to authenticate with Container Registry (for example, by running `gcloud auth configure-docker`), and retry the commands above.

Next, let's deploy the backend to Cloud Run. We will create a Cloud Run service and deploy the image we have built with some environment variables for Cloud Spanner configuration. This may take a few minutes.

  gcloud run deploy omegatrade-backend --platform managed --region us-west1 --image gcr.io/[Your-Project-ID]/omega-trade/backend:v1 --memory 512Mi --allow-unauthenticated

Service [omegatrade-backend] revision [omegatrade-backend-00001-poq] has been deployed and is serving 100 percent of traffic.
Service URL:

Now we have OmegaTrade backend up and running. The Service URL for the backend is printed to the console. Note down this URL as we will use it to build the frontend. 

Import sample stock data to the database

To import sample company and stock data, run the commands below in the backend folder.

  npm install
node seed-data.js

The above command will migrate sample data into the connected database.

Once this is successful, you will get a `Data Loaded successfully` message.

Note: Run this migration only on an empty database, to avoid duplicating the sample data.
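
That empty-database restriction could be enforced with a small guard like this (a hypothetical sketch, not the actual contents of seed-data.js; `database` is assumed to be a @google-cloud/spanner database client and `loadSampleData` a function performing the inserts):

```javascript
// Hypothetical guard: seed only when the companies table is empty,
// so re-running the script cannot duplicate the sample data.
async function seedIfEmpty(database, loadSampleData) {
  const [rows] = await database.run({
    sql: 'SELECT COUNT(*) AS cnt FROM companies',
  });
  if (Number(rows[0].toJSON().cnt) > 0) {
    return 'Database already has data; skipping seed';
  }
  await loadSampleData(database);
  return 'Data Loaded successfully';
}

module.exports = {seedIfEmpty};
```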

Now, let’s deploy the frontend.

Deploy the app frontend to Cloud Run

Before we build the frontend service, we need to update the following file in the repo with the backend Service URL from the backend deployment step:

  cd omegatrade/frontend/src/environments
vi environment.ts

Note - If you’d like to enable Sign In With Google for the application, now would be a good time to set up OAuth. To do so, please follow the steps in part 6 of the readme.

Change baseUrl to the backend Service URL (keeping the /api/v1/ suffix as shown). If you enabled OAuth, make sure clientId matches the value that you got from the OAuth console flow.

  export const environment = {
  production: false,
  name: "dev",
  // change baseUrl according to backend URL
  baseUrl: "[Your-Service-URL]/api/v1/", 
  // change clientId to actual value you have received from OAuth console
  clientId: "[Your-ClientId]"
};

 If you skipped creating any OAuth credentials, set clientId to an empty string. All other fields remain the same.

  // if you skipped creating any OAuth credentials
  clientId: ""

Go back to the frontend folder. Build the frontend service and push the image to GCR. This process may take a few minutes.

  cd ../..
docker build -t gcr.io/[Your-Project-ID]/omegatrade/frontend:v1 -f dockerfile .
docker push gcr.io/[Your-Project-ID]/omegatrade/frontend:v1

Now, let’s deploy the frontend to Cloud Run.

  gcloud run deploy omegatrade-frontend --platform managed --region us-west1 --image gcr.io/[Your-Project-ID]/omegatrade/frontend:v1 --allow-unauthenticated

Allow unauthenticated invocations to [omegatrade-frontend] (y/N)?  y

Service [omegatrade-frontend] revision [omegatrade-frontend-00001-zoz] has been deployed and is serving 100 percent of traffic.
Service URL:

Now, we have the front end deployed. You can go to this Service URL in your browser to access the application.

Optionally, we can add the frontend URL in the OAuth web application, to enable sign-in using a Google account.

Under OAuth 2.0 Client IDs, open the application you have created (OmegaTrade-Test). Add the frontend URL under Authorised JavaScript origins and save.

Note - Please ensure that cookies are enabled in your browser to avoid being blocked from running the app.

A few screenshots

Congratulations! If you’ve been following along, the app should now be up and running in Cloud Run. You should be able to go to the frontend URL and play around with the application! Try adding your favorite company tickers and generating simulations. All data writes and reads are being taken care of by Cloud Spanner.

Here are a few screenshots from the app:

1. Login and Registration View: The user can register and authenticate using a Google account (via OAuth, if you enabled it) or using an email address. On successful login, the user is redirected to the Dashboard.

2. Dashboard View: The app is pre-configured with simulated stock values for a few fictitious sample companies. The dashboard view renders the simulated stock prices in a graph. 


3. Manage Company View: Users can also add a new company and its ticker symbol using this view.


4. Simulate Data View: This view allows the user to simulate data for any existing or newly added company. The backend service simulates data based on a couple of parameters: the interval chosen and the number of rows. The user can also pause, resume and delete the running simulations.


Now that we’ve got the application deployed, let’s cover a few important Spanner concepts that you’re likely to come across, both as you explore the application’s code, and in your own applications.


Sessions

A Session represents a communication channel with the Cloud Spanner database service. It is used to perform transactions that read, write, or modify data in a Cloud Spanner database. A session is associated with a single database.

Best Practice - It is very unlikely that you will need to interact with sessions directly. Sessions are created and maintained internally by the client libraries, which optimize them for best performance.

Connection (session) pooling

In Cloud Spanner, a long-lived "connection"/"communication channel" with the database is modeled by a "session" and not a DatabaseClient object. The DatabaseClient object implements connection (session) pooling internally in a SessionPool object which can be configured via SessionPoolOptions.

The default session pool options are:

  const DEFAULTS = {
 // Time in milliseconds before giving up trying to acquire a session.
 // If the specified value is `Infinity`, a timeout will not occur.
 acquireTimeout: Infinity,
 // Maximum number of concurrent requests the pool is allowed to make.
 concurrency: Infinity,
 // If set to true, an error will be thrown when there are no available sessions for a request.
 fail: false,
 // How long until a resource becomes idle, in minutes.
 idlesAfter: 10,
 // How often to ping idle sessions, in minutes. Must be less than 1 hour.
 keepAlive: 30,
 // Labels to apply to any session created by the pool.
 labels: {},
 // Maximum number of resources to create at any given time.
 max: 100,
 // Maximum number of idle resources to keep in the pool at any given time.
 maxIdle: 1,
 // Minimum number of resources to keep in the pool at any given time.
 min: 0,
 // Percentage of sessions to be pre-allocated as write sessions, represented as a float.
 writes: 0,
};

Best Practice - It is recommended to use the default session pool options, as they are already configured for maximum performance.

NOTE - You can set min = max if you want the pool to be at its maximum size by default. This avoids the case where your application has used up all min sessions and then blocks while the next batch of sessions is created.
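
A minimal sketch of that override, using the SessionPoolOptions fields listed above (the values are illustrative):

```javascript
// min = max: the full pool is created at startup, so the application never
// blocks while a new batch of sessions is being created under load.
const poolOptions = {
  min: 100,
  max: 100,
};

// Passed as the second argument when obtaining the database client, e.g.:
//   const database = spanner
//     .instance('omegatrade-instance')
//     .database('omegatrade-db', poolOptions);
```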

Timeouts and retries

Best Practice - It is recommended to use the default timeout and retry configurations, because setting more aggressive timeouts and retries could cause the backend to start throttling your requests.

In the following example, a custom timeout of 60 seconds is set explicitly (see the totalTimeoutMillis setting) for the given operation. If the operation takes longer than this timeout, a DEADLINE_EXCEEDED error is returned.

  const {Spanner} = require('@google-cloud/spanner');

const spanner = new Spanner({
  projectId: 'test-project',
});

const retryAndTimeoutSettings = {
  retry: {
    // The set of error codes that will be retried.
    retryCodes: [14], // e.g. UNAVAILABLE; adjust to the codes you want retried
    backoffSettings: {
      // Configure retry delay settings.
      // The initial amount of time to wait before retrying the request.
      initialRetryDelayMillis: 500,
      // The maximum amount of time to wait before retrying. I.e. after this
      // value is reached, the wait time will not increase further by the
      // multiplier.
      maxRetryDelayMillis: 10000,
      // The previous wait time is multiplied by this multiplier to come up
      // with the next wait time, until the max is reached.
      retryDelayMultiplier: 1.5,
      // Configure RPC and total timeout settings.
      // Timeout for the first RPC call. Subsequent retries will be based off
      // this value.
      initialRpcTimeoutMillis: 5000,
      // Controls the change of timeout for each retry.
      rpcTimeoutMultiplier: 1.5,
      // The max for the per RPC timeout.
      maxRpcTimeoutMillis: 30000,
      // The timeout for all calls (first call + all retries).
      totalTimeoutMillis: 60000,
    },
  },
};

// Gets a reference to a Cloud Spanner instance, database, and table.
const instance = spanner.instance('omegatrade-instance');
const database = instance.database('omegatrade-db');
const table = database.table('users');

async function insertUser() {
  const row = {
    userId: '1',
    fullName: 'user01',
    businessEmail: '',
    password: 'someencryptedpassword',
    provider: '',
    photoUrl: '',
  };
  await table.insert(row, retryAndTimeoutSettings);
}


Congratulations! If you’ve been following along, you should now have a functional OAuth-enabled Node.js application based on Spanner deployed to Cloud Run. In addition, you should have a better understanding of the various parameters related to sessions, connection pooling, timeouts, and retries that Cloud Spanner exposes. Feel free to play with the application and explore the codebase at your leisure. 

To learn more about the building blocks for implementing Node.js applications on Cloud Spanner, visit