Connecting to Cloud SQL

Overview

Cloud SQL is a fully-managed database service that makes it easy to set up, maintain, manage, and administer your relational PostgreSQL and MySQL databases in the cloud. You can connect your Cloud Functions to Cloud SQL instances using a local socket interface provided in the Cloud Functions execution environment. This is a private connection to your Cloud SQL instances, which means you don't need to expose them to any external network.

Setting up Cloud SQL

Cloud Functions supports both First and Second Generation MySQL instances as well as PostgreSQL instances. See the relevant Cloud SQL documentation for instructions on setting up a new Cloud SQL instance.

Cross Region

First Generation MySQL instances must be in the same region as your Cloud Function. Second Generation MySQL instances as well as PostgreSQL instances work with Cloud Functions in any region.

Cross Project

Your Cloud Function has access to all Cloud SQL instances in its own project. It can also access Second Generation MySQL instances and PostgreSQL instances in other projects, provided that the Cloud Function's service account (listed on the function's General tab in the GCP Console) is added as an IAM member with the Cloud SQL Client role on the project containing the Cloud SQL instance(s).
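
For example, the role can be granted with the gcloud CLI. This is a sketch: substitute your own project IDs, and note that the default Cloud Functions service account shown here is an assumption; use the account listed on your function's General tab.

gcloud projects add-iam-policy-binding [SQL_PROJECT_ID] \
  --member='serviceAccount:[FUNCTION_PROJECT_ID]@appspot.gserviceaccount.com' \
  --role='roles/cloudsql.client'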

Connecting to Cloud SQL

Cloud Functions provides a local socket interface for connecting to Cloud SQL instances, which works for both MySQL and PostgreSQL. To connect to your Cloud SQL instance, use the /cloudsql file system path.

MySQL

Node.js

const mysql = require('mysql');

/**
 * TODO(developer): specify SQL connection details
 */
const connectionName = process.env.INSTANCE_CONNECTION_NAME || '<YOUR INSTANCE CONNECTION NAME>';
const dbUser = process.env.SQL_USER || '<YOUR DB USER>';
const dbPassword = process.env.SQL_PASSWORD || '<YOUR DB PASSWORD>';
const dbName = process.env.SQL_NAME || '<YOUR DB NAME>';

const mysqlConfig = {
  connectionLimit: 1,
  user: dbUser,
  password: dbPassword,
  database: dbName
};
if (process.env.NODE_ENV === 'production') {
  mysqlConfig.socketPath = `/cloudsql/${connectionName}`;
}

// Connection pools reuse connections between invocations,
// and handle dropped or expired connections automatically.
let mysqlPool;

exports.mysqlDemo = (req, res) => {
  // Initialize the pool lazily, in case SQL access isn't needed for this
  // GCF instance. Doing so minimizes the number of active SQL connections,
  // which helps keep your GCF instances under SQL connection limits.
  if (!mysqlPool) {
    mysqlPool = mysql.createPool(mysqlConfig);
  }

  mysqlPool.query('SELECT NOW() AS now', (err, results) => {
    if (err) {
      console.error(err);
      res.status(500).send(err);
    } else {
      res.send(JSON.stringify(results));
    }
  });

  // Close any SQL resources that were declared inside this function.
  // Keep any declared in global scope (e.g. mysqlPool) for later reuse.
};

Python (Beta)

from os import getenv

import pymysql
from pymysql.err import OperationalError

# TODO(developer): specify SQL connection details
CONNECTION_NAME = getenv(
  'INSTANCE_CONNECTION_NAME',
  '<YOUR INSTANCE CONNECTION NAME>')
DB_USER = getenv('MYSQL_USER', '<YOUR DB USER>')
DB_PASSWORD = getenv('MYSQL_PASSWORD', '<YOUR DB PASSWORD>')
DB_NAME = getenv('MYSQL_DATABASE', '<YOUR DB NAME>')

mysql_config = {
  'user': DB_USER,
  'password': DB_PASSWORD,
  'db': DB_NAME,
  'charset': 'utf8mb4',
  'cursorclass': pymysql.cursors.DictCursor,
  'autocommit': True
}

# Create SQL connection globally to enable reuse
# PyMySQL does not include support for connection pooling
mysql_conn = None


def __get_cursor():
    """
    Helper function to get a cursor
      PyMySQL does NOT automatically reconnect,
      so we must reconnect explicitly using ping()
    """
    try:
        return mysql_conn.cursor()
    except OperationalError:
        mysql_conn.ping(reconnect=True)
        return mysql_conn.cursor()


def mysql_demo(request):
    global mysql_conn

    # Initialize connections lazily, in case SQL access isn't needed for this
    # GCF instance. Doing so minimizes the number of active SQL connections,
    # which helps keep your GCF instances under SQL connection limits.
    if not mysql_conn:
        try:
            mysql_conn = pymysql.connect(**mysql_config)
        except OperationalError:
            # The default (local development) connection failed, so fall
            # back to the Cloud SQL unix socket used in production
            mysql_config['unix_socket'] = f'/cloudsql/{CONNECTION_NAME}'
            mysql_conn = pymysql.connect(**mysql_config)

    # Remember to close SQL resources declared while running this function.
    # Keep any declared in global scope (e.g. mysql_conn) for later reuse.
    with __get_cursor() as cursor:
        cursor.execute('SELECT NOW() as now')
        results = cursor.fetchone()
        return str(results['now'])

PostgreSQL

Node.js

const pg = require('pg');

/**
 * TODO(developer): specify SQL connection details
 */
const connectionName = process.env.INSTANCE_CONNECTION_NAME || '<YOUR INSTANCE CONNECTION NAME>';
const dbUser = process.env.SQL_USER || '<YOUR DB USER>';
const dbPassword = process.env.SQL_PASSWORD || '<YOUR DB PASSWORD>';
const dbName = process.env.SQL_NAME || '<YOUR DB NAME>';

const pgConfig = {
  max: 1,
  user: dbUser,
  password: dbPassword,
  database: dbName
};

if (process.env.NODE_ENV === 'production') {
  pgConfig.host = `/cloudsql/${connectionName}`;
}

// Connection pools reuse connections between invocations,
// and handle dropped or expired connections automatically.
let pgPool;

exports.postgresDemo = (req, res) => {
  // Initialize the pool lazily, in case SQL access isn't needed for this
  // GCF instance. Doing so minimizes the number of active SQL connections,
  // which helps keep your GCF instances under SQL connection limits.
  if (!pgPool) {
    pgPool = new pg.Pool(pgConfig);
  }

  pgPool.query('SELECT NOW() as now', (err, results) => {
    if (err) {
      console.error(err);
      res.status(500).send(err);
    } else {
      res.send(JSON.stringify(results));
    }
  });

  // Close any SQL resources that were declared inside this function.
  // Keep any declared in global scope (e.g. pgPool) for later reuse.
};

Python (Beta)

from os import getenv

from psycopg2 import OperationalError
from psycopg2.pool import SimpleConnectionPool

# TODO(developer): specify SQL connection details
CONNECTION_NAME = getenv(
  'INSTANCE_CONNECTION_NAME',
  '<YOUR INSTANCE CONNECTION NAME>')
DB_USER = getenv('POSTGRES_USER', '<YOUR DB USER>')
DB_PASSWORD = getenv('POSTGRES_PASSWORD', '<YOUR DB PASSWORD>')
DB_NAME = getenv('POSTGRES_DATABASE', '<YOUR DB NAME>')

pg_config = {
  'user': DB_USER,
  'password': DB_PASSWORD,
  'dbname': DB_NAME
}

# Connection pools reuse connections between invocations,
# and handle dropped or expired connections automatically.
pg_pool = None


def __connect(host):
    """
    Helper function to connect to Postgres
    """
    global pg_pool
    pg_config['host'] = host
    pg_pool = SimpleConnectionPool(1, 1, **pg_config)


def postgres_demo(request):
    global pg_pool

    # Initialize the pool lazily, in case SQL access isn't needed for this
    # GCF instance. Doing so minimizes the number of active SQL connections,
    # which helps keep your GCF instances under SQL connection limits.
    if not pg_pool:
        try:
            __connect(f'/cloudsql/{CONNECTION_NAME}')
        except OperationalError:
            # If production settings fail, use local development ones
            __connect('localhost')

    # Remember to close SQL resources declared while running this function.
    # Keep any declared in global scope (e.g. pg_pool) for later reuse.
    conn = pg_pool.getconn()
    try:
        with conn.cursor() as cursor:
            cursor.execute('SELECT NOW() as now')
            results = cursor.fetchone()
        return str(results[0])
    finally:
        # Return the connection to the pool (rather than closing it) so
        # later invocations can reuse it; without putconn(), a pool of
        # size 1 is exhausted after a single request.
        pg_pool.putconn(conn)

Note that PostgreSQL connections over unix sockets require a .s.PGSQL.5432 suffix on the socket path. Libraries like pg add this suffix automatically; if you choose an alternative library, confirm that it does so, or append the suffix yourself. The full socket path for PostgreSQL is:

/cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432

The INSTANCE_CONNECTION_NAME is the name of your Cloud SQL instance connection, and will usually be in the form:

First Generation

ProjectID:InstanceID

Second Generation

ProjectID:Region:InstanceID

For example:

my-gcp-project:us-central1:helloworld

You can determine the instance connection name from the Cloud SQL Dashboard in the GCP Console. Note that the "Instance connection name" (used above) is not the same as your "Instance ID."
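
If you use a driver that does not append the .s.PGSQL.5432 suffix for you, the full socket path can be assembled by hand. A minimal Node.js sketch using the example connection name above:

// Build the full PostgreSQL socket path for drivers that do not
// append the .s.PGSQL.5432 suffix automatically.
const connectionName = 'my-gcp-project:us-central1:helloworld';
const socketPath = `/cloudsql/${connectionName}/.s.PGSQL.5432`;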

Best Practices for Cloud SQL in Cloud Functions

This section provides guidelines for connecting to Cloud SQL in Cloud Functions.

Connection Pools & Connection Loss

Connections to underlying databases may be dropped, either by the database server itself, or by the infrastructure underlying Cloud Functions. To mitigate this, we recommend that you use a client library that supports connection pools that automatically reconnect broken client connections.

When using a connection pool, it is important to set the maximum number of connections to 1. This may seem counterintuitive; however, creating more than one concurrent connection per function instance can rapidly exhaust connection resources (see Maximum Concurrent Connections below for more detail). Cloud Functions limits concurrent executions to 1 per instance, so two requests are never processed by a single function instance at the same time, and in most situations only a single database connection is needed.

Connection Reuse

Where possible, we recommend that you allocate connection objects in global scope. This is both a performance optimization and a less error-prone approach, as it is easy to miss closing a connection (for example, when an unexpected error occurs).

If you use a connection pool in global scope, we recommend that you not close connections at the end of the function call. If you do not use a pool in global scope, or if you create individual connections in function scope, you should close those connections before the function returns, as in the sketch below.
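
For example, in Node.js, a connection created inside the handler should be ended before the response is sent. A minimal sketch (the handler name oneOffQuery is illustrative; mysqlConfig is assumed to be the configuration object from the MySQL example above):

const mysql = require('mysql');

exports.oneOffQuery = (req, res) => {
  // Function-scoped connection: created per invocation, so it must be
  // closed before the function finishes.
  const connection = mysql.createConnection(mysqlConfig);
  connection.query('SELECT NOW() AS now', (err, results) => {
    // Close the function-scoped connection before responding.
    connection.end();
    if (err) {
      res.status(500).send(err);
    } else {
      res.send(JSON.stringify(results));
    }
  });
};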

Using a globally scoped connection pool is recommended as this will improve the likelihood that the same connection is reused for subsequent invocations of the function, and the connection will be closed naturally when the function instance is evicted (auto-scaled down).

Maximum Concurrent Connections

Both the MySQL and PostgreSQL editions of Cloud SQL impose a maximum limit on concurrent connections, and these limits may vary depending on the database engine chosen (see the Cloud SQL Quotas and Limits page). If each of your function instances creates a connection, your overall throughput is limited by the number of concurrent connections that can be created. Where possible, you should only open a connection when the function actually needs it. A common anti-pattern is a function that allocates a connection in global scope (otherwise a best practice) but then never uses it; the idle connection still counts against your connection limits.
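
To illustrate, a minimal sketch of the two patterns in Node.js (the names eagerConnection, lazyPool, and getPool are illustrative; mysqlConfig is assumed to be the configuration object from the MySQL example above):

// Anti-pattern: a connection opened eagerly at instance start-up stays
// open, and counts against your connection limits, even if the
// function never queries the database.
// const eagerConnection = mysql.createConnection(mysqlConfig);
// eagerConnection.connect();

// Preferred: create the connection (or pool) lazily, on first use,
// as in the examples above.
let lazyPool;
function getPool() {
  if (!lazyPool) {
    lazyPool = mysql.createPool(mysqlConfig);
  }
  return lazyPool;
}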
