Connect from Cloud Functions

This page contains information and examples for connecting to a Cloud SQL instance from a service running in Cloud Functions.

For step-by-step instructions on running a Cloud Functions sample web application connected to Cloud SQL, see the quickstart for connecting from Cloud Functions.

Cloud SQL is a fully managed database service that helps you set up, maintain, manage, and administer your relational databases in the cloud.

Cloud Functions is a lightweight compute solution for developers to create single-purpose, standalone functions that respond to Cloud events without the need to manage a server or runtime environment.

Set up a Cloud SQL instance

  1. Enable the Cloud SQL Admin API in the Google Cloud project that you are connecting from, if you haven't already done so:

    Enable the API

  2. Create a Cloud SQL for PostgreSQL instance.

    By default, Cloud SQL assigns a public IP address to a new instance. You also have the option to assign a private IP address. For more information about the connectivity options for both, see the Connecting Overview page.

Configure Cloud Functions

The steps to configure Cloud Functions depend on the type of IP address that you assigned to your Cloud SQL instance.

Public IP (default)

To configure Cloud Functions to enable connections to a Cloud SQL instance:

  • Confirm that the instance created above has a public IP address. You can confirm this on the Overview page for the instance in the Google Cloud console. If you need to add a public IP address, see Configure public IP.
  • Get the instance's INSTANCE_CONNECTION_NAME. This value is available:
    • On the Overview page for the instance, in the Google Cloud console, or
    • By running the following command: gcloud sql instances describe [INSTANCE_NAME]
  • Configure the service account for your function. If the authorizing service account belongs to a project other than the one that contains the Cloud SQL instance, enable the Cloud SQL Admin API and add the IAM permissions listed below on both projects. Confirm that the service account has the appropriate Cloud SQL roles and permissions to connect to Cloud SQL.
    • To connect to Cloud SQL, the service account needs one of the following IAM roles:
      • Cloud SQL Client (preferred)
      • Cloud SQL Editor
      • Cloud SQL Admin
      Or, you can manually assign the following IAM permissions:
      • cloudsql.instances.connect
      • cloudsql.instances.get
  • If you're using Cloud Functions (2nd gen) and not Cloud Functions (1st gen), the following steps are required (also see Configure Cloud Run):
    1. Initially deploy your function.
      When you first begin creating a Cloud Function in the Google Cloud console, the underlying Cloud Run service hasn't been created yet. You can't configure a Cloud SQL connection until that service is created (by deploying the Cloud Function).
    2. In the Google Cloud console, in the upper right of the Function details page, under Powered by Cloud Run, click the link to access the underlying Cloud Run service.
    3. On the Cloud Run Service details page, select the Edit and deploy new revision tab.
    4. Follow the standard steps (as in the case of any configuration change) for setting a new configuration for a Cloud SQL connection.
      This creates a new Cloud Run revision, and subsequent revisions automatically receive this Cloud SQL connection, unless you explicitly change it.

Private IP

If the authorizing service account belongs to a different project than the one containing the Cloud SQL instance, do the following:

  • In both projects, enable the Cloud SQL Admin API.
  • For the service account in the project that contains the Cloud SQL instance, add the IAM permissions.

A Serverless VPC Access connector uses private IP addresses to handle communication to your VPC network. To connect directly with private IP addresses, you must do the following:
  1. Make sure that the Cloud SQL instance created previously has a private IP address. If you need to add one, see Configure private IP for instructions.
  2. Create a Serverless VPC Access connector in the same VPC network as your Cloud SQL instance. Note the following conditions:
    • Unless you're using Shared VPC, your connector must be in the same project and region as the resource that uses it, but it can send traffic to resources in different regions.
    • Serverless VPC Access supports communication to VPC networks connected using Cloud VPN and VPC Network Peering.
    • Serverless VPC Access doesn't support legacy networks.
  3. Configure Cloud Functions to use the connector.
  4. Connect using your instance's private IP address and port 5432.

Connect to Cloud SQL

After you configure Cloud Functions, you can connect to your Cloud SQL instance.

Public IP (default)

For public IP paths, Cloud Functions provides encryption and connects using the Cloud SQL Auth Proxy in two ways: with Unix sockets or with the Cloud SQL connectors.
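
As a minimal Python sketch of the Unix socket option (assuming the pg8000 driver, SQLAlchemy, and that the function exposes INSTANCE_CONNECTION_NAME and the database credentials as environment variables), the connection goes through the /cloudsql/<INSTANCE_CONNECTION_NAME> socket directory made available to the function; the Cloud SQL connectors are the other option and follow the same environment-variable pattern.

import os

import sqlalchemy

# /cloudsql/<INSTANCE_CONNECTION_NAME> is the socket directory provided by the
# Cloud SQL Auth Proxy mechanism; .s.PGSQL.5432 is the PostgreSQL socket file.
instance_connection_name = os.environ["INSTANCE_CONNECTION_NAME"]
unix_socket_path = f"/cloudsql/{instance_connection_name}"

pool = sqlalchemy.create_engine(
    # Equivalent URL:
    # postgresql+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=<socket_path>/.s.PGSQL.5432
    sqlalchemy.engine.url.URL.create(
        drivername="postgresql+pg8000",
        username=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        database=os.environ["DB_NAME"],
        query={"unix_sock": f"{unix_socket_path}/.s.PGSQL.5432"},
    ),
    # ... pool options (for example, pool_size) can be added here.
)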

Private IP

For private IP paths, your application connects directly to your instance through a VPC network. This method uses TCP to connect directly to the Cloud SQL instance without using the Cloud SQL Auth Proxy.

Connect with TCP

Connect using the private IP address of your Cloud SQL instance as the host and port 5432.

Python

To see this snippet in the context of a web application, view the README on GitHub.

import os

import sqlalchemy


def connect_tcp_socket() -> sqlalchemy.engine.base.Engine:
    """Initializes a TCP connection pool for a Cloud SQL instance of Postgres."""
    # Note: Saving credentials in environment variables is convenient, but not
    # secure - consider a more secure solution such as
    # Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
    # keep secrets safe.
    db_host = os.environ[
        "INSTANCE_HOST"
    ]  # e.g. '127.0.0.1' ('172.17.0.1' if deployed to GAE Flex)
    db_user = os.environ["DB_USER"]  # e.g. 'my-db-user'
    db_pass = os.environ["DB_PASS"]  # e.g. 'my-db-password'
    db_name = os.environ["DB_NAME"]  # e.g. 'my-database'
    db_port = os.environ["DB_PORT"]  # e.g. 5432

    pool = sqlalchemy.create_engine(
        # Equivalent URL:
        # postgresql+pg8000://<db_user>:<db_pass>@<db_host>:<db_port>/<db_name>
        sqlalchemy.engine.url.URL.create(
            drivername="postgresql+pg8000",
            username=db_user,
            password=db_pass,
            host=db_host,
            port=db_port,
            database=db_name,
        ),
        # ...
    )
    return pool

Java

To see this snippet in the context of a web application, view the README on GitHub.

Note:

  • CLOUD_SQL_CONNECTION_NAME should be represented as <MY-PROJECT>:<INSTANCE-REGION>:<INSTANCE-NAME>
  • Using the argument ipTypes=PRIVATE will force the SocketFactory to connect with an instance's associated private IP
  • See the JDBC socket factory version requirements for the pom.xml file here.


import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

public class TcpConnectionPoolFactory extends ConnectionPoolFactory {

  // Note: Saving credentials in environment variables is convenient, but not
  // secure - consider a more secure solution such as
  // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  // keep secrets safe.
  private static final String DB_USER = System.getenv("DB_USER");
  private static final String DB_PASS = System.getenv("DB_PASS");
  private static final String DB_NAME = System.getenv("DB_NAME");

  private static final String INSTANCE_HOST = System.getenv("INSTANCE_HOST");
  private static final String DB_PORT = System.getenv("DB_PORT");


  public static DataSource createConnectionPool() {
    // The configuration object specifies behaviors for the connection pool.
    HikariConfig config = new HikariConfig();

    // The following URL is equivalent to setting the config options below:
    // jdbc:postgresql://<INSTANCE_HOST>:<DB_PORT>/<DB_NAME>?user=<DB_USER>&password=<DB_PASS>
    // See the link below for more info on building a JDBC URL for the Cloud SQL JDBC Socket Factory
    // https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory#creating-the-jdbc-url

    // Configure which instance and what database user to connect with.
    config.setJdbcUrl(String.format("jdbc:postgresql://%s:%s/%s", INSTANCE_HOST, DB_PORT, DB_NAME));
    config.setUsername(DB_USER); // e.g. "root", "postgres"
    config.setPassword(DB_PASS); // e.g. "my-password"


    // ... Specify additional connection properties here.
    // ...

    // Initialize the connection pool using the configuration object.
    return new HikariDataSource(config);
  }
}

Node.js

To see this snippet in the context of a web application, view the README on GitHub.

const Knex = require('knex');

// createTcpPool initializes a TCP connection pool for a Cloud SQL
// instance of Postgres.
const createTcpPool = async config => {
  // Note: Saving credentials in environment variables is convenient, but not
  // secure - consider a more secure solution such as
  // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  // keep secrets safe.
  const dbConfig = {
    client: 'pg',
    connection: {
      host: process.env.INSTANCE_HOST, // e.g. '127.0.0.1'
      port: process.env.DB_PORT, // e.g. '5432'
      user: process.env.DB_USER, // e.g. 'my-user'
      password: process.env.DB_PASS, // e.g. 'my-user-password'
      database: process.env.DB_NAME, // e.g. 'my-database'
    },
    // ... Specify additional properties here.
    ...config,
  };
  // Establish a connection to the database.
  return Knex(dbConfig);
};

Go

To see this snippet in the context of a web application, view the README on GitHub.

package cloudsql

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	// Note: If connecting using the App Engine Flex Go runtime, use
	// "github.com/jackc/pgx/stdlib" instead, since v5 requires
	// Go modules which are not supported by App Engine Flex.
	_ "github.com/jackc/pgx/v5/stdlib"
)

// connectTCPSocket initializes a TCP connection pool for a Cloud SQL
// instance of Postgres.
func connectTCPSocket() (*sql.DB, error) {
	mustGetenv := func(k string) string {
		v := os.Getenv(k)
		if v == "" {
			log.Fatalf("Fatal Error in connect_tcp.go: %s environment variable not set.", k)
		}
		return v
	}
	// Note: Saving credentials in environment variables is convenient, but not
	// secure - consider a more secure solution such as
	// Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
	// keep secrets safe.
	var (
		dbUser    = mustGetenv("DB_USER")       // e.g. 'my-db-user'
		dbPwd     = mustGetenv("DB_PASS")       // e.g. 'my-db-password'
		dbTCPHost = mustGetenv("INSTANCE_HOST") // e.g. '127.0.0.1' ('172.17.0.1' if deployed to GAE Flex)
		dbPort    = mustGetenv("DB_PORT")       // e.g. '5432'
		dbName    = mustGetenv("DB_NAME")       // e.g. 'my-database'
	)

	dbURI := fmt.Sprintf("host=%s user=%s password=%s port=%s database=%s",
		dbTCPHost, dbUser, dbPwd, dbPort, dbName)


	// dbPool is the pool of database connections.
	dbPool, err := sql.Open("pgx", dbURI)
	if err != nil {
		return nil, fmt.Errorf("sql.Open: %w", err)
	}

	// ...

	return dbPool, nil
}

PHP

To see this snippet in the context of a web application, view the README on GitHub.

namespace Google\Cloud\Samples\CloudSQL\Postgres;

use PDO;
use PDOException;
use RuntimeException;
use TypeError;

class DatabaseTcp
{
    public static function initTcpDatabaseConnection(): PDO
    {
        try {
            // Note: Saving credentials in environment variables is convenient, but not
            // secure - consider a more secure solution such as
            // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
            // keep secrets safe.
            $username = getenv('DB_USER'); // e.g. 'your_db_user'
            $password = getenv('DB_PASS'); // e.g. 'your_db_password'
            $dbName = getenv('DB_NAME'); // e.g. 'your_db_name'
            $instanceHost = getenv('INSTANCE_HOST'); // e.g. '127.0.0.1' ('172.17.0.1' for GAE Flex)

            // Connect using TCP
            $dsn = sprintf('pgsql:dbname=%s;host=%s', $dbName, $instanceHost);

            // Connect to the database
            $conn = new PDO(
                $dsn,
                $username,
                $password,
                # ...
            );
        } catch (TypeError $e) {
            throw new RuntimeException(
                sprintf(
                    'Invalid or missing configuration! Make sure you have set ' .
                        '$username, $password, $dbName, and $instanceHost (for TCP mode). ' .
                        'The PHP error was %s',
                    $e->getMessage()
                ),
                $e->getCode(),
                $e
            );
        } catch (PDOException $e) {
            throw new RuntimeException(
                sprintf(
                    'Could not connect to the Cloud SQL Database. Check that ' .
                        'your username and password are correct, that the Cloud SQL ' .
                        'proxy is running, and that the database exists and is ready ' .
                        'for use. For more assistance, refer to %s. The PDO error was %s',
                    'https://cloud.google.com/sql/docs/postgres/connect-external-app',
                    $e->getMessage()
                ),
                $e->getCode(),
                $e
            );
        }

        return $conn;
    }
}

Best practices and other information

You can use the Cloud SQL Auth Proxy when testing your application locally. See the quickstart for using the Cloud SQL Auth Proxy for detailed instructions.

Connection Pools

Connections to underlying databases may be dropped, either by the database server itself, or by the infrastructure underlying Cloud Functions. We recommend using a client library that supports connection pools that automatically reconnect broken client connections. Additionally, we recommend using a globally scoped connection pool to increase the likelihood that your function reuses the same connection for subsequent invocations of the function, and closes the connection naturally when the instance is evicted (auto-scaled down). For more detailed examples on how to use connection pools, see Managing database connections.
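
As a minimal sketch of a globally scoped pool (assuming the Functions Framework for Python and a hypothetical connect_tcp_socket() helper like the Python snippet shown earlier), the engine is created once when a function instance starts and is then reused by every invocation that instance handles:

import functions_framework
import sqlalchemy

# Hypothetical module containing the connect_tcp_socket() snippet shown above.
from connect_tcp import connect_tcp_socket

# Created once per function instance ("global scope"), then reused across
# invocations instead of opening a new connection for each request.
db = connect_tcp_socket()


@functions_framework.http
def get_time(request):
    # Borrows a pooled connection and returns it to the pool when done.
    with db.connect() as conn:
        now = conn.execute(sqlalchemy.text("SELECT NOW()")).scalar()
    return str(now)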

Connection Limits

Cloud SQL imposes a maximum limit on concurrent connections, and these limits may vary depending on the database engine chosen (see Cloud SQL Quotas and Limits). It's recommended to use a connection pool with Cloud Functions, but it is important to set the maximum number of connections to 1.
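
For example, with SQLAlchemy you might cap the pool at a single connection per function instance; this sketch assumes the same pg8000 driver and environment variables as the Python snippet above:

import os

import sqlalchemy

pool = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL.create(
        drivername="postgresql+pg8000",
        username=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        host=os.environ["INSTANCE_HOST"],
        port=int(os.environ["DB_PORT"]),
        database=os.environ["DB_NAME"],
    ),
    pool_size=1,        # at most one persistent connection per instance
    max_overflow=0,     # never open extra connections beyond pool_size
    pool_timeout=30,    # seconds to wait for the single connection to free up
    pool_recycle=1800,  # replace connections older than 30 minutes
)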

Where possible, you should take care to only initialize a connection pool for functions that need access to your database. Some connection pools will create connections preemptively, which can consume excess resources and count towards your connection limits. For this reason, it's recommended to use Lazy Initialization to delay the creation of a connection pool until needed, and only include the connection pool in functions where it's used.
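
A lazy-initialization sketch (again assuming the Functions Framework and a hypothetical connect_tcp_socket() helper) defers pool creation until the first invocation that actually needs the database, so code paths that never touch the database never open connections:

import functions_framework
import sqlalchemy

# Hypothetical module containing the connect_tcp_socket() snippet shown above.
from connect_tcp import connect_tcp_socket

# Not created at cold start; only assigned when first needed.
db = None


def get_db() -> sqlalchemy.engine.base.Engine:
    global db
    if db is None:
        db = connect_tcp_socket()
    return db


@functions_framework.http
def lazy_handler(request):
    if request.path == "/healthz":
        # This path never creates the pool.
        return "ok"
    with get_db().connect() as conn:
        value = conn.execute(sqlalchemy.text("SELECT 1")).scalar()
    return str(value)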

For more detailed examples on how to limit the number of connections, see Managing database connections.

API Quota Limits

Cloud Functions provides a mechanism that connects using the Cloud SQL Auth Proxy, which uses the Cloud SQL Admin API. API quota limits apply to the Cloud SQL Auth Proxy. The Cloud SQL Admin API quota used is approximately two times the number of Cloud SQL instances configured, multiplied by the total number of functions deployed; for example, 3 configured instances used by 10 deployed functions consume a quota of roughly 2 × 3 × 10 = 60. You can adjust the maximum number of concurrent invocations to change the expected API quota consumption. Cloud Functions also imposes rate limits on the number of API calls allowed per 100 seconds.

What's next