Connect from Cloud Run

This page contains information and examples for connecting to a Cloud SQL instance from a service running in Cloud Run.

For step-by-step instructions on running a Cloud Run sample web application connected to Cloud SQL, see the quickstart for connecting from Cloud Run.

Cloud SQL is a fully managed database service that helps you set up, maintain, manage, and administer your relational databases in the cloud.

Cloud Run is a managed compute platform that lets you run containers directly on top of Google Cloud infrastructure.

Set up a Cloud SQL instance

  1. Enable the Cloud SQL Admin API in the Google Cloud project that you are connecting from, if you haven't already done so:

    Enable the API

  2. Create a Cloud SQL for PostgreSQL instance; a sample gcloud command appears after this step. We recommend that you choose a Cloud SQL instance location in the same region as your Cloud Run service for better latency, to avoid some networking costs, and to reduce cross-region failure risks.

    By default, Cloud SQL assigns a public IP address to a new instance. You also have the option to assign a private IP address. For more information about the connectivity options for both, see the Connecting Overview page.
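
    For example, a minimal sketch of creating a PostgreSQL instance with gcloud (the version, region, and machine configuration shown are placeholders; adjust them for your workload):

    gcloud sql instances create INSTANCE_NAME \
      --database-version=POSTGRES_15 \
      --region=us-central1 \
      --cpu=1 \
      --memory=4GB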

Configure Cloud Run

The steps to configure Cloud Run depend on the type of IP address that you assigned to your Cloud SQL instance. If you route all egress traffic through Direct VPC egress or a Serverless VPC Access connector, use a private IP address. Compare the two network egress methods.

Public IP (default)

  • Make sure that the instance has a public IP address. You can verify this on the Overview page for your instance in the Google Cloud console. If you need to add one, see the Configuring public IP page for instructions.
  • Get the INSTANCE_CONNECTION_NAME for your instance. You can find this value on the Overview page for your instance in the Google Cloud console or by running the following gcloud sql instances describe command:
    gcloud sql instances describe INSTANCE_NAME
       
    Replace INSTANCE_NAME with the name of your Cloud SQL instance.
  • Get the CLOUD_RUN_SERVICE_ACCOUNT_NAME for your Cloud Run service. You can find this value on the IAM page of the project that's hosting the Cloud Run service in the Google Cloud console or by running the following gcloud run services describe command in the project that's hosting the Cloud Run service:
    gcloud run services describe CLOUD_RUN_SERVICE_NAME \
      --region CLOUD_RUN_SERVICE_REGION --format="value(spec.template.spec.serviceAccountName)"
       
    Replace the following variables:
    • CLOUD_RUN_SERVICE_NAME: the name of your Cloud Run service
    • CLOUD_RUN_SERVICE_REGION: the region of your Cloud Run service
  • Configure the service account for your Cloud Run service. Make sure that the service account has the appropriate Cloud SQL roles and permissions to connect to Cloud SQL; a sample command for granting the role appears after this list.
    • The service account for your service needs one of the following IAM roles:
      • Cloud SQL Client (preferred)
      • Cloud SQL Admin
      Or, you can manually assign the following IAM permissions:
      • cloudsql.instances.connect
      • cloudsql.instances.get
  • If you're adding a Cloud SQL connection to a new service, you need to have your service containerized and uploaded to the Container Registry or Artifact Registry. If you don't already have a container image, then see these instructions about building and deploying a container image.
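
As referenced above, a sample command for granting the Cloud SQL Client role to the Cloud Run service account (PROJECT_ID is a placeholder; substitute the values you looked up earlier):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:CLOUD_RUN_SERVICE_ACCOUNT_NAME" \
  --role="roles/cloudsql.client"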

Like any configuration change, setting a new configuration for the Cloud SQL connection leads to the creation of a new Cloud Run revision. Subsequent revisions will also automatically get this Cloud SQL connection unless you make explicit updates to change it.

Console

  1. Go to Cloud Run

  2. Start configuring the service. To add Cloud SQL connections to an existing service, do the following:

    1. From the Services list, click the name of the service you want.
    2. Click Edit & deploy new revision.
  3. Enable connecting to a Cloud SQL instance:
    1. Click Container, Variables & Secrets, Connections, Security to expand the revision settings.
    2. Click the Container tab.
    3. Scroll to Cloud SQL connections.
    4. Click Add connection.
    5. Click the Enable the Cloud SQL Admin API button if you haven't enabled the Cloud SQL Admin API yet.

    Add a Cloud SQL connection

    • If you're adding a connection to a Cloud SQL instance in your project, then select the Cloud SQL instance you want from the menu.
    • If you're using a Cloud SQL instance from another project, then select custom connection string in the menu and enter the full instance connection name in the format PROJECT-ID:REGION:INSTANCE-ID.
    • To delete a connection, hold your cursor to the right of the connection to display the Delete icon, and click it.
  4. Click Create or Deploy.

Command line

Before using any of the following commands, make the following replacements:

  • IMAGE with the image you're deploying
  • SERVICE_NAME with the name of your Cloud Run service
  • INSTANCE_CONNECTION_NAME with the instance connection name of your Cloud SQL instance, or a comma-delimited list of connection names (see the example after the commands below).

    If you're deploying a new container, use the following command:

    gcloud run deploy \
      --image=IMAGE \
      --add-cloudsql-instances=INSTANCE_CONNECTION_NAME

    If you're updating an existing service, use the following command:

    gcloud run services update SERVICE_NAME \
      --add-cloudsql-instances=INSTANCE_CONNECTION_NAME
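
    For example, to attach two instances in one update (the service and instance names here are hypothetical):

    gcloud run services update my-service \
      --add-cloudsql-instances=my-project:us-central1:db-primary,my-project:us-central1:db-replica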

Terraform

The following code creates a basic Cloud Run service with a connected Cloud SQL instance.

resource "google_cloud_run_v2_service" "default" {
  name     = "cloudrun-service"
  location = "us-central1"

  deletion_protection = false # set to "true" in production

  template {
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello:latest" # Image to deploy

      volume_mounts {
        name       = "cloudsql"
        mount_path = "/cloudsql"
      }
    }
    volumes {
      name = "cloudsql"
      cloud_sql_instance {
        instances = [google_sql_database_instance.default.connection_name]
      }
    }
  }
  client     = "terraform"
  depends_on = [google_project_service.secretmanager_api, google_project_service.cloudrun_api, google_project_service.sqladmin_api]
}

  1. Apply the changes by entering terraform apply.
  2. Verify the changes by checking the Cloud Run service, clicking the Revisions tab, and then the Connections tab.
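
Alternatively, you can inspect the deployed revision template from the command line. A sketch, assuming the connection is recorded in the revision annotations (as it is for services deployed through the v1 admin API):

gcloud run services describe cloudrun-service \
  --region us-central1 \
  --format="yaml(spec.template.metadata.annotations)"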

Private IP

If the authorizing service account belongs to a different project than the one containing the Cloud SQL instance, do the following:

  • In both projects, enable the Cloud SQL Admin API.
  • For the service account in the project that contains the Cloud SQL instance, add the IAM permissions.

Direct VPC egress and Serverless VPC Access connectors use private IP addresses to handle communication to your VPC network. To connect directly with private IP addresses using one of these egress methods, do the following:
  1. Make sure that the Cloud SQL instance created previously has a private IP address. To add an internal IP address, see Configure private IP.
  2. Configure your egress method to connect to the same VPC network as your Cloud SQL instance. Note the following conditions:
    • Direct VPC egress and Serverless VPC Access both support communication to VPC networks connected using Cloud VPN and VPC Network Peering.
    • Direct VPC egress and Serverless VPC Access don't support legacy networks.
    • Unless you're using Shared VPC, a connector must share the same project and region as the resource that uses it, although the connector can send traffic to resources in different regions.
  3. Connect using your instance's private IP address and port 5432.
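
For example, a sketch of the relevant deployment flags for each egress method (the network, subnet, connector, and service names are placeholders):

# Direct VPC egress
gcloud run deploy SERVICE_NAME \
  --image=IMAGE \
  --network=VPC_NETWORK \
  --subnet=SUBNET \
  --vpc-egress=private-ranges-only

# Serverless VPC Access connector
gcloud run deploy SERVICE_NAME \
  --image=IMAGE \
  --vpc-connector=CONNECTOR_NAME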

Connect to Cloud SQL

After you configure Cloud Run, you can connect to your Cloud SQL instance.

Public IP (default)

For public IP paths, Cloud Run provides encryption and connects using the Cloud SQL Auth Proxy.

Use Secret Manager

Google recommends that you use Secret Manager to store sensitive information such as SQL credentials. You can pass secrets as environment variables or mount them as a volume with Cloud Run.

After creating a secret in Secret Manager, update an existing service with the following command:

Command line

gcloud run services update SERVICE_NAME \
  --add-cloudsql-instances=INSTANCE_CONNECTION_NAME \
  --update-env-vars=INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME_SECRET \
  --update-secrets=DB_USER=DB_USER_SECRET:latest \
  --update-secrets=DB_PASS=DB_PASS_SECRET:latest \
  --update-secrets=DB_NAME=DB_NAME_SECRET:latest

Terraform

The following creates secret resources to securely hold the database user, password, and name values using google_secret_manager_secret and google_secret_manager_secret_version. Note that you must update the project compute service account to have access to each secret.


# Create dbuser secret
resource "google_secret_manager_secret" "dbuser" {
  secret_id = "dbusersecret"
  replication {
    auto {}
  }
  depends_on = [google_project_service.secretmanager_api]
}

# Attaches secret data for dbuser secret
resource "google_secret_manager_secret_version" "dbuser_data" {
  secret      = google_secret_manager_secret.dbuser.id
  secret_data = "secret-data" # Stores secret as a plain txt in state
}

# Update service account for dbuser secret
resource "google_secret_manager_secret_iam_member" "secretaccess_compute_dbuser" {
  secret_id = google_secret_manager_secret.dbuser.id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${data.google_project.project.number}-compute@developer.gserviceaccount.com" # Project's compute service account
}


# Create dbpass secret
resource "google_secret_manager_secret" "dbpass" {
  secret_id = "dbpasssecret"
  replication {
    auto {}
  }
  depends_on = [google_project_service.secretmanager_api]
}

# Attaches secret data for dbpass secret
resource "google_secret_manager_secret_version" "dbpass_data" {
  secret      = google_secret_manager_secret.dbpass.id
  secret_data = "secret-data" # Stores secret as a plain txt in state
}

# Update service account for dbpass secret
resource "google_secret_manager_secret_iam_member" "secretaccess_compute_dbpass" {
  secret_id = google_secret_manager_secret.dbpass.id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${data.google_project.project.number}-compute@developer.gserviceaccount.com" # Project's compute service account
}


# Create dbname secret
resource "google_secret_manager_secret" "dbname" {
  secret_id = "dbnamesecret"
  replication {
    auto {}
  }
  depends_on = [google_project_service.secretmanager_api]
}

# Attaches secret data for dbname secret
resource "google_secret_manager_secret_version" "dbname_data" {
  secret      = google_secret_manager_secret.dbname.id
  secret_data = "secret-data" # Stores secret as a plain txt in state
}

# Update service account for dbname secret
resource "google_secret_manager_secret_iam_member" "secretaccess_compute_dbname" {
  secret_id = google_secret_manager_secret.dbname.id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${data.google_project.project.number}-compute@developer.gserviceaccount.com" # Project's compute service account
}

Update the main Cloud Run resource to include the new secrets.

resource "google_cloud_run_v2_service" "default" {
  name     = "cloudrun-service"
  location = "us-central1"

  deletion_protection = false # set to "true" in production

  template {
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello:latest" # Image to deploy

      # Sets an environment variable for the instance connection name
      env {
        name  = "INSTANCE_CONNECTION_NAME"
        value = google_sql_database_instance.default.connection_name
      }
      # Sets a secret environment variable for database user secret
      env {
        name = "DB_USER"
        value_source {
          secret_key_ref {
            secret  = google_secret_manager_secret.dbuser.secret_id # secret name
            version = "latest"                                      # secret version number or 'latest'
          }
        }
      }
      # Sets a secret environment variable for database password secret
      env {
        name = "DB_PASS"
        value_source {
          secret_key_ref {
            secret  = google_secret_manager_secret.dbpass.secret_id # secret name
            version = "latest"                                      # secret version number or 'latest'
          }
        }
      }
      # Sets a secret environment variable for database name secret
      env {
        name = "DB_NAME"
        value_source {
          secret_key_ref {
            secret  = google_secret_manager_secret.dbname.secret_id # secret name
            version = "latest"                                      # secret version number or 'latest'
          }
        }
      }

      volume_mounts {
        name       = "cloudsql"
        mount_path = "/cloudsql"
      }
    }
    volumes {
      name = "cloudsql"
      cloud_sql_instance {
        instances = [google_sql_database_instance.default.connection_name]
      }
    }
  }
  client     = "terraform"
  depends_on = [google_project_service.secretmanager_api, google_project_service.cloudrun_api, google_project_service.sqladmin_api]
}

Apply the changes by entering terraform apply.

The example command uses the secret version latest; however, Google recommends pinning the secret to a specific version, SECRET_NAME:v1.
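
Whichever method you use to supply the values, the service reads them at runtime and connects over the Unix socket that Cloud Run mounts at /cloudsql/INSTANCE_CONNECTION_NAME. A minimal Python sketch, assuming the pg8000 driver and SQLAlchemy are installed:

import os

import sqlalchemy


def connect_unix_socket() -> sqlalchemy.engine.base.Engine:
    """Initializes a Unix socket connection pool for a Cloud SQL instance of Postgres."""
    db_user = os.environ["DB_USER"]  # populated from DB_USER_SECRET
    db_pass = os.environ["DB_PASS"]  # populated from DB_PASS_SECRET
    db_name = os.environ["DB_NAME"]  # populated from DB_NAME_SECRET
    instance_connection_name = os.environ["INSTANCE_CONNECTION_NAME"]
    unix_socket_path = f"/cloudsql/{instance_connection_name}"

    return sqlalchemy.create_engine(
        # Equivalent URL:
        # postgresql+pg8000://<user>:<pass>@/<db>?unix_sock=<socket_path>/.s.PGSQL.5432
        sqlalchemy.engine.url.URL.create(
            drivername="postgresql+pg8000",
            username=db_user,
            password=db_pass,
            database=db_name,
            query={"unix_sock": f"{unix_socket_path}/.s.PGSQL.5432"},
        ),
    )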

Private IP

For private IP paths, your application connects directly to your instance through a VPC network. This method uses TCP to connect directly to the Cloud SQL instance without using the Cloud SQL Auth Proxy.

Connect with TCP

Connect using the private IP address of your Cloud SQL instance as the host and port 5432.

Python

To see this snippet in the context of a web application, view the README on GitHub.

import os
import ssl

import sqlalchemy


def connect_tcp_socket() -> sqlalchemy.engine.base.Engine:
    """Initializes a TCP connection pool for a Cloud SQL instance of Postgres."""
    # Note: Saving credentials in environment variables is convenient, but not
    # secure - consider a more secure solution such as
    # Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
    # keep secrets safe.
    db_host = os.environ[
        "INSTANCE_HOST"
    ]  # e.g. '127.0.0.1' ('172.17.0.1' if deployed to GAE Flex)
    db_user = os.environ["DB_USER"]  # e.g. 'my-db-user'
    db_pass = os.environ["DB_PASS"]  # e.g. 'my-db-password'
    db_name = os.environ["DB_NAME"]  # e.g. 'my-database'
    db_port = os.environ["DB_PORT"]  # e.g. 5432

    pool = sqlalchemy.create_engine(
        # Equivalent URL:
        # postgresql+pg8000://db_user:db_pass@db_host:db_port/db_name
        sqlalchemy.engine.url.URL.create(
            drivername="postgresql+pg8000",
            username=db_user,
            password=db_pass,
            host=db_host,
            port=db_port,
            database=db_name,
        ),
        # ...
    )
    return pool
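
A brief usage sketch of the pool returned above (the query is illustrative only):

pool = connect_tcp_socket()
with pool.connect() as conn:
    # Run a trivial query to confirm connectivity.
    row = conn.execute(sqlalchemy.text("SELECT NOW()")).fetchone()
    print(row[0])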

Java

To see this snippet in the context of a web application, view the README on GitHub.

Note:

  • CLOUD_SQL_CONNECTION_NAME should be represented as <MY-PROJECT>:<INSTANCE-REGION>:<INSTANCE-NAME>
  • Using the argument ipTypes=PRIVATE will force the SocketFactory to connect with an instance's associated private IP
  • See the JDBC socket factory version requirements for the pom.xml file here.


import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

public class TcpConnectionPoolFactory extends ConnectionPoolFactory {

  // Note: Saving credentials in environment variables is convenient, but not
  // secure - consider a more secure solution such as
  // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  // keep secrets safe.
  private static final String DB_USER = System.getenv("DB_USER");
  private static final String DB_PASS = System.getenv("DB_PASS");
  private static final String DB_NAME = System.getenv("DB_NAME");

  private static final String INSTANCE_HOST = System.getenv("INSTANCE_HOST");
  private static final String DB_PORT = System.getenv("DB_PORT");


  public static DataSource createConnectionPool() {
    // The configuration object specifies behaviors for the connection pool.
    HikariConfig config = new HikariConfig();

    // The following URL is equivalent to setting the config options below:
    // jdbc:postgresql://INSTANCE_HOST:DB_PORT/DB_NAME?user=DB_USER&password=DB_PASS
    // See the link below for more info on building a JDBC URL for the Cloud SQL JDBC Socket Factory
    // https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory#creating-the-jdbc-url

    // Configure which instance and what database user to connect with.
    config.setJdbcUrl(String.format("jdbc:postgresql://%s:%s/%s", INSTANCE_HOST, DB_PORT, DB_NAME));
    config.setUsername(DB_USER); // e.g. "root", "postgres"
    config.setPassword(DB_PASS); // e.g. "my-password"


    // ... Specify additional connection properties here.
    // ...

    // Initialize the connection pool using the configuration object.
    return new HikariDataSource(config);
  }
}

Node.js

To see this snippet in the context of a web application, view the README on GitHub.

const Knex = require('knex');
const fs = require('fs');

// createTcpPool initializes a TCP connection pool for a Cloud SQL
// instance of Postgres.
const createTcpPool = async config => {
  // Note: Saving credentials in environment variables is convenient, but not
  // secure - consider a more secure solution such as
  // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  // keep secrets safe.
  const dbConfig = {
    client: 'pg',
    connection: {
      host: process.env.INSTANCE_HOST, // e.g. '127.0.0.1'
      port: process.env.DB_PORT, // e.g. '5432'
      user: process.env.DB_USER, // e.g. 'my-user'
      password: process.env.DB_PASS, // e.g. 'my-user-password'
      database: process.env.DB_NAME, // e.g. 'my-database'
    },
    // ... Specify additional properties here.
    ...config,
  };
  // Establish a connection to the database.
  return Knex(dbConfig);
};

Go

To see this snippet in the context of a web application, view the README on GitHub.

package cloudsql

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	// Note: If connecting using the App Engine Flex Go runtime, use
	// "github.com/jackc/pgx/stdlib" instead, since v5 requires
	// Go modules which are not supported by App Engine Flex.
	_ "github.com/jackc/pgx/v5/stdlib"
)

// connectTCPSocket initializes a TCP connection pool for a Cloud SQL
// instance of Postgres.
func connectTCPSocket() (*sql.DB, error) {
	mustGetenv := func(k string) string {
		v := os.Getenv(k)
		if v == "" {
			log.Fatalf("Fatal Error in connect_tcp.go: %s environment variable not set.", k)
		}
		return v
	}
	// Note: Saving credentials in environment variables is convenient, but not
	// secure - consider a more secure solution such as
	// Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
	// keep secrets safe.
	var (
		dbUser    = mustGetenv("DB_USER")       // e.g. 'my-db-user'
		dbPwd     = mustGetenv("DB_PASS")       // e.g. 'my-db-password'
		dbTCPHost = mustGetenv("INSTANCE_HOST") // e.g. '127.0.0.1' ('172.17.0.1' if deployed to GAE Flex)
		dbPort    = mustGetenv("DB_PORT")       // e.g. '5432'
		dbName    = mustGetenv("DB_NAME")       // e.g. 'my-database'
	)

	dbURI := fmt.Sprintf("host=%s user=%s password=%s port=%s database=%s",
		dbTCPHost, dbUser, dbPwd, dbPort, dbName)


	// dbPool is the pool of database connections.
	dbPool, err := sql.Open("pgx", dbURI)
	if err != nil {
		return nil, fmt.Errorf("sql.Open: %w", err)
	}

	// ...

	return dbPool, nil
}

C#

To see this snippet in the context of a web application, view the README on GitHub.

using Npgsql;
using System;

namespace CloudSql
{
    public class PostgreSqlTcp
    {
        public static NpgsqlConnectionStringBuilder NewPostgreSqlTCPConnectionString()
        {
            // Equivalent connection string:
            // "<;Uid=DB>_USER<;Pwd=DB>_PASS;<Host=INSTANCE>_HOST;Data<base=DB>_NAME;"
            var connectionString = new NpgsqlConnectionStringBuilder()
            {
                // Note: Saving credentials in environment variables is convenient, but not
                // secure - consider a more secure solution such as
                // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
                // keep secrets safe.
                Host = Environment.GetEnvironmentVariable("INSTANCE_HOST"),     // e.g. '127.0.0.1'
                // Set Host to 'cloudsql' when deploying to App Engine Flexible environment
                Username = Environment.GetEnvironmentVariable("DB_USER"), // e.g. 'my-db-user'
                Password = Environment.GetEnvironmentVariable("DB_PASS"), // e.g. 'my-db-password'
                Database = Environment.GetEnvironmentVariable("DB_NAME"), // e.g. 'my-database'

                // The Cloud SQL proxy provides encryption between the proxy and instance.
                SslMode = SslMode.Disable,
            };
            connectionString.Pooling = true;
            // Specify additional properties here.
            return connectionString;
        }
    }
}

Ruby

To see this snippet in the context of a web application, view the README on GitHub.

tcp: &tcp
  adapter: postgresql
  # Configure additional properties here.
  # Note: Saving credentials in environment variables is convenient, but not
  # secure - consider a more secure solution such as
  # Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
  # keep secrets safe.
  username: <%= ENV["DB_USER"] %>  # e.g. "my-database-user"
  password: <%= ENV["DB_PASS"] %> # e.g. "my-database-password"
  database: <%= ENV.fetch("DB_NAME") { "vote_development" } %>
  host: <%= ENV.fetch("INSTANCE_HOST") { "127.0.0.1" } %> # '172.17.0.1' if deployed to GAE Flex
  port: <%= ENV.fetch("DB_PORT") { 5432 } %>

PHP

To see this snippet in the context of a web application, view the README on GitHub.

namespace Google\Cloud\Samples\CloudSQL\Postgres;

use PDO;
use PDOException;
use RuntimeException;
use TypeError;

class DatabaseTcp
{
    public static function initTcpDatabaseConnection(): PDO
    {
        try {
            // Note: Saving credentials in environment variables is convenient, but not
            // secure - consider a more secure solution such as
            // Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
            // keep secrets safe.
            $username = getenv('DB_USER'); // e.g. 'your_db_user'
            $password = getenv('DB_PASS'); // e.g. 'your_db_password'
            $dbName = getenv('DB_NAME'); // e.g. 'your_db_name'
            $instanceHost = getenv('INSTANCE_HOST'); // e.g. '127.0.0.1' ('172.17.0.1' for GAE Flex)

            // Connect using TCP
            $dsn = sprintf('pgsql:dbname=%s;host=%s', $dbName, $instanceHost);

            // Connect to the database
            $conn = new PDO(
                $dsn,
                $username,
                $password,
                # ...
            );
        } catch (TypeError $e) {
            throw new RuntimeException(
                sprintf(
                    'Invalid or missing configuration! Make sure you have set ' .
                        '$username, $password, $dbName, and $instanceHost (for TCP mode). ' .
                        'The PHP error was %s',
                    $e->getMessage()
                ),
                $e->getCode(),
                $e
            );
        } catch (PDOException $e) {
            throw new RuntimeException(
                sprintf(
                    'Could not connect to the Cloud SQL Database. Check that ' .
                        'your username and password are correct, that the Cloud SQL ' .
                        'proxy is running, and that the database exists and is ready ' .
                        'for use. For more assistance, refer to %s. The PDO error was %s',
                    'https://cloud.google.com/sql/docs/postgres/connect-external-app',
                    $e->getMessage()
                ),
                $e->getCode(),
                $e
            );
        }

        return $conn;
    }
}

Best practices and other information

You can use the Cloud SQL Auth Proxy when testing your application locally. See the quickstart for using the Cloud SQL Auth Proxy for detailed instructions.

You can also test using the Cloud SQL Auth Proxy from a Docker container.

Connection Pools

Connections to underlying databases may be dropped, either by the database server itself, or by the platform infrastructure. We recommend using a client library that supports connection pools that automatically reconnect broken client connections. For more detailed examples on how to use connection pools, see the Managing database connections page.
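
As a sketch of what such settings look like with SQLAlchemy (the values are illustrative, and db_url stands in for the connection URL built in the samples above), pool_pre_ping and pool_recycle let the pool detect and replace connections that the server or platform has dropped:

pool = sqlalchemy.create_engine(
    db_url,              # placeholder for the URL built earlier
    pool_pre_ping=True,  # test a connection before handing it out; replace it if stale
    pool_recycle=1800,   # proactively recycle connections older than 30 minutes
)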

Connection Limits

Both the MySQL and PostgreSQL editions of Cloud SQL impose a maximum limit on concurrent connections, and these limits may vary depending on the database engine chosen (see the Cloud SQL Quotas and Limits page).

Cloud Run instances are limited to 100 connections to a Cloud SQL database. This limit applies per instance of a Cloud Run service or job; as the service or job scales, the total number of connections per deployment can grow.

You can limit the maximum number of connections used per instance by using a connection pool. For more detailed examples on how to limit the number of connections, see the Managing database connections page.
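
For example, with SQLAlchemy you can cap the connections each Cloud Run instance opens (the values are illustrative; size the pool for your workload and the instance's connection limit):

pool = sqlalchemy.create_engine(
    db_url,           # placeholder for the connection URL built earlier
    pool_size=5,      # keep at most 5 persistent connections per instance
    max_overflow=2,   # allow up to 2 short-lived overflow connections
    pool_timeout=30,  # seconds to wait for a free connection before raising an error
)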

API Quota Limits

Cloud Run provides a mechanism that connects using the Cloud SQL Auth Proxy, which uses the Cloud SQL Admin API. API quota limits apply to the Cloud SQL Auth Proxy. The Cloud SQL Admin API quota used is approximately two times the number of Cloud SQL instances configured, multiplied by the number of Cloud Run instances of a particular service deployed at any one time. You can cap or increase the number of Cloud Run instances to modify the expected API quota consumed.
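
For example, a service configured with two Cloud SQL instances that scales to 10 Cloud Run instances would be expected to consume roughly 2 × 2 × 10 = 40 units of Cloud SQL Admin API quota.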

What's next