Example: Private connectivity for a Cloud SQL instance

This page uses an example to explain how to use Private Service Connect (PSC) to establish a connection between the Integration Connectors runtime and a Cloud SQL instance that is enabled for Private Service Access (PSA). Your Cloud SQL instance can be Cloud SQL for MySQL, Cloud SQL for PostgreSQL, or Cloud SQL for SQL Server.

Considerations

When you create a PSC service attachment, consider the following key points:

  • The PSC service attachment and the load balancer are created in different subnets within the same VPC. The service attachment is always created in a NAT subnet.
  • SOCKS5 proxy servers must bind to 0.0.0.0:<port> because this is required to accept incoming traffic from both the load balancer and the health check probes. For more information, see Health check.
  • Traffic from the load balancer and the health check probes must be sent to the same port.
  • Configure the firewall rules to facilitate the traffic flow.

    Ingress rules

    • Traffic from the PSC service attachment's subnet should reach the ILB's subnet.
    • Within the ILB's subnet, ILB should be able to send traffic to the SOCKS5 proxy servers.
    • The health check probe should be able to access the SOCKS5 proxy servers. Google Cloud health check probes come from fixed IP ranges (35.191.0.0/16 and 130.211.0.0/22), so allow these ranges to send traffic to the SOCKS5 proxy servers.

    Egress rules

    Egress traffic is enabled by default in a Google Cloud project, unless specific deny rules are configured.

  • All your Google Cloud components such as the PSC service attachment and the load balancer should be in the same region.
  • Your backend system should not be open to the public network, as this can be a security concern. However, ensure that your SOCKS5 proxy servers accept traffic in the following scenarios:
    • Pass-through load balancers (L4 TCP/UDP ILB): Requests from the PSC service attachment's NAT IPs should be able to reach your SOCKS5 proxy servers. These NAT IPs are auto-generated. Therefore, you must allow the entire NAT subnet's IP range in which your service attachment resides. For more information, see Private Service Connect subnets.
    • Proxy-based/HTTP(s) load balancers (L4 proxy ILB, L7 ILB): All new requests originate from the load balancer. Therefore, your SOCKS5 proxy servers should accept requests from the proxy subnet of your VPC network. For more information, see Proxy-only subnets for Envoy-based load balancers.

Set up PSC for a Cloud SQL instance

Integration Connectors uses Cloud SQL Auth proxy to connect to a Cloud SQL instance. The Cloud SQL Auth proxy supports chaining through a SOCKS5 proxy, which lets you forward encrypted traffic from the Cloud SQL Auth proxy to the destination Cloud SQL instance. Hence, you need SOCKS5 proxy servers to connect to a private Cloud SQL instance.
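Conceptually, the chaining works like the following sketch. This is only an illustration with hypothetical values; Integration Connectors runs the Cloud SQL Auth Proxy for you, so you don't execute these commands yourself. The sketch assumes the Auth Proxy binary honors the ALL_PROXY environment variable for SOCKS5 chaining:

```shell
# Hypothetical sketch: route the Cloud SQL Auth Proxy's traffic through a
# SOCKS5 server. ALL_PROXY makes the Auth Proxy tunnel via the SOCKS5 proxy.
export ALL_PROXY=socks5://SOCKS5_SERVER_IP:1080

# --private-ip connects to the instance's private (PSA) address.
# my-project:us-central1:my-instance is a placeholder connection name.
./cloud-sql-proxy --private-ip my-project:us-central1:my-instance
```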


The following diagram shows how your Google Cloud project will look after the PSC service attachment is configured for a sample Cloud SQL instance setup.

sample illustration

In this example, SOCKS5 proxy servers are exposed through a service attachment so that PSC can securely connect to a Cloud SQL instance. The SOCKS5 proxy servers have access to the Cloud SQL instance through Private Service Access. They run in an unmanaged Compute Engine instance group, and you can decide the number of proxy instances based on the expected ingress traffic.

How traffic flows in this example

  1. Integration Connectors sends a request to a service attachment.
  2. The service attachment forwards the request to an L4 ILB.
  3. The L4 ILB sends a request to the SOCKS5 proxy servers.

    The ILB has the forwarding rules and performs port forwarding. By default, a SOCKS5 proxy listens on port 1080. However, if the SOCKS5 proxy servers listen on a different port, you must open that port for listening on the ILB as well.

  4. The SOCKS5 proxy servers forward the request to the Cloud SQL instance.

Before you begin

Before creating a PSC service attachment for the sample scenario, do the following tasks:

  • Install the gcloud CLI.
  • Enable the Compute Engine API and the Service Networking API for your Google Cloud project.
  • To make your CLI commands less verbose, you can set the values for your PROJECT_ID, REGION, and ZONE using the following commands:
    gcloud config set project PROJECT_ID
    gcloud config set compute/region REGION
    gcloud config set compute/zone ZONE
  • For the commands in this tutorial, replace BACKEND_SERVER_PORT with 1080, which is the default port on which a SOCKS5 proxy server runs.
  • We recommend creating a new VPC network for this sample scenario. After testing the scenario, you can delete the VPC network and other resources.
  • You must have at least one existing connection of any type. An existing connection lets you fetch the project ID of the service directory from the Integration Connectors runtime; this project ID is required for creating the PSC service attachment.

Create a PSC service attachment

To create a PSC service attachment for the sample scenario, do the following tasks:

  1. Create a VPC network and the required subnets.
    1. Create a VPC network.
      gcloud compute networks create VPC_NETWORK \
      --project=PROJECT_ID --subnet-mode=custom --mtu=1460 \
      --bgp-routing-mode=regional
    2. Add Subnet-1.
      gcloud compute networks subnets create SUBNET_NAME_1 \
      --network=VPC_NETWORK --range=SUBNET_RANGE_1 \
      --purpose=PRIVATE_SERVICE_CONNECT

      This command creates Subnet-1 as a NAT subnet that is used exclusively to host the PSC service attachment. You can't host any other service in this NAT subnet.

    3. Add Subnet-2.
      gcloud compute networks subnets create SUBNET_NAME_2 \
      --network=VPC_NETWORK --range=SUBNET_RANGE_2
  2. Create a private Cloud SQL instance.
    1. Configure private service access.
      1. Allocate an IP address range.
        gcloud compute addresses create google-managed-services-VPC_NETWORK \
        --global --purpose=VPC_PEERING --prefix-length=16 \
        --network=projects/PROJECT_ID/global/networks/VPC_NETWORK
        
      2. Create a private connection.
        gcloud services vpc-peerings connect \
        --service=servicenetworking.googleapis.com \
        --ranges=google-managed-services-VPC_NETWORK \
        --network=VPC_NETWORK \
        --project=PROJECT_ID
        
    2. Create a Cloud SQL instance with a private IP.
      gcloud beta sql instances create \
      INSTANCE_NAME \
      --database-version=DATABASE_VERSION \
      --cpu=NUMBER_OF_CPUs \
      --memory=MEMORY \
      --zone=ZONE \
      --root-password=ROOT_PASSWORD \
      --network=projects/PROJECT_ID/global/networks/VPC_NETWORK \
      --no-assign-ip \
      --allocated-ip-range-name=google-managed-services-VPC_NETWORK
      

      Specify the DATABASE_VERSION based on the type of instance you want to create. You can create an instance of type MySQL, PostgreSQL, or SQL Server. For the list of all the supported database versions, see SQL Database Version.

      This command creates a default user for your Cloud SQL instance. The following default users are created for the various Cloud SQL instances:

      • Cloud SQL for MySQL - root
      • Cloud SQL for SQL Server - sqlserver
      • Cloud SQL for PostgreSQL - postgres
    3. (Optional) If you don't want to use the default user, create a new user for the newly created Cloud SQL instance.
      gcloud sql users create USER --host=% --instance=INSTANCE_NAME \
      --password=PASSWORD
      

      Ensure the user has all the required permissions to access the database that you will create in the next step.

    4. Create a database in the newly created Cloud SQL instance.
      gcloud sql databases create DATABASE_NAME \
      --instance=INSTANCE_NAME
      
  3. Configure Cloud NAT.
    1. Create a simple router.
      gcloud compute routers create NAT_ROUTER_NAME \
          --network=VPC_NETWORK
      
    2. Configure the network address translation.
      gcloud compute routers nats create NAT_GATEWAY_NAME \
          --router=NAT_ROUTER_NAME \
          --auto-allocate-nat-external-ips \
          --nat-all-subnet-ip-ranges
      
  4. Create Compute Engine VM instances for running SOCKS5 proxy servers.
    1. Create proxy instance 1.
      gcloud compute instances create PROXY_INSTANCE_1 \
      --project=PROJECT_ID \
      --network-interface=network-tier=PREMIUM,subnet=SUBNET_NAME_2,no-address
      

    You can create as many VM instances as your expected ingress traffic requires.

  5. Create a firewall rule for allowing SSH to your VM instances.
    gcloud compute firewall-rules create FIREWALL_RULE_NAME_SSH \
    --direction=INGRESS --priority=1000 --network=VPC_NETWORK --allow=tcp:22
    
  6. Install SOCKS5 proxy.

    Detailed steps to install and configure a SOCKS5 proxy server are outside the scope of this document, and you can install any SOCKS5 proxy of your choice. The following steps show how to install and configure the Dante SOCKS5 proxy server.

    1. SSH to your VM instance.
      gcloud compute ssh \
          --tunnel-through-iap \
          PROXY_INSTANCE_1
      
    2. Install the Dante SOCKS5 proxy server.
      sudo apt update
      sudo apt install dante-server
    3. Check the server's network interfaces to identify the external interface name (for example, ens4).
      sudo ip a
    4. Create backup of the Dante configuration.
      sudo mv /etc/danted.conf /etc/danted.conf.bak
    5. Create a new Dante configuration file.
      sudo nano /etc/danted.conf
    6. Copy the following configuration to the config file:
      logoutput: /var/log/socks.log
      # Bind the server to the 0.0.0.0 IP address to allow traffic
      # from the load balancer and the health check probes.
      internal: 0.0.0.0 port = 1080
      external: ens4
      clientmethod: none
      socksmethod: none
      user.privileged: root
      user.notprivileged: nobody
      client pass {
              from: 0.0.0.0/0 to: 0.0.0.0/0
              log: error connect disconnect
      }
      client block {
              from: 0.0.0.0/0 to: 0.0.0.0/0
              log: connect error
      }
      socks pass {
              from: 0.0.0.0/0 to: 0.0.0.0/0
              log: error connect disconnect
      }
      socks block {
              from: 0.0.0.0/0 to: 0.0.0.0/0
              log: connect error
      }
    7. Restart the Dante server and check the status.
      sudo systemctl restart danted
      sudo systemctl status danted
    8. Exit from the VM instance.
      exit
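    Before moving on, you can optionally verify that Dante relays TCP traffic by running curl through the proxy from the VM (or from another VM in the same subnet). The target URL here is only an illustrative example; egress from the VM works because Cloud NAT was configured earlier:

    ```shell
    # Route a request through the local SOCKS5 proxy; a 200 response code
    # indicates that the proxy is forwarding traffic correctly.
    curl --socks5 localhost:1080 -s -o /dev/null -w '%{http_code}\n' http://example.com
    ```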
  7. Set up an unmanaged instance group.
    1. Create an unmanaged instance group.
      gcloud compute instance-groups unmanaged create INSTANCE_GROUP_NAME
    2. Add the VM instances created in step 4 to the group.
      gcloud compute instance-groups unmanaged add-instances INSTANCE_GROUP_NAME \
      --instances=PROXY_INSTANCE_1
  8. Create a health check probe and allow the traffic from the probe.
    1. Create the health check probe.
      gcloud compute health-checks create tcp HEALTH_CHECK_NAME \
      --port BACKEND_SERVER_PORT --region=REGION

      In this command, set BACKEND_SERVER_PORT to 1080, which is the default port on which the SOCKS5 proxy servers run.

    2. Create a firewall rule to allow traffic from the probe.
      gcloud compute firewall-rules create FIREWALL_RULE_NAME_HEALTHCHECK \
      --direction=INGRESS --priority=1000 --network=VPC_NETWORK --allow=tcp:BACKEND_SERVER_PORT \
      --source-ranges=35.191.0.0/16,130.211.0.0/22
  9. Create an L4 internal load balancer and allow traffic from the load balancer.
    1. Create a backend service.
      gcloud compute backend-services create BACKEND_SERVICE \
      --load-balancing-scheme=internal --protocol=tcp --health-checks=HEALTH_CHECK_NAME \
      --health-checks-region=REGION 
    2. Add instance group to the backend service.
      gcloud compute backend-services add-backend BACKEND_SERVICE \
      --instance-group=INSTANCE_GROUP_NAME \
      --instance-group-zone=ZONE
    3. Create a forwarding rule.
      gcloud compute forwarding-rules create FORWARDING_RULE_NAME \
      --load-balancing-scheme=internal --network=VPC_NETWORK --subnet=SUBNET_NAME_2 \
      --ip-protocol=TCP --ports=BACKEND_SERVER_PORT --backend-service=BACKEND_SERVICE \
      --backend-service-region=REGION
    4. Create a firewall rule to allow internal traffic from the load balancer to the instance group.
      gcloud compute firewall-rules create FIREWALL_RULE_NAME_INTERNAL \
      --direction=INGRESS --priority=1000 --network=VPC_NETWORK \
      --action=ALLOW --rules=all --source-ranges=SUBNET_RANGE_2
  10. Create the PSC service attachment.
    1. Create a firewall rule to allow traffic from the PSC service attachment to the internal load balancer created in the previous step.
      gcloud compute firewall-rules create FIREWALL_RULE_NAME_SA \
      --direction=INGRESS --priority=1000 --network=VPC_NETWORK \
      --allow=tcp:BACKEND_SERVER_PORT --source-ranges=SUBNET_RANGE_1
    2. Create the service attachment with explicit approval.
      gcloud compute service-attachments create SERVICE_ATTACHMENT_NAME \
      --producer-forwarding-rule=FORWARDING_RULE_NAME  \
      --connection-preference=ACCEPT_MANUAL \
      --consumer-accept-list=SERVICE_DIRECTORY_PROJECT_ID=LIMIT \
      --nat-subnets=SUBNET_NAME_1

      In this command, LIMIT is the connection limit for the project, that is, the number of consumer Private Service Connect endpoints that can connect to this service. To understand how to get the SERVICE_DIRECTORY_PROJECT_ID, see Get the project ID of the service directory.
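      To confirm that the service attachment was created, and later to see which consumer endpoints have connected to it, you can describe the attachment. This is an optional verification step (the output fields, such as connectedEndpoints, are as assumed here):

      ```shell
      # Inspect the service attachment; the connectedEndpoints field lists
      # consumer endpoints and their connection status (for example, ACCEPTED).
      gcloud compute service-attachments describe SERVICE_ATTACHMENT_NAME \
          --region=REGION
      ```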

  11. Create an endpoint attachment.

    You can think of the endpoint attachment as an interface to the PSC service attachment. You can't use the PSC service attachment directly when configuring private connectivity; it can be accessed only through an endpoint attachment. You can create the endpoint attachment as either an IP address or a hostname. After creating the endpoint attachment, use it when you configure a connector for private connectivity. For more information, see Create an endpoint attachment.

  12. Verify the PSC setup.

    You can verify private connectivity by creating a Cloud SQL connection that uses the SOCKS5 proxy setup shown in this tutorial. For detailed steps on creating a connection, see the specific connector documentation (Cloud SQL for MySQL, Cloud SQL for PostgreSQL, or Cloud SQL for SQL Server). When creating the connection, in the Destinations section (step 5), set the Destination type to Host address, and enter the IP address or hostname of the endpoint attachment as the SOCKS5 proxy server details. Set the port to 1080 unless you configured a different port for the SOCKS5 proxy server. If the connection is created successfully, its status shows as Active on the Connections page in the Cloud console.

Get the project ID of the service directory

As a best practice, you can create the PSC service attachment such that it accepts requests only from the specified Google Cloud projects. However, to do this, you need the project ID of the service directory associated with your Google Cloud project. To get the project ID of the service directory, you can use the List Connections API as shown in the following example.

Syntax

curl -X GET \
    -H "authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://connectors.googleapis.com/v1/projects/CONNECTORS_PROJECT_ID/locations/-/connections"

Replace the following:

  • CONNECTORS_PROJECT_ID: The ID of your Google Cloud project where you created your connection.

Example

This example gets the project ID of the service directory for the connectors-test Google Cloud project.

curl -X GET \
    -H "authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://connectors.googleapis.com/v1/projects/connectors-test/locations/-/connections"

Running this command on the terminal displays an output similar to the following:

.....
{
  "connections": [
    {
      "name": "projects/connectors-test/locations/asia-northeast1/connections/big-query-iam-invalid-sa",
      "createTime": "2022-10-07T09:02:31.905048520Z",
      "updateTime": "2022-10-07T09:22:39.993778690Z",
      "connectorVersion": "projects/connectors-test/locations/global/providers/gcp/connectors/bigquery/versions/1",
      "status": {
        "state": "ACTIVE"
      },
      "configVariables": [
        {
          "key": "project_id",
          "stringValue": "connectors-test"
        },
        {
          "key": "dataset_id",
          "stringValue": "testDataset"
        }
      ],
      "authConfig": {},
      "serviceAccount": "564332356444-compute@developer.gserviceaccount.com",
      "serviceDirectory": "projects/abcdefghijk-tp/locations/asia-northeast1/namespaces/connectors/services/runtime",
      "nodeConfig": {
        "minNodeCount": 2,
        "maxNodeCount": 50
      }
    },
....

In the sample output, for the connectors-test Google Cloud project, the project ID of the service directory is abcdefghijk-tp.
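If you need to extract the project ID from the response programmatically, a simple shell filter works. The sample value below is taken from the output above; in practice, you would pipe the curl output through the same filter:

```shell
# The "serviceDirectory" field has the form
# projects/PROJECT_ID/locations/.../namespaces/.../services/runtime.
# Extract the PROJECT_ID segment with sed (jq would also work if installed).
response='"serviceDirectory": "projects/abcdefghijk-tp/locations/asia-northeast1/namespaces/connectors/services/runtime",'
sd_project_id=$(printf '%s' "$response" | sed -n 's#.*"projects/\([^/]*\)/.*#\1#p')
echo "$sd_project_id"   # prints abcdefghijk-tp
```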