# Connect to a cluster from outside its VPC

Last updated 2025-09-04 (UTC).

This page examines different ways to connect to an AlloyDB for PostgreSQL cluster from outside its configured Virtual Private Cloud (VPC). It assumes that you have already [created an AlloyDB cluster](/alloydb/docs/cluster-create).

About external connections
--------------------------

Your AlloyDB cluster comprises a number of nodes within a Google Cloud VPC.
When you create a cluster, you also [configure private services access](/alloydb/docs/configure-connectivity#about_network_connectivity) between one of your VPCs and the Google-managed VPC containing your new cluster. This peered connection lets you use private IP addresses to access resources on the cluster's VPC as if they are part of your own VPC.

Situations exist where your application must connect to your cluster from outside this connected VPC:

- Your application runs elsewhere within the Google Cloud ecosystem, outside of the VPC that you connected to your cluster through private services access.

- Your application runs on a VPC that exists outside of Google's network.

- Your application runs "on-premises", on a machine located somewhere else on the public internet.

In all of these cases, you must set up an additional service to enable this kind of external connection to your AlloyDB cluster.

Summary of external-connection solutions
----------------------------------------

We recommend two general solutions for making external connections, depending upon your needs:

- For project development or prototyping, or for a relatively low-cost production environment, [set up an intermediary virtual machine (VM)](#vm), also known as a *bastion*, within your VPC.
  A variety of methods exist to use this intermediary VM as a secure connection between an external application environment and your AlloyDB cluster.

- For production environments that require high availability, consider [establishing a permanent connection between the VPC and your application](#vpn) through either Cloud VPN or Cloud Interconnect.

The next several sections describe these external-connection solutions in detail.

Connect through an intermediary VM
----------------------------------

To establish a connection to an AlloyDB cluster from outside its VPC using open-source tools and a minimum of additional resources, run a proxy service on [an intermediary VM set up within that VPC](/alloydb/docs/connect-psql#create-vm). You can set up a new VM for this purpose, or use a VM already running within your AlloyDB cluster's VPC.

As a self-managed solution, using an intermediary VM generally costs less and has a faster set-up time than [using a Network Connectivity product](#vpn). It also has downsides: the connection's availability, security, and data throughput all become dependent on the intermediary VM, which you must maintain as part of your project.

### Connect through IAP

Using [Identity-Aware Proxy (IAP)](/iap/docs/concepts-overview), you can securely connect to your cluster without the need to expose the intermediary VM's public IP address. You use a combination of firewall rules and Identity and Access Management (IAM) to limit access through this route. This makes IAP a good solution for non-production uses like development and prototyping.

To set up IAP access to your cluster, follow these steps:

1. [Install Google Cloud CLI](/sdk/docs/install) on your external client.

2. [Prepare your project for IAP TCP forwarding](/iap/docs/using-tcp-forwarding#preparing_your_project_for_tcp_forwarding).

   When defining the new firewall rule, allow ingress TCP traffic to port `22` (SSH).
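   For example, a rule like the following permits SSH ingress only from `35.235.240.0/20`, the address range that IAP uses for TCP forwarding. The rule name and network here are illustrative assumptions, not values from this page:

   ```shell
   # Illustrative sketch: allow IAP TCP forwarding to reach SSH on VMs
   # in the "default" network. The rule name is hypothetical.
   gcloud compute firewall-rules create allow-ssh-from-iap \
       --network=default \
       --direction=INGRESS \
       --action=ALLOW \
       --rules=tcp:22 \
       --source-ranges=35.235.240.0/20
   ```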
   If you are using [your project's default network](/vpc/docs/vpc#default-network) with its [pre-populated `default-allow-ssh` rule](/vpc/docs/firewalls#more_rules_default_vpc) enabled, then you don't need to define an additional rule.

3. Set up port forwarding between your external client and the intermediary VM using [SSH through IAP](/iap/docs/using-tcp-forwarding#tunneling_ssh_connections).

       gcloud compute ssh my-vm \
           --tunnel-through-iap \
           --zone=ZONE_ID \
           --ssh-flag="-L PORT_NUMBER:ALLOYDB_IP_ADDRESS:5432"

   Replace the following:

   - `ZONE_ID`: the ID of the zone where the cluster is located, for example, `us-central1-a`.
   - `ALLOYDB_IP_ADDRESS`: the IP address of the AlloyDB instance that you want to connect to.
   - `PORT_NUMBER`: the local port on your client to forward to the instance, for example, `5432`.

   **Note:** If you're using [managed connection pooling](/alloydb/docs/configure-managed-connection-pooling), then change the port number from `5432` to `6432`. Apply this change of port throughout the examples on this page.

4. Test your connection using [`psql`](/alloydb/docs/connect-psql) on your external client, having it connect to the local port that you specified in the previous step.
   For example, to connect as the `postgres` user role to port `5432`:

       psql -h localhost -p 5432 -U USERNAME

   Replace the following:

   - `USERNAME`: the PostgreSQL user role that you want to connect as, for example, the default user `postgres`.

### Connect through a SOCKS proxy

Running a SOCKS service on the intermediary VM provides a flexible and scalable connection to your AlloyDB cluster, with end-to-end encryption provided by the AlloyDB Auth Proxy. With appropriate configuration, you can make it suitable for production workloads.

This solution includes these steps:

1. Install, configure, and run a SOCKS server on the intermediary VM. One example is [Dante](https://www.inet.no/dante/), a popular open-source solution.

   Configure the server to bind to the VM's `ens4` network interface for both external and internal connections. Specify any port you want for internal connections.

2. [Configure your VPC's firewall](/vpc/docs/firewalls) to allow TCP traffic from the appropriate IP address or range to the SOCKS server's configured port.

3. Install [the AlloyDB Auth Proxy](/alloydb/docs/auth-proxy/overview) on the external client.
4. Run the AlloyDB Auth Proxy on your external client, with the `ALL_PROXY` environment variable set to the intermediary VM's IP address, and specifying the port that the SOCKS server uses.

   This example configures the AlloyDB Auth Proxy to connect to the database at `my-main-instance`, by way of a SOCKS server running at `198.51.100.1` on port `1080`:

       ALL_PROXY=socks5://198.51.100.1:1080 ./alloydb-auth-proxy \
           /projects/PROJECT_ID/locations/REGION_ID/clusters/CLUSTER_ID/instances/INSTANCE_ID

   If you are connecting from a peered VPC, you can use the intermediary VM's internal IP address; otherwise, use its external IP address.

5. Test your connection using [`psql`](/alloydb/docs/connect-psql) on your external client, having it connect to the port that the AlloyDB Auth Proxy listens on. For example, to connect as the `postgres` user role to port `5432`:

       psql -h IP_ADDRESS -p PORT_NUMBER -U USERNAME

### Connect through a PostgreSQL pooler

If you need to install and run the AlloyDB Auth Proxy on the intermediary VM, instead of on an external client, then you can enable secure connections to it by pairing it with a *protocol-aware proxy*, also known as a *pooler*. Popular open-source poolers for PostgreSQL include [Pgpool-II](https://pgpool.net/) and [PgBouncer](https://www.pgbouncer.org/).

In this solution, you run both the AlloyDB Auth Proxy and the pooler on the intermediary VM. Your client or application can then securely connect directly to the pooler over SSL, without the need to run any additional services.
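As a minimal sketch of this arrangement, a PgBouncer configuration on the intermediary VM might listen for SSL connections on PgBouncer's default port, `6432`, and forward them to an Auth Proxy listening locally on `5432`. The database alias, file paths, and certificate names below are hypothetical; adapt them to your environment:

```ini
; /etc/pgbouncer/pgbouncer.ini (illustrative; paths and names are hypothetical)
[databases]
; Forward connections to the local Auth Proxy, which tunnels
; to one specific AlloyDB instance.
mydb = host=127.0.0.1 port=5432

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; Require SSL from external clients.
client_tls_sslmode = require
client_tls_key_file = /etc/pgbouncer/server.key
client_tls_cert_file = /etc/pgbouncer/server.crt
```

An external client would then connect over SSL to the VM's IP address on port `6432`, for example with `psql -h VM_IP_ADDRESS -p 6432 -U USERNAME mydb`.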
The pooler takes care of passing PostgreSQL queries along to your AlloyDB cluster through the Auth Proxy.

Because every instance within an AlloyDB cluster has a separate internal IP address, each proxy service can communicate with only one specific instance: the primary instance, a standby, or a read pool. Therefore, you need to run a separate pooler service, with an appropriately configured SSL certificate, for every instance in the cluster.

Connect through Cloud VPN or Cloud Interconnect
-----------------------------------------------

For production work requiring high availability (HA), we recommend the use of a Google Cloud [Network Connectivity](/network-connectivity/docs/how-to/choose-product) product: either [Cloud VPN](/network-connectivity/docs/vpn) or [Cloud Interconnect](/network-connectivity/docs/interconnect), depending upon your external service's needs and network topology. You then configure [Cloud Router](/network-connectivity/docs/router/concepts/overview) to advertise the appropriate routes.

While using a Network Connectivity product is a more involved process than setting up an intermediary VM, this approach shifts the burdens of uptime and availability from you to Google. In particular, HA VPN offers a 99.99% SLA, making it appropriate for production environments.

Network Connectivity solutions also free you from the need to maintain a separate, secure VM as part of your application, avoiding the single-point-of-failure risks inherent in that approach.

To start learning more about these solutions, see [Choosing a Network Connectivity product](/network-connectivity/docs/how-to/choose-product).

What's next
-----------

- Learn more about [Private services access and on-premises connectivity](/vpc/docs/private-services-access#on-premises-connectivity) in Google Cloud VPCs.