[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-05。"],[],[],null,["# Private Service Connect architecture and performance\n====================================================\n\nThis page explains how Private Service Connect works.\n\nPrivate Service Connect is implemented by using software-defined\nnetworking (SDN) from Google Cloud called\n[Andromeda](https://www.usenix.org/system/files/conference/nsdi18/nsdi18-dalton.pdf)\n(PDF). Andromeda is the distributed control plane and data plane for\nGoogle Cloud networking that enables networking for\nVirtual Private Cloud (VPC) networks. The Andromeda networking fabric processes\npackets on the physical servers that host VMs. As a result, the data plane is\nfully distributed and has no centralized bottlenecks on intermediate proxies or\nappliances.\n\nBecause Private Service Connect traffic is processed fully on the\nphysical hosts, it has significant performance benefits over a proxy-oriented\nmodel:\n\n- **There are no additional bandwidth limits imposed by\n Private Service Connect.** The combined bandwidth of the source and destination VM interfaces is effectively the bandwidth limit of Private Service Connect.\n- **Private Service Connect adds minimal latency to traffic.** The traffic path is the same as VM-to-VM traffic within a single VPC network. Network address translation of traffic is the only additional traffic processing step which is done entirely on the destination host.\n\nThe following diagram shows a typical traffic path for\nPrivate Service Connect traffic between a consumer\nVPC network and a producer VPC network.\n[](/static/vpc/images/psc-architecture.svg) Physical hosts perform client load balancing to determine which target host to send the traffic to (click to enlarge).\n\nFrom a logical perspective, there are consumer\nPrivate Service Connect endpoints and producer load balancers.\nHowever, from a physical perspective traffic goes directly from the physical\nserver that hosts the client VM to the physical server that hosts the producer\nload balancer VM.\n\nAndromeda applies functions to Private Service Connect traffic as\nshown in the following diagram:\n\n- Client-side load balancing is applied on the source host (`Host 1`) which decides which target host to send the traffic to. 
There are exceptions where traffic is processed by intermediate routing hosts,
such as inter-regional traffic or very small or intermittent traffic flows.
However, Andromeda dynamically offloads traffic flows to direct, host-to-host
networking whenever possible to optimize for the best latency and throughput.

What's next
-----------

- Learn more about [Private Service Connect](/vpc/docs/private-service-connect).
- View [Private Service Connect compatibility
  information](/vpc/docs/private-service-connect-compatibility).