[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[],[],null,["# Networking overview\n\nThe virtual networking layer on\nGoogle Distributed Cloud (GDC) air-gapped appliance governs connectivity, firewalls, service discovery, load balancing, and observability between virtual machines and\npods running in a GDC organization.\n\nGDC networking model\n--------------------\n\nGDC consists of one level of tenancy: projects.\n\n### Project networking\n\nYou deploy all virtual machines (VM) and containerized workloads into a [project](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/project-management). Projects provide a network segmentation boundary within the organization.\n\nWorkloads within a project can communicate directly with one another. However, the default network policy prevents communication between workloads in different projects. If the project network policy allows, then workloads in the organization can reach each other at the L3 network layer using their respective IP addresses. You must explicitly enable [ingress](#ingress) and [egress](#egress) constraints to and from the organization for each workload that requires inbound or outbound traffic.\n\nConfigure load balancers\n------------------------\n\nLoad balancers distribute traffic across your application's backend workloads, ensuring stability and availability. Create external and internal load balancers for pod and VM workloads. GDC provides three methods for configuring load balancers. For more information, see [Manage load balancers](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/expose-services).\n\n#### Ingress constraints\n\nThe mechanism used to expose workloads outside the organization differs depending on whether the workload is based on VMs or containers.\n\nYou expose VM-based workloads outside of the organization using the VM external access capability. You enable this capability for each VM. Each VM gets its own IP address from the external range of the organization.\n\nOn the other hand, you expose containerized workloads outside the organization using the external load balancer feature. You can create an external load balancer, and GDC assigns an external IP address. Then, traffic can be load-balanced across a set of backend pod workloads.\n\n#### Egress constraints\n\nYou must explicitly enable outbound traffic for each project and workload to communicate outside the organization. Enabling outbound traffic changes the IP from workloads to an external IP using Network Address Translation (NAT) when connecting outside the organization. For more information about allowing outbound traffic, see [Manage outbound traffic from an organization](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/manage-egress).\n\nNetwork policy enforcement model\n--------------------------------\n\nThe security posture for workloads within an organization is the union of default and\nuser-created project network policies.\n\nPolicy enforcement is based on Layer-3 and Layer-4 traffic flows. 
Configure load balancers
------------------------

Load balancers distribute traffic across your application's backend workloads, ensuring stability and availability. Create external and internal load balancers for pod and VM workloads. GDC provides three methods for configuring load balancers. For more information, see [Manage load balancers](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/expose-services).

### Ingress constraints

The mechanism used to expose workloads outside the organization differs depending on whether the workload is based on VMs or containers.

You expose VM-based workloads outside of the organization using the VM external access capability. You enable this capability for each VM, and each VM gets its own IP address from the external range of the organization.

In contrast, you expose containerized workloads outside the organization using the external load balancer feature. When you create an external load balancer, GDC assigns an external IP address and load-balances traffic across a set of backend pod workloads.

### Egress constraints

You must explicitly enable outbound traffic for each project and workload that communicates outside the organization. When outbound traffic is enabled, workload IP addresses are translated to an external IP address by using Network Address Translation (NAT) for connections outside the organization. For more information about allowing outbound traffic, see [Manage outbound traffic from an organization](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/manage-egress).

Network policy enforcement model
--------------------------------

The security posture for workloads within an organization is the union of the default and user-created project network policies.

Policy enforcement is based on Layer 3 and Layer 4 traffic flows. A flow describes a 5-tuple connection as follows:

- Source IP address
- Destination IP address
- Source port
- Destination port
- Protocol, such as `TCP` or `UDP`

Network policies enforce outbound traffic at the node that hosts the source workload and enforce inbound traffic when the traffic arrives at the node that hosts the destination workload. Therefore, to establish a connection, policies must allow the traffic to leave the source for the destination and to arrive at the destination from the source.

Reply traffic, such as the SYN-ACK (synchronize-acknowledge) segment replying to a SYN segment, is not subject to enforcement. Therefore, reply traffic is always allowed if the initiating traffic is allowed. For this reason, connection timeouts caused by policy enforcement are only observed at the client initiating the connection. Denied traffic is discarded either during the outbound data transfer from the source node or during the inbound data transfer at the destination node. The receiving workload never observes the connection.

Enforcement is based on allow rules that are additive. The resulting enforcement for a workload is an "any match" of the traffic flow against the union of all policies applied to that workload. When multiple policies are present, the rules applied to each workload are additively combined, allowing traffic if it matches at least one of the rules. There are only allow rules, no deny rules.

When a network policy denies a flow, you receive no response packets and observe a connection timeout. For this reason, refused or reset protocol-level connections and HTTP errors are not a direct result of network policy enforcement.

For more information about Kubernetes network policies, see
<https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-two-sorts-of-pod-isolation>.
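As a concrete illustration of this additive, allow-only model, the following sketch shows two standard Kubernetes NetworkPolicy objects selecting the same backend workloads. All namespace names, labels, and ports are hypothetical. A flow that matches at least one rule, such as `frontend` pods connecting to TCP port 8080, is allowed; any other flow matches no rule, is discarded, and the client observes a connection timeout.

```yaml
# Illustrative sketch only: two additive allow policies applied to the same
# backend workloads. Traffic is permitted if it matches at least one rule.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: project-a                 # hypothetical project namespace
spec:
  podSelector:
    matchLabels:
      app: backend                     # hypothetical workload label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend                # hypothetical client workload
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-to-backend
  namespace: project-a
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: monitoring              # hypothetical metrics scraper
    ports:
    - protocol: TCP
      port: 9090
```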