Networking overview

The virtual networking layer in Google Distributed Cloud (GDC) air-gapped governs connectivity, firewalls, service discovery, load balancing, and observability for the virtual machines and pods running in a GDC organization.

GDC networking model

GDC has two levels of multi-tenancy: organizations and projects. Projects exist within organizations, and you deploy all virtualized and containerized workloads into a particular project within an organization.

Organization networking

Each organization in GDC has its own isolated virtual network. The virtual network within the organization is a flat IP space, which means all workloads in the organization have direct IP address connectivity to one another. Using project network policies, you can control access between workloads in different projects in the organization.

GDC isolates each organization at the network level from all other organizations. Workloads in one organization don't have direct IP address connectivity to workloads in another organization.

An organization has two distinct IP ranges: an internal range and an external range. The external IP range is reachable from outside the organization; the internal IP range is accessible only from within the organization. Workloads are always assigned an IP address from the internal range of the organization, which means they are not reachable from outside the organization by default. You must explicitly enable inbound and outbound traffic for workloads using the constraints described in the Ingress constraints and Egress constraints sections.

Project networking

You deploy all virtual machine (VM) and containerized workloads into a project. Projects provide a network segmentation boundary within the organization.

Workloads within a project can communicate directly with one another. However, the default network policy prevents communication between workloads in different projects. Project network policies (ProjectNetworkPolicy) let you configure which projects in the organization can communicate with one another. If a project network policy allows it, workloads in the organization can reach each other at the L3 network layer using their respective IP addresses. You must also explicitly configure ingress to and egress from the organization for each workload that requires inbound or outbound traffic, as described in the following sections.
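
As an illustration, the following manifest sketches a ProjectNetworkPolicy that lets workloads in one project accept traffic from another. The API version, field names, and project names (project-1, project-2) are assumptions for this example; consult your GDC API reference for the exact schema.

    # Sketch: allow workloads in project-1 to receive traffic from project-2.
    # apiVersion, field names, and project names are illustrative assumptions.
    apiVersion: networking.gdc.goog/v1
    kind: ProjectNetworkPolicy
    metadata:
      namespace: project-1              # project whose workloads accept the traffic
      name: allow-ingress-from-project-2
    spec:
      policyType: Ingress               # governs inbound traffic to the subject workloads
      subject:
        subjectType: UserWorkload       # applies to user workloads in project-1
      ingress:
      - from:
        - projects:
            matchNames:
            - project-2                 # peer project allowed to connect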

Ingress constraints

The mechanism used to expose workloads outside the organization differs depending on whether the workload is based on VMs or containers.

You expose VM-based workloads outside the organization using the VM external access capability, which you enable for each VM. Each VM with external access enabled gets its own IP address from the external range of the organization.
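
As a hedged sketch, enabling external access for a VM might look like the following manifest. The VirtualMachineExternalAccess kind, API group, and port list shown here are assumptions based on the KRM-style API, and the VM and project names are placeholders.

    # Sketch: enable external access for the VM named my-vm in project-1.
    # The kind, apiVersion, and field names are illustrative assumptions.
    apiVersion: virtualmachine.gdc.goog/v1
    kind: VirtualMachineExternalAccess
    metadata:
      name: my-vm          # matches the name of the VM to expose
      namespace: project-1
    spec:
      enabled: true        # allocates an IP address from the external range
      ports:
      - name: ssh
        protocol: TCP
        port: 22           # inbound port to expose on the external IP address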

In contrast, you expose containerized workloads outside the organization using the external load balancer feature. When you create an external load balancer, GDC assigns it an IP address from the organization's external range and load-balances traffic across a set of backend pod workloads.
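
For clusters that support the standard Kubernetes Service API, a minimal external load balancer sketch looks like the following; the Service name, namespace, pod label, and ports are placeholders.

    # Sketch: a Kubernetes Service of type LoadBalancer. GDC assigns the
    # external IP address; names, labels, and ports here are placeholders.
    apiVersion: v1
    kind: Service
    metadata:
      name: web-elb
      namespace: project-1
    spec:
      type: LoadBalancer   # requests an IP address from the external range
      selector:
        app: web           # load-balances across pods labeled app: web
      ports:
      - protocol: TCP
        port: 80           # port exposed on the load balancer
        targetPort: 8080   # port the backend pods listen on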

Egress constraints

You must explicitly enable outbound traffic for each project and workload that needs to communicate outside the organization. When enabled, outbound connections have their workload source IP addresses translated to an external IP address using Network Address Translation (NAT). For more information about allowing outbound traffic, see Manage outbound traffic from an organization.
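
One plausible shape for such a policy, assuming egress is also expressed as a ProjectNetworkPolicy, is sketched below. The field values are assumptions for illustration; the Manage outbound traffic from an organization page remains the authoritative reference.

    # Sketch: allow user workloads in project-1 to send traffic to
    # destinations outside the organization. Field values are assumptions.
    apiVersion: networking.gdc.goog/v1
    kind: ProjectNetworkPolicy
    metadata:
      namespace: project-1
      name: allow-egress-outside-org
    spec:
      policyType: Egress            # governs outbound traffic
      subject:
        subjectType: UserWorkload
      egress:
      - to:
        - operation: NotIn          # match destinations that are not
          operand: nodes            # nodes inside the organization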

Network policy enforcement model

The security posture for workloads within an organization is the union of default and user-created project network policies.

Policy enforcement is based on Layer-3 and Layer-4 traffic flows. A flow describes a connection as the following 5-tuple (an example follows the list):

  • Source IP address
  • Destination IP address
  • Source port
  • Destination port
  • Protocol (for example, TCP or UDP)
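
For example, a TCP connection from a client at 10.1.2.3 on ephemeral port 49152 to a server at 10.4.5.6 on port 443 is the flow (10.1.2.3, 10.4.5.6, 49152, 443, TCP).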

Network policies enforce outbound traffic at the node that hosts the source workload and inbound traffic when it arrives at the node that hosts the destination workload. Therefore, to establish a connection, policy must allow the traffic both to leave the source toward the destination and to arrive at the destination from the source.

Reply traffic, such as the SYN-ACK (synchronize-acknowledge) segment answering a SYN segment, is not subject to enforcement. Therefore, reply traffic is always allowed if the initiating traffic is allowed. For this reason, connection timeouts caused by policy enforcement are only observed from the client initiating the connection. Denied traffic is discarded either on the outbound path at the source node or on the inbound path at the destination node; the receiving workload never observes the connection.

Enforcement is based on additive, allow-based policy rules. The resulting enforcement for a workload is an "any match" of the traffic flow against the union of all policies applied to that workload: when multiple policies are present, their rules combine additively, and traffic is allowed if it matches at least one rule. There are only allow rules, no deny rules.
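
For example, if one policy allows ingress to a workload from project-a and another allows ingress from project-b, the workload accepts traffic from both projects, and no policy can revoke what another policy allows.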

When a network policy denies a flow, you don't receive a response packet and instead observe a connection timeout. For this reason, refused or reset connections at the protocol level, or HTTP errors, are not a direct result of network policy enforcement.

For more information about Kubernetes network policies, see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-two-sorts-of-pod-isolation