Network policy

To set a network policy for virtual machine (VM) workloads at the project namespace level, use the ProjectNetworkPolicy resource, a multi-cluster network policy for Google Distributed Cloud air-gapped appliance (GDC). This resource lets you define policies that allow communication within a project, between projects, and to external IP addresses.

For traffic within a project, GDC applies a predefined project network policy, the intra-project policy, to each project by default. To enable and control traffic across projects within the same organization, you define cross-project policies. When multiple policies are present, GDC combines the rules for each project additively and allows traffic if at least one of the rules matches.

Request permission and access

To perform the tasks on this page, you must have the Project NetworkPolicy Admin role. Ask your Project IAM Admin to grant you the Project NetworkPolicy Admin (project-networkpolicy-admin) role in the namespace of the project where the VM resides.

Intra-project traffic

By default, VM workloads in a project namespace can communicate with each other without exposing services externally, even if the VMs are in different clusters within the same project.

Ingress intra-project traffic network policy

When you create a project, a default base ProjectNetworkPolicy is created on the Management API server to allow intra-project communication. This policy allows ingress traffic from other workloads in the same project. You can remove it, but do so with care: removing it denies both intra-project VM and container workload communication.
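You don't need to create this default policy yourself. As a reference for what it grants, the following is a sketch of its likely shape, built from the ProjectNetworkPolicy fields used in the examples on this page; the policy name shown is illustrative, not the actual default name:

```yaml
# Illustrative sketch of the default intra-project ingress policy.
# The metadata.name value is assumed; the field layout follows the
# examples on this page.
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: default-intra-project-ingress  # assumed name
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_1  # the project allows ingress from itself
```

Because the policy's source project matches its own namespace, every workload in the project can reach every other workload in the project.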

Egress intra-project traffic network policy

By default, you don't need to take any action for egress: in the absence of an egress policy, all egress traffic is allowed. However, as soon as you set a single egress policy, only the traffic that the policy specifies is allowed.

If you disable Data Exfiltration Prevention and apply an egress ProjectNetworkPolicy to the project, for example to prevent access to an external resource, you must apply the following policy to keep allowing intra-project egress:

kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-intra-project-egress-traffic
spec:
  policyType: Egress
  subject:
    subjectType: UserWorkload
  egress:
  - to:
    - projects:
        matchNames:
        - PROJECT_1
EOF

Cross-project (within org) traffic

VM workloads from different project namespaces but within the same organization can communicate with each other by applying a cross-project network policy.

Ingress cross-project traffic network policy

For workloads in a project to accept connections from workloads in another project, you must configure an ingress policy that allows those workloads in.

The following policy allows workloads in the PROJECT_1 project to accept connections from workloads in the PROJECT_2 project, as well as the return traffic for the same flows. Run the following command to apply the policy:

kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-ingress-traffic-from-PROJECT_2
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_2
EOF

The preceding command allows connections from PROJECT_2 to PROJECT_1, but doesn't allow connections initiated from PROJECT_1 to PROJECT_2. For that, you need a reciprocal policy in the PROJECT_2 project. Run the following command to apply the reciprocal policy:

kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_2
  name: allow-ingress-traffic-from-PROJECT_1
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_1
EOF

You've now permitted connections initiated in either direction between PROJECT_1 and PROJECT_2.
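If workloads in a project must accept traffic from several other projects, note that matchNames is a list, so one policy can name multiple source projects. The following sketch assumes that multiple entries are supported and uses a hypothetical third project, PROJECT_3:

```yaml
# Sketch: one ingress policy admitting two source projects.
# PROJECT_3 is a hypothetical placeholder; the policy name is illustrative.
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-ingress-from-partner-projects  # illustrative name
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_2
        - PROJECT_3  # hypothetical additional project
```

As with separate policies, traffic is allowed if any listed project matches, and each source project still needs its own reciprocal policy for connections initiated in the other direction.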

Use the following definitions for your variables.

MANAGEMENT_API_SERVER: the Management API server kubeconfig path.
PROJECT_1: the name of a GDC project corresponding to PROJECT_1 in the example.
PROJECT_2: the name of a GDC project corresponding to PROJECT_2 in the example.

Egress cross-project traffic network policy

An ingress cross-project traffic policy that lets workloads in one project, PROJECT_1, accept connections from workloads in another project, PROJECT_2, also allows the return traffic for the same flows. Therefore, you don't need an egress cross-project traffic network policy.

Cross-organization traffic

Connecting a VM workload to a destination outside of your project that resides in a different organization requires explicit approval. This is because of Data Exfiltration Prevention, which GDC enables by default and which prevents a project from sending egress traffic to workloads outside the organization where the project resides. This section shows how to add a specific egress policy and enable data exfiltration.

Ingress cross-organization traffic network policy

When you've configured a cross-organization egress network policy, you don't need to grant ingress. The allowed egress traffic to workloads outside of your organization goes through a load balancer (LB) and is source network address translated (NAT) using an IP address that you've allocated for the project, so return traffic for the same flows is admitted.

Egress cross-organization traffic network policy

To enable egress to services outside of the organization, customize your project network policy, ProjectNetworkPolicy. However, because Data Exfiltration Prevention is enabled by default, your customized egress ProjectNetworkPolicy shows a validation error in its status field, and the dataplane ignores it. This is by design.

Workloads can egress only after you allow data exfiltration for the project. The traffic that you permit to egress is source network address translated (NAT) using a well-known IP address allocated for the project.

These directions show you how to enable your customized egress policy:

  1. Configure your own customized egress ProjectNetworkPolicy, following the kubectl CLI example. The policy applies to all user workloads in PROJECT_1 and allows egress traffic to all hosts in CIDR, which resides outside the organization. Your first attempt to apply it produces a status error; this is intended.

  2. Apply your ProjectNetworkPolicy configuration:

    kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
    apiVersion: networking.gdc.goog/v1
    kind: ProjectNetworkPolicy
    metadata:
      namespace: PROJECT_1
      name: allow-egress-traffic-to-NAME
    spec:
      policyType: Egress
      subject:
        subjectType: UserWorkload
      egress:
      - to:
        - ipBlock:
            cidr: CIDR
    EOF
    
  3. Confirm that you see a validation error in the policy's status field.

  4. Ask the admin user to disable Data Exfiltration Prevention. This enables your configuration, while preventing all other egress.

  5. Check the ProjectNetworkPolicy that you just created and verify that the error in the validation status field is gone, and the status Ready is True, indicating that your policy is in effect:

    kubectl --kubeconfig MANAGEMENT_API_SERVER get projectnetworkpolicy \
      allow-egress-traffic-to-NAME -n PROJECT_1 -o yaml
    

    Replace the variables, using the following definitions.

    MANAGEMENT_API_SERVER: the Management API server kubeconfig file.
    PROJECT_1: the name of the GDC project.
    CIDR: the Classless Inter-Domain Routing (CIDR) block of the permitted destination.
    NAME: a name associated with the destination.
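    Once data exfiltration is allowed, the Ready condition in the output might look roughly like the following; the exact status layout is an assumption based on standard Kubernetes condition conventions, not confirmed output:

    ```yaml
    # Assumed shape of the policy status once it takes effect.
    status:
      conditions:
      - type: Ready      # per this page, Ready must be True for the policy to be in effect
        status: "True"
    ```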

    After you have applied this policy, and provided that you have not defined other egress policies, all other egress traffic is denied for PROJECT_1.
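Recall that GDC combines multiple policies additively and allows traffic when any rule matches. If the project still needs intra-project egress after Data Exfiltration Prevention is disabled, the external-CIDR policy and an intra-project egress allow can coexist. The following sketch assumes that the to: stanza accepts a projects: selector the same way the from: stanza does; the policy names mirror the examples on this page:

```yaml
# Sketch: two egress policies side by side. Traffic is allowed if
# either rule matches; all other egress from PROJECT_1 stays denied.
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-intra-project-egress-traffic
spec:
  policyType: Egress
  subject:
    subjectType: UserWorkload
  egress:
  - to:
    - projects:          # assumed: projects selector in a to: stanza
        matchNames:
        - PROJECT_1
---
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-egress-traffic-to-NAME
spec:
  policyType: Egress
  subject:
    subjectType: UserWorkload
  egress:
  - to:
    - ipBlock:
        cidr: CIDR       # destination outside the organization
```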