Network policy

To set a network policy for virtual machine (VM) workloads at the project namespace level, use the ProjectNetworkPolicy resource, a multi-cluster network policy for Google Distributed Cloud (GDC) air-gapped appliance. It lets you define policies that allow communication within projects, between projects, and to external IP addresses.

For traffic within a project, GDC applies a predefined project network policy, the intra-project policy, to each project by default. To enable and control traffic across projects within the same organization, you define cross-project policies. When multiple policies are present, GDC combines their rules additively and allows traffic that matches at least one of the rules.
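For example, if two ingress policies exist in the same project, GDC allows traffic that matches either one. The following sketch illustrates the additive behavior; the second project name and the CIDR block are illustrative placeholders:

```yaml
# Two policies in PROJECT_1; GDC allows ingress traffic that matches either one.
# PROJECT_2 and 192.0.2.0/24 are illustrative placeholders.
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-from-project-2
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_2
---
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-from-test-net
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - ipBlock:
        cidr: 192.0.2.0/24
```

With both policies applied, traffic from PROJECT_2 workloads and traffic from 192.0.2.0/24 are both allowed; traffic that matches neither policy is denied.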

Intra-project traffic

By default, VM workloads in a project namespace can communicate with each other without exposing services to the external world, even if the VMs are part of different clusters within the same project.

Ingress intra-project traffic network policy

When you create a project, GDC creates a default base ProjectNetworkPolicy on the system cluster that allows intra-project communication. This policy allows ingress traffic from other workloads in the same project. You can remove it, but do so with care: removing it denies both intra-project VM and container workload communication.
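The default base policy is roughly equivalent to the following sketch; the name and exact fields that GDC generates for your project may differ:

```yaml
# Hypothetical sketch of the default intra-project ingress policy.
# The generated name and exact fields are implementation details.
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: default-intra-project
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_1   # allow traffic from workloads in the same project
```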

Egress intra-project traffic network policy

By default, you don't need to take action for egress traffic: in the absence of an egress policy, all egress traffic is allowed. However, as soon as you set even a single egress policy, only the traffic that the policies specify is allowed.

When you disable Data Exfiltration Prevention and apply an egress ProjectNetworkPolicy to the project, for example to prevent access to an external resource, apply the following policy to keep allowing intra-project egress:

kubectl --kubeconfig ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-intra-project-egress-traffic
spec:
  policyType: Egress
  subject:
    subjectType: UserWorkload
  egress:
  - to:
    - projects:
        matchNames:
        - PROJECT_1
EOF

Cross-project (within org) traffic

VM workloads from different project namespaces within the same organization can communicate with each other when you apply a cross-project network policy.

Ingress cross-project traffic network policy

For workloads in a project to accept connections from workloads in another project, you must configure an ingress policy that allows the other project's workloads to send traffic to them.

The following policy enables workloads in the PROJECT_1 project to accept connections from workloads in the PROJECT_2 project, and allows the return traffic for the same flows. You can also use this policy to access a VM in PROJECT_1 from a source inside PROJECT_2. Run the following command to apply the policy:

kubectl --kubeconfig ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_1
  name: allow-ingress-traffic-from-PROJECT_2
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_2
EOF

The preceding command allows workloads in PROJECT_2 to initiate connections to PROJECT_1, but doesn't allow connections initiated from PROJECT_1 to PROJECT_2. For the latter, you need a reciprocal policy in the PROJECT_2 project. Run the following command to apply the reciprocal policy:

kubectl --kubeconfig ADMIN_KUBECONFIG apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: PROJECT_2
  name: allow-ingress-traffic-from-PROJECT_1
spec:
  policyType: Ingress
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - PROJECT_1
EOF

You've now permitted connections initiated in either direction between PROJECT_1 and PROJECT_2.
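To confirm that both policies exist before testing connectivity, you can list the ProjectNetworkPolicy resources in each project namespace:

```
kubectl --kubeconfig ADMIN_KUBECONFIG get projectnetworkpolicy -n PROJECT_1
kubectl --kubeconfig ADMIN_KUBECONFIG get projectnetworkpolicy -n PROJECT_2
```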

Use the following definitions for your variables.

ADMIN_KUBECONFIG: the admin cluster kubeconfig path.
PROJECT_1: the name of a GDC project corresponding to PROJECT_1 in the example.
PROJECT_2: the name of a GDC project corresponding to PROJECT_2 in the example.

Egress cross-project traffic network policy

When you apply an ingress cross-project traffic policy that enables workloads in one project, PROJECT_1, to accept connections from workloads in another project, PROJECT_2, the policy also allows the return traffic for the same flows. Therefore, you don't need an egress cross-project traffic network policy.

Cross-organization traffic

Connecting a VM workload to a destination outside of your project that resides in a different organization requires explicit approval. This is because of Data Exfiltration Prevention, which GDC enables by default and which prevents a project from sending egress traffic to workloads outside the organization where the project resides. This section describes how to add a specific egress policy and enable data exfiltration.

Ingress cross-organization traffic network policy

To allow ingress traffic across different organizations, apply a ProjectNetworkPolicy that allows traffic from clients outside the organization to your project, for example, to connect to the VM using SSH.

A corresponding Egress policy is not required for the reply traffic. Return traffic is implicitly allowed.

To access your VM in PROJECT_1 from a source outside the organization that the VM resides in, apply the following policy. You must obtain and use the CIDR block that contains your source IP address, in network/prefix-length notation, for example, 192.0.2.0/24.

  1. Configure your Ingress ProjectNetworkPolicy, following the kubectl example. Apply the policy to all user workloads in PROJECT_1. It allows ingress traffic from all hosts in CIDR, which reside outside the organization.

  2. Apply your ProjectNetworkPolicy configuration:

    kubectl --kubeconfig ADMIN_KUBECONFIG apply -f - <<EOF
    apiVersion: networking.gdc.goog/v1
    kind: ProjectNetworkPolicy
    metadata:
      namespace: PROJECT_1
      name: allow-external-traffic
    spec:
      policyType: Ingress
      subject:
        subjectType: UserWorkload
      ingress:
      - from:
        - ipBlock:
            cidr: CIDR
    EOF
    

Egress cross-organization traffic network policy

To enable egress to services outside of the organization, customize your project network policy, ProjectNetworkPolicy. However, because Data Exfiltration Prevention is enabled by default, your customized egress ProjectNetworkPolicy shows a validation error in the status field, and the dataplane ignores it. This behavior is by design.

Workloads can egress only after you allow data exfiltration for a given project. The traffic that you permit to egress is source network address translated (NAT) using a well-known IP address allocated for the project.

A corresponding Ingress policy is not required for the reply traffic. Return traffic is implicitly allowed.

These directions show you how to enable your customized egress policy:

  1. Configure your own customized egress ProjectNetworkPolicy, following the kubectl example. Apply the policy to all user workloads in PROJECT_1. It allows egress traffic to all hosts in CIDR, which reside outside the organization. Your first attempt produces a status error; this is expected.

  2. Apply your ProjectNetworkPolicy configuration:

    kubectl --kubeconfig ADMIN_KUBECONFIG apply -f - <<EOF
    apiVersion: networking.gdc.goog/v1
    kind: ProjectNetworkPolicy
    metadata:
      namespace: PROJECT_1
      name: allow-egress-traffic-to-NAME
    spec:
      policyType: Egress
      subject:
        subjectType: UserWorkload
      egress:
      - to:
        - ipBlock:
            cidr: CIDR
    EOF
    
  3. When you finish, confirm that you see a validation error in the policy's status field.

  4. Ask the administrator to disable Data Exfiltration Prevention. This enables your configuration while still preventing all other egress.

  5. Check the ProjectNetworkPolicy that you just created and verify that the error in the validation status field is gone and that the Ready status is True, indicating that your policy is in effect:

    kubectl --kubeconfig ADMIN_KUBECONFIG get projectnetworkpolicy \
        allow-egress-traffic-to-NAME -n PROJECT_1 -o yaml
    

    Replace the variables, using the following definitions.

    ADMIN_KUBECONFIG: the admin cluster kubeconfig path.
    PROJECT_1: the name of the GDC project.
    CIDR: the Classless Inter-Domain Routing (CIDR) block of the permitted destination.
    NAME: a name associated with the destination.

    After you have applied this policy, and provided that you have not defined other egress policies, all other egress traffic is denied for PROJECT_1.
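As a quicker check than reading the full YAML, you can print only the status conditions of the policy. This sketch assumes the resource reports standard Kubernetes-style conditions, as the Ready status suggests:

```
kubectl --kubeconfig ADMIN_KUBECONFIG get projectnetworkpolicy \
    allow-egress-traffic-to-NAME -n PROJECT_1 \
    -o jsonpath='{.status.conditions}'
```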