Deploy workloads on Google Distributed Cloud Edge

When the Distributed Cloud Edge hardware arrives at your chosen destination, Google installers complete the physical installation and your system administrator connects the hardware to your local network.

The Distributed Cloud Edge rack arrives pre-configured with the hardware, network, and Google Cloud settings that you specified when you ordered Distributed Cloud Edge.

After the hardware is connected to your local network, it communicates with Google Cloud to download software updates and connect to your Google Cloud project. You are then ready to provision NodePools and deploy workloads.

Deploying a workload on your Distributed Cloud Edge hardware requires the following steps:

  1. Create a Distributed Cloud Edge Cluster as described in Create a Cluster.

  2. (Optional) To integrate with Cloud Key Management Service and enable support for customer-managed encryption keys (CMEK) for your workload data, complete the steps in Enable support for customer-managed encryption keys (CMEK) for local storage. For more information about how Distributed Cloud Edge encrypts workload data, see Local storage security.

  3. Create a NodePool as described in Create a NodePool. In this step, you assign Nodes to the NodePool and optionally configure the NodePool to use Cloud KMS to wrap and unwrap the LUKS passphrase for encrypting workload data.

  4. Test the Cluster by obtaining its credentials as described in Obtain credentials for a Cluster.

  5. Grant users access to the Cluster by assigning them the GDCE Viewer or GDCE Admin IAM role on the project as described in Roles and permissions.

  6. Assign users granular access to Cluster resources by configuring role-based access control with RoleBinding and ClusterRoleBinding resources, as shown in the sketch after this list.

  7. (Optional) Connect the Cluster to Google Cloud:

    1. Create a VPN Connection to your Cloud project as described in Create a VPN Connection.

    2. Check that the VPN Connection is operational as described in Check the status of a VPN Connection.

  8. (Optional) Configure Private Google Access to allow your Pods to access Google Cloud APIs and services using Cloud VPN. For instructions, see Configuring Private Google Access for on-premises hosts.

  9. (Optional) Configure Private Service Connect to allow your Pods to access Google Cloud APIs and services using Cloud VPN. For instructions, see Using Private Service Connect to access Google APIs with consumer HTTP(S) service controls.
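
For example, the role-based access control described in step 6 might look like the following sketch, which grants a single user read-only access to Pods in one namespace. The namespace, user, and role names are hypothetical placeholders; substitute values that match your environment.

    # Hypothetical example: grant the user "alice@example.com" read-only
    # access to Pods in the "production" namespace.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: production
      name: pod-reader
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: production
    subjects:
    - kind: User
      name: alice@example.com
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

A ClusterRole and ClusterRoleBinding follow the same pattern when access must span all namespaces in the Cluster.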

Limitations for Distributed Cloud Edge workloads

When configuring your Distributed Cloud Edge workloads, you must adhere to the limitations described in this section. Distributed Cloud Edge enforces these limitations on all workloads that you deploy on its hardware.

Linux workload limitations

Distributed Cloud Edge supports only the following Linux capabilities for workloads:

  • AUDIT_READ
  • AUDIT_WRITE
  • CHOWN
  • DAC_OVERRIDE
  • FOWNER
  • FSETID
  • IPC_LOCK
  • IPC_OWNER
  • KILL
  • MKNOD
  • NET_ADMIN
  • NET_BIND_SERVICE
  • NET_RAW
  • SETFCAP
  • SETGID
  • SETPCAP
  • SETUID
  • SYS_CHROOT
  • SYS_NICE
  • SYS_PACCT
  • SYS_RESOURCE
  • SYS_TIME
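
To illustrate these restrictions, the following sketch shows a Pod whose container drops all Linux capabilities and adds back only capabilities from the supported list. The Pod name, container name, and image are hypothetical placeholders.

    # Hypothetical example: the container adds only supported capabilities
    # (NET_ADMIN and SYS_NICE) and drops everything else.
    apiVersion: v1
    kind: Pod
    metadata:
      name: capability-demo
    spec:
      containers:
      - name: app
        image: example.com/my-app:latest
        securityContext:
          capabilities:
            drop: ["ALL"]
            add: ["NET_ADMIN", "SYS_NICE"]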

Namespace restrictions

Distributed Cloud Edge does not support Pods that share the following host namespaces:

  • hostPID
  • hostIPC
  • hostNetwork
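
For example, a Pod spec such as the following hypothetical one is rejected because it requests the host's network, PID, and IPC namespaces:

    # Hypothetical example: this Pod is rejected because it requests host
    # namespaces, which Distributed Cloud Edge does not support.
    apiVersion: v1
    kind: Pod
    metadata:
      name: host-namespace-demo
    spec:
      hostNetwork: true   # not supported
      hostPID: true       # not supported
      hostIPC: true       # not supported
      containers:
      - name: app
        image: example.com/my-app:latest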

Resource type restrictions

Distributed Cloud Edge does not support the CertificateSigningRequest resource type, which allows a client to ask for an X.509 certificate to be issued, based on a signing request.

Security context restrictions

Distributed Cloud Edge does not support the privileged security context; containers cannot run in privileged mode.

Pod binding restrictions

Distributed Cloud Edge does not support binding Pods to host ports, because the hostNetwork namespace is not available.

hostPath volume restrictions

Distributed Cloud Edge only allows the following hostPath volumes with read/write access:

  • /dev/hugepages
  • /dev/infiniband
  • /dev/vfio
  • /dev/char
  • /sys/devices
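
As an illustration, the following hypothetical Pod mounts one of the permitted hostPath volumes, /dev/hugepages, with read/write access. The Pod name, container name, and image are placeholders.

    # Hypothetical example: mounting a permitted hostPath volume
    # (/dev/hugepages) into a container with read/write access.
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-demo
    spec:
      containers:
      - name: app
        image: example.com/my-app:latest
        volumeMounts:
        - name: hugepages
          mountPath: /dev/hugepages
      volumes:
      - name: hugepages
        hostPath:
          path: /dev/hugepages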

PersistentVolumeClaim resource type restrictions

Distributed Cloud Edge only allows PersistentVolumeClaim resources backed by the following volume types:

  • csi
  • nfs
  • local
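
For example, a PersistentVolume backed by one of the allowed types (nfs in this sketch) and a PersistentVolumeClaim that binds to it might look like the following. The server address, export path, and capacity are hypothetical placeholders.

    # Hypothetical example: an nfs-backed PersistentVolume and a matching
    # PersistentVolumeClaim. Replace the server, path, and sizes with your own.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
      - ReadWriteMany
      nfs:
        server: 192.0.2.10
        path: /exports/data
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-pvc
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 10Gi

Setting storageClassName to an empty string prevents dynamic provisioning and lets the claim bind to the statically defined volume above.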

Volume type restrictions

Distributed Cloud Edge only allows the following volume types:

  • configMap
  • csi
  • downwardAPI
  • emptyDir
  • hostPath
  • nfs
  • persistentVolumeClaim
  • projected
  • secret
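
The following hypothetical Pod stays within these restrictions by using only the configMap, emptyDir, and secret volume types; all names are placeholders.

    # Hypothetical example: a Pod that uses only allowed volume types
    # (configMap, emptyDir, and secret).
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
    spec:
      containers:
      - name: app
        image: example.com/my-app:latest
        volumeMounts:
        - name: config
          mountPath: /etc/app
        - name: scratch
          mountPath: /tmp/scratch
        - name: credentials
          mountPath: /etc/credentials
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: my-app-config
      - name: scratch
        emptyDir: {}
      - name: credentials
        secret:
          secretName: my-app-credentials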

Pod toleration restrictions

Distributed Cloud Edge does not allow user-created Pods on control plane nodes. Specifically, Distributed Cloud Edge does not allow scheduling Pods that have the following toleration keys:

  • ""
  • node-role.kubernetes.io/master
  • node-role.kubernetes.io/control-plane
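
For example, a Pod such as the following hypothetical one is rejected because it tolerates the control plane taint, which would otherwise allow it to schedule on a control plane node:

    # Hypothetical example: this Pod is rejected because it specifies a
    # disallowed toleration key.
    apiVersion: v1
    kind: Pod
    metadata:
      name: toleration-demo
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"   # disallowed toleration key
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: app
        image: example.com/my-app:latest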

Impersonation restrictions

Distributed Cloud Edge does not support user or group impersonation.

Management namespace restrictions

Distributed Cloud Edge only allows users to access the following management namespaces. Access is read-only, except where noted:

  • kube-system (users can additionally delete ippools.whereabouts.cni.cncf.io resources)
  • anthos-identity-service
  • cert-manager
  • gke-connect
  • kubevirt
  • metallb-system (users can additionally edit configMap resources to set load balancing IP address ranges)
  • nf-operator
  • sriov-network-operator
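
For example, setting load balancing IP address ranges in the metallb-system namespace might involve editing a ConfigMap similar to the following sketch. The ConfigMap name, data schema, and address range shown here are assumptions based on the standard MetalLB configuration format and may differ in your Distributed Cloud Edge release.

    # Hypothetical example: a MetalLB-style ConfigMap that defines a load
    # balancing IP address range. The exact name and schema depend on the
    # MetalLB version bundled with your release.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 198.51.100.240-198.51.100.250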

Webhook restrictions

Distributed Cloud Edge restricts webhooks as follows:

  • Any mutating webhook you create automatically excludes the kube-system namespace.
  • Mutating webhooks are disabled for the following resource types:
    • nodes
    • persistentvolumes
    • certificatesigningrequests
    • tokenreviews
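
For example, consider the following hypothetical MutatingWebhookConfiguration that targets Pod creation. Distributed Cloud Edge automatically excludes the kube-system namespace from this webhook, and a rule that instead matched nodes, persistentvolumes, certificatesigningrequests, or tokenreviews would have no effect. The webhook name, service, and namespace are placeholders.

    # Hypothetical example: a mutating webhook that targets Pods. The
    # kube-system namespace is automatically excluded, and the disabled
    # resource types listed above cannot be mutated.
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: example-mutating-webhook
    webhooks:
    - name: pods.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      clientConfig:
        service:
          namespace: my-namespace
          name: my-webhook-service
          path: /mutate
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]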

What's next