This page describes the steps to deploy workloads on your Google Distributed Cloud connected hardware and the limitations that you must adhere to when configuring your workloads.
Before you complete these steps, you must meet the Distributed Cloud connected installation requirements and order the Distributed Cloud hardware.
When the Google Distributed Cloud connected hardware arrives at your chosen destination, it is pre-configured with hardware, Google Cloud, and some network settings that you specified when you ordered Distributed Cloud connected.
Google installers complete the physical installation, and your system administrator connects Distributed Cloud connected to your local network.
After the hardware is connected to your local network, it communicates with Google Cloud to download software updates and connect with your Google Cloud project. You are then ready to provision node pools and deploy workloads on Distributed Cloud connected.
Deployment overview
To deploy a workload on your Distributed Cloud connected hardware, complete the following steps:
Optional: Initialize the network configuration of your Distributed Cloud connected zone.
Optional: Configure Distributed Cloud networking.
Optional: Enable support for customer-managed encryption keys (CMEK) for local storage if you want to integrate with Cloud Key Management Service to enable support for CMEK for your workload data. For information about how Distributed Cloud connected encrypts workload data, see Local storage security.
Create a node pool. In this step, you assign nodes to a node pool and optionally configure the node pool to use Cloud KMS to wrap and unwrap the Linux Unified Key Setup (LUKS) passphrase for encrypting workload data.
Obtain credentials for a cluster so that you can test the cluster (see the example after this list).
Grant users access to the cluster by assigning them the Edge Container Viewer role (roles/edgecontainer.viewer) or the Edge Container Admin role (roles/edgecontainer.admin) on the project.
Optional: Enable GPU support to run GPU-based workloads on Distributed Cloud connected.
Optional: Connect the Distributed Cloud connected cluster to Google Cloud.
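For example, to obtain credentials for a cluster and verify access, you can run commands similar to the following; the exact flags depend on your gcloud CLI version, so treat this as a sketch:

gcloud edge-cloud container clusters get-credentials CLUSTER_NAME \
    --location=REGION

kubectl get nodes

Replace CLUSTER_NAME with the name of your cluster and REGION with the Google Cloud region to which the cluster belongs. The kubectl get nodes command confirms that the credentials work.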
Deploy the NGINX load balancer as a service
The following example illustrates how to deploy the NGINX server and expose it as a service on a Distributed Cloud connected cluster:
Create a YAML file named nginx-deployment.yaml with the following contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Apply the YAML file to the cluster using the following command:
kubectl apply -f nginx-deployment.yaml
Create a YAML file named nginx-service.yaml with the following contents:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Apply the YAML file to the cluster using the following command:
kubectl apply -f nginx-service.yaml
Obtain the external IP address assigned to the service by the MetalLB load balancer using the following command:
kubectl get services
The command returns output similar to the following:
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
nginx-service   LoadBalancer   10.51.195.25   10.100.68.104   8080:31966/TCP   11d
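To confirm that the service is reachable, you can, for example, send an HTTP request to the external IP address on the service port. This assumes that clients on your local network can route to the load balancer's address pool:

curl http://EXTERNAL_IP:8080

Replace EXTERNAL_IP with the address shown in the EXTERNAL-IP column, for example 10.100.68.104. A successful request returns the default NGINX welcome page.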
Deploy a container with SR-IOV functions
The following example illustrates how to deploy a Pod that uses the SR-IOV network function operator features of Distributed Cloud connected.
Create the Distributed Cloud networking components
Create the required networking components for your Distributed Cloud connected deployment as follows. For more information on these components, see Distributed Cloud connected networking features.
Create a network:
gcloud edge-cloud networking networks create NETWORK_NAME \
    --location=REGION \
    --zone=ZONE_NAME \
    --mtu=MTU_SIZE
Replace the following:

NETWORK_NAME: a descriptive name that uniquely identifies this network.
REGION: the Google Cloud region to which the target Distributed Cloud connected zone belongs.
ZONE_NAME: the name of the target Distributed Cloud connected zone.
MTU_SIZE: the maximum transmission unit (MTU) size for this network. Valid values are 1500 and 9000. This value must match the MTU size of the default network and must be the same for all networks.
Create a subnetwork:
gcloud edge-cloud networking subnets create SUBNETWORK_NAME \
    --network=NETWORK_NAME \
    --ipv4-range=IPV4_RANGE \
    --vlan-id=VLAN_ID \
    --location=REGION \
    --zone=ZONE_NAME
Replace the following:

SUBNETWORK_NAME: a descriptive name that uniquely identifies this subnetwork.
NETWORK_NAME: the network that encapsulates this subnetwork.
IPV4_RANGE: the IPv4 address range that this subnetwork covers, in IP address/prefix format.
VLAN_ID: the target VLAN ID for this subnetwork.
REGION: the Google Cloud region to which the target Distributed Cloud connected zone belongs.
ZONE_NAME: the name of the target Distributed Cloud connected zone.

A filled-in example of this command follows this list.
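For example, the following invocation uses hypothetical values that are consistent with the sample Pod output later in this guide (VLAN ID 100 and the 192.168.100.0/25 range); substitute your own network name, region, and zone:

gcloud edge-cloud networking subnets create subnet-vlan100 \
    --network=NETWORK_NAME \
    --ipv4-range=192.168.100.0/25 \
    --vlan-id=100 \
    --location=us-central1 \
    --zone=ZONE_NAME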
Monitor the subnetwork's status until it has been successfully created:
watch -n 30 'gcloud edge-cloud networking subnets list \
    --location=REGION \
    --zone=ZONE_NAME'
Replace the following:

REGION: the Google Cloud region to which the target Distributed Cloud connected zone belongs.
ZONE_NAME: the name of the target Distributed Cloud connected zone.
The status progresses from PENDING to PROVISIONING and finally to RUNNING.

Record the VLAN ID, the subnetwork CIDR block, and the gateway IP address for the CIDR block. You will use these values later in this procedure.
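If you need to look these values up again later, you can, for example, inspect the subnetwork; this assumes that your version of the gcloud CLI provides a describe command for subnets:

gcloud edge-cloud networking subnets describe SUBNETWORK_NAME \
    --location=REGION \
    --zone=ZONE_NAME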
Configure the NodeSystemConfigUpdate resources

Configure a NodeSystemConfigUpdate network function operator resource for each node in the cluster as follows.
List the nodes running in the target cluster's node pool using the following command:
kubectl get nodes | grep -v master
The command returns output similar to the following:
NAME                              STATUS   ROLES    AGE   VERSION
pool-example-node-1-01-b2d82cc7   Ready    <none>   2d    v1.22.8-gke.200
pool-example-node-1-02-52ddvfc9   Ready    <none>   2d    v1.22.8-gke.200
Record the returned node names and derive their short names. For example, for the pool-example-node-1-01-b2d82cc7 node, the short name is node101.

For each node that you recorded in the previous step, create a dedicated NodeSystemConfigUpdate resource file with the following contents:

apiVersion: networking.gke.io/v1
kind: NodeSystemConfigUpdate
metadata:
  name: nodesystemconfigupdate-NODE_SHORT_NAME
  namespace: nf-operator
spec:
  kubeletConfig:
    cpuManagerPolicy: Static
    topologyManagerPolicy: SingleNumaNode
  nodeName: NODE_NAME
  osConfig:
    hugePagesConfig:
      ONE_GB: 2
      TWO_MB: 0
    isolatedCpusPerSocket:
      "0": 40
      "1": 40
    sysctls:
      nodeLevel:
        net.core.rmem_max: "8388608"
        net.core.wmem_max: "8388608"
Replace the following:

NODE_NAME: the full name of the target node. For example, pool-example-node-1-01-b2d82cc7.
NODE_SHORT_NAME: the short name of the target node, derived from its full name. For example, node101.
Name each file node-system-config-update-NODE_SHORT_NAME.yaml.

Apply each of the NodeSystemConfigUpdate resource files to the cluster using the following command:

kubectl apply -f node-system-config-update-NODE_SHORT_NAME.yaml

Replace NODE_SHORT_NAME with the short name of the corresponding target node.

When you apply the resources to the cluster, each affected node reboots, which can take up to 30 minutes.
Monitor the status of the affected nodes until they have all successfully rebooted:
kubectl get nodes | grep -v master
The status of each node transitions from NotReady to Ready as its reboot completes.
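After a node reports Ready again, you can, for example, confirm that the huge pages configuration took effect by inspecting the node's allocatable resources:

kubectl get node NODE_NAME -o jsonpath='{.status.allocatable}'

Replace NODE_NAME with the full name of the node. With the configuration shown earlier, you can expect the output to report hugepages-1Gi as 2Gi.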
Configure the ToR switches for SR-IOV network functions
Follow the steps in this section to configure the network interfaces in each Distributed Cloud ToR switch in the Distributed Cloud connected rack for SR-IOV network functions operation.
Create a file named mlnc6-pcie1-tor1-sriov.yaml with the following contents. This file configures the first network interface on the first ToR switch:

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlnx6-pcie1-tor1-sriov
  namespace: sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: false
  linkType: eth
  mtu: 9000
  nicSelector:
    pfNames:
    - enp59s0f0np0
  nodeSelector:
    edgecontainer.googleapis.com/network-sriov.capable: "true"
  numVfs: 31
  priority: 99
  resourceName: mlnx6_pcie1_tor1_sriov
Create a file named mlnc6-pcie1-tor2-sriov.yaml with the following contents. This file configures the second network interface on the first ToR switch:

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlnx6-pcie1-tor2-sriov
  namespace: sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: false
  linkType: eth
  mtu: 9000
  nicSelector:
    pfNames:
    - enp59s0f1np1
  nodeSelector:
    edgecontainer.googleapis.com/network-sriov.capable: "true"
  numVfs: 31
  priority: 99
  resourceName: mlnx6_pcie1_tor2_sriov
Create a file named mlnc6-pcie2-tor1-sriov.yaml with the following contents. This file configures the first network interface on the second ToR switch:

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlnx6-pcie2-tor1-sriov
  namespace: sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: false
  linkType: eth
  mtu: 9000
  nicSelector:
    pfNames:
    - enp216s0f0np0
  nodeSelector:
    edgecontainer.googleapis.com/network-sriov.capable: "true"
  numVfs: 31
  priority: 99
  resourceName: mlnx6_pcie2_tor1_sriov
Create a file named mlnc6-pcie2-tor2-sriov.yaml with the following contents. This file configures the second network interface on the second ToR switch:

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlnx6-pcie2-tor2-sriov
  namespace: sriov-network-operator
spec:
  deviceType: netdevice
  isRdma: false
  linkType: eth
  mtu: 9000
  nicSelector:
    pfNames:
    - enp216s0f1np1
  nodeSelector:
    edgecontainer.googleapis.com/network-sriov.capable: "true"
  numVfs: 31
  priority: 99
  resourceName: mlnx6_pcie2_tor2_sriov
Apply the ToR configuration files to the cluster using the following commands:
kubectl apply -f mlnc6-pcie1-tor1-sriov.yaml
kubectl apply -f mlnc6-pcie1-tor2-sriov.yaml
kubectl apply -f mlnc6-pcie2-tor1-sriov.yaml
kubectl apply -f mlnc6-pcie2-tor2-sriov.yaml
The affected nodes are cordoned off, drained, and rebooted.
Monitor the status of the nodes using the following command:
watch -n 5 'kubectl get sriovnetworknodestates -o yaml -A | \
    grep "syncStatus\|pool-" | sed "N;s/\n/ /"'
When all affected nodes show syncStatus: Succeeded, press Ctrl+C to exit the monitoring command loop.

Output similar to the following indicates that the SR-IOV network function features have been enabled on the ToR switches:
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                        Requests      Limits
  --------                        --------      ------
  cpu                             2520m (3%)    7310m (9%)
  memory                          3044Mi (1%)   9774Mi (3%)
  ephemeral-storage               0 (0%)        0 (0%)
  hugepages-1Gi                   0 (0%)        0 (0%)
  hugepages-2Mi                   0 (0%)        0 (0%)
  devices.kubevirt.io/kvm         0             0
  devices.kubevirt.io/tun         0             0
  devices.kubevirt.io/vhost-net   0             0
  gke.io/mlnx6_pcie1_tor1_sriov   3             3
  gke.io/mlnx6_pcie1_tor2_sriov   0             0
  gke.io/mlnx6_pcie2_tor1_sriov   0             0
  gke.io/mlnx6_pcie2_tor2_sriov   0             0
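You can view an allocation summary like this for a specific node with kubectl describe node, for example:

kubectl describe node NODE_NAME | grep -A 20 "Allocated resources"

Replace NODE_NAME with the full name of a worker node, such as pool-example-node-1-01-b2d82cc7.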
Configure a NetworkAttachmentDefinition resource

Configure a NetworkAttachmentDefinition resource for the cluster as follows:

Create a file named network-attachment-definition.yaml with the following contents:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: gke.io/mlnx6_pcie1_tor1_sriov
spec:
  config: '{
    "type": "sriov",
    "cniVersion": "0.3.1",
    "vlan": VLAN_ID,
    "name": "sriov-network",
    "ipam": {
      "type": "host-local",
      "subnet": "SUBNETWORK_CIDR",
      "routes": [{
        "dst": "0.0.0.0/0"
      }],
      "gateway": "GATEWAY_ADDRESS"
    }
  }'
Replace the following:

VLAN_ID: the VLAN ID of the subnetwork that you created earlier in this guide.
SUBNETWORK_CIDR: the CIDR block for the subnetwork.
GATEWAY_ADDRESS: the gateway IP address for the subnetwork.
Apply the resource to the cluster using the following command:
kubectl apply -f network-attachment-definition.yaml
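To confirm that the resource exists in the intended namespace, you can, for example, list the cluster's NetworkAttachmentDefinition resources:

kubectl get network-attachment-definitions -A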
Deploy a Pod with SR-IOV network functions
Complete the steps in this section to deploy a Pod with SR-IOV network functions on the cluster. The annotations field in the Pod's configuration file specifies the name of the NetworkAttachmentDefinition resource you created earlier in this guide and the namespace in which it has been deployed (default in this example).
Create a Pod specification file named sriovpod.yaml with the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: sriovpod
  annotations:
    k8s.v1.cni.cncf.io/networks: default/sriov-net1
spec:
  containers:
  - name: sleeppodsriov
    command: ["sh", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: busybox
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
Apply the Pod specification file to the cluster using the following command:
kubectl apply -f sriovpod.yaml
Verify that the Pod has successfully started using the following command:
kubectl get pods
Establish a command-line shell for the Pod using the following command:
kubectl exec -it sriovpod -- sh
Confirm that the Pod is communicating with the ToR switches through the SR-IOV network function operator feature by running the following command in the Pod shell:
ip addr
The command returns output similar to the following:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
51: net1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 9000 qdisc mq qlen 1000
    link/ether 2a:af:96:a5:42:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.11/25 brd 192.168.100.127 scope global net1
       valid_lft forever preferred_lft forever
228: eth0@if229: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    link/ether 46:c9:1d:4c:bf:32 brd ff:ff:ff:ff:ff:ff
    inet 10.10.3.159/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::44c9:1dff:fe4c:bf32/64 scope link
       valid_lft forever preferred_lft forever
The information returned for the net1 interface indicates that network connectivity between the ToR switches and the Pod has been established.
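As an additional check, you can, for example, verify reachability from the Pod shell by pinging the subnetwork's gateway address that you recorded earlier; this assumes that the gateway responds to ICMP:

ping -c 3 GATEWAY_ADDRESS

Replace GATEWAY_ADDRESS with the gateway IP address for the subnetwork's CIDR block.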
Limitations for Distributed Cloud workloads
When you configure your Distributed Cloud connected workloads, you must adhere to the limitations described in this section. These limitations are enforced by Distributed Cloud connected on all the workloads that you deploy on your Distributed Cloud connected hardware.
Linux workload limitations
Distributed Cloud connected supports only the following Linux capabilities for workloads:
AUDIT_READ
AUDIT_WRITE
CHOWN
DAC_OVERRIDE
FOWNER
FSETID
IPC_LOCK
IPC_OWNER
KILL
MKNOD
NET_ADMIN
NET_BIND_SERVICE
NET_RAW
SETFCAP
SETGID
SETPCAP
SETUID
SYS_CHROOT
SYS_NICE
SYS_PACCT
SYS_PTRACE
SYS_RESOURCE
SYS_TIME
Namespace restrictions
Distributed Cloud connected does not support the following namespaces:
hostPID
hostIPC
hostNetwork
Resource type restrictions
Distributed Cloud connected does not support the CertificateSigningRequest resource type, which allows a client to ask for an X.509 certificate to be issued based on a signing request.
Security context restrictions
Distributed Cloud connected does not support the privileged mode security context.
Pod binding restrictions
Distributed Cloud connected does not support binding Pods to host ports in the hostNetwork namespace. Additionally, the hostNetwork namespace is not available.
hostPath volume restrictions

Distributed Cloud connected only allows the following hostPath volumes with read/write access (see the example after this list):
/dev/hugepages
/dev/infiniband
/dev/vfio
/dev/char
/sys/devices
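For illustration, a minimal sketch of a Pod that stays within these limits might mount the huge pages device path; the Pod and volume names here are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep infinity"]
    volumeMounts:
    - name: hugepages-dev
      mountPath: /dev/hugepages
  volumes:
  - name: hugepages-dev
    hostPath:
      path: /dev/hugepages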
PersistentVolumeClaim resource type restrictions

Distributed Cloud connected only allows the following PersistentVolumeClaim resource types:
csi
nfs
local
Volume type restrictions
Distributed Cloud connected only allows the following volume types:
configMap
csi
downwardAPI
emptyDir
hostPath
nfs
persistentVolumeClaim
projected
secret
Pod toleration restrictions
Distributed Cloud connected does not allow user-created Pods on control plane nodes. Specifically, Distributed Cloud connected does not allow scheduling Pods that have the following toleration keys (see the example after this list):
""
node-role.kubernetes.io/master
node-role.kubernetes.io/control-plane
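For example, a Pod spec that contains a toleration like the following hypothetical snippet is rejected because it uses one of the disallowed keys:

tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule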
Impersonation restrictions
Distributed Cloud connected does not support user or group impersonation.
Management namespace restrictions
Distributed Cloud connected does not allow access to the following namespaces:
ai-system
ai-speech-system
ai-ocr-system
ai-translation-system
anthos-identity-service
cert-manager
dataproc-system
dataproc-PROJECT_ID
dns-system
g-istio-system
gke-connect
gke-managed-metrics-server
gke-operators
g-ospf-servicecontrol-system
g-ospf-system
gke-system
gpc-backup-system
iam-system
kube-node-lease
kube-public
kube-system, with the exception of deleting ippools.whereabouts.cni.cncf.io
metallb-system, with the exception of editing configMap resources to set load-balancing IP address ranges
nf-operator
oclcm-system
prediction
rm-system
robinio
saas-system
sriov-fec-system
sriov-network-operator
vm-system
PROJECT_ID denotes the ID of the target Google Cloud project.

Avoid using any namespace whose name has the g- prefix. Such namespaces are typically reserved for use by Distributed Cloud connected.
Webhook restrictions
Distributed Cloud connected restricts webhooks as follows:
Any mutating webhook that you create automatically excludes the kube-system namespace.

Mutating webhooks are disabled for the following resource types:

nodes
persistentvolumes
certificatesigningrequests
tokenreviews