Create and use virtual networks for VM Runtime on Google Distributed Cloud

This document is intended for application owners and platform administrators who run GKE on Bare Metal. It shows you how to create and use virtual networks to support VM workloads that use VM Runtime on Google Distributed Cloud.

Before you begin

To complete the tasks in this document, you need access to the following resources:

Virtual networks overview

Networks are created using custom resources. A network can be created at any time after your cluster is created. Network settings for the host interface and VLAN ID assignment, if defined, can't be changed after you create a network.

Network deletion is subject to some conditions. For example, the network controller rejects the deletion of a network that is in use by other resources, such as VMs or network interfaces.
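
For example, a minimal sketch of removing a network that is no longer in use, assuming a Network resource named my-network with no VMs or network interfaces attached. The fully qualified resource name is used to avoid ambiguity with other resource types:

  kubectl delete networks.networking.gke.io my-network

If the network is still referenced by a VM or a network interface, the controller rejects the request and the network remains until you remove those dependencies.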

The network definition can include the gateway, routes, and DNS information. You can also enable the use of an external DHCP server. These network settings are assigned statically or dynamically, depending on how certain network configuration options are defined.

Default pod-network

Each cluster has a pod-network created for you by default. This network can't be changed. Routes for the pod CIDR and the service CIDR, and the DNS configuration, are automatically populated. The DNS configuration uses the same values as the cluster.

Workloads that need an interface to access the pod network of the cluster, and that don't need any specific configuration options, can use the pod-network. The routes of the pod-network are always configured to ensure cluster and service access for workloads, even when the default gateway isn't on the pod-network interface.

This default pod-network lets you test VM Runtime on Google Distributed Cloud without additional steps to create your own virtual networks. Many of our docs use this default pod-network to reduce the complexity of the examples. The needs of your VM workloads determine if this default pod-network is sufficient, or if you need to create and use your own virtual networks.

The following YAML manifest shows a sample configuration for the pod-network. Values for routes, DNS, and the interface name have been populated by the cluster:

  apiVersion: networking.gke.io/v1
  kind: Network
  metadata:
    name: pod-network
  spec:
    routes:
    - to: 192.168.0.0/16
    - to: 10.96.0.0/12
    dnsConfig:
      nameservers:
      - 10.96.0.10
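
To see the values that were populated for your own cluster, you can print the resource, assuming kubectl access to the cluster. The fully qualified resource name is used to avoid ambiguity with other resource types:

  kubectl get networks.networking.gke.io pod-network -o yaml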

Create and use virtual networks

To support production workloads, create networks that support the features you need, such as the use of an external DHCP server or the use of a VLAN ID. These networks provide layer 2 (L2) connectivity for your VMs.

Before you start to create networks, verify the following cluster configuration option:

  1. Verify that the multipleNetworkInterfaces field in the clusterNetwork section of your cluster configuration is set to false. This field is used for multi-NIC for Pods and should be disabled for VM Runtime on Google Distributed Cloud. The cluster configuration file is stored as bmctl-workspace/CLUSTER-NAME/CLUSTER-NAME.yaml unless you specified a different location. A quick way to check the field is shown after the following snippet.

    If your cluster configuration defines multipleNetworkInterfaces: true, you must create another cluster and then enable VM Runtime on Google Distributed Cloud.

    ...
    clusterNetwork:
      multipleNetworkInterfaces: false
    ...
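
    For example, a quick check of this field, assuming the default workspace path shown previously, is to print the clusterNetwork section of the configuration file:

    grep -A 3 'clusterNetwork:' bmctl-workspace/CLUSTER-NAME/CLUSTER-NAME.yaml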
    

Use an external DHCP server

VM Runtime on Google Distributed Cloud doesn't provide DHCP servers. You must manually specify IP addresses for VMs or configure the use of external DHCP servers. When you enable the use of an external DHCP server, you can skip configuration of DNS and gateway settings if they're provided by DHCP.

To create a network that uses an external DHCP server, complete the following steps:

  1. Create a Network manifest, such as use-dhcp-network.yaml, in the editor of your choice:

    nano use-dhcp-network.yaml
    
  2. Copy and paste the following YAML manifest:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: NETWORK_NAME
    spec:
      type: L2
      nodeInterfaceMatcher:
        interfaceName: INTERFACE_NAME
      externalDHCP4: true
    

    Replace the following values:

    • NETWORK_NAME: the name for your network.
    • INTERFACE_NAME: the interface name on your GKE on Bare Metal node to attach the network to. Specify the name of the physical interface on your node to use. All nodes in your cluster should have the same interface name.

    In this Network manifest, the following values are set:

    • type is set to L2. With this setting, workloads can only have a Layer 2 attachment to this network. This is the only network type that you can create in VM Runtime on Google Distributed Cloud.
    • externalDHCP4 is set to true. This setting enables external DHCP for the network. The external DHCP server is responsible for IPv4 address allocation, routes, gateway, and DNS configuration for workloads connected to this network.
  3. Save and close the Network manifest in your editor.

  4. Create the network using kubectl:

    kubectl apply -f use-dhcp-network.yaml
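
To confirm that the network was created and accepted by the network controller, you can describe the resource, assuming kubectl access to the cluster. The fully qualified resource name is used to avoid ambiguity with other resource types:

  kubectl describe networks.networking.gke.io NETWORK_NAME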
    

Manually define network settings

VM Runtime on Google Distributed Cloud doesn't provide DHCP servers. You must manually specify IP addresses for VMs or configure the use of external DHCP servers. If you manually specify IP addresses, you must define network settings for DNS, routes, and default gateway.

To create a network with manually specified network settings for VMs, complete the following steps:

  1. Create a Network manifest, such as manual-network.yaml, in the editor of your choice:

    nano manual-network.yaml
    
  2. Copy and paste the following YAML manifest:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: NETWORK_NAME
    spec:
      type: L2
      nodeInterfaceMatcher:
        interfaceName: INTERFACE_NAME
      routes:
      - to: "ROUTE_ADDRESS"
      gateway4: GATEWAY_ADDRESS
      dnsConfig:
        nameservers:
        - NAMESERVER_ADDRESS
    

    Replace the following values:

    • NETWORK_NAME: the name for your network.
    • INTERFACE_NAME: the interface name on your GKE on Bare Metal node to attach the network to. Specify the name of the physical interface on your node to use. All nodes in your cluster should have the same interface name.
    • ROUTE_ADDRESS: optional routes in CIDR notation to configure on every VM that connects to this network.
    • GATEWAY_ADDRESS: the gateway IP address for your VMs to use.
    • NAMESERVER_ADDRESS: one or more DNS nameserver IP addresses for your VMs to use.
  3. Save and close the Network manifest in your editor.

  4. Create the network using kubectl:

    kubectl apply -f manual-network.yaml
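
For example, a manifest with sample values filled in might look like the following. The network name, interface name, and addresses are illustrative placeholders (documentation address ranges), not values from your environment:

  apiVersion: networking.gke.io/v1
  kind: Network
  metadata:
    name: static-vm-network
  spec:
    type: L2
    nodeInterfaceMatcher:
      interfaceName: eno1
    routes:
    - to: "203.0.113.0/24"
    gateway4: 192.0.2.1
    dnsConfig:
      nameservers:
      - 192.0.2.53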
    

Use a VLAN ID

When you create virtual networks, you can define tagged VLANs. These VLAN assignments help you isolate network traffic based on your workload and isolation requirements. In an AnthosManaged network, the cluster has permission to create and delete the VLAN interface on every node.

To create a network that defines a VLAN assignment, complete the following steps:

  1. Create a Network manifest, such as vlan-network.yaml, in the editor of your choice:

    nano vlan-network.yaml
    
  2. Copy and paste the following YAML manifest:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: NETWORK_NAME
    spec:
      type: L2
      networkLifecycle: AnthosManaged
      l2NetworkConfig:
        vlanID: VLAN_ID
      nodeInterfaceMatcher:
        interfaceName: INTERFACE_NAME
      externalDHCP4: true
    

    Replace the following values:

    • NETWORK_NAME: the name for your network.
    • INTERFACE_NAME: the interface name on your GKE on Bare Metal node to attach the network to. Specify the name of the physical interface on your node to use. All nodes in your cluster should have the same interface name.
    • VLAN_ID: the VLAN ID you want to tag traffic for.

    In this Network manifest, the following values are set:

    • Workloads can have only an L2 attachment to this network.
    • The network is AnthosManaged. This setting is the default lifecycle if not specified.
      • In this mode, the cluster has permission to create and delete the VLAN interface on every node, such as INTERFACE_NAME.VLAN_ID.
      • If you want to create, or have already created, the VLAN interfaces on the nodes, set the networkLifecycle value to UserManaged as shown in the next section.
    • The network has external DHCP enabled. The external DHCP server is responsible for IPv4 address allocation, routes, gateway, and DNS configuration for workloads connected to this network.
  3. Save and close the Network manifest in your editor.

  4. Create the network using kubectl:

    kubectl apply -f vlan-network.yaml
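
Because the network is AnthosManaged, the cluster creates the VLAN interface, named INTERFACE_NAME.VLAN_ID, on each node for you. As an optional sanity check, you can confirm that the interface exists on a node. For example, assuming a parent interface eno1 and VLAN ID 100:

  ip link show eno1.100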
    

Create a user-managed network

In the following example virtual network, the network is user-managed, as opposed to the Anthos-managed network in the previous example. In user-managed networks, you're responsible for creating and deleting the VLAN interface on the host, as shown in the following sketch.
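
For example, a minimal sketch of creating the VLAN interface on a node with standard iproute2 commands, assuming a parent interface eno1 and VLAN ID 100. Run the equivalent commands on every node before you create the network:

  sudo ip link add link eno1 name eno1.100 type vlan id 100
  sudo ip link set eno1.100 up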

To create a network in a user-managed mode and manually define the VLAN interface configuration, complete the following steps:

  1. Create a Network manifest, such as user-managed-network.yaml, in the editor of your choice:

    nano user-managed-network.yaml
    
  2. Copy and paste the following YAML definition:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: NETWORK_NAME
    spec:
      type: L2
      networkLifecycle: UserManaged
      l2NetworkConfig:
        vlanID: VLAN_ID
      nodeInterfaceMatcher:
        interfaceName: INTERFACE_NAME
      externalDHCP4: true
    

    Replace the following values:

    • NETWORK_NAME: the name for your network.
    • INTERFACE_NAME: the host interface to attach the network to.
    • VLAN_ID: the VLAN ID you want to tag traffic for.

    In this Network manifest, the following values are set:

    • Workloads can have only an L2 attachment to this network.
    • The network is UserManaged. You must create the VLAN interface for VLAN_ID on every node before you create the network, and delete those interfaces after you delete the network.
    • The network has external DHCP enabled. The external DHCP server is responsible for IPv4 address allocation, routes, gateway, and DNS configuration for workloads connected to this network.
  3. Save and close the Network manifest in your editor.

  4. Create the network using kubectl:

    kubectl apply -f user-managed-network.yaml
    

Connect a VM to a network

Network settings for your VM, such as DNS and DHCP, are assigned statically or dynamically depending on how certain network configuration options are defined:

  • If you configure a static IP address on the VM, no query is sent to a DHCP server. Additional information, such as the gateway and routes, must come from the network resource.
  • If you don't configure a static IP address on the VM, a query is sent to the DHCP server. The VM gets all information from the DHCP server and ignores any configuration you define in the network resource.
  • If external DHCP isn't set to true in the network resource, you must configure a static IP address for the VM. All other information comes from the configuration you define in the network resource.

To create a VM that connects to a network, complete the following steps:

CLI

  • To create a VM using kubectl, run the following command:

    kubectl virt create vm VM_NAME \
      --image ubuntu20.04 \
      --network NETWORK_NAME
    

    Replace the following values:

    • VM_NAME: the name for your VM.
    • NETWORK_NAME: the name of your network to connect to.
      • If the network is configured to allow the use of external DHCP servers, the VM gets an IP address assignment automatically. If you need to define a static IP address, add the optional --ip IP_ADDRESS parameter and value, as in the example that follows.
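
    For example, a sketch with sample values filled in, assuming a VM named my-vm, a network named my-network, and a static address from the documentation range. The address format here mirrors the CIDR notation used in the manifest example later in this document; adjust it if your tooling expects a different format:

    kubectl virt create vm my-vm \
      --image ubuntu20.04 \
      --network my-network \
      --ip 192.0.2.10/24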

Manifest

To create a VM using a YAML manifest, complete the following steps:

  1. Create a VirtualMachine manifest, such as my-vm.yaml, in the editor of your choice:

    nano my-vm.yaml
    
  2. Copy and paste the following YAML manifest:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachine
    metadata:
      name: VM_NAME
    spec:
      interfaces:
        - name: eth0
          networkName: NETWORK_NAME
          ipAddresses:
            - IP_ADDRESS
          default: true
      disks:
        - virtualMachineDiskName: VM_NAME-boot-dv
          boot: true
    

    In this YAML manifest, define the following settings:

    • VM_NAME: the name for your VM.
    • NETWORK_NAME: the name of your network to connect to.
    • IP_ADDRESS: the IP address in CIDR notation to assign to your VM, such as 192.0.2.10/24.
      • If your network is configured to allow the use of external DHCP servers, remove this field from the VirtualMachine manifest.

    The boot disk named VM_NAME-boot-dv must already exist. For more information, see Create a VM boot disk.

  3. Save and close the VirtualMachine manifest in your editor.

  4. Create the VM using kubectl:

    kubectl apply -f my-vm.yaml
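
To check that the VM was created and to see its network information, you can query the VirtualMachine resource, assuming the gvm short name for VM Runtime VirtualMachine resources is available in your cluster:

  kubectl get gvm VM_NAME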
    

What's next