Manage virtual machines

This page describes how to manage virtual machines on Distributed Cloud Edge running Anthos VM Runtime. You must be familiar with Anthos VM Runtime before completing the steps on this page. For a list of supported guest operating systems, see Verified guest operating systems for Anthos VM Runtime.

To learn how virtual machines serve as an essential component of the Google Distributed Cloud platform, see Extending Anthos to manage on-premises edge VMs.

Migrate virtual machines from KubeVirt to Anthos VM Runtime

To continue using your existing virtual machines on Distributed Cloud Edge version 1.2.0 and later, you must complete the following steps:

  1. Before your Distributed Cloud Edge deployment is upgraded to version 1.2.0, back up your existing virtual machines and delete them.

  2. Wait for the upgrade to Distributed Cloud Edge version 1.2.0 to complete.

  3. Complete the steps in the rest of this guide to enable Anthos VM Runtime support and re-create your virtual machines.

Enable Anthos VM Runtime support on Distributed Cloud Edge

By default, Anthos VM Runtime virtual machine support is disabled on Distributed Cloud Edge. To enable it, complete the steps in this section. The instructions in this section assume that you have a fully functioning Distributed Cloud Edge cluster.

The VMRuntime resource that configures Anthos VM Runtime support on Distributed Cloud Edge also controls GPU support on your cluster through the enableGPU parameter. Configure the enabled and enableGPU parameters according to your workload needs. You do not have to enable GPU support to enable Anthos VM Runtime support on your Distributed Cloud Edge cluster.

The following table describes the available configurations:

enabled value | enableGPU value | Resulting configuration
--------------|-----------------|------------------------
false         | false           | Workloads run only in containers and cannot use GPU resources.
false         | true            | Workloads run only in containers and can use GPU resources.
true          | true            | Workloads can run on virtual machines and in containers. Both types of workloads can use GPU resources.
true          | false           | Workloads can run on virtual machines and in containers. Neither type of workload can use GPU resources.

If you have already enabled GPU support, modify the VMRuntime resource to add the enabled parameter, set its value to true, and then apply the resource to your Distributed Cloud Edge cluster.
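For example, a VMRuntime resource that turns on both virtual machine and GPU support might look like the following sketch, which combines the enabled and enableGPU parameters described above:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VMRuntime
metadata:
  name: vmruntime
spec:
  # Enable Anthos VM Runtime support
  enabled: true
  # Enable GPU support for both container and VM workloads
  enableGPU: true
  # vmImageFormat must be "raw"
  vmImageFormat: "raw"
```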

Enable the Anthos VM Runtime virtual machine subsystem

To enable the Anthos VM Runtime virtual machine subsystem, complete the following steps:

  1. Create a VMRuntime custom resource with the following contents and apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VMRuntime
    metadata:
      name: vmruntime
    spec:
      # Enable Anthos VM Runtime support
      enabled: true
      # vmImageFormat must be "raw"
      vmImageFormat: "raw"
    

    Do not change the value of the vmImageFormat parameter. Distributed Cloud Edge does not support any other virtual disk formats.

    This process typically takes several minutes to complete.

  2. Use the following command to verify that the VMRuntime custom resource has been applied to your cluster:

    kubectl get vmruntime -o yaml
    

    The command returns output similar to the following example:

     - apiVersion: vm.cluster.gke.io/v1
       kind: VMRuntime
       metadata:
         name: vmruntime
         ...
       spec:
         enabled: true
         vmImageFormat: raw
       status:
         ...
         ready: true
         ...
    
  3. Use the following command to verify that Anthos VM Runtime virtual machine support has been enabled on your cluster:

    kubectl get pods -n vm-system
    

    The command returns output showing the Anthos VM Runtime subsystem Pods running on your cluster, similar to the following example:

    NAME                                                READY   STATUS         RESTARTS        AGE
    cdi-apiserver-6c76c6cf7b-n68wn                      1/1     Running        0               132m
    cdi-deployment-f78fd599-vj7tv                       1/1     Running        0               132m
    cdi-operator-65c4df9647-fcb9d                       1/1     Running        0               134m
    cdi-uploadproxy-7765ffb694-6j7bf                    1/1     Running        0               132m
    macvtap-fjfjr                                       1/1     Running        0               134m
    virt-api-77dd99dbbb-bs2fb                           1/1     Running        0               132m
    virt-api-77dd99dbbb-pqc27                           1/1     Running        0               132m
    virt-controller-5b44dbbbd7-hc222                    1/1     Running        0               132m
    virt-controller-5b44dbbbd7-p8xkk                    1/1     Running        0               132m
    virt-handler-n76fs                                  1/1     Running        0               132m
    virt-operator-86565697d9-fpxqh                      2/2     Running        0               134m
    virt-operator-86565697d9-jnbt7                      2/2     Running        0               134m
    vm-controller-controller-manager-7844d5fb7b-72d8m   2/2     Running        0               134m
    vmruntime-controller-manager-845649c847-m78r9       2/2     Running        0               175m
    

Grant the target namespace access to the Distributed Cloud Edge registry

Before you can create a virtual machine in a namespace, you must grant that namespace access to the Distributed Cloud Edge registry. The registry holds components necessary to create and deploy your virtual machines in the target namespace. Keep in mind that you cannot run virtual machines in namespaces reserved for Distributed Cloud Edge system management. For more information, see Management namespace restrictions.

Complete the following steps to grant your target namespace access to the Distributed Cloud Edge registry:

  1. Patch the default service account in the target namespace with the imagePullSecret key named gcr-pull:

    kubectl patch sa default -p "{\"imagePullSecrets\": [{\"name\": \"gcr-pull\"}]}" -n NAMESPACE
    

    Replace NAMESPACE with the name of the target namespace.

  2. Refresh the associated secret in the target namespace:

    # Delete existing secret.
    kubectl delete secret gcr-pull -n NAMESPACE --ignore-not-found
    # Copy the new secret to the target namespace.
    kubectl get secret gcr-pull -n vm-system -o yaml | sed "s/namespace: vm-system/namespace: NAMESPACE/g" | kubectl apply -f -
    

    Replace NAMESPACE with the name of the target namespace.
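The sed expression in the pipeline above does nothing more than rewrite the namespace field of the exported secret. The following offline sketch shows the same transformation on a minimal sample manifest; the file names and the target namespace my-namespace are illustrative, and the real secret also carries .dockerconfigjson data that is omitted here:

```shell
# Stand-in for the gcr-pull secret exported from the vm-system namespace.
cat > secret-sample.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: gcr-pull
  namespace: vm-system
type: kubernetes.io/dockerconfigjson
EOF

# Same rewrite as in the command above, retargeting the namespace.
sed "s/namespace: vm-system/namespace: my-namespace/g" secret-sample.yaml > patched-secret.yaml

grep "namespace:" patched-secret.yaml
# prints:   namespace: my-namespace
```

In the real pipeline, the patched manifest is piped straight into kubectl apply instead of being written to a file.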

Install the virtctl management tool

You need the virtctl client tool to manage virtual machines on your Distributed Cloud Edge cluster. To install the tool, complete the following steps:

  1. Install the virtctl client tool as a kubectl plugin:

    export VERSION=v0.49.0-anthos1.12-gke.7
    # Download with your own credentials, then install with elevated privileges.
    gsutil cp gs://anthos-baremetal-release/virtctl/${VERSION}/linux-amd64/virtctl /tmp/virtctl
    sudo install -m 755 /tmp/virtctl /usr/local/bin/virtctl
    sudo ln -s virtctl /usr/local/bin/kubectl-virt
    
  2. Verify that the virt plugin is installed:

    kubectl plugin list
    

    If the plugin has been successfully installed, the command's output lists kubectl-virt as one of the plugins.
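kubectl discovers plugins by searching PATH for executables named kubectl-<plugin>, which is why the kubectl-virt symlink created above makes the kubectl virt subcommand work. The following offline sketch illustrates the mechanism with a hypothetical stub plugin:

```shell
# Create a throwaway directory holding a stub plugin named kubectl-hello.
mkdir -p /tmp/kubectl-plugin-demo
cat > /tmp/kubectl-plugin-demo/kubectl-hello <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin stub"
EOF
chmod +x /tmp/kubectl-plugin-demo/kubectl-hello

# With the directory on PATH, `kubectl hello` dispatches to the stub and
# `kubectl plugin list` would report it. The stub can also run directly:
PATH="/tmp/kubectl-plugin-demo:$PATH" kubectl-hello
# prints: hello from a kubectl plugin stub
```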

Provision a virtual machine on Distributed Cloud Edge

This section provides configuration examples that illustrate how to provision a Linux and a Windows virtual machine on a Distributed Cloud Edge cluster. The examples use block storage instantiated as a PersistentVolume.

Provision a Linux virtual machine on Distributed Cloud Edge

The following example illustrates how to provision a Linux virtual machine running Ubuntu Linux 22.04. The installation source is the Ubuntu Linux 22.04 ISO disc image.

  1. Create a PersistentVolumeClaim resource with the following contents for the Ubuntu installation disc image, and then apply it to your cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        app: containerized-data-importer
      name: iso-ubuntu
      annotations:
        cdi.kubevirt.io/storage.import.endpoint: "https://releases.ubuntu.com/jammy/ubuntu-22.04.1-live-server-amd64.iso"
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-block
      volumeMode: Block
      resources:
        requests:
          storage: 5Gi
    
  2. Create a PersistentVolumeClaim resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ubuntuhd
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 15Gi
      storageClassName: local-block
      volumeMode: Block
    
  3. Create a VirtualMachineDisk resource with the following contents for the Ubuntu installation disc image, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineDisk
    metadata:
      name: "ubuntu-iso-disk"
    spec:
      persistentVolumeClaimName: iso-ubuntu
      diskType: cdrom
    
  4. Create a VirtualMachineDisk resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineDisk
    metadata:
      name: "ubuntu-main-disk"
    spec:
      persistentVolumeClaimName: ubuntuhd
    
  5. Create a VirtualMachineType resource with the following contents that specifies the virtual machine's configuration, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineType
    metadata:
      name: small-2-20
    spec:
      cpu:
        vcpus: 2
      memory:
        capacity: 20Gi
    
  6. Create a VirtualMachine resource with the following contents that instantiates and starts the virtual machine on the cluster, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: ubu-vm
      name: ubu-vm #  Propagate the virtual machine name to the VMI
    spec:
      compute:
        virtualMachineTypeName: small-2-20
      interfaces:
        - name: eth0
          networkName: pod-network
          default: true
      disks:
        - virtualMachineDiskName: ubuntu-main-disk
          boot: true
        - virtualMachineDiskName: ubuntu-iso-disk
    
  7. Install Ubuntu Linux on the virtual machine:

    1. Wait for the importer Pod to download the Ubuntu installation disc image.
    2. Check the status of the virtual machine:

      kubectl get gvm VM_NAME
      

      Replace VM_NAME with the name of the virtual machine, ubu-vm in this example.

    3. Log on to the virtual machine:

      kubectl virt vnc VM_NAME
      

      Replace VM_NAME with the name of the virtual machine, ubu-vm in this example.

    4. Complete the Ubuntu Linux installation steps.

  8. Clean up:

    1. Stop the virtual machine:

      kubectl virt stop VM_NAME
      

      Replace VM_NAME with the name of the virtual machine, ubu-vm in this example.

    2. Edit the virtual machine's YAML file to remove the reference to the installation disc image:

      kubectl edit gvm VM_NAME
      

      Replace VM_NAME with the name of the virtual machine, ubu-vm in this example.

    3. Start the virtual machine:

      kubectl virt start VM_NAME
      

      Replace VM_NAME with the name of the virtual machine, ubu-vm in this example.

    4. Delete the VirtualMachineDisk and PersistentVolumeClaim resources for the installation disc image:

      kubectl delete virtualmachinedisk ubuntu-iso-disk
      kubectl delete pvc iso-ubuntu
      

Provision a Windows virtual machine on Distributed Cloud Edge

The following example illustrates how to provision a Windows virtual machine. The steps are similar to provisioning a Linux virtual machine, with the addition of the virtio driver disk image, which is required for installing Windows.

  1. Obtain a licensed copy of Windows and its installation media image.

  2. Create a PersistentVolumeClaim resource named iso-windows for the Windows installation disc image, and then apply it to your cluster. For instructions, see From image.

  3. Create a PersistentVolumeClaim resource with the following contents for the virtio driver, and then apply it to your cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        app: containerized-data-importer
      name: virtio-driver
      annotations:
        cdi.kubevirt.io/storage.import.endpoint: "https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso"
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-block
      volumeMode: Block
      resources:
        requests:
          storage: 1Gi
    
  4. Create a PersistentVolumeClaim resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: windowshd
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 15Gi
      storageClassName: local-block
      volumeMode: Block
    
  5. Create a VirtualMachineDisk resource with the following contents for the Windows installation disc image, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineDisk
    metadata:
      name: "windows-iso-disk"
    spec:
      persistentVolumeClaimName: iso-windows
      diskType: cdrom
    
  6. Create a VirtualMachineDisk resource with the following contents for the virtio driver, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineDisk
    metadata:
      name: "win-virtio-driver"
    spec:
      persistentVolumeClaimName: virtio-driver
      diskType: cdrom
    
  7. Create a VirtualMachineDisk resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineDisk
    metadata:
      name: "windows-main-disk"
    spec:
      persistentVolumeClaimName: windowshd
    
  8. Create a VirtualMachineType resource with the following contents that specifies the virtual machine's configuration, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineType
    metadata:
      name: small-2-20
    spec:
      cpu:
        vcpus: 2
      memory:
        capacity: 20Gi
    
  9. Create a VirtualMachine resource with the following contents that instantiates and starts the virtual machine on the cluster, and then apply it to your cluster:

    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: win-vm
      name: win-vm #  Propagate the virtual machine name to the VMI
    spec:
      compute:
        virtualMachineTypeName: small-2-20
      interfaces:
        - name: eth0
          networkName: pod-network
          default: true
      disks:
        - virtualMachineDiskName: windows-main-disk
          boot: true
        - virtualMachineDiskName: windows-iso-disk
        - virtualMachineDiskName: win-virtio-driver
    
  10. Install Windows on the virtual machine:

    1. Wait for the importer Pod to download the Windows installation disc image.
    2. Check the status of the virtual machine:

      kubectl get gvm VM_NAME
      

      Replace VM_NAME with the name of the virtual machine, win-vm in this example.

    3. Complete the Windows installation by following the steps in Connect to Windows VM and complete OS install.

  11. Clean up:

    1. Stop the virtual machine:

      kubectl virt stop VM_NAME
      

      Replace VM_NAME with the name of the virtual machine, win-vm in this example.

    2. Complete the steps in Detach the ISO image and drivers disk.

Manage virtual machines running on Distributed Cloud Edge

For instructions about managing virtual machines running on Distributed Cloud Edge, see the Anthos VM Runtime documentation.

Configure the ttyS0 device for serial console access to Linux virtual machines

If you plan to access your Linux virtual machines by using the serial console (kubectl virt console), make sure that the ttyS0 serial console device has been configured on the guest operating system. To configure this device, complete the following steps:

  1. Verify that the ttyS0 serial device is present in the guest operating system:

    setserial -g /dev/ttyS0
    
  2. Configure the grub bootloader to use the ttyS0 serial device by adding the following lines to your /etc/default/grub configuration file. The first line replaces your existing GRUB_CMDLINE_LINUX variable.

    GRUB_CMDLINE_LINUX='console=tty0 console=ttyS0,19200n8'
    GRUB_TERMINAL=serial
    GRUB_SERIAL_COMMAND="serial --speed=19200 --unit=0 --word=8 --parity=no --stop=1"
    
  3. Apply the new grub configuration to your boot sector:

    update-grub
    
  4. Restart the virtual machine.
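The grub edits in step 2 can also be scripted. The following sketch applies the same settings to a local copy of the defaults file so it runs anywhere; the sample file contents are illustrative, and on a real guest you would edit /etc/default/grub in place:

```shell
# Local stand-in for /etc/default/grub.
cat > grub.sample <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX=""
EOF

# Replace the existing GRUB_CMDLINE_LINUX line with the serial-console
# kernel arguments shown above.
sed -i "s|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX='console=tty0 console=ttyS0,19200n8'|" grub.sample

# Append the serial terminal settings, matching the values shown above.
cat >> grub.sample <<'EOF'
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=19200 --unit=0 --word=8 --parity=no --stop=1"
EOF

grep GRUB_CMDLINE_LINUX grub.sample
# prints: GRUB_CMDLINE_LINUX='console=tty0 console=ttyS0,19200n8'
```

The serial speed in GRUB_SERIAL_COMMAND (19200) must match the speed in the console= kernel argument, as it does here.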

Disable Anthos VM Runtime on Distributed Cloud Edge

Follow the steps in this section to disable Anthos VM Runtime on Distributed Cloud Edge. Before you can disable Anthos VM Runtime on Distributed Cloud Edge, you must stop and delete all virtual machines on your Distributed Cloud Edge cluster as described in Delete a VM.

To disable Anthos VM Runtime on Distributed Cloud Edge, modify the VMRuntime custom resource by setting the enabled spec parameter to false as follows, and then apply it to your cluster:

apiVersion: vm.cluster.gke.io/v1
kind: VMRuntime
metadata:
  name: vmruntime
spec:
  # Disable Anthos VM Runtime
  enabled: false
  # vmImageFormat must be "raw"
  vmImageFormat: "raw"

Limitations of running virtual machines on Distributed Cloud Edge

The following limitations apply when running virtual machines on Distributed Cloud Edge:

  • The OSType field is not supported in VirtualMachine resource specifications. Because of this, only console and vnc methods are supported for accessing virtual machines running on Distributed Cloud Edge.
  • Virtual machines running on Distributed Cloud Edge do not support virtual networks. Only the pod-network network is supported in VirtualMachine resource specifications on Distributed Cloud Edge.
  • You cannot create a virtual machine on a Distributed Cloud Edge cluster directly by using the kubectl virt command because Distributed Cloud Edge does not provide filesystem storage to virtual machines.
  • Block storage PersistentVolumeClaim resources do not support the qcow2 disk image format.
  • The Containerized Data Importer (CDI) plug-in does not support DataVolume resources on block storage. This is because the plug-in's scratch space only works on filesystem storage. For more information, see Scratch space.

What's next