This page describes how to manage virtual machines on Google Distributed Cloud connected by using VM Runtime on GDC. You must be familiar with VM Runtime on GDC before completing the steps on this page. For a list of supported guest operating systems, see Verified guest operating systems for VM Runtime on GDC.
To learn how virtual machines serve as an essential component of the Distributed Cloud connected platform, see Extending GKE Enterprise to manage on-premises edge VMs.
Local control plane clusters support virtual machine webhooks. This allows Distributed Cloud connected to validate user requests made to the local Kubernetes API server. Rejected requests generate detailed information on the reason for rejection.
Enable VM Runtime on GDC support on Distributed Cloud connected
By default, VM Runtime on GDC virtual machine support is disabled on Distributed Cloud connected. To enable it, complete the steps in this section. The instructions in this section assume that you have a fully functioning Distributed Cloud connected cluster.
The `VMRuntime` resource that configures VM Runtime on GDC support on Distributed Cloud connected also configures GPU support on your cluster by using the `enableGPU` parameter. Make sure that you configure the two parameters according to your workload needs. You do not have to enable GPU support to enable VM Runtime on GDC support on your Distributed Cloud connected cluster.
The following table describes the available configurations:
| `enabled` value | `enableGPU` value | Resulting configuration |
|---|---|---|
| `false` | `false` | Workloads run only in containers and cannot use GPU resources. |
| `false` | `true` | Workloads run only in containers and can use GPU resources. |
| `true` | `true` | Workloads can run on virtual machines and in containers. Both types of workloads can use GPU resources. |
| `true` | `false` | Workloads can run on virtual machines and in containers. Neither type of workload can use GPU resources. |
If you have already enabled GPU support, modify the `VMRuntime` resource to add the `enabled` parameter, set its value to `true`, and then apply it to your Distributed Cloud cluster.
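For example, the following sketch flips the flag in place with `kubectl patch` rather than editing the manifest; it assumes the resource is named `vmruntime`, as in the examples on this page:

```sh
# Merge-patch spec.enabled on the existing VMRuntime resource.
kubectl patch vmruntime vmruntime --type merge -p '{"spec":{"enabled":true}}'
```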
Enable the VM Runtime on GDC virtual machine subsystem
Depending on the type of cluster on which you want to enable the VM Runtime on GDC virtual machine subsystem, do one of the following:
- For Cloud control plane clusters, you must manually create the `VMRuntime` resource.
- For local control plane clusters, you must edit the existing `VMRuntime` resource.
To enable the VM Runtime on GDC virtual machine subsystem, complete the following steps:
Depending on the target cluster type, create or modify the `VMRuntime` custom resource with the following contents and apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VMRuntime
metadata:
  name: vmruntime
spec:
  # Enable Anthos VM Runtime support
  enabled: true
  # vmImageFormat defaults to "raw" if not set
  vmImageFormat: "raw"
```
Do not change the value of the `vmImageFormat` parameter. Distributed Cloud connected does not support any other virtual disk formats.

This process typically takes several minutes to complete.
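If you saved the manifest to a file, you can apply it and then block until the resource reports ready. This is a minimal sketch; the `kubectl wait` form assumes kubectl 1.23 or later and the `status.ready` field shown in the verification output below, and `vmruntime.yaml` is a hypothetical file name:

```sh
kubectl apply -f vmruntime.yaml

# Poll until status.ready flips to true (can take several minutes).
kubectl wait vmruntime/vmruntime --for=jsonpath='{.status.ready}'=true --timeout=15m
```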
Use the following command to verify that the `VMRuntime` custom resource has been applied to your cluster:

```sh
kubectl get vmruntime -o yaml
```
The command returns output similar to the following example:
```yaml
- apiVersion: vm.cluster.gke.io/v1
  kind: VMRuntime
  metadata:
    name: vmruntime
    ...
  spec:
    enabled: true
    vmImageFormat: raw
  status:
    ...
    ready: true
    ...
```
Use the following command to verify that VM Runtime on GDC virtual machine support has been enabled on your cluster:
```sh
kubectl get pods -n vm-system
```
The command returns output showing the VM Runtime on GDC subsystem Pods running on your cluster, similar to the following example:
```
NAME                                                READY   STATUS    RESTARTS   AGE
cdi-apiserver-6c76c6cf7b-n68wn                      1/1     Running   0          132m
cdi-deployment-f78fd599-vj7tv                       1/1     Running   0          132m
cdi-operator-65c4df9647-fcb9d                       1/1     Running   0          134m
cdi-uploadproxy-7765ffb694-6j7bf                    1/1     Running   0          132m
macvtap-fjfjr                                       1/1     Running   0          134m
virt-api-77dd99dbbb-bs2fb                           1/1     Running   0          132m
virt-api-77dd99dbbb-pqc27                           1/1     Running   0          132m
virt-controller-5b44dbbbd7-hc222                    1/1     Running   0          132m
virt-controller-5b44dbbbd7-p8xkk                    1/1     Running   0          132m
virt-handler-n76fs                                  1/1     Running   0          132m
virt-operator-86565697d9-fpxqh                      2/2     Running   0          134m
virt-operator-86565697d9-jnbt7                      2/2     Running   0          134m
vm-controller-controller-manager-7844d5fb7b-72d8m   2/2     Running   0          134m
vmruntime-controller-manager-845649c847-m78r9       2/2     Running   0          175m
```
Grant the target namespace access to the Distributed Cloud connected registry
The steps in this section only apply to Cloud control plane clusters. If you are configuring the VM Runtime on GDC virtual machine subsystem on a local control plane cluster, skip this section.
Before you can create a virtual machine in a namespace, you must grant that namespace access to the Distributed Cloud connected registry. The registry holds components necessary to create and deploy your virtual machines in the target namespace. Keep in mind that you cannot run virtual machines in namespaces reserved for Distributed Cloud connected system management. For more information, see Management namespace restrictions.
Complete the following steps to grant your target namespace access to the Distributed Cloud connected registry:
Patch the default service account in the target namespace with the `imagePullSecret` key named `gcr-pull`:

```sh
kubectl patch sa default -p "{\"imagePullSecrets\": [{\"name\": \"gcr-pull\"}]}" -n NAMESPACE
```
Replace `NAMESPACE` with the name of the target namespace.

Refresh the associated secret in the target namespace:
```sh
# Delete existing secret.
kubectl delete secret gcr-pull -n NAMESPACE --ignore-not-found

# Copy the new secret to the target namespace.
kubectl get secret gcr-pull -n vm-system -o yaml | sed "s/namespace: vm-system/namespace: NAMESPACE/g" | kubectl apply -f -
```
Replace `NAMESPACE` with the name of the target namespace.

The secret expires after one hour. You must manually refresh it after it expires.
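Because the secret expires hourly, you might wrap the refresh in a small script and run it on a schedule. This is a sketch under the assumption that the commands above are the complete refresh procedure; the script name is hypothetical, and you should adjust the namespace and schedule for your environment:

```sh
#!/usr/bin/env bash
# refresh-gcr-pull.sh (hypothetical helper): re-copies the gcr-pull secret
# from vm-system into a target namespace. Example cron entry:
#   */50 * * * * /usr/local/bin/refresh-gcr-pull.sh my-namespace
set -euo pipefail

NAMESPACE="$1"

kubectl delete secret gcr-pull -n "${NAMESPACE}" --ignore-not-found
kubectl get secret gcr-pull -n vm-system -o yaml \
  | sed "s/namespace: vm-system/namespace: ${NAMESPACE}/g" \
  | kubectl apply -f -
```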
Install the virtctl management tool
You need the `virtctl` client tool to manage virtual machines on your Distributed Cloud connected cluster. To install the tool, complete the following steps:
Install the `virtctl` client tool as a `kubectl` plugin:

```sh
export VERSION=v0.49.0-anthos1.12-gke.7
gcloud storage cp gs://anthos-baremetal-release/virtctl/${VERSION}/linux-amd64/virtctl /usr/local/bin/virtctl
cd /usr/local/bin
sudo ln -s virtctl kubectl-virt
sudo chmod a+x virtctl
cd -
```
Verify that the `virt` plugin is installed:

```sh
kubectl plugin list
```

If the plugin has been successfully installed, the command's output lists `kubectl-virt` as one of the plugins.
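With the plugin in place, the `kubectl virt` subcommands used in the rest of this page become available. The following summary shows the invocations that appear in later sections; flags and additional subcommands vary by release:

```sh
kubectl virt start VM_NAME     # start a stopped virtual machine
kubectl virt stop VM_NAME      # stop a running virtual machine
kubectl virt console VM_NAME   # attach to the guest's serial console
kubectl virt vnc VM_NAME       # open a VNC session to the guest
```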
Provision a virtual machine on Distributed Cloud connected with raw block storage
This section provides configuration examples that illustrate how to provision a Linux virtual machine and a Windows virtual machine on a Distributed Cloud connected cluster with raw block storage. The examples use block storage instantiated as a `PersistentVolume`.
Limitations of using raw block storage
The following limitations apply when running virtual machines with raw block storage on Distributed Cloud connected:
- The `OSType` field is not supported in `VirtualMachine` resource specifications on Cloud control plane clusters. Because of this, only the `console` and `vnc` methods are supported for accessing virtual machines running on Cloud control plane clusters.
- You cannot create a virtual machine on a Distributed Cloud connected cluster directly by using the `kubectl virt` command because Distributed Cloud connected does not provide file system storage to virtual machines.
- Block storage `PersistentVolumeClaim` resources do not support the `qcow2` disk image format.
- The Containerized Data Importer (CDI) plug-in does not support `DataVolume` resources on block storage because the plug-in's scratch space only works on file system storage. For more information, see Scratch space.
Provision a Linux virtual machine on Distributed Cloud connected with raw block storage
The following example illustrates how to provision a Linux virtual machine with raw block storage running Ubuntu Server 22.04. The installation source is the Ubuntu Server 22.04 ISO disc image.
Create a `PersistentVolumeClaim` resource with the following contents for the Ubuntu Server installation disc image, and then apply it to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: containerized-data-importer
  name: iso-ubuntu
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "https://releases.ubuntu.com/jammy/ubuntu-22.04.3-live-server-amd64.iso"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-block
  volumeMode: Block
  resources:
    requests:
      storage: 5Gi
```
Create a `PersistentVolumeClaim` resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntuhd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  storageClassName: local-block
  volumeMode: Block
```
Create a `VirtualMachineDisk` resource with the following contents for the Ubuntu Server installation disc image, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: "ubuntu-iso-disk"
spec:
  persistentVolumeClaimName: iso-ubuntu
  diskType: cdrom
```
Create a `VirtualMachineDisk` resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: "ubuntu-main-disk"
spec:
  persistentVolumeClaimName: ubuntuhd
```
Create a `VirtualMachineType` resource with the following contents that specifies the virtual machine's configuration, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineType
metadata:
  name: small-2-20
spec:
  cpu:
    vcpus: 2
  memory:
    capacity: 20Gi
```
Create a `VirtualMachine` resource with the following contents that instantiates and starts the virtual machine on the cluster, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: ubu-vm
  name: ubu-vm # Propagate the virtual machine name to the VMI
spec:
  osType: Linux
  compute:
    virtualMachineTypeName: small-2-20
  interfaces:
    - name: eth0
      networkName: pod-network
      default: true
  disks:
    - virtualMachineDiskName: ubuntu-main-disk
      boot: true
    - virtualMachineDiskName: ubuntu-iso-disk
```
The `osType` field only applies to local control plane clusters, where it is required to configure operating system-specific features.

Install Ubuntu Server on the virtual machine:
- Wait for the importer Pod to download the Ubuntu Server installation disc image. One way to watch for its completion is sketched after these steps.
- Check the status of the virtual machine:

  ```sh
  kubectl get gvm VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Log on to the virtual machine:

  ```sh
  kubectl virt vnc VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Complete the Ubuntu Linux installation steps.
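A minimal sketch of watching the import, assuming the CDI importer Pod carries the `app=containerized-data-importer` label, as its `PersistentVolumeClaim` does in the example above, and runs in the same namespace as the claim:

```sh
# Watch the importer Pod for the ISO image; press Ctrl+C once it completes.
kubectl get pods -l app=containerized-data-importer -w

# Confirm the backing claim is bound.
kubectl get pvc iso-ubuntu
```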
Clean up:

- Stop the virtual machine:

  ```sh
  kubectl virt stop VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Edit the virtual machine's YAML file to remove the reference to the installation disc image:

  ```sh
  kubectl edit gvm VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Start the virtual machine:

  ```sh
  kubectl virt start VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Delete the `VirtualMachineDisk` and `PersistentVolumeClaim` resources for the installation disc image:

  ```sh
  kubectl delete virtualmachinedisk ubuntu-iso-disk
  kubectl delete pvc iso-ubuntu
  ```
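If you want to confirm the cleanup, listing the remaining resources should show only the virtual hard disk and its claim (`ubuntu-main-disk` and `ubuntuhd` in this example):

```sh
# The installation disc image resources should no longer be listed.
kubectl get virtualmachinedisk
kubectl get pvc
```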
Provision a Windows virtual machine on Distributed Cloud connected with raw block storage
The following example illustrates how to provision a Windows virtual machine with raw block storage. The steps are similar to provisioning a Linux virtual machine, with the addition of the `virtio` driver disk image, which is required for installing Windows.
Obtain a licensed copy of Windows and its installation media image.
Create a `PersistentVolumeClaim` resource for the Windows installation disc image, and then apply it to your cluster. For instructions, see From image.

Create a `PersistentVolumeClaim` resource with the following contents for the `virtio` driver, and then apply it to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: containerized-data-importer
  name: virtio-driver
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-block
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
```
Create a `PersistentVolumeClaim` resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: windowshd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  storageClassName: local-block
  volumeMode: Block
```
Create a `VirtualMachineDisk` resource with the following contents for the Windows installation disc image, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: "windows-iso-disk"
spec:
  persistentVolumeClaimName: iso-windows
  diskType: cdrom
```
Create a `VirtualMachineDisk` resource with the following contents for the `virtio` driver, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: "win-virtio-driver"
spec:
  persistentVolumeClaimName: virtio-driver
  diskType: cdrom
```
Create a `VirtualMachineDisk` resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: "windows-main-disk"
spec:
  persistentVolumeClaimName: windowshd
```
Create a `VirtualMachineType` resource with the following contents that specifies the virtual machine's configuration, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineType
metadata:
  name: small-2-20
spec:
  cpu:
    vcpus: 2
  memory:
    capacity: 20Gi
```
Create a `VirtualMachine` resource with the following contents that instantiates and starts the virtual machine on the cluster, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: win-vm
  name: win-vm # Propagate the virtual machine name to the VMI
spec:
  osType: Windows
  compute:
    virtualMachineTypeName: small-2-20
  interfaces:
    - name: eth0
      networkName: pod-network
      default: true
  disks:
    - virtualMachineDiskName: windows-main-disk
      boot: true
    - virtualMachineDiskName: windows-iso-disk
    - virtualMachineDiskName: win-virtio-driver
```
The `osType` field only applies to local control plane clusters, where it is required to configure operating system-specific features.

Install Windows on the virtual machine:
- Wait for the importer Pod to download the Windows installation disc image.
- Check the status of the virtual machine:

  ```sh
  kubectl get gvm VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`win-vm` in this example).
- Complete the Windows installation by following the steps in Connect to Windows VM and complete OS install.
Clean up:

- Stop the virtual machine:

  ```sh
  kubectl virt stop VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`win-vm` in this example).
- Complete the steps in Detach the ISO image and drivers disk.
Provision a virtual machine on Distributed Cloud connected with Symcloud Storage
This section provides configuration examples that illustrate how to provision a Linux virtual machine and a Windows virtual machine on a Distributed Cloud connected cluster with the Symcloud Storage abstraction layer.
Before completing the steps in this section, you must first complete the steps in Configure Distributed Cloud connected for Symcloud Storage. If you later disable Symcloud Storage on the cluster, virtual machines configured to use Symcloud Storage will fail.
Provision a Linux virtual machine on Distributed Cloud connected with Symcloud Storage
The following example illustrates how to provision a Linux virtual machine with Symcloud Storage running Ubuntu Server 22.04. The installation source is the Ubuntu Server 22.04 ISO disc image.
Create a `VirtualMachineDisk` resource with the following contents for the Ubuntu Server installation disc image, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: ubuntu-iso-disk
spec:
  size: 20Gi
  storageClassName: robin
  diskType: cdrom
  source:
    http:
      url: https://releases.ubuntu.com/jammy/ubuntu-22.04.3-live-server-amd64.iso
```
Create a `VirtualMachineDisk` resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: "ubuntu-main-disk"
spec:
  size: 200Gi
  storageClassName: robin
```
Create a `VirtualMachineType` resource with the following contents that specifies the virtual machine's configuration, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineType
metadata:
  name: small-2-20
spec:
  cpu:
    vcpus: 2
  memory:
    capacity: 20Gi
```
Create a `VirtualMachine` resource with the following contents that instantiates and starts the virtual machine on the cluster, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: ubu-vm
  name: ubu-vm # Propagate the virtual machine name to the VMI
spec:
  osType: Linux
  compute:
    virtualMachineTypeName: small-2-20
  interfaces:
    - name: eth0
      networkName: pod-network
      default: true
  disks:
    - virtualMachineDiskName: ubuntu-main-disk
      boot: true
    - virtualMachineDiskName: ubuntu-iso-disk
```
The `osType` field only applies to local control plane clusters, where it is required to configure operating system-specific features.

Install Ubuntu Server on the virtual machine:
- Wait for the importer Pod to download the Ubuntu Server installation disc image.
- Check the status of the virtual machine:

  ```sh
  kubectl get gvm VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Log on to the virtual machine:

  ```sh
  kubectl virt vnc VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Complete the Ubuntu Linux installation steps.
Clean up:

- Stop the virtual machine:

  ```sh
  kubectl virt stop VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Edit the virtual machine's YAML file to remove the reference to the installation disc image:

  ```sh
  kubectl edit gvm VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Start the virtual machine:

  ```sh
  kubectl virt start VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`ubu-vm` in this example).
- Delete the `VirtualMachineDisk` resource for the installation disc image:

  ```sh
  kubectl delete virtualmachinedisk ubuntu-iso-disk
  ```
Provision a Windows virtual machine on Distributed Cloud connected with Symcloud Storage
The following example illustrates how to provision a Windows virtual machine with Symcloud Storage. The steps are similar to provisioning a Linux virtual machine, with the addition of the `virtio` driver disk image, which is required for installing Windows.
Obtain a licensed copy of Windows and its installation media image.
Create a `VirtualMachineDisk` resource with the following contents for the Windows installation disc image, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: windows-iso-disk
  namespace: default
spec:
  size: 5Gi
  storageClassName: robin
  diskType: cdrom
  source:
    http:
      url: WINDOWS_ISO_URL
```
Replace `WINDOWS_ISO_URL` with the full URL to the target Windows installation ISO disc image.

Create a `VirtualMachineDisk` resource with the following contents for the `virtio` driver, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: windows-virtio-driver
  namespace: default
spec:
  size: 1Gi
  storageClassName: robin
  diskType: cdrom
  source:
    http:
      url: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
```
Create a `VirtualMachineDisk` resource with the following contents for the virtual machine's virtual hard disk, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineDisk
metadata:
  name: windows-main-disk
  namespace: default
spec:
  size: 15Gi
  storageClassName: robin
```
Create a `VirtualMachineType` resource with the following contents that specifies the virtual machine's configuration, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachineType
metadata:
  name: small-2-20
spec:
  cpu:
    vcpus: 2
  memory:
    capacity: 20Gi
```
Create a `VirtualMachine` resource with the following contents that instantiates and starts the virtual machine on the cluster, and then apply it to your cluster:

```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: win-vm
  name: win-vm # Propagate the virtual machine name to the VMI
spec:
  osType: Windows
  compute:
    virtualMachineTypeName: small-2-20
  interfaces:
    - name: eth0
      networkName: pod-network
      default: true
  disks:
    - virtualMachineDiskName: windows-main-disk
      boot: true
    - virtualMachineDiskName: windows-iso-disk
    - virtualMachineDiskName: windows-virtio-driver
```
The `osType` field only applies to local control plane clusters, where it is required to configure operating system-specific features.

Install Windows on the virtual machine:
- Wait for the importer Pod to download the Windows installation disc image.
- Check the status of the virtual machine:

  ```sh
  kubectl get gvm VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`win-vm` in this example).
- Complete the Windows installation by following the steps in Connect to Windows VM and complete OS install.
Clean up:

- Stop the virtual machine:

  ```sh
  kubectl virt stop VM_NAME
  ```

  Replace `VM_NAME` with the name of the virtual machine (`win-vm` in this example).
- Complete the steps in Detach the ISO image and drivers disk.
Provision a virtual machine on Distributed Cloud connected using virtctl
If you do not require the customization provided by writing your own resource specifications for your virtual machines, you can provision a virtual machine on Distributed Cloud connected by using the `virtctl` command-line tool as described in Create a VM.
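As an illustration, a one-line creation with the plugin might look like the following sketch. The `create vm` subcommand and its flags are assumptions based on the VM Runtime on GDC tooling and may differ by version, so check `kubectl virt create vm --help` for the exact syntax:

```sh
# Hypothetical example: create a VM named my-vm from a stock image.
kubectl virt create vm my-vm --image ubuntu20.04
```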
Manage virtual machines running on Distributed Cloud connected
For instructions about managing virtual machines running on Distributed Cloud connected, see the VM Runtime on GDC documentation.

To manage virtual machines running on local control plane clusters, you must first complete the steps in Configure kubectl connectivity.
Configure the ttyS0 device for serial console access to Linux virtual machines
If you plan to access your Linux virtual machines by using the serial console (`kubectl virt console`), make sure that the `ttyS0` serial console device has been configured on the guest operating system. To configure this device, complete the following steps:
Verify that the `ttyS0` serial device is present in the system:

```sh
setserial -g /dev/ttyS0
```
Configure the `grub` bootloader to use the `ttyS0` serial device by adding the following lines to your `/etc/default/grub` configuration file. The first line replaces your existing `GRUB_CMDLINE_LINUX` variable.

```sh
GRUB_CMDLINE_LINUX='console=tty0 console=ttyS0,19200n8'
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=19200 --unit=0 --word=8 --parity=no --stop=1"
```
Apply the new `grub` configuration:

```sh
update-grub
```
Restart the virtual machine.
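After the restart, you can confirm that the serial console is reachable from your workstation. This sketch uses the `kubectl virt console` command referenced above; replace `VM_NAME` with the virtual machine's name:

```sh
# Attach to the guest's serial console; press the escape sequence
# (typically Ctrl+]) to disconnect.
kubectl virt console VM_NAME
```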
Disable VM Runtime on GDC on Distributed Cloud connected
Follow the steps in this section to disable VM Runtime on GDC on Distributed Cloud connected. Before you can disable VM Runtime on GDC on Distributed Cloud connected, you must stop and delete all the virtual machines on your Distributed Cloud connected cluster as described in Delete a VM.
To disable VM Runtime on GDC on Distributed Cloud connected, modify the `VMRuntime` custom resource by setting the `enabled` spec parameter to `false` as follows, and then apply it to your cluster:
```yaml
apiVersion: vm.cluster.gke.io/v1
kind: VMRuntime
metadata:
  name: vmruntime
spec:
  # Disable Anthos VM Runtime
  enabled: false
  # vmImageFormat defaults to "raw" if not set
  vmImageFormat: "raw"
```
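To verify that the subsystem is shutting down, you can re-check the `vm-system` namespace. This is a sketch; the exact set of remaining Pods depends on your release:

```sh
# The vm-system workloads listed earlier should wind down after the change.
kubectl get pods -n vm-system
```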
What's next
- Deploy workloads on Distributed Cloud connected
- Manage GPU workloads
- Manage zones
- Manage machines
- Manage clusters
- Manage node pools