This tutorial shows you how to create and manage a Linux virtual machine (VM) on a new installation of VM Runtime on Google Distributed Cloud. You learn how to create and define basic dependencies, such as a `StorageClass` and a virtual network, then create a VM that uses these resources. You then learn how to edit the VM, for example to add a new disk.
Objectives
- Configure basic dependencies:
  - Create a `StorageClass` for VM Runtime on Google Distributed Cloud
  - Create a virtual network for your VMs to use
- Create a virtual machine boot disk
- Create a VM
- Edit a VM to add a new virtual disk
Before you begin
To complete this tutorial, you need access to the following resources and tools:
- Access to a Google Distributed Cloud version 1.12.0 (`anthosBareMetalVersion: 1.12.0`) or higher cluster. You can use any cluster type capable of running workloads. If needed, try Google Distributed Cloud on Compute Engine or see the cluster creation overview.
- The `virtctl` client tool installed as a plugin for `kubectl`. If needed, install the `virtctl` client tool. You can verify the plugin installation as shown after this list.
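As a quick check that the plugin is wired up, you can list installed `kubectl` plugins. This sketch assumes `virtctl` was installed as a `kubectl` plugin named `kubectl-virt` on your `PATH`:

```bash
# List installed kubectl plugins; kubectl-virt should appear
# if the virtctl plugin is installed correctly.
kubectl plugin list

# Ask the plugin for its help output to confirm it responds.
kubectl virt --help
```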
Create a StorageClass
You use a `StorageClass` to define the type of storage you make available to VMs. Different storage classes might map to different storage hardware, file systems, or performance characteristics.
Specific instructions for each `StorageClass` and storage partner are beyond the scope of this tutorial.
Anthos Ready storage partners provide qualified Container Storage Interface (CSI) drivers that install and configure the required custom resources for your storage. To install the CSI driver in your cluster, review the list of supported storage partners and follow their instructions.
After you install the CSI driver for your storage platform, one or more storage classes are available in your cluster. You use one of these storage classes to create a virtual hard disk in this tutorial.
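To confirm which storage classes the driver made available, you can list them; the names you see depend on your storage partner:

```bash
# List the storage classes available in the cluster. Use one of
# these names when you create a virtual hard disk later on.
kubectl get storageclass
```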
The following basic example NFS `StorageClass` uses the NFS CSI Driver. You define the NFS server address and path to use in the `StorageClass`. All the nodes in your cluster can then connect to and use this NFS storage:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com
  path: /vm-share
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nconnect=8
  - nfsvers=4.1
```
In this tutorial, use the name of a `StorageClass` in your own cluster.
VM Runtime on Google Distributed Cloud automatically generates one storage profile for each `StorageClass` in a cluster. The storage profile has the same name as the associated `StorageClass`. Storage profiles provide extra configuration options associated with each `StorageClass`. For more information about storage profiles, including configuration instructions, see Configure storage profiles.
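To see the generated profiles, you can list them by resource kind. This assumes the cluster exposes the `StorageProfile` resource under that lowercase name, which is typical for custom resources:

```bash
# List the automatically generated storage profiles; each one
# should match a StorageClass in the cluster by name.
kubectl get storageprofile
```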
Create a virtual network
VMs connect to a virtual network using a virtual network interface. The virtual network lets them communicate with other VMs in the cluster, or with resources outside the cluster.
In this tutorial, you create a basic Layer 2 (`L2`) virtual network that can use an external DHCP server. When you enable the use of an external DHCP server, you can skip configuration of DNS and gateway settings if they're provided by DHCP.
To create a network that uses an external DHCP server, complete the following steps:
1. Create a `Network` manifest, such as `dhcp-network.yaml`, in the editor of your choice:

   ```bash
   nano dhcp-network.yaml
   ```

2. Copy and paste the following YAML manifest:

   ```yaml
   apiVersion: networking.gke.io/v1
   kind: Network
   metadata:
     name: NETWORK_NAME
   spec:
     type: L2
     nodeInterfaceMatcher:
       interfaceName: INTERFACE_NAME
     externalDHCP4: true
   ```

   Replace the following values:

   - `NETWORK_NAME`: the name for your network.
   - `INTERFACE_NAME`: the interface name on your Google Distributed Cloud node to attach the network to. All nodes should have the same interface name.

   In this `Network` manifest, the following values are set:

   - Workloads can have only an `L2` attachment to this network. This is the only network `type` that you can create in VM Runtime on Google Distributed Cloud.
   - The network has external DHCP enabled. The external DHCP server is responsible for IPv4 address allocation, routes, gateway, and DNS configuration for workloads connected to this network.

3. Save and close the `Network` manifest in your editor.

4. Create the network using `kubectl`:

   ```bash
   kubectl apply -f dhcp-network.yaml
   ```
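As an optional check, you can confirm that the custom resource was created. The fully qualified resource name `networks.networking.gke.io` is assumed here to avoid clashing with other resources named `network`:

```bash
# Confirm the Network resource exists and inspect its spec.
kubectl get networks.networking.gke.io NETWORK_NAME -o yaml
```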
Create a VM boot disk
A VM can use a precreated disk image, or boot from an ISO image to manually install the OS. These disk images can be stored and accessed using HTTP, or stored in Cloud Storage and accessed using a `Secret`.
In this tutorial, you create a boot disk from the public Ubuntu Server 20.04 cloud image using HTTP.
To create a disk from an image, complete the following steps.
1. Create a `VirtualMachineDisk` manifest, such as `my-disk.yaml`, in the editor of your choice:

   ```bash
   nano my-disk.yaml
   ```

2. Copy and paste the following YAML definition:

   ```yaml
   apiVersion: vm.cluster.gke.io/v1
   kind: VirtualMachineDisk
   metadata:
     name: DISK_NAME
   spec:
     size: 20Gi
     storageClassName: STORAGE_CLASS_NAME
     source:
       http:
         url: https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img
   ```

   Replace the following values:

   - `DISK_NAME`: the name that you want for your disk. This example creates a `20Gi` (20 gibibyte) disk named `DISK_NAME` using a public Ubuntu Server 20.04 image.
   - `STORAGE_CLASS_NAME`: the `StorageClass` that you want to use for your `VirtualMachineDisk`. Use `kubectl get storageclass` to list what's available in your cluster.

3. Save and close the `VirtualMachineDisk` manifest in your editor.

4. Create the disk using `kubectl`:

   ```bash
   kubectl apply -f my-disk.yaml
   ```
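Creating the disk triggers an image import, which can take several minutes. As an optional check, you can watch the resource until it settles; this assumes the lowercase kind name `virtualmachinedisk` resolves with `kubectl get`, as it does for most custom resources:

```bash
# Watch the VirtualMachineDisk while the Ubuntu image is imported.
kubectl get virtualmachinedisk DISK_NAME -w
```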
Create a VM
With a virtual network and boot disk created in the previous sections, now create a VM. The VM connects to the virtual network and boots from the virtual disk. The following instructions create a VM by directly applying a `VirtualMachine` YAML manifest file with the `kubectl` CLI.
1. Create a manifest that defines a `VirtualMachine`, such as `my-vm.yaml`, in the editor of your choice:

   ```bash
   nano my-vm.yaml
   ```

2. Copy and paste the following YAML definition:

   ```yaml
   apiVersion: vm.cluster.gke.io/v1
   kind: VirtualMachine
   metadata:
     name: VM_NAME
   spec:
     osType: linux
     compute:
       cpu:
         vcpus: VCPU_NUMBER
       memory:
         capacity: MEMORY_SIZE
     interfaces:
       - name: eth0
         networkName: NETWORK_NAME
         default: true
     disks:
       - boot: true
         virtualMachineDiskName: DISK_NAME
   ```

   Replace the following values:

   - `VM_NAME`: the name for your VM.
   - `VCPU_NUMBER`: the optional number of vCPUs to assign to the VM. Without this setting, the default of 2 vCPUs is assigned. You can assign between 1 and 96 vCPUs to a VM.
   - `MEMORY_SIZE`: the optional amount of memory to assign to the VM. Without this setting, the default of 4 GiB of memory is assigned. You can assign between 1M and 1T of memory to a VM. For more information, see Memory resource units.
   - `NETWORK_NAME`: the name of your network created in a previous section.
   - `DISK_NAME`: the name of your boot disk created in the previous section. This disk is set to `boot: true`.

3. Save and close the manifest in your editor.

4. Create the VM and disk using `kubectl`:

   ```bash
   kubectl apply -f my-vm.yaml
   ```
It can take a few minutes to create the VM. Check the status of the VM with the following `kubectl` command:

```bash
kubectl get gvm VM_NAME
```

The following example output shows the VM in a `Running` state:

```
NAME    STATUS    AGE   IP
MY_VM   Running   64s   192.168.2.124
```
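If you'd rather not rerun the command manually, the standard `kubectl` watch flag works here as well:

```bash
# Watch the VM until its STATUS column reports Running.
kubectl get gvm VM_NAME -w
```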
Connect to the VM
When your VM is running, connect to the console of the VM. This console connection lets you perform basic tasks, such as further configuring the VM or installing an application.
To access a VM from the console, use `kubectl`:

```bash
kubectl virt ssh VM_NAME
```
After you successfully connect to the console of the VM, exit the VM session and console:

```
Ctrl + ]
```
Edit the VM
During the lifecycle of your VM, you might want to edit your VM. For example, you might want to add storage to install an application on a dedicated disk, or use additional storage for your application.
In this tutorial, create a blank disk and attach it to the VM. This scenario lets you create a data disk to store application data.
1. Create a `VirtualMachineDisk` manifest, such as `my-data-disk.yaml`, in the editor of your choice:

   ```bash
   nano my-data-disk.yaml
   ```

2. Copy and paste the following YAML definition:

   ```yaml
   apiVersion: vm.cluster.gke.io/v1
   kind: VirtualMachineDisk
   metadata:
     name: DATA_DISK_NAME
   spec:
     size: 10Gi
     storageClassName: STORAGE_CLASS_NAME
   ```

   Replace the following values:

   - `DATA_DISK_NAME`: the name that you want for your data disk. This example creates a `10Gi` (10 gibibyte) disk.
   - `STORAGE_CLASS_NAME`: the `StorageClass` that you want to use for your `VirtualMachineDisk`. Use `kubectl get storageclass` to list what's available in your cluster.

3. Save and close the disk manifest in your editor.

4. Create the disk using `kubectl`:

   ```bash
   kubectl apply -f my-data-disk.yaml
   ```

5. Use `kubectl` to stop your VM before you attach the new virtual disk:

   ```bash
   kubectl virt stop VM_NAME
   ```
6. Edit your VM resource:

   ```bash
   kubectl edit gvm VM_NAME
   ```

7. Update the `VirtualMachine` YAML manifest to attach the disk at the end of the VM's `spec.disks` section:

   ```yaml
   apiVersion: vm.cluster.gke.io/v1
   kind: VirtualMachine
   metadata:
     name: VM_NAME
   spec:
     ...
     disks:
       - boot: true
         virtualMachineDiskName: DISK_NAME
       - virtualMachineDiskName: DATA_DISK_NAME
   ```

   Replace `DATA_DISK_NAME` with the name of your disk created in the previous step.

8. Save and close the updated VM manifest in your editor.

9. Use `kubectl` to start the VM:

   ```bash
   kubectl virt start VM_NAME
   ```
Clean up
To delete the resources created in this tutorial, complete the following steps:
1. Delete your VM:

   ```bash
   kubectl delete -f my-vm.yaml
   ```

2. Delete your `VirtualMachineDisk` resources:

   ```bash
   kubectl delete -f my-data-disk.yaml
   kubectl delete -f my-disk.yaml
   ```
If you don't want to keep the basic dependencies in your cluster, complete the following steps:
1. Delete your virtual network:

   ```bash
   kubectl delete -f dhcp-network.yaml
   ```

2. To remove the CSI driver from your cluster, follow the instructions from your storage partner.