This document is intended for application owners and platform administrators who run Google Distributed Cloud. This document shows you how to create and use VM types, or manually specify CPU and memory resources, when you create VMs that use VM Runtime on GDC.

Before you begin

To complete this document, you need access to the following resources:

- Access to a cluster running Google Distributed Cloud version 1.12.0 (`anthosBareMetalVersion: 1.12.0`) or higher. You can use any cluster type capable of running workloads. If needed, [try Google Distributed Cloud on Compute Engine](/kubernetes-engine/distributed-cloud/bare-metal/docs/try/gce-vms) or see the [cluster creation overview](/kubernetes-engine/distributed-cloud/bare-metal/docs/installing/creating-clusters/create-clusters-overview).
- The `virtctl` client tool installed as a plugin for `kubectl`. If needed, [install the virtctl client tool](/kubernetes-engine/distributed-cloud/bare-metal/docs/vm-runtime/quickstart#install_the_virtctl_client_tool).

Create a VM

When you create a VM, you can manually specify the CPU and memory requirements. This ability lets you create VMs with the appropriate compute resources to match your application needs.

To create a VM and manually specify the CPU and memory requirements, use the following steps.

CLI

- Use `kubectl` to create a VM:

      kubectl virt create vm VM_NAME \
        --image ubuntu20.04 \
        --cpu CPU_NUMBER \
        --memory MEMORY_SIZE

  Replace the following values:

  - VM_NAME: the name for your VM. For more information on name constraints, see [Object names and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
  - CPU_NUMBER: the number of virtual CPUs (vCPUs) to assign to the VM. You can assign between 1 and 96 vCPUs to a VM.
  - MEMORY_SIZE: the amount of memory to assign to the VM. You can assign between 1M and 1T of memory to a VM. For more information, see [Memory resource units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory).

  **Note:** In the `~/google-virtctl` directory, a `VM_NAME.yaml` file is created. You can view the content of this file to see the definition of the Kubernetes resources that were created by VM Runtime on GDC.

Manifest

1. Create a `VirtualMachine` manifest, such as `my-custom-vm.yaml`, in the editor of your choice:

       nano my-custom-vm.yaml

2. Copy and paste the following YAML manifest:

       apiVersion: vm.cluster.gke.io/v1
       kind: VirtualMachine
       metadata:
         name: VM_NAME
       spec:
         compute:
           cpu:
             vcpus: VCPU_NUMBER
           memory:
             capacity: MEMORY_SIZE
         interfaces:
           - name: eth0
             networkName: pod-network
             default: true
         disks:
           - virtualMachineDiskName: VM_NAME-boot-dv
             boot: true

   In this YAML file, define the following settings:

   - VM_NAME: the name for your VM. For more information on name constraints, see [Object names and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
   - VCPU_NUMBER: the number of vCPUs to assign to the VM. You can assign between 1 and 96 vCPUs to a VM.
   - MEMORY_SIZE: the amount of memory to assign to the VM. You can assign between 1M and 1T of memory to a VM. For more information, see [Memory resource units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory).

   The VM connects `eth0` to the default `pod-network` network.

   The boot disk named `VM_NAME-boot-dv` must already exist. For more information, see [Create and manage virtual disks](/kubernetes-engine/distributed-cloud/bare-metal/docs/vm-runtime/create-manage-disks).
3. Save and close the VM manifest in your editor.

4. Create the VM using `kubectl`:

       kubectl apply -f my-custom-vm.yaml
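After you apply the manifest, you can check that the VM object was created. The following commands are a minimal sketch; they assume the `gvm` short name that VM Runtime on GDC registers for the `virtualmachines.vm.cluster.gke.io` resource and a VM in the current namespace. If the short name isn't recognized in your cluster, use the fully qualified resource name shown in the second command.

    # Check that the VM object exists and watch it reach a running state.
    kubectl get gvm VM_NAME

    # If the gvm short name is not registered in your cluster, use the fully
    # qualified resource name instead.
    kubectl get virtualmachines.vm.cluster.gke.io VM_NAME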
Create and use VM types

When you enable VM Runtime on GDC, a new `VirtualMachineType` custom resource definition is available. This definition is used to specify the CPU and memory resources of a VM. You can create VM types for the different workloads you need, and apply a consistent set of compute resources to VMs based on these types.

If VM Runtime on GDC is enabled in Google Distributed Cloud, the `vm-controller-manager` installs a predefined VM type. The following definition shows the default `example-machinetype` VM type:
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-01(UTC)"],[],[],null,["This document is intended for application owners and platform administrators\nthat run Google Distributed Cloud. This document shows you how to create and use VM\ntypes or manually specify CPU and memory resources when you create VMs that use\nVM Runtime on GDC.\n\nBefore you begin\n\nTo complete this document, you need access to the following resources:\n\n- Access to Google Distributed Cloud version 1.12.0 (`anthosBareMetalVersion: 1.12.0`) or higher cluster. You can use any cluster type capable of running workloads. If needed, [try Google Distributed Cloud on Compute Engine](/kubernetes-engine/distributed-cloud/bare-metal/docs/try/gce-vms) or see the [cluster creation overview](/kubernetes-engine/distributed-cloud/bare-metal/docs/installing/creating-clusters/create-clusters-overview).\n- The `virtctl` client tool installed as a plugin for `kubectl`. If needed, [install the virtctl client tool](/kubernetes-engine/distributed-cloud/bare-metal/docs/vm-runtime/quickstart#install_the_virtctl_client_tool).\n\nCreate a VM\n\nWhen you create a VM, you can manually specify the CPU and memory requirements.\nThis ability lets you create VMs with the appropriate compute resources to match\nyour application needs.\n\nTo create a VM and manually specify the CPU and memory requirements, use the\nfollowing steps. \n\nCLI\n\n- Use `kubectl` to create a VM:\n\n kubectl virt create vm \u003cvar label=\"vm_name\" translate=\"no\"\u003eVM_NAME\u003c/var\u003e \\\n --image ubuntu20.04 \\\n --cpu \u003cvar label=\"vcpu_number\" translate=\"no\"\u003eCPU_NUMBER\u003c/var\u003e \\\n --memory \u003cvar label=\"memory_size\" translate=\"no\"\u003eMEMORY_SIZE\u003c/var\u003e\n\n Replace the following values:\n - \u003cvar translate=\"no\"\u003eVM_NAME\u003c/var\u003e: the name for your VM. For more information on name constraints, see [Object names and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).\n - \u003cvar translate=\"no\"\u003eCPU_NUMBER\u003c/var\u003e: The number of virtual CPUs (vCPUs)to assign to the VM.\n - You can assign between 1 and 96 vCPUs to a VM.\n - \u003cvar translate=\"no\"\u003eMEMORY_SIZE\u003c/var\u003e: The amount of memory to assign to the VM.\n - You can assign between 1M and 1T of memory to a VM. For more information, see [Memory resource units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory).\n\n | **Note:** In the `~/google-virtctl` directory, a \u003cvar scope=\"VM_NAME\" translate=\"no\"\u003eVM_NAME\u003c/var\u003e`.yaml` file is created. You can view the content of this file to see the definition of the Kubernetes resources that were created by the VM Runtime on GDC.\n\nManifest\n\n1. Create a `VirtualMachine` manifest, such as *my-custom-vm.yaml*, in the\n editor of your choice:\n\n nano my-custom-vm.yaml\n\n2. 
    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineType
    metadata:
      name: "example-machinetype"
      labels:
        vm.cluster.gke.io/predefined-machinetype: "true"
    spec:
      cpu:
        vcpus: 2
      memory:
        capacity: 4G

You can't update this predefined VM type. If the predefined VM type doesn't exist in the cluster, it is reinstalled each time the `vm-controller-manager` starts or restarts, such as after you delete the VM type.
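Before you reference a VM type from a VM, you can confirm which types exist in the cluster. This is a minimal sketch: the plural resource name `virtualmachinetypes` is assumed from the usual CRD naming convention, so if it isn't recognized, run `kubectl api-resources` to find the exact name.

    # List the VM types installed in the cluster, including example-machinetype.
    kubectl get virtualmachinetypes.vm.cluster.gke.io

    # Inspect the CPU and memory settings of a specific type.
    kubectl describe virtualmachinetypes.vm.cluster.gke.io example-machinetype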
Create a VM type

You can create your own VM types to fit the compute needs of your workloads.

1. Create a `VirtualMachineType` manifest, such as `my-vm-type.yaml`, in the editor of your choice:

       nano my-vm-type.yaml

2. Copy and paste the following YAML manifest:

       apiVersion: vm.cluster.gke.io/v1
       kind: VirtualMachineType
       metadata:
         name: my-vm-type
       spec:
         cpu:
           vcpus: VCPU_NUMBER
         memory:
           capacity: MEMORY_SIZE

   In this VM type, define the following settings:

   - my-vm-type: the name for your VM type. For more information on name constraints, see [Object names and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
   - VCPU_NUMBER: the number of vCPUs to assign to VMs that use this type. You can assign between 1 and 96 vCPUs to a VM.
   - MEMORY_SIZE: the amount of memory to assign to VMs that use this type. You can assign between 1M and 1T of memory to a VM. For more information, see [Memory resource units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory).

3. Save and close the VM type manifest in your editor.

4. Create the VM type using `kubectl`:

       kubectl apply -f my-vm-type.yaml
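Because a VM type is just a named bundle of `compute` settings, you can define one type per workload profile and reuse it across VMs. The following manifest is an illustrative sketch only: the type names `web-small` and `db-large` and their sizes are hypothetical examples, not types that ship with VM Runtime on GDC.

    # Illustrative only: two custom VM types for different workload profiles.
    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineType
    metadata:
      name: web-small     # hypothetical type for lightweight web servers
    spec:
      cpu:
        vcpus: 2
      memory:
        capacity: 4G
    ---
    apiVersion: vm.cluster.gke.io/v1
    kind: VirtualMachineType
    metadata:
      name: db-large      # hypothetical type for memory-heavy databases
    spec:
      cpu:
        vcpus: 8
      memory:
        capacity: 32G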
Create a VM using a VM type

Specify a VM type in your `VirtualMachine` manifest to apply predefined `compute` settings to your VM.

1. Create a `VirtualMachine` manifest, such as `my-custom-vm.yaml`, in the editor of your choice:

       nano my-custom-vm.yaml

2. Copy and paste the following YAML manifest:

       apiVersion: vm.cluster.gke.io/v1
       kind: VirtualMachine
       metadata:
         name: VM_NAME
       spec:
         compute:
           virtualMachineTypeName: my-vm-type
         interfaces:
           - name: eth0
             networkName: pod-network
             default: true
         disks:
           - virtualMachineDiskName: VM_NAME-boot-dv
             boot: true

   In this YAML file, specify the name of the custom VM type that you created in the previous section, such as `my-vm-type`, as the value for `virtualMachineTypeName`.

   The VM connects `eth0` to the default `pod-network` network.

   The boot disk named `VM_NAME-boot-dv` must already exist. For more information, see [Create and manage virtual disks](/kubernetes-engine/distributed-cloud/bare-metal/docs/vm-runtime/create-manage-disks).

3. Save and close the VM manifest in your editor.

4. Create the VM using `kubectl`:

       kubectl apply -f my-custom-vm.yaml

What's next

- [Edit a VM in Google Distributed Cloud](/kubernetes-engine/distributed-cloud/bare-metal/docs/vm-runtime/edit-vm).
- When you no longer need VMs, [delete a VM in Google Distributed Cloud](/kubernetes-engine/distributed-cloud/bare-metal/docs/vm-runtime/delete).