The `--metadata` flag specifies that the NVIDIA driver should be installed on your behalf. The value is `install-nvidia-driver=True`. If specified, Compute Engine loads the latest stable driver on first boot and performs the necessary steps, including a final reboot to activate the driver.
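For reference, a GPU-enabled create command that includes this flag looks like the following; the image family, zone, and accelerator type are example values, so adjust them for your project:

```
# Example values; choose the image family, zone, and GPU type you need.
export IMAGE_FAMILY="tf-ent-latest-gpu"
export ZONE="us-west1-b"
export INSTANCE_NAME="my-instance"

gcloud compute instances create $INSTANCE_NAME \
  --zone=$ZONE \
  --image-family=$IMAGE_FAMILY \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --accelerator="type=nvidia-tesla-v100,count=1" \
  --metadata="install-nvidia-driver=True"
```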
If you've elected to install NVIDIA drivers, allow 3-5 minutes for the installation to complete.
It can take up to 5 minutes for the VM to be fully provisioned. During this time, you cannot connect to the machine over SSH. When the installation is complete, you can verify that the driver installed successfully by connecting over SSH and running `nvidia-smi`.
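For example, a quick verification might look like the following; the instance name and zone are placeholders:

```
# Connect to the instance (replace the name and zone with your own values).
gcloud compute ssh my-instance --zone=us-west1-b

# On the VM, confirm the driver is loaded. A successful installation prints
# the driver version and lists the attached GPUs.
nvidia-smi
```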
When you've configured your image, you can save a snapshot of it so that you can start derivative instances without having to wait for the driver installation.
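As an illustrative sketch of one way to do this, you can create a custom image from the configured VM's boot disk and launch derivative instances from it (a custom image rather than a disk snapshot); the instance, image, and zone names below are hypothetical, and the VM should normally be stopped before its disk is imaged:

```
# Create a reusable image from the configured VM's boot disk.
# Stop the VM first, or add --force to image a disk that is still in use.
gcloud compute images create my-dlvm-image \
  --source-disk=my-instance \
  --source-disk-zone=us-west1-b

# Launch a derivative instance from the saved image; the driver is already installed.
gcloud compute instances create my-derived-instance \
  --zone=us-west1-b \
  --image=my-dlvm-image \
  --maintenance-policy=TERMINATE \
  --accelerator="type=nvidia-tesla-v100,count=1"
```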
Creating a preemptible instance
You can create a preemptible Deep Learning VM instance. A preemptible instance is an instance you can create and run at a much lower price than normal instances. However, Compute Engine might stop (preempt) these instances if it requires access to those resources for other tasks. Preemptible instances always stop after 24 hours. To learn more about preemptible instances, see Preemptible VM instances.
To create a preemptible Deep Learning VM instance:
Follow the instructions above to create a new instance, and append the following to the end of the `gcloud compute instances create` command (a complete example follows the flag below):
--preemptible
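For example, a minimal preemptible, CPU-only instance following the same create pattern (the values shown are examples):

```
# Example: a preemptible, CPU-only Deep Learning VM instance.
export IMAGE_FAMILY="tf-ent-latest-cpu"
export ZONE="us-west1-b"
export INSTANCE_NAME="my-instance"

gcloud compute instances create $INSTANCE_NAME \
  --zone=$ZONE \
  --image-family=$IMAGE_FAMILY \
  --image-project=deeplearning-platform-release \
  --preemptible
```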
What's next
For instructions on connecting to your new Deep Learning VM instance through the Google Cloud console or the command line, see Connecting to instances. Your instance name is the deployment name you specified with `-vm` appended.
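As a quick sketch of the command-line path (the instance name and zone are placeholders), you can connect with `gcloud compute ssh`; forwarding port 8080 is optional and assumes the image's JupyterLab service is listening on that port:

```
# SSH into the instance; replace the name and zone with your own values.
gcloud compute ssh my-deployment-vm --zone=us-west1-b

# Optionally forward port 8080 to reach JupyterLab at http://localhost:8080
# (assumes the image serves JupyterLab on port 8080).
gcloud compute ssh my-deployment-vm --zone=us-west1-b -- -L 8080:localhost:8080
```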
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-04-21(UTC)"],[[["\u003cp\u003eThis guide outlines the process of creating a new Deep Learning VM instance using the \u003ccode\u003egcloud\u003c/code\u003e command-line tool, either with or without GPUs.\u003c/p\u003e\n"],["\u003cp\u003eBefore creating an instance, you must install and initialize the Google Cloud CLI, choosing the appropriate Deep Learning VM image based on your preferred framework and processor type.\u003c/p\u003e\n"],["\u003cp\u003eTo create a CPU-only instance, use the \u003ccode\u003egcloud compute instances create\u003c/code\u003e command with CPU-specific image families, specifying the \u003ccode\u003edeeplearning-platform-release\u003c/code\u003e image project.\u003c/p\u003e\n"],["\u003cp\u003eCreating an instance with GPUs requires using a GPU-specific image family, setting the \u003ccode\u003emaintenance-policy\u003c/code\u003e to \u003ccode\u003eTERMINATE\u003c/code\u003e, and specifying the desired GPU type and count via the \u003ccode\u003e--accelerator\u003c/code\u003e option, with a note that the \u003ccode\u003e--metadata\u003c/code\u003e flag will assist you in the installation of NVIDIA drivers.\u003c/p\u003e\n"],["\u003cp\u003eYou can create a preemptible Deep Learning VM instance, which is a lower-cost option, by adding the \u003ccode\u003e--preemptible\u003c/code\u003e flag to the \u003ccode\u003egcloud compute instances create\u003c/code\u003e command.\u003c/p\u003e\n"]]],[],null,["# Create a Deep Learning VM instance from the command line\n\nThis topic contains instructions for creating a new Deep Learning VM Images instance\nfrom the command line. You can use the `gcloud` command-line\ntool with your preferred SSH application or in Cloud Shell.\n\nBefore you begin\n----------------\n\nTo use the Google Cloud CLI to create a new Deep Learning VM\ninstance, you must first install and initialize the [Google Cloud CLI](/sdk/docs):\n\n1. Download and install the Google Cloud CLI using the instructions given on [Installing Google Cloud CLI](/sdk/downloads).\n2. Initialize the SDK using the instructions given on [Initializing Cloud\n SDK](/sdk/docs/initializing).\n\nTo use `gcloud` in Cloud Shell, first activate Cloud Shell using the\ninstructions given on [Starting Cloud Shell](/shell/docs/starting-cloud-shell).\n\nNext, choose the specific Deep Learning VM image to use. Your choice\ndepends on your preferred framework and processor type. For more information\nabout the available images, see [Choosing an\nImage](/deep-learning-vm/docs/images).\n| **Note:** If you are using GPUs with your Deep Learning VM, check the quotas page to ensure that you have enough GPUs available in your project: [Quotas](https://console.cloud.google.com/quotas). If GPUs are not listed on the quotas page or you require additional GPU quota, you can request a quota increase. 
See \"[Requesting additional quota](/compute/quotas#requesting_additional_quota)\" on the Compute Engine [Resource Quotas](/compute/quotas) page.\n\nCreating an instance without GPUs\n---------------------------------\n\nTo provision a Deep Learning VM instance with a CPU but no GPU: \n\n export IMAGE_FAMILY=\"tf-ent-latest-cpu\"\n export ZONE=\"us-west1-b\"\n export INSTANCE_NAME=\"my-instance\"\n\n gcloud compute instances create $INSTANCE_NAME \\\n --zone=$ZONE \\\n --image-family=$IMAGE_FAMILY \\\n --image-project=deeplearning-platform-release\n\nOptions:\n\n- `--image-family` must be one of the CPU-specific image types. For more\n information, see [Choosing an Image](/deep-learning-vm/docs/images).\n\n- `--image-project` must be `deeplearning-platform-release`.\n\nCreating an instance with one or more GPUs\n------------------------------------------\n\nCompute Engine offers the option of adding GPUs to your virtual machine\ninstances. GPUs offer faster processing for many complex data and machine\nlearning tasks. To learn more about GPUs, see [GPUs on\nCompute Engine](/compute/docs/gpus).\n\nTo provision a Deep Learning VM instance with one or more GPUs: \n\n export IMAGE_FAMILY=\"tf-ent-latest-gpu\"\n export ZONE=\"us-west1-b\"\n export INSTANCE_NAME=\"my-instance\"\n\n gcloud compute instances create $INSTANCE_NAME \\\n --zone=$ZONE \\\n --image-family=$IMAGE_FAMILY \\\n --image-project=deeplearning-platform-release \\\n --maintenance-policy=TERMINATE \\\n --accelerator=\"type=nvidia-tesla-v100,count=1\" \\\n --metadata=\"install-nvidia-driver=True\"\n\nOptions:\n\n- `--image-family` must be one of the GPU-specific image types. For more\n information, see [Choosing an Image](/deep-learning-vm/docs/images).\n\n- `--image-project` must be `deeplearning-platform-release`.\n\n- `--maintenance-policy` must be `TERMINATE`. See\n [GPU Restrictions](/compute/docs/gpus#restrictions) to learn more.\n\n- `--accelerator` specifies the GPU type to use. Must be\n specified in the format `--accelerator=\"type=TYPE,count=COUNT\"`. Supported\n values of `TYPE` are:\n\n - `nvidia-tesla-v100` (`count=1` or `8`)\n - `nvidia-tesla-p100` (`count=1`, `2`, or `4`)\n - `nvidia-tesla-p4` (`count=1`, `2`, or `4`)\n\n Not all GPU types are supported in all regions. For details, see\n [GPUs on Compute Engine](/compute/docs/gpus).\n- `--metadata` is used to specify that the NVIDIA driver should be installed\n on your behalf. The value is `install-nvidia-driver=True`. If specified,\n Compute Engine loads the latest stable driver on the first boot\n and performs the necessary steps (including a final reboot to activate the\n driver).\n\nIf you've elected to install NVIDIA drivers, allow 3-5 minutes for installation\nto complete.\n\nIt may take up to 5 minutes before your VM is fully provisioned. In this\ntime, you will be unable to SSH into your machine. When the installation is\ncomplete, to guarantee that the driver installation was successful, you can\nSSH in and run `nvidia-smi`.\n\nWhen you've configured your image, you can save a snapshot of your\nimage so that you can start derivative instances without having to wait\nfor the driver installation.\n\nCreating a preemptible instance\n-------------------------------\n\nYou can create a preemptible Deep Learning VM instance. A preemptible\ninstance is an instance you can create and run at a much lower price than\nnormal instances. 
However, Compute Engine might stop (preempt) these\ninstances if it requires access to those resources for other tasks.\nPreemptible instances always stop after 24 hours. To learn more about\npreemptible instances, see [Preemptible VM\nInstances](/compute/docs/instances/preemptible).\n\nTo create a preemptible Deep Learning VM instance:\n\n- Follow the instructions located above to create a new instance. To the\n `gcloud compute instances create` command, append the following:\n\n ```\n --preemptible\n ```\n\nWhat's next\n-----------\n\nFor instructions on connecting to your new Deep Learning VM instance\nthrough the Google Cloud console or command line, see [Connecting to\nInstances](/compute/docs/instances/connecting-to-instance). Your instance name\nis the **Deployment name** you specified with `-vm` appended."]]