This page shows you how to create a PyTorch Deep Learning VM Images instance with PyTorch and other tools pre-installed. You can create a PyTorch instance from Cloud Marketplace within the Google Cloud console or by using the command line.
Before you begin
Sign in to your Google Cloud account. If you're new to
Google Cloud,
create an account to evaluate how our products perform in
real-world scenarios. New customers also get $300 in free credits to
run, test, and deploy workloads.
In the Google Cloud console, on the project selector page,
select or create a Google Cloud project.
Creating a PyTorch Deep Learning VM instance from the Cloud Marketplace
To create a PyTorch Deep Learning VM instance from the Cloud Marketplace, go to the Deep Learning VM Cloud Marketplace page in the Google Cloud console, click Get started, enter a deployment name, and select a zone, machine type, GPUs, and the PyTorch framework. Then complete the following steps:
If you're using GPUs, an NVIDIA driver is required. You can install the driver yourself, or select Install NVIDIA GPU driver automatically on first startup.
You have the option to select Enable access to JupyterLab via URL instead of SSH (Beta). Enabling this Beta feature lets you access your JupyterLab instance using a URL. Anyone with the Editor or Owner role in your Google Cloud project can access this URL.
Currently, this feature only works in the United States, the European Union, and Asia.
Select a boot disk type and boot disk size.
Select the networking settings that you want.
Click Deploy.
If you choose to install NVIDIA drivers, allow 3-5 minutes for the installation to complete.
After the VM is deployed, the page updates with instructions for accessing the instance.
Creating a PyTorch Deep Learning VM instance from the command line
To use the Google Cloud CLI to create a new Deep Learning VM instance, you must first install and initialize the Google Cloud CLI.
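To use gcloud in Cloud Shell, first activate Cloud Shell using the instructions in Starting Cloud Shell.
Without GPUs
To create a Deep Learning VM instance with the latest PyTorch image family and a CPU, enter the following at the command line (the zone and instance name are example values; substitute your own):

    export IMAGE_FAMILY="pytorch-latest-cpu"
    export ZONE="us-west1-b"
    export INSTANCE_NAME="my-instance"

    gcloud compute instances create $INSTANCE_NAME \
      --zone=$ZONE \
      --image-family=$IMAGE_FAMILY \
      --image-project=deeplearning-platform-release

For a CPU-only instance, --image-family must be either pytorch-latest-cpu or pytorch-VERSION-cpu (for example, pytorch-1-13-cpu), and --image-project must be deeplearning-platform-release.
With one or more GPUs
Compute Engine offers the option of adding one or more GPUs to your virtual machine instances. To create a Deep Learning VM instance with the latest PyTorch image family and one or more attached GPUs, enter the following at the command line (again, the zone, instance name, and GPU type are example values):

    export IMAGE_FAMILY="pytorch-latest-gpu"
    export ZONE="us-west1-b"
    export INSTANCE_NAME="my-instance"

    gcloud compute instances create $INSTANCE_NAME \
      --zone=$ZONE \
      --image-family=$IMAGE_FAMILY \
      --image-project=deeplearning-platform-release \
      --maintenance-policy=TERMINATE \
      --accelerator="type=nvidia-tesla-v100,count=1" \
      --metadata="install-nvidia-driver=True"

Options:
--image-family must be either pytorch-latest-gpu or pytorch-VERSION-CUDA-VERSION (for example, pytorch-1-10-cu110).
--image-project must be deeplearning-platform-release.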
--maintenance-policy must be TERMINATE. For more information, see GPU restrictions.
--accelerator specifies the GPU type to use and must be in the format --accelerator="type=TYPE,count=COUNT".
For example, --accelerator="type=nvidia-tesla-p100,count=2".
See the GPU models table for a list of available GPU types and counts.
Not all GPU types are supported in all regions. For details, see GPU regions and zones availability.
--metadata is used to specify that the NVIDIA driver should be installed on your behalf. The value is install-nvidia-driver=True. If specified, Compute Engine loads the latest stable driver on the first boot and performs the necessary steps, including a final reboot to activate the driver.
If you choose to install NVIDIA drivers, allow 3-5 minutes for the installation to complete.
It can take up to 5 minutes for your VM to be fully provisioned. During this time, you can't connect to the machine over SSH. When the installation is complete, you can confirm that the driver installation succeeded by connecting over SSH and running nvidia-smi.
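For example, assuming the example instance name and zone used above (hypothetical values), you could check the driver in one step:

    gcloud compute ssh my-instance --zone=us-west1-b --command="nvidia-smi"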
After you've configured your image, you can save a snapshot of it so that you can start derivative instances without having to wait for the driver installation, as in the sketch below.
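A minimal sketch, assuming the instance was created with the example name above and that its boot disk uses the default name (the disk and snapshot names here are hypothetical):

    gcloud compute disks snapshot my-instance \
      --zone=us-west1-b \
      --snapshot-names=my-pytorch-snapshot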
Creating a preemptible instance
You can create a preemptible Deep Learning VM instance. A preemptible instance is an instance that you can create and run at a much lower price than normal instances. However, Compute Engine might stop (preempt) these instances if it needs access to those resources for other tasks.
Preemptible instances always stop after 24 hours. To learn more about preemptible instances, see Preemptible VM Instances.
To create a preemptible Deep Learning VM instance:
Create a new instance using the command line as described above, and append the following to the end of the gcloud compute instances create command:
--preemptible
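For example, reusing the GPU example from above (with the same example environment variables), the full command would look like this:

    gcloud compute instances create $INSTANCE_NAME \
      --zone=$ZONE \
      --image-family=$IMAGE_FAMILY \
      --image-project=deeplearning-platform-release \
      --maintenance-policy=TERMINATE \
      --accelerator="type=nvidia-tesla-v100,count=1" \
      --metadata="install-nvidia-driver=True" \
      --preemptible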
What's next
For instructions on connecting to your new Deep Learning VM instance through the Google Cloud console or the command line, see Connecting to Instances. Your instance name is the deployment name you specified with -vm appended.
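For example, if you deployed from Cloud Marketplace with the hypothetical deployment name my-pytorch in zone us-west1-b, you could connect with:

    gcloud compute ssh my-pytorch-vm --zone=us-west1-b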
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[[["\u003cp\u003eThis guide explains how to create a PyTorch Deep Learning VM instance, either through the Google Cloud Marketplace or using the command line interface.\u003c/p\u003e\n"],["\u003cp\u003eYou can configure your instance with or without GPUs, and if using GPUs, an NVIDIA driver installation is required, which can be done automatically.\u003c/p\u003e\n"],["\u003cp\u003eWhen creating an instance, options such as machine type, zone, framework, and networking settings, along with specific configurations for GPUs, can be customized.\u003c/p\u003e\n"],["\u003cp\u003eUsing the command line, you can create instances with specified PyTorch versions, and it includes provisions for creating instances with or without GPUs, along with specifying details like image family and project.\u003c/p\u003e\n"],["\u003cp\u003ePreemptible instances, which offer a lower price but can be terminated, are available for creation and can be done by adding a \u003ccode\u003e--preemptible\u003c/code\u003e flag to your gcloud command.\u003c/p\u003e\n"]]],[],null,["# Create a PyTorch Deep Learning VM instance\n\nThis page shows you how to create a PyTorch Deep Learning VM Images instance\nwith PyTorch and other tools pre-installed. You can create\na PyTorch instance from Cloud Marketplace within\nthe Google Cloud console or using the command line.\n\nBefore you begin\n----------------\n\n- Sign in to your Google Cloud account. If you're new to Google Cloud, [create an account](https://console.cloud.google.com/freetrial) to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.\n- In the Google Cloud console, on the project selector page,\n select or create a Google Cloud project.\n\n | **Note**: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.\n\n [Go to project selector](https://console.cloud.google.com/projectselector2/home/dashboard)\n-\n [Verify that billing is enabled for your Google Cloud project](/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).\n\n- In the Google Cloud console, on the project selector page,\n select or create a Google Cloud project.\n\n | **Note**: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.\n\n [Go to project selector](https://console.cloud.google.com/projectselector2/home/dashboard)\n-\n [Verify that billing is enabled for your Google Cloud project](/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).\n\n1. If you are using GPUs with your Deep Learning VM, check the [quotas page](https://console.cloud.google.com/quotas) to ensure that you have enough GPUs available in your project. 
If GPUs are not listed on the quotas page or you require additional GPU quota, [request a\n quota increase](/compute/quotas#requesting_additional_quota).\n\nCreating a PyTorch Deep Learning VM instance from the Cloud Marketplace\n-----------------------------------------------------------------------\n\nTo create a PyTorch Deep Learning VM instance\nfrom the Cloud Marketplace, complete the following steps:\n\n1. Go to the Deep Learning VM Cloud Marketplace page in\n the Google Cloud console.\n\n [Go to the Deep Learning VM Cloud Marketplace page](https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning)\n2. Click **Get started**.\n\n3. Enter a **Deployment name** , which will be the root of your VM name.\n Compute Engine appends `-vm` to this name when naming your instance.\n\n4. Select a **Zone**.\n\n5. Under **Machine type** , select the specifications that you\n want for your VM.\n [Learn more about machine types.](/compute/docs/machine-types)\n\n6. Under **GPUs** , select the **GPU type** and **Number of GPUs** .\n If you don't want to use GPUs,\n click the **Delete GPU** button\n and skip to step 7. [Learn more about GPUs.](/gpu)\n\n 1. Select a **GPU type** . Not all GPU types are available in all zones. [Find a combination that is supported.](/compute/docs/gpus)\n 2. Select the **Number of GPUs** . Each GPU supports different numbers of GPUs. [Find a combination that is supported.](/compute/docs/gpus)\n7. Under **Framework** , select **PyTorch 1.8 + fast.ai 2.1\n (CUDA 11.0)**.\n\n8. If you're using GPUs, an NVIDIA driver is required.\n You can install the driver\n yourself, or select **Install NVIDIA GPU driver automatically\n on first startup**.\n\n9. You have the option to select **Enable access to JupyterLab via URL\n instead of SSH (Beta)**. Enabling this Beta feature lets you\n access your JupyterLab\n instance using a URL. Anyone who is in the Editor or Owner role in your\n Google Cloud project can access this URL.\n Currently, this feature only works in\n the United States, the European Union, and Asia.\n\n10. Select a boot disk type and boot disk size.\n\n11. Select the networking settings that you want.\n\n12. Click **Deploy**.\n\nIf you choose to install NVIDIA drivers, allow 3-5 minutes for installation\nto complete.\n\nAfter the VM is deployed, the page updates with instructions for\naccessing the instance.\n\nCreating a PyTorch Deep Learning VM instance from the command line\n------------------------------------------------------------------\n\nTo use the Google Cloud CLI to create a new a Deep Learning VM\ninstance, you must first install and initialize the [Google Cloud CLI](/sdk/docs):\n\n1. Download and install the Google Cloud CLI using the instructions given on [Installing Google Cloud CLI](/sdk/downloads).\n2. 
Initialize the SDK using the instructions given on [Initializing Cloud\n SDK](/sdk/docs/initializing).\n\nTo use `gcloud` in Cloud Shell, first activate Cloud Shell using the\ninstructions given on [Starting Cloud Shell](/shell/docs/starting-cloud-shell).\n\n### Without GPUs\n\nTo create a Deep Learning VM instance\nwith the latest PyTorch image family and a\nCPU, enter the following at the command line: \n\n export IMAGE_FAMILY=\"pytorch-latest-cpu\"\n export ZONE=\"us-west1-b\"\n export INSTANCE_NAME=\"my-instance\"\n\n gcloud compute instances create $INSTANCE_NAME \\\n --zone=$ZONE \\\n --image-family=$IMAGE_FAMILY \\\n --image-project=deeplearning-platform-release\n\nOptions:\n\n- `--image-family` must be either `pytorch-latest-cpu`\n or `pytorch-`\u003cvar translate=\"no\"\u003eVERSION\u003c/var\u003e`-cpu`\n (for example, `pytorch-1-13-cpu`).\n\n- `--image-project` must be `deeplearning-platform-release`.\n\n### With one or more GPUs\n\nCompute Engine offers the option of adding one or more GPUs to your\nvirtual machine instances. GPUs offer faster processing for many complex data\nand machine learning tasks. To learn more about GPUs, see [GPUs on\nCompute Engine](/compute/docs/gpus).\n\nTo create a Deep Learning VM instance with the\nlatest PyTorch image family and one\nor more attached GPUs, enter the following at the command line: \n\n export IMAGE_FAMILY=\"pytorch-latest-gpu\"\n export ZONE=\"us-west1-b\"\n export INSTANCE_NAME=\"my-instance\"\n\n gcloud compute instances create $INSTANCE_NAME \\\n --zone=$ZONE \\\n --image-family=$IMAGE_FAMILY \\\n --image-project=deeplearning-platform-release \\\n --maintenance-policy=TERMINATE \\\n --accelerator=\"type=nvidia-tesla-v100,count=1\" \\\n --metadata=\"install-nvidia-driver=True\"\n\nOptions:\n\n- `--image-family` must be either `pytorch-latest-gpu`\n or `pytorch-`\u003cvar translate=\"no\"\u003eVERSION\u003c/var\u003e`-`\u003cvar translate=\"no\"\u003eCUDA-VERSION\u003c/var\u003e\n (for example, `pytorch-1-10-cu110`).\n\n- `--image-project` must be `deeplearning-platform-release`.\n\n- `--maintenance-policy` must be `TERMINATE`. To learn more, see\n [GPU Restrictions](/compute/docs/gpus#restrictions).\n\n- `--accelerator` specifies the GPU type to use. Must be\n specified in the format\n `--accelerator=\"type=`\u003cvar translate=\"no\"\u003eTYPE\u003c/var\u003e`,count=`\u003cvar translate=\"no\"\u003eCOUNT\u003c/var\u003e`\"`.\n For example, `--accelerator=\"type=nvidia-tesla-p100,count=2\"`.\n See the [GPU models\n table](/compute/docs/gpus#other_available_nvidia_gpu_models)\n for a list of available GPU types and counts.\n\n Not all GPU types are supported in all regions. For details, see\n [GPU regions and zones availability](/compute/docs/gpus/gpu-regions-zones).\n- `--metadata` is used to specify that the NVIDIA driver should be installed\n on your behalf. The value is `install-nvidia-driver=True`. If specified,\n Compute Engine loads the latest stable driver on the first boot\n and performs the necessary steps (including a final reboot to activate the\n driver).\n\nIf you've elected to install NVIDIA drivers, allow 3-5 minutes for installation\nto complete.\n\nIt may take up to 5 minutes before your VM is fully provisioned. In this\ntime, you will be unable to SSH into your machine. 
When the installation is\ncomplete, to guarantee that the driver installation was successful, you can\nSSH in and run `nvidia-smi`.\n\nWhen you've configured your image, you can save a snapshot of your\nimage so that you can start derivitave instances without having to wait\nfor the driver installation.\n\nCreating a preemptible instance\n-------------------------------\n\nYou can create a preemptible Deep Learning VM instance. A preemptible\ninstance is an instance you can create and run at a much lower price than\nnormal instances. However, Compute Engine might stop (preempt) these\ninstances if it requires access to those resources for other tasks.\nPreemptible instances always stop after 24 hours. To learn more about\npreemptible instances, see [Preemptible VM\nInstances](/compute/docs/instances/preemptible).\n\nTo create a preemptible Deep Learning VM instance:\n\n- Follow the instructions located above to create a new instance using the\n command line. To the `gcloud compute instances create` command, append the\n following:\n\n ```\n --preemptible\n ```\n\nWhat's next\n-----------\n\nFor instructions on connecting to your new Deep Learning VM instance\nthrough the Google Cloud console or command line, see [Connecting to\nInstances](/compute/docs/instances/connecting-to-instance). Your instance name\nis the **Deployment name** you specified with `-vm` appended."]]