To start, you create a Deep Learning Containers container using one of the available image types. Then use conda, pip, or Jupyter commands to modify the container image for your needs.
Build and push the container image.
Build the container image, and then push it to a location that your Compute Engine service account can access.
Create the initial Dockerfile and run modification commands
Use the following commands to select a Deep Learning Containers image type and make a small change to the container image. This example shows how to start with a TensorFlow image and update it to the latest version of TensorFlow.
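If you want to start from a different image type, you can first list the images published in the public Deep Learning Containers repository. A minimal sketch, assuming the gcr.io/deeplearning-platform-release repository (which mirrors the us-docker.pkg.dev path used in the Dockerfile below) is readable from your environment:

# List the image families available in the public Deep Learning Containers repository
gcloud container images list --repository="gcr.io/deeplearning-platform-release"

# List the tags published for a specific image family, for example tf-gpu
gcloud container images list-tags gcr.io/deeplearning-platform-release/tf-gpu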
Write the following commands to the Dockerfile:
FROM us-docker.pkg.dev/deeplearning-platform-release/gcr.io/tf-gpu:latest
# Uninstall the container's TensorFlow version and install the latest version
RUN pip install --upgrade pip && \
pip uninstall -y tensorflow && \
pip install tensorflow
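Before pushing anything, you can optionally build and test the modified image locally. A minimal sketch, assuming Docker is installed, the Dockerfile above is in the current directory, and tf-custom-test is only an illustrative tag:

# Build the derivative image from the Dockerfile in the current directory
docker build -t tf-custom-test .

# Confirm that the image now contains the upgraded TensorFlow version
docker run --rm tf-custom-test python -c "import tensorflow as tf; print(tf.__version__)"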
Build and push the container image
Use the following commands to build and push the container image to Artifact Registry, where it can be accessed by your Compute Engine service account.
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2024-12-21(UTC)"],[[["\u003cp\u003eThis guide details the process of creating a derivative container from a standard Deep Learning Containers image, using either Cloud Shell or an environment with the Google Cloud CLI installed.\u003c/p\u003e\n"],["\u003cp\u003eThe process involves creating an initial Dockerfile and executing modification commands, such as using conda, pip, or Jupyter commands, to customize the container image.\u003c/p\u003e\n"],["\u003cp\u003eBefore starting, ensure you have completed the necessary setup steps, including enabling billing for your Google Cloud project and the Artifact Registry API.\u003c/p\u003e\n"],["\u003cp\u003eAfter modifying the container, you need to build it and push the resulting image to a repository, such as Artifact Registry, that is accessible to your Compute Engine service account.\u003c/p\u003e\n"],["\u003cp\u003eThe example provided shows how to take a tensorflow image, and modify the container by uninstalling the current version and installing the latest version of Tensorflow.\u003c/p\u003e\n"]]],[],null,["# Create a derivative container\n\nThis page describes how to create a derivative container based on one\nof the standard available Deep Learning Containers images.\n\nTo complete the steps in this guide, you can use either\n[Cloud Shell](https://console.cloud.google.com?cloudshell=true) or any\nenvironment where the [Google Cloud CLI](/sdk/docs) is installed.\n\nBefore you begin\n----------------\n\nBefore you begin, make sure you have completed the following steps.\n\n1. Complete the set up steps in the Before you begin section of [Getting\n started with a local deep learning\n container](/deep-learning-containers/docs/getting-started-local).\n\n2. Make sure that billing is enabled for your Google Cloud project.\n\n [Learn how to enable\n billing](https://cloud.google.com/billing/docs/how-to/modify-project)\n3. Enable the Artifact Registry API.\n\n [Enable the\n API](https://console.cloud.google.com/flows/enableapi?apiid=artifactregistry.googleapis.com)\n\nThe Process\n-----------\n\nTo create a derivative container, you'll use a process similar to this:\n\n1. Create the initial Dockerfile and run modification commands.\n\n To start, you create a Deep Learning Containers container using\n one of the [available image types](/deep-learning-containers/docs/choosing-container).\n Then use conda, pip, or\n Jupyter commands to modify the container\n image for your needs.\n2. Build and push the container image.\n\n Build the container image, and then push it to somewhere that is\n accessible to your Compute Engine service account.\n\nCreate the initial Dockerfile and run modification commands\n-----------------------------------------------------------\n\nUse the following commands to select a Deep Learning Containers image type\nand make a small change to the container image. 
This example shows how to\nstart with a TensorFlow image and updates the image\nwith the latest version of TensorFlow.\nWrite the following commands to the Dockerfile: \n\n```text\nFROM us-docker.pkg.dev/deeplearning-platform-release/gcr.io/tf-gpu:latest\n# Uninstall the container's TensorFlow version and install the latest version\nRUN pip install --upgrade pip && \\\n pip uninstall -y tensorflow && \\\n pip install tensorflow\n```\n\nBuild and push the container image\n----------------------------------\n\nUse the following commands to build and push the container image to\nArtifact Registry, where it can be accessed by your\nGoogle Compute Engine service account.\n\nCreate and authenticate the repository: \n\n```bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\ngcloud artifacts repositories create REPOSITORY_NAME \\\n --repository-format=docker \\\n --location=LOCATION\ngcloud auth configure-docker LOCATION-docker.pkg.dev\n```\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e: The regional or multi-regional [location](/artifact-registry/docs/repositories#locations) of the repository, for example `us`. To view a list of supported locations, run the command `gcloud artifacts locations list`.\n- \u003cvar translate=\"no\"\u003eREPOSITORY_NAME\u003c/var\u003e: The name of the repository that you want to create, for example `my-tf-repo`.\n\nThen, build and push the image: \n\n```bash\nexport IMAGE_NAME=\"\u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e-docker.pkg.dev/${PROJECT}/\u003cvar translate=\"no\"\u003eREPOSITORY_NAME\u003c/var\u003e/tf-custom:v1\"\ndocker build . -t $IMAGE_NAME\ndocker push $IMAGE_NAME\n```"]]
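After the push completes, you can confirm that the image landed in the repository. A minimal sketch, assuming the same LOCATION and REPOSITORY_NAME values used above:

# List the images stored in the repository to confirm the push succeeded
gcloud artifacts docker images list LOCATION-docker.pkg.dev/${PROJECT}/REPOSITORY_NAME

# Include the tags attached to each image, for example v1
gcloud artifacts docker images list LOCATION-docker.pkg.dev/${PROJECT}/REPOSITORY_NAME --include-tags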