Getting started with a local deep learning container

This page describes how to create and set up a local deep learning container. This guide assumes that you have basic familiarity with Docker.

Before you begin

To complete the steps in this guide, you can use either Cloud Shell or any environment where the Cloud SDK can be installed.

Complete the following steps to set up a Google Cloud account, enable the required APIs, and install and activate the Cloud SDK.

  1. In the Google Cloud Console, go to the Manage resources page and select or create a project.

    Go to the Manage resources page

  2. If you're using Cloud Shell, you may skip this step because the Cloud SDK and Docker are already installed. Otherwise, complete these steps:

    1. Install and initialize the Cloud SDK.
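
      After you install the SDK, you can initialize it by running the following command, which authorizes gcloud and sets a default project:

      gcloud init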

    2. Install Docker.

      If you're using a Linux-based operating system, such as Ubuntu or Debian, add your username to the docker group so that you can run Docker without using sudo:

      sudo usermod -a -G docker ${USER}
      

      The group change doesn't take effect until you start a new login session, so you might need to log out and back in, or restart your system.
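
      If you prefer not to restart, you can apply the new group membership in your current shell and confirm it, for example:

      # Start a subshell with the docker group active, then list your groups:
      newgrp docker
      groups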

  3. Start Docker. To verify that Docker is running, run the following command, which returns the current date and time:

    docker run busybox date
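
    If Docker is running, the output is a date and time similar to the following (your values will differ):

    Tue Mar 19 22:43:12 UTC 2019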
    
  4. Use gcloud as the credential helper for Docker:

    gcloud auth configure-docker
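
    This command registers gcloud as a credential helper in your Docker configuration file (by default, ~/.docker/config.json) so that docker can authenticate to Google Container Registry. You can inspect the result with:

    cat ~/.docker/config.json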
    
  5. Optional: If you want to run containers with GPU support locally, install nvidia-docker.
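
    To check that the NVIDIA runtime is working, you can run nvidia-smi in a CUDA base container; the image tag here is only an example, so use one that matches your installed driver:

    docker run --runtime=nvidia --rm nvidia/cuda:10.0-base nvidia-smi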

Create your container

Follow these steps to create your container.

  1. To view a list of the available container images, run the following command:

    gcloud container images list \
      --repository="gcr.io/deeplearning-platform-release"
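
    The output lists the available images and looks similar to the following (the exact set of images changes over time):

    NAME
    gcr.io/deeplearning-platform-release/base-cpu
    gcr.io/deeplearning-platform-release/tf-cpu
    gcr.io/deeplearning-platform-release/tf-gpu
    gcr.io/deeplearning-platform-release/pytorch-cpu
    gcr.io/deeplearning-platform-release/pytorch-gpu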
    

    To help you select a container image, see Choosing a container.

  2. If you don't need a GPU-enabled container, run the following command. Replace tf-cpu.1-13 with the name of the container image that you want to use.

    docker run -d -p 8080:8080 -v /path/to/local/dir:/home \
      gcr.io/deeplearning-platform-release/tf-cpu.1-13
    

    If you want to use a GPU-enabled container, run the following command instead. Replace tf-gpu.1-13 with the name of the container image that you want to use.

    docker run --runtime=nvidia -d -p 8080:8080 -v /path/to/local/dir:/home \
      gcr.io/deeplearning-platform-release/tf-gpu.1-13
    

Either command starts the container in detached mode, mounts the local directory /path/to/local/dir on /home in the container, and maps port 8080 on the container to port 8080 on your local machine. Replace /path/to/local/dir with the path to a local directory that you want to access from inside the container. The container is preconfigured to start a JupyterLab server, which you can visit at http://localhost:8080.
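
To confirm that the container started and to view its startup logs, you can use standard Docker commands; CONTAINER_ID below stands for the ID that docker ps prints:

    docker ps
    docker logs CONTAINER_ID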

What's next