Dataproc provides the ability to attach graphics processing units (GPUs) to the master and worker Compute Engine nodes in a Dataproc cluster. You can use these GPUs to accelerate specific workloads on your instances, such as machine learning and data processing.
For more information about what you can do with GPUs and what types of GPU hardware are available, see GPUs on Compute Engine.
Before you begin
- GPUs require special drivers and software. These items are not pre-installed on Dataproc clusters.
- To learn about the cost of using GPUs on your instances, see GPU pricing on Compute Engine.
- To learn how these instances function differently from instances that do not use GPUs, see Restrictions for instances with GPUs.
- Check the quotas page for your project to ensure that sufficient GPU quota (NVIDIA_K80_GPUS, NVIDIA_P100_GPUS, NVIDIA_V100_GPUS) is available in your project. If GPUs are not listed on the quotas page, or you require additional GPU quota, request a quota increase.
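For example, one hedged way to inspect these quotas from the command line (the region below is a placeholder) is to describe a region and filter the output for GPU quota metrics:
# Sketch: show the limit and current usage of each regional GPU quota
gcloud compute regions describe us-central1 | grep -B1 -A1 GPUS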
GPU types
Dataproc nodes support the following GPU types. You must specify a GPU type when attaching GPUs to your Dataproc cluster.
- nvidia-tesla-a100 - NVIDIA® Tesla® A100
- nvidia-tesla-k80 - NVIDIA® Tesla® K80
- nvidia-tesla-p100 - NVIDIA® Tesla® P100
- nvidia-tesla-v100 - NVIDIA® Tesla® V100
- nvidia-tesla-p4 - NVIDIA® Tesla® P4
- nvidia-tesla-t4 - NVIDIA® Tesla® T4
- nvidia-tesla-p100-vws - NVIDIA® Tesla® P100 Virtual Workstation
- nvidia-tesla-p4-vws - NVIDIA® Tesla® P4 Virtual Workstation
- nvidia-tesla-t4-vws - NVIDIA® Tesla® T4 Virtual Workstation
Attach GPUs to clusters
gcloud
Attach GPUs to the master node and to the primary and secondary worker nodes of a Dataproc cluster when creating the cluster by using the --master-accelerator, --worker-accelerator, and --secondary-worker-accelerator flags. These flags take the following two values:
- the type of GPU to attach to a node, and
- the number of GPUs to attach to the node
The type of GPU is required, and the number of GPUs is optional (the default is 1 GPU).
Example:
gcloud dataproc clusters create cluster-name \
  --region=region \
  --master-accelerator type=nvidia-tesla-k80 \
  --worker-accelerator type=nvidia-tesla-k80,count=4 \
  --secondary-worker-accelerator type=nvidia-tesla-k80,count=4 \
  ... other flags
To use GPUs in your cluster, you must install GPU drivers.
REST API
Attach GPUs to the master node and to the primary and secondary worker nodes of a Dataproc cluster by filling in the InstanceGroupConfig.AcceleratorConfig acceleratorTypeUri and acceleratorCount fields as part of the cluster.create API request.
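For example, a minimal sketch of such a request made with curl (project-id, region, and cluster-name are placeholders, and a short accelerator type name is assumed to resolve against the cluster's zone):
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "clusterName": "cluster-name",
    "config": {
      "masterConfig": {
        "accelerators": [{"acceleratorTypeUri": "nvidia-tesla-k80", "acceleratorCount": 1}]
      },
      "workerConfig": {
        "accelerators": [{"acceleratorTypeUri": "nvidia-tesla-k80", "acceleratorCount": 4}]
      }
    }
  }' \
  "https://dataproc.googleapis.com/v1/projects/project-id/regions/region/clusters"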
Console
In the Google Cloud console, click CPU platform and GPU→GPUs→Add GPU in the master and worker node sections of the Configure nodes panel on the Create a cluster page to specify the number of GPUs and the GPU type for the nodes.
Install GPU drivers
GPU drivers are required to utilize any GPUs attached to Dataproc nodes. You can install GPU drivers by following the instructions for the initialization action listed below.
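For example, a hedged sketch of attaching the script at cluster creation time (the bucket, cluster, and region names are placeholders; install-gpu-agent and gpu-driver-provider are metadata keys the script below reads via get_metadata_attribute):
# Stage the initialization action in a Cloud Storage bucket you control,
# then reference it when creating the cluster.
gsutil cp install_gpu_driver.sh gs://your-bucket/install_gpu_driver.sh
gcloud dataproc clusters create cluster-name \
  --region=region \
  --master-accelerator type=nvidia-tesla-t4 \
  --worker-accelerator type=nvidia-tesla-t4,count=2 \
  --initialization-actions=gs://your-bucket/install_gpu_driver.sh \
  --metadata=install-gpu-agent=true,gpu-driver-provider=NVIDIA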
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script installs NVIDIA GPU drivers and collects GPU utilization metrics.
set -euxo pipefail
function compare_versions_lte {
[ "$1" = "$(echo -e "$1\n$2" | sort -V | head -n1)" ]
}
function compare_versions_lt() {
[ "$1" = "$2" ] && return 1 || compare_versions_lte $1 $2
}
function get_metadata_attribute() {
local -r attribute_name=$1
local -r default_value=$2
/usr/share/google/get_metadata_value "attributes/${attribute_name}" || echo -n "${default_value}"
}
OS_NAME=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
readonly OS_NAME
# node role
ROLE="$(/usr/share/google/get_metadata_value attributes/dataproc-role)"
readonly ROLE
# CUDA version and Driver version
# https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html
readonly -A DRIVER_FOR_CUDA=([10.1]="418.88" [10.2]="440.64.00"
[11.0]="450.51.06" [11.1]="455.45.01" [11.2]="460.73.01"
[11.5]="495.29.05" [11.6]="510.47.03" [11.7]="515.65.01"
[11.8]="520.56.06")
readonly -A CUDNN_FOR_CUDA=( [10.1]="7.6.4.38" [10.2]="7.6.5.32"
[11.0]="8.0.4.30" [11.1]="8.0.5.39" [11.2]="8.1.1.33"
[11.5]="8.3.3.40" [11.6]="8.4.1.50" [11.7]="8.5.0.96"
[11.8]="8.6.0.163")
readonly -A NCCL_FOR_CUDA=( [10.1]="2.4.8" [10.2]="2.5.6"
[11.0]="2.7.8" [11.1]="2.8.3" [11.2]="2.8.3"
[11.5]="2.11.4" [11.6]="2.11.4" [11.7]="2.12.12"
[11.8]="2.15.5")
readonly -A CUDA_SUBVER=( [10.1]="10.1.243" [10.2]="10.2.89"
[11.0]="11.0.3" [11.1]="11.1.0" [11.2]="11.2.2"
[11.5]="11.5.2" [11.6]="11.6.2" [11.7]="11.7.1"
[11.8]="11.8.0")
RUNTIME=$(get_metadata_attribute 'rapids-runtime' 'SPARK')
DEFAULT_CUDA_VERSION='11.2'
if [[ ${DATAPROC_IMAGE_VERSION} == 2.* ]] && [[ "${RUNTIME}" == "SPARK" ]]; then
DEFAULT_CUDA_VERSION='11.5'
fi
readonly DEFAULT_CUDA_VERSION
readonly CUDA_VERSION=$(get_metadata_attribute 'cuda-version' "${DEFAULT_CUDA_VERSION}")
readonly DEFAULT_NVIDIA_DEBIAN_GPU_DRIVER_VERSION=${DRIVER_FOR_CUDA["${CUDA_VERSION}"]}
readonly NVIDIA_DEBIAN_GPU_DRIVER_VERSION=$(get_metadata_attribute 'gpu-driver-version' ${DEFAULT_NVIDIA_DEBIAN_GPU_DRIVER_VERSION})
readonly NVIDIA_DEBIAN_GPU_DRIVER_VERSION_PREFIX=${NVIDIA_DEBIAN_GPU_DRIVER_VERSION%%.*}
readonly DRIVER=${NVIDIA_DEBIAN_GPU_DRIVER_VERSION_PREFIX}
# As of Rocky 8.7, kernel 4.18.0-425 is unable to build older nvidia kernel drivers
# Default to "false" so the later reference in install_nvidia_gpu_driver
# does not trip `set -u` when the variable would otherwise be unset
ROCKY_BINARY_INSTALL="false"
if [[ "${OS_NAME}" == "rocky" && "${DRIVER}" < "510" ]]; then
ROCKY_BINARY_INSTALL="true"
fi
readonly ROCKY_BINARY_INSTALL
# Fail early for configurations known to be unsupported
function unsupported_error {
echo "Unsupported kernel driver on ${distribution}: '${DRIVER}'"
exit 1
}
if [[ "${OS_NAME}" == "rocky" ]]; then
KERNEL_SUBVERSION=$(uname -r | awk -F- '{print $2}')
if [[ "${DRIVER}" < "460" && "${DRIVER}" != "450"
&& "${KERNEL_SUBVERSION%%.*}" > "305" ]]; then
unsupported_error
fi
elif [[ "${OS_NAME}" == "debian" ]]; then
KERNEL_VERSION=$(uname -r | awk -F- '{print $1}')
if [[ "${DRIVER}" < "455"
&& $(echo "${KERNEL_VERSION%.*} > 5.7" | bc -l) == 1 ]]; then
unsupported_error
fi
fi
DEFAULT_NCCL_VERSION=${NCCL_FOR_CUDA["${CUDA_VERSION}"]}
if [[ "${OS_NAME}" == "rocky" ]] \
&& (compare_versions_lte "${DEFAULT_NCCL_VERSION}" "2.8.4") ; then
DEFAULT_NCCL_VERSION="2.8.4"
fi
readonly DEFAULT_NCCL_VERSION
readonly NCCL_VERSION=$(get_metadata_attribute 'nccl-version' ${DEFAULT_NCCL_VERSION})
# Parameters for NVIDIA-provided Debian GPU driver
DEFAULT_NVIDIA_DEBIAN_GPU_DRIVER_URL="https://download.nvidia.com/XFree86/Linux-x86_64/${NVIDIA_DEBIAN_GPU_DRIVER_VERSION}/NVIDIA-Linux-x86_64-${NVIDIA_DEBIAN_GPU_DRIVER_VERSION}.run"
if [[ "$(curl -s -I ${DEFAULT_NVIDIA_DEBIAN_GPU_DRIVER_URL} | head -1 | awk '{print $2}')" != "200" ]]; then
DEFAULT_NVIDIA_DEBIAN_GPU_DRIVER_URL="https://download.nvidia.com/XFree86/Linux-x86_64/${NVIDIA_DEBIAN_GPU_DRIVER_VERSION%.*}/NVIDIA-Linux-x86_64-${NVIDIA_DEBIAN_GPU_DRIVER_VERSION%.*}.run"
fi
readonly DEFAULT_NVIDIA_DEBIAN_GPU_DRIVER_URL
NVIDIA_DEBIAN_GPU_DRIVER_URL=$(get_metadata_attribute 'gpu-driver-url' "${DEFAULT_NVIDIA_DEBIAN_GPU_DRIVER_URL}")
readonly NVIDIA_DEBIAN_GPU_DRIVER_URL
readonly NVIDIA_BASE_DL_URL='https://developer.download.nvidia.com/compute'
# Parameters for NVIDIA-provided NCCL library
readonly DEFAULT_NCCL_REPO_URL="${NVIDIA_BASE_DL_URL}/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb"
NCCL_REPO_URL=$(get_metadata_attribute 'nccl-repo-url' "${DEFAULT_NCCL_REPO_URL}")
readonly NCCL_REPO_URL
readonly NCCL_REPO_KEY="${NVIDIA_BASE_DL_URL}/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub"
readonly -A DEFAULT_NVIDIA_DEBIAN_CUDA_URLS=(
[10.1]="${NVIDIA_BASE_DL_URL}/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run"
[10.2]="${NVIDIA_BASE_DL_URL}/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_linux.run"
[11.0]="${NVIDIA_BASE_DL_URL}/cuda/11.0.3/local_installers/cuda_11.0.3_450.51.06_linux.run"
[11.1]="${NVIDIA_BASE_DL_URL}/cuda/11.1.0/local_installers/cuda_11.1.0_455.23.05_linux.run"
[11.2]="${NVIDIA_BASE_DL_URL}/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run"
[11.5]="${NVIDIA_BASE_DL_URL}/cuda/11.5.2/local_installers/cuda_11.5.2_495.29.05_linux.run"
[11.6]="${NVIDIA_BASE_DL_URL}/cuda/11.6.2/local_installers/cuda_11.6.2_510.47.03_linux.run"
[11.7]="${NVIDIA_BASE_DL_URL}/cuda/11.7.1/local_installers/cuda_11.7.1_515.65.01_linux.run"
[11.8]="${NVIDIA_BASE_DL_URL}/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run")
readonly DEFAULT_NVIDIA_DEBIAN_CUDA_URL=${DEFAULT_NVIDIA_DEBIAN_CUDA_URLS["${CUDA_VERSION}"]}
NVIDIA_DEBIAN_CUDA_URL=$(get_metadata_attribute 'cuda-url' "${DEFAULT_NVIDIA_DEBIAN_CUDA_URL}")
readonly NVIDIA_DEBIAN_CUDA_URL
# Parameters for NVIDIA-provided Ubuntu GPU driver
readonly NVIDIA_UBUNTU_REPO_URL="${NVIDIA_BASE_DL_URL}/cuda/repos/ubuntu1804/x86_64"
readonly NVIDIA_UBUNTU_REPO_KEY_PACKAGE="${NVIDIA_UBUNTU_REPO_URL}/cuda-keyring_1.0-1_all.deb"
readonly NVIDIA_UBUNTU_REPO_CUDA_PIN="${NVIDIA_UBUNTU_REPO_URL}/cuda-ubuntu1804.pin"
# Parameter for NVIDIA-provided Rocky Linux GPU driver
readonly NVIDIA_ROCKY_REPO_URL="${NVIDIA_BASE_DL_URL}/cuda/repos/rhel8/x86_64/cuda-rhel8.repo"
# Parameters for NVIDIA-provided CUDNN library
DEFAULT_CUDNN_VERSION=${CUDNN_FOR_CUDA["${CUDA_VERSION}"]}
if [[ "${OS_NAME}" == "rocky" ]] \
&& (compare_versions_lte "${DEFAULT_CUDNN_VERSION}" "8.0.5.39") ; then
DEFAULT_CUDNN_VERSION="8.0.5.39"
fi
readonly DEFAULT_CUDNN_VERSION
readonly CUDNN_VERSION=$(get_metadata_attribute 'cudnn-version' "${DEFAULT_CUDNN_VERSION}")
CUDNN_TARBALL="cudnn-${CUDA_VERSION}-linux-x64-v${CUDNN_VERSION}.tgz"
CUDNN_TARBALL_URL="${NVIDIA_BASE_DL_URL}/redist/cudnn/v${CUDNN_VERSION%.*}/${CUDNN_TARBALL}"
if ( compare_versions_lte "8.3.1.22" "${CUDNN_VERSION}" ); then
CUDNN_TARBALL="cudnn-linux-x86_64-${CUDNN_VERSION}_cuda${CUDA_VERSION%.*}-archive.tar.xz"
if ( compare_versions_lte "${CUDNN_VERSION}" "8.4.1.50" ); then
CUDNN_TARBALL="cudnn-linux-x86_64-${CUDNN_VERSION}_cuda${CUDA_VERSION}-archive.tar.xz"
fi
CUDNN_TARBALL_URL="${NVIDIA_BASE_DL_URL}/redist/cudnn/v${CUDNN_VERSION%.*}/local_installers/${CUDA_VERSION}/${CUDNN_TARBALL}"
fi
readonly CUDNN_TARBALL
readonly CUDNN_TARBALL_URL
# Whether to install NVIDIA-provided or OS-provided GPU driver
GPU_DRIVER_PROVIDER=$(get_metadata_attribute 'gpu-driver-provider' 'NVIDIA')
readonly GPU_DRIVER_PROVIDER
# Stackdriver GPU agent parameters
readonly GPU_AGENT_REPO_URL='https://raw.githubusercontent.com/GoogleCloudPlatform/ml-on-gcp/master/dlvm/gcp-gpu-utilization-metrics'
# Whether to install GPU monitoring agent that sends GPU metrics to Stackdriver
INSTALL_GPU_AGENT=$(get_metadata_attribute 'install-gpu-agent' 'false')
readonly INSTALL_GPU_AGENT
# Dataproc configurations
readonly HADOOP_CONF_DIR='/etc/hadoop/conf'
readonly HIVE_CONF_DIR='/etc/hive/conf'
readonly SPARK_CONF_DIR='/etc/spark/conf'
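# Defaults for MIG (Multi-Instance GPU) handling; overridden in main() when
# MIG mode is detected on an attached GPU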
NVIDIA_SMI_PATH='/usr/bin'
MIG_MAJOR_CAPS=0
IS_MIG_ENABLED=0
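# Run a command, retrying up to 10 times with a 5-second pause between attempts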
function execute_with_retries() {
local -r cmd=$1
for ((i = 0; i < 10; i++)); do
if eval "$cmd"; then
return 0
fi
sleep 5
done
return 1
}
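# Install the NCCL library: from OS packages on Rocky, or from NVIDIA's
# machine-learning apt repository on Debian/Ubuntu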
function install_nvidia_nccl() {
local -r nccl_version="${NCCL_VERSION}-1+cuda${CUDA_VERSION}"
if [[ ${OS_NAME} == rocky ]]; then
execute_with_retries "dnf -y -q install libnccl-${nccl_version} libnccl-devel-${nccl_version} libnccl-static-${nccl_version}"
elif [[ ${OS_NAME} == ubuntu ]] || [[ ${OS_NAME} == debian ]]; then
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 "${NCCL_REPO_KEY}" | apt-key add -
local tmp_dir
tmp_dir=$(mktemp -d -t gpu-init-action-nccl-XXXX)
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${NCCL_REPO_URL}" -o "${tmp_dir}/nvidia-ml-repo.deb"
dpkg -i "${tmp_dir}/nvidia-ml-repo.deb"
execute_with_retries "apt-get update"
execute_with_retries \
"apt-get install -y --allow-unauthenticated libnccl2=${nccl_version} libnccl-dev=${nccl_version}"
else
echo "Unsupported OS: '${OS_NAME}'"
exit 1
fi
}
function install_nvidia_cudnn() {
local major_version
major_version="${CUDNN_VERSION%%.*}"
local cudnn_pkg_version
cudnn_pkg_version="${CUDNN_VERSION}-1+cuda${CUDA_VERSION}"
if [[ ${OS_NAME} == rocky ]]; then
if [[ ${major_version} == 8 ]]; then
execute_with_retries "dnf -y -q install libcudnn8-${cudnn_pkg_version} libcudnn8-devel-${cudnn_pkg_version}"
else
echo "Unsupported CUDNN version: '${CUDNN_VERSION}'"
exit 1
fi
elif [[ ${OS_NAME} == ubuntu ]]; then
local -a packages
packages=(
"libcudnn${major_version}=${cudnn_pkg_version}"
"libcudnn${major_version}-dev=${cudnn_pkg_version}")
execute_with_retries \
"apt-get install -y --no-install-recommends ${packages[*]}"
elif [[ ${OS_NAME} == debian ]]; then
local tmp_dir
tmp_dir=$(mktemp -d -t gpu-init-action-cudnn-XXXX)
curl -fSsL --retry-connrefused --retry 10 --retry-max-time 30 \
"${CUDNN_TARBALL_URL}" -o "${tmp_dir}/${CUDNN_TARBALL}"
if ( compare_versions_lte "${CUDNN_VERSION}" "8.3.0.98" ); then
tar -xzf "${tmp_dir}/${CUDNN_TARBALL}" -C /usr/local
else
ln -sf /usr/local/cuda/targets/x86_64-linux/lib /usr/local/cuda/lib
tar -h --no-same-owner --strip-components=1 \
-xJf "${tmp_dir}/${CUDNN_TARBALL}" -C /usr/local/cuda
fi
cat <<'EOF' >>/etc/profile.d/cudnn.sh
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
EOF
else
echo "Unsupported OS: '${OS_NAME}'"
exit 1
fi
ldconfig
echo "NVIDIA cuDNN successfully installed for ${OS_NAME}."
}
# Install NVIDIA GPU driver provided by NVIDIA
function install_nvidia_gpu_driver() {
if [[ ${OS_NAME} == debian ]]; then
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${NVIDIA_UBUNTU_REPO_KEY_PACKAGE}" -o /tmp/cuda-keyring.deb
dpkg -i "/tmp/cuda-keyring.deb"
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${NVIDIA_DEBIAN_GPU_DRIVER_URL}" -o driver.run
bash "./driver.run" --silent --install-libglvnd
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${NVIDIA_DEBIAN_CUDA_URL}" -o cuda.run
bash "./cuda.run" --silent --toolkit --no-opengl-libs
elif [[ ${OS_NAME} == ubuntu ]]; then
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${NVIDIA_UBUNTU_REPO_KEY_PACKAGE}" -o /tmp/cuda-keyring.deb
dpkg -i "/tmp/cuda-keyring.deb"
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${NVIDIA_UBUNTU_REPO_CUDA_PIN}" -o /etc/apt/preferences.d/cuda-repository-pin-600
add-apt-repository "deb ${NVIDIA_UBUNTU_REPO_URL} /"
execute_with_retries "apt-get update"
if [[ -n "${CUDA_VERSION}" ]]; then
local -r cuda_package=cuda-toolkit-${CUDA_VERSION//./-}
else
local -r cuda_package=cuda-toolkit
fi
# Without --no-install-recommends this takes a very long time.
execute_with_retries "apt-get install -y -q --no-install-recommends cuda-drivers-${NVIDIA_DEBIAN_GPU_DRIVER_VERSION_PREFIX}"
execute_with_retries "apt-get install -y -q --no-install-recommends ${cuda_package}"
elif [[ ${OS_NAME} == rocky ]]; then
execute_with_retries "dnf config-manager --add-repo ${NVIDIA_ROCKY_REPO_URL}"
execute_with_retries "dnf clean all"
if [[ "${ROCKY_BINARY_INSTALL}" == "true" ]]; then
execute_with_retries "dnf -y -q module install nvidia-driver"
else
execute_with_retries "dnf -y -q module install nvidia-driver:${NVIDIA_DEBIAN_GPU_DRIVER_VERSION_PREFIX}-dkms"
fi
NVIDIA_ROCKY_GPU_DRIVER_VERSION="$(ls -d /usr/src/nvidia-* | awk -F"nvidia-" '{print $2}')"
execute_with_retries "dkms build nvidia/${NVIDIA_ROCKY_GPU_DRIVER_VERSION}"
execute_with_retries "dkms install nvidia/${NVIDIA_ROCKY_GPU_DRIVER_VERSION}"
modprobe nvidia
execute_with_retries "dnf -y -q install cuda-${CUDA_VERSION//./-}"
else
echo "Unsupported OS: '${OS_NAME}'"
exit 1
fi
ldconfig
echo "NVIDIA GPU driver provided by NVIDIA was installed successfully"
}
# Collects 'gpu_utilization' and 'gpu_memory_utilization' metrics
function install_gpu_agent() {
if ! command -v pip >/dev/null; then
execute_with_retries "apt-get install -y -q python-pip"
fi
local install_dir=/opt/gpu-utilization-agent
mkdir -p "${install_dir}"
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${GPU_AGENT_REPO_URL}/requirements.txt" -o "${install_dir}/requirements.txt"
curl -fsSL --retry-connrefused --retry 10 --retry-max-time 30 \
"${GPU_AGENT_REPO_URL}/report_gpu_metrics.py" -o "${install_dir}/report_gpu_metrics.py"
pip install -r "${install_dir}/requirements.txt"
# Generate GPU service.
cat <<EOF >/lib/systemd/system/gpu-utilization-agent.service
[Unit]
Description=GPU Utilization Metric Agent
[Service]
Type=simple
PIDFile=/run/gpu_agent.pid
ExecStart=/bin/bash --login -c 'python "${install_dir}/report_gpu_metrics.py"'
User=root
Group=root
WorkingDirectory=/
Restart=always
[Install]
WantedBy=multi-user.target
EOF
# Reload systemd manager configuration
systemctl daemon-reload
# Enable gpu-utilization-agent service
systemctl --no-reload --now enable gpu-utilization-agent.service
}
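# Set a single property in a Hadoop/YARN XML configuration file via bdconfig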
function set_hadoop_property() {
local -r config_file=$1
local -r property=$2
local -r value=$3
bdconfig set_property \
--configuration_file "${HADOOP_CONF_DIR}/${config_file}" \
--name "${property}" --value "${value}" \
--clobber
}
function configure_yarn() {
if [[ ! -f ${HADOOP_CONF_DIR}/resource-types.xml ]]; then
printf '<?xml version="1.0" ?>\n<configuration/>' >"${HADOOP_CONF_DIR}/resource-types.xml"
fi
set_hadoop_property 'resource-types.xml' 'yarn.resource-types' 'yarn.io/gpu'
set_hadoop_property 'capacity-scheduler.xml' \
'yarn.scheduler.capacity.resource-calculator' \
'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator'
set_hadoop_property 'yarn-site.xml' 'yarn.resource-types' 'yarn.io/gpu'
}
# This configuration should be applied only if GPU is attached to the node
function configure_yarn_nodemanager() {
set_hadoop_property 'yarn-site.xml' 'yarn.nodemanager.resource-plugins' 'yarn.io/gpu'
set_hadoop_property 'yarn-site.xml' \
'yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices' 'auto'
set_hadoop_property 'yarn-site.xml' \
'yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables' $NVIDIA_SMI_PATH
set_hadoop_property 'yarn-site.xml' \
'yarn.nodemanager.linux-container-executor.cgroups.mount' 'true'
set_hadoop_property 'yarn-site.xml' \
'yarn.nodemanager.linux-container-executor.cgroups.mount-path' '/sys/fs/cgroup'
set_hadoop_property 'yarn-site.xml' \
'yarn.nodemanager.linux-container-executor.cgroups.hierarchy' 'yarn'
set_hadoop_property 'yarn-site.xml' \
'yarn.nodemanager.container-executor.class' \
'org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor'
set_hadoop_property 'yarn-site.xml' 'yarn.nodemanager.linux-container-executor.group' 'yarn'
# Fix local dirs access permissions
local yarn_local_dirs=()
readarray -d ',' yarn_local_dirs < <(bdconfig get_property_value \
--configuration_file "${HADOOP_CONF_DIR}/yarn-site.xml" \
--name "yarn.nodemanager.local-dirs" 2>/dev/null | tr -d '\n')
chown yarn:yarn -R "${yarn_local_dirs[@]/,/}"
}
function configure_gpu_exclusive_mode() {
# check if running spark 3, if not, enable GPU exclusive mode
local spark_version
spark_version=$(spark-submit --version 2>&1 | sed -n 's/.*version[[:blank:]]\+\([0-9]\+\.[0-9]\).*/\1/p' | head -n1)
if [[ ${spark_version} != 3.* ]]; then
# include exclusive mode on GPU
nvidia-smi -c EXCLUSIVE_PROCESS
fi
}
function fetch_mig_scripts() {
mkdir -p /usr/local/yarn-mig-scripts
sudo chmod 755 /usr/local/yarn-mig-scripts
wget -P /usr/local/yarn-mig-scripts/ https://raw.githubusercontent.com/NVIDIA/spark-rapids-examples/branch-22.10/examples/MIG-Support/yarn-unpatched/scripts/nvidia-smi
wget -P /usr/local/yarn-mig-scripts/ https://raw.githubusercontent.com/NVIDIA/spark-rapids-examples/branch-22.10/examples/MIG-Support/yarn-unpatched/scripts/mig2gpu.sh
sudo chmod 755 /usr/local/yarn-mig-scripts/*
}
function configure_gpu_script() {
# Download GPU discovery script
local -r spark_gpu_script_dir='/usr/lib/spark/scripts/gpu'
mkdir -p ${spark_gpu_script_dir}
# need to update the getGpusResources.sh script to look for MIG devices since if multiple GPUs nvidia-smi still
# lists those because we only disable the specific GIs via CGROUPs. Here we just create it based off of:
# https://raw.githubusercontent.com/apache/spark/master/examples/src/main/scripts/getGpusResources.sh
echo '
#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
NUM_MIG_DEVICES=$(nvidia-smi -L | grep MIG | wc -l)
ADDRS=$(nvidia-smi --query-gpu=index --format=csv,noheader | sed -e '\'':a'\'' -e '\''N'\'' -e'\''$!ba'\'' -e '\''s/\n/","/g'\'')
if [ $NUM_MIG_DEVICES -gt 0 ]; then
MIG_INDEX=$(( $NUM_MIG_DEVICES - 1 ))
ADDRS=$(seq -s '\''","'\'' 0 $MIG_INDEX)
fi
echo {\"name\": \"gpu\", \"addresses\":[\"$ADDRS\"]}
' > ${spark_gpu_script_dir}/getGpusResources.sh
chmod a+rwx -R ${spark_gpu_script_dir}
}
function configure_gpu_isolation() {
# enable GPU isolation
sed -i "s/yarn\.nodemanager\.linux\-container\-executor\.group\=.*$/yarn\.nodemanager\.linux\-container\-executor\.group\=yarn/g" "${HADOOP_CONF_DIR}/container-executor.cfg"
if [[ $IS_MIG_ENABLED -ne 0 ]]; then
# configure the container-executor.cfg to have major caps
printf '\n[gpu]\nmodule.enabled=true\ngpu.major-device-number=%s\n\n[cgroups]\nroot=/sys/fs/cgroup\nyarn-hierarchy=yarn\n' $MIG_MAJOR_CAPS >> "${HADOOP_CONF_DIR}/container-executor.cfg"
printf 'export MIG_AS_GPU_ENABLED=1\n' >> "${HADOOP_CONF_DIR}/yarn-env.sh"
printf 'export ENABLE_MIG_GPUS_FOR_CGROUPS=1\n' >> "${HADOOP_CONF_DIR}/yarn-env.sh"
else
printf '\n[gpu]\nmodule.enabled=true\n[cgroups]\nroot=/sys/fs/cgroup\nyarn-hierarchy=yarn\n' >> "${HADOOP_CONF_DIR}/container-executor.cfg"
fi
# Configure a systemd unit to ensure that permissions are set on restart
cat >/etc/systemd/system/dataproc-cgroup-device-permissions.service<<EOF
[Unit]
Description=Set permissions to allow YARN to access device directories
[Service]
ExecStart=/bin/bash -c "chmod a+rwx -R /sys/fs/cgroup/cpu,cpuacct; chmod a+rwx -R /sys/fs/cgroup/devices"
[Install]
WantedBy=multi-user.target
EOF
systemctl enable dataproc-cgroup-device-permissions
systemctl start dataproc-cgroup-device-permissions
}
function main() {
if [[ ${OS_NAME} != debian ]] && [[ ${OS_NAME} != ubuntu ]] && [[ ${OS_NAME} != rocky ]]; then
echo "Unsupported OS: '${OS_NAME}'"
exit 1
fi
if [[ ${OS_NAME} == debian ]] || [[ ${OS_NAME} == ubuntu ]]; then
export DEBIAN_FRONTEND=noninteractive
execute_with_retries "apt-get update"
execute_with_retries "apt-get install -y -q pciutils"
elif [[ ${OS_NAME} == rocky ]] ; then
execute_with_retries "dnf -y -q update"
execute_with_retries "dnf -y -q install pciutils"
execute_with_retries "dnf -y -q install kernel-devel-$(uname -r)"
execute_with_retries "dnf -y -q install gcc"
fi
# This configuration should be run on all nodes
# regardless of whether they have attached GPUs
configure_yarn
# Detect NVIDIA GPU
if (lspci | grep -q NVIDIA); then
# if this is called without the MIG script then the drivers are not installed
if (/usr/bin/nvidia-smi --query-gpu=mig.mode.current --format=csv,noheader | uniq | wc -l); then
NUM_MIG_GPUS=$(/usr/bin/nvidia-smi --query-gpu=mig.mode.current --format=csv,noheader | uniq | wc -l)
if [[ $NUM_MIG_GPUS -eq 1 ]]; then
if (/usr/bin/nvidia-smi --query-gpu=mig.mode.current --format=csv,noheader | grep Enabled); then
IS_MIG_ENABLED=1
NVIDIA_SMI_PATH='/usr/local/yarn-mig-scripts/'
MIG_MAJOR_CAPS=$(grep nvidia-caps /proc/devices | cut -d ' ' -f 1)
fetch_mig_scripts
fi
fi
fi
if [[ ${OS_NAME} == debian ]] || [[ ${OS_NAME} == ubuntu ]]; then
execute_with_retries "apt-get install -y -q 'linux-headers-$(uname -r)'"
fi
# If MIG is enabled, the drivers have already been installed
if [[ $IS_MIG_ENABLED -eq 0 ]]; then
install_nvidia_gpu_driver
if [[ -n ${CUDNN_VERSION} ]]; then
install_nvidia_nccl
install_nvidia_cudnn
fi
# Install GPU metrics collection in Stackdriver if needed
if [[ ${INSTALL_GPU_AGENT} == true ]]; then
install_gpu_agent
echo 'GPU metrics agent successfully deployed.'
else
echo 'GPU metrics agent will not be installed.'
fi
configure_gpu_exclusive_mode
fi
configure_yarn_nodemanager
configure_gpu_script
configure_gpu_isolation
elif [[ "${ROLE}" == "Master" ]]; then
configure_yarn_nodemanager
configure_gpu_script
fi
# Restart YARN services if they are running already
if [[ $(systemctl show hadoop-yarn-resourcemanager.service -p SubState --value) == 'running' ]]; then
systemctl restart hadoop-yarn-resourcemanager.service
fi
if [[ $(systemctl show hadoop-yarn-nodemanager.service -p SubState --value) == 'running' ]]; then
systemctl restart hadoop-yarn-nodemanager.service
fi
}
main
Verify GPU driver install
After completing installation of the GPU driver on your Dataproc nodes, you can verify that the driver is functioning properly. SSH into the master node of your Dataproc cluster and run the following command:
nvidia-smi
If the driver is functioning properly, the driver version and GPU statistics appear in the output (see Verifying the GPU driver install).
Spark configuration
When submitting a job to Spark, you can use the following Spark configuration to load needed libraries:
spark.executorEnv.LD_PRELOAD=libnvblas.so
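For example, a sketch of passing this property when submitting a Spark job with gcloud (the cluster, main class, and jar path are placeholders):
gcloud dataproc jobs submit spark \
  --cluster=cluster-name \
  --region=region \
  --properties=spark.executorEnv.LD_PRELOAD=libnvblas.so \
  --class=org.example.YourApp \
  --jars=gs://your-bucket/your-app.jar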
Example GPU jobs
You can test GPUs on Dataproc by running any of the following jobs, which benefit when run with GPUs:
- Run one of the Spark ML examples.
- To run a matrix computation, run the following example using spark-shell:
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.linalg.distributed._
import java.util.Random

def makeRandomSquareBlockMatrix(rowsPerBlock: Int, nBlocks: Int): BlockMatrix = {
  val range = sc.parallelize(1 to nBlocks)
  val indices = range.cartesian(range)
  return new BlockMatrix(
    indices.map(
      ij => (ij, Matrices.rand(rowsPerBlock, rowsPerBlock, new Random()))),
    rowsPerBlock, rowsPerBlock, 0, 0)
}

val N = 1024 * 5
val n = 2
val mat1 = makeRandomSquareBlockMatrix(N, n)
val mat2 = makeRandomSquareBlockMatrix(N, n)
val mat3 = mat1.multiply(mat2)
mat3.blocks.persist.count
println("Processing complete!")
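One way to run this snippet (a sketch; the file name is a placeholder) is to save it to a file on the cluster's master node and load it into an interactive session with spark-shell's -i flag:
spark-shell -i matrix_multiply.scala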