# Use CMEK with Google Cloud Serverless for Apache Spark

By default, Google Cloud Serverless for Apache Spark encrypts customer content at rest. Serverless for Apache Spark handles encryption for you without any additional action on your part. This option is called *Google default encryption*.
If you want to control your encryption keys, you can use customer-managed encryption keys (CMEK) in [Cloud KMS](/kms/docs) with CMEK-integrated services, including Serverless for Apache Spark. Using Cloud KMS keys gives you control over their protection level, location, rotation schedule, usage and access permissions, and cryptographic boundaries.
Using Cloud KMS also lets you [track key usage](/kms/docs/view-key-usage), view audit logs, and control key lifecycles.
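For a concrete sense of that control, here is a minimal gcloud sketch that creates a key ring and a symmetric key on a 30-day rotation schedule. The key ring name, key name, location, and rotation start time are placeholder choices for illustration, not values this guide prescribes:

```
# Create a key ring in the region where your Spark resources will run.
gcloud kms keyrings create my-keyring --location=us-central1

# Create a symmetric encryption key that rotates every 30 days.
gcloud kms keys create my-key \
    --keyring=my-keyring \
    --location=us-central1 \
    --purpose=encryption \
    --rotation-period=30d \
    --next-rotation-time=2026-01-01T00:00:00Z
```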
Instead of Google owning and managing the symmetric [key encryption keys (KEKs)](/kms/docs/envelope-encryption#key_encryption_keys) that protect your data, you control and manage these keys in Cloud KMS.
After you set up your resources with CMEK, the experience of accessing your Serverless for Apache Spark resources is similar to using Google default encryption.
For more information about your encryption options, see [Customer-managed encryption keys (CMEK)](/kms/docs/cmek).

**Note:** When you use Google Cloud Serverless for Apache Spark, data is stored on disks on the underlying serverless infrastructure and in a Cloud Storage [staging bucket](/dataproc-serverless/docs/concepts/buckets). This data is encrypted using a Google-generated data encryption key (DEK) and key encryption key (KEK). If you want control of your KEK, you can use a customer-managed encryption key (CMEK) instead of [default encryption at rest](/security/encryption/default-encryption). When you use a CMEK, you create the key and manage access to it, and you can revoke access to it to prevent decryption of your DEKs and data.
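As an example of the revocation described in the note, you can stop decryption by disabling a key version; a minimal sketch, assuming the placeholder names from the earlier sketch:

```
# Disabling a key version prevents decryption of DEKs (and therefore data)
# wrapped by that version until the version is re-enabled.
gcloud kms keys versions disable 1 \
    --key=my-key \
    --keyring=my-keyring \
    --location=us-central1
```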
Use CMEK
--------
Follow the steps in this section to use CMEK to encrypt data that Google Cloud Serverless for Apache Spark writes to persistent disk and to the Dataproc staging bucket.

**Note:** Beginning April 23, 2024:

- Serverless for Apache Spark also uses your CMEK to encrypt batch job arguments. The [Cloud KMS CryptoKey Encrypter/Decrypter](/kms/docs/reference/permissions-and-roles#cloudkms.cryptoKeyEncrypterDecrypter) IAM role must be assigned to the Dataproc Service Agent service account to enable this behavior. If the [Dataproc Service Agent role](/iam/docs/understanding-roles#dataproc.serviceAgent) is not attached to the Dataproc Service Agent service account, then add the `serviceusage.services.use` permission to a custom role attached to the Dataproc Service Agent service account. The Cloud KMS API must be enabled on the project that runs Serverless for Apache Spark resources.
- [`batches.list`](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches/list) returns an `unreachable` field that lists any batches with job arguments that couldn't be decrypted. You can issue [`batches.get`](/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches/get) requests to obtain more information on unreachable batches.
- The key (CMEK) must be located in the same location as the encrypted resource. For example, the CMEK used to encrypt a batch that runs in the `us-central1` region must also be located in the `us-central1` region.
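For reference, the Cloud KMS API requirement in the note can be met with a single command; a minimal sketch, with the project ID as a placeholder:

```
# Enable the Cloud KMS API on the project that runs
# Serverless for Apache Spark resources.
gcloud services enable cloudkms.googleapis.com --project=PROJECT_ID
```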
1. Create a key using the [Cloud Key Management Service (Cloud KMS)](/kms/docs/creating-keys).

2. Copy the resource name.

   The resource name is constructed as follows:

   ```
   projects/PROJECT_ID/locations/REGION/keyRings/KEY_RING_NAME/cryptoKeys/KEY_NAME
   ```

3. Enable the Compute Engine, Dataproc, and Cloud Storage Service Agent service accounts to use your key:

   1. See [Protect resources by using Cloud KMS keys > Required Roles](/compute/docs/disks/customer-managed-encryption#required-roles) to assign the [Cloud KMS CryptoKey Encrypter/Decrypter](/kms/docs/reference/permissions-and-roles#cloudkms.cryptoKeyEncrypterDecrypter) role to the [Compute Engine Service Agent service account](/compute/docs/access/service-accounts#compute_engine_service_account). If this service account is not listed on the IAM page in the Google Cloud console, click **Include Google-provided role grants** to list it.

   2. Assign the [Cloud KMS CryptoKey Encrypter/Decrypter](/kms/docs/reference/permissions-and-roles#cloudkms.cryptoKeyEncrypterDecrypter) role to the [Dataproc Service Agent service account](/dataproc/docs/concepts/iam/dataproc-principals#service_agent_control_plane_identity). You can use the Google Cloud CLI to assign the role:

      ```
      gcloud projects add-iam-policy-binding KMS_PROJECT_ID \
          --member serviceAccount:service-PROJECT_NUMBER@dataproc-accounts.iam.gserviceaccount.com \
          --role roles/cloudkms.cryptoKeyEncrypterDecrypter
      ```

      Replace the following:

      - KMS_PROJECT_ID: the ID of your Google Cloud project that runs Cloud KMS. This project can also be the project that runs Dataproc resources.
      - PROJECT_NUMBER: the project number (not the project ID) of your Google Cloud project that runs Dataproc resources.

   3. Enable the Cloud KMS API on the project that runs Serverless for Apache Spark resources.

   4. If the [Dataproc Service Agent role](/iam/docs/understanding-roles#dataproc.serviceAgent) is not attached to the [Dataproc Service Agent service account](/dataproc/docs/concepts/iam/dataproc-principals#service_agent_control_plane_identity), then add the `serviceusage.services.use` permission to the custom role attached to the Dataproc Service Agent service account. If the Dataproc Service Agent role is attached to the Dataproc Service Agent service account, skip this step.

   5. Follow the steps to [add your key on the bucket](/storage/docs/encryption/using-customer-managed-keys#set-default-key).

4. When you [submit a batch workload](/dataproc-serverless/docs/quickstarts/spark-batch#submit_a_spark_batch_workload) (see the sketch after this list):

   1. Specify your key in the batch [`kmsKey`](/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig#ExecutionConfig.FIELDS.kms_key) parameter.
   2. Specify the name of your Cloud Storage bucket in the batch [`stagingBucket`](/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig#ExecutionConfig.FIELDS.staging_bucket) parameter.

5. When you [create an interactive session or session template](/dataproc-serverless/docs/guides/create-serverless-sessions-templates):

   1. Specify your key in the session [`kmsKey`](/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig#ExecutionConfig.FIELDS.kms_key) parameter.
   2. Specify the name of your Cloud Storage bucket in the session [`stagingBucket`](/dataproc-serverless/docs/reference/rest/v1/EnvironmentConfig#ExecutionConfig.FIELDS.staging_bucket) parameter.
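To tie the batch parameters together, here is a hedged sketch of a `batches.create` REST request that sets both fields. The field paths come from the `ExecutionConfig` reference linked above; the Spark job, bucket, region, and key names are placeholders:

```
# Request body: kmsKey and stagingBucket live in
# environmentConfig.executionConfig.
cat > batch.json <<'EOF'
{
  "sparkBatch": {
    "mainClass": "org.apache.spark.examples.SparkPi",
    "jarFileUris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"]
  },
  "environmentConfig": {
    "executionConfig": {
      "kmsKey": "projects/KMS_PROJECT_ID/locations/us-central1/keyRings/KEY_RING_NAME/cryptoKeys/KEY_NAME",
      "stagingBucket": "my-staging-bucket"
    }
  }
}
EOF

# Create the batch; the CMEK must be in the same region as the batch.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @batch.json \
    "https://dataproc.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batches"
```

Interactive sessions and session templates accept the same two `executionConfig` fields.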