Use custom constraints
Google Cloud Organization Policy gives you centralized, programmatic control over your organization's resources. As the organization policy administrator, you can define an organization policy, which is a set of restrictions called constraints that apply to Google Cloud resources and descendants of those resources in the Google Cloud resource hierarchy. You can enforce organization policies at the organization, folder, or project level.
Organization Policy provides predefined constraints for various Google Cloud services. However, if you want more granular, customizable control over the specific fields that are restricted in your organization policies, you can also create custom constraints and use those custom constraints in an organization policy.
Benefits

You can use a custom organization policy to allow or deny specific operations on Serverless for Apache Spark batches and sessions. For example, if a request to create a batch workload fails custom constraint validation as set by your organization policy, the request fails, and an error is returned to the caller.
Policy inheritance

By default, organization policies are inherited by the descendants of the resources on which you enforce the policy. For example, if you enforce a policy on a folder, Google Cloud enforces the policy on all projects in the folder. To learn more about this behavior and how to change it, see Hierarchy evaluation rules.
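For example, to view the effective policy that a project inherits for a given constraint, you can use the gcloud CLI. The following is a minimal sketch that uses the sample constraint name from later in this page:

# Show the effective organization policy for a constraint, as evaluated
# for a project after inheritance rules are applied.
gcloud org-policies describe custom.batchMustHaveSpecifiedCategoryLabel \
    --project=PROJECT_ID \
    --effective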
Pricing

The Organization Policy Service, including predefined and custom constraints, is offered at no charge.
Before you begin

- Set up your project:
  - Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  - In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
  - Verify that billing is enabled for your Google Cloud project.
  - Enable the Serverless for Apache Spark API.
  - Install the Google Cloud CLI.
  - If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
  - To initialize the gcloud CLI, run the following command:

    gcloud init
- Make sure that you know your organization ID.
Required roles

To get the permissions that you need to manage organization policies, ask your administrator to grant you the Organization Policy Administrator (roles/orgpolicy.policyAdmin) IAM role on the organization resource. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to manage organization policies. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to manage organization policies:
- orgpolicy.constraints.list
- orgpolicy.policies.create
- orgpolicy.policies.delete
- orgpolicy.policies.list
- orgpolicy.policies.update
- orgpolicy.policy.get
- orgpolicy.policy.set
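For example, an administrator can grant this role with the gcloud CLI. The following is a sketch; the member email address is a placeholder:

# Grant the Organization Policy Administrator role on the organization.
gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
    --member=user:admin@example.com \
    --role=roles/orgpolicy.policyAdmin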
Create a custom constraint

A custom constraint is defined in a YAML file by the resources, methods, conditions, and actions that it applies to. Serverless for Apache Spark supports custom constraints that are applied to the CREATE method of batch and session resources. For more information about how to create a custom constraint, see Defining custom constraints.
Create a custom constraint for a batch resource

To create a YAML file for a Serverless for Apache Spark custom constraint for a batch resource, use the following format:

name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION

Replace the following:

- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name of your new custom constraint. A custom constraint must start with custom. and can only include uppercase letters, lowercase letters, or numbers, for example, custom.batchMustHaveSpecifiedCategoryLabel. The maximum length of this field is 70 characters, not counting the prefix organizations/123456789/customConstraints/.
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Serverless for Apache Spark resources and operations. Sample condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service']).
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce batch 'category' label requirement". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value".
Create a custom constraint for a session resource

To create a YAML file for a Serverless for Apache Spark custom constraint for a session resource, use the following format:

name: organizations/ORGANIZATION_ID/customConstraints/CONSTRAINT_NAME
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: CONDITION
actionType: ACTION
displayName: DISPLAY_NAME
description: DESCRIPTION

Replace the following:

- ORGANIZATION_ID: your organization ID, such as 123456789.
- CONSTRAINT_NAME: the name of your new custom constraint. A custom constraint must start with custom. and can only include uppercase letters, lowercase letters, or numbers, for example, custom.SessionNameMustStartWithTeamName. The maximum length of this field is 70 characters, not counting the prefix organizations/123456789/customConstraints/.
- CONDITION: a CEL condition that is written against a representation of a supported service resource. This field has a maximum length of 1000 characters. For more information about the resources available to write conditions against, see Serverless for Apache Spark resources and operations. Sample condition: resource.name.startsWith("dataproc").
- ACTION: the action to take if the condition is met. This can be either ALLOW or DENY.
- DISPLAY_NAME: a human-friendly name for the constraint. Sample display name: "Enforce session must have a ttl < 2 hours". This field has a maximum length of 200 characters.
- DESCRIPTION: a human-friendly description of the constraint to display as an error message when the policy is violated. This field has a maximum length of 2000 characters. Sample description: "Only allow session creation if it sets an allowable TTL".
Set up the custom constraint

After you have created the YAML file for a new custom constraint, you must set it up to make it available for organization policies in your organization. To set up a custom constraint, use the gcloud org-policies set-custom-constraint command:

gcloud org-policies set-custom-constraint CONSTRAINT_PATH

Replace CONSTRAINT_PATH with the full path to your custom constraint file. For example, /home/user/customconstraint.yaml.

Once completed, your custom constraints are available as organization policies in your list of Google Cloud organization policies. To verify that the custom constraint exists, use the gcloud org-policies list-custom-constraints command:

gcloud org-policies list-custom-constraints --organization=ORGANIZATION_ID

Replace ORGANIZATION_ID with the ID of your organization resource. For more information, see View organization policies.
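For example, if you saved the batch constraint shown earlier as batch-constraint.yaml (a placeholder file name) and your organization ID is the sample value 123456789, you would run:

# Register the custom constraint, then confirm that it is listed.
gcloud org-policies set-custom-constraint batch-constraint.yaml
gcloud org-policies list-custom-constraints --organization=123456789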
Enforce the custom constraint

To enforce a constraint, create an organization policy that references the constraint, and then apply the organization policy to a Google Cloud resource.

Console

- In the Google Cloud console, go to the Organization policies page.
- From the project picker, select the project for which you want to set the organization policy.
- From the list on the Organization policies page, select your constraint to view the Policy details page for that constraint.
- To configure the organization policy for this resource, click Manage policy.
- On the Edit policy page, select Override parent's policy.
- Click Add a rule.
- In the Enforcement section, select whether enforcement of this organization policy is turned on.
- Optional: To make the organization policy conditional on a tag, click Add condition. Note that if you add a conditional rule to an organization policy, you must add at least one unconditional rule or the policy can't be saved. For more information, see Setting an organization policy with tags.
- Click Test changes to simulate the effect of the organization policy. Policy simulation isn't available for legacy managed constraints. For more information, see Test organization policy changes with Policy Simulator.
- To finish and apply the organization policy, click Set policy. The policy can take up to 15 minutes to take effect.

gcloud
To create an organization policy with boolean rules, create a policy YAML file that references the constraint:

name: projects/PROJECT_ID/policies/CONSTRAINT_NAME
spec:
  rules:
  - enforce: true

Replace the following:

- PROJECT_ID: the project on which you want to enforce your constraint.
- CONSTRAINT_NAME: the name you defined for your custom constraint. For example, custom.batchMustHaveSpecifiedCategoryLabel.
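For example, a policy file that turns on the sample batch label constraint in a project with the hypothetical ID my-project would look like the following:

name: projects/my-project/policies/custom.batchMustHaveSpecifiedCategoryLabel
spec:
  rules:
  - enforce: true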
To enforce the organization policy containing the constraint, run the following command:

gcloud org-policies set-policy POLICY_PATH

Replace POLICY_PATH with the full path to your organization policy YAML file. The policy takes up to 15 minutes to take effect.

Test the custom constraint
This section describes how to test custom constraints for batch and session resources.
Test the custom constraint for batch resources

The following batch creation example assumes that a custom constraint has been created and enforced on batch creation to require that the batch have a "category" label attached with a value of "retail", "ads", or "service":

("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])

gcloud dataproc batches submit spark \
    --region us-west1 \
    --jars file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class org.apache.spark.examples.SparkPi \
    --network default \
    --labels category=foo \
    -- 100
Sample output:

Operation denied by custom org policies: ["customConstraints/custom.batchMustHaveSpecifiedCategoryLabel": "Only allow Dataproc batch creation if it has a 'category' label with a 'retail', 'ads', or 'service' value"]
Test the custom constraint for session resources

The following session creation example assumes that a custom constraint has been created and enforced on session creation, requiring the session name to start with orgName.

gcloud beta dataproc sessions create spark test-session --location us-central1
Sample output:
Operation denied by custom org policy: ["customConstraints/custom.denySessionNameNotStartingWithOrgName": "Deny session creation if its name does not start with 'orgName'"]
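A session whose name starts with the required prefix should be accepted. The following sketch assumes that the name orgName-test-session satisfies the constraint:

gcloud beta dataproc sessions create spark orgName-test-session --location us-central1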
Serverless for Apache Spark resources and operations

This section lists the Google Cloud Serverless for Apache Spark custom constraint attributes that are available for batch and session resources.

Supported Google Cloud Serverless for Apache Spark batch constraints

The following Serverless for Apache Spark custom constraint attributes are available to use when you create (submit) a batch workload:

General
- resource.labels

PySparkBatch
- resource.pysparkBatch.mainPythonFileUri
- resource.pysparkBatch.args
- resource.pysparkBatch.pythonFileUris
- resource.pysparkBatch.jarFileUris
- resource.pysparkBatch.fileUris
- resource.pysparkBatch.archiveUris

SparkBatch
- resource.sparkBatch.mainJarFileUri
- resource.sparkBatch.mainClass
- resource.sparkBatch.args
- resource.sparkBatch.jarFileUris
- resource.sparkBatch.fileUris
- resource.sparkBatch.archiveUris

SparkRBatch
- resource.sparkRBatch.mainRFileUri
- resource.sparkRBatch.args
- resource.sparkRBatch.fileUris
- resource.sparkRBatch.archiveUris

SparkSqlBatch
- resource.sparkSqlBatch.queryFileUri
- resource.sparkSqlBatch.queryVariables
- resource.sparkSqlBatch.jarFileUris

RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort

ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType

PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
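As an illustration of how these attributes are referenced in a condition, the following CEL expression is a sketch that allows batch creation only when the custom container image comes from a specific repository; the Artifact Registry path is a hypothetical example, and has() guards against an unset field, as in the examples later on this page:

(has(resource.runtimeConfig.containerImage)) && (resource.runtimeConfig.containerImage.startsWith('us-docker.pkg.dev/my-project/my-repo/'))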
Supported Google Cloud Serverless for Apache Spark session constraints

The following Google Cloud Serverless for Apache Spark session attributes are available to use when you create custom constraints for serverless sessions:

General
- resource.name
- resource.sparkConnectSession
- resource.user
- resource.sessionTemplate

JupyterSession
- resource.jupyterSession.kernel
- resource.jupyterSession.displayName

RuntimeConfig
- resource.runtimeConfig.version
- resource.runtimeConfig.containerImage
- resource.runtimeConfig.properties
- resource.runtimeConfig.repositoryConfig.pypiRepositoryConfig.pypiRepository
- resource.runtimeConfig.autotuningConfig.scenarios
- resource.runtimeConfig.cohort

ExecutionConfig
- resource.environmentConfig.executionConfig.serviceAccount
- resource.environmentConfig.executionConfig.networkUri
- resource.environmentConfig.executionConfig.subnetworkUri
- resource.environmentConfig.executionConfig.networkTags
- resource.environmentConfig.executionConfig.kmsKey
- resource.environmentConfig.executionConfig.idleTtl
- resource.environmentConfig.executionConfig.ttl
- resource.environmentConfig.executionConfig.stagingBucket
- resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType

PeripheralsConfig
- resource.environmentConfig.peripheralsConfig.metastoreService
- resource.environmentConfig.peripheralsConfig.sparkHistoryServerConfig.dataprocCluster
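Following the pattern of the has() checks in the batch examples, a condition that only allows Spark Connect sessions could be sketched as follows; this is an assumption based on the attribute list above rather than a documented example:

(has(resource.sparkConnectSession))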
Custom constraint examples for common use cases

This section provides examples of custom constraints for common batch and session resource use cases.

Custom constraint examples for batch resources

The following table provides examples of Serverless for Apache Spark batch custom constraints:
Description: Batches must attach a "category" label with an allowed value.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustHaveSpecifiedCategoryLabel
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: ("category" in resource.labels) && (resource.labels['category'] in ['retail', 'ads', 'service'])
actionType: ALLOW
displayName: Enforce batch "category" label requirement.
description: Only allow batch creation if it attaches a "category" label with an allowable value.

Description: Batches must set an allowed runtime version.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseAllowedVersion
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
actionType: ALLOW
displayName: Enforce batch runtime version.
description: Only allow batch creation if it sets an allowable runtime version.

Description: Batches must use SparkSQL.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustUseSparkSQL
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.sparkSqlBatch))
actionType: ALLOW
displayName: Enforce batch only use SparkSQL Batch.
description: Only allow creation of SparkSQL Batch.

Description: Batches must set a TTL of less than 2 hours.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchMustSetLessThan2hTtl
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
actionType: ALLOW
displayName: Enforce batch TTL.
description: Only allow batch creation if it sets an allowable TTL.

Description: Batches can't set more than 20 Spark initial executors.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances'])>20)
actionType: DENY
displayName: Enforce maximum number of batch Spark executor instances.
description: Deny batch creation if it specifies more than 20 Spark executor instances.

Description: Batches can't set more than 20 Spark dynamic allocation initial executors.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors'])>20)
actionType: DENY
displayName: Enforce maximum number of batch dynamic allocation initial executors.
description: Deny batch creation if it specifies more than 20 Spark dynamic allocation initial executors.

Description: Batches can't have more than 20 dynamic allocation executors.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchDynamicAllocationMaxExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: (resource.runtimeConfig.properties['spark.dynamicAllocation.enabled']=='false') || (('spark.dynamicAllocation.maxExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.maxExecutors'])<=20))
actionType: ALLOW
displayName: Enforce batch maximum number of dynamic allocation executors.
description: Only allow batch creation if dynamic allocation is disabled or the maximum number of dynamic allocation executors is set to less than or equal to 20.

Description: Batches must set the KMS key to an allowed pattern.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchKmsPattern
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
actionType: ALLOW
displayName: Enforce batch KMS Key pattern.
description: Only allow batch creation if it sets the KMS key to an allowable pattern.

Description: Batches must set the staging bucket prefix to an allowed value.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchStagingBucketPrefix
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX)
actionType: ALLOW
displayName: Enforce batch staging bucket prefix.
description: Only allow batch creation if it sets the staging bucket prefix to ALLOWED_PREFIX.

Description: The batch executor memory setting must end with the suffix m and be less than 20000 m.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.batchExecutorMemoryMax
resourceTypes:
- dataproc.googleapis.com/Batch
methodTypes:
- CREATE
condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0])<20000)
actionType: ALLOW
displayName: Enforce batch executor maximum memory.
description: Only allow batch creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m.
Custom constraint examples for session resources

The following table provides examples of Serverless for Apache Spark session custom constraints:
Description: Sessions must set sessionTemplate to an empty string.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateMustBeEmpty
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.sessionTemplate == ""
actionType: ALLOW
displayName: Enforce empty session templates.
description: Only allow session creation if session template is empty string.

Description: sessionTemplate must be equal to approved template IDs.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionTemplateIdMustBeApproved
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.sessionTemplate.startsWith("https://www.googleapis.com/compute/v1/projects/") && resource.sessionTemplate.contains("/locations/") && resource.sessionTemplate.contains("/sessionTemplates/") && ( resource.sessionTemplate.endsWith("/1") || resource.sessionTemplate.endsWith("/2") || resource.sessionTemplate.endsWith("/13") )
actionType: ALLOW
displayName: Enforce templateId must be 1, 2, or 13.
description: Only allow session creation if session template ID is in the approved list, that is, 1, 2 and 13.

Description: Sessions must use end-user credentials to authenticate the workload.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.AllowEUCSessions
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.authenticationConfig.userWorkloadAuthenticationType=="END_USER_CREDENTIALS"
actionType: ALLOW
displayName: Require end user credential authenticated sessions.
description: Allow session creation only if the workload is authenticated using end-user credentials.

Description: Sessions must set an allowed runtime version.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustUseAllowedVersion
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.version)) && (resource.runtimeConfig.version in ["2.0.45", "2.0.48"])
actionType: ALLOW
displayName: Enforce session runtime version.
description: Only allow session creation if it sets an allowable runtime version.

Description: Sessions must set a TTL of less than 2 hours.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionMustSetLessThan2hTtl
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.environmentConfig.executionConfig.ttl)) && (resource.environmentConfig.executionConfig.ttl <= duration('2h'))
actionType: ALLOW
displayName: Enforce session TTL.
description: Only allow session creation if it sets an allowable TTL.

Description: Sessions can't set more than 20 Spark initial executors.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.executor.instances' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.executor.instances'])>20)
actionType: DENY
displayName: Enforce maximum number of session Spark executor instances.
description: Deny session creation if it specifies more than 20 Spark executor instances.

Description: Sessions can't set more than 20 Spark dynamic allocation initial executors.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionDynamicAllocationInitialExecutorMax20
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: (has(resource.runtimeConfig.properties)) && ('spark.dynamicAllocation.initialExecutors' in resource.runtimeConfig.properties) && (int(resource.runtimeConfig.properties['spark.dynamicAllocation.initialExecutors'])>20)
actionType: DENY
displayName: Enforce maximum number of session dynamic allocation initial executors.
description: Deny session creation if it specifies more than 20 Spark dynamic allocation initial executors.

Description: Sessions must set the KMS key to an allowed pattern.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionKmsPattern
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: matches(resource.environmentConfig.executionConfig.kmsKey, '^keypattern[a-z]$')
actionType: ALLOW
displayName: Enforce session KMS Key pattern.
description: Only allow session creation if it sets the KMS key to an allowable pattern.

Description: Sessions must set the staging bucket prefix to an allowed value.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionStagingBucketPrefix
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: resource.environmentConfig.executionConfig.stagingBucket.startsWith(ALLOWED_PREFIX)
actionType: ALLOW
displayName: Enforce session staging bucket prefix.
description: Only allow session creation if it sets the staging bucket prefix to ALLOWED_PREFIX.

Description: The session executor memory setting must end with the suffix m and be less than 20000 m.
Constraint syntax:

name: organizations/ORGANIZATION_ID/customConstraints/custom.sessionExecutorMemoryMax
resourceTypes:
- dataproc.googleapis.com/Session
methodTypes:
- CREATE
condition: ('spark.executor.memory' in resource.runtimeConfig.properties) && (resource.runtimeConfig.properties['spark.executor.memory'].endsWith('m')) && (int(resource.runtimeConfig.properties['spark.executor.memory'].split('m')[0])<20000)
actionType: ALLOW
displayName: Enforce session executor maximum memory.
description: Only allow session creation if the executor memory setting ends with a suffix 'm' and is less than 20000 m.
What's next