[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-19。"],[[["Dataproc Serverless allows the execution of Spark workloads without the need to provision and manage a Dataproc cluster, offering two methods: Spark Batch and Spark Interactive."],["Dataproc Serverless for Spark Batch allows users to submit batch workloads via the Google Cloud console, CLI, or API, with the service managing resource scaling and only charging for active workload execution time."],["Dataproc Serverless for Spark Interactive enables the writing and running of code within Jupyter notebooks, accessible through the Dataproc JupyterLab plugin, which also provides functionalities for creating and managing Dataproc on Compute Engine clusters."],["Compared to Dataproc on Compute Engine, Dataproc Serverless for Spark provides serverless capabilities, faster startup times, and interactive sessions, while Compute Engine offers greater infrastructure control and supports other open-source frameworks."],["Dataproc Serverless adheres to data residency, CMEK, and VPC-SC security requirements and supports various Spark batch workload types including PySpark, Spark SQL, Spark R, and Spark (Java or Scala)."]]],[]]