Single node clusters are Dataproc clusters with only one node. This single node acts as the master and worker for your Dataproc cluster. While single node clusters only have one node, most Dataproc concepts and features still apply, except those listed below.
There are a number of situations where single node Dataproc clusters can be useful, including:
Trying out new versions of Spark and Hadoop or other open source components
Building proof-of-concept (PoC) demonstrations
Lightweight data science
Small-scale, non-critical data processing
Education related to the Spark and Hadoop ecosystem
Single node cluster semantics
The following semantics apply to single node Dataproc clusters:
Single node clusters are configured the same as multi node Dataproc clusters, and include services such as HDFS and YARN.
Single node clusters show 0 workers because the single node acts as both master and worker.
Single node clusters are given hostnames that follow the pattern clustername-m. You can use this hostname to SSH into the node or to connect to a web UI on it (see the SSH sketch after this list).
Single node clusters cannot be upgraded to multi node clusters. Once created, single node clusters are restricted to one node. Similarly, multi node clusters cannot be scaled down to single node clusters.
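As a minimal sketch of the hostname semantics above, the following command opens an SSH session to the single node. The cluster name and zone are placeholders, not values from this page; substitute your own.

```
# The single node of a cluster named my-single-node-cluster is my-single-node-cluster-m.
gcloud compute ssh my-single-node-cluster-m --zone=us-central1-a
```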
Limitations
Single node clusters are not recommended for large-scale parallel data processing. If you exceed the resources on a single node cluster, use a multi node Dataproc cluster instead.
Single node clusters are not available with high availability because there is only one node in the cluster.
Single node clusters cannot use preemptible VMs.
Create a single node cluster
gcloud command
You can create a single node Dataproc cluster using the gcloud command-line tool. To create a single node cluster, pass the --single-node flag to the gcloud dataproc clusters create command.
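For example (cluster-name and region are placeholders for your own values):

```
gcloud dataproc clusters create cluster-name \
    --region=region \
    --single-node \
    ... other args
```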
REST API
You can create a single node cluster through the Dataproc REST API using a clusters.create request. When making this request, you must do the following (a request body sketch follows the steps):
Add the property "dataproc:dataproc.allow.zero.workers":"true" to the SoftwareConfig of the cluster request.
Don't submit values for workerConfig and secondaryWorkerConfig (see ClusterConfig).
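A minimal request body sketch, assuming a hypothetical cluster named example-single-node; project-id and region in the URL are placeholders:

```
POST https://dataproc.googleapis.com/v1/projects/project-id/regions/region/clusters

{
  "clusterName": "example-single-node",
  "config": {
    "softwareConfig": {
      "properties": {
        "dataproc:dataproc.allow.zero.workers": "true"
      }
    }
  }
}
```

Note that the body omits workerConfig and secondaryWorkerConfig, as required above. To examine and construct the JSON body of a clusters.create request, you can also fill in the applicable fields on the Dataproc Create a cluster page in the Google Cloud console and click the Equivalent REST button to view the POST request with the completed JSON request body.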
Console
You can create a single node cluster by selecting "Single Node (1 master, 0 workers)" in the Cluster type section of the Set up cluster panel on the Dataproc Create a cluster page.
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[[["\u003cp\u003eSingle node Dataproc clusters utilize a single node as both the master and worker, simplifying cluster management for certain use cases.\u003c/p\u003e\n"],["\u003cp\u003eThese clusters are useful for tasks like testing new Spark/Hadoop versions, creating proof-of-concept demos, lightweight data science, small-scale data processing, and educational purposes.\u003c/p\u003e\n"],["\u003cp\u003eWhile most Dataproc features apply, single node clusters lack high-availability and do not support preemptible VMs or scaling to multi-node setups.\u003c/p\u003e\n"],["\u003cp\u003eSingle node clusters can be created using the \u003ccode\u003egcloud\u003c/code\u003e command-line tool with the \u003ccode\u003e--single-node\u003c/code\u003e flag, the Dataproc REST API with specific configurations, or the Google Cloud console by choosing the "Single Node" cluster type.\u003c/p\u003e\n"],["\u003cp\u003eSingle node clusters are not suitable for large-scale data processing, as they are limited by the resources of a single node.\u003c/p\u003e\n"]]],[],null,["Single node clusters are Dataproc clusters with only one node. This single\nnode acts as the master and worker for your\nDataproc cluster. While single\nnode clusters only have one node, most Dataproc concepts and features\nstill apply, except those [listed below](#limitations).\n\nThere are a number of situations where single node Dataproc clusters can\nbe useful, including:\n\n- Trying out new versions of Spark and Hadoop or other open source components\n- Building proof-of-concept (PoC) demonstrations\n- Lightweight data science\n- Small-scale non-critical data processing\n- Education related to the Spark and Hadoop ecosystem\n\nSingle node cluster semantics\n\nThe following semantics apply to single node Dataproc clusters:\n\n- Single node clusters are configured the same as multi node Dataproc clusters, and include services such as HDFS and YARN.\n- Single node clusters report as master nodes for [initialization actions](/dataproc/docs/concepts/configuring-clusters/init-actions).\n- Single node clusters show 0 workers since the single node acts as both master and worker.\n- Single node clusters are given hostnames that follow the pattern `clustername-m`. You can use this hostname to SSH into or connect to a [web UI](/dataproc/docs/concepts/accessing/cluster-web-interfaces) on the node.\n- Single node clusters cannot be upgraded to multi node clusters. Once created, single node clusters are restricted to one node. Similarly, multi node clusters cannot be scaled down to single node clusters.\n\nLimitations\n\n- Single node clusters are not recommended for large-scale parallel data\n processing. 
If you exceed the resources on a single node cluster, a multi node\n Dataproc cluster is recommended.\n\n- Single node clusters are not available with\n [high-availability](/dataproc/docs/concepts/configuring-clusters/high-availability)\n since there is only one node in the cluster.\n\n- Single node clusters cannot use [preemptible VMs](/dataproc/docs/concepts/compute/preemptible-vms).\n\nCreate a single node cluster \n\ngcloud command\n\n\nYou can create a single node Dataproc cluster using the `gcloud`\ncommand-line tool. To create a single node cluster, pass the\n`--single-node` flag to the\n[`gcloud dataproc clusters create`](/sdk/gcloud/reference/dataproc/clusters/create)\ncommand. \n\n```\ngcloud dataproc clusters create cluster-name \\\n --region=region \\\n --single-node \\\n ... other args\n```\n\n\u003cbr /\u003e\n\nREST API\n\n\nYou can create a single node cluster through the\n[Dataproc REST API](/dataproc/docs/reference/rest) using a\n[clusters.create](/dataproc/docs/reference/rest/v1/projects.regions.clusters/create)\nrequest. When making this request, you must:\n\n1. Add the property `\"dataproc:dataproc.allow.zero.workers\":\"true\"` to the [SoftwareConfig](/dataproc/docs/reference/rest/v1/ClusterConfig#SoftwareConfig) of the cluster request.\n2. Don't submit values for `workerConfig` and `secondaryWorkerConfig` (see [ClusterConfig](/dataproc/docs/reference/rest/v1/ClusterConfig)).\n\n| To examine and construct the JSON body of a Dataproc API clusters create request, open the Dataproc [Create a cluster](https://console.cloud.google.com/dataproc/clustersAdd) page, fill in the applicable fields, then click the **Equivalent REST** button at the bottom of the left panel to view the POST request with the completed JSON request body.\n\n\u003cbr /\u003e\n\nConsole\n\n\nYou can create a single node cluster by selecting \"Single Node\n(1 master, 0 workers)\" on the Cluster type section of\nthe Set up cluster panel on the Dataproc\n[Create a cluster](https://console.cloud.google.com/dataproc/clustersAdd)\npage."]]