Diagnose Cloud Dataproc clusters

Log and configuration information is often useful when troubleshooting a cluster or job. Unfortunately, there are many log and configuration files, and gathering each one for investigation can be time-consuming. To address this problem, Cloud Dataproc clusters support a special diagnose command through the Cloud SDK. This command gathers and archives important system, Spark/Hadoop, and Cloud Dataproc logs, and then uploads the archive to the Cloud Storage bucket attached to your cluster.

Using the diagnose command

You can use the Cloud SDK diagnose command on your Cloud Dataproc clusters (see Cloud Dataproc and Cloud SDK).

Once the Cloud SDK is installed and configured, you can run the diagnose command on your cluster as shown below (replace cluster-name with the name of your cluster).

gcloud dataproc clusters diagnose cluster-name
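
Depending on your Cloud SDK version and configuration, you may also need to specify the cluster's region explicitly (replace region with the region your cluster was created in, for example us-central1):

gcloud dataproc clusters diagnose cluster-name --region=region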

The command outputs the name and location of the archive that contains your data.

...
Saving archive to cloud
Copying file:///tmp/tmp.FgWEq3f2DJ/diagnostic.tar ...
Uploading   ...23db9-762e-4593-8a5a-f4abd75527e6/diagnostic.tar ...
Diagnostic results saved in:
gs://bucket-name/.../cluster-uuid/.../job-id/diagnostic.tar
    ...
In this example, bucket-name is the Cloud Storage bucket attached to your cluster, cluster-uuid is the unique ID (UUID) of your cluster, and job-id is the UUID belonging to the system task that ran the diagnose command.
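
If you want to inspect the contents locally, you can copy the archive out of Cloud Storage and unpack it. The path below is a placeholder; use the path printed by the diagnose command:

gsutil cp gs://bucket-name/path/to/diagnostic.tar .
tar -xf diagnostic.tar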

When you create a Cloud Dataproc cluster, Cloud Dataproc automatically creates a Cloud Storage bucket and attaches it to your cluster. The diagnose command outputs the archive file to this bucket. To determine the name of the bucket created by Cloud Dataproc, use the Cloud SDK clusters describe command. The bucket associated with your cluster is listed next to configurationBucket.

gcloud dataproc clusters describe cluster-name
  clusterName: cluster-name
  clusterUuid: daa40b3f-5ff5-4e89-9bf1-bcbfec6e0eac
  configuration:
  configurationBucket: dataproc-edc9d85f-12f9-4905-ba4c-eaa8dfac5824-us
  ...
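
If you only need the bucket name, for example in a script, you can ask gcloud to print just that field. The field path below mirrors the output shown above, but it may differ between Cloud SDK versions:

gcloud dataproc clusters describe cluster-name --format='value(configuration.configurationBucket)'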

Sharing the data gathered by diagnose

You can share the archive generated by the diagnose command in two ways:

  1. Download the file from Cloud Storage, then share the downloaded archive.
  2. Change the permissions on the archive to allow other Google Cloud Platform users or projects to access the file.

For example, the following command grants read permission on the diagnostic archive to test-project:

gsutil -m acl ch -g test-project:R path-to-archive
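
If you would rather share the archive with a single user than with a whole group or project, gsutil can also grant read access to an individual account (the email address below is a placeholder):

gsutil acl ch -u someone@example.com:R path-to-archive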

Items included in diagnose command output

The diagnose command includes the following configuration files, logs, and outputs from your cluster in an archive file. The archive file is placed in the Cloud Storage bucket associated with your Cloud Dataproc cluster, as discussed above.
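
If you have downloaded the archive, you can list its contents without extracting it to confirm which of the items below were collected:

tar -tf diagnostic.tar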

Daemon and state information

Command executed                               Location in archive
yarn node -list -all                           /system/yarn-nodes.log
hdfs dfsadmin -report -live -decommissioning   /system/hdfs-nodes.log
service --status-all                           /system/service.log
df -h                                          /system/df.log
ps aux                                         /system/ps.log
free -m                                        /system/free.log
netstat -anp                                   /system/netstat.log
sysctl -a                                      /system/sysctl.log
uptime                                         /system/uptime.log
cat /proc/sys/fs/file-nr                       /system/fs-file-nr.log
ping -c 1                                      /system/cluster-ping.log

Logfiles

Item(s) included: All logs in /var/log with the following prefixes in their filenames: gcs, google, gcdp, hadoop, hdfs, hive, spark, syslog, yarn.
Location in archive: Files are placed in the archive's logs folder and keep their original filenames.

Item(s) included: Cloud Dataproc node startup logs for each node (master and worker) in your cluster.
Location in archive: Files are placed in the archive's node_startup folder, which contains a separate sub-folder for each machine in the cluster.

Configuration files

Item(s) included: All files in /etc/hadoop/conf/.
Location in archive: Files are placed in the archive's hadoop_conf folder and keep their original filenames.