Apache Hadoop

The Apache Hadoop integration collects NameNode metrics related to storage, such as capacity utilization, file accesses, and blocks. The integration also collects Hadoop logs and parses them into a JSON payload. The result includes fields for source, level, and message.

For more information about Hadoop, see the Apache Hadoop documentation at hadoop.apache.org.

Prerequisites

To collect and ingest Hadoop logs and metrics, you must install Ops Agent version 2.11.0 or higher.

This receiver supports Apache Hadoop versions 2.10.x, 3.2.x, and 3.3.x.
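
To check which version your deployment is running, you can use the hadoop version command (this assumes the hadoop binary is on your PATH):

hadoop version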

Configure your Hadoop instance

To expose a JMX endpoint, you must set the com.sun.management.jmxremote.port system property when starting the JVM. We also recommend setting the com.sun.management.jmxremote.rmi.port system property to the same port. To expose a JMX endpoint remotely, you must also set the java.rmi.server.hostname system property.

By default, these properties are set in a Hadoop deployment's hadoop-env.sh file.
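
For example, you might add the following to hadoop-env.sh. This is a minimal sketch: HDFS_NAMENODE_OPTS applies to Hadoop 3.x (Hadoop 2.x uses HADOOP_NAMENODE_OPTS instead), port 8004 matches the receiver's default endpoint, and SSL and authentication are disabled here only to keep the example short.

# Expose JMX on port 8004 for the NameNode process.
# Hadoop 2.x: use HADOOP_NAMENODE_OPTS instead of HDFS_NAMENODE_OPTS.
export HDFS_NAMENODE_OPTS="${HDFS_NAMENODE_OPTS} \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=8004 \
  -Dcom.sun.management.jmxremote.rmi.port=8004 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"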

To set system properties by using command-line arguments, prepend the property name with -D when starting the JVM. For example, to set com.sun.management.jmxremote.port to port 8004, specify the following when starting the JVM:

-Dcom.sun.management.jmxremote.port=8004

Configure the Ops Agent for Hadoop

Following the guide for Configuring the Ops Agent, add the required elements to collect logs and metrics from your Hadoop instances, and restart the agent.

Example configuration

The following command creates the configuration file to collect and ingest logs and metrics for Hadoop, and then restarts the Ops Agent on Linux.

sudo tee /etc/google-cloud-ops-agent/config.yaml > /dev/null << EOF
logging:
  receivers:
    hadoop:
      type: hadoop
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop
metrics:
  receivers:
    hadoop:
      type: hadoop
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop
EOF
sudo service google-cloud-ops-agent restart
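
To confirm that the agent restarted without errors, you can check its service status (a quick sanity check; output formatting varies by distribution):

sudo systemctl status google-cloud-ops-agent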

Configure logs collection

To ingest logs from Hadoop, you must create receivers for the logs Hadoop produces and then create a pipeline for the new receivers.

To configure a receiver for your Hadoop logs, specify the following fields:

| Field | Default | Description |
| --- | --- | --- |
| type | | This value must be hadoop. |
| include_paths | [/opt/hadoop/logs/hadoop-*.log, /opt/hadoop/logs/yarn-*.log] | A list of filesystem paths to read by tailing each file. A wildcard (*) can be used in the paths. |
| exclude_paths | [] | A list of filesystem path patterns to exclude from the set matched by include_paths. |
| record_log_file_path | false | If set to true, then the path to the specific file from which the log record was obtained appears in the output log entry as the value of the agent.googleapis.com/log_file_path label. When using a wildcard, only the path of the file from which the record was obtained is recorded. |
| wildcard_refresh_interval | 60s | The interval at which wildcard file paths in include_paths are refreshed. Specified as a time interval parsable by time.ParseDuration. Must be a multiple of 1s. |
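
For example, the following logging configuration overrides the defaults to record file paths and skip rotated logs; the exclude pattern is hypothetical and should match your own log-rotation scheme:

logging:
  receivers:
    hadoop:
      type: hadoop
      include_paths:
        - /opt/hadoop/logs/hadoop-*.log
        - /opt/hadoop/logs/yarn-*.log
      exclude_paths:
        - /opt/hadoop/logs/*.log.*
      record_log_file_path: true
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop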

What is logged

The logName of the Hadoop logs is derived from the receiver IDs specified in the configuration. Detailed fields inside the LogEntry are as follows.

| Field | Type | Description |
| --- | --- | --- |
| jsonPayload.source | string | The source Java class of the log entry. |
| jsonPayload.message | string | Log message. |
| severity | string (LogSeverity) | Log entry level (translated). |
| timestamp | string (Timestamp) | Time the entry was logged. |

Fields that are blank or missing are not present in the log entry.
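
For illustration, a parsed entry might look like the following; the values are hypothetical, not taken from a real deployment:

{
  "jsonPayload": {
    "source": "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
    "message": "Roll Edit Log from 10.128.0.2"
  },
  "severity": "INFO",
  "timestamp": "2024-01-01T12:00:00Z"
}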

Configure metrics collection

To collect metrics from Hadoop, you must create a receiver for Hadoop metrics and then create a pipeline for the new receiver. To configure a receiver for your Hadoop metrics, specify the following fields:

| Field | Default | Description |
| --- | --- | --- |
| type | | This value must be hadoop. |
| endpoint | localhost:8004 | The JMX Service URL or host and port used to construct the service URL. This value must be in the form of service:jmx:<protocol>:<sap> or host:port. Values in host:port form are used to create a service URL of service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi. |
| collect_jvm_metrics | true | Configures the receiver to also collect the supported JVM metrics. |
| username | | The configured username if JMX is configured to require authentication. |
| password | | The configured password if JMX is configured to require authentication. |
| collection_interval | 60s | A time.Duration value, such as 30s or 5m. |
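
For example, a receiver that polls a remote NameNode every 30 seconds over authenticated JMX might look like the following; the hostname and credentials are placeholders:

metrics:
  receivers:
    hadoop:
      type: hadoop
      endpoint: namenode.example.com:8004
      collection_interval: 30s
      username: monitoring
      password: MONITORING_PASSWORD
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop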

What is monitored

The following table provides the list of metrics that the Ops Agent collects from the Hadoop instance.

| Metric type | Kind, Type | Monitored resources | Labels |
| --- | --- | --- | --- |
| workload.googleapis.com/hadoop.name_node.block.corrupt | GAUGE, INT64 | gce_instance | node_name |
| workload.googleapis.com/hadoop.name_node.block.count | GAUGE, INT64 | gce_instance | node_name |
| workload.googleapis.com/hadoop.name_node.block.missing | GAUGE, INT64 | gce_instance | node_name |
| workload.googleapis.com/hadoop.name_node.capacity.limit | GAUGE, INT64 | gce_instance | node_name |
| workload.googleapis.com/hadoop.name_node.capacity.usage | GAUGE, INT64 | gce_instance | node_name |
| workload.googleapis.com/hadoop.name_node.data_node.count | GAUGE, INT64 | gce_instance | node_name, state |
| workload.googleapis.com/hadoop.name_node.file.load | GAUGE, INT64 | gce_instance | node_name |
| workload.googleapis.com/hadoop.name_node.volume.failed | GAUGE, INT64 | gce_instance | node_name |

Verify the configuration

You can use the Logs Explorer and Metrics Explorer to verify that you correctly configured the Hadoop receiver. It might take one or two minutes for the Ops Agent to begin collecting logs and metrics.

To verify the logs are ingested, go to the Logs Explorer and run the following query to view the Hadoop logs:

resource.type="gce_instance"
logName=("projects/PROJECT_ID/logs/hadoop")
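
You can run the same query from the command line with gcloud; replace PROJECT_ID with your project ID:

gcloud logging read 'resource.type="gce_instance" AND logName="projects/PROJECT_ID/logs/hadoop"' --limit=10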


To verify the metrics are ingested, go to Metrics Explorer and run the following query in the MQL tab:

fetch gce_instance
| metric 'workload.googleapis.com/hadoop.name_node.block.count'
| every 1m

What's next

For a walkthrough on how to use Ansible to install the Ops Agent, configure a third-party application, and install a sample dashboard, see the Install the Ops Agent to troubleshoot third-party applications video.