The Apache Hadoop integration collects name-node metrics related to storage, such as capacity utilization, file accesses, and blocks. The integration also collects Hadoop logs and parses them into a JSON payload. The result includes fields for source, level, and message.
For more information about Hadoop, see the Apache Hadoop documentation.
Prerequisites
To collect Hadoop telemetry, you must install the Ops Agent:
- For metrics, install version 2.11.0 or higher.
- For logs, install version 2.11.0 or higher.
This integration supports Hadoop versions 2.10.x, 3.2.x, and 3.3.x.
Configure your Hadoop instance
To expose a JMX endpoint, you must set the com.sun.management.jmxremote.port system property when starting the JVM. We also recommend setting the com.sun.management.jmxremote.rmi.port system property to the same port. To expose a JMX endpoint remotely, you must also set the java.rmi.server.hostname system property.
By default, these properties are set in a Hadoop deployment's hadoop-env.sh file.
To set system properties by using command-line arguments, prepend the property name with -D when starting the JVM. For example, to set com.sun.management.jmxremote.port to port 8004, specify the following when starting the JVM:
-Dcom.sun.management.jmxremote.port=8004
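For example, on a NameNode host you might append JVM options like the following to hadoop-env.sh. This is an illustrative sketch only: Hadoop 3.x reads HDFS_NAMENODE_OPTS, while 2.x releases use HADOOP_NAMENODE_OPTS instead, and the authentication and SSL settings shown should be adapted to your own security requirements.

```sh
# Illustrative hadoop-env.sh additions; adjust the variable name and flags for your deployment.
# Hadoop 3.x reads HDFS_NAMENODE_OPTS; Hadoop 2.x uses HADOOP_NAMENODE_OPTS instead.
export HDFS_NAMENODE_OPTS="$HDFS_NAMENODE_OPTS \
  -Dcom.sun.management.jmxremote.port=8004 \
  -Dcom.sun.management.jmxremote.rmi.port=8004 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=127.0.0.1"
```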
Configure the Ops Agent for Hadoop
Following the guide to Configure the Ops Agent, add the required elements to collect telemetry from Hadoop instances, and restart the agent.
Example configuration
The following commands create the configuration to collect and ingest telemetry for Hadoop and restart the Ops Agent.
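A minimal sketch of such a configuration, assuming the Ops Agent's default configuration path of /etc/google-cloud-ops-agent/config.yaml and the default JMX endpoint of localhost:8004, is shown below; adjust the endpoint and any receiver fields to match your deployment.

```sh
# Back up the existing configuration file so that existing settings are not lost.
sudo cp /etc/google-cloud-ops-agent/config.yaml /etc/google-cloud-ops-agent/config.yaml.bak

# Write a configuration that enables the hadoop logging and metrics receivers.
sudo tee /etc/google-cloud-ops-agent/config.yaml > /dev/null << EOF
logging:
  receivers:
    hadoop:
      type: hadoop
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop
metrics:
  receivers:
    hadoop:
      type: hadoop
      endpoint: localhost:8004
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop
EOF

# Restart the agent so that it picks up the new configuration.
sudo systemctl restart google-cloud-ops-agent
```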
Configure logs collection
To ingest logs from Hadoop, you must create a receiver for the logs that Hadoop produces and then create a pipeline for the new receiver.
To configure a receiver for your hadoop logs, specify the following fields:
Field | Default | Description |
---|---|---|
exclude_paths | | A list of filesystem path patterns to exclude from the set matched by include_paths. |
include_paths | [/opt/hadoop/logs/hadoop-*.log, /opt/hadoop/logs/yarn-*.log] | A list of filesystem paths to read by tailing each file. A wildcard (*) can be used in the paths. |
record_log_file_path | false | If set to true, then the path to the specific file from which the log record was obtained appears in the output log entry as the value of the agent.googleapis.com/log_file_path label. When using a wildcard, only the path of the file from which the record was obtained is recorded. |
type | | This value must be hadoop. |
wildcard_refresh_interval | 60s | The interval at which wildcard file paths in include_paths are refreshed. Given as a time duration, for example 30s or 2m. This property might be useful under high logging throughputs where log files are rotated faster than the default interval. |
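For example, if your deployment writes Hadoop logs somewhere other than the default locations, you can override include_paths in the receiver. The following sketch uses a hypothetical /var/log/hadoop directory; substitute your own paths.

```yaml
logging:
  receivers:
    hadoop:
      type: hadoop
      # Hypothetical non-default log location; replace with your own paths.
      include_paths:
        - /var/log/hadoop/hadoop-*.log
        - /var/log/hadoop/yarn-*.log
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop
```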
What is logged
The logName is derived from the receiver IDs specified in the configuration. Detailed fields inside the LogEntry are as follows.
The hadoop logs contain the following fields in the LogEntry:
Field | Type | Description |
---|---|---|
jsonPayload.message | string | Log message |
jsonPayload.severity | string | Log entry level |
jsonPayload.source | string | The source Java class of the log entry |
severity | string (LogSeverity) | Log entry level (translated). |
Configure metrics collection
To ingest metrics from Hadoop, you must create a receiver for the metrics that Hadoop produces and then create a pipeline for the new receiver.
This receiver does not support the use of multiple instances in the configuration, for example, to monitor multiple endpoints. All such instances write to the same time series, and Cloud Monitoring has no way to distinguish among them.
To configure a receiver for your hadoop metrics, specify the following fields:
Field | Default | Description |
---|---|---|
collect_jvm_metrics | true | Configures the receiver to also collect the supported JVM metrics. |
collection_interval | 60s | A time duration value, such as 30s or 5m. |
endpoint | localhost:8004 | The JMX Service URL or host and port used to construct the service URL. This value must be in the form of service:jmx:<protocol>:<sap> or host:port. Values in host:port form are used to create a service URL of service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi. |
password | | The configured password if JMX is configured to require authentication. |
type | | This value must be hadoop. |
username | | The configured username if JMX is configured to require authentication. |
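For example, a metrics receiver that supplies JMX credentials and keeps the default polling interval might look like the following sketch; the username and password values are placeholders, and the endpoint should match the JMX port you configured on the Hadoop instance.

```yaml
metrics:
  receivers:
    hadoop:
      type: hadoop
      # Endpoint must match the JMX port set in hadoop-env.sh; credentials are placeholders.
      endpoint: localhost:8004
      collection_interval: 60s
      username: jmx_user
      password: jmx_password
  service:
    pipelines:
      hadoop:
        receivers:
          - hadoop
```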
What is monitored
The following table provides the list of metrics that the Ops Agent collects from the Hadoop instance.
Metric type | Kind, Type | Monitored resources | Labels |
---|---|---|---|
workload.googleapis.com/hadoop.name_node.block.corrupt | GAUGE, INT64 | gce_instance | node_name |
workload.googleapis.com/hadoop.name_node.block.count | GAUGE, INT64 | gce_instance | node_name |
workload.googleapis.com/hadoop.name_node.block.missing | GAUGE, INT64 | gce_instance | node_name |
workload.googleapis.com/hadoop.name_node.capacity.limit | GAUGE, INT64 | gce_instance | node_name |
workload.googleapis.com/hadoop.name_node.capacity.usage | GAUGE, INT64 | gce_instance | node_name |
workload.googleapis.com/hadoop.name_node.data_node.count | GAUGE, INT64 | gce_instance | node_name, state |
workload.googleapis.com/hadoop.name_node.file.load | GAUGE, INT64 | gce_instance | node_name |
workload.googleapis.com/hadoop.name_node.volume.failed | GAUGE, INT64 | gce_instance | node_name |
Verify the configuration
This section describes how to verify that you correctly configured the Hadoop receiver. It might take one or two minutes for the Ops Agent to begin collecting telemetry.
To verify that Hadoop logs are being sent to Cloud Logging, do the following:
- In the Google Cloud console, go to the Logs Explorer page:
  If you use the search bar to find this page, then select the result whose subheading is Logging.
- Enter the following query in the editor, and then click Run query:
  resource.type="gce_instance" log_id("hadoop")
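If you prefer to check from the command line instead, an equivalent query with the gcloud CLI (assuming it is installed and authenticated against the same project) looks like this:

```sh
# List a few recent Hadoop log entries from Cloud Logging.
gcloud logging read 'resource.type="gce_instance" AND log_id("hadoop")' --limit=5
```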
To verify that Hadoop metrics are being sent to Cloud Monitoring, do the following:
- In the Google Cloud console, go to the Metrics explorer page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
- Verify that MQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
- Enter the following query in the editor, and then click Run query:
fetch gce_instance | metric 'workload.googleapis.com/hadoop.name_node.capacity.usage' | every 1m
View dashboard
To view your Hadoop metrics, you must have a chart or dashboard configured. The Hadoop integration includes one or more dashboards for you. Any dashboards are automatically installed after you configure the integration and the Ops Agent has begun collecting metric data.
You can also view static previews of dashboards without installing the integration.
To view an installed dashboard, do the following:
- In the Google Cloud console, go to the Dashboards page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Select the Dashboard List tab, and then choose the Integrations category.
- Click the name of the dashboard you want to view.
If you have configured an integration but the dashboard has not been installed, then check that the Ops Agent is running. When there is no metric data for a chart in the dashboard, installation of the dashboard fails. After the Ops Agent begins collecting metrics, the dashboard is installed for you.
To view a static preview of the dashboard, do the following:
- In the Google Cloud console, go to the Integrations page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Click the Compute Engine deployment-platform filter.
- Locate the entry for Hadoop and click View Details.
- Select the Dashboards tab to see a static preview. If the dashboard is installed, then you can navigate to it by clicking View dashboard.
For more information about dashboards in Cloud Monitoring, see Dashboards and charts.
For more information about using the Integrations page, see Manage integrations.
Install alerting policies
Alerting policies instruct Cloud Monitoring to notify you when specified conditions occur. The Hadoop integration includes one or more alerting policies for you to use. You can view and install these alerting policies from the Integrations page in Monitoring.
To view the descriptions of available alerting policies and install them, do the following:
- In the Google Cloud console, go to the Integrations page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Locate the entry for Hadoop and click View Details.
- Select the Alerts tab. This tab provides descriptions of available alerting policies and provides an interface for installing them.
- Install alerting policies. Alerting policies need to know where to send notifications that the alert has been triggered, so they require information from you for installation.
To install alerting policies, do the following:
- From the list of available alerting policies, select those that you want to install.
- In the Configure notifications section, select one or more notification channels. You have the option to disable the use of notification channels, but if you do, then your alerting policies fire silently. You can check their status in Monitoring, but you receive no notifications.
  For more information about notification channels, see Manage notification channels.
- Click Create Policies.
For more information about alerting policies in Cloud Monitoring, see Introduction to alerting.
For more information about using the Integrations page, see Manage integrations.
What's next
For a walkthrough on how to use Ansible to install the Ops Agent, configure a third-party application, and install a sample dashboard, see the Install the Ops Agent to troubleshoot third-party applications video.