Apache Hadoop

This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from Apache Hadoop. This document shows you how to do the following:

  • Set up the exporter for Hadoop to report metrics.
  • Configure a PodMonitoring resource for Managed Service for Prometheus to collect the exported metrics.
  • Access a dashboard in Cloud Monitoring to view the metrics.
  • Configure alerting rules to monitor the metrics.

These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the source repository for the JMX exporter for installation information.

These instructions are provided as an example and are expected to work in most Kubernetes environments. If you have trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend that you consult open-source documentation for support.

For information about Hadoop, see Apache Hadoop.

Prerequisites

To collect metrics from Hadoop by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:

  • Your cluster must be running Google Kubernetes Engine version 1.21.4-gke.300 or later.
  • You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection.

  • To use dashboards available in Cloud Monitoring for the Hadoop integration, you must use jmx-exporter version 0.17.0 or later.

    For more information about available dashboards, see View dashboards.

Ensure that the values of the port and matchLabels fields in the following configurations match those of the Hadoop pods you want to monitor. NameNodes and DataNodes must be configured to accept remote JMX connections. You can configure remote JMX by setting the HDFS_NAMENODE_OPTS and HDFS_DATANODE_OPTS environment variables, as described in the Hadoop Unix Shell Guide.
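For example, the following hadoop-env.sh settings enable remote JMX with the same flags used in the example in this document; the NameNode port 1026 matches that example, and the DataNode port 1027 is an illustrative choice:

export HDFS_NAMENODE_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=1026"
export HDFS_DATANODE_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=1027"

Because these flags disable JMX authentication and SSL, do not expose the JMX ports outside the pod.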

Install the Hadoop exporter

We recommend that you install the Hadoop exporter, jmx-exporter, as a sidecar to your Hadoop workload. For information about using sidecar containers, see Extend applications on Kubernetes with multi-container pods.

To install jmx-exporter as a sidecar to Hadoop, modify your Hadoop configuration as shown in the following example:

# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: hadoop-exporter
+ data:
+   config.yaml: |
+     hostPort: localhost:1026
+     lowercaseOutputName: true
+     lowercaseOutputLabelNames: true
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hadoop-hdfs
spec:
  serviceName: hadoop-hdfs
  selector:
    matchLabels:
+     app.kubernetes.io/name: hadoop
  template:
    metadata:
      labels:
+       app.kubernetes.io/name: hadoop
    spec:
      containers:
      - name: hadoop-hdfs
        image: "farberg/apache-hadoop:3.3.2"
+       env:
+         - name: HDFS_NAMENODE_OPTS
+           value: "-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=1026"
        command:
        - "/bin/bash"
        - "/tmp/hadoop-config/bootstrap.sh"
        - "-d"
+       ports:
+       - containerPort: 1026
+         name: jmx
+     - name: exporter
+       image: bitnami/jmx-exporter:0.17.0
+       command:
+         - java
+         - -jar
+         - jmx_prometheus_httpserver.jar
+       args:
+         - "9900"
+         - config.yaml
+       ports:
+       - containerPort: 9900
+         name: prometheus
+       volumeMounts:
+       - mountPath: /opt/bitnami/jmx-exporter/config.yaml
+         subPath: config.yaml
+         name: hadoop-exporter
+     volumes:
+     - name: hadoop-exporter
+       configMap:
+         name: hadoop-exporter
+         items:
+         - key: config.yaml
+           path: config.yaml

You must add any lines preceded by the + symbol to your configuration.

These instructions are based on changes made to a Helm chart; you can download and modify the chart's templates. The preceding example assumes that everything is in a single YAML file.
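For example, you can render a chart's templates into a single file and then edit that file; the release and chart names here are placeholders:

helm template RELEASE_NAME CHART_NAME > hadoop-hdfs.yaml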

To apply configuration changes from a local file, run the following command:

kubectl apply -n NAMESPACE_NAME -f FILE_NAME

You can also use Terraform to manage your configurations.
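For example, the following is a minimal sketch that uses the kubernetes_manifest resource of the Terraform Kubernetes provider; the file name is illustrative, and yamldecode assumes a single-document YAML file:

resource "kubernetes_manifest" "hadoop_exporter" {
  manifest = yamldecode(file("${path.module}/hadoop-exporter.yaml"))
}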

Define a PodMonitoring resource

For target discovery, the Managed Service for Prometheus Operator requires a PodMonitoring resource that corresponds to the Hadoop exporter in the same namespace.

You can use the following PodMonitoring configuration:

# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: hadoop
  labels:
    app.kubernetes.io/name: hadoop
    app.kubernetes.io/part-of: google-cloud-managed-prometheus
spec:
  endpoints:
  - port: prometheus
    scheme: http
    interval: 30s
    path: /metrics
  selector:
    matchLabels:
      app.kubernetes.io/name: hadoop

Ensure that the label selectors and the port match the selectors and port used in Install the Hadoop exporter.
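After you apply the PodMonitoring resource, you can confirm that it was created; the resource name hadoop comes from the preceding example:

kubectl get podmonitoring hadoop -n NAMESPACE_NAME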

To apply configuration changes from a local file, run the following command:

kubectl apply -n NAMESPACE_NAME -f FILE_NAME

You can also use Terraform to manage your configurations.

Define rules and alerts

You can use the following Rules configuration to define alerts on your Hadoop metrics:

# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: monitoring.googleapis.com/v1
kind: Rules
metadata:
  name: hadoop-rules
  labels:
    app.kubernetes.io/component: rules
    app.kubernetes.io/name: hadoop-rules
    app.kubernetes.io/part-of: google-cloud-managed-prometheus
spec:
  groups:
  - name: hadoop
    interval: 30s
    rules:
    - alert: HadoopDown
      annotations:
        description: |-
          Hadoop instance is down
            VALUE = {{ $value }}
            LABELS: {{ $labels }}
        summary: Hadoop down (instance {{ $labels.instance }})
      expr: hadoop_namenode_numdeaddatanodes > 0
      for: 5m
      labels:
        severity: critical
    - alert: HadoopLowAvailableCapacity
      annotations:
        description: |-
          Hadoop low available capacity
            VALUE = {{ $value }}
            LABELS: {{ $labels }}
        summary: Hadoop low available capacity (instance {{ $labels.instance }})
      expr: (hadoop_namenode_capacityused / hadoop_namenode_capacitytotal) > 0.8
      for: 5m
      labels:
        severity: critical
    - alert: HadoopVolumeFailure
      annotations:
        description: |-
          Hadoop volume failure
            VALUE = {{ $value }}
            LABELS: {{ $labels }}
        summary: Hadoop volume failure (instance {{ $labels.instance }})
      expr: hadoop_namenode_volumefailurestotal > 0
      for: 5m
      labels:
        severity: critical

To apply configuration changes from a local file, run the following command:

kubectl apply -n NAMESPACE_NAME -f FILE_NAME

You can also use Terraform to manage your configurations.

For more information about applying rules to your cluster, see Managed rule evaluation and alerting.

You can adjust the alert thresholds to suit your application.
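For example, to fire the HadoopLowAvailableCapacity alert when used capacity exceeds 90% instead of 80%, change the expression of that rule as follows:

expr: (hadoop_namenode_capacityused / hadoop_namenode_capacitytotal) > 0.9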

Verify the configuration

You can use Metrics Explorer to verify that you correctly configured the Hadoop exporter. It might take one or two minutes for Cloud Monitoring to ingest your metrics.

To verify that the metrics are ingested, do the following:

  1. In the navigation panel of the Google Cloud console, select Monitoring, and then select Metrics explorer:

    Go to Metrics explorer

  2. In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
  3. Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
  4. Enter and run the following query:
    up{job="hadoop", cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}

View dashboards

The Cloud Monitoring integration includes the Hadoop Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.

To view an installed dashboard, do the following:

  1. In the navigation panel of the Google Cloud console, select Monitoring, and then select Dashboards:

    Go to Dashboards

  2. Select the Dashboard List tab.
  3. Choose the Integrations category.
  4. Click the name of the dashboard, for example, Hadoop Prometheus Overview.

To view a static preview of the dashboard, do the following:

  1. In the navigation panel of the Google Cloud console, select Monitoring, and then select Integrations:

    Go to Integrations

  2. Click the Kubernetes Engine deployment-platform filter.
  3. Locate the Apache Hadoop integration and click View Details.
  4. Select the Dashboards tab.

Troubleshooting

For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems.