Elasticsearch

Elasticsearch is an open-source search server based on the Lucene search library. It runs in a Java virtual machine on a variety of operating systems. The elasticsearch receiver collects node- and cluster-level telemetry from your Elasticsearch instances.

For more information about Elasticsearch, see the Elasticsearch documentation.

Prerequisites

To collect Elasticsearch telemetry, you must install the Ops Agent:

  • For metrics, install version 2.21.0 or higher.
  • For logs, install version 2.9.0 or higher.

This integration supports Elasticsearch version 7.9+.
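
To confirm both versions before you continue, you can run checks like the following. This is a minimal sketch: it assumes the default google-cloud-ops-agent package name and that Elasticsearch listens on http://localhost:9200 without TLS; adjust as needed.

# Ops Agent version (Debian/Ubuntu, or RPM-based distributions):
dpkg-query -W google-cloud-ops-agent 2>/dev/null || rpm -q google-cloud-ops-agent

# Elasticsearch version; add -u USER:PASSWORD if security features are enabled.
curl -s http://localhost:9200 | grep '"number"'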

Configure your Elasticsearch instance

If you enable Elasticsearch security features, you must configure a user with the monitor or manage cluster privilege.
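
For example, you can create a role that grants only the monitor cluster privilege and a dedicated user for the Ops Agent by using the Elasticsearch security APIs. The following is a minimal sketch: the ops_agent_monitor role, the ops_agent user, and the password are placeholders, and the commands assume the REST API is reachable on localhost:9200 as the elastic superuser.

# Create a role that grants only the "monitor" cluster privilege.
curl -u elastic -X POST "http://localhost:9200/_security/role/ops_agent_monitor" \
  -H 'Content-Type: application/json' \
  -d '{"cluster": ["monitor"]}'

# Create a user for the Ops Agent and assign the role to it.
curl -u elastic -X POST "http://localhost:9200/_security/user/ops_agent" \
  -H 'Content-Type: application/json' \
  -d '{"password": "CHANGE_ME", "roles": ["ops_agent_monitor"]}'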

Configure the Ops Agent for Elasticsearch

Following the guide to Configure the Ops Agent, add the required elements to collect telemetry from Elasticsearch instances, and restart the agent.

Example configuration

The following commands create the configuration to collect and ingest telemetry for Elasticsearch and restart the Ops Agent.

# Configures the Ops Agent to collect telemetry from the app and restarts the Ops Agent.

set -e

# Create a backup of the existing file so existing configurations are not lost.
sudo cp /etc/google-cloud-ops-agent/config.yaml /etc/google-cloud-ops-agent/config.yaml.bak

# Configure the Ops Agent.
sudo tee /etc/google-cloud-ops-agent/config.yaml > /dev/null << EOF
metrics:
  receivers:
    elasticsearch:
      type: elasticsearch
  service:
    pipelines:
      elasticsearch:
        receivers:
          - elasticsearch

logging:
  receivers:
    elasticsearch_json:
      type: elasticsearch_json
    elasticsearch_gc:
      type: elasticsearch_gc
  service:
    pipelines:
      elasticsearch:
        receivers:
          - elasticsearch_json
          - elasticsearch_gc
EOF

sudo service google-cloud-ops-agent restart
sleep 60
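
After the restart, you can optionally confirm that the agent and its subagents are running; for example (a sketch for systemd-based hosts):

sudo systemctl status "google-cloud-ops-agent*"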

Configure logs collection

To ingest logs from Elasticsearch, you must create a receiver for the logs that Elasticsearch produces and then create a pipeline for the new receiver.

To configure a receiver for your elasticsearch_json logs, specify the following fields:

Field | Default | Description
exclude_paths | | A list of filesystem path patterns to exclude from the set matched by include_paths.
include_paths | [/var/log/elasticsearch/*_server.json, /var/log/elasticsearch/*_deprecation.json, /var/log/elasticsearch/*_index_search_slowlog.json, /var/log/elasticsearch/*_index_indexing_slowlog.json, /var/log/elasticsearch/*_audit.json] | A list of filesystem paths to read by tailing each file. A wildcard (*) can be used in the paths.
record_log_file_path | false | If set to true, then the path to the specific file from which the log record was obtained appears in the output log entry as the value of the agent.googleapis.com/log_file_path label. When using a wildcard, only the path of the file from which the record was obtained is recorded.
type | | This value must be elasticsearch_json.
wildcard_refresh_interval | 60s | The interval at which wildcard file paths in include_paths are refreshed. Given as a time duration, for example 30s or 2m. This property might be useful under high logging throughputs where log files are rotated faster than the default interval.
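
For example, the following sketch narrows collection to the server log, excludes a hypothetical archived file, and records the source file path; the paths shown are illustrative.

logging:
  receivers:
    elasticsearch_json:
      type: elasticsearch_json
      include_paths:
        - /var/log/elasticsearch/*_server.json
      exclude_paths:
        - /var/log/elasticsearch/old-cluster_server.json
      record_log_file_path: true
  service:
    pipelines:
      elasticsearch:
        receivers:
          - elasticsearch_json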

To configure a receiver for your elasticsearch_gc logs, specify the following fields:

Field | Default | Description
exclude_paths | | A list of filesystem path patterns to exclude from the set matched by include_paths.
include_paths | [/var/log/elasticsearch/gc.log] | A list of filesystem paths to read by tailing each file. A wildcard (*) can be used in the paths.
record_log_file_path | false | If set to true, then the path to the specific file from which the log record was obtained appears in the output log entry as the value of the agent.googleapis.com/log_file_path label. When using a wildcard, only the path of the file from which the record was obtained is recorded.
type | | This value must be elasticsearch_gc.
wildcard_refresh_interval | 60s | The interval at which wildcard file paths in include_paths are refreshed. Given as a time duration, for example 30s or 2m. This property might be useful under high logging throughputs where log files are rotated faster than the default interval.
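
Similarly, a short sketch for the GC log receiver when garbage-collection logs are written to a non-default, rotated location; the path and refresh interval are illustrative.

logging:
  receivers:
    elasticsearch_gc:
      type: elasticsearch_gc
      include_paths:
        - /data/elasticsearch/logs/gc.log*
      wildcard_refresh_interval: 30s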

What is logged

The logName is derived from the receiver IDs specified in the configuration. Detailed fields inside the LogEntry are as follows.

The elasticsearch_json logs contain the following fields in the LogEntry:

Field | Type | Description
jsonPayload.cluster | object | The cluster emitting the log record
jsonPayload.cluster.name | string | The name of the cluster emitting the log record
jsonPayload.cluster.uuid | string | The UUID of the cluster emitting the log record
jsonPayload.component | string | The component of Elasticsearch that emitted the log
jsonPayload.level | string | Log entry level
jsonPayload.message | string | Log message
jsonPayload.node | object | The node emitting the log record
jsonPayload.node.name | string | The name of the node emitting the log record
jsonPayload.node.uuid | string | The UUID of the node emitting the log record
jsonPayload.type | string | The type of log, indicating which log the record came from (for example, server indicates that the LogEntry came from the server log)
severity | string (LogSeverity) | Log entry level (translated).
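
For example, you can combine these fields in a Logs Explorer query to show only error-level entries from the server log; a sketch using the fields documented above:

    resource.type="gce_instance"
    log_id("elasticsearch_json")
    jsonPayload.type="server"
    severity>=ERROR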

The elasticsearch_gc logs contain the following fields in the LogEntry:

Field | Type | Description
jsonPayload.gc_run | number | The run of the garbage collector
jsonPayload.message | string | Log message
jsonPayload.type | string | The type of the log record
severity | string (LogSeverity) | Log entry level (translated).

Configure metrics collection

To ingest metrics from Elasticsearch, you must create a receiver for the metrics that Elasticsearch produces and then create a pipeline for the new receiver.

This receiver does not support the use of multiple instances in the configuration, for example, to monitor multiple endpoints. All such instances write to the same time series, and Cloud Monitoring has no way to distinguish among them.

To configure a receiver for your elasticsearch metrics, specify the following fields:

Field | Default | Description
ca_file | | Path to the CA certificate. As a client, this verifies the server certificate. If empty, the receiver uses the system root CA.
cert_file | | Path to the TLS certificate to use for mTLS-required connections.
collection_interval | 60s | A time duration value, such as 30s or 5m.
endpoint | http://localhost:9200 | The base URL for the Elasticsearch REST API.
insecure | true | Sets whether or not to use a secure TLS connection. If set to false, then TLS is enabled.
insecure_skip_verify | false | Sets whether or not to skip verifying the certificate. If insecure is set to true, then the insecure_skip_verify value is not used.
key_file | | Path to the TLS key to use for mTLS-required connections.
password | | Password for authentication with Elasticsearch. Required if username is set.
type | | This value must be elasticsearch.
username | | Username for authentication with Elasticsearch. Required if password is set.
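
For example, the following sketch points the receiver at a TLS-enabled cluster that requires authentication; the endpoint, credentials, and certificate path are placeholders.

metrics:
  receivers:
    elasticsearch:
      type: elasticsearch
      endpoint: https://es.example.internal:9200
      username: ops_agent
      password: CHANGE_ME
      insecure: false
      ca_file: /etc/elasticsearch/certs/ca.crt
      collection_interval: 60s
  service:
    pipelines:
      elasticsearch:
        receivers:
          - elasticsearch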

What is monitored

The following table provides the list of metrics that the Ops Agent collects from the Elasticsearch instance.

Metric type | Kind, Type | Monitored resources | Labels
workload.googleapis.com/elasticsearch.breaker.memory.estimated | GAUGE, INT64 | gce_instance | name
workload.googleapis.com/elasticsearch.breaker.memory.limit | GAUGE, INT64 | gce_instance | name
workload.googleapis.com/elasticsearch.breaker.tripped | CUMULATIVE, INT64 | gce_instance | name
workload.googleapis.com/elasticsearch.cluster.data_nodes | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.cluster.health | GAUGE, INT64 | gce_instance | status
workload.googleapis.com/elasticsearch.cluster.in_flight_fetch | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.cluster.nodes | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.cluster.pending_tasks | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.cluster.published_states.differences | GAUGE, INT64 | gce_instance | state
workload.googleapis.com/elasticsearch.cluster.published_states.full | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.cluster.shards | GAUGE, INT64 | gce_instance | state
workload.googleapis.com/elasticsearch.cluster.state_queue | GAUGE, INT64 | gce_instance | state
workload.googleapis.com/elasticsearch.cluster.state_update.count | CUMULATIVE, INT64 | gce_instance | state
workload.googleapis.com/elasticsearch.cluster.state_update.time | CUMULATIVE, INT64 | gce_instance | state, type
workload.googleapis.com/elasticsearch.indexing_pressure.memory.limit | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.indexing_pressure.memory.total.primary_rejections | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.indexing_pressure.memory.total.replica_rejections | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.memory.indexing_pressure | GAUGE, INT64 | gce_instance | stage
workload.googleapis.com/elasticsearch.node.cache.count | GAUGE, INT64 | gce_instance | type
workload.googleapis.com/elasticsearch.node.cache.evictions | CUMULATIVE, INT64 | gce_instance | cache_name
workload.googleapis.com/elasticsearch.node.cache.memory.usage | GAUGE, INT64 | gce_instance | cache_name
workload.googleapis.com/elasticsearch.node.cluster.connections | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.cluster.io | CUMULATIVE, INT64 | gce_instance | direction
workload.googleapis.com/elasticsearch.node.disk.io.read | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.disk.io.write | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.documents | GAUGE, INT64 | gce_instance | state
workload.googleapis.com/elasticsearch.node.fs.disk.available | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.fs.disk.free | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.fs.disk.total | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.http.connections | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.ingest.documents | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.ingest.documents.current | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.ingest.operations.failed | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.open_files | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.operations.completed | CUMULATIVE, INT64 | gce_instance | operation
workload.googleapis.com/elasticsearch.node.operations.time | CUMULATIVE, INT64 | gce_instance | operation
workload.googleapis.com/elasticsearch.node.pipeline.ingest.documents.current | GAUGE, INT64 | gce_instance | name
workload.googleapis.com/elasticsearch.node.pipeline.ingest.documents.preprocessed | GAUGE, INT64 | gce_instance | name
workload.googleapis.com/elasticsearch.node.pipeline.ingest.operations.failed | CUMULATIVE, INT64 | gce_instance | name
workload.googleapis.com/elasticsearch.node.script.cache_evictions | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.script.compilation_limit_triggered | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.script.compilations | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.shards.data_set.size | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.shards.reserved.size | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.shards.size | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.thread_pool.tasks.finished | CUMULATIVE, INT64 | gce_instance | state, thread_pool_name
workload.googleapis.com/elasticsearch.node.thread_pool.tasks.queued | GAUGE, INT64 | gce_instance | thread_pool_name
workload.googleapis.com/elasticsearch.node.thread_pool.threads | GAUGE, INT64 | gce_instance | state, thread_pool_name
workload.googleapis.com/elasticsearch.node.translog.operations | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.translog.size | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.node.translog.uncommitted.size | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.os.cpu.load_avg.15m | GAUGE, DOUBLE | gce_instance |
workload.googleapis.com/elasticsearch.os.cpu.load_avg.1m | GAUGE, DOUBLE | gce_instance |
workload.googleapis.com/elasticsearch.os.cpu.load_avg.5m | GAUGE, DOUBLE | gce_instance |
workload.googleapis.com/elasticsearch.os.cpu.usage | GAUGE, INT64 | gce_instance |
workload.googleapis.com/elasticsearch.os.memory | GAUGE, INT64 | gce_instance | state

Verify the configuration

This section describes how to verify that you correctly configured the Elasticsearch receiver. It might take one or two minutes for the Ops Agent to begin collecting telemetry.

To verify that Elasticsearch logs are being sent to Cloud Logging, do the following:

  1. In the Google Cloud console, go to the Logs Explorer page:

    Go to Logs Explorer

    If you use the search bar to find this page, then select the result whose subheading is Logging.

  2. Enter the following query in the editor, and then click Run query:
    resource.type="gce_instance"
    (log_id("elasticsearch_json") OR log_id("elasticsearch_gc"))
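
You can also run the equivalent query from the command line; a sketch that assumes the gcloud CLI is installed and authenticated against the project that receives the logs:

gcloud logging read \
  'resource.type="gce_instance" AND (log_id("elasticsearch_json") OR log_id("elasticsearch_gc"))' \
  --limit=10 --freshness=1h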
    

To verify that Elasticsearch metrics are being sent to Cloud Monitoring, do the following:

  1. In the Google Cloud console, go to the Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
  3. Verify that MQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
  4. Enter the following query in the editor, and then click Run query:
    fetch gce_instance
    | metric 'workload.googleapis.com/elasticsearch.node.open_files'
    | every 1m
    

View dashboard

To view your Elasticsearch metrics, you must have a chart or dashboard configured. The Elasticsearch integration includes one or more dashboards for you. Any dashboards are automatically installed after you configure the integration and the Ops Agent has begun collecting metric data.

You can also view static previews of dashboards without installing the integration.

To view an installed dashboard, do the following:

  1. In the Google Cloud console, go to the Dashboards page:

    Go to Dashboards

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Select the Dashboard List tab, and then choose the Integrations category.
  3. Click the name of the dashboard you want to view.

If you have configured an integration but the dashboard has not been installed, then check that the Ops Agent is running. When there is no metric data for a chart in the dashboard, installation of the dashboard fails. After the Ops Agent begins collecting metrics, the dashboard is installed for you.

To view a static preview of the dashboard, do the following:

  1. In the Google Cloud console, go to the Integrations page:

    Go to Integrations

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Click the Compute Engine deployment-platform filter.
  3. Locate the entry for Elasticsearch and click View Details.
  4. Select the Dashboards tab to see a static preview. If the dashboard is installed, then you can navigate to it by clicking View dashboard.

For more information about dashboards in Cloud Monitoring, see Dashboards and charts.

For more information about using the Integrations page, see Manage integrations.

Install alerting policies

Alerting policies instruct Cloud Monitoring to notify you when specified conditions occur. The Elasticsearch integration includes one or more alerting policies for you to use. You can view and install these alerting policies from the Integrations page in Monitoring.

To view the descriptions of available alerting policies and install them, do the following:

  1. In the Google Cloud console, go to the Integrations page:

    Go to Integrations

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Locate the entry for Elasticsearch and click View Details.
  3. Select the Alerts tab. This tab provides descriptions of available alerting policies and provides an interface for installing them.
  4. Install alerting policies. Alerting policies need to know where to send notifications that the alert has been triggered, so they require information from you for installation. To install alerting policies, do the following:
    1. From the list of available alerting policies, select those that you want to install.
    2. In the Configure notifications section, select one or more notification channels. You have the option to disable the use of notification channels, but if you do, then your alerting policies fire silently. You can check their status in Monitoring, but you receive no notifications.

      For more information about notification channels, see Manage notification channels.

    3. Click Create Policies.

For more information about alerting policies in Cloud Monitoring, see Introduction to alerting.

For more information about using the Integrations page, see Manage integrations.

What's next

For a walkthrough on how to use Ansible to install the Ops Agent, configure a third-party application, and install a sample dashboard, see the Install the Ops Agent to troubleshoot third-party applications video.