Scenarios for exporting Cloud Logging: Elasticsearch

This scenario shows how to export selected logs from Logging to an Elasticsearch cluster. The scenario assumes there is an established Elasticsearch cluster on Elastic Cloud (Elasticsearch managed service) available for log ingestion.

If you are a Google Cloud customer, you can get started with Elasticsearch through Elastic Cloud on Google Cloud Marketplace, which has integrated billing through your Google Cloud account.

Elasticsearch is a distributed, RESTful search and analytics engine providing rich features and services, optimized for analyzing structured and unstructured logs, metrics, and telemetry. Elastic Cloud on Google Cloud provides industry-leading monitoring and support from both Elastic and Google. Get automatic backups, upgrades, and the latest security patches with little downtime or disruptions.

You can use a similar process for a self-managed Elastic Stack in your Google Cloud account.

This scenario is part of the series Design patterns for exporting Logging.


The Elastic Stack unifies all operational data and brings fast, reliable, and relevant search and real-time analytics to it. Beats, Elastic's lightweight data shippers, sit on the edge, or run serverless as a Cloud Function, and are used for collecting and ingesting data. They are the simplest way to ingest data into Elasticsearch to be visualized in Kibana.

Logstash remains a useful ingest tool if you want to collect data from multiple sources and significantly transform your data before ingesting it into Elasticsearch or exporting it to a variety of outputs. Beats and Logstash can be used together, or individually, to send Google Cloud log data to Elasticsearch by publishing those events to Pub/Sub.


Beats ship with prepackaged modules focused on monitoring, alerting, and reporting on logs and metrics. This document focuses on Filebeat because it serves as a good starting point in an overall observability solution; you can add other modules and integrations later.

Some of the more popular use cases for Beats include the following:

The following diagram shows an example of one basic architecture for exporting Cloud Logging data into Elasticsearch by using Beats. The Filebeat Google Cloud module collects audit, VPC flow, and firewall logs. Whether the environment contains containers, serverless functions, VMs, or apps, Cloud Logging sinks are configured to send the appropriately filtered data to a Pub/Sub topic, which Filebeat sends into Elasticsearch for ingesting and aggregating, to be searched and reported on within Kibana.

Example ingest flow from Google Cloud to Elastic Cloud.

Setting up the real-time logging export

In this section, you create the pipeline for real-time log export from Logging to Elasticsearch through Filebeat, by using Pub/Sub. You create a Pub/Sub topic to collect the relevant logging resources with refined filtering, then establish a sink service, and finally configure Filebeat.

Set up a Pub/Sub topic

Follow the instructions to set up a Pub/Sub topic that will receive your exported logs. Name the topic something like es-auditlogs-export. Topics also need a subscription, so for simplicity you can create a subscription with the same name. Use the default settings when you configure the subscription. You can set the Identity and Access Management (IAM) permissions later.
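If you prefer the command line, the same topic and subscription can be created with the gcloud CLI. A minimal sketch, guarded so it is a no-op on hosts without gcloud:

```shell
# Names used throughout this scenario.
TOPIC="es-auditlogs-export"
SUBSCRIPTION="es-auditlogs-export"

if command -v gcloud >/dev/null 2>&1; then
  # Create the topic, then a pull subscription with default settings.
  gcloud pubsub topics create "$TOPIC"
  gcloud pubsub subscriptions create "$SUBSCRIPTION" --topic="$TOPIC"
else
  echo "gcloud CLI not found; run these commands in Cloud Shell instead."
fi
```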

Turn on audit logging for all services

Data Access audit logs, except for BigQuery, are disabled by default. To enable all audit logs, follow the steps below.

You can configure audit logs in the Google Cloud console.

  1. In the Google Cloud console, go to the IAM & Admin menu.

    Go to IAM & Admin

  2. Click Audit Logs, and then click Default Audit Config.

  3. Ensure that the correct log types are selected—in this case, Admin Read, Data Read, and Data Write.

  4. Click Save.

Enabling Google Cloud Audit logs.

Optionally enable Google Cloud VPC flows and Firewall logs.

Refine the logging export with filters

After you set up an aggregated export or a logs export, you need to refine the logging filters to export audit logs, virtual machine–related logs, storage logs, and database logs.

The following logging filter includes the Admin Activity, Data Access, and System Event audit logs, and logs for specific resource types—in this case, the Compute Engine and Cloud Storage buckets, and the BigQuery resources.

logName:"projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Factivity" OR
logName:"projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Fsystem_event" OR
logName:"projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Fdata_access" OR
resource.type:gce OR
resource.type=gcs_bucket OR
resource.type=bigquery_resource

Replace PROJECT-ID with your Google Cloud project ID. You can view your project ID by running the command gcloud config get-value project.
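As a sketch of that substitution, you can build the filter in a shell variable and reuse the same string later for the gcloud sink command (the fallback value PROJECT-ID is a placeholder used when gcloud is unavailable):

```shell
# Resolve the project ID, falling back to a placeholder off-cloud.
if command -v gcloud >/dev/null 2>&1; then
  PROJECT_ID="$(gcloud config get-value project 2>/dev/null)"
else
  PROJECT_ID="PROJECT-ID"
fi

# Audit logs plus Compute Engine, Cloud Storage, and BigQuery resources.
LOG_FILTER="logName:\"projects/${PROJECT_ID}/logs/cloudaudit.googleapis.com\" OR resource.type:gce OR resource.type=gcs_bucket OR resource.type=bigquery_resource"
echo "$LOG_FILTER"
```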

Create a Sink service

Creating the Logs Router sink sends the correct logs to the Pub/Sub topic that Elasticsearch will ingest from. The key is to ensure that the correct filters are selected on the sink. For more information, see Best practices for working with Google Cloud Audit Logs.

To create the sink, follow these steps:

  1. In the Google Cloud console, go to the Operations Logging menu, and then click Logs Router.
  2. Make sure that the correct Google Cloud project is selected, and then click Create Sink.
  3. Select Cloud Pub/Sub topic and click Next.

    Create sink service.

  4. Create an advanced filter.

    Create advanced filter.

  5. Enter the following text in the filter box, deleting anything that the system may have added, such as resource.type="global", and replacing PROJECT-ID with the name of your project.

    logName="projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Factivity" OR
    "projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Fsystem_event" OR
    "projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Fdata_access" OR
    resource.type:"gce" OR resource.type="gcs_bucket" OR resource.type="bigquery_resource"

    This filter is a starting point for determining which logs are ingested into Elasticsearch. You can configure any number of other filters with this process.

  6. Click Submit Filter.

    Log viewer.

  7. Enter the Sink Name gcp_logging_sink_pubsub, and set the Sink Destination to the es-auditlogs-export topic created previously.

    Create log sink.

    After you click Create Sink, you are presented with a unique service account that has the right to write logs to the destination selected. Take note of this information because you need it for the next step.

The following example gcloud command creates a sink called gcp_logging_sink_pubsub for sending the correctly filtered audit logs to the es-auditlogs-export Pub/Sub topic. The sink includes all children projects and specifies filtering to select specific audit logs.

gcloud logging sinks create gcp_logging_sink_pubsub \
    pubsub.googleapis.com/projects/PROJECT-ID/topics/es-auditlogs-export \
    --include-children --organization=ORGANIZATION-ID \
    --log-filter='logName="projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Factivity" OR "projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Fsystem_event" OR "projects/PROJECT-ID/logs/cloudaudit.googleapis.com%2Fdata_access" OR resource.type:"gce" OR resource.type="gcs_bucket" OR resource.type="bigquery_resource"'

The command output is similar to the following:

Created [SINK-URL].
Please remember to grant `SERVICE-ACCOUNT` the Pub/Sub Publisher role on the topic.
More information about sinks can be found in the Logging export documentation.

The serviceAccount entry returned from the API call shows which identity must be added to the particular Pub/Sub topic as a publisher. This identity represents a Google Cloud service account that has been created for the export. Until you grant this identity publisher permissions to the destination topic, log entry exports from this sink will fail. For more information, see the next section or the documentation for Granting access for a resource.

Set IAM policy permissions for the Pub/Sub topic

By adding the service account to the es-auditlogs-export Pub/Sub topic with the Pub/Sub Publisher permissions, you grant the service account permission to publish to the topic.

To add the permissions to the service account, follow these steps:

  1. In the Google Cloud console, open the Cloud Pub/Sub Topics page:


  2. Click the es-auditlogs-export topic name.

  3. Click Show info panel, and then click Add Member to configure permissions.

  4. Make sure that the Pub/Sub Publisher permission is selected.

    Select Pub/Sub publisher permission.
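The same grant can be scripted with gcloud. A sketch, where the SERVICE_ACCOUNT value is a hypothetical example of the writer identity returned when you created the sink:

```shell
TOPIC="es-auditlogs-export"
# Replace with the sink's writer identity noted earlier (hypothetical value).
SERVICE_ACCOUNT="serviceAccount:p123456789012-0001@gcp-sa-logging.iam.gserviceaccount.com"

if command -v gcloud >/dev/null 2>&1; then
  # Allow the sink's service account to publish to the topic.
  gcloud pubsub topics add-iam-policy-binding "$TOPIC" \
      --member="$SERVICE_ACCOUNT" \
      --role="roles/pubsub.publisher"
else
  echo "gcloud CLI not found; grant the role in the console instead."
fi
```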

After you create the logging export by using this filter, log files begin to populate in the Pub/Sub topic in the configured project. You can confirm that the topic is receiving messages by using the Metrics Explorer in Cloud Monitoring. Using the following resource type and metric, observe the number of message-send operations over a brief period. If you have configured the export properly, you will see activity above 0 on the graph, as in the following screenshot.

  • Resource type: Cloud Pub/Sub Topic - pubsub_topic
  • Metric: Publish message operations - topic/send_message_operation_count
  • Filter: topic_id="es-auditlogs-export"

Metrics explorer.
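Besides Metrics Explorer, a quick sanity check is to pull a message directly from the subscription with gcloud (a sketch; pulling without acknowledging leaves the message available for redelivery to Filebeat):

```shell
SUBSCRIPTION="es-auditlogs-export"

if command -v gcloud >/dev/null 2>&1; then
  # Peek at one message; without --auto-ack it is not acknowledged.
  gcloud pubsub subscriptions pull "$SUBSCRIPTION" --limit=1
else
  echo "gcloud CLI not found; use Metrics Explorer as described above."
fi
```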

Filebeat setup

Now that Google Cloud is monitoring the audit logs, and a few other metrics, it is time to install and configure Filebeat, Elastic's lightweight data shipper. You need to install Filebeat on a host, for example, a VM that is hosted within the same Google Cloud project.

Filebeat is configured to collect and then send the published logs to Elasticsearch, but you can easily configure it to do much more, such as monitoring application, container, and system metrics by utilizing prepackaged modules, many with predeveloped visualizations and dashboards.

Create a service account

As you prepare for configuring Filebeat, you must generate a Google Cloud service account and a corresponding key file. If you are unfamiliar with this process, see Creating and managing service account keys. Export the file in JSON format, naming it something like gcp-es-service-account.json.

You use this file with Filebeat to authenticate to the appropriate Pub/Sub topic. Save the file to the host that's running Filebeat. You'll move and place it in the correct location in a later step.

Provide the account with the Pub/Sub Editor role. For more information, see Pub/Sub access control.

Service account Pub/Sub editor role.
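The console steps above can also be scripted. A sketch using gcloud, where the service account name filebeat-pubsub is a hypothetical choice:

```shell
SA_NAME="filebeat-pubsub"          # hypothetical service account name
KEY_FILE="gcp-es-service-account.json"

if command -v gcloud >/dev/null 2>&1; then
  PROJECT_ID="$(gcloud config get-value project 2>/dev/null)"
  SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

  # Create the account, export a JSON key, and grant Pub/Sub Editor.
  gcloud iam service-accounts create "$SA_NAME"
  gcloud iam service-accounts keys create "$KEY_FILE" --iam-account="$SA_EMAIL"
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
      --member="serviceAccount:${SA_EMAIL}" \
      --role="roles/pubsub.editor"
else
  echo "gcloud CLI not found; create the service account in the console."
fi
```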

Install Filebeat

This article assumes that you have a host that can run Filebeat, and an existing Elastic Cloud deployment. If you don't have a deployment, it takes only a few minutes to create one; for guidance, see the Elasticsearch getting started guide.

You can follow the Filebeat Quick Start installation and configuration guide, or on a yum/RPM system you can run the following:

sudo yum install filebeat
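The yum command above assumes that the Elastic package repository is already configured on the host. A sketch of that one-time setup, guarded so it is a no-op unless run as root on an RPM-based system:

```shell
# Elastic's RPM repository definition (7.x channel).
ELASTIC_REPO="$(cat <<'EOF'
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
)"

if command -v yum >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
  rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  printf '%s\n' "$ELASTIC_REPO" > /etc/yum.repos.d/elastic.repo
  yum install -y filebeat
else
  echo "Skipping: requires root on an RPM-based system."
fi
```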

Even better, follow the instructions found directly in Kibana.

  1. Start by clicking Add log data from the Kibana Home page.

    Add log data in Kibana.

  2. Scroll down and click System logs or another module for instructions on downloading and configuring Filebeat.

    Filebeat comes with many prepackaged modules. If you want to use them, you only need to enable and configure them. In this procedure, you don't need to enable the System logs module. Instead, you enable the Google Cloud module.

  3. Choose your operating system.

    If your operating system is not listed, follow the Quick Start guide instead; this page still provides information that you will need later.

  4. Download and install Filebeat according to the instructions, typically by using a curl command.

    Windows deployments require downloading and extracting a ZIP archive instead.

Configure Filebeat

The following procedure uses the prebuilt elastic user and its generated password. As a best practice, create a unique role and user for production environments.

For details on how to configure the role and user, see Grant privileges and roles needed for setup.

The elastic user is a superuser, and the account should be reserved for administrative or testing purposes only. If that user's password needs to be reset, it also needs to be reset everywhere it is used. For this reason, in production environments, it's best to set up a unique user for each intended purpose.

There are essentially two options to establish authentication between Filebeat and Elasticsearch: configure a keystore, or hard-code the credentials in the filebeat.yml configuration file.

For either of the following options, you will need to configure cloud.id and cloud.auth.

You can get the Cloud ID in the previous Install Filebeat steps in Kibana, or from the Elasticsearch Service deployments management page.

If you haven't already saved the authorization password during the deployment process, you can reset the password for the elastic user. For instructions, see Reset the elastic user password.

Option 1 - Using the Keystore

A more secure way of authenticating Filebeat with Elastic Cloud is to use the Filebeat keystore, to securely store secret values.

You run the filebeat keystore command on the host that's running Filebeat. The keystore encrypts the Cloud ID and user password and offers the ability to use them as variables in Filebeat, or any other Beat.

The following steps apply to a Linux machine. If you need help with your operating system, see the Quick Start, because commands differ slightly.

  1. Create the keystore and add the elastic user's password:

    sudo filebeat keystore add ES_PWD

    When prompted, paste the password given to you for the elastic user during deployment, or when you reset it. Otherwise, if you created a user, then enter that user's password.

  2. Add the deployment's Cloud ID:

    sudo filebeat keystore add CLOUD_ID

    When prompted, paste the Cloud ID. (Do not include quotes if referring to the Install Filebeat steps.)

  3. Update the filebeat.yml file to use these new variables. Make sure that you are in the directory where the filebeat.yml file is, or enter the full path to the existing file.

    echo "cloud.id: \"\${CLOUD_ID}\"" >> filebeat.yml
    echo "cloud.auth: \"elastic:\${ES_PWD}\"" >> filebeat.yml
  4. Additional helpful keystore commands:

    sudo filebeat keystore list - lists the keys in the keystore

    sudo filebeat keystore --help - gets command help

  5. Skip to Configure the Google Cloud module.

For more information, see the keystore documentation.

Option 2 - Modifying the file directly

You can optionally follow the instructions from the installation page under Install Filebeat, or the Quick Start guide, which outlines configuring the filebeat.yml file (the directory layout depends on your environment), by adding the credential information directly to that file.

  1. For Filebeat to ship data to Elasticsearch, you must set two settings, cloud.id and cloud.auth, found in the filebeat.yml file.


    # ================== Elastic Cloud ==================
    cloud.id: "gcp_filebeat:dXMtY2VudHJhjQZjJkODdkOGJjNmJi"
    cloud.auth: "elastic:<password>"

    Where gcp_filebeat is the name of the deployment, and <password> should be replaced with the password you have assigned.

Configure the Google Cloud module

After the filebeat.yml file is configured to connect to your Elasticsearch Service, you must enable, and then configure, the Google Cloud module.

  1. Enable the googlecloud module. For example, on Linux type:

    sudo filebeat modules enable googlecloud

    Tip: You can find out which modules are enabled by running:

    sudo filebeat modules list

  2. After you enable the googlecloud module, you must configure the module's YAML configuration file, found in the modules.d directory. This could be under /usr/share/filebeat or wherever the files were extracted.

Example googlecloud.yml configuration, enabling audit logs:

# open {path}/modules.d/googlecloud.yml, add the following

- module: googlecloud
  audit:
    enabled: true
    var.project_id: project-id
    var.topic: es-auditlogs-export
    var.subscription_name: es-auditlogs-export
    var.credentials_file: ${path.config}/gcp-es-service-account.json

The location of the var.credentials_file file depends on the system. You can run the test config command in the steps that follow to display where the Config path points, and where that credentials file must live. Also refer to the directory layout page at Elastic.

  1. Ensure that the vpcflow and firewall collection settings are not enabled:

    - module: googlecloud
      vpcflow:
        enabled: false
      firewall:
        enabled: false
  2. Test the configuration by running a similar command:

    sudo filebeat test config -e

  3. Place the exported JSON file, from the Create a service account step, into the location determined by the previous command, for Config path.

    On CentOS Linux, with Filebeat version 7.9.2, the location is the extracted location:

    Config path: [/home/UserName/filebeat-7.9.2-linux-x86_64]

    If it's installed using a packaging system, the location will be /etc/filebeat.

  4. Run the setup command, which ensures that the imported logs can be parsed and indexed correctly, and which provides predefined visualizations, including dashboards.

    sudo filebeat setup

  5. Start Filebeat:

    sudo filebeat -e

If any errors occur when you start the service, the logs should provide helpful information. A common problem is the placement of the configuration file; the error indicates where the configuration file must be located in this scenario. Other common problems are related to authentication with the service account that you created.

If you're having an issue with configuration, you can post questions and search for similar situations in the Elastic Discussion forum.

Using the exported logs

After the exported logs have been ingested by Elasticsearch, you can log in to Kibana and check out the Filebeat GoogleCloud Audit dashboard, where you gain insight into:

  • Audit Source Locations - represented by a coordinate map
  • Audit Events Outcome over time - represented by a vertical bar chart
  • Audit Event Action - represented by a pie chart
  • Audit Top User Email - represented by a tag cloud
  • Audit User Agent - represented by a pie chart
  • Audit Resource Name - represented by a pie chart
  • Audit events - listed in a data table or JSON format

Audit log dashboard.


If you want to install and use Logstash, transform your data, and then send it to Elasticsearch, you can follow these steps. To get started, or if you are trying a testing scenario, you can use the same VM that's running Beats. In a production environment, however, it's best to have a dedicated server.

Configure Logstash

  1. Download and install Logstash by following the Getting Started with Logstash guide. This guide provides several deployment options depending on your environment, while also providing the latest prerequisites and requirements, such as the Java version and setting its home environment variable.

  2. You can use Logstash to ingest logs either from Filebeat (option 1) or directly from Pub/Sub (option 2). In either scenario, you need to know the Elasticsearch endpoint and the elastic user password, both of which you can find in the Elastic Cloud console.

    You likely saved the elastic password during deployment, but if you need to, you can reset it by clicking the Security link in the same console.

    Logstash endpoint.

Option 1

Because Filebeat has already been configured to pull data from Pub/Sub, you can bring in that data from Filebeat into a Logstash pipeline, transform it, and then ship it to Elasticsearch.

  1. Modify the /etc/filebeat/filebeat.yml file by commenting out the cloud.id and cloud.auth settings configured previously.

    # ================== Elastic Cloud ==================
    #cloud.id: "xxx"
    #cloud.auth: "xxx"
  2. Comment out the output.elasticsearch and hosts lines under Elasticsearch Output.

    # ------ Elasticsearch Output -------------------
    #output.elasticsearch:
      # Array of hosts to connect to.
      #hosts: ["localhost:9200"]
  3. Remove the hash sign (#) in front of output.logstash and hosts under Logstash Output.

    # ------ Logstash Output -------------------
    output.logstash:
      # The Logstash hosts
      hosts: ["localhost:5044"]

    This sends the Filebeat output to the local host, which is running Logstash in this scenario, using port 5044. Logstash listens on this port by default, so you should not need to modify it.

  4. Create a Logstash pipeline configuration file, naming it, for example, beats.conf:

    sudo vim /etc/logstash/conf.d/beats.conf

    This file will contain the following:

    input {
      beats {
        port => 5044
      }
    }
    output {
      stdout { codec => rubydebug } # used to validate/troubleshoot
      elasticsearch {
        hosts => ["<elasticsearch-endpoint>"]
        user => "elastic"
        password => "glnvy8k27gwQE8pVPaxW35a"
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }
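Before starting Logstash as a service, you can ask it to parse the pipeline file and exit, which catches syntax errors early. A sketch, assuming the pipeline file was saved as /etc/logstash/conf.d/beats.conf (a hypothetical name) on a package install of Logstash:

```shell
PIPELINE="/etc/logstash/conf.d/beats.conf"  # hypothetical file name

if command -v /usr/share/logstash/bin/logstash >/dev/null 2>&1; then
  # --config.test_and_exit parses the config and reports errors without running.
  sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f "$PIPELINE"
else
  echo "Logstash not found at the default package location."
fi
```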

Option 2

If you prefer to use Logstash directly, rather than also using Filebeat, follow these steps to configure the Pub/Sub input plugin for Logstash.

  1. Per the plugin instructions, you must first install the plugin by running the following command (or one similar for your environment) from the Logstash directory. On Linux this is typically /usr/share/logstash, but it might differ depending on your system.

    bin/logstash-plugin install logstash-input-google_pubsub

  2. Google Cloud authentication is the same as for the Filebeat installation. You can refer to the previous Create a service account steps in this document. The configuration file in the next step contains an entry for the key JSON file.

  3. Create a Pub/Sub configuration file, to be saved under the logstash directory, typically /etc/logstash/conf.d, naming it pub_sub.conf.

    The configuration file should look similar to the following. The hosts value is the Elasticsearch endpoint.

    input {
      google_pubsub {
        project_id => "project-id"
        topic => "es-auditlogs-export"
        subscription => "es-auditlogs-export"
        json_key_file => "gcp-es-service-account.json"
      }
    }
    output {
      stdout { codec => rubydebug } # used to validate/troubleshoot
      elasticsearch {
        hosts => ["<elasticsearch-endpoint>"]
        user => "elastic"
        password => "glnvy8k27gwQE8pVPaxW35a"
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }

Start Logstash

The instructions in this section demonstrate how to start and validate Logstash on Linux, where the package manager was used to install Logstash. For details on other systems, see Running Logstash as a Service on Debian or RPM.

  1. Start Logstash as a service:

    sudo service logstash start
  2. Validate the service:

    sudo journalctl -u logstash -f

View Logstash collection in Kibana

Now that the Pub/Sub logs are being sent to Elasticsearch by using Logstash, you need to perform a few steps within Kibana to view the data.

  1. Open Kibana and go to Stack Management.

    Kibana stack management.

  2. Click Kibana > Index Patterns.

    Kibana index pattern.

  3. Click Create index pattern.

    Kibana create index pattern.

  4. Enter logstash-* in the Index pattern name field. You should see the index pattern match, due to the previous steps running Logstash.

    Kibana define index pattern.

  5. Click Next step.

  6. Select @timestamp for the Time field filter, and then click Create index pattern.

    Kibana select timestamp.

  7. Click Discover.

    Kibana discover.

  8. Ensure that the logstash-* index pattern is selected.

    Kibana select index.

  9. You can now visualize the ingested data. Learn more about what you can do with Kibana, such as using the Logs app, which can correlate logs and metrics.

    For example, add the logstash-* index to the list of patterns, separated by a comma, as in the following screenshot.

    Kibana visualize data.

Next steps with Logstash

Sign up to watch the Getting Started with Logstash webinar video. The video explains the background of Logstash, its architecture, the layout of its important files, how to set up pipelines with filters, and how to use data transformation and enrichment techniques.

What's next