Sending Security Command Center data to Elastic Stack using Docker

This page explains how to use a Docker container to host your Elastic Stack installation, and automatically send Security Command Center findings, assets, audit logs, and security sources to Elastic Stack. It also describes how to manage the exported data.

Docker is a platform for managing applications in containers. Elastic Stack is a security information and event management (SIEM) platform that ingests data from one or more sources and lets security teams manage responses to incidents and perform real-time analytics. The Elastic Stack configuration discussed in this guide includes four components:

  • Filebeat: a lightweight agent installed on edge hosts, such as virtual machines (VMs), that can be configured to collect and forward data
  • Logstash: a transformation service that ingests data, maps it into required fields, and forwards the results to Elasticsearch
  • Elasticsearch: a search database engine that stores data
  • Kibana: powers dashboards that let you visualize and analyze data
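As a rough illustration of how these components fit together, a minimal Logstash pipeline configuration might look like the following. This is a sketch only; the host, port, and index name are placeholders, and the actual logstash.conf used in this integration ships with the Docker image described later in this guide.

```conf
# Sketch of a Logstash pipeline: receive events from Filebeat,
# tag them, and index them into Elasticsearch.
# ELASTIC_HOST, ELASTIC_PORT, and the index name are placeholders.
input {
  beats { port => 5044 }
}
filter {
  mutate { add_field => { "[event][module]" => "gcp_scc" } }
}
output {
  elasticsearch {
    hosts => ["http://ELASTIC_HOST:ELASTIC_PORT"]
    index => "gccfindings"
  }
}
```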

In this guide, you set up Docker, ensure that the required Security Command Center and Google Cloud services are properly configured, and use a custom module to send findings, assets, audit logs, and security sources to Elastic Stack.

The following figure illustrates the data path when using Elastic Stack with Security Command Center.

Figure: Security Command Center and Elastic Stack integration

Configure authentication and authorization

Before connecting to Elastic Stack, you need to create an Identity and Access Management (IAM) service account and grant it IAM roles at both the organization and project levels.

Create a service account and grant IAM roles

The following steps use the Google Cloud console. For other methods, see the links at the end of this section.

  1. In the same project in which you create your Pub/Sub topics, use the Service Accounts page in the Google Cloud console to create a service account. For instructions, see Creating and managing service accounts.

  2. Grant the service account the following role:

    • Pub/Sub Editor (roles/pubsub.editor)
  3. Copy the name of the service account that you just created.

  4. Use the project selector in the Google Cloud console to switch to the organization level.

  5. Open the IAM page for the organization:

    Go to IAM

  6. On the IAM page, click Grant access. The grant access panel opens.

  7. In the Grant access panel, complete the following steps:

    1. In the Add principals section in the New principals field, paste the name of the service account.
    2. In the Assign roles section, use the Role field to grant the following IAM roles to the service account:

      • Security Center Admin Viewer (roles/securitycenter.adminViewer)
      • Security Center Notification Configurations Editor (roles/securitycenter.notificationConfigEditor)
      • Organization Viewer (roles/resourcemanager.organizationViewer)
      • Cloud Asset Viewer (roles/cloudasset.viewer)
      • Logs Configuration Writer (roles/logging.configWriter)
    3. Click Save. The service account appears on the Permissions tab of the IAM page under View by principals.

      By inheritance, the service account also becomes a principal in all child projects of the organization and the roles that are applicable at the project level are listed as inherited roles.
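The console steps above can also be sketched with the gcloud CLI. The account name scc-elastic-sa is an example; substitute your own name and replace PROJECT_ID and ORGANIZATION_ID with your values:

```shell
# Sketch: create the service account in the project that holds the Pub/Sub topics.
gcloud iam service-accounts create scc-elastic-sa --project=PROJECT_ID

SA_EMAIL="scc-elastic-sa@PROJECT_ID.iam.gserviceaccount.com"

# Grant the project-level role.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:${SA_EMAIL}" \
    --role="roles/pubsub.editor"

# Grant the organization-level roles.
for ROLE in roles/securitycenter.adminViewer \
            roles/securitycenter.notificationConfigEditor \
            roles/resourcemanager.organizationViewer \
            roles/cloudasset.viewer \
            roles/logging.configWriter; do
  gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
      --member="serviceAccount:${SA_EMAIL}" \
      --role="${ROLE}"
done
```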

For more information, see the IAM documentation on creating and managing service accounts and on granting, changing, and revoking access.

Provide the credentials to Elastic Stack

How you provide the IAM credentials to Elastic Stack depends on where you are hosting it.

Configure notifications

  1. Set up finding notifications as follows:

    1. Enable the Security Command Center API.
    2. Create a filter to export desired findings and assets.
    3. Create four Pub/Sub topics for findings, resources, audit logs, and assets. The notificationConfig must use the Pub/Sub topic you create for findings.

    You will need your organization ID, project ID, and Pub/Sub topic names from this task to configure Elastic Stack.

  2. Enable the Cloud Asset API for your project.
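The topic, subscription, and notification setup above can be sketched with the gcloud CLI. The topic names below are examples only; use whatever names you chose, and replace PROJECT_ID and ORGANIZATION_ID with your values:

```shell
# Sketch: create the four topics and matching subscriptions.
for TOPIC in scc-findings scc-resources scc-auditlogs scc-assets; do
  gcloud pubsub topics create "${TOPIC}" --project=PROJECT_ID
  gcloud pubsub subscriptions create "${TOPIC}-sub" \
      --topic="${TOPIC}" --project=PROJECT_ID
done

# Sketch: point the finding notification config at the findings topic.
gcloud scc notifications create scc-findings-export \
    --organization=ORGANIZATION_ID \
    --pubsub-topic="projects/PROJECT_ID/topics/scc-findings" \
    --filter='state = "ACTIVE"'

# Enable the Cloud Asset API for the project.
gcloud services enable cloudasset.googleapis.com --project=PROJECT_ID
```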

Install the Docker and Elasticsearch components

Follow these steps to install the Docker and Elasticsearch components in your environment.

Install Docker Engine and Docker Compose

You can install Docker for use on-premises or with a cloud provider. To get started, complete the following guides in Docker's product documentation:

Install Elasticsearch and Kibana

The Docker image that you installed in Install Docker includes Logstash and Filebeat. If you don't already have Elasticsearch and Kibana installed, use the following guides to install the applications:

You need the following information from those tasks to complete this guide:

  • Elastic Stack: host, port, username, and password
  • Kibana: host, port, username, and password

Download the GoApp module

This section explains how to download the GoApp module, a Go program maintained by Security Command Center. The module automates the process of scheduling Security Command Center API calls and regularly retrieves Security Command Center data for use in Elastic Stack.

To install GoApp, do the following:

  1. In a terminal window, install wget, a free software utility used to retrieve content from web servers.

    For Ubuntu and Debian distributions, run the following:

      # apt-get install wget
    

    For RHEL, CentOS, and Fedora distributions, run the following:

      # yum install wget
    
  2. Install unzip, a free software utility used to extract the contents of ZIP files.

    For Ubuntu and Debian distributions, run the following:

      # apt-get install unzip
    

    For RHEL, CentOS, and Fedora distributions, run the following:

      # yum install unzip
    
  3. Create a directory for the GoogleSCCElasticIntegration installation package:

      mkdir GoogleSCCElasticIntegration
    
  4. Download the GoogleSCCElasticIntegration installation package:

    wget -c https://storage.googleapis.com/security-center-elastic-stack/GoogleSCCElasticIntegration-Installation.zip
    
  5. Extract the contents of the GoogleSCCElasticIntegration installation package into the GoogleSCCElasticIntegration directory:

    unzip GoogleSCCElasticIntegration-Installation.zip -d GoogleSCCElasticIntegration
    
  6. Create a working directory to store and run GoApp module components:

      mkdir WORKING_DIRECTORY
    

    Replace WORKING_DIRECTORY with the directory name.

  7. Navigate to the GoogleSCCElasticIntegration installation directory:

      cd ROOT_DIRECTORY/GoogleSCCElasticIntegration/
    

    Replace ROOT_DIRECTORY with the path to the directory that contains the GoogleSCCElasticIntegration directory.

  8. Move install.sh, config.env, and dashboards.ndjson into your working directory.

      mv install.sh config.env 'Kibana Dashboards'/dashboards.ndjson WORKING_DIRECTORY
    

    Replace WORKING_DIRECTORY with the path to your working directory.

Install the Docker container

To set up the Docker container, you download and install a preformatted image from Google Cloud that contains Logstash and Filebeat. For information about the Docker image, go to the Container Registry repository in the Google Cloud console.

Go to Container Registry

During installation, you configure the GoApp module with Security Command Center and Elastic Stack credentials.

  1. Navigate to your working directory:

      cd /WORKING_DIRECTORY
    

    Replace WORKING_DIRECTORY with the path to your working directory.

  2. Run the following commands to install the Docker image and configure the GoApp module.

      chmod +x install.sh
      ./install.sh
    
  3. During the installation process, enter the requested variables:

    Variable Description
    UPDATE Whether you are upgrading from a previous version, either N for no or Y for yes
    CLIENT_CREDENTIAL_PATH One of:
    • The path to your service account JSON, if you are using service account keys
    • The credential configuration file, if you are using workload identity federation, as described in Before you begin
    PROJECT_ID The ID of the project that contains your Pub/Sub topics
    ORGANIZATION_ID Your organization ID
    FINDING_TOPIC_NAME The name of the Pub/Sub topic for findings
    FINDING_SUBSCRIPTION_NAME The name of the Pub/Sub subscription for findings
    ASSET_TOPIC_NAME_RESOURCE The name of the Pub/Sub topic for resources
    ASSET_SUBSCRIPTION_NAME_RESOURCE The name of the Pub/Sub subscription for resources
    ASSET_TOPIC_NAME_IAMPOLICY The name of the Pub/Sub topic for IAM policies
    ASSET_SUBSCRIPTION_NAME_IAMPOLICY The name of the Pub/Sub subscription for IAM policies
    AUDITLOG_TOPIC_NAME The name of the Pub/Sub topic for audit logs
    AUDITLOG_SUBSCRIPTION_NAME The name of the Pub/Sub subscription for audit logs
    ELASTIC_HOST The IP address of your Elastic Stack host
    ELASTIC_PORT The port for your Elastic Stack host
    KIBANA_HOST The IP address or hostname to which the Kibana server will bind
    KIBANA_PORT The port for the Kibana server
    HTTP_PROXY An optional proxy URL that includes the username, password, IP address, and port for your proxy host, for example, http://USER:PASSWORD@PROXY_IP:PROXY_PORT
    ELASTIC_USERNAME Your Elasticsearch username (optional)
    ELASTIC_PASSWORD Your Elasticsearch password (optional)
    KIBANA_USERNAME Your Kibana username (optional)
    KIBANA_PASSWORD Your Kibana password (optional)
    FINDINGS_START_DATE The optional date to start migrating findings, for example, 2021-04-01T12:00:00+05:30

    The GoApp module downloads the Docker image, installs the image, and sets up the container.

  4. When the process is finished, copy the email address of the WriterIdentity service account from the installation output.

    Your working directory should have the following structure:

      ├── config.env
      ├── dashboards.ndjson
      ├── docker-compose.yml
      ├── install.sh
      └── main
          ├── client_secret.json
          ├── filebeat
          │          └── config
          │                   └── filebeat.yml
          ├── GoApp
          │       └── .env
          └── logstash
                     └── pipeline
                                └── logstash.conf
    
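As a reference for step 3, hypothetical answers to the installer prompts might look like the following. The variable names match the table above; every value shown here is a placeholder, not a real credential or resource name:

```conf
# Example installer values (placeholders only).
UPDATE=N
CLIENT_CREDENTIAL_PATH=/home/user/keys/scc-elastic-sa.json
PROJECT_ID=my-project
ORGANIZATION_ID=123456789012
FINDING_TOPIC_NAME=scc-findings
FINDING_SUBSCRIPTION_NAME=scc-findings-sub
ASSET_TOPIC_NAME_RESOURCE=scc-resources
ASSET_SUBSCRIPTION_NAME_RESOURCE=scc-resources-sub
ASSET_TOPIC_NAME_IAMPOLICY=scc-iampolicy
ASSET_SUBSCRIPTION_NAME_IAMPOLICY=scc-iampolicy-sub
AUDITLOG_TOPIC_NAME=scc-auditlogs
AUDITLOG_SUBSCRIPTION_NAME=scc-auditlogs-sub
ELASTIC_HOST=10.0.0.5
ELASTIC_PORT=9200
KIBANA_HOST=10.0.0.5
KIBANA_PORT=5601
FINDINGS_START_DATE=2021-04-01T12:00:00+05:30
```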

Update permissions for audit logs

To update permissions so that audit logs can flow to your SIEM:

  1. Navigate to the Pub/Sub topics page.

    Go to Pub/Sub

  2. Select your project that includes your Pub/Sub topics.

  3. Select the Pub/Sub topic that you created for audit logs.

  4. In Permissions, add the WriterIdentity service account (that you copied in step 4 of the installation procedure) as a new principal and assign it the Pub/Sub Publisher role. The audit log policy is updated.
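The same grant can be made from the command line. AUDIT_LOG_TOPIC_NAME and WRITER_IDENTITY_EMAIL are placeholders for your audit log topic and the WriterIdentity service account email that you copied from the installation output:

```shell
# Sketch: allow the WriterIdentity account to publish to the audit log topic.
gcloud pubsub topics add-iam-policy-binding AUDIT_LOG_TOPIC_NAME \
    --project=PROJECT_ID \
    --member="serviceAccount:WRITER_IDENTITY_EMAIL" \
    --role="roles/pubsub.publisher"
```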

The Docker and Elastic Stack configurations are complete. You can now set up Kibana.

View Docker logs

  1. Open a terminal, and run the following command to see your container information, including container IDs. Note the ID for the container where Elastic Stack is installed.

      docker container ls
    
  2. To start a container and view its logs, run the following commands:

      docker exec -it CONTAINER_ID /bin/bash
      cat go.log
    

    Replace CONTAINER_ID with the ID of the container where Elastic Stack is installed.
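If you only want to follow the log without opening an interactive shell, a one-line alternative is the following. This assumes go.log sits in the container's default working directory, as in the steps above:

```shell
docker exec CONTAINER_ID tail -f go.log
```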

Set up Kibana

Complete these steps when you are installing the Docker container for the first time.

  1. Open kibana.yml in a text editor.

      sudo vim KIBANA_DIRECTORY/config/kibana.yml
    

    Replace KIBANA_DIRECTORY with the path to your Kibana installation folder.

  2. Update the following variables:

    • server.port: the port to use for Kibana's back-end server; default is 5601
    • server.host: the IP address or hostname to which the Kibana server will bind
    • elasticsearch.hosts: the IP address and port of the Elasticsearch instance to use for queries
    • server.maxPayloadBytes: the maximum payload size in bytes for incoming server requests; default is 1,048,576
    • url_drilldown.enabled: a Boolean value that controls the ability to navigate from a Kibana dashboard to internal or external URLs; default is true

    The completed configuration resembles the following:

      server.port: PORT
      server.host: "HOST"
      elasticsearch.hosts: ["http://ELASTIC_IP_ADDRESS:ELASTIC_PORT"]
      server.maxPayloadBytes: 5242880
      url_drilldown.enabled: true
    

Import Kibana dashboards

  1. Open the Kibana application.
  2. In the navigation menu, go to Stack Management, and then click Saved Objects.
  3. Click Import, navigate to the working directory and select dashboards.ndjson. The dashboards are imported and index patterns are created.

Complete the following steps to update dashboard links when you are installing the Docker container for the first time.

The findings and assets dashboards can include links to assets and findings in Google Cloud.

To update links in the findings dashboard in the Google Cloud console, do the following:

  1. In the navigation menu, go to Stack Management.
  2. Under Kibana, select Index Pattern.
  3. Click gccfindings.
  4. Search for the finding.name.keyword field.
  5. Open finding.name.keyword in edit mode.
  6. Enter the following URL:

    https://console.cloud.google.com/security/command-center/findings?organizationId=ORGANIZATION_ID&orgonly=true&supportedpurview=organizationId&view_type=vt_finding_type&vt_severity_type=All&columns=category,resourceName,eventTime,createTime,parent,securityMarks.marks&vt_finding_type=All&resourceId={value}
    

    Replace ORGANIZATION_ID with the ID for your organization.

  7. Click Save field.

  8. Open the findings dashboard, and click on any value under Finding Name to check whether the link configuration was successful. If there's an issue, modify the URL.

To update the link to the assets dashboard in the Google Cloud console, do the following:

  1. In the navigation menu, go to Stack Management.
  2. Under Kibana, select Index Pattern.
  3. Click gccassets.
  4. Search for the Asset Name field.
  5. Open Asset Name in edit mode.
  6. Enter the following URL:

    https://console.cloud.google.com/security/command-center/assets?organizationId=ORGANIZATION_ID&orgonly=true&supportedpurview=organizationId&view_type=vt_asset_type&vt_asset_type=All&columns=securityCenterProperties.resourceType,securityCenterProperties.resourceOwners,securityMarks.marks&pageState=(%22cscc-asset-inventory%22:(%22f%22:%22%255B%257B_22k_22_3A_22securityCenterProperties.resourceName_22_2C_22t_22_3A10_2C_22v_22_3A_22_5C_22\{value}_5C_22_22_2C_22s_22_3Atrue_2C_22i_22_3A_22securityCenterProperties.resourceName_22%257D%255D%22))
    

    Replace ORGANIZATION_ID with the ID for your organization.

  7. Click Save field.

  8. Open the assets dashboard, and click on any value under Asset Name to check whether the link works. If there's an issue, modify the URL.

Upgrade the Docker container

If you deployed a previous version of the GoApp module, you can upgrade to a newer version. When you upgrade the Docker container, you can keep your existing service account setup, Pub/Sub topics, and Elasticsearch components.

If you are upgrading from an integration that didn't use a Docker container, see Upgrade to the latest release.

  1. Add the Logs Configuration Writer (roles/logging.configWriter) role to the service account.

  2. Create a Pub/Sub topic for your audit logs.

  3. If you are installing the Docker container in another cloud, configure workload identity federation and download the credentials configuration file.

  4. Optionally, to avoid issues when importing the new dashboards, remove the existing dashboards from Kibana:

    1. Open the Kibana application.
    2. In the navigation menu, go to Stack Management, and then click Saved Objects.
    3. Search for Google SCC.
    4. Select all the dashboards that you want to remove.
    5. Click Delete.
  5. Remove the existing Docker container:

    1. Open a terminal and stop the container:

      docker stop CONTAINER_ID
      

      Replace CONTAINER_ID with the ID of the container where Elastic Stack is installed.

    2. Remove the Docker container:

      docker rm CONTAINER_ID
      

      If necessary, add -f before the container ID to remove the container forcefully.

  6. Complete the steps in Download the GoApp module.

  7. Complete the steps in Install Docker.

  8. Complete the steps in Update permissions for audit logs.

  9. Import the new dashboards, as described in Import Kibana dashboards. This step will overwrite your existing Kibana dashboards.

View and edit Kibana dashboards

You can use custom dashboards in Elastic Stack to visualize and analyze your findings, assets, and security sources. The dashboards display critical findings and help your security team prioritize fixes.

Overview dashboard

The Overview dashboard contains a series of charts that display the total number of findings in your organization by severity level, category, and state. Findings are compiled from Security Command Center's built-in services (Security Health Analytics, Web Security Scanner, Event Threat Detection, and Container Threat Detection) and any integrated services you enable.

To filter content by criteria such as misconfigurations or vulnerabilities, you can select the Finding class.

Additional charts show which categories, projects, and assets are generating the most findings.

Assets dashboard

The Assets dashboard displays tables that show your Google Cloud assets. The tables show asset owners, asset counts by resource type and projects, and your most recently added and updated assets.

You can filter asset data by asset name, asset type, and parents, and quickly drill down to findings for specific assets. If you click an asset name, you are redirected to Security Command Center's Assets page in the Google Cloud console and shown details for the selected asset.

Audit logs dashboard

The Audit logs dashboard displays a series of charts and tables that show audit log information. The audit logs that are included in the dashboard are the administrator activity, data access, system events, and policy denied audit logs. The table includes the time, severity, log type, log name, service name, resource name, and resource type.

You can filter the data by source (such as a project), severity, log type, and resource type.

Findings dashboard

The Findings dashboard includes charts showing your most recent findings. The charts provide information about the number of findings, their severity, category, and state. You can also view active findings over time, and which projects or resources have the most findings.

You can filter the data by finding class.

If you click a finding name, you are redirected to Security Command Center's Findings page in the Google Cloud console and shown details for the selected finding.

Sources dashboard

The Sources dashboard shows the total number of findings and security sources, the number of findings by source name, and a table of all your security sources. Table columns include name, display name, and description.

Add columns

  1. Navigate to a dashboard.
  2. Click Edit, and then click Edit visualization.
  3. Under Add sub-bucket, select Split rows.
  4. In the list, select Aggregation.
  5. In the Descending drop-down menu, select ascending or descending. In the Size field, enter the maximum number of rows for the table.
  6. Select the column you want to add.
  7. Save the changes.

Remove columns

  1. Navigate to the dashboard.
  2. Click Edit.
  3. To hide a column, next to the column name, click the visibility (eye) icon. To remove a column, next to the column name, click the X (delete) icon.

Uninstall Docker

  1. To see your container information, including container IDs, open the terminal and run the following command:

      docker container ls
    
  2. Stop the container:

      docker stop CONTAINER_ID
    

    Replace CONTAINER_ID with the ID of the container where Elastic Stack is installed.

  3. Remove the Docker container:

      docker rm CONTAINER_ID
    

    If necessary, add -f before the container ID to remove the container forcefully.

  4. Remove the Docker image:

      docker rmi us.gcr.io/security-center-gcr-host/googlescc_elk:latest
    
  5. Remove Pub/Sub feeds for assets, findings, IAM policies, and audit logs.

  6. Delete the working directory.
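Step 5 can be sketched with the gcloud CLI. The notification, feed, topic, and subscription names below are placeholders; use the names you created earlier:

```shell
# Delete the SCC notification config for findings.
gcloud scc notifications delete scc-findings-export \
    --organization=ORGANIZATION_ID

# Delete the asset feeds (resources and IAM policies).
gcloud asset feeds delete FEED_ID --organization=ORGANIZATION_ID

# Delete the Pub/Sub subscriptions and topics.
for TOPIC in scc-findings scc-resources scc-auditlogs scc-assets; do
  gcloud pubsub subscriptions delete "${TOPIC}-sub" --project=PROJECT_ID
  gcloud pubsub topics delete "${TOPIC}" --project=PROJECT_ID
done
```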

What's next