This page explains how to use a Docker container to host your Elastic Stack installation, and automatically send Security Command Center findings, assets, audit logs, and security sources to Elastic Stack. It also describes how to manage the exported data.
Docker is a platform for managing applications in containers. Elastic Stack is a security information and event management (SIEM) platform that ingests data from one or more sources and lets security teams manage responses to incidents and perform real-time analytics. The Elastic Stack configuration discussed in this guide includes four components:
- Filebeat: a lightweight agent installed on edge hosts, such as virtual machines (VM), that can be configured to collect and forward data
- Logstash: a transformation service that ingests data, maps it into required fields, and forwards the results to Elasticsearch
- Elasticsearch: a search database engine that stores data
- Kibana: powers dashboards that let you visualize and analyze data
In this guide, you set up Docker, ensure that the required Security Command Center and Google Cloud services are properly configured, and use a custom module to send findings, assets, audit logs, and security sources to Elastic Stack.
The following figure illustrates the data path when using Elastic Stack with Security Command Center.

Before you begin
Before connecting to Elastic Stack, do the following:
Create a service account with the following Identity and Access Management (IAM) roles:
- Security Center Admin (roles/securitycenter.admin)
- Organization Viewer (roles/resourcemanager.organizationViewer)
- Cloud Asset Owner (roles/cloudasset.owner)
- Pub/Sub Admin (roles/pubsub.admin)
- Logs Configuration Writer (roles/logging.configWriter)
For instructions on granting roles, see Granting, changing, and revoking access to resources.
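If you prefer the gcloud CLI over the console, the grants above can be sketched as follows. The script only builds and prints the commands so that you can review them before running anything; SA_EMAIL and ORG_ID are placeholders that you substitute yourself.

```shell
#!/usr/bin/env bash
# Sketch only: build and print the gcloud commands that would grant the five
# required roles at the organization level. Nothing is executed against
# Google Cloud; SA_EMAIL and ORG_ID are placeholders.
SA_EMAIL="SERVICE_ACCOUNT_EMAIL"
ORG_ID="ORGANIZATION_ID"

CMDS=()
for role in \
    roles/securitycenter.admin \
    roles/resourcemanager.organizationViewer \
    roles/cloudasset.owner \
    roles/pubsub.admin \
    roles/logging.configWriter; do
  CMDS+=("gcloud organizations add-iam-policy-binding ${ORG_ID} --member=serviceAccount:${SA_EMAIL} --role=${role}")
done

printf '%s\n' "${CMDS[@]}"
```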
Complete one of the following:
- If you are hosting the Docker container in Google Cloud, add the service account to the VM that will host your Docker container.
- If you are hosting the Docker container in your on-premises environment, create a service account key. You will need the service account JSON key file from this task to complete this guide.
- If you are installing the Docker container in another cloud, configure workload identity federation and download the credentials configuration file.
Set up finding notifications as follows:
- Enable the Security Command Center API.
- Create a filter to export desired findings and assets.
- Create four Pub/Sub topics for findings, resources, audit logs, and assets. The notificationConfig must use the Pub/Sub topic that you create for findings.
You will need your organization ID, project ID, and Pub/Sub topic names from this task to configure Elastic Stack.
Enable the Cloud Asset API for your project.
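If you create the topics with the gcloud CLI instead of the console, the four topic and subscription pairs can be sketched as below. The scc-* names are hypothetical examples (any names work, as long as the findings topic is the one referenced by the notificationConfig), and the script only prints the commands for review:

```shell
#!/usr/bin/env bash
# Sketch only: print gcloud commands that would create one topic and one
# subscription each for findings, resources, IAM policies, and audit logs.
# PROJECT_ID and the scc-* names are placeholder examples.
PROJECT_ID="PROJECT_ID"

CMDS=()
for name in findings resources iam-policies audit-logs; do
  CMDS+=("gcloud pubsub topics create scc-${name}-topic --project=${PROJECT_ID}")
  CMDS+=("gcloud pubsub subscriptions create scc-${name}-sub --topic=scc-${name}-topic --project=${PROJECT_ID}")
done

printf '%s\n' "${CMDS[@]}"
```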
Install the Docker and Elasticsearch components
Follow these steps to install the Docker and Elasticsearch components in your environment.
Install Docker Engine and Docker Compose
You can install Docker for use on-premises or with a cloud provider. To get started, complete the following guides in Docker's product documentation:
Install Elasticsearch and Kibana
The Docker image that you install in Install the Docker container includes Logstash and Filebeat. If you don't already have Elasticsearch and Kibana installed, use the following guides to install the applications:
You need the following information from those tasks to complete this guide:
- Elasticsearch: host, port, username, and password
- Kibana: host, port, username, and password
Download the GoApp module
This section explains how to download the GoApp module, a Go program maintained by Security Command Center. The module automates the process of scheduling Security Command Center API calls and regularly retrieves Security Command Center data for use in Elastic Stack.
To install GoApp, do the following:
In a terminal window, install wget, a free software utility used to retrieve content from web servers.
For Ubuntu and Debian distributions, run the following:
# apt-get install wget
For RHEL, CentOS, and Fedora distributions, run the following:
# yum install wget
Install unzip, a free software utility used to extract the contents of ZIP files.
For Ubuntu and Debian distributions, run the following:
# apt-get install unzip
For RHEL, CentOS, and Fedora distributions, run the following:
# yum install unzip
Create a directory for the GoogleSCCElasticIntegration installation package:
mkdir GoogleSCCElasticIntegration
Download the GoogleSCCElasticIntegration installation package:
wget -c https://storage.googleapis.com/security-center-elastic-stack/GoogleSCCElasticIntegration-Installation.zip
Extract the contents of the GoogleSCCElasticIntegration installation package into the GoogleSCCElasticIntegration directory:
unzip GoogleSCCElasticIntegration-Installation.zip -d GoogleSCCElasticIntegration
Create a working directory to store and run GoApp module components:
mkdir WORKING_DIRECTORY
Replace WORKING_DIRECTORY with the directory name.
Navigate to the GoogleSCCElasticIntegration installation directory:
cd ROOT_DIRECTORY/GoogleSCCElasticIntegration/
Replace ROOT_DIRECTORY with the path to the directory that contains the GoogleSCCElasticIntegration directory.
Move install.sh, config.env, and dashboards.ndjson into your working directory:
mv install.sh config.env 'Kibana Dashboards'/dashboards.ndjson WORKING_DIRECTORY
Replace WORKING_DIRECTORY with the path to your working directory.
Install the Docker container
To set up the Docker container, you download and install a preformatted image from Google Cloud that contains Logstash and Filebeat. For information about the Docker image, go to the Container Registry repository in the Google Cloud console.
During installation, you configure the GoApp module with Security Command Center and Elastic Stack credentials.
Navigate to your working directory:
cd /WORKING_DIRECTORY
Replace WORKING_DIRECTORY with the path to your working directory.
Run the following commands to install the Docker image and configure the GoApp module:
chmod +x install.sh
./install.sh
During the installation process, enter the requested variables:
| Variable | Description |
|---|---|
| UPDATE | Whether you are upgrading from a previous version: N for no or Y for yes |
| CLIENT_CREDENTIAL_PATH | One of the following: the path to your service account JSON key, if you are using service account keys; or the credential configuration file, if you are using workload identity federation, as described in Before you begin |
| PROJECT_ID | The ID of the project that contains the Pub/Sub topics |
| ORGANIZATION_ID | Your organization ID |
| FINDING_TOPIC_NAME | The name of the Pub/Sub topic for findings |
| FINDING_SUBSCRIPTION_NAME | The name of the Pub/Sub subscription for findings |
| ASSET_TOPIC_NAME_RESOURCE | The name of the Pub/Sub topic for resources |
| ASSET_SUBSCRIPTION_NAME_RESOURCE | The name of the Pub/Sub subscription for resources |
| ASSET_TOPIC_NAME_IAMPOLICY | The name of the Pub/Sub topic for IAM policies |
| ASSET_SUBSCRIPTION_NAME_IAMPOLICY | The name of the Pub/Sub subscription for IAM policies |
| AUDITLOG_TOPIC_NAME | The name of the Pub/Sub topic for audit logs |
| AUDITLOG_SUBSCRIPTION_NAME | The name of the Pub/Sub subscription for audit logs |
| ELASTIC_HOST | The IP address of your Elastic Stack host |
| ELASTIC_PORT | The port for your Elastic Stack host |
| KIBANA_HOST | The IP address or hostname to which the Kibana server binds |
| KIBANA_PORT | The port for the Kibana server |
| HTTP_PROXY | Optional. A link with the username, password, IP address, and port for your proxy host, for example, http://USER:PASSWORD@PROXY_IP:PROXY_PORT |
| ELASTIC_USERNAME | Optional. Your Elasticsearch username |
| ELASTIC_PASSWORD | Optional. Your Elasticsearch password |
| KIBANA_USERNAME | Optional. Your Kibana username |
| KIBANA_PASSWORD | Optional. Your Kibana password |
| FINDINGS_START_DATE | Optional. The date from which to start migrating findings, for example, 2021-04-01T12:00:00+05:30 |
The GoApp module downloads the Docker image, installs the image, and sets up the container. When the process is finished, copy the email address of the WriterIdentity service account from the installation output.
Your working directory should have the following structure:
├── config.env
├── dashboards.ndjson
├── docker-compose.yml
├── install.sh
└── main
    ├── client_secret.json
    ├── filebeat
    │   └── config
    │       └── filebeat.yml
    ├── GoApp
    │   └── .env
    └── logstash
        └── pipeline
            └── logstash.conf
Update permissions for audit logs
To update permissions so that audit logs can flow to your SIEM:
Navigate to the Pub/Sub topics page.
Select your project that includes your Pub/Sub topics.
Select the Pub/Sub topic that you created for audit logs.
In Permissions, add the WriterIdentity service account that you copied from the installation output as a new principal, and assign it the Pub/Sub Publisher role. The audit log policy is updated.
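The same grant can be made from the gcloud CLI; a minimal sketch, assuming placeholder names for the audit log topic and the WriterIdentity email (the command is printed, not executed):

```shell
#!/usr/bin/env bash
# Sketch only: the gcloud command that would grant the WriterIdentity service
# account the Pub/Sub Publisher role on the audit log topic. AUDIT_LOG_TOPIC
# and WRITER_IDENTITY_EMAIL are placeholders.
CMD="gcloud pubsub topics add-iam-policy-binding AUDIT_LOG_TOPIC --member=serviceAccount:WRITER_IDENTITY_EMAIL --role=roles/pubsub.publisher"
echo "$CMD"
```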
The Docker and Elastic Stack configurations are complete. You can now set up Kibana.
View Docker logs
Open a terminal, and run the following command to see your container information, including container IDs. Note the ID for the container where Elastic Stack is installed.
docker container ls
To start a container and view its logs, run the following commands:
docker exec -it CONTAINER_ID /bin/bash
cat go.log
Replace CONTAINER_ID with the ID of the container where Elastic Stack is installed.
Set up Kibana
Complete these steps when you are installing the Docker container for the first time.
Open kibana.yml in a text editor:
sudo vim KIBANA_DIRECTORY/config/kibana.yml
Replace KIBANA_DIRECTORY with the path to your Kibana installation folder.
Update the following variables:
- server.port: the port to use for Kibana's back-end server; the default is 5601
- server.host: the IP address or hostname to which the Kibana server binds
- elasticsearch.hosts: the IP address and port of the Elasticsearch instance to use for queries
- server.maxPayloadBytes: the maximum payload size in bytes for incoming server requests; the default is 1,048,576
- url_drilldown.enabled: a Boolean value that controls the ability to navigate from a Kibana dashboard to internal or external URLs; the default is true
The completed configuration resembles the following:
server.port: PORT
server.host: "HOST"
elasticsearch.hosts: ["http://ELASTIC_IP_ADDRESS:ELASTIC_PORT"]
server.maxPayloadBytes: 5242880
url_drilldown.enabled: true
Import Kibana dashboards
- Open the Kibana application.
- In the navigation menu, go to Stack Management, and then click Saved Objects.
- Click Import, navigate to the working directory and select dashboards.ndjson. The dashboards are imported and index patterns are created.
Update links to Google Cloud console
Complete these steps when you are installing the Docker container for the first time.
The findings and assets dashboards can include links to assets and findings in Google Cloud.
To update links in the findings dashboard in the Google Cloud console, do the following:
- In the navigation menu, go to Stack Management.
- Under Kibana, select Index Pattern.
- Click gccfindings.
- Search for the finding.name.keyword field.
- Open finding.name.keyword in edit mode.
Enter the following URL:
https://console.cloud.google.com/security/command-center/findings?organizationId=ORGANIZATION_ID&orgonly=true&supportedpurview=organizationId&view_type=vt_finding_type&vt_severity_type=All&columns=category,resourceName,eventTime,createTime,parent,securityMarks.marks&vt_finding_type=All&resourceId={value}
Replace ORGANIZATION_ID with the ID for your organization.
Click Save field.
Open the findings dashboard, and click on any value under Finding Name to check whether the link configuration was successful. If there's an issue, modify the URL.
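Kibana replaces the {value} token in the URL with the value of the clicked field at runtime. A quick local illustration of that substitution, using an abbreviated form of the template and a made-up resource ID:

```shell
#!/usr/bin/env bash
# Illustrate the substitution Kibana performs on the {value} token.
# The template is abbreviated here, and 1234 is a made-up resource ID.
TEMPLATE='https://console.cloud.google.com/security/command-center/findings?organizationId=ORGANIZATION_ID&resourceId={value}'
URL="${TEMPLATE/\{value\}/1234}"
echo "$URL"
```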
To update the link to the assets dashboard in the Google Cloud console, do the following:
- In the navigation menu, go to Stack Management.
- Under Kibana, select Index Pattern.
- Click gccassets.
- Search for the Asset Name field.
- Open Asset Name in edit mode.
Enter the following URL:
https://console.cloud.google.com/security/command-center/assets?organizationId=ORGANIZATION_ID&orgonly=true&supportedpurview=organizationId&view_type=vt_asset_type&vt_asset_type=All&columns=securityCenterProperties.resourceType,securityCenterProperties.resourceOwners,securityMarks.marks&pageState=(%22cscc-asset-inventory%22:(%22f%22:%22%255B%257B_22k_22_3A_22securityCenterProperties.resourceName_22_2C_22t_22_3A10_2C_22v_22_3A_22_5C_22\{value}_5C_22_22_2C_22s_22_3Atrue_2C_22i_22_3A_22securityCenterProperties.resourceName_22%257D%255D%22))
Replace ORGANIZATION_ID with the ID for your organization.
Click Save field.
Open the assets dashboard, and click on any value under Asset Name to check whether the link works. If there's an issue, modify the URL.
Upgrade the Docker container
If you deployed a previous version of the GoApp module, you can upgrade to a newer version. When you upgrade the Docker container to a newer version, you can keep your existing service account setup, Pub/Sub topics, and Elasticsearch components.
If you are upgrading from an integration that didn't use a Docker container, see Upgrade to the latest release.
Add the Logs Configuration Writer (roles/logging.configWriter) role to the service account.
Create a Pub/Sub topic for your audit logs.
If you are installing the Docker container in another cloud, configure workload identity federation and download the credentials configuration file.
Optionally, to avoid issues when importing the new dashboards, remove the existing dashboards from Kibana:
- Open the Kibana application.
- In the navigation menu, go to Stack Management, and then click Saved Objects.
- Search for Google SCC.
- Select all the dashboards that you want to remove.
- Click Delete.
Remove the existing Docker container:
Open a terminal and stop the container:
docker stop CONTAINER_ID
Replace CONTAINER_ID with the ID of the container where Elastic Stack is installed.
Remove the Docker container:
docker rm CONTAINER_ID
If necessary, add -f before the container ID to remove the container forcefully.
Complete the steps in Download the GoApp module.
Complete the steps in Install the Docker container.
Complete the steps in Update permissions for audit logs.
Import the new dashboards, as described in Import Kibana dashboards. This step will overwrite your existing Kibana dashboards.
View and edit Kibana dashboards
You can use custom dashboards in Elastic Stack to visualize and analyze your findings, assets, and security sources. The dashboards display critical findings and help your security team prioritize fixes.
Overview dashboard
The Overview dashboard contains a series of charts that displays the total number of findings in your organization by severity level, category, and state. Findings are compiled from Security Command Center's built-in services—Security Health Analytics, Web Security Scanner, Event Threat Detection, and Container Threat Detection—and any integrated services you enable.
To filter content by criteria such as misconfigurations or vulnerabilities, you can select the Finding class.
Additional charts show which categories, projects, and assets are generating the most findings.
Assets dashboard
The Assets dashboard displays tables that show your Google Cloud assets. The tables show asset owners, asset counts by resource type and projects, and your most recently added and updated assets.
You can filter asset data by asset name, asset type, and parents, and quickly drill down to findings for specific assets. If you click an asset name, you are redirected to Security Command Center's Assets page in the Google Cloud console and shown details for the selected asset.
Audit logs dashboard
The Audit logs dashboard displays a series of charts and tables that show audit log information. The audit logs that are included in the dashboard are the administrator activity, data access, system events, and policy denied audit logs. The table includes the time, severity, log type, log name, service name, resource name, and resource type.
You can filter the data by source (such as a project), severity, log type, and resource type.
Findings dashboard
The Findings dashboard includes charts showing your most recent findings. The charts provide information about the number of findings, their severity, category, and state. You can also view active findings over time, and which projects or resources have the most findings.
You can filter the data by finding class.
If you click a finding name, you are redirected to Security Command Center's Findings page in the Google Cloud console and shown details for the selected finding.
Sources dashboard
The Sources dashboard shows the total number of findings and security sources, the number of findings by source name, and a table of all your security sources. Table columns include name, display name, and description.
Add columns
- Navigate to a dashboard.
- Click Edit, and then click Edit visualization.
- Under Add sub-bucket, select Split rows.
- In the list, select Aggregation.
- In the Descending drop-down menu, select ascending or descending. In the Size field, enter the maximum number of rows for the table.
- Select the column you want to add.
- Save the changes.
Remove columns
- Navigate to the dashboard.
- Click Edit.
- To hide a column, next to the column name, click the visibility (eye) icon. To remove a column, next to the column name, click the delete (X) icon.
Uninstall Docker
To see your container information, including container IDs, open the terminal and run the following command:
docker container ls
Stop the container:
docker stop CONTAINER_ID
Replace CONTAINER_ID with the ID of the container where Elastic Stack is installed.
Remove the Docker container:
docker rm CONTAINER_ID
If necessary, add -f before the container ID to remove the container forcefully.
Remove the Docker image:
docker rmi us.gcr.io/security-center-gcr-host/googlescc_elk:latest
Remove Pub/Sub feeds for assets, findings, IAM policies, and audit logs.
Delete the working directory.
What's next
Learn more about setting up finding notifications in Security Command Center.
Read about filtering finding notifications in Security Command Center.