
Install and configure the forwarder on Linux

This document describes how to install and configure the forwarder on Linux. To install the forwarder on Windows, see Windows forwarder.

The forwarder sends logs from your environment to your Chronicle instance. Use it when you want to send logs directly to Chronicle rather than ingesting data through cloud buckets, or when the log type has no native ingestion through a third-party API. The forwarder is a ready-to-deploy alternative to integrating the ingestion API manually.

You can install the forwarder on a variety of Linux distributions, including Debian, Ubuntu, Red Hat, and SUSE. Google Cloud provides the software as a Docker container. You can run and manage the Docker container on either a physical or virtual machine running Linux.

System requirements

The following are general recommendations. For recommendations specific to your system, contact Chronicle Support.

  • RAM—1 GB for each collected data type. For example, endpoint detection and response (EDR), DNS, and DHCP are all separate data types. You need 3 GB of RAM to collect data for all three.

  • CPU—2 CPUs are sufficient to handle less than 10,000 events per second (EPS) (total for all data types). If you expect to forward more than 10,000 EPS, provision 4 to 6 CPUs.

  • Disk—100 MB of disk space is sufficient, regardless of how much data the Chronicle forwarder handles. If you need to buffer backlogged messages to disk as opposed to memory, see Disk Buffering. The Chronicle forwarder buffers to memory by default.
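As a quick sanity check against the recommendations above, the host's CPU, memory, and disk can be inspected with standard Linux tools. The following is a sketch; the thresholds it echoes are the general recommendations from this section:

```shell
# Report host resources against the general forwarder recommendations.
cpus=$(nproc)                                                        # CPU count
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)   # total RAM in GB
disk_mb=$(df -m / | awk 'NR==2 {print $4}')                          # free MB on the root filesystem

echo "CPUs: $cpus (2 suffice below 10,000 EPS; provision 4-6 above that)"
echo "RAM: ${mem_gb} GB (plan 1 GB per collected data type)"
echo "Free disk: ${disk_mb} MB (100 MB is sufficient unless disk buffering is enabled)"
```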

Verify the firewall configuration

Any firewalls or authenticated proxies between the Chronicle forwarder container and the internet require rules to open access to the following hosts:

Connection Type Destination Port
TCP malachiteingestion-pa.googleapis.com 443
TCP malachiteingestion-europe-backstory.googleapis.com 443
TCP malachiteingestion-europe-west2-backstory.googleapis.com 443
TCP malachiteingestion-asia-southeast1-backstory.googleapis.com 443
TCP accounts.google.com 443
TCP gcr.io 443
TCP storage.googleapis.com 443
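A quick way to verify the rules are in place is to attempt a TCP connection to each destination on port 443. The following sketch uses bash's /dev/tcp and checks a subset of the table above (add the regional endpoints you use); note that an authenticated proxy may still require additional configuration:

```shell
# Attempt a TCP handshake on port 443 with each required destination.
hosts="malachiteingestion-pa.googleapis.com accounts.google.com gcr.io storage.googleapis.com"
results=""
for h in $hosts; do
  if timeout 5 bash -c "exec 3<>/dev/tcp/$h/443" 2>/dev/null; then
    status="reachable"
  else
    status="BLOCKED or unreachable"
  fi
  results="$results$h:443 $status\n"
done
printf "%b" "$results"
```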

Customize the configuration files

Contact your Chronicle representative to get the configuration file templates.

Google Cloud tailors the configuration files to your forwarder instance with specific metadata, as shown in the output section. You can modify the configuration files as needed and add information about the log types to ingest under the collectors section. For more information about the configuration settings, contact Chronicle Support.

To configure the Linux forwarder:

  1. Make a copy of the configuration file template provided with the software.

  2. Save the two files in the same directory using the following naming convention:

    FORWARDER_NAME.conf—Use this file to define the configuration settings related to log ingestion.

    FORWARDER_NAME_auth.conf—Use this file to define the authorization credentials.

  3. Modify the files to include the configuration for your forwarder instance. Use the samples provided in this document as a reference.

  4. Ensure that an entry exists for each input in the FORWARDER_NAME_auth.conf file even if the input doesn't have corresponding authentication details. This is required to map the data correctly.

Sample configuration

The following code sample shows the format of the configuration files for a forwarder. For details about the settings for each type of ingestion mechanism, such as Splunk or Syslog, see Collect Data.

The FORWARDER_NAME.conf file

output:
  url: malachiteingestion-pa.googleapis.com:443
  identity:
    collector_id: COLLECTOR_ID
    customer_id: CUSTOMER_ID

collectors:
  - syslog:
      common:
        enabled: true
        data_type: WINDOWS_DHCP
        data_hint:
        batch_n_seconds: 10
        batch_n_bytes: 1048576
      tcp_address: 0.0.0.0:10514
      udp_address: 0.0.0.0:10514
      connection_timeout_sec: 60
      tcp_buffer_size: 524288
  - syslog:
      common:
        enabled: true
        data_type: WINDOWS_DNS
        data_hint:
        batch_n_seconds: 10
        batch_n_bytes: 1048576
      tcp_address: 0.0.0.0:10515
      connection_timeout_sec: 60
      tcp_buffer_size: 524288
enable_auto_update: false

The FORWARDER_NAME_auth.conf file

output:
  identity:
    secret_key: |
      {
        "type": "service_account",
        "project_id": "PROJECT_ID" \,
        "private_key_id": "PRIVATE_KEY_ID" \,
        "private_key": "-----BEGIN PRIVATE KEY-----\\"PRIVATE_KEY" \n-----END PRIVATE KEY-----\n",
        "client_email": "CLIENT_EMAIL" \,
        "client_id": "CLIENT_ID" \,
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/example-account-1%40example-account.iam.gserviceaccount.com"
      }

collectors:
  - syslog:
  - syslog:
      certificate: "../forwarder/inputs/testdata/localhost.pem"
      certificate_key: "../forwarder/inputs/testdata/localhost.key"

This two-file system lets you store the authentication credentials in a separate file for enhanced security. You can store the FORWARDER_NAME.conf file in a version control repository or any open configuration management system, and store the FORWARDER_NAME_auth.conf file directly on the physical or virtual machine running the forwarder.

Sample Configuration (Single file)

output:
  url: malachiteingestion-pa.googleapis.com:443
  identity:
    collector_id: "COLLECTOR_ID"
    customer_id: "CUSTOMER_ID"
    secret_key: |
      {
        "type": "service_account",
        "project_id": "PROJECT_ID" \,
        "private_key_id": "PRIVATE_KEY_ID" \,
        "private_key": "-----BEGIN PRIVATE KEY-----\ "PRIVATE_KEY" \n-----END PRIVATE KEY-----\n",
        "client_email": "CLIENT_EMAIL" \,
        "client_id": "CLIENT_ID" \,
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/malachite-test-1%40malachite-test.iam.gserviceaccount.com"
      }

collectors:
  - syslog:
      common:
        enabled: true
        data_type: WINDOWS_DHCP
        data_hint:
        batch_n_seconds: 10
        batch_n_bytes: 1048576
      tcp_address: 0.0.0.0:10514
      udp_address: 0.0.0.0:10514
      connection_timeout_sec: 60
      tcp_buffer_size: 524288
  - syslog:
      common:
        enabled: true
        data_type: WINDOWS_DNS
        data_hint:
        batch_n_seconds: 10
        batch_n_bytes: 1048576
      tcp_address: 0.0.0.0:10515
      connection_timeout_sec: 60
      certificate: "../forwarder/inputs/testdata/localhost.pem"
      certificate_key: "../forwarder/inputs/testdata/localhost.key"
      tcp_buffer_size: 524288
enable_auto_update: false

If you are using the single configuration file and want to move to the two file system, do the following:

  1. Create a copy of your existing configuration.
  2. Save one file as the FORWARDER_NAME.conf file and delete the authorization credentials from the file.
  3. Save the other file as FORWARDER_NAME_auth.conf file and delete all the non-authorization data from the file. Use the sample configuration files given in this guide as reference.
  4. Make sure that you follow the naming convention and other guidelines mentioned in the section Customize the configuration files.

Install Docker

The Docker installation depends on the host environment, and you can install Docker on a variety of host operating systems. Google Cloud provides limited documentation to assist you in installing Docker on several of the more popular Linux distributions. However, Docker is open source and extensive documentation is already available. For installation instructions, refer to the Docker documentation.

After Docker is installed on your system, the Chronicle forwarder installation process is the same on any Linux distribution.

To check whether Docker is installed properly on your system, execute the following command with elevated privileges:

   docker ps
  

The following response indicates that Docker has been installed properly:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

You can gather additional information about the Docker installation using the following command:

    docker info
  

If you have issues with Docker, the Chronicle support team can request the output of this command to help debug the issue.

Install the forwarder on Linux

This section describes how to install the Chronicle Forwarder using a Docker container on a Linux system.

Step 1. Download, transfer and install the forwarder configuration files

Chronicle provides forwarder configuration files specific to your operating system (Linux or Windows). Download the files from the link provided by your Chronicle representative to a local directory on your laptop (for example, a directory named chronicle). After you complete the following steps, transfer the configuration files from your laptop to the /opt/chronicle/config directory on the forwarder host.

  1. Connect to the host of the Linux forwarder via terminal.

  2. Create a new user on the host of the Linux forwarder.

      adduser USERNAME
      passwd USERNAME
      usermod -aG wheel USERNAME
    

  3. Change directory to the home directory of the new user that runs the Docker Container.

  4. Create a directory to store the Chronicle forwarder configuration files:

      mkdir /opt/chronicle/config
    

  5. Change directory.

      cd /opt/chronicle/config
    

  6. Once files have been transferred, ensure that the configuration files are located in the /opt/chronicle/config directory:

      ls -l
    
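After transferring the files (for example, with scp), a short check such as the following can confirm that both files are in place. FORWARDER_NAME is a placeholder; substitute your actual file names:

```shell
# Confirm both configuration files exist in the expected directory.
config_dir=/opt/chronicle/config
missing=0
for f in FORWARDER_NAME.conf FORWARDER_NAME_auth.conf; do
  if [ -f "$config_dir/$f" ]; then
    echo "$f: present"
  else
    echo "$f: MISSING from $config_dir"
    missing=$((missing + 1))
  fi
done
echo "$missing file(s) missing"
```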

Step 2. Run the forwarder within the Docker container

You can use the following procedures to start the Chronicle forwarder for the first time as well as to upgrade to the latest version of the Chronicle container:

The --log-opt options have been available since Docker 1.13. These options limit the size of the container log files; use them as long as your version of Docker supports them.

  1. If you are upgrading, first clean up any previous Docker runs. In the following example, the name of the Docker container is cfps.

    docker stop cfps
    
    docker rm cfps
    
  2. Obtain the latest Docker image from Google Cloud:

    docker pull gcr.io/chronicle-container/cf_production_stable
    
  3. Start Chronicle forwarder from the Docker container:

    docker run \
    --detach \
    --name cfps \
    --restart=always \
    --log-opt max-size=100m \
    --log-opt max-file=10 \
    --net=host \
    -v /opt/chronicle/config:/opt/chronicle/external \
    gcr.io/chronicle-container/cf_production_stable
    
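After the container starts, its status and recent output can be checked as follows (a sketch; cfps is the container name used in the docker run command above):

```shell
# Check that the forwarder container is up and inspect its recent logs.
if command -v docker >/dev/null 2>&1; then
  docker_present=yes
  # Show the container's status (empty output means it is not running):
  docker ps --filter name=cfps --format '{{.Names}}: {{.Status}}' || true
  # Tail the forwarder's recent log output:
  docker logs --tail 20 cfps 2>&1 | tail -n 20 || true
else
  docker_present=no
  echo "docker not found on PATH"
fi
```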

Uninstall the forwarder

Use the following Docker commands to stop and remove the Chronicle forwarder.

To stop or uninstall the forwarder container:

    docker stop cfps
  

To remove the forwarder container:

    docker rm cfps
  

Update the forwarder

The Chronicle forwarder has two parts and is upgraded as follows:

  • Forwarder Bundle—Is automatically updated and a restart is not required.

  • Forwarder Docker image—Is updated manually after stopping the existing forwarder and starting a new instance as stated in Step 2.

Collect Data

The following sections help you configure the Chronicle forwarder to ingest different types of data, which is forwarded to the Chronicle instance.

Collect Splunk data

You can configure the Chronicle forwarder to forward your Splunk data to Chronicle. Google Cloud configures Chronicle forwarder with the following information to forward your data from Splunk:

  • URL for the Splunk REST API (for example, https://10.0.113.15:8089).

  • Splunk queries to generate data for each of the required data types (for example, index=dns).

FORWARDER_NAME.conf
output:
collectors:
  - splunk:
      common:
        enabled: true
        data_type: WINDOWS_DNS
        data_hint: "#fields ts      uid     id.orig_h       id.orig_p       id.resp_h         id.resp_p       proto   trans_id        query   qclass  qclass_name"
        batch_n_seconds: 10
        batch_n_bytes: 819200
      url: https://127.0.0.1:8089
      is_ignore_cert: true
      minimum_window_size: 10s
      maximum_window_size: 30s
      query_string: search index=* sourcetype=dns
      query_mode: realtime

You must make your Splunk account credentials available to the Chronicle forwarder. You can do this by creating a creds.txt file or by adding user and password fields in the splunk settings section of your FORWARDER_NAME_auth.conf file. The following two procedures describe each method. Use only one method. The recommended method is to use the FORWARDER_NAME_auth.conf file.

To use the FORWARDER_NAME_auth.conf file, add the user and password fields to the splunk section of the FORWARDER_NAME_auth.conf file as shown below.

output:
  identity:
    secret_key: |
      {
        "type": "service_account",
        "project_id": "PROJECT_ID" \,
        "private_key_id": "PRIVATE_KEY_ID" \,
        "private_key": "-----BEGIN PRIVATE KEY-----\\"PRIVATE_KEY" \n-----END PRIVATE KEY-----\n",
        "client_email": "CLIENT_EMAIL" \,
        "client_id": "CLIENT_ID" \,
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/example-account-1%40example-account.iam.gserviceaccount.com"
      }

collectors:
  - splunk:
      common:
      user: myusername
      password: mypassword

minimum_window_size: The minimum time range passed to the Splunk query. The default value is 10 seconds. Use this parameter to tune how frequently the Splunk server is queried when the forwarder is in a steady state. When there is a lag, the Splunk API might be called several times.

maximum_window_size: The maximum time range passed to the Splunk query. The default value is 30 seconds. Use this parameter for tuning when there is a lag or when more data is required per query.

If you change minimum_window_size, set this parameter to a value equal to or greater than it. Lag can occur if a Splunk query call takes longer than maximum_window_size.

query_mode: There's only one valid value: realtime. For details about real-time searches in Splunk, see the Splunk documentation.

To use a creds.txt file:

  1. Create a local file for your Splunk credentials and name it creds.txt.

  2. Place your username on the first line and the password on the second line:

    cat creds.txt
    
    myusername
    mypassword
    
  3. For customers who use the Chronicle forwarder to access a Splunk instance, copy the creds.txt file to the config directory (the same directory where the configuration files reside). For example:

    cp creds.txt /opt/chronicle/config/creds.txt
    
  4. Verify the creds.txt file is in its proper location:

    ls /opt/chronicle/config
    
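Because creds.txt holds a plaintext password, it is worth restricting its permissions to the user that runs the forwarder. The following is a sketch using the placeholder values from the steps above:

```shell
# Create creds.txt readable only by its owner.
umask 077                                             # new files get mode 600
printf '%s\n%s\n' 'myusername' 'mypassword' > creds.txt
chmod 600 creds.txt                                   # explicit, in case the file already existed
ls -l creds.txt
```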

Collect syslog data

Chronicle forwarder can work as a syslog server, so you can configure any appliance or server that supports sending syslog data over a TCP or UDP connection to forward its data to the Chronicle forwarder. You can control exactly which data the appliance or server sends to the Chronicle forwarder, which then forwards the data to Chronicle.

The FORWARDER_NAME.conf configuration file (provided by Google Cloud) specifies which ports to monitor for each type of forwarded data (for example, port 10514). By default, the Chronicle forwarder accepts both TCP and UDP connections.

Configure rsyslog

To configure rsyslog, you need to specify a target for each port (for example, each data type). Consult your system documentation for the correct syntax. The following examples illustrate the rsyslog target configuration:

  • TCP log traffic: dns.* @@192.168.0.12:10514

  • UDP log traffic: dns.* @192.168.0.12:10514
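With the forwarder listening, a test message can be sent from the rsyslog host with the util-linux logger tool. This is a sketch; 127.0.0.1 and port 10514 are illustrative and should match your forwarder's address and configured port:

```shell
# Send one UDP test message to the forwarder's syslog port.
if command -v logger >/dev/null 2>&1; then
  logger --server 127.0.0.1 --port 10514 --udp "chronicle forwarder test message" \
    && sent=yes || sent=no
else
  sent=unavailable
fi
# UDP does not confirm delivery; check the forwarder's container logs.
echo "test message status: $sent"
```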

Enable TLS for syslog configurations

You can enable TLS for the Syslog connection to the Chronicle forwarder. In the Chronicle forwarder configuration file (FORWARDER_NAME.conf), specify the location of your own generated certificate and certificate key as shown in the following example:

certificate "/opt/chronicle/external/certs/client_generated_cert.pem"
certificate_key "/opt/chronicle/external/certs/client_generated_cert.key"

Based on the example shown, modify the Chronicle forwarder configuration file (FORWARDER_NAME.conf) as follows:

collectors:
- syslog:
    common:
      enabled: true
      data_type: WINDOWS_DNS
      data_hint:
      batch_n_seconds: 10
      batch_n_bytes: 1048576
    tcp_address: 0.0.0.0:10515
    tcp_buffer_size: 65536
    connection_timeout_sec: 60
    certificate: "/opt/chronicle/external/certs/client_generated_cert.pem"
    certificate_key: "/opt/chronicle/external/certs/client_generated_cert.key"
    minimum_tls_version: "TLSv1_3"
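For testing, a self-signed certificate and key can be generated with openssl. This is a sketch; the subject name and file paths are illustrative, and production deployments should use certificates issued by your own CA:

```shell
# Generate a self-signed certificate and key for the syslog TLS listener.
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=forwarder.example.com" \
  -keyout certs/client_generated_cert.key \
  -out certs/client_generated_cert.pem 2>/dev/null
# Inspect the result (subject and expiry date):
openssl x509 -noout -subject -enddate -in certs/client_generated_cert.pem
```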

A few important points to note:

  • You can configure the TCP buffer size. The default TCP buffer size is 64 KB.

  • The default and recommended value for connection_timeout_sec is 60 seconds. The TCP connection is terminated if it is inactive for the specified time.

  • The minimum TLS version is checked against the TLS version of the incoming request, which must be at least the minimum TLS version. The minimum TLS version must be one of the following values: TLSv1_0, TLSv1_1, TLSv1_2, TLSv1_3.

You can create a certs directory under the configuration directory and store the certificate files there.

Collect file data

Use file collection to manually upload logs from a single log file, for example, to backfill logs for a particular log file.

Start the Chronicle forwarder from the Docker container:

  docker run \
    --name cfps \
    --log-opt max-size=100m \
    --log-opt max-file=10 \
    --net=host \
    -v /opt/chronicle/config:/opt/chronicle/external \
    -v /var/log/crowdstrike/falconhoseclient:/opt/chronicle/edr \
     gcr.io/chronicle-container/cf_production_stable

The second -v option in this docker run command is critical: it maps the log volume into the container.

Based on this example, you should modify the Chronicle forwarder configuration (FORWARDER_NAME.conf file) as follows:

collectors:
  - file:
      common:
        enabled: true
        data_type: CS_EDR
        data_hint:
        batch_n_seconds: 10
        batch_n_bytes: 1048576
      file_path: /opt/chronicle/edr/output/sample.txt
      filter:

Collect packet data

Chronicle forwarder can capture packets directly from a network interface using libpcap on Linux. For more information on libpcap, refer to the libpcap Linux manual page.

Packets are captured and sent to Chronicle instead of log entries. Packet capture is handled from a local interface only. To enable packet capture for your system, contact Chronicle support.

Google Cloud configures Chronicle forwarder with the Berkeley Packet Filter (BPF) expression used when capturing packets (for example, port 53 and not localhost). For more information, refer to Berkeley packet filters.

Collect data from Kafka topic

You can ingest data from Kafka topics just as you can from syslog. Consumer groups enable you to deploy up to three forwarders that pull data from the same Kafka topic. For more information, refer to Kafka.

For more information on Kafka consumer groups, see the following: https://docs.confluent.io/platform/current/clients/consumer.html

Example configuration: Kafka input

The following forwarder configuration shows how to set up the forwarder to ingest data from Kafka topics.

The FORWARDER_NAME.conf file

collectors:
- kafka:
      common:
        batch_n_bytes: 1048576
        batch_n_seconds: 10
        data_hint: null
        data_type: NIX_SYSTEM
        enabled: true
      topic: example-topic
      group_id: chronicle-forwarder
      timeout: 60s
      brokers: ["broker-1:9092", "broker-2:9093"]
      tls:
        insecureSkipVerify: true
        certificate: "/path/to/cert.pem"
        certificate_key: "/path/to/cert.key"
- syslog:
      common:
        batch_n_bytes: 1048576
        batch_n_seconds: 10
        data_hint: null
        data_type: WINEVTLOG
        enabled: true
      tcp_address: 0.0.0.0:30001
      connection_timeout_sec: 60

The FORWARDER_NAME_auth.conf file

collectors:
- kafka:
      username: user
      password: password
- syslog:

Customize optional configurations

Toggle data compression

Log compression reduces network bandwidth consumption when transferring logs to Chronicle. However, the compression might cause an increase in CPU usage. The tradeoff between CPU usage and bandwidth depends on many factors, including the type of log data, the compressibility of that data, the availability of CPU cycles on the host running the forwarder and the need for reducing network bandwidth consumption.

For example, text based logs compress well and can provide substantial bandwidth savings with low CPU usage. However, encrypted payloads of raw packets do not compress well and incur higher CPU usage.

By default, log compression is disabled. Enabling it can reduce bandwidth consumption at the cost of increased CPU usage; be aware of this trade-off.

To enable log compression, set the compression field to true in the Chronicle forwarder configuration file as shown in the following example:

The FORWARDER_NAME.conf file

output:
  compression: true
  url: malachiteingestion-pa.googleapis.com:443
  identity:
    collector_id: 10479925-878c-11e7-9421-10604b7cb5c1
    customer_id: ebdc4bb9-878b-11e7-8455-10604b7cb5c1
...

The FORWARDER_NAME_auth.conf file

output:
  identity:
    secret_key: |
    {
     "type": "service_account",
...
    }

Configure disk buffering

Disk buffering enables you to buffer backlogged messages to disk instead of memory, so that the messages are preserved if the forwarder or the underlying host crashes. Be aware that enabling disk buffering can affect performance.

If disk buffering is disabled, the forwarder uses 1 GB of memory (RAM) for each log type (for example, per connector). To change the memory limit, specify the max_memory_buffer_bytes configuration parameter. The maximum memory allowed is 4 GB.

If you are running the forwarder using Docker, Google recommends mounting a volume separate from your configuration volume for isolation purposes. Also, each input should be isolated with its own directory or volume to avoid conflicts.
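For example, the directories backing the disk buffers can be created ahead of time, one per input, and mounted into the container alongside the configuration volume. This is a sketch; the host path is hypothetical:

```shell
# Create one buffer directory per input on a dedicated host path.
buffer_root="$PWD/chronicle-buffers"   # hypothetical; use a dedicated volume in production
mkdir -p "$buffer_root/NIX_SYSTEM" "$buffer_root/WINEVTLOG"
ls -d "$buffer_root"/*

# Mount it into the container in addition to the configuration volume:
#   docker run ... \
#     -v /opt/chronicle/config:/opt/chronicle/external \
#     -v "$buffer_root":/buffers \
#     gcr.io/chronicle-container/cf_production_stable
```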

Example configuration: disk buffering

The following configuration includes syntax to enable disk buffering:

collectors:
- syslog:
    common:
      write_to_disk_buffer_enabled: true
      # /buffers/NIX_SYSTEM is part of the external mounted volume for the forwarder
      write_to_disk_dir_path: /buffers/NIX_SYSTEM
      max_file_buffer_bytes: 1073741824
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60

Set regular expression filters

Regular expression filters enable you to filter logs based on regular expression matches against raw logs.

The filters employ the RE2 syntax described here: https://github.com/google/re2/wiki/Syntax

The filters must include a regular expression and, optionally, define a behavior when there is a match. The default behavior on a match is block (you can also explicitly configure it as block).

Alternatively, you can specify filters with the allow behavior. If you specify any allow filters, the forwarder blocks any logs that do not match at least one allow filter.

It is possible to define an arbitrary number of filters. Block filters take precedence over allow filters.

When filters are defined, they must be assigned a name. The names of active filters will be reported to Chronicle via Forwarder health metrics. Filters defined at the root of the configuration are merged with filters defined at the collector level. The collector level filters take precedence in cases of conflicting names. If no filters are defined either at the root or collector level, the behavior is to allow all.

Example configuration: regular expression filters

In the following Forwarder configuration, the WINEVTLOG logs that do not match the root filter (allow_filter) are blocked. Given the regular expression, the filter only allows logs with priorities between 0 and 99. However, any NIX_SYSTEM logs containing 'foo' or 'bar' are blocked, despite the allow_filter. This is because the filters use a logical OR. All logs are processed until a filter is triggered.

regex_filters:
  allow_filter:
    regexp: ^<[0-9][0-9]?>.*$
    behavior_on_match: allow
collectors:
- syslog:
    common:
      regex_filters:
        block_filter_1:
          regexp: ^.*foo.*$
          behavior_on_match: block
        block_filter_2:
          regexp: ^.*bar.*$
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
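A filter's regular expression can be sanity-checked offline before deployment. The following sketch runs an illustrative syslog-priority pattern against sample raw lines with grep -E, whose syntax agrees with RE2 for simple patterns like this one:

```shell
# Match lines that begin with a one- or two-digit syslog priority tag.
pattern='^<[0-9][0-9]?>.*$'
matched=$(printf '%s\n' \
  '<34>Oct 11 22:14:15 host sshd[123]: accepted' \
  '<190>three-digit priority' \
  'no priority tag at all' \
  | grep -E "$pattern")
# Only the first sample line carries a one- or two-digit priority.
echo "$matched"
```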

Configure arbitrary labels

Labels are used to attach arbitrary metadata to logs using key and value pairs. Labels can be configured for an entire forwarder or within a specific collector of a forwarder. If both are provided, the labels are merged with the collector's keys taking precedence over the forwarder's keys if the keys overlap.

Example configuration: arbitrary labels

In the following forwarder configuration, the 'foo=bar' and 'meow=mix' key and value pairs are both attached to WINEVTLOG logs, and the 'foo=baz' and 'meow=mix' key and value pairs are attached to the NIX_SYSTEM logs.

metadata:
  labels:
    foo: bar
    meow: mix
collectors:
- syslog:
    common:
      metadata:
        labels:
          foo: baz
          meow: mix
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60

Configure namespaces

Use namespace labels to identify logs from distinct network segments and deconflict overlapping IP addresses. You can configure a namespace label for an entire forwarder or within a specific collector of the forwarder. If both are included, the specific collector's namespace takes precedence.

Any namespace configured for the forwarder appears with the associated assets in the Chronicle user interface. You can also search for namespaces using the Chronicle Search feature.

For information about how to view namespaces in the Chronicle user interface, see the Chronicle documentation.

Example configuration: namespaces

In the following forwarder configuration, the WINEVTLOG logs are attached to the FORWARDER namespace and NIX_SYSTEM logs are attached to the CORPORATE namespace.

metadata:
  namespace: FORWARDER
collectors:
- syslog:
      common:
        metadata:
          namespace: CORPORATE
        batch_n_bytes: 1048576
        batch_n_seconds: 10
        data_hint: null
        data_type: NIX_SYSTEM
        enabled: true
      tcp_address: 0.0.0.0:30000
      connection_timeout_sec: 60
- syslog:
      common:
        batch_n_bytes: 1048576
        batch_n_seconds: 10
        data_hint: null
        data_type: WINEVTLOG
        enabled: true
      tcp_address: 0.0.0.0:30001
      connection_timeout_sec: 60

Configure load balancing and high availability options

The Chronicle forwarder for Linux can be deployed in an environment where a Layer 4 load balancer is installed between the data source and forwarder instances. This allows a customer to distribute the log collection across multiple forwarders or send logs to a different forwarder if one fails. This feature is supported only with the syslog collection type.

The Linux forwarder includes a built-in HTTP server that responds to HTTP health checks from the load balancer. The HTTP server also helps ensure that logs are not lost during startup or shutdown of a forwarder.

Configure the HTTP server, load balancing, and high availability options under the server section of the forwarder configuration file. These options support setting timeout durations and status codes returned in response to health checks received in container scheduler and orchestration-based deployments, as well as from traditional load balancers.

Use the following URL paths for health, readiness, and liveness checks. The <host:port> values are defined in the forwarder configuration.

  • http://<host:port>/meta/available: liveness checks for container schedulers/orchestrators, such as Kubernetes.
  • http://<host:port>/meta/ready: readiness checks and traditional load balancer health checks.
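From the forwarder host, these endpoints can be probed with curl. This sketch assumes the HTTP server is configured on port 8080, as in the example configuration; a code of 000 indicates the server is unreachable:

```shell
# Probe the forwarder's liveness and readiness endpoints.
base=http://127.0.0.1:8080
for path in /meta/available /meta/ready; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$base$path" || true)
  echo "$path -> HTTP $code"
done
```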

The following forwarder configuration is an example for load balancing and high availability:

collectors:
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
server:
  graceful_timeout: 15s
  drain_timeout: 10s
  http:
    port: 8080
    host: 0.0.0.0
    read_timeout: 3s
    read_header_timeout: 3s
    write_timeout: 3s
    idle_timeout: 3s
    routes:
    - meta:
        available_status: 204
        ready_status: 204
        unready_status: 503
Configuration path Description
server : graceful_timeout The amount of time the forwarder returns a bad readiness/health check and still accepts new connections. This is also the time to wait between receiving a signal to stop and actually beginning the shutdown of the server itself. This allows the load balancer time to remove the forwarder from the pool.
server : drain_timeout The amount of time the forwarder waits for active connections to successfully close on their own before being closed by the server.
server : http : port The port number that the HTTP server listens on for health checks from the load balancer. Must be between 1024 and 65535.
server : http : host The IP address, or hostname that can be resolved to IP addresses, that the server should listen to. If empty, the default value is local system (0.0.0.0).
server : http : read_timeout Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time allowed to read the entire request, both the header and the body. You can set both read_timeout and read_header_timeout.
server : http : read_header_timeout Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time allowed to read request headers. The connection's read deadline is reset after the header is read.
server : http : write_timeout Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time allowed to send a response. It is reset when a new request header is read.
server : http : idle_timeout Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time to wait for the next request when idle connections are enabled. If idle_timeout is zero, the value of read_timeout is used. If both are zero, the read_header_timeout is used.
routes : meta : ready_status The status code the forwarder returns when it is ready to accept the traffic in either of the following situations:
  • Readiness check is received from a container scheduler or orchestrator, such as Kubernetes.
  • Health check is received from a traditional load balancer.
routes : meta : unready_status The status code the forwarder returns when it is not ready to accept traffic.
routes : meta : available_status The status code the forwarder returns when a liveness check is received and the forwarder is available. Container schedulers/orchestrators such as Kubernetes often send liveness checks.

Frequently asked questions

How do I update my forwarder?

The Windows forwarder is not updated frequently because few customers use it. The Linux forwarder is updated automatically through a shell script in the Docker image, so there is no need to provide an executable. However, if a customer opens a support case to obtain the latest Windows executable file for the forwarder, the support team provides an EXE file to the customer through the support portal.

What is a Docker container?

  • Docker containers are like virtual machines that provide additional security, isolation, and resource management.

  • Virtual machines—have both a privileged space (Linux kernel) and a user space (everything you interact with: libc, python, ls, tcpdump, and so on).

  • Containers—have only a user space (everything you interact with: libc, python, ls, tcpdump, and so on) and rely on the host's privilege space.

Why distribute Chronicle forwarder using a container?

  • Better security through isolation:
    • Customer environment and requirements do not affect Chronicle forwarder.
    • Chronicle forwarder environment and requirements do not affect the customer.
    • A container distribution mechanism already exists and can be private and separate for Google Cloud and customers. https://cloud.google.com/container-registry/

Why only Linux for containers? What about Windows?

  • Containers were developed for Linux first and are production ready.

  • Windows support for Containers is in progress. Containers are available for Windows Server 2016 and Windows 10.

Do you need to learn advanced Docker commands?

  • Chronicle forwarder uses a single container, so there is no need to learn about Swarm, orchestration, Kubernetes, or other advanced Docker concepts or commands.