Chronicle forwarder for Windows on Docker
This document describes how to install and configure the Chronicle forwarder for Windows on Docker.
System requirements
The following are general recommendations. For recommendations specific to your system, contact Chronicle Support.
- Windows Server Version: The Chronicle forwarder is supported on Microsoft Windows Server 2022.
- RAM: 1.5 GB for each collected log type. For example, endpoint detection and response (EDR), DNS, and DHCP are all separate log types. You would need 4.5 GB of RAM to collect data for all three. For a list of supported default parsers and log types, see Supported default parsers.
- CPU: 2 CPUs are sufficient to handle less than 10,000 events per second (EPS) total across all data types. If you expect to send more than 10,000 EPS, then 4 to 6 CPUs are necessary.
- Disk: 100 MB of disk space is sufficient, regardless of how much data the Chronicle forwarder handles. You can buffer to disk by adding the `write_to_disk_buffer_enabled` and `write_to_disk_dir_path` parameters in the configuration file. For example:

```
- <collector>:
    common:
      ...
      write_to_disk_buffer_enabled: true
      write_to_disk_dir_path: directory_path
      ...
```
Google IP address ranges
You might need to open access to Google IP address ranges when setting up a Chronicle forwarder configuration, such as when configuring your firewall. Google cannot provide a specific list of IP addresses; however, you can obtain the Google IP address ranges.
Verify the firewall configuration
If you have firewalls or authenticated proxies between the Chronicle forwarder container and the internet, they require rules to allow access to the following Google Cloud hosts:
Connection Type | Destination | Port |
---|---|---|
TCP | malachiteingestion-pa.googleapis.com | 443 |
TCP | asia-northeast1-malachiteingestion-pa.googleapis.com | 443 |
TCP | asia-south1-malachiteingestion-pa.googleapis.com | 443 |
TCP | asia-southeast1-malachiteingestion-pa.googleapis.com | 443 |
TCP | australia-southeast1-malachiteingestion-pa.googleapis.com | 443 |
TCP | europe-malachiteingestion-pa.googleapis.com | 443 |
TCP | europe-west2-malachiteingestion-pa.googleapis.com | 443 |
TCP | europe-west3-malachiteingestion-pa.googleapis.com | 443 |
TCP | europe-west6-malachiteingestion-pa.googleapis.com | 443 |
TCP | me-central2-malachiteingestion-pa.googleapis.com | 443 |
TCP | me-west1-malachiteingestion-pa.googleapis.com | 443 |
TCP | northamerica-northeast2-malachiteingestion-pa.googleapis.com | 443 |
TCP | accounts.google.com | 443 |
TCP | gcr.io | 443 |
TCP | oauth2.googleapis.com | 443 |
TCP | storage.googleapis.com | 443 |
You can check network connectivity to Google Cloud using the following steps:

1. Start Windows PowerShell with administrator privileges (click Start, type `PowerShell`, right-click Windows PowerShell, and click Run as administrator).
2. Run the following command:

```
C:\> test-netconnection <host> -port <port>
```

The command returns `TcpTestSucceeded : True`. For example:

```
C:\> test-netconnection malachiteingestion-pa.googleapis.com -port 443

ComputerName     : malachiteingestion-pa.googleapis.com
RemoteAddress    : 198.51.100.1
RemotePort       : 443
InterfaceAlias   : Ethernet
SourceAddress    : 203.0.113.1
TcpTestSucceeded : True
```
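The same reachability test can be scripted to cover every host in the firewall table. The following Python sketch is illustrative only (it is not part of the forwarder); it performs the same TCP handshake check as `Test-NetConnection`:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        # create_connection performs the full TCP handshake, like Test-NetConnection.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (against ingestion hosts listed in the firewall table):
# for host in ("malachiteingestion-pa.googleapis.com", "accounts.google.com"):
#     print(host, check_tcp(host, 443))
```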
Install Docker on Microsoft Windows
This section describes how to install Docker on Microsoft Windows using the command-line interface and PowerShell.
Advantages of Chronicle forwarder using a container:
- Better security through isolation:
- Customer environment and requirements do not affect Chronicle forwarder.
- Chronicle forwarder environment and requirements do not affect the customer.
- Container distribution mechanism already exists and can be private and separate for Google Cloud and customers. For more information, see Artifact Registry.
Complete the following steps on Microsoft Windows Server Core 2022.

1. Enable the Microsoft Windows container feature:

```
Install-WindowsFeature containers -Restart
```

2. Run the following commands in PowerShell in Administrator mode to install Docker CE:

```
Invoke-WebRequest -UseBasicParsing "https://raw.githubusercontent.com/microsoft/Windows-Containers/Main/helpful_tools/Install-DockerCE/install-docker-ce.ps1" -o install-docker-ce.ps1
.\install-docker-ce.ps1
```

3. Test the Docker command-line interface by running `docker ps`, which returns a list of running containers. If the command runs without errors, the installation is successful, even if no containers are listed. If Docker is not installed properly, an error is displayed. For more information, see Get started: Prep Windows for containers.
For enterprise deployments, install Mirantis Container Runtime, also known as Docker EE.
Configure the Chronicle forwarder
To configure the Chronicle forwarder for Windows on Docker, see Manage forwarder configurations through the Chronicle UI.
When you configure the Chronicle forwarder, ensure that all paths in the forwarder start with the `c:` prefix.
Changes made to the configuration file are applied automatically by the Chronicle forwarder within 5 minutes.
To collect packet data using the Chronicle forwarder for Windows on Docker, see Collect packet data.
Run the Chronicle forwarder within the Docker container
1. If you are upgrading the Chronicle forwarder, first clean up previous Docker runs. In the following example, the name of the Docker container is `cfps`:

```
docker stop cfps
docker rm cfps
```

2. Obtain the latest Docker image from Google Cloud:

```
docker pull gcr.io/chronicle-container/cf_production_stable_windows
```

3. Start the Chronicle forwarder from the Docker container:

```
docker run `
  --detach `
  --name cfps `
  --restart=always `
  --log-opt max-size=100m `
  --log-opt max-file=10 `
  -p 10514:10514 `
  -v C:\config\:C:/opt/chronicle/external `
  gcr.io/chronicle-container/cf_production_stable_windows
```

You can add multiple ports using multiple options or multiple ranges. For example: `-p 3001:3000 -p 2023:2022` or `-p 7000-8000:7000-8000`.
View forwarder logs
To view the Chronicle forwarder logs, run the following command:

```
docker logs cfps
```

To view the path of the file in which the logs are stored, run the following command:

```
docker inspect --format='{{.LogPath}}' CONTAINER_NAME
```

To view the live running logs, run the following command:

```
docker logs cfps -f
```

To store the logs in a file, run the following command:

```
docker logs cfps > logs.txt 2>&1
```
Uninstall the Chronicle forwarder
The following Docker commands enable you to stop and uninstall or remove the Chronicle forwarder.
This command stops the Chronicle forwarder container:

```
docker stop cfps
```

This command removes the Chronicle forwarder container:

```
docker rm cfps
```
Upgrade the Chronicle forwarder
The Chronicle forwarder for Windows on Docker is updated automatically through a shell script in the Docker image, so you do not need to provide an executable to upgrade it.
Collect data
The following sections help you configure the Chronicle forwarder to ingest different types of data, which is forwarded to the Chronicle instance.
Do not configure a value greater than 1 MB for `batch_n_bytes`. If you configure a larger value, the Chronicle forwarder automatically resets it to 1 MB.
Collect Splunk data
You can configure the Chronicle forwarder to forward your Splunk data to Chronicle. Google Cloud configures the Chronicle forwarder with the following information to forward your data from Splunk:

- URL for the Splunk REST API (for example, `https://10.0.113.15:8089`).
- Splunk queries to generate data for each of the required data types (for example, `index=dns`).

`FORWARDER_NAME.conf` output:

```
collectors:
  - splunk:
      common:
        enabled: true
        data_type: WINDOWS_DNS
        data_hint: "#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name"
        batch_n_seconds: 10
        batch_n_bytes: 819200
      url: https://127.0.0.1:8089
      is_ignore_cert: true
      minimum_window_size: 10s
      maximum_window_size: 30s
      query_string: search index=* sourcetype=dns
      query_mode: realtime
```
Make your Splunk account credentials available to the Chronicle forwarder by creating a `creds.txt` file.

To use a `creds.txt` file:

1. Create a local file for your Splunk credentials and name it `creds.txt`.
2. Place your username on the first line and the password on the second line:

```
cat creds.txt
myusername
mypassword
```

3. To use the Chronicle forwarder to access a Splunk instance, copy the `creds.txt` file to the configuration directory (the same directory where the configuration files reside). For example:

```
cp creds.txt c:/opt/chronicle/config/creds.txt
```

4. Verify the `creds.txt` file is in its proper location:

```
ls c:/opt/chronicle/config
```
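If you script the setup, the two-line `creds.txt` layout (username on the first line, password on the second) can be validated before copying it into place. A minimal Python sketch under that assumption; the `read_creds` helper is hypothetical, not part of the forwarder:

```python
def read_creds(path: str) -> tuple[str, str]:
    """Parse a creds.txt file: username on line one, password on line two."""
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    if len(lines) < 2:
        raise ValueError("creds.txt needs a username line and a password line")
    return lines[0].strip(), lines[1].strip()
```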
Collect syslog data
The Chronicle forwarder can work as a syslog server. You can configure any appliance or server that supports sending syslog data over a TCP or UDP connection to forward its data to the Chronicle forwarder. You can control the exact data that the appliance or the server sends to the Chronicle forwarder. The Chronicle forwarder can then forward the data to Chronicle.
The `FORWARDER_NAME.conf` configuration file (provided by Google Cloud) specifies which ports to monitor for each type of forwarded data (for example, port 10514). By default, the Chronicle forwarder accepts both TCP and UDP connections.
Configure rsyslog
To configure rsyslog, you need to specify a target for each port (for example, each data type). Consult your system documentation for the correct syntax. The following examples illustrate the rsyslog target configuration:
- TCP log traffic: `dns.* @@192.168.0.12:10514`
- UDP log traffic: `dns.* @192.168.0.12:10514`
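To confirm the forwarder's syslog listener accepts connections before pointing rsyslog at it, you can send a single test message over TCP. An illustrative Python sketch (the address and port are the examples above; adjust them for your environment):

```python
import socket

def send_syslog_tcp(host: str, port: int, message: str) -> None:
    """Send one newline-terminated syslog message over a TCP connection."""
    with socket.create_connection((host, port), timeout=5.0) as conn:
        conn.sendall(message.encode("utf-8") + b"\n")

# Example, matching the rsyslog target above:
# send_syslog_tcp("192.168.0.12", 10514, "<14>forwarder connectivity test")
```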
Enable TLS for syslog configurations
You can enable TLS for the syslog connection to the Chronicle forwarder. In the Chronicle forwarder configuration file (`FORWARDER_NAME.conf`), specify the location of your own generated certificate and certificate key as shown in the following example:

certificate | c:/opt/chronicle/external/certs/client_generated_cert.pem |
certificate_key | c:/opt/chronicle/external/certs/client_generated_cert.key |

Based on the example shown, modify the Chronicle forwarder configuration file (`FORWARDER_NAME.conf`) as follows:

```
collectors:
- syslog:
    common:
      enabled: true
      data_type: WINDOWS_DNS
      data_hint:
      batch_n_seconds: 10
      batch_n_bytes: 1048576
    tcp_address: 0.0.0.0:10515
    tcp_buffer_size: 65536
    connection_timeout_sec: 60
    certificate: "c:/opt/chronicle/external/certs/client_generated_cert.pem"
    certificate_key: "c:/opt/chronicle/external/certs/client_generated_cert.key"
    minimum_tls_version: "TLSv1_3"
```
A few important points to note:

- You can configure the TCP buffer size. The default TCP buffer size is 64 KB.
- The default and recommended value for `connection_timeout` is 60 seconds. The TCP connection is terminated if it is inactive for the specified time.
- The minimum TLS version is checked against the TLS version of the input request. The TLS version of the input request must be greater than the minimum TLS version. The minimum TLS version must be one of the following values: `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, `TLSv1_3`.
- You can create a certs directory under the configuration directory and store the certificate files there.
Collect file data
A file collector is designed to fetch the logs from a file. The file should be bound to the Docker container.
Use this if you want to manually upload logs from a single log file. This can be used to backfill logs for a particular log file.
Start the Chronicle forwarder from the Docker container:

```
docker run `
  --name cfps `
  --log-opt max-size=100m `
  --log-opt max-file=10 `
  -p 10514:10514 `
  -v c:/opt/chronicle/config:c:/opt/chronicle/external `
  -v c:/var/log/crowdstrike/falconhoseclient:c:/opt/chronicle/edr `
  gcr.io/chronicle-container/cf_production_stable
```

You can add multiple ports using multiple options or multiple ranges. For example: `-p 3001:3000 -p 2023:2022` or `-p 7000-8000:7000-8000`.

This `docker run` command is critical to map the load volume to the container.

Based on this example, modify the Chronicle forwarder configuration file (`FORWARDER_NAME.conf`) as follows. The `sample.txt` file should be present in the `/var/log/crowdstrike/falconhostclient` folder.

```
collectors:
- file:
    common:
      enabled: true
      data_type: CS_EDR
      data_hint:
      batch_n_seconds: 10
      batch_n_bytes: 1048576
    file_path: c:/opt/chronicle/edr/output/sample.txt
    filter:
```
Flag configurations
- `skip_seek_to_end` (bool): This flag is set to `false` by default, and the file input only sends new log lines. Setting this to `true` causes all the previous log lines to be sent again during forwarder restarts, which causes log duplication. Setting this flag to `true` is helpful in certain situations (for example, during outages), because restarting the forwarder sends the missing log lines again.

- `poll` (bool): The file collector uses the Tail library to check for any changes in the file system. Setting this flag to `true` makes the Tail library use the polling method instead of the default notify method.
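The effect of `skip_seek_to_end` comes down to where the collector starts reading. This simplified Python sketch illustrates the idea (the real forwarder uses the Tail library; these helpers are hypothetical):

```python
import os

def initial_offset(path: str, skip_seek_to_end: bool) -> int:
    """With skip_seek_to_end=True, start at byte 0 so existing lines are resent
    (possible duplication); with the default False, start at end of file so
    only newly appended lines are sent."""
    return 0 if skip_seek_to_end else os.path.getsize(path)

def read_new_lines(path: str, offset: int) -> tuple[list[str], int]:
    """Read lines appended after the given byte offset; return them with the new offset."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    return data.decode("utf-8").splitlines(), offset + len(data)
```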
Collect packet data
The Chronicle forwarder can capture packets directly from a network interface using Npcap on Windows systems.
Packets are captured and sent to Google Cloud instead of log entries. Capture is done from a local interface only.
Contact Chronicle Support to update your Chronicle forwarder configuration file to support packet capture.
To run a Packet Capture (PCAP) forwarder, you need to do the following:

- Install Npcap on the Microsoft Windows host. During the Npcap installation, enable WinPcap compatibility mode.
- Grant the Chronicle forwarder root or administrator privileges to monitor the network interface.
- No command-line options are needed.
To configure a PCAP forwarder, Google Cloud needs the GUID for the interface used to capture packets.
Run `getmac.exe` on the machine where you plan to install the Chronicle forwarder (either the server or the machine listening on the SPAN port) and send the output to Chronicle. Alternatively, you can modify the configuration file: locate the PCAP section and replace the GUID value shown next to `interface` with the GUID displayed by running `getmac.exe`.
For example, here is an original PCAP section:
```
- pcap:
    common:
      enabled: true
      data_type: PCAP_DNS
      batch_n_seconds: 10
      batch_n_bytes: 1048576
    interface: \Device\NPF_{1A7E7C8B-DD7B-4E13-9637-0437AB1A12FE}
    bpf: udp port 53
```
Here is the output from running `getmac.exe`:

```
C:\>getmac.exe

Physical Address    Transport Name
===========================================================================
A4-73-9F-ED-E1-82   \Device\Tcpip_{2E0E9440-ABFF-4E5B-B43C-E188FCAD1234}
```
And finally, here is the revised PCAP section with the new GUID:
```
- pcap:
    common:
      enabled: true
      data_type: PCAP_DNS
      batch_n_seconds: 10
      batch_n_bytes: 1048576
    interface: \Device\NPF_{2E0E9440-ABFF-4E5B-B43C-E188FCAD9734}
    bpf: udp port 53
```
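Swapping the GUID into the `interface` line can be automated. A hypothetical Python sketch that extracts the GUID from `getmac.exe` output and builds the Npcap interface name:

```python
import re

def extract_guid(getmac_output: str) -> str:
    """Pull the {GUID} from a \\Device\\Tcpip_{...} transport name in getmac output."""
    match = re.search(r"\\Device\\Tcpip_(\{[0-9A-Fa-f-]+\})", getmac_output)
    if not match:
        raise ValueError("no transport GUID found in getmac output")
    return match.group(1)

def npf_interface(guid: str) -> str:
    """Build the interface value used in the PCAP section of the configuration."""
    return "\\Device\\NPF_" + guid
```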
Collect data from Kafka topic
You can ingest data from Kafka topics just as you can from syslog. Consumer groups enable you to deploy up to three Chronicle forwarders and pull data from the same Kafka topic. For more information, see Kafka.

For more information about Kafka consumer groups, see Kafka consumer groups.
Example configuration: Kafka input
The following Chronicle forwarder configuration shows how to set up the Chronicle forwarder to ingest data from the Kafka topics.
`FORWARDER_NAME.conf` file:

```
collectors:
- kafka:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    topic: example-topic
    group_id: chronicle-forwarder
    timeout: 60s
    brokers: ["broker-1:9092", "broker-2:9093"]
    tls:
      insecureSkipVerify: true
      certificate: "c:/path/to/cert.pem"
      certificate_key: "c:/path/to/cert.key"
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
```
`FORWARDER_NAME_auth.conf` file:

```
collectors:
- kafka:
    username: user
    password: password
- syslog:
```
Collect WebProxy data
The Chronicle forwarder can capture WebProxy data directly from a network interface using Npcap and send it to Google Cloud.
To enable WebProxy data capture for your system, contact Chronicle Support.
Before you run a WebProxy forwarder, do the following:

- Install Npcap on the Microsoft Windows host. Enable WinPcap compatibility mode during the installation.
- Grant root or administrator privileges to the Chronicle forwarder to monitor the network interface.
To configure a WebProxy forwarder, Google Cloud needs the GUID for the interface used to capture the WebProxy packets.
Run `getmac.exe` on the machine where you want to install the Chronicle forwarder and send the output to Chronicle. Alternatively, you can modify the configuration file: locate the WebProxy section and replace the GUID shown next to `interface` with the GUID displayed after running `getmac.exe`.

Modify the Chronicle forwarder configuration file (`FORWARDER_NAME.conf`) as follows:

```
- webproxy:
    common:
      enabled: true
      data_type: <Your LogType>
      batch_n_seconds: 10
      batch_n_bytes: 1048576
    interface: \Device\NPF_{2E0E9440-ABFF-4E5B-B43C-E188FCAD9734}
    bpf: tcp and dst port 80
```
Customize configurations
The following table lists important parameters used in the forwarder configuration file.
Parameter | Description |
---|---|
data_type | The type of log data that the collector can collect and process. |
metadata | Metadata, which overrides global metadata. |
max_file_buffer_bytes | Maximum number of bytes that can be accumulated in the disk or file buffer. The default value is 1073741824, which is 1 GB. |
max_memory_buffer_bytes | Maximum number of bytes that can be accumulated in the memory buffer. The default value is 1073741824, which is 1 GB. |
write_to_disk_dir_path | The path to be used for the file or disk buffer. |
write_to_disk_buffer_enabled | If true, the disk buffer is used instead of the memory buffer. The default value is false. |
batch_n_bytes | Maximum number of bytes that can be accumulated by the collector, after which the data is batched. The default value is 1048576, which is 1 MB. |
batch_n_seconds | The number of seconds after which the data gathered by the collector is batched. The default value is 11 seconds. |
data_hint | Data format that the collector can receive (usually the log file header that describes the format). |
For an extensive list of parameters used in the configuration file, see Forwarder configuration fields and Collector configuration fields.
Toggle data compression
Log compression reduces network bandwidth consumption when transferring logs to Chronicle. However, the compression might cause an increase in CPU usage. The trade-off between CPU usage and bandwidth depends on many factors, including the type of log data, the compressibility of that data, the availability of CPU cycles on the host running the Chronicle forwarder, and the need for reducing network bandwidth consumption.
For example, text-based logs compress well and can provide substantial bandwidth savings with low CPU usage. However, encrypted payloads of raw packets do not compress well and incur higher CPU usage.
By default, log compression is disabled. Enabling it might reduce bandwidth consumption, but might also increase CPU usage; be aware of the trade-off.
To enable log compression, set the `compression` field to `true` in the Chronicle forwarder configuration file, as shown in the following example:
`FORWARDER_NAME.conf` file:

```
output:
  compression: true
  url: malachiteingestion-pa.googleapis.com:443
  identity:
    identity:
    collector_id: 10479925-878c-11e7-9421-10604b7cb5c1
    customer_id: ebdc4bb9-878b-11e7-8455-10604b7cb5c1
...
```
`FORWARDER_NAME_auth.conf` file:

```
output:
  identity:
    secret_key: |
      {
        "type": "service_account",
        ...
      }
```
Configure disk buffering
Disk buffering enables you to buffer backlogged messages to disk as opposed to memory. The backlogged messages can be stored in case the Chronicle forwarder crashes or the underlying host crashes. Be aware that enabling disk buffering can affect performance.
If disk buffering is disabled, the Chronicle forwarder uses 1 GB of memory (RAM) for each log type (for example, for each connector). Specify the `max_memory_buffer_bytes` configuration parameter. The maximum memory allowed is 4 GB.
You can configure automatic disk buffering to use a dynamically shared buffer across collectors, which deals better with spikes in traffic. To enable the dynamically shared buffer, add the following in your forwarder config:
```
auto_buffer:
  enabled: true
  target_memory_utilization: 80
```

If automatic disk buffering is enabled but `target_memory_utilization` is not defined, the default value of `70` is used.
If you are running the Chronicle forwarder using Docker, Google recommends mounting a volume separate from your configuration volume for isolation purposes. Also, each input should be isolated with its own directory or volume to avoid conflicts.
Example configuration: disk buffering
The following configuration includes syntax to enable disk buffering:
```
collectors:
- syslog:
    common:
      write_to_disk_buffer_enabled: true
      # c:/buffers/NIX_SYSTEM is part of the external mounted volume for the forwarder
      write_to_disk_dir_path: c:/buffers/NIX_SYSTEM
      max_file_buffer_bytes: 1073741824
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
```
Set regular expression filters
Regular expression filters enable you to filter logs based on regular expression matches against raw logs.
The filters employ the RE2 syntax.
The filters must include a regular expression and, optionally, define a behavior when there is a match. The default behavior on a match is block (you can also explicitly configure it as block).
Alternatively, you can specify filters with the `allow` behavior. If you specify any `allow` filters, the Chronicle forwarder blocks any logs that do not match at least one `allow` filter.

It is possible to define an arbitrary number of filters. Block filters take precedence over `allow` filters.
When filters are defined, they must be assigned a name. The names of active filters will be reported to Chronicle using Forwarder health metrics. Filters defined at the root of the configuration are merged with filters defined at the collector level. The collector level filters take precedence in cases of conflicting names. If no filters are defined either at the root or collector level, the behavior is to allow all.
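The precedence rules above (block wins; any `allow` filters make a match mandatory; no filters means allow all) can be sketched in a few lines. Illustrative Python only — the forwarder uses RE2, for which Python's `re` stands in here:

```python
import re

def is_allowed(log: str, filters: dict) -> bool:
    """Apply merged regex filters to one raw log line.

    Block filters take precedence over allow filters; if any allow filters
    are defined, the log must match at least one; with no filters, allow all.
    """
    allow_patterns = []
    for spec in filters.values():
        behavior = spec.get("behavior_on_match", "block")  # default behavior is block
        if behavior == "block":
            if re.search(spec["regexp"], log):
                return False
        else:
            allow_patterns.append(spec["regexp"])
    if allow_patterns:
        return any(re.search(p, log) for p in allow_patterns)
    return True
```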
Example configuration: regular expression filters
In the following Chronicle forwarder configuration, the `WINEVTLOG` logs that do not match the root filter (`allow_filter`) are blocked. Given the regular expression, the filter only allows logs with priorities between 0 and 99. However, any `NIX_SYSTEM` logs containing 'foo' or 'bar' are blocked, despite the `allow_filter`. This is because the filters use a logical OR. All logs are processed until a filter is triggered.
```
regex_filters:
  allow_filter:
    regexp: ^<[1-9][0-9]?$>.*$
    behavior_on_match: allow
collectors:
- syslog:
    common:
      regex_filters:
        block_filter_1:
          regexp: ^.*foo.*$
          behavior_on_match: block
        block_filter_2:
          regexp: ^.*bar.*$
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
```
Configure arbitrary labels
Labels are used to attach arbitrary metadata to logs using key and value pairs. Labels can be configured for an entire Chronicle forwarder or within a specific collector of a Chronicle forwarder. If both are provided, the labels are merged with the collector's keys taking precedence over the Chronicle forwarder's keys if the keys overlap.
Example configuration: arbitrary labels
In the following Chronicle forwarder configuration, the `foo=bar` and `meow=mix` key-value pairs are both attached to `WINEVTLOG` logs, and the `foo=baz` and `meow=mix` key-value pairs are attached to the `NIX_SYSTEM` logs.
```
metadata:
  labels:
    foo: bar
    meow: mix
collectors:
- syslog:
    common:
      metadata:
        labels:
          foo: baz
          meow: mix
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
```
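The merge behavior described above, with collector keys winning on overlap, amounts to an ordinary dictionary merge. A minimal Python sketch:

```python
def merge_labels(forwarder_labels: dict, collector_labels: dict) -> dict:
    """Merge forwarder-level and collector-level labels; on overlapping keys,
    the collector's value takes precedence."""
    merged = dict(forwarder_labels)
    merged.update(collector_labels)
    return merged
```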
Configure namespaces
Use namespace labels to identify logs from distinct network segments and deconflict overlapping IP addresses. You can configure a namespace label for an entire Chronicle forwarder or within a specific collector of the Chronicle forwarder. If both are included then the specific collector's namespace takes precedence.
Any namespace configured for the Chronicle forwarder appears with the associated assets in the Chronicle user interface. You can also search for namespaces using the Chronicle Search feature.
For information about how to view namespaces in the Chronicle user interface, see here.
Example configuration: namespaces
In the following Chronicle forwarder configuration, the `WINEVTLOG` logs are attached to the FORWARDER namespace and the `NIX_SYSTEM` logs are attached to the CORPORATE namespace.
```
metadata:
  namespace: FORWARDER
collectors:
- syslog:
    common:
      metadata:
        namespace: CORPORATE
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
```
Configure load balancing and high availability options
The Chronicle forwarder for Windows on Docker can be deployed in an environment where a Layer 4 load balancer is installed between the data source and Chronicle forwarder instances. This allows you to distribute the log collection across multiple Chronicle forwarders or send logs to a different Chronicle forwarder if one fails. This feature is supported only with the syslog collection type.
The Chronicle forwarder for Windows on Docker includes a built-in HTTP server that responds to HTTP health checks from the load balancer. The HTTP server also helps ensure that logs are not lost during startup or shutdown of a Chronicle forwarder.
Configure the HTTP server, load balancing, and high availability options in the `server` section of the Chronicle forwarder configuration file. These options support setting timeout durations and status codes returned in response to health checks received in container scheduler and orchestration-based deployments, as well as from conventional load balancers.
Use the following URL paths for health, readiness, and liveness checks. The `<host:port>` values are defined in the Chronicle forwarder configuration.

- `http://<host:port>/meta/available`: liveness checks for container schedulers or orchestrators, such as Kubernetes.
- `http://<host:port>/meta/ready`: readiness checks and traditional load balancer health checks.
The following Chronicle forwarder configuration is an example for load balancing and high availability:
```
collectors:
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: NIX_SYSTEM
      enabled: true
    tcp_address: 0.0.0.0:30000
    connection_timeout_sec: 60
- syslog:
    common:
      batch_n_bytes: 1048576
      batch_n_seconds: 10
      data_hint: null
      data_type: WINEVTLOG
      enabled: true
    tcp_address: 0.0.0.0:30001
    connection_timeout_sec: 60
server:
  graceful_timeout: 15s
  drain_timeout: 10s
  http:
    port: 8080
    host: 0.0.0.0
    read_timeout: 3s
    read_header_timeout: 3s
    write_timeout: 3s
    idle_timeout: 3s
    routes:
    - meta:
        available_status: 204
        ready_status: 204
        unready_status: 503
```
Configuration path | Description |
---|---|
server : graceful_timeout | The amount of time the Chronicle forwarder returns a bad readiness/health check and still accepts new connections. This is also the time to wait between receiving a signal to stop and actually beginning the shutdown of the server itself. This allows the load balancer time to remove the Chronicle forwarder from the pool. |
server : drain_timeout | The amount of time the Chronicle forwarder waits for active connections to successfully close on their own before being closed by the server. |
server : http : port | The port number that the HTTP server listens on for health checks from the load balancer. Must be between 1024-65535. |
server : http : host | The IP address, or hostname that can be resolved to IP addresses, that the server should listen to. If empty, the default value is local system (0.0.0.0). |
server : http : read_timeout | Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time allowed to read the entire request, both the header and the body. You can set both read_timeout and read_header_timeout. |
server : http : read_header_timeout | Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time allowed to read request headers. The connection's read deadline is reset after the header is read. |
server : http : write_timeout | Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time allowed to send a response. It is reset when a new request header is read. |
server : http : idle_timeout | Used to tune the HTTP server. Typically, does not need to be changed from the default setting. The maximum amount of time to wait for the next request when idle connections are enabled. If idle_timeout is zero, the value of read_timeout is used. If both are zero, the read_header_timeout is used. |
routes : meta : ready_status | The status code the Chronicle forwarder returns when it is ready to accept traffic. |
routes : meta : unready_status | The status code the Chronicle forwarder returns when it is not ready to accept traffic. |
routes : meta : available_status | The status code the Chronicle forwarder returns when a liveness check is received and the Chronicle forwarder is available. Container schedulers/orchestrators such as Kubernetes often send liveness checks. |