Collect Tanium audit logs

This document explains how to ingest Tanium audit logs into Google Security Operations using Amazon S3 and Tanium Connect's native S3 export capability. The parser first clears numerous default fields, then parses the log message with grok and the json filter, extracting fields such as the timestamp, device IP, and audit details. It maps these extracted fields to the UDM, handling various data types and applying conditional logic to populate the appropriate UDM fields based on the presence and values of specific Tanium audit log attributes.
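
The extraction described above can be sketched in Python, with a plain regular expression standing in for grok. The sample log line and field names below are hypothetical, since the exact Tanium Connect output format depends on your configuration.

```python
import json
import re

# Hypothetical syslog-wrapped JSON line; the real Tanium Connect output
# format may differ from this shape.
raw = '<134>2024-05-01T12:00:00Z 10.0.0.5 {"audit_type": "authentication", "User": "alice"}'

# Grok-style extraction: pull out the timestamp, device IP (dvc_ip), and
# the remaining JSON payload (json_data).
pattern = re.compile(
    r'<\d+>(?P<timestamp>\S+)\s+(?P<dvc_ip>\d{1,3}(?:\.\d{1,3}){3})\s+(?P<json_data>\{.*\})'
)
fields = pattern.match(raw).groupdict()

# Equivalent of the json filter: parse the payload into discrete audit fields.
audit = json.loads(fields["json_data"])
print(fields["timestamp"], fields["dvc_ip"], audit["audit_type"])
```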

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance
  • Privileged access to Tanium Connect and Tanium Console
  • Privileged access to AWS (S3, IAM)

Create an Amazon S3 bucket

  1. Open the Amazon S3 console.
  2. If required, change the Region: from the navigation bar, select the Region where you want your Tanium audit logs to reside.
  3. Click Create Bucket.
    • Bucket Name: Enter a meaningful name for the bucket (for example, tanium-audit-logs).
    • Region: Select your preferred Region (for example, us-east-1).
    • Click Create.

Create an IAM user with full access to Amazon S3

  1. Open the IAM console.
  2. Click Users > Add user.
  3. Enter a user name (for example, tanium-connect-s3-user).
  4. Select Programmatic access (and AWS Management Console access, if needed).
  5. If you enabled console access, select either Autogenerated password or Custom password.
  6. Click Next: Permissions.
  7. Choose Attach existing policies directly.
  8. Search for and select the AmazonS3FullAccess policy.
  9. Click Next: Tags.
  10. Click Next: Review.
  11. Click Create user.
  12. Copy and save the Access Key ID and Secret Access Key for future reference.

Configure permissions on Amazon S3 bucket

  1. In the Amazon S3 console, choose the bucket that you previously created.
  2. Click Permissions > Bucket policy.
  3. In the Bucket Policy Editor, add the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::YOUR_ACCOUNT_ID:user/tanium-connect-s3-user"
          },
          "Action": [
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::tanium-audit-logs",
            "arn:aws:s3:::tanium-audit-logs/*"
          ]
        }
      ]
    }
    
  4. Replace the following variables:

    • Change YOUR_ACCOUNT_ID to your AWS account ID.
    • Change tanium-audit-logs to your actual bucket name if different.
    • Change tanium-connect-s3-user to your actual IAM username if different.
  5. Click Save.
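
If you script your AWS setup, the placeholder substitution in step 4 can be automated. The following is a minimal Python sketch using the same policy shown above; the helper name is hypothetical.

```python
import json

# Bucket policy template from the step above; the placeholders are filled
# in programmatically instead of by hand.
POLICY_TEMPLATE = """{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::ACCOUNT_ID:user/IAM_USER"},
      "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::BUCKET", "arn:aws:s3:::BUCKET/*"]
    }
  ]
}"""

def render_policy(account_id: str, user: str, bucket: str) -> dict:
    """Substitute the placeholders and confirm the result is valid JSON."""
    text = (POLICY_TEMPLATE
            .replace("ACCOUNT_ID", account_id)
            .replace("IAM_USER", user)
            .replace("BUCKET", bucket))
    return json.loads(text)

policy = render_policy("123456789012", "tanium-connect-s3-user", "tanium-audit-logs")
print(json.dumps(policy, indent=2))
```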

Configure Tanium Connect for S3 export

Create an AWS S3 connection in Tanium Connect

  1. Sign in to the Tanium Console as an administrator.
  2. Go to Tanium Connect > Connections.
  3. Click Create Connection.
  4. In the General Information section, provide the following configuration details:
    • Name: Enter a descriptive name (for example, Tanium Audit to S3).
    • Description: Enter a meaningful description (for example, Export Tanium audit logs to S3 for Google SecOps ingestion).
    • Enable: Select to enable the connection.
    • Log Level: Select Information (default) or adjust as needed.

Configure the connection source

  1. In the Configuration section, for Source, select Tanium Audit.
  2. Configure the audit source settings:
    • Days of History Retrieved: Enter the number of days of historical audit data to retrieve (for example, 7 for one week).
    • Audit Types: Select the audit types you want to export. Choose from:
      • Action History: Actions issued by console operators.
      • Authentication: User authentication events.
      • Content: Content changes and modifications.
      • Groups: Computer group changes.
      • Packages: Package-related activities.
      • Sensors: Sensor modifications.
      • System Settings: System configuration changes.
      • Users: User management activities.

Configure the AWS S3 destination

  1. For Destination, select AWS S3.
  2. Provide the following configuration details:
    • Destination Name: Enter a name (for example, Google SecOps S3 Bucket).
    • AWS Access Key: Enter the Access Key ID from the IAM user created earlier.
    • AWS Secret Key: Enter the Secret Access Key from the IAM user created earlier.
    • Bucket Name: Enter your S3 bucket name (for example, tanium-audit-logs).
    • Bucket Path: Optional. Enter a path prefix (for example, tanium/audit/).
    • Region: Select the AWS region where your bucket resides (for example, us-east-1).

Configure the format and schedule

  1. In the Format section, configure the output format:
    • Format Type: Select JSON.
    • Include Column Headers: Select if you want column headers included.
    • Generate Document: Deselect this option to send raw JSON data.
  2. In the Schedule section, configure when the connection runs:
    • Schedule Type: Select Cron.
    • Cron Expression: Enter a cron expression for regular exports (for example, 0 */1 * * * for hourly exports).
    • Start Date: Set the start date for the schedule.
  3. Click Save Changes.

Test and run the connection

  1. From the Connect Overview page, go to Connections.
  2. Click the connection you created (Tanium Audit to S3).
  3. Click Run Now to test the connection.
  4. Confirm that you want to run the connection.
  5. Monitor the connection status and verify that audit logs are being exported to your S3 bucket.

Optional: Create read-only IAM user & keys for Google SecOps

  1. Go to AWS Console > IAM > Users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: Enter secops-reader.
    • Access type: Select Access key – Programmatic access.
  4. Click Create user.
  5. Attach a minimal read policy (custom): go to Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. In the JSON editor, enter the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::tanium-audit-logs/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::tanium-audit-logs"
        }
      ]
    }
    
  7. Name the policy secops-reader-policy, and then click Create policy.
  8. Return to the Add permissions screen, search for and select secops-reader-policy, and then click Next > Add permissions.
  9. Go to Security credentials > Access keys > Create access key.
  10. Download the CSV file (these values are entered into the feed).
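
As a sanity check, the read-only policy above can be generated and verified programmatically. This is a minimal sketch assuming Python, with hypothetical helper names.

```python
import json

def reader_policy(bucket: str) -> dict:
    """Build the read-only policy from the step above for a given bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["s3:GetObject"],
             "Resource": f"arn:aws:s3:::{bucket}/*"},
            {"Effect": "Allow", "Action": ["s3:ListBucket"],
             "Resource": f"arn:aws:s3:::{bucket}"},
        ],
    }

def granted_actions(policy: dict) -> set:
    """Collect every Action the policy allows."""
    actions = set()
    for stmt in policy["Statement"]:
        if stmt["Effect"] == "Allow":
            actions.update(stmt["Action"])
    return actions

# The reader user must hold read permissions only, never write.
actions = granted_actions(reader_policy("tanium-audit-logs"))
assert actions == {"s3:GetObject", "s3:ListBucket"}
print(json.dumps(reader_policy("tanium-audit-logs"), indent=2))
```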

Configure a feed in Google SecOps to ingest Tanium Audit logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Tanium Audit logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Tanium Audit as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://tanium-audit-logs/tanium/audit/ (adjust path if you used a different bucket name or path).
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket (from the read-only user created above).
    • Secret Access Key: User secret key with access to the S3 bucket (from the read-only user created above).
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label to be applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.
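
The S3 URI in step 7 is simply the bucket name plus the optional path prefix you configured in Tanium Connect. A tiny hypothetical helper makes the expected shape explicit:

```python
def feed_s3_uri(bucket: str, prefix: str = "") -> str:
    """Assemble the S3 URI expected by the feed from a bucket name and
    an optional path prefix (trailing slash included)."""
    prefix = prefix.strip("/")
    return f"s3://{bucket}/{prefix}/" if prefix else f"s3://{bucket}/"

print(feed_s3_uri("tanium-audit-logs", "tanium/audit"))
```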

UDM Mapping Table

Log Field UDM Mapping Logic
ActionId metadata.product_log_id Directly mapped from the ActionId field.
ActionName security_result.action_details Directly mapped from the ActionName field.
Approver additional.fields[Approver].value.string_value Directly mapped from the Approver field.
Approver principal.user.userid Mapped from the Approver field if Issuer is not present.
audit_name metadata.description Directly mapped from the audit_name field.
audit_row_id additional.fields[audit_row_id].value.string_value Directly mapped from the audit_row_id field.
audit_type additional.fields[audit_type].value.string_value Directly mapped from the audit_type field.
authentication_type principal.user.attribute.labels[authentication_type].value Directly mapped from the authentication_type field extracted from the details field.
Command principal.process.command_line Directly mapped from the Command field after URL decoding.
creation_time target.resource.attribute.creation_time Directly mapped from the creation_time field.
details network.session_id Extracted from the details field using key-value parsing.
details principal.user.attribute.labels[authentication_type].value Extracted from the details field using key-value parsing.
details principal.asset.ip, principal.ip The IP address is extracted from the details field using key-value parsing and mapped to both principal.asset.ip and principal.ip.
DistributeOver additional.fields[DistributeOver].value.string_value Directly mapped from the DistributeOver field.
dvc_ip intermediary.hostname Directly mapped from the dvc_ip field extracted from the syslog message.
dvc_ip observer.ip Directly mapped from the dvc_ip field if logstash.collect.host is not present.
Expiration additional.fields[Expiration].value.string_value Directly mapped from the Expiration field.
host.architecture target.asset.hardware.cpu_platform Directly mapped from the host.architecture field.
host.id target.asset.asset_id Directly mapped from the host.id field, prefixed with "Host ID:".
host.ip target.ip Directly mapped from the host.ip field.
host.mac target.mac Directly mapped from the host.mac field.
host.name target.hostname Directly mapped from the host.name field if host.hostname is not present.
host.os.kernel target.platform_patch_level Directly mapped from the host.os.kernel field.
host.os.name additional.fields[os_name].value.string_value Directly mapped from the host.os.name field.
host.os.version target.platform_version Directly mapped from the host.os.version field.
InsertTime additional.fields[InsertTime].value.string_value Directly mapped from the InsertTime field.
Issuer additional.fields[Issuer].value.string_value Directly mapped from the Issuer field.
Issuer principal.user.userid Directly mapped from the Issuer field if present.
last_modified_by principal.resource.attribute.labels[last_modified_by].value Directly mapped from the last_modified_by field.
log.source.address principal.ip The IP address is extracted from the log.source.address field and mapped to principal.ip.
log.source.address principal.port The port is extracted from the log.source.address field.
logstash.collect.host observer.ip Directly mapped from the logstash.collect.host field if present.
logstash.collect.timestamp metadata.collected_timestamp Directly mapped from the logstash.collect.timestamp field.
logstash.ingest.timestamp metadata.ingested_timestamp Directly mapped from the logstash.ingest.timestamp field.
logstash.irm_environment additional.fields[irm_environment].value.string_value Directly mapped from the logstash.irm_environment field.
logstash.irm_region additional.fields[irm_region].value.string_value Directly mapped from the logstash.irm_region field.
logstash.irm_site additional.fields[irm_site].value.string_value Directly mapped from the logstash.irm_site field.
logstash.process.host intermediary.hostname Directly mapped from the logstash.process.host field.
message dvc_ip, json_data, timestamp Parsed using grok to extract dvc_ip, json_data, and timestamp.
modification_time target.resource.attribute.last_update_time Directly mapped from the modification_time field.
modifier_user_id principal.resource.attribute.labels[modifier_user_id].value Directly mapped from the modifier_user_id field.
object_id target.resource.product_object_id Directly mapped from the object_id field.
object_name target.resource.name Directly mapped from the object_name field.
object_type_name target.resource.attribute.labels[object_type_name].value Directly mapped from the object_type_name field.
PackageName additional.fields[PackageName].value.string_value Directly mapped from the PackageName field.
SourceId additional.fields[SourceId].value.string_value Directly mapped from the SourceId field.
StartTime additional.fields[StartTime].value.string_value Directly mapped from the StartTime field.
Status security_result.action Mapped to "BLOCK" if Status is "Closed", "ALLOW" if Status is "Open".
Status security_result.summary Directly mapped from the Status field.
tanium_audit_type metadata.product_event_type Directly mapped from the tanium_audit_type field.
timestamp metadata.event_timestamp Directly mapped from the timestamp field extracted from the syslog message or message field.
type additional.fields[type].value.string_value Directly mapped from the type field.
type_name metadata.product_event_type Directly mapped from the type_name field.
User principal.user.userid Directly mapped from the User field.
metadata.event_type Determined by parser logic based on the presence of src_ip, has_target, and has_user. Can be "NETWORK_CONNECTION", "USER_RESOURCE_ACCESS", "STATUS_UPDATE", or "GENERIC_EVENT".
Hardcoded to "TANIUM_AUDIT".
Hardcoded to "cybersecurity".
Hardcoded to "TANIUM_AUDIT".
@version metadata.product_version Directly mapped from the @version field.
agent.ephemeral_id additional.fields[ephemeral_id].value.string_value Directly mapped from the agent.ephemeral_id field.
agent.id observer.asset_id Directly mapped from the agent.id field, prefixed with "filebeat:".
agent.type observer.application Directly mapped from the agent.type field.
agent.version observer.platform_version Directly mapped from the agent.version field.
Comment security_result.description Directly mapped from the Comment field.
host.hostname target.hostname Directly mapped from the host.hostname field if present.
input.type network.ip_protocol Mapped to "TCP" if input.type is "tcp" or "TCP".
syslog_severity security_result.severity Mapped to "HIGH" if syslog_severity is "error" or "warning", "MEDIUM" if "notice", "LOW" if "information" or "info".
syslog_severity security_result.severity_details Directly mapped from the syslog_severity field.

Need more help? Get answers from Community members and Google SecOps professionals.