Ingest AWS logs into Google Security Operations

This document details the steps for configuring the ingestion of AWS CloudTrail logs and context data into Google Security Operations. These steps also apply to ingesting logs from other AWS services, such as Amazon GuardDuty, Amazon VPC Flow Logs, Amazon CloudWatch, and AWS Security Hub.

To ingest event logs, the configuration directs the CloudTrail logs into an Amazon Simple Storage Service (Amazon S3) bucket, optionally using an Amazon Simple Queue Service (Amazon SQS) queue. If an Amazon SQS queue is used, Google Security Operations reads the Amazon S3 notifications that are sent to the Amazon SQS queue and pulls the corresponding files out of the Amazon S3 bucket. This is effectively a push-based version of an Amazon S3 feed and can be used to achieve better throughput.
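
When the SQS option is used, each message that Amazon S3 delivers to the queue is a standard S3 event notification describing the newly written object. As a rough illustration of this mechanism (not the actual Google Security Operations implementation), the following sketch extracts the bucket name and object key from a notification body; the sample message and names are illustrative:

```python
import json

def parse_s3_notification(body: str) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from an S3 event notification message."""
    event = json.loads(body)
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

# Illustrative message in the shape S3 sends to SQS:
sample = json.dumps({
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "aws-cloudtrail-logs-example"},
            "object": {"key": "AWSLogs/123456789012/CloudTrail/us-east-1/2024/01/01/log.json.gz"},
        },
    }]
})
print(parse_s3_notification(sample))
```

The object keys recovered this way identify exactly which files the feed then pulls from the Amazon S3 bucket.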

The first part of this document provides concise steps for using Amazon S3 as the feed source type or, optionally, using Amazon S3 with Amazon SQS as the feed source type. The second part provides more detailed steps with screenshots for using Amazon S3 as the feed source type. Using Amazon SQS is not covered in the second part. The third part provides information about how to ingest AWS context data about hosts, services, VPC networks, and users.

Basic steps to ingest logs from S3 or S3 with SQS

This section describes the basic steps for ingesting AWS CloudTrail logs into your Google Security Operations instance. The steps describe how to do this using Amazon S3 as the feed source type or, optionally, using Amazon S3 with Amazon SQS as the feed source type.

Configure AWS CloudTrail and S3

In this procedure, you configure AWS CloudTrail logs to be written to an S3 bucket.

  1. In the AWS console, search for CloudTrail.
  2. Click Create trail.
  3. Provide a Trail name.
  4. Select Create new S3 bucket. You may also choose to use an existing S3 bucket.
  5. Provide a name for the AWS KMS alias, or choose an existing AWS KMS key.
  6. You can leave the other settings as default, and click Next.
  7. Choose Event type, add Data events as required, and click Next.
  8. Review the settings in Review and create and click Create trail.
  9. In the AWS console, search for Amazon S3 Buckets.
  10. Click the newly created log bucket, and select the folder AWSLogs. Then click Copy S3 URI and save it for use in the following steps.
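
CloudTrail delivers log files under a predictable prefix of the form AWSLogs/<account-id>/CloudTrail/<region>/, which is what the copied S3 URI points at. A minimal sketch of building that URI, using placeholder bucket, account, and region values:

```python
def cloudtrail_prefix(bucket: str, account_id: str, region: str) -> str:
    """Build the S3 URI prefix where CloudTrail delivers log files."""
    return f"s3://{bucket}/AWSLogs/{account_id}/CloudTrail/{region}/"

print(cloudtrail_prefix("my-trail-bucket", "123456789012", "us-east-1"))
# s3://my-trail-bucket/AWSLogs/123456789012/CloudTrail/us-east-1/
```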

Create an SQS queue

Optionally, you can use an SQS queue. If you use an SQS queue, it must be a Standard queue, not a FIFO queue.

For details about creating SQS queues, see Getting started with Amazon SQS.

Set up notifications to your SQS queue

If you use an SQS queue, set up notifications on your S3 bucket to write to your SQS queue. Be sure to attach an access policy.
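
The access policy attached to the queue must allow the Amazon S3 service principal to send messages. The following is a sketch of one common form of that policy document, assembled with Python's standard json module; the queue and bucket ARNs are placeholders to substitute with your own:

```python
import json

# Hypothetical ARNs for illustration; substitute your own queue and bucket.
queue_arn = "arn:aws:sqs:us-east-1:123456789012:chronicle-feed-queue"
bucket_arn = "arn:aws:s3:::aws-cloudtrail-logs-example"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        # Restrict the sender to notifications from this bucket only.
        "Condition": {"ArnLike": {"aws:SourceArn": bucket_arn}},
    }],
}
print(json.dumps(policy, indent=2))
```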

Configure AWS IAM user

Configure an AWS IAM user that Google Security Operations will use to access the S3 bucket and, if used, the SQS queue.

  1. In the AWS console, search for IAM.
  2. Click Users, and then in the following screen, click Add Users.
  3. Provide a name for the user (for example, chronicle-feed-user), select Access key - Programmatic access as the AWS credential type, and click Next: Permissions.
  4. In the next step, select Attach existing policies directly, and then select AmazonS3ReadOnlyAccess or AmazonS3FullAccess as required. Use AmazonS3FullAccess if Google Security Operations should clear the S3 buckets after reading logs, to optimize AWS S3 storage costs.
  5. As a recommended alternative to the previous step, you can further restrict access to only the specified S3 bucket by creating a custom policy. Click Create policy and follow the AWS documentation to create a custom policy.
  6. When you apply a policy, make sure that you have included sqs:DeleteMessage. Google Security Operations cannot delete messages from the SQS queue without this permission. Messages then accumulate on the AWS side, which causes a delay as Google Security Operations repeatedly attempts to transfer the same files.
  7. Click Next: Tags.
  8. Add any tags if required, and click Next: Review.
  9. Review the configuration and click Create user.
  10. Copy the Access key ID and Secret access key of the created user, for use in the next step.
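
The permissions described in steps 4 through 6 can be summarized as a single least-privilege custom policy covering both the S3 bucket and, if used, the SQS queue. This is a sketch with placeholder ARNs, not a definitive policy; add s3:DeleteObject only if Google Security Operations should clear the bucket after reading:

```python
import json

# Placeholder ARNs; substitute your own bucket and (if used) queue.
bucket_arn = "arn:aws:s3:::aws-cloudtrail-logs-example"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:chronicle-feed-queue"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read objects in the log bucket. Add "s3:DeleteObject" here
            # only if the feed's Source Deletion Option will delete files.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [bucket_arn, f"{bucket_arn}/*"],
        },
        {   # Consume S3 notifications; DeleteMessage is required so that
            # processed notifications do not accumulate in the queue.
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage",
                       "sqs:GetQueueAttributes"],
            "Resource": queue_arn,
        },
    ],
}
print(json.dumps(policy, indent=2))
```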

Create the feed

After completing the procedures above, create a feed to ingest AWS logs from your Amazon S3 bucket into your Google Security Operations instance. If you are also using an SQS queue, in the following procedure select Amazon SQS for the source type instead of Amazon S3.

To create a feed:

  1. In the navigation bar, select Settings, SIEM Settings, and then Feeds.
  2. On the Feeds page, click Add New.
  3. In the Add feed dialog, use the Source type menu to select either Amazon S3 or Amazon SQS.
  4. In the Log Type menu, select AWS CloudTrail (or another AWS service).
  5. Click Next.
  6. Enter the input parameters for your feed in the fields.

    If the source type is Amazon S3:
    1. Select the region and provide the S3 URI of the Amazon S3 bucket you copied earlier. Optionally, you can append the following to the S3 URI:

           {{datetime("yyyy/MM/dd")}}

       As in the following example, so that Google Security Operations scans only the logs for a particular day:

           s3://aws-cloudtrail-logs-XXX-1234567/AWSLogs/1234567890/CloudTrail/us-east-1/{{datetime("yyyy/MM/dd")}}/

    2. For URI IS A, select Directories including subdirectories. Select an appropriate option under Source Deletion Option; this should match the permissions of the IAM user account you created earlier.
    3. Provide the Access Key ID and Secret Access Key of the IAM user account you created earlier.

  7. Click Next and Finish.
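
Assuming the {{datetime("yyyy/MM/dd")}} placeholder expands to the current date in that pattern at scan time, its effect on the URI can be sketched in Python. The Java-style pattern yyyy/MM/dd corresponds to strftime's %Y/%m/%d; the bucket and account values below are placeholders:

```python
from datetime import datetime, timezone

def expand_uri(template: str, when: datetime) -> str:
    """Expand a {{datetime("yyyy/MM/dd")}} placeholder to a concrete date."""
    return template.replace('{{datetime("yyyy/MM/dd")}}',
                            when.strftime("%Y/%m/%d"))

uri = 's3://my-trail-bucket/AWSLogs/123456789012/CloudTrail/us-east-1/{{datetime("yyyy/MM/dd")}}/'
print(expand_uri(uri, datetime(2024, 1, 15, tzinfo=timezone.utc)))
# s3://my-trail-bucket/AWSLogs/123456789012/CloudTrail/us-east-1/2024/01/15/
```

Scoping each scan to a single day's prefix keeps Google Security Operations from re-listing the entire bucket on every run.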

Detailed steps to ingest logs from S3

Configure AWS CloudTrail (or other service)

Complete the following steps to configure AWS CloudTrail logs and direct these logs to be written to an AWS S3 bucket:

  1. In the AWS console, search for CloudTrail.
  2. Click Create trail.


  3. Provide a Trail name.

  4. Select Create new S3 bucket. You may also choose to use an existing S3 bucket.

  5. Provide a name for the AWS KMS alias, or choose an existing AWS KMS key.


  6. You can leave the other settings as default, and click Next.

  7. Choose Event type, add Data events as required, and click Next.


  8. Review the settings in Review and create and click Create trail.

  9. In the AWS console, search for Amazon S3 Buckets.


  10. Click the newly created log bucket, and select the folder AWSLogs. Then click Copy S3 URI and save it for use in the following steps.


Configure AWS IAM user

In this step, you configure an AWS IAM user that Google Security Operations will use to get log feeds from AWS.

  1. In the AWS console, search for IAM.


  2. Click Users, and then in the following screen, click Add Users.


  3. Provide a name for the user (for example, chronicle-feed-user), select Access key - Programmatic access as the AWS credential type, and click Next: Permissions.


  4. In the next step, select Attach existing policies directly, and then select AmazonS3ReadOnlyAccess or AmazonS3FullAccess as required. Use AmazonS3FullAccess if Google Security Operations should clear the S3 buckets after reading logs, to optimize AWS S3 storage costs. Click Next: Tags.


  5. As a recommended alternative to the previous step, you can further restrict access to only the specified S3 bucket by creating a custom policy. Click Create policy and follow the AWS documentation to create a custom policy.


  6. Add any tags if required, and click Next: Review.

  7. Review the configuration and click Create user.


  8. Copy the Access key ID and Secret access key of the created user, for use in the next step.


Configure a feed in Google Security Operations to ingest AWS logs

  1. Go to Google Security Operations settings, and click Feeds.
  2. Click Add New.
  3. Select Amazon S3 for Source Type.
  4. Select AWS CloudTrail (or other AWS service) for Log Type.


  5. Click Next.

  6. Select the region and provide the S3 URI of the Amazon S3 bucket you copied earlier. Optionally, you can append the following to the S3 URI:

         {{datetime("yyyy/MM/dd")}}

     As in the following example, so that Google Security Operations scans only the logs for a particular day:

         s3://aws-cloudtrail-logs-XXX-1234567/AWSLogs/1234567890/CloudTrail/us-east-1/{{datetime("yyyy/MM/dd")}}/

  7. Under URI IS A, select Directories including subdirectories. Select an appropriate option under Source Deletion Option; this should match the permissions of the IAM user account you created earlier.

  8. Provide the Access Key ID and Secret Access Key of the IAM user account you created earlier.

  9. Click Next and Finish.

Steps to ingest AWS context data

To ingest context data about AWS entities (such as hosts, instances, and users), create a feed for each of the following log types, listed by description and ingestion label:

  • AWS EC2 HOSTS (AWS_EC2_HOSTS)
  • AWS EC2 INSTANCES (AWS_EC2_INSTANCES)
  • AWS EC2 VPCS (AWS_EC2_VPCS)
  • AWS Identity and Access Management (IAM) (AWS_IAM)

To create a feed for each log type listed above, do the following:

  1. In the navigation bar, select Settings, SIEM Settings, and then Feeds.
  2. On the Feeds page, click Add New. The Add feed dialog appears.
  3. In the Source type menu, select Third party API.
  4. In the Log Type menu, select the log type, for example, AWS EC2 Hosts.
  5. Click Next.
  6. Enter the input parameters for the feed in the fields.
  7. Click Next, and then Finish.
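
Because a separate feed is required for each log type, it can help to keep the mapping of log type names to ingestion labels, as listed above, in one place. For reference only:

```python
# Context-data log types and their ingestion labels, as listed above.
CONTEXT_LOG_TYPES = {
    "AWS EC2 HOSTS": "AWS_EC2_HOSTS",
    "AWS EC2 INSTANCES": "AWS_EC2_INSTANCES",
    "AWS EC2 VPCS": "AWS_EC2_VPCS",
    "AWS Identity and Access Management (IAM)": "AWS_IAM",
}
for name, label in CONTEXT_LOG_TYPES.items():
    print(f"{label}: {name}")
```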

For more detailed information about setting up a feed for each log type, see the Feed management documentation.

For general information about creating a feed, see the Feed management user guide or the Feed management API.