Google SecOps supports log collection using an access key ID and secret method. To create the access key ID and secret, see [Configure tool authentication with AWS](https://docs.aws.amazon.com/powershell/latest/userguide/creds-idc.html).
Specify values in the following fields:

- **Source Type**: Amazon SQS V2
- **Queue Name**: the SQS queue name to read from
- **S3 URI**: the bucket URI, for example `s3://your-log-bucket-name/`. Replace `your-log-bucket-name` with the actual name of your S3 bucket.
- **Source deletion options**: select the deletion option according to your ingestion preferences. If you select the `Delete transferred files` or `Delete transferred files and empty directories` option, make sure that you granted the appropriate permissions to the service account.
- **Maximum File Age**: include files modified within the last number of days. The default is 180 days.
- **SQS Queue Access Key ID**: an account access key, a 20-character alphanumeric string.
- **SQS Queue Secret Access Key**: an account access key, a 40-character alphanumeric string.
**Advanced options**

- **Feed Name**: a prepopulated value that identifies the feed.
- **Asset Namespace**: the namespace associated with the feed.
- **Ingestion Labels**: labels applied to all events from this feed.
Click **Create feed**.
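Before creating the feed, the documented formats of the fields above can be sanity-checked client-side. The following sketch is illustrative only (the function and its checks are not part of any Google SecOps API); it assumes only the shapes stated above: an `s3://` bucket URI, a 20-character access key ID, and a 40-character secret.

```python
import re

def validate_feed_params(s3_uri: str, access_key_id: str, secret_key: str) -> list:
    """Return a list of problems with the feed parameters (empty if all look valid)."""
    problems = []
    # The S3 URI should use the s3:// scheme and name a bucket, e.g. s3://your-log-bucket-name/
    if not re.fullmatch(r"s3://[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]/?", s3_uri):
        problems.append("S3 URI must look like s3://your-log-bucket-name/")
    # Per the field description, access key IDs are 20-character alphanumeric strings.
    if not re.fullmatch(r"[A-Z0-9]{20}", access_key_id):
        problems.append("access key ID must be a 20-character alphanumeric string")
    # Per the field description, secret access keys are 40 characters long.
    if not re.fullmatch(r"[A-Za-z0-9/+=]{40}", secret_key):
        problems.append("secret access key must be a 40-character string")
    return problems
```

Catching a malformed URI or truncated key here avoids a feed that silently fails to ingest.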
For more information about configuring multiple feeds for different log types within this product family, see [Configure feeds by product](/chronicle/docs/ingestion/ingestion-entities/configure-multiple-feeds).
Field mapping reference
-----------------------
This parser extracts fields from AWS RDS syslog messages, primarily focusing on the timestamp, description, and client IP. It uses grok patterns to identify these fields and populates the corresponding UDM fields, classifying events as either `GENERIC_EVENT` or `STATUS_UPDATE` based on the presence of a client IP.
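That classification rule can be illustrated with a minimal sketch. This is not the parser's actual implementation; the regular expression merely approximates the grok pattern `\[CLIENT: %{IP:client_ip}\]`, and the returned dictionary loosely imitates the UDM shape.

```python
import re

# Approximates the parser's grok pattern for the client IP (illustrative only).
CLIENT_IP_RE = re.compile(r"\[CLIENT: (?P<client_ip>\d{1,3}(?:\.\d{1,3}){3})\]")

def classify(message: str) -> dict:
    """Classify an RDS syslog message: STATUS_UPDATE when a client IP is
    present, GENERIC_EVENT otherwise."""
    event = {"metadata": {"event_type": "GENERIC_EVENT"}}
    match = CLIENT_IP_RE.search(message)
    if match:
        event["metadata"]["event_type"] = "STATUS_UPDATE"
        event["principal"] = {"ip": [match.group("client_ip")]}
    return event
```
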
UDM mapping table
-----------------
| Log field | UDM mapping | Logic |
| --- | --- | --- |
| `client_ip` | `principal.ip` | Extracted from the raw log message using the regular expression `\[CLIENT: %{IP:client_ip}\]`. |
| `create_time.nanos` | Not applicable | Not mapped to the IDM object. |
| `create_time.seconds` | Not applicable | Not mapped to the IDM object. |
| `descrip` | `metadata.description` | The description message from the log, extracted using a grok pattern. |
|  | `metadata.event_timestamp.nanos` | Copied from `create_time.nanos`. |
|  | `metadata.event_timestamp.seconds` | Copied from `create_time.seconds`. |
|  | `metadata.event_type` | Set to `GENERIC_EVENT` by default; changed to `STATUS_UPDATE` when `client_ip` is present. |
|  | `metadata.product_name` | Static value `AWS_RDS` set by the parser. |
|  | `metadata.vendor_name` | Static value `AWS_RDS` set by the parser. |
| `pid` | `principal.process.pid` | Extracted from the `descrip` field using the regular expression `process ID of %{INT:pid}`. |
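The `pid` extraction can be sketched the same way. This is illustrative only: the regular expression mirrors `process ID of %{INT:pid}`, and the function name is not part of the parser.

```python
import re

# Mirrors the regular expression from the pid mapping (illustrative only).
PID_RE = re.compile(r"process ID of (?P<pid>\d+)")

def extract_pid(descrip: str):
    """Return the value for principal.process.pid from the description, or None."""
    match = PID_RE.search(descrip)
    return int(match.group("pid")) if match else None
```
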
**Need more help?** [Get answers from Community members and Google SecOps professionals.](https://security.googlecloudcommunity.com/google-security-operations-2)

Last updated: 2025-09-04 (UTC)