[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Collect Cisco Umbrella Web Proxy logs\n=====================================\n\nSupported in: \nGoogle secops [SIEM](/chronicle/docs/secops/google-secops-siem-toc)\n| **Note:** This feature is covered by [Pre-GA Offerings Terms](https://chronicle.security/legal/service-terms/) of the Google Security Operations Service Specific Terms. Pre-GA features might have limited support, and changes to pre-GA features might not be compatible with other pre-GA versions. For more information, see the [Google SecOps Technical Support Service guidelines](https://chronicle.security/legal/technical-support-services-guidelines/) and the [Google SecOps Service Specific Terms](https://chronicle.security/legal/service-terms/).\n\nThis document explains how to collect Cisco Umbrella Web Proxy logs to a Google Security Operations feed using AWS S3 bucket. The parser extracts fields from a CSV log, renaming columns for clarity and handling potential variations in the input data. It then uses included files (`umbrella_proxy_udm.include` and `umbrella_handle_identities.include`) to map the extracted fields to the UDM and process identity information based on the `identityType` field.\n\nBefore you begin\n----------------\n\n- Ensure that you have a Google SecOps instance.\n- Ensure that you privileged access to AWS IAM and S3.\n- Ensure that you have privileged access to Cisco Umbrella.\n\nConfigure a Cisco-managed Amazon S3 bucket\n------------------------------------------\n\n| **Note:** In a Cisco-managed bucket, Cisco Umbrella audit owns the bucket and sets the configuration automatically.\n\n1. Sign in to the **Cisco Umbrella** dashboard.\n2. Go to **Admin \\\u003e Log management**.\n3. Select **Use a Cisco-managed Amazon S3 bucket** option.\n4. Provide the following configuration details:\n - **Select a region**: select a region closer to your location for lower latency.\n - **Select a retention duration**: select the time period. The retention duration is 7, 14, or 30 days. After the selected time period, data is deleted and cannot be recovered. If your ingestion cycle is regular, use a shorter time period. You can change the retention duration at a later time.\n5. Click **Save**.\n6. Click **Continue** to confirm your selections and to receive activation notification. \n In the **Activation complete** window that appears, the **Access key** and **Secret key** values are displayed.\n7. Copy the **Access key** and **Secret key** values. If you lose these keys, you must regenerate them.\n8. Click **Got it \\\u003e Continue**.\n9. A summary page displays the configuration and your bucket name. You can turn logging off or on as required by your organization. However, logs are purged based on the retention duration, regardless of new data getting added.\n\n| **Note:** For more information on device configuration for log collection, see [Enable Logging to a Cisco-managed S3 Bucket](https://docs.umbrella.com/deployment-umbrella/docs/cisco-managed-s3-bucket).\n\nOptional: Configure user access keys for self-managed AWS S3 bucket\n-------------------------------------------------------------------\n\n1. 
Optional: Configure user access keys for self-managed AWS S3 bucket
-------------------------------------------------------------------

1. Sign in to the [AWS Management Console](https://aws.amazon.com/console/).
2. Create a **User** following this user guide: [Creating an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console).
3. Select the created **User**.
4. Select the **Security credentials** tab.
5. Click **Create Access Key** in the **Access Keys** section.
6. Select **Third-party service** as the **Use case**.
7. Click **Next**.
8. Optional: add a description tag.
9. Click **Create access key**.
10. Click **Download CSV file** to save the **Access Key** and **Secret Access Key** for later use.
11. Click **Done**.
12. Select the **Permissions** tab.
13. Click **Add permissions** in the **Permissions policies** section.
14. Select **Add permissions**.
15. Select **Attach policies directly**.
16. Search for and select the **AmazonS3FullAccess** policy.
17. Click **Next**.
18. Click **Add permissions**.

Optional: Configure a self-managed Amazon S3 bucket
---------------------------------------------------

1. Sign in to the [AWS Management Console](https://aws.amazon.com/console/).

   | **Note:** For more information on S3 bucket creation, see [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html).

2. Go to **S3**.

3. Click **Create bucket**.

4. Provide the following configuration details:

   - **Bucket name**: provide a name for the Amazon S3 bucket.
   - **Region**: select a region.

5. Click **Create**.

Optional: Configure a bucket policy for self-managed AWS S3 bucket
------------------------------------------------------------------

1. Click the newly created bucket to open it.
2. Select **Properties > Permissions**.
3. In the **Permissions** list, click **Add bucket policy**.
4. Enter the preconfigured bucket policy as follows (a scripted alternative is sketched after this section):

       {
         "Version": "2008-10-17",
         "Statement": [
           {
             "Sid": "",
             "Effect": "Allow",
             "Principal": {
               "AWS": "arn:aws:iam::568526795995:user/logs"
             },
             "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::BUCKET_NAME/*"
           },
           {
             "Sid": "",
             "Effect": "Deny",
             "Principal": {
               "AWS": "arn:aws:iam::568526795995:user/logs"
             },
             "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::BUCKET_NAME/*"
           },
           {
             "Sid": "",
             "Effect": "Allow",
             "Principal": {
               "AWS": "arn:aws:iam::568526795995:user/logs"
             },
             "Action": "s3:GetBucketLocation",
             "Resource": "arn:aws:s3:::BUCKET_NAME"
           },
           {
             "Sid": "",
             "Effect": "Allow",
             "Principal": {
               "AWS": "arn:aws:iam::568526795995:user/logs"
             },
             "Action": "s3:ListBucket",
             "Resource": "arn:aws:s3:::BUCKET_NAME"
           }
         ]
       }

   - Replace `BUCKET_NAME` with the Amazon S3 bucket name you provided.

5. Click **Save**.
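If you prefer to apply the bucket policy from a script rather than the console, the following sketch is a minimal boto3 equivalent of steps 3 through 5 above. It assumes the AWS credentials available to the script (for example, your default AWS profile) are allowed to manage the bucket policy; the bucket name is a placeholder that you must replace.

    # Optional: apply the same bucket policy from a script instead of the console.
    # Assumes your local AWS credentials can manage the bucket policy.
    import json
    import boto3

    BUCKET_NAME = "your-self-managed-bucket"  # replace with your bucket name

    # Mirrors the preconfigured policy shown above: the Cisco Umbrella logging
    # principal can upload and list objects but is explicitly denied object reads.
    policy = {
        "Version": "2008-10-17",
        "Statement": [
            {"Sid": "", "Effect": "Allow",
             "Principal": {"AWS": "arn:aws:iam::568526795995:user/logs"},
             "Action": "s3:PutObject",
             "Resource": f"arn:aws:s3:::{BUCKET_NAME}/*"},
            {"Sid": "", "Effect": "Deny",
             "Principal": {"AWS": "arn:aws:iam::568526795995:user/logs"},
             "Action": "s3:GetObject",
             "Resource": f"arn:aws:s3:::{BUCKET_NAME}/*"},
            {"Sid": "", "Effect": "Allow",
             "Principal": {"AWS": "arn:aws:iam::568526795995:user/logs"},
             "Action": "s3:GetBucketLocation",
             "Resource": f"arn:aws:s3:::{BUCKET_NAME}"},
            {"Sid": "", "Effect": "Allow",
             "Principal": {"AWS": "arn:aws:iam::568526795995:user/logs"},
             "Action": "s3:ListBucket",
             "Resource": f"arn:aws:s3:::{BUCKET_NAME}"},
        ],
    }

    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket=BUCKET_NAME, Policy=json.dumps(policy))
    print(f"Bucket policy applied to {BUCKET_NAME}")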
Optional: Required verification for self-managed Amazon S3 bucket
-----------------------------------------------------------------

1. In the **Cisco Umbrella** dashboard, select **Admin > Log management > Amazon S3**.
2. In the **Bucket name** field, specify your exact Amazon S3 bucket name, and then click **Verify**.
3. As part of the verification process, a file named `README_FROM_UMBRELLA.txt` is uploaded from Cisco Umbrella to your Amazon S3 bucket. You might need to refresh your browser to see the README file after it is uploaded.
4. Download the `README_FROM_UMBRELLA.txt` file, and open it using a text editor.
5. Copy and save the unique **Cisco Umbrella** token from the file.
6. Go to the **Cisco Umbrella** dashboard.
7. In the **Token number** field, specify the token and click **Save**.
8. If successful, you get a confirmation message in your dashboard indicating that the bucket was successfully verified. If you receive an error indicating that your bucket can't be verified, re-check the syntax of the bucket name and review the configuration.

Configure a feed in Google SecOps to ingest the Cisco Umbrella Web Proxy logs
-----------------------------------------------------------------------------

1. Go to **SIEM Settings > Feeds**.
2. Click **Add new**.
3. In the **Feed name** field, enter a name for the feed; for example, **Cisco Umbrella Web Proxy Logs**.
4. Select **Amazon S3 V2** as the **Source type**.
5. Select **Cisco Umbrella Web Proxy** as the **Log type**.
6. Click **Next**.
7. Specify values for the following input parameters:

   - **S3 URI**: the bucket URI.
     - `s3://BUCKET_NAME`
     - Replace `BUCKET_NAME` with the actual name of the bucket.
   - **Source deletion options**: select the deletion option according to your preference.

     | **Note:** If you select the `Delete transferred files` or `Delete transferred files and empty directories` option, make sure that you granted appropriate permissions to the service account.

   - **Maximum File Age**: include files modified within the last number of days. Default is 180 days.
   - **Access Key ID**: enter the user access key with access to the S3 bucket.
   - **Secret Access Key**: enter the user secret key with access to the S3 bucket.
   - Optional: **Asset namespace**: provide the [asset namespace](/chronicle/docs/investigation/asset-namespaces).
   - Optional: **Ingestion labels**: provide the label to be applied to the events from this feed.

8. Click **Next**.

9. Review your new feed configuration in the **Finalize** screen, and then click **Submit**.

UDM Mapping Table
-----------------

**Need more help?** [Get answers from Community members and Google SecOps professionals.](https://security.googlecloudcommunity.com/google-security-operations-2)