Collect Cisco AMP for Endpoints logs
This document explains how to ingest Cisco AMP for Endpoints logs to Google Security Operations using Amazon S3. The parser transforms raw JSON-formatted logs into a structured format conforming to the Chronicle UDM. It extracts fields from nested JSON objects, maps them to the UDM schema, identifies event categories, assigns severity levels, and ultimately generates a unified event output, flagging security alerts when specific conditions are met.
Before you begin
- A Google SecOps instance
- Privileged access to Cisco AMP for Endpoints console
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Collect Cisco AMP for Endpoints prerequisites (IDs, API keys, org IDs, tokens)
- Sign in to the Cisco AMP for Endpoints console.
- Go to Accounts > API Credentials.
- Click New API Credential to create a new API key and client ID.
- Provide the following configuration details:
  - Application Name: Enter a name (for example, Chronicle SecOps Integration).
  - Scope: Select Read-only for basic event polling, or Read & Write if you plan to create Event Streams.
- Click Create.
- Copy and save the following details in a secure location:
- 3rd Party API Client ID
- API Key
  - API Base URL, depending on your region:
    - US: https://api.amp.cisco.com
    - EU: https://api.eu.amp.cisco.com
    - APJC: https://api.apjc.amp.cisco.com
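Optionally, you can verify the credentials before moving on. The following minimal Python sketch (not part of the official setup) calls the AMP /v1/events API's companion /v1/version endpoint with HTTP Basic auth; the client ID, API key, and base URL are placeholders to replace with your own values.

```python
import base64
import json
import urllib3

# Placeholder values; replace with your own credentials and regional base URL
API_BASE = "https://api.amp.cisco.com"
CLIENT_ID = "<your-client-id>"
API_KEY = "<your-api-key>"

http = urllib3.PoolManager()

# Cisco AMP uses HTTP Basic auth: base64("<client_id>:<api_key>")
token = base64.b64encode(f"{CLIENT_ID}:{API_KEY}".encode()).decode()
resp = http.request(
    "GET",
    f"{API_BASE}/v1/version",
    headers={"Authorization": f"Basic {token}", "Accept": "application/json"},
)

print(resp.status)  # 200 means the credentials work
print(json.loads(resp.data.decode("utf-8")))
```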
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, cisco-amp-logs).
- Create a User following this user guide: Creating an IAM user.
- Select the created User.
- Select Security credentials tab.
- Click Create access key in the Access keys section.
- Select Third-party service as Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Attach policies directly.
- Search for AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
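If you want to confirm the bucket and the new access keys work together before continuing, a quick boto3 round-trip like the sketch below can help. The bucket name, object key, and credentials are illustrative placeholders; substitute your own values.

```python
import boto3

# Example values from this guide; replace with your own
BUCKET = "cisco-amp-logs"
ACCESS_KEY_ID = "<access-key-id>"
SECRET_ACCESS_KEY = "<secret-access-key>"

s3 = boto3.client(
    "s3",
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY,
)

# Write and read back a small test object
s3.put_object(Bucket=BUCKET, Key="connectivity-test.txt", Body=b"ok")
obj = s3.get_object(Bucket=BUCKET, Key="connectivity-test.txt")
print(obj["Body"].read())  # expect b'ok'

# Clean up the test object
s3.delete_object(Bucket=BUCKET, Key="connectivity-test.txt")
```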
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
Enter the following policy:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowPutObjects", "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::cisco-amp-logs/*" }, { "Sid": "AllowGetStateObject", "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::cisco-amp-logs/cisco-amp-events/state.json" } ] }
- Replace cisco-amp-logs if you entered a different bucket name.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role cisco-amp-lambda-role and click Create role.
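Optionally, you can confirm the role exists and has the policy attached. This is a small boto3 sketch, assuming the example role name cisco-amp-lambda-role; it only reads IAM metadata.

```python
import boto3

iam = boto3.client("iam")

# Confirm the role exists and is assumable by Lambda
role = iam.get_role(RoleName="cisco-amp-lambda-role")
print(role["Role"]["AssumeRolePolicyDocument"])

# List the policies attached to the role
attached = iam.list_attached_role_policies(RoleName="cisco-amp-lambda-role")
for policy in attached["AttachedPolicies"]:
    print(policy["PolicyName"], policy["PolicyArn"])
```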
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
Provide the following configuration details:
Setting | Value |
---|---|
Name | cisco-amp-events-collector |
Runtime | Python 3.13 |
Architecture | x86_64 |
Execution role | cisco-amp-lambda-role |
After the function is created, open the Code tab, delete the stub, and enter the following code (cisco-amp-events-collector.py):

```python
import json
import boto3
import urllib3
import base64
from datetime import datetime, timedelta
import os
import logging

# Configure logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# AWS S3 client and HTTP pool manager
s3_client = boto3.client('s3')
http = urllib3.PoolManager()

def lambda_handler(event, context):
    """
    AWS Lambda handler to fetch Cisco AMP events and store them in S3
    """
    try:
        # Get environment variables
        s3_bucket = os.environ['S3_BUCKET']
        s3_prefix = os.environ['S3_PREFIX']
        state_key = os.environ['STATE_KEY']
        api_client_id = os.environ['AMP_CLIENT_ID']
        api_key = os.environ['AMP_API_KEY']
        api_base = os.environ['API_BASE']

        # Optional parameters
        page_size = int(os.environ.get('PAGE_SIZE', '500'))
        max_pages = int(os.environ.get('MAX_PAGES', '10'))

        logger.info(f"Starting Cisco AMP events collection for bucket: {s3_bucket}")

        # Get last run timestamp from state file
        last_timestamp = get_last_timestamp(s3_bucket, state_key)
        if not last_timestamp:
            last_timestamp = (datetime.utcnow() - timedelta(days=1)).isoformat() + 'Z'

        # Create Basic Auth header
        auth_header = base64.b64encode(f"{api_client_id}:{api_key}".encode()).decode()
        headers = {
            'Authorization': f'Basic {auth_header}',
            'Accept': 'application/json'
        }

        # Build initial API URL
        base_url = f"{api_base}/v1/events"
        next_url = f"{base_url}?limit={page_size}&start_date={last_timestamp}"

        all_events = []
        page_count = 0

        while next_url and page_count < max_pages:
            logger.info(f"Fetching page {page_count + 1} from: {next_url}")

            # Make API request using urllib3
            response = http.request('GET', next_url, headers=headers, timeout=60)

            if response.status != 200:
                raise RuntimeError(f"API request failed: {response.status} {response.data[:256]!r}")

            data = json.loads(response.data.decode('utf-8'))

            # Extract events from response
            events = data.get('data', [])
            if events:
                all_events.extend(events)
                logger.info(f"Collected {len(events)} events from page {page_count + 1}")

                # Check for next page
                next_url = data.get('metadata', {}).get('links', {}).get('next')
                page_count += 1
            else:
                logger.info("No events found on current page")
                break

        logger.info(f"Total events collected: {len(all_events)}")

        # Store events in S3 if any were collected
        if all_events:
            timestamp_str = datetime.utcnow().strftime('%Y%m%d_%H%M%S')
            s3_key = f"{s3_prefix}cisco_amp_events_{timestamp_str}.ndjson"

            # Convert events to NDJSON format (one JSON object per line)
            ndjson_content = '\n'.join(json.dumps(event) for event in all_events)

            # Upload to S3
            s3_client.put_object(
                Bucket=s3_bucket,
                Key=s3_key,
                Body=ndjson_content.encode('utf-8'),
                ContentType='application/x-ndjson'
            )

            logger.info(f"Uploaded {len(all_events)} events to s3://{s3_bucket}/{s3_key}")

        # Update state file with current timestamp
        current_timestamp = datetime.utcnow().isoformat() + 'Z'
        update_state(s3_bucket, state_key, current_timestamp)

        return {
            'statusCode': 200,
            'body': json.dumps({
                'message': 'Success',
                'events_collected': len(all_events),
                'pages_processed': page_count
            })
        }

    except Exception as e:
        logger.error(f"Error in lambda_handler: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({
                'error': str(e)
            })
        }

def get_last_timestamp(bucket, state_key):
    """
    Get the last run timestamp from S3 state file
    """
    try:
        response = s3_client.get_object(Bucket=bucket, Key=state_key)
        state_data = json.loads(response['Body'].read().decode('utf-8'))
        return state_data.get('last_timestamp')
    except s3_client.exceptions.NoSuchKey:
        logger.info("No state file found, starting from 24 hours ago")
        return None
    except Exception as e:
        logger.warning(f"Error reading state file: {str(e)}")
        return None

def update_state(bucket, state_key, timestamp):
    """
    Update the state file with the current timestamp
    """
    try:
        state_data = {
            'last_timestamp': timestamp,
            'updated_at': datetime.utcnow().isoformat() + 'Z'
        }
        s3_client.put_object(
            Bucket=bucket,
            Key=state_key,
            Body=json.dumps(state_data).encode('utf-8'),
            ContentType='application/json'
        )
        logger.info(f"Updated state file with timestamp: {timestamp}")
    except Exception as e:
        logger.error(f"Error updating state file: {str(e)}")
```
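Before deploying, you can smoke-test the handler locally. The sketch below assumes you saved the code as cisco_amp_events_collector.py (with underscores, so the module is importable), that your local AWS credentials can reach the bucket, and that the placeholder values are replaced with your own.

```python
import os

# Set the environment variables the handler expects (example values)
os.environ.update({
    "S3_BUCKET": "cisco-amp-logs",
    "S3_PREFIX": "cisco-amp-events/",
    "STATE_KEY": "cisco-amp-events/state.json",
    "AMP_CLIENT_ID": "<your-client-id>",
    "AMP_API_KEY": "<your-api-key>",
    "API_BASE": "https://api.amp.cisco.com",
    "PAGE_SIZE": "100",
    "MAX_PAGES": "2",
})

# Import after the environment is set, then invoke the handler directly
from cisco_amp_events_collector import lambda_handler

result = lambda_handler(event={}, context=None)
print(result)  # expect statusCode 200 and a count of collected events
```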
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the following environment variables, replacing the example values with your own:
Key | Example value |
---|---|
S3_BUCKET | cisco-amp-logs |
S3_PREFIX | cisco-amp-events/ |
STATE_KEY | cisco-amp-events/state.json |
AMP_CLIENT_ID | <your-client-id> |
AMP_API_KEY | <your-api-key> |
API_BASE | https://api.amp.cisco.com (or your region URL) |
PAGE_SIZE | 500 |
MAX_PAGES | 10 |
- After the function is created, stay on its page (or open Lambda > Functions > cisco-amp-events-collector).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
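If you prefer to script these configuration steps instead of using the console, the timeout and environment variables above can be applied with a single boto3 call. This is a sketch assuming the example function name cisco-amp-events-collector used in this guide.

```python
import boto3

lambda_client = boto3.client("lambda")

# Apply the environment variables and the 5-minute timeout in one call
lambda_client.update_function_configuration(
    FunctionName="cisco-amp-events-collector",
    Timeout=300,  # seconds
    Environment={
        "Variables": {
            "S3_BUCKET": "cisco-amp-logs",
            "S3_PREFIX": "cisco-amp-events/",
            "STATE_KEY": "cisco-amp-events/state.json",
            "AMP_CLIENT_ID": "<your-client-id>",
            "AMP_API_KEY": "<your-api-key>",
            "API_BASE": "https://api.amp.cisco.com",
            "PAGE_SIZE": "500",
            "MAX_PAGES": "10",
        }
    },
)
```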
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: your Lambda function cisco-amp-events-collector.
  - Name: cisco-amp-events-collector-1h.
- Click Create schedule.
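The console steps above can also be scripted. The sketch below uses the EventBridge Scheduler API via boto3; the function and role ARNs are hypothetical placeholders, and the RoleArn must reference an execution role that EventBridge Scheduler can assume to invoke the Lambda function.

```python
import boto3

scheduler = boto3.client("scheduler")

# Hypothetical ARNs; substitute your account ID, region, and a scheduler
# execution role permitted to invoke the Lambda function
scheduler.create_schedule(
    Name="cisco-amp-events-collector-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:cisco-amp-events-collector",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-lambda-role",
    },
)
```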
Optional: Create read-only IAM user & keys for Google SecOps
- Go to AWS Console > IAM > Users, and click Add users.
- Provide the following configuration details:
  - User: Enter secops-reader.
  - Access type: Select Access key – Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): go to Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
In the JSON editor, enter the following policy:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::cisco-amp-logs/*" }, { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::cisco-amp-logs" } ] }
- Set the name to secops-reader-policy, then click Create policy.
- Return to the Add permissions screen, search for and select secops-reader-policy, then click Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed).
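To confirm the new keys really are read-only, you can exercise them with boto3: listing and reading should succeed, while writes should be denied. The credentials and bucket name below are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

# Credentials from the downloaded CSV (placeholders)
s3 = boto3.client(
    "s3",
    aws_access_key_id="<secops-reader-access-key-id>",
    aws_secret_access_key="<secops-reader-secret-access-key>",
)

BUCKET = "cisco-amp-logs"

# Listing and reading should succeed under secops-reader-policy
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="cisco-amp-events/")
for obj in resp.get("Contents", []):
    print(obj["Key"])

# Writing should be denied, confirming the policy is read-only
try:
    s3.put_object(Bucket=BUCKET, Key="write-test.txt", Body=b"x")
except ClientError as e:
    print("Write correctly denied:", e.response["Error"]["Code"])  # AccessDenied
```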
Configure a feed in Google SecOps to ingest Cisco AMP for Endpoints logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Cisco AMP for Endpoints logs).
- Select Amazon S3 V2 as the Source type.
- Select Cisco AMP as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://cisco-amp-logs/cisco-amp-events/
- Source deletion options: Select deletion option according to your preference.
- Maximum File Age: Include files modified in the last number of days. Default is 180 days.
- Access Key ID: User access key with access to the S3 bucket.
- Secret Access Key: User secret key with access to the S3 bucket.
- Asset namespace: The asset namespace.
- Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM Mapping Table
Log Field | UDM Mapping | Logic |
---|---|---|
active | read_only_udm.principal.asset.active | Directly mapped from computer.active |
connector_guid | read_only_udm.principal.asset.uuid | Directly mapped from computer.connector_guid |
date | read_only_udm.metadata.event_timestamp.seconds | Directly mapped from date after converting to timestamp |
detection | read_only_udm.security_result.threat_name | Directly mapped from detection |
detection_id | read_only_udm.security_result.detection_fields.value | Directly mapped from detection_id |
disposition | read_only_udm.security_result.description | Directly mapped from file.disposition |
error.error_code | read_only_udm.security_result.detection_fields.value | Directly mapped from error.error_code |
error.description | read_only_udm.security_result.detection_fields.value | Directly mapped from error.description |
event_type | read_only_udm.metadata.product_event_type | Directly mapped from event_type |
event_type_id | read_only_udm.metadata.product_log_id | Directly mapped from event_type_id |
external_ip | read_only_udm.principal.asset.external_ip | Directly mapped from computer.external_ip |
file.file_name | read_only_udm.target.file.names | Directly mapped from file.file_name |
file.file_path | read_only_udm.target.file.full_path | Directly mapped from file.file_path |
file.identity.md5 | read_only_udm.security_result.about.file.md5 | Directly mapped from file.identity.md5 |
file.identity.md5 | read_only_udm.target.file.md5 | Directly mapped from file.identity.md5 |
file.identity.sha1 | read_only_udm.security_result.about.file.sha1 | Directly mapped from file.identity.sha1 |
file.identity.sha1 | read_only_udm.target.file.sha1 | Directly mapped from file.identity.sha1 |
file.identity.sha256 | read_only_udm.security_result.about.file.sha256 | Directly mapped from file.identity.sha256 |
file.identity.sha256 | read_only_udm.target.file.sha256 | Directly mapped from file.identity.sha256 |
file.parent.disposition | read_only_udm.target.resource.attribute.labels.value | Directly mapped from file.parent.disposition |
file.parent.file_name | read_only_udm.target.resource.attribute.labels.value | Directly mapped from file.parent.file_name |
file.parent.identity.md5 | read_only_udm.target.resource.attribute.labels.value | Directly mapped from file.parent.identity.md5 |
file.parent.identity.sha1 | read_only_udm.target.resource.attribute.labels.value | Directly mapped from file.parent.identity.sha1 |
file.parent.identity.sha256 | read_only_udm.target.resource.attribute.labels.value | Directly mapped from file.parent.identity.sha256 |
file.parent.process_id | read_only_udm.security_result.about.process.parent_process.pid | Directly mapped from file.parent.process_id |
file.parent.process_id | read_only_udm.target.process.parent_process.pid | Directly mapped from file.parent.process_id |
hostname | read_only_udm.principal.asset.hostname | Directly mapped from computer.hostname |
hostname | read_only_udm.target.hostname | Directly mapped from computer.hostname |
hostname | read_only_udm.target.asset.hostname | Directly mapped from computer.hostname |
ip | read_only_udm.principal.asset.ip | Directly mapped from computer.network_addresses.ip |
ip | read_only_udm.principal.ip | Directly mapped from computer.network_addresses.ip |
ip | read_only_udm.security_result.about.ip | Directly mapped from computer.network_addresses.ip |
mac | read_only_udm.principal.mac | Directly mapped from computer.network_addresses.mac |
mac | read_only_udm.security_result.about.mac | Directly mapped from computer.network_addresses.mac |
severity | read_only_udm.security_result.severity | Mapped from severity based on the following logic: - "Medium" -> "MEDIUM" - "High" or "Critical" -> "HIGH" - "Low" -> "LOW" - Otherwise -> "UNKNOWN_SEVERITY" |
timestamp | read_only_udm.metadata.event_timestamp.seconds | Directly mapped from timestamp |
user | read_only_udm.security_result.about.user.user_display_name | Directly mapped from computer.user |
user | read_only_udm.target.user.user_display_name | Directly mapped from computer.user |
vulnerabilities.cve | read_only_udm.extensions.vulns.vulnerabilities.cve_id | Directly mapped from vulnerabilities.cve |
vulnerabilities.name | read_only_udm.extensions.vulns.vulnerabilities.name | Directly mapped from vulnerabilities.name |
vulnerabilities.score | read_only_udm.extensions.vulns.vulnerabilities.cvss_base_score | Directly mapped from vulnerabilities.score after converting to float |
vulnerabilities.url | read_only_udm.extensions.vulns.vulnerabilities.vendor_knowledge_base_article_id | Directly mapped from vulnerabilities.url |
vulnerabilities.version | read_only_udm.extensions.vulns.vulnerabilities.cvss_version | Directly mapped from vulnerabilities.version |
| is_alert | Set to true if event_type is one of the following: "Threat Detected", "Exploit Prevention", "Executed malware", "Potential Dropper Infection", "Multiple Infected Files", "Vulnerable Application Detected", or if security_result.severity is "HIGH" |
| is_significant | Set to true if event_type is one of the following: "Threat Detected", "Exploit Prevention", "Executed malware", "Potential Dropper Infection", "Multiple Infected Files", "Vulnerable Application Detected", or if security_result.severity is "HIGH" |
| read_only_udm.metadata.event_type | Determined based on event_type and security_result.severity values: - If event_type is one of the following: "Executed malware", "Threat Detected", "Potential Dropper Infection", "Cloud Recall Detection", "Malicious Activity Detection", "Exploit Prevention", "Multiple Infected Files", "Cloud IOC", "System Process Protection", "Vulnerable Application Detected", "Threat Quarantined", "Execution Blocked", "Cloud Recall Quarantine Successful", "Cloud Recall Restore from Quarantine Failed", "Cloud Recall Quarantine Attempt Failed", "Quarantine Failure", then the event type is set to "SCAN_FILE". - If security_result.severity is "HIGH", then the event type is set to "SCAN_FILE". - If both has_principal and has_target are true, then the event type is set to "SCAN_UNCATEGORIZED". - Otherwise, the event type is set to "GENERIC_EVENT". |
| read_only_udm.metadata.log_type | Set to "CISCO_AMP" |
| read_only_udm.metadata.vendor_name | Set to "CISCO_AMP" |
| read_only_udm.security_result.about.file.full_path | Directly mapped from file.file_path |
| read_only_udm.security_result.about.hostname | Directly mapped from computer.hostname |
| read_only_udm.security_result.about.user.user_display_name | Directly mapped from computer.user |
| read_only_udm.security_result.detection_fields.key | Set to "Detection ID" for detection_id, "Error Code" for error.error_code, "Error Description" for error.description, "Parent Disposition" for file.parent.disposition, "Parent File Name" for file.parent.file_name, "Parent MD5" for file.parent.identity.md5, "Parent SHA1" for file.parent.identity.sha1, and "Parent SHA256" for file.parent.identity.sha256 |
| read_only_udm.security_result.summary | Set to event_type if event_type is one of the following: "Threat Detected", "Exploit Prevention", "Executed malware", "Potential Dropper Infection", "Multiple Infected Files", "Vulnerable Application Detected", or if security_result.severity is "HIGH" |
| read_only_udm.target.asset.ip | Directly mapped from computer.network_addresses.ip |
| read_only_udm.target.resource.attribute.labels.key | Set to "Parent Disposition" for file.parent.disposition, "Parent File Name" for file.parent.file_name, "Parent MD5" for file.parent.identity.md5, "Parent SHA1" for file.parent.identity.sha1, and "Parent SHA256" for file.parent.identity.sha256 |
| timestamp.seconds | Directly mapped from date after converting to timestamp |
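For readers who want the severity, event-type, and alert-flag rules from the table in executable form, the following Python sketch restates the documented logic. It mirrors the behavior described above; it is not the parser's actual source code.

```python
# Illustrative restatement of the mapping rules in the table above.

ALERT_EVENT_TYPES = {
    "Threat Detected", "Exploit Prevention", "Executed malware",
    "Potential Dropper Infection", "Multiple Infected Files",
    "Vulnerable Application Detected",
}

SCAN_FILE_EVENT_TYPES = ALERT_EVENT_TYPES | {
    "Cloud Recall Detection", "Malicious Activity Detection", "Cloud IOC",
    "System Process Protection", "Threat Quarantined", "Execution Blocked",
    "Cloud Recall Quarantine Successful",
    "Cloud Recall Restore from Quarantine Failed",
    "Cloud Recall Quarantine Attempt Failed", "Quarantine Failure",
}

def map_severity(severity: str) -> str:
    """severity -> security_result.severity, per the table."""
    if severity == "Medium":
        return "MEDIUM"
    if severity in ("High", "Critical"):
        return "HIGH"
    if severity == "Low":
        return "LOW"
    return "UNKNOWN_SEVERITY"

def map_event_type(event_type: str, severity: str,
                   has_principal: bool, has_target: bool) -> str:
    """event_type + severity -> metadata.event_type, per the table."""
    if event_type in SCAN_FILE_EVENT_TYPES or map_severity(severity) == "HIGH":
        return "SCAN_FILE"
    if has_principal and has_target:
        return "SCAN_UNCATEGORIZED"
    return "GENERIC_EVENT"

def is_alert(event_type: str, severity: str) -> bool:
    """is_alert / is_significant flag, per the table."""
    return event_type in ALERT_EVENT_TYPES or map_severity(severity) == "HIGH"
```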
Need more help? Get answers from Community members and Google SecOps professionals.