Collect ZeroFox Platform logs
This document explains how to ingest ZeroFox Platform logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to the ZeroFox Platform tenant.
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).
Get ZeroFox prerequisites
- Sign in to the ZeroFox Platform at https://cloud.zerofox.com.
- Go to Data Connectors > API Data Feeds.
- Direct URL (after login): https://cloud.zerofox.com/data_connectors/api
- If you don't see this menu item, contact your ZeroFox administrator for access.
- Click Generate Token or Create Personal Access Token.
- Provide the following configuration details:
- Name: Enter a descriptive name (for example, Google SecOps S3 Ingestion).
- Expiration: Select a rotation period according to your organization's security policy.
- Permissions/Feeds: Select read permissions for Alerts, CTI feeds, and any other data types you want to export.
- Click Generate.
- Copy and save the generated Personal Access Token in a secure location (you won't be able to view it again).
- Save the ZEROFOX_BASE_URL: https://api.zerofox.com (the default for most tenants).
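Before moving on to AWS, you can optionally confirm that the token authenticates. The following is a minimal sketch that calls the /v1/alerts path and Bearer-token scheme assumed by the Lambda later in this guide; if your tenant exposes a different API path, adjust it accordingly.

```python
#!/usr/bin/env python3
"""Quick ZeroFox token check (a sketch, not part of the ingestion pipeline).

Assumptions:
- Bearer-token auth, as used by the Lambda in this guide.
- The /v1/alerts endpoint that the Lambda polls by default.
"""
import json
import os
import urllib.request

ZEROFOX_BASE_URL = os.environ.get("ZEROFOX_BASE_URL", "https://api.zerofox.com")
ZEROFOX_API_TOKEN = os.environ["ZEROFOX_API_TOKEN"]  # the Personal Access Token you just saved

req = urllib.request.Request(
    f"{ZEROFOX_BASE_URL}/v1/alerts?limit=1",
    headers={
        "Authorization": f"Bearer {ZEROFOX_API_TOKEN}",
        "Accept": "application/json",
    },
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print("HTTP status:", resp.status)  # 200 means the token is accepted
    print(json.dumps(json.loads(resp.read()), indent=2)[:500])  # first part of the payload
```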
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, zerofox-platform-logs).
- Create a User following this user guide: Creating an IAM user.
- Select the created User.
- Select Security credentials tab.
- In the Access keys section, click Create access key.
- Select Third-party service as Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download .CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select Permissions tab.
- In the Permissions policies section, click Add permissions.
- Select Add permissions.
- Select Attach policies directly.
- Search for AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
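To confirm the new access key can reach the bucket before building the Lambda, a short boto3 sketch such as the one below can write and read a test object. It assumes the bucket name zerofox-platform-logs from the previous step and that the key pair from the downloaded .CSV is exported as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

```python
#!/usr/bin/env python3
"""Smoke-test the IAM user's access to the bucket (a sketch, not part of the pipeline).

Assumes:
- Bucket name zerofox-platform-logs (change it if you chose another name).
- AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY exported from the downloaded .CSV.
"""
import boto3

BUCKET = "zerofox-platform-logs"

s3 = boto3.client("s3")  # picks up credentials from the environment

# Write a small test object under the same prefix the Lambda will use.
s3.put_object(Bucket=BUCKET, Key="zerofox/platform/connectivity-test.json", Body=b'{"ok": true}')

# Read it back to confirm both PutObject and GetObject work with this key pair.
obj = s3.get_object(Bucket=BUCKET, Key="zerofox/platform/connectivity-test.json")
print(obj["Body"].read().decode("utf-8"))

# Clean up the test object.
s3.delete_object(Bucket=BUCKET, Key="zerofox/platform/connectivity-test.json")
```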
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
- Copy and paste the following policy.
- Policy JSON (replace zerofox-platform-logs if you entered a different bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::zerofox-platform-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::zerofox-platform-logs/zerofox/platform/state.json"
    }
  ]
}
```
- Click Next > Create policy. 
- Go to IAM > Roles > Create role > AWS service > Lambda. 
- Attach the newly created policy. 
- Name the role ZeroFoxPlatformToS3Role and click Create role.
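If you manage AWS resources from code rather than the console, the same policy and role can be created with boto3, as in the sketch below. The policy name ZeroFoxPlatformToS3Policy is an illustrative choice, and attaching AWSLambdaBasicExecutionRole for CloudWatch logging is optional but common for Lambda execution roles.

```python
#!/usr/bin/env python3
"""Create the upload policy and Lambda execution role from code (optional sketch).

Mirrors the console steps above; ZeroFoxPlatformToS3Policy is a hypothetical
policy name, while the role and bucket names come from this guide.
"""
import json
import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::zerofox-platform-logs/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::zerofox-platform-logs/zerofox/platform/state.json"},
    ],
}

# The Lambda service must be allowed to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}

policy = iam.create_policy(PolicyName="ZeroFoxPlatformToS3Policy",
                           PolicyDocument=json.dumps(policy_doc))
iam.create_role(RoleName="ZeroFoxPlatformToS3Role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="ZeroFoxPlatformToS3Role",
                       PolicyArn=policy["Policy"]["Arn"])

# Optional: basic CloudWatch logging permissions for the Lambda function.
iam.attach_role_policy(
    RoleName="ZeroFoxPlatformToS3Role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```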
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
|---|---|
| Name | zerofox_platform_to_s3 |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | ZeroFoxPlatformToS3Role |
- After the function is created, open the Code tab, delete the stub, and paste the following code (zerofox_platform_to_s3.py):

```python
#!/usr/bin/env python3
# Lambda: Pull ZeroFox Platform data (alerts/incidents/logs) to S3 (no transform)

import os, json, time, urllib.parse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

import boto3

S3_BUCKET = os.environ["S3_BUCKET"]
S3_PREFIX = os.environ.get("S3_PREFIX", "zerofox/platform/")
STATE_KEY = os.environ.get("STATE_KEY", "zerofox/platform/state.json")
LOOKBACK_SEC = int(os.environ.get("LOOKBACK_SECONDS", "3600"))
PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "200"))
MAX_PAGES = int(os.environ.get("MAX_PAGES", "20"))
HTTP_TIMEOUT = int(os.environ.get("HTTP_TIMEOUT", "60"))
HTTP_RETRIES = int(os.environ.get("HTTP_RETRIES", "3"))
URL_TEMPLATE = os.environ.get("URL_TEMPLATE", "")
AUTH_HEADER = os.environ.get("AUTH_HEADER", "")  # e.g. "Authorization: Bearer <token>"
ZEROFOX_BASE_URL = os.environ.get("ZEROFOX_BASE_URL", "https://api.zerofox.com")
ZEROFOX_API_TOKEN = os.environ.get("ZEROFOX_API_TOKEN", "")

s3 = boto3.client("s3")


def _iso(ts: float) -> str:
    return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(ts))


def _load_state() -> dict:
    # The state object remembers the newest timestamp seen so far.
    try:
        obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
        b = obj["Body"].read()
        return json.loads(b) if b else {}
    except Exception:
        return {"last_since": _iso(time.time() - LOOKBACK_SEC)}


def _save_state(st: dict) -> None:
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=STATE_KEY,
        Body=json.dumps(st, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )


def _headers() -> dict:
    hdrs = {"Accept": "application/json", "Content-Type": "application/json"}
    if AUTH_HEADER:
        try:
            k, v = AUTH_HEADER.split(":", 1)
            hdrs[k.strip()] = v.strip()
        except ValueError:
            hdrs["Authorization"] = AUTH_HEADER.strip()
    elif ZEROFOX_API_TOKEN:
        hdrs["Authorization"] = f"Bearer {ZEROFOX_API_TOKEN}"
    return hdrs


def _http_get(url: str) -> dict:
    # GET with simple retry/backoff on throttling and transient server errors.
    attempt = 0
    while True:
        try:
            req = Request(url, method="GET")
            for k, v in _headers().items():
                req.add_header(k, v)
            with urlopen(req, timeout=HTTP_TIMEOUT) as r:
                body = r.read()
            try:
                return json.loads(body.decode("utf-8"))
            except json.JSONDecodeError:
                return {"raw": body.decode("utf-8", errors="replace")}
        except HTTPError as e:
            if e.code in (429, 500, 502, 503, 504) and attempt < HTTP_RETRIES:
                retry_after = int(e.headers.get("Retry-After", 1 + attempt))
                time.sleep(max(1, retry_after))
                attempt += 1
                continue
            raise
        except URLError:
            if attempt < HTTP_RETRIES:
                time.sleep(1 + attempt)
                attempt += 1
                continue
            raise


def _put_json(obj: dict, label: str) -> str:
    ts = time.gmtime()
    # rstrip("/") avoids a double slash when S3_PREFIX already ends with "/"
    key = f"{S3_PREFIX.rstrip('/')}/{time.strftime('%Y/%m/%d/%H%M%S', ts)}-zerofox-{label}.json"
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=key,
        Body=json.dumps(obj, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )
    return key


def _extract_next_token(payload: dict):
    next_token = (payload.get("next") or payload.get("next_token")
                  or payload.get("nextPageToken") or payload.get("next_page_token"))
    if isinstance(next_token, dict):
        return next_token.get("token") or next_token.get("cursor") or next_token.get("value")
    return next_token


def _extract_items(payload: dict) -> list:
    for key in ("results", "data", "alerts", "items", "logs", "events"):
        if isinstance(payload.get(key), list):
            return payload[key]
    return []


def _extract_newest_timestamp(items: list, current: str) -> str:
    newest = current
    for item in items:
        timestamp = (item.get("timestamp") or item.get("created_at") or item.get("last_modified")
                     or item.get("event_time") or item.get("log_time") or item.get("updated_at"))
        if isinstance(timestamp, str) and timestamp > newest:
            newest = timestamp
    return newest


def lambda_handler(event=None, context=None):
    st = _load_state()
    since = st.get("last_since") or _iso(time.time() - LOOKBACK_SEC)

    # Use URL_TEMPLATE if provided, otherwise construct the default alerts endpoint
    if URL_TEMPLATE:
        base_url = URL_TEMPLATE.replace("{SINCE}", urllib.parse.quote(since))
    else:
        base_url = f"{ZEROFOX_BASE_URL}/v1/alerts?since={urllib.parse.quote(since)}"

    page_token = ""
    pages = 0
    total_items = 0
    newest_since = since

    while pages < MAX_PAGES:
        # Construct the URL with pagination
        if URL_TEMPLATE:
            url = (base_url
                   .replace("{PAGE_TOKEN}", urllib.parse.quote(page_token))
                   .replace("{PAGE_SIZE}", str(PAGE_SIZE)))
        else:
            url = f"{base_url}&limit={PAGE_SIZE}"
            if page_token:
                url += f"&page_token={urllib.parse.quote(page_token)}"

        payload = _http_get(url)
        _put_json(payload, f"page-{pages:05d}")

        items = _extract_items(payload)
        total_items += len(items)
        newest_since = _extract_newest_timestamp(items, newest_since)
        pages += 1

        next_token = _extract_next_token(payload)
        if not next_token:
            break
        page_token = str(next_token)

    if newest_since and newest_since != st.get("last_since"):
        st["last_since"] = newest_since
        _save_state(st)

    return {"ok": True, "pages": pages, "items": total_items,
            "since": since, "new_since": newest_since}


if __name__ == "__main__":
    print(lambda_handler())
```
- Go to Configuration > Environment variables. 
- Click Edit > Add new environment variable. 
- Enter the environment variables provided in the following table, replacing the example values with your values.

| Key | Example value |
|---|---|
| S3_BUCKET | zerofox-platform-logs |
| S3_PREFIX | zerofox/platform/ |
| STATE_KEY | zerofox/platform/state.json |
| ZEROFOX_BASE_URL | https://api.zerofox.com |
| ZEROFOX_API_TOKEN | your-zerofox-personal-access-token |
| LOOKBACK_SECONDS | 3600 |
| PAGE_SIZE | 200 |
| MAX_PAGES | 20 |
| HTTP_TIMEOUT | 60 |
| HTTP_RETRIES | 3 |
| URL_TEMPLATE | (optional) Custom URL template with {SINCE}, {PAGE_TOKEN}, {PAGE_SIZE} |
| AUTH_HEADER | (optional) Authorization: Bearer <token> for custom auth |
- After the function is created, stay on its page (or open Lambda > Functions > your-function). 
- Select the Configuration tab. 
- In the General configuration panel, click Edit. 
- Change Timeout to 5 minutes (300 seconds) and click Save. 
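Before scheduling the function, it can help to run it once and confirm that page objects land in the bucket. The sketch below is one way to do that with boto3, using the function and bucket names from this guide.

```python
#!/usr/bin/env python3
"""One-off test invocation of the collector (a sketch; names match this guide)."""
import json
import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Invoke the function synchronously and print its summary payload.
resp = lambda_client.invoke(FunctionName="zerofox_platform_to_s3",
                            InvocationType="RequestResponse")
print(json.loads(resp["Payload"].read()))  # e.g. {"ok": true, "pages": 1, "items": ...}

# Confirm that page objects landed under the expected prefix.
listing = s3.list_objects_v2(Bucket="zerofox-platform-logs", Prefix="zerofox/platform/")
for item in listing.get("Contents", []):
    print(item["Key"])
```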
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
- Recurring schedule: Rate (1 hour).
- Target: Your Lambda function zerofox_platform_to_s3.
- Name: zerofox-platform-1h.
- Click Create schedule.
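If you script your infrastructure instead of using the console wizard, the same hourly schedule can be created through the EventBridge Scheduler API. The sketch below uses placeholder values for the account ID, Region, and scheduler execution role ARN; the wizard normally creates that role for you, so treat the role name here as hypothetical.

```python
#!/usr/bin/env python3
"""Create the hourly schedule via the EventBridge Scheduler API (optional sketch).

Placeholders to replace:
- ACCOUNT_ID and REGION: your AWS account ID and Region.
- SCHEDULER_ROLE_ARN: a role EventBridge Scheduler can assume that has
  lambda:InvokeFunction on the collector (the console wizard creates one for you).
"""
import boto3

ACCOUNT_ID = "123456789012"  # placeholder
REGION = "us-east-1"         # placeholder
SCHEDULER_ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/zerofox-scheduler-role"  # placeholder

scheduler = boto3.client("scheduler", region_name=REGION)

scheduler.create_schedule(
    Name="zerofox-platform-1h",
    ScheduleExpression="rate(1 hour)",  # matches the console setting above
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": f"arn:aws:lambda:{REGION}:{ACCOUNT_ID}:function:zerofox_platform_to_s3",
        "RoleArn": SCHEDULER_ROLE_ARN,
    },
)
```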
(Optional) Create read-only IAM user & keys for Google SecOps
- Go to AWS Console > IAM > Users.
- Click Add users.
- Provide the following configuration details:
- User: Enter secops-reader.
- Access type: Select Access key — Programmatic access.
- Click Create user.
- Attach the minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- JSON:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::zerofox-platform-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::zerofox-platform-logs"
    }
  ]
}
```
- Name = secops-reader-policy.
- Click Create policy > search/select > Next > Add permissions. 
- Create an access key for secops-reader: Security credentials > Access keys.
- Click Create access key. 
- Download the .CSV file (you'll paste these values into the feed).
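Before pasting the secops-reader credentials into the feed, you can check that they grant exactly the access the feed needs (listing the prefix and reading objects). The following boto3 sketch assumes the credentials from the downloaded .CSV are exported as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

```python
#!/usr/bin/env python3
"""Verify the secops-reader keys before configuring the feed (a sketch).

Assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are set to the values
from the secops-reader .CSV you just downloaded.
"""
import boto3

BUCKET = "zerofox-platform-logs"
PREFIX = "zerofox/platform/"

s3 = boto3.client("s3")

# ListBucket: the feed needs this to discover new objects under the prefix.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=5)
keys = [item["Key"] for item in listing.get("Contents", [])]
print("Sample keys:", keys)

# GetObject: the feed needs this to download each object it finds.
if keys:
    obj = s3.get_object(Bucket=BUCKET, Key=keys[0])
    print("First object bytes:", len(obj["Body"].read()))
```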
Configure a feed in Google SecOps to ingest ZeroFox Platform logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, ZeroFox Platform Logs).
- Select Amazon S3 V2 as the Source type.
- Select ZeroFox Platform as the Log type.
- Click Next.
- Specify values for the following input parameters:
- S3 URI: s3://zerofox-platform-logs/zerofox/platform/
- Source deletion options: Select deletion option according to your preference.
- Maximum File Age: Include files modified in the last number of days. Default is 180 days.
- Access Key ID: User access key with access to the S3 bucket.
- Secret Access Key: User secret key with access to the S3 bucket.
- Asset namespace: The asset namespace.
- Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
Need more help? Get answers from Community members and Google SecOps professionals.