Cloud Threats Category Preview features
This document provides an overview of the rule sets in the Cloud Threats category that are available in Preview and the required data sources. It contains the following sections:
- Curated detections for Microsoft Azure data: Lists the rule sets available for Azure data and describes the patterns of activity that each identifies.
- Supported devices and required log types: Describes the data sources you should ingest to have maximum rule coverage.
- Ingest Microsoft Azure and Microsoft Entra ID data: Provides information about how to ingest the data sources into Google SecOps.
- Verify the ingestion of Azure data: Describes how to use Azure Managed Detection Testing rules to verify that Azure data is ingested and in the expected format.
Curated detections for Azure data
Certain rule sets in this category are designed to work with Azure data to identify threats in Azure environments using event data, context data, and alerts. They include the following:
- Azure - Compute: Detects anomalous activity related to Azure compute resources, such as Kubernetes and virtual machines (VMs).
- Azure - Data: Detects activity associated with data resources such as Azure blob permissions, modifications, and invitations to external users to use Azure services on the tenant.
- Azure - Defender for Cloud: Identifies alerts received from context-aware Microsoft Defender for Cloud related to user behavior, credential access, cryptomining, discovery, evasion, execution, exfiltration, impact, initial access, malware, penetration testing, persistence, policy, privilege escalation, or unauthorized access across all Azure cloud services.
- Azure - Hacktools: Detects the use of hacking tools in an Azure environment, such as Tor and VPN anonymizers, scanners, and red teaming toolkits.
- Azure - Identity: Detects activity related to authentication and authorization, indicating unusual behavior such as concurrent access from multiple geographic locations, overly permissive access management policies, or Azure RBAC activity from suspicious tools.
- Azure - Logging and Monitoring: Detects activity related to the disabling of logging and monitoring services within Azure.
- Azure - Network: Detects insecure and notable alterations to Azure networking devices or settings, such as security groups or firewalls, Azure Web Application Firewall, and denial of service policies.
- Azure - Organization: Detects activity associated with your organization such as the addition or removal of subscriptions and accounts.
- Azure - Secrets: Detects activity associated with secrets, tokens, and passwords (for example modifications to Azure Key Vault or storage account access keys).
Supported devices and required log types
These rule sets have been tested and are supported with the following data sources, listed by product name and Google SecOps ingestion label.
- Azure Cloud Services (AZURE_ACTIVITY)
- Microsoft Entra ID, previously Azure Active Directory (AZURE_AD)
- Microsoft Entra ID audit logs, previously Azure AD audit logs (AZURE_AD_AUDIT)
- Microsoft Defender for Cloud (MICROSOFT_GRAPH_ALERT)
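If you manage feed configurations in code, the mapping between product names and ingestion labels above can double as a coverage check. The following is a sketch; the label strings come from the list above, and `missing_sources` is a hypothetical helper name:

```python
# Map each supported Azure data source to its Google SecOps ingestion label.
AZURE_INGESTION_LABELS = {
    "Azure Cloud Services": "AZURE_ACTIVITY",
    "Microsoft Entra ID": "AZURE_AD",
    "Microsoft Entra ID audit logs": "AZURE_AD_AUDIT",
    "Microsoft Defender for Cloud": "MICROSOFT_GRAPH_ALERT",
}

def missing_sources(configured_labels):
    """Return the ingestion labels required for full rule coverage
    that are absent from the configured feed labels."""
    required = set(AZURE_INGESTION_LABELS.values())
    return sorted(required - set(configured_labels))
```

For example, `missing_sources(["AZURE_AD"])` reports the three remaining labels you still need to ingest.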
Ingest Azure and Microsoft Entra ID data
For maximum rule coverage, ingest data from every listed data source. See the following documentation for information about how to ingest data from each source.
- Ingest Azure Monitor Activity logs from Azure Cloud Services.
- Collect Microsoft Entra ID data (formerly called Azure AD), including the following:
- Microsoft Entra ID logs
- Microsoft Entra ID audit logs
- Microsoft Entra ID context data
- Collect Microsoft Graph Security API alert logs to ingest Microsoft Defender for Cloud alerts.
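If you send raw logs through the Chronicle ingestion API instead of a feed, each Azure log line is submitted with the matching log type. The sketch below only assembles the request body (no network call is made); the customer ID and log text are placeholders, and the endpoint and field names are based on the Chronicle unstructured-log ingestion API, so verify them against your deployment:

```python
import json

# Assumed Chronicle ingestion API endpoint for unstructured log entries.
INGESTION_ENDPOINT = (
    "https://malachiteingestion-pa.googleapis.com"
    "/v2/unstructuredlogentries:batchCreate"
)

def build_ingestion_body(customer_id, log_type, raw_logs):
    """Build the JSON body for a batch of raw Azure log lines.

    log_type must be one of the ingestion labels listed above,
    for example AZURE_AD or AZURE_ACTIVITY.
    """
    return json.dumps({
        "customer_id": customer_id,  # placeholder UUID for your tenant
        "log_type": log_type,
        "entries": [{"log_text": line} for line in raw_logs],
    })
```

The returned string can then be POSTed to the endpoint with your authenticated HTTP client.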
The following section describes how to verify the ingestion of Azure data using predefined test rules.
Verify the ingestion of Azure data
The Google SecOps Data Ingestion and Health dashboard lets you see information about the type, volume, and health of all data being ingested into Google SecOps using SIEM ingestion features.
You can also use the Azure Managed Detection Testing test rules to verify the ingestion of Azure data. After setting up the ingestion, you perform actions in the Azure portal that should trigger the test rules. These rules verify that data is ingested in the expected format for the curated detections for Azure data.
Enable the Azure Managed Detection Testing test rules
- In Google Security Operations, click Detections > Rules & Detections to open the Curated Detections page.
- Select Managed Detection Testing > Azure Managed Detection Testing.
- Enable both Status and Alerting for the Broad and Precise rules.
Send user action data to trigger the test rules
To verify that data is ingested as expected, create a user and log in to confirm that these actions trigger the test rules. For information about creating users in Microsoft Entra ID, see How to create, invite, and delete users.
In Azure, create a new Microsoft Entra ID user.
- Navigate to the Azure portal.
- Open Microsoft Entra ID.
- Click Add, then Create New User.
Do the following to define the user:
- Enter the following information:
  - User principal name: GCTI_ALERT_VALIDATION
  - Display name: GCTI_ALERT_VALIDATION
- Select Auto-generate Password to auto-generate a password for this user.
- Select the Account Enabled checkbox.
- Open the Review + Create tab.
- Note the auto-generated password. You will use it in upcoming steps.
- Click Create.
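The same test user can also be created programmatically with a Microsoft Graph `POST /users` request. The sketch below only assembles the request payload (no call is made); the password value and the userPrincipalName domain are placeholders for your tenant, and the Graph API additionally requires `mailNickname` and a `passwordProfile`:

```python
# Microsoft Graph endpoint for creating users.
GRAPH_USERS_ENDPOINT = "https://graph.microsoft.com/v1.0/users"

def build_test_user_payload(password):
    """Payload for creating the GCTI_ALERT_VALIDATION test user.

    The userPrincipalName domain below is a placeholder; substitute
    your tenant's domain.
    """
    return {
        "accountEnabled": True,                    # "Account Enabled" checkbox
        "displayName": "GCTI_ALERT_VALIDATION",
        "mailNickname": "GCTI_ALERT_VALIDATION",
        "userPrincipalName": "GCTI_ALERT_VALIDATION@example.onmicrosoft.com",
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,  # mirrors the password-change step
            "password": password,                   # placeholder value
        },
    }
```

POSTing this payload with an authenticated Graph client corresponds to the portal steps above.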
- Open a browser window in incognito mode, and then navigate to the Azure portal.
- Log in with the newly created user and password.
- Change the user password.
- Enroll in multi-factor authentication (MFA) as required by your organization's policy.
- Ensure that you successfully log out of the Azure portal.
Do the following to verify that alerts are created in Google Security Operations:
- In Google Security Operations, click Detections > Rules & Detections to open the Curated Detections page.
- Click Dashboard.
- In the list of detections, check that the following rules were triggered:
- tst_azure_ad_user_creation
- tst_azure_ad_user_login
After you confirm that data is sent and that these rules are triggered, deactivate or deprovision the user account.
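The cleanup can also be scripted: disabling the account is a Microsoft Graph PATCH on the user object. A minimal sketch that only constructs the request (the user ID is a placeholder, and `build_disable_request` is a hypothetical helper):

```python
def build_disable_request(user_id):
    """Return the Graph URL and PATCH body that disable a user account."""
    url = f"https://graph.microsoft.com/v1.0/users/{user_id}"
    body = {"accountEnabled": False}  # deactivates the account without deleting it
    return url, body
```

Sending a DELETE request to the same URL removes the user entirely instead of disabling it.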
Send sample alerts to trigger the test rules
Perform the following steps to verify that generating sample security alerts in Azure triggers the test rules. For more information about generating sample security alerts in Microsoft Defender for Cloud, see Alert validation in Microsoft Defender for Cloud.
- In the Azure portal, navigate to All Services.
- Under Security, open Microsoft Defender for Cloud.
- Navigate to Security Alerts.
- Click Sample Alerts, and then do the following:
- Select your subscription.
- For Defender for Cloud plans, select all.
- Click Create Sample Alerts.
- Verify that test alerts are triggered.
- In Google Security Operations, click Detections > Rules & Detections to open the Curated Detections page.
- Click Dashboard.
- In the list of detections, check that the following rules were triggered:
- tst_azure_activity
- tst_azure_defender_for_cloud_alerts
Disable the Azure Managed Detection Testing rule sets
- In Google Security Operations, click Detections > Rules & Detections to open the Curated Detections page.
- Select the Managed Detection Testing > Azure Managed Detection Testing rules.
- Disable both Status and Alerting for the Broad and Precise rules.