This document offers informal guidance to help you investigate findings of suspicious activities in your Google Cloud environment from potentially malicious actors. This document also provides additional resources to add context to Security Command Center findings. Following these steps helps you understand what happened during a potential attack and develop possible responses for affected resources.
The techniques on this page are not guaranteed to be effective against any previous, current, or future threats you face. See Remediating threats to understand why Security Command Center does not provide official remediation guidance for threats.
Before you begin
You need adequate Identity and Access Management (IAM) roles to view or edit findings and logs, and modify Google Cloud resources. If you encounter access errors in Security Command Center, ask your administrator for assistance and see Access control to learn about roles. To resolve resource errors, read documentation for affected products.
Understanding threat findings
Event Threat Detection produces security findings by matching events in your Cloud Logging log streams to known indicators of compromise (IoC). IoCs, developed by internal Google security sources, identify potential vulnerabilities and attacks. Event Threat Detection also detects threats by identifying known adversarial tactics, techniques, and procedures in your logging stream, and by detecting deviations from past behavior of your organization or project. If you activate Security Command Center Premium tier at the organization level, Event Threat Detection can also scan your Google Workspace logs.
Container Threat Detection generates findings by collecting and analyzing low-level observed behavior in the guest kernel of containers.
Findings are written to Security Command Center. If you activate Security Command Center Premium tier at the organization level, you can also configure findings to be written to Cloud Logging.
Reviewing findings
To review threat findings in the Google Cloud console, follow these steps:
In the Google Cloud console, go to the Security Command Center Findings page.
If necessary, select your Google Cloud project, folder, or organization.
In the Quick filters section, click an appropriate filter to display the finding that you need in the Findings query results table. For example, if you select Event Threat Detection or Container Threat Detection in the Source display name subsection, only findings from the selected service appear in the results.
The table is populated with findings for the source you selected.
To view details of a specific finding, click the finding name under Category. The finding details pane expands to display a summary of the finding's details. To view the finding's JSON definition, click the JSON tab.
Findings provide the names and numeric identifiers of resources involved in an incident, along with environment variables and asset properties. You can use that information to quickly isolate affected resources and determine the potential scope of an event.
To aid in your investigation, threat findings also contain links to the following external resources:
- MITRE ATT&CK framework entries. The framework explains techniques for attacks against cloud resources and provides remediation guidance.
- VirusTotal, an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses. If available, the VirusTotal Indicator field provides a link to VirusTotal to help you further investigate potential security issues.
VirusTotal is a separately priced offering with different usage limits and features. You are responsible for understanding and adhering to VirusTotal's API usage policies and any associated costs. For more information, see the VirusTotal documentation.
The following sections outline potential responses to threat findings.
Deactivation of threat findings
After you resolve an issue that triggered a threat finding, Security Command Center does not automatically set the state of the finding to INACTIVE. The state of a threat finding remains ACTIVE unless you manually set the state property to INACTIVE.
For a false positive, consider leaving the state of the finding as ACTIVE and instead mute the finding.
For persistent or recurring false-positives, create a mute rule. Setting a mute rule can reduce the number of findings that you need to manage, which makes it easier to identify a true threat when one occurs.
For a true threat, before you set the state of the finding to INACTIVE, eliminate the threat and complete a thorough investigation of the detected threat, the extent of the intrusion, and any other related findings and issues.
To mute a finding or change its state, see the Security Command Center documentation on muting findings and changing finding states.
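If you prefer the CLI to the console, the operations above can be sketched with gcloud. This is a hedged sketch: the organization, source, and finding IDs are placeholders, and you should confirm the exact `gcloud scc` flag names against your installed gcloud version. The script only prints the commands so you can review them before running anything.

```shell
#!/bin/sh
# Placeholder identifiers -- substitute your own organization, source, and
# finding IDs before running the printed commands.
ORG_ID="123456789"
SOURCE_ID="5678"
FINDING_ID="my-finding-id"

# Mute one finding (for a one-off false positive):
MUTE_CMD="gcloud scc findings set-mute ${FINDING_ID} --organization=${ORG_ID} --source=${SOURCE_ID} --mute=MUTED"

# Create a mute rule (for recurring false positives in one category):
RULE_CMD="gcloud scc muteconfigs create mute-anon-proxy --organization=${ORG_ID} --filter='category=\"Evasion: Access from Anonymizing Proxy\"'"

# Set a resolved finding to INACTIVE after a completed investigation:
STATE_CMD="gcloud scc findings update ${FINDING_ID} --organization=${ORG_ID} --source=${SOURCE_ID} --state=INACTIVE"

printf '%s\n%s\n%s\n' "$MUTE_CMD" "$RULE_CMD" "$STATE_CMD"
```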
Event Threat Detection responses
To learn more about Event Threat Detection, see how Event Threat Detection works.
This section doesn't contain responses for findings generated by custom modules for Event Threat Detection, because your organization defines the recommended actions for those detectors.
Evasion: Access from Anonymizing Proxy
Anomalous access from an anonymous proxy is detected by examining Cloud Audit Logs for Google Cloud service modifications that originated from anonymous proxy IP addresses, like Tor IP addresses.
To respond to these findings, do the following:
Step 1: Review finding details
- Open an Evasion: Access from Anonymizing Proxy finding, as directed in Reviewing findings. The panel for the finding details opens, displaying the Summary tab. On the Summary tab, review the listed values in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the changes (a potentially compromised account).
- IP: the proxy IP address from which the changes were made.
- Affected resource
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Optionally, click the JSON tab to view additional finding fields.
Step 2: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Proxy: Multi-hop Proxy.
- Contact the owner of the account in the principalEmail field. Confirm whether the action was conducted by the legitimate owner.
- To develop a response plan, combine your investigation results with MITRE research.
Defense Evasion: Breakglass Workload Deployment Created
Breakglass Workload Deployment Created is detected by examining Cloud Audit Logs to see if there are any deployments to workloads that use the breakglass flag to override Binary Authorization controls.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the Defense Evasion: Breakglass Workload Deployment Created finding, as directed in Reviewing findings. The panel for the finding details opens, displaying the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that performed the modification.
- Method name: the method that was called.
- Kubernetes pods: the pod name and namespace.
- Affected resource, especially the following field:
- Resource display name: the GKE namespace where the deployment occurred.
- Related links:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Check logs
- On the Summary tab of the finding details in the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field.
- Check the value in the protoPayload.resourceName field to identify the specific workload deployment. Check for other actions taken by the principal by using the following filters:
resource.labels.cluster_name="CLUSTER_NAME"
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
Replace the following:
- CLUSTER_NAME: the value that you noted in the Resource display name field in the finding details.
- PRINCIPAL_EMAIL: the value that you noted in the Principal email field in the finding details.
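As a convenience, the two filter lines can be composed in a shell snippet; the cluster and principal values below are placeholders from a hypothetical finding. Paste the printed filter into the Logs Explorer query box, or pass it to `gcloud logging read`.

```shell
#!/bin/sh
CLUSTER_NAME="prod-cluster"        # placeholder: Resource display name value
PRINCIPAL_EMAIL="dev@example.com"  # placeholder: Principal email value

# Compose the two-line Logs Explorer filter shown above.
FILTER="resource.labels.cluster_name=\"${CLUSTER_NAME}\"
protoPayload.authenticationInfo.principalEmail=\"${PRINCIPAL_EMAIL}\""

printf '%s\n' "$FILTER"
```

For example, `gcloud logging read "$FILTER" --project=PROJECT_ID --freshness=7d` queries the last seven days of logs.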
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Defense Evasion: Breakglass Workload Deployment.
- Review related findings by clicking the link in the Related findings row in the Summary tab of the finding details.
- To develop a response plan, combine your investigation results with MITRE research.
Defense Evasion: Breakglass Workload Deployment Updated
Breakglass Workload Deployment Updated is detected by examining Cloud Audit Logs to see if there are any updates to workloads that use the breakglass flag to override Binary Authorization controls.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the Defense Evasion: Breakglass Workload Deployment Updated finding, as directed in Reviewing findings. The panel for the finding details opens, displaying the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that performed the modification.
- Method name: the method that was called.
- Kubernetes pods: the pod name and namespace.
- Affected resource, especially the following field:
- Resource display name: the GKE namespace where the update occurred.
- Related links:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Check logs
- On the Summary tab of the finding details in the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field.
- Check the value in the protoPayload.resourceName field to identify the specific workload that was updated. Check for other actions taken by the principal by using the following filters:
resource.labels.cluster_name="CLUSTER_NAME"
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
Replace the following:
- CLUSTER_NAME: the value that you noted in the Resource display name field in the finding details.
- PRINCIPAL_EMAIL: the value that you noted in the Principal email field in the finding details.
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Defense Evasion: Breakglass Workload Deployment.
- Review related findings by clicking the link in the Related findings row in the Summary tab of the finding details.
- To develop a response plan, combine your investigation results with MITRE research.
Defense Evasion: Manually Deleted Certificate Signing Request (CSR)
Someone manually deleted a certificate signing request (CSR). CSRs are automatically removed by a garbage collection controller, but malicious actors might manually delete them to evade detection. If the deleted CSR was for an approved and issued certificate, the potentially malicious actor now has an additional authentication method to access the cluster. The permissions associated with the certificate vary depending on the subject included in the request, but they can be highly privileged. Kubernetes does not support certificate revocation. For more details, see the log message for this alert.
- Review the audit logs in Cloud Logging and additional alerts for other events related to this CSR to determine whether the CSR was approved and whether the CSR creation was expected activity by the principal.
- Determine whether there are other signs of malicious activity by the principal in the audit logs in Cloud Logging. For example:
  - Was the principal who deleted the CSR different from the one who created or approved it?
  - Has the principal tried requesting, creating, approving, or deleting other CSRs?
- If the CSR approval was not expected, or is determined to be malicious, the cluster requires a credential rotation to invalidate the certificate. Review the guidance for performing a rotation of your cluster credentials.
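If you decide a rotation is needed, the GKE credential-rotation flow is a two-step `gcloud` operation: start, then complete. The cluster name and location below are placeholders; the snippet prints the commands rather than running them so you can review the timing, because clients and nodes must pick up the new credentials before you complete the rotation.

```shell
#!/bin/sh
CLUSTER="prod-cluster"    # placeholder: affected cluster name
LOCATION="us-central1"    # placeholder: cluster location

# Step 1 starts issuing new credentials; step 2 invalidates the old ones.
START_CMD="gcloud container clusters update ${CLUSTER} --location=${LOCATION} --start-credential-rotation"
COMPLETE_CMD="gcloud container clusters update ${CLUSTER} --location=${LOCATION} --complete-credential-rotation"

printf '%s\n%s\n' "$START_CMD" "$COMPLETE_CMD"
```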
Defense Evasion: Modify VPC Service Control
This finding isn't available for project-level activations.
Audit logs are examined to detect changes to VPC Service Controls perimeters that would lead to a reduction in the protection offered by that perimeter. The following are some examples:
- A project is removed from a perimeter
- An access level policy is added to an existing perimeter
- One or more services are added to the list of accessible services
To respond to this finding, do the following:
Step 1: Review finding details
- Open the Defense Evasion: Modify VPC Service Control finding, as directed in Reviewing findings. The panel for the finding details opens, displaying the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following field:
- Principal email: the account that performed the modification.
- Affected resource, especially the following field:
- Resource full name: the name of the VPC Service Controls perimeter that was modified.
- Related links:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Click the JSON tab.
In the JSON, note the following fields:
- sourceProperties:
  - properties:
    - name: the name of the VPC Service Controls perimeter that was modified.
    - policyLink: the link to the access policy that controls the perimeter.
    - delta: the changes, either REMOVE or ADD, to a perimeter that reduced its protection:
      - restricted_resources: the projects that follow the restrictions of this perimeter. Protection is reduced if you remove a project.
      - restricted_services: the services that are forbidden from running by the restrictions of this perimeter. Protection is reduced if you remove a restricted service.
      - allowed_services: the services that are allowed to run under this perimeter's restrictions. Protection is reduced if you add an allowed service.
      - access_levels: the access levels that are configured to allow access to resources under the perimeter. Protection is reduced if you add more access levels.
Step 2: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
- Find admin activity logs related to VPC Service Controls changes by using
the following filters:
protoPayload.methodName:"AccessContextManager.UpdateServicePerimeter"
protoPayload.methodName:"AccessContextManager.ReplaceServicePerimeters"
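The two filters can also be combined with `OR` into a single Logs Explorer query, which is convenient when you want one result list covering both perimeter-update methods:

```shell
#!/bin/sh
# One query that matches either perimeter-update method.
FILTER='protoPayload.methodName:"AccessContextManager.UpdateServicePerimeter" OR protoPayload.methodName:"AccessContextManager.ReplaceServicePerimeters"'
printf '%s\n' "$FILTER"
```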
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Defense Evasion: Modify Authentication Process.
- Review related findings by clicking the link in the Related findings row in the Summary tab of the finding details.
- To develop a response plan, combine your investigation results with MITRE research.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the VPC Service Controls policy and perimeter.
- Consider reverting the changes for the perimeter until the investigation is completed.
- Consider revoking Access Context Manager roles on the principal that modified the perimeter until the investigation is completed.
- Investigate how the reduced protections have been used. For example, if the BigQuery Data Transfer Service API is enabled or added as an allowed service, check who started using that service and what they are transferring.
Defense Evasion: Potential Kubernetes Pod Masquerading
Someone deployed a Pod with a naming convention similar to the default workloads that GKE creates for regular cluster operation. This technique is called masquerading. For more details, see the log message for this alert.
- Confirm that the Pod is legitimate.
- Determine whether there are other signs of malicious activity from the Pod or principal in the audit logs in Cloud Logging.
- If the principal isn't a service account (IAM or Kubernetes), contact the owner of the account to confirm whether the legitimate owner conducted the action.
- If the principal is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
- If the Pod is not legitimate, remove it, along with any associated RBAC bindings and service accounts that the workload used and that allowed its creation.
Discovery: Can get sensitive Kubernetes object check
A potentially malicious actor attempted to determine what sensitive objects in GKE they can query for by using the kubectl auth can-i get command. Specifically, the actor ran any of the following commands:
kubectl auth can-i get '*'
kubectl auth can-i get secrets
kubectl auth can-i get clusterroles/cluster-admin
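To reproduce the check yourself, `kubectl auth can-i` supports impersonation with `--as`, so you can generate the equivalent probes for the principal from the finding. The email below is a placeholder; run the printed commands against the affected cluster.

```shell
#!/bin/sh
PRINCIPAL="dev@example.com"  # placeholder: Principal email value

# Print one `kubectl auth can-i` probe per object the actor checked.
CMDS=""
for OBJ in "'*'" "secrets" "clusterroles/cluster-admin"; do
  CMDS="${CMDS}kubectl auth can-i get ${OBJ} --as=${PRINCIPAL}
"
done
printf '%s' "$CMDS"
```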
Step 1: Review finding details
- Open the Discovery: Can get sensitive Kubernetes object check finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields:
- Under What was detected:
- Kubernetes access reviews: the requested access review information, based on the SelfSubjectAccessReview Kubernetes resource.
- Principal email: the account that made the call.
- Under Affected resource:
- Resource display name: the Kubernetes cluster where the action occurred.
- Under Related links:
- Cloud Logging URI: link to Logging entries.
Step 2: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
On the page that loads, check for other actions taken by the principal by using the following filters:
resource.labels.cluster_name="CLUSTER_NAME"
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
Replace the following:
- CLUSTER_NAME: the value that you noted in the Resource display name field in the finding details.
- PRINCIPAL_EMAIL: the value that you noted in the Principal email field in the finding details.
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Discovery.
- Confirm the sensitivity of the object queried and determine whether there are other signs of malicious activity by the principal in the logs.
- If the account that you noted in the Principal email row in the finding details is not a service account, contact the owner of the account to confirm whether the legitimate owner conducted the action.
- If the principal email is a service account (IAM or Kubernetes), identify the source of the access review to determine its legitimacy.
- To develop a response plan, combine your investigation results with MITRE research.
Execution: Kubernetes Pod Created with Potential Reverse Shell Arguments
Someone created a Pod that contains commands or arguments commonly associated with a reverse shell. Attackers use reverse shells to expand or maintain their initial access to a cluster and to execute arbitrary commands. For more details, see the log message for this alert.
- Confirm that the Pod has a legitimate reason to specify these commands and arguments.
- Determine whether there are other signs of malicious activity from the Pod or principal in the audit logs in Cloud Logging.
- If the principal isn't a service account (IAM or Kubernetes), contact the owner of the account to confirm whether the legitimate owner conducted the action.
- If the principal is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
- If the Pod is not legitimate, remove it, along with any associated RBAC bindings and service accounts that the workload used and that allowed its creation.
Execution: Suspicious Exec or Attach to a System Pod
Someone used the exec or attach commands to get a shell or execute a command on a container running in the kube-system namespace. These methods are sometimes used for legitimate debugging purposes. However, the kube-system namespace is intended for system objects created by Kubernetes, and unexpected command execution or shell creation should be reviewed. For more details, see the log message for this alert.
- Review the audit logs in Cloud Logging to determine if this was expected activity by the principal.
- Determine whether there are other signs of malicious activity by the principal in the logs.
- Review the guidance for using the principle of least privilege for the RBAC roles and cluster roles that allowed this access.
Exfiltration: BigQuery Data Exfiltration
Findings that are returned by the Exfiltration: BigQuery Data Exfiltration rule contain one of two possible subrules. Each subrule has a different severity:
- Subrule exfil_to_external_table with severity HIGH: a resource was saved outside of your organization or project.
- Subrule vpc_perimeter_violation with severity LOW: VPC Service Controls blocked a copy operation or an attempt to access BigQuery resources.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the Exfiltration: BigQuery Data Exfiltration finding, as directed in Reviewing findings. On the Summary tab of the finding details panel, review the listed values in the following sections:
- What was detected:
- Severity: either HIGH for subrule exfil_to_external_table or LOW for subrule vpc_perimeter_violation.
- Principal email: the account used to exfiltrate the data.
- Exfiltration sources: details about the tables from which data was exfiltrated.
- Exfiltration targets: details about the tables where exfiltrated data was stored.
- Affected resource:
- Resource full name: the full resource name of the project, folder, or organization from which data was exfiltrated.
- Related links:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- Chronicle: link to Google SecOps.
Click the Source Properties tab and review the fields shown, especially the following:
- detectionCategory:
  - subRuleName: either exfil_to_external_table or vpc_perimeter_violation.
- evidence:
  - sourceLogId:
    - projectId: the Google Cloud project that contains the source BigQuery dataset.
- properties:
  - dataExfiltrationAttempt:
    - jobLink: the link to the BigQuery job that exfiltrated data.
    - query: the SQL query run on the BigQuery dataset.
Optionally, click the JSON tab for the complete listing of the JSON properties of the finding.
Step 2: Investigate in Google Security Operations
You can use Google Security Operations to investigate this finding. Google SecOps is a Google Cloud service that lets you investigate threats and pivot through related entities in a unified timeline. Google SecOps enriches findings data, letting you identify indicators of interest and simplify investigations.
You can only use Google SecOps if you activate Security Command Center at the organization level.
Go to the Security Command Center Findings page in the Google Cloud console.
In the Quick filters panel, scroll down to Source display name.
In the Source display name section, select Event Threat Detection.
The table populates with findings from Event Threat Detection.
In the table, under Category, click an Exfiltration: BigQuery Data Exfiltration finding. The details panel for the finding opens.
In the Related links section of the finding details panel, click Investigate in Chronicle.
Follow the instructions in Google SecOps's guided user interface.
To conduct investigations, see the guides in the Google SecOps documentation.
Step 3: Review permissions and settings
In the Google Cloud console, go to the IAM page.
If necessary, select the project listed in the projectId field in the finding JSON.
On the page that appears, in the Filter box, enter the email address listed in Principal email and check what permissions are assigned to the account.
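Instead of filtering in the console, you can list just the roles granted to the principal with `gcloud projects get-iam-policy` and its `--flatten`/`--filter` flags, a documented gcloud recipe. The project and principal below are placeholders; the snippet prints the command for review before you run it.

```shell
#!/bin/sh
PROJECT_ID="my-project"        # placeholder: projectId from the finding JSON
MEMBER="user:dev@example.com"  # placeholder: Principal email with member prefix

# Show only this principal's roles in the project.
CMD="gcloud projects get-iam-policy ${PROJECT_ID} --flatten='bindings[].members' --filter='bindings.members:${MEMBER}' --format='table(bindings.role)'"
printf '%s\n' "$CMD"
```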
Step 4: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
Find admin activity logs related to BigQuery jobs by using the following filters:
protoPayload.methodName="Jobservice.insert"
protoPayload.methodName="google.cloud.bigquery.v2.JobService.InsertJob"
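To scope the query to the principal from the finding, the two method names can be combined with `OR` and joined with a principal-email line; the email below is a placeholder.

```shell
#!/bin/sh
PRINCIPAL_EMAIL="dev@example.com"  # placeholder: Principal email value

# BigQuery job insertions by this principal only.
FILTER="(protoPayload.methodName=\"Jobservice.insert\" OR protoPayload.methodName=\"google.cloud.bigquery.v2.JobService.InsertJob\")
protoPayload.authenticationInfo.principalEmail=\"${PRINCIPAL_EMAIL}\""
printf '%s\n' "$FILTER"
```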
Step 5: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service: Exfiltration to Cloud Storage.
- Review related findings by clicking the link in the Related findings row in the Summary tab of the finding details. Related findings are the same finding type on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 6: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with exfiltrated data.
- Consider revoking permissions for userEmail until the investigation is completed.
- To stop further exfiltration, add restrictive IAM policies to the impacted BigQuery datasets (exfiltration.sources and exfiltration.targets).
). - To scan impacted datasets for sensitive information, use Sensitive Data Protection. You can also send Sensitive Data Protection data to Security Command Center. Depending on the quantity of information, Sensitive Data Protection costs can be significant. Follow best practices for keeping Sensitive Data Protection costs under control.
- To limit access to the BigQuery API, use VPC Service Controls.
- To identify and fix overly permissive roles, use IAM Recommender.
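If you decide to revoke the principal's access while you investigate, the removal is a single `gcloud projects remove-iam-policy-binding` call per role. The project, member, and role below are placeholders (check the principal's actual roles first); the snippet prints the command for review rather than executing it.

```shell
#!/bin/sh
PROJECT_ID="my-project"           # placeholder project
MEMBER="user:dev@example.com"     # placeholder principal
ROLE="roles/bigquery.dataViewer"  # placeholder: one of the roles found in IAM

# Remove one role binding from the principal.
CMD="gcloud projects remove-iam-policy-binding ${PROJECT_ID} --member=${MEMBER} --role=${ROLE}"
printf '%s\n' "$CMD"
```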
Exfiltration: BigQuery Data Extraction
Data exfiltration from BigQuery is detected by examining audit logs for two scenarios:
- A resource is saved to a Cloud Storage bucket outside of your organization.
- A resource is saved to a publicly accessible Cloud Storage bucket owned by your organization.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To respond to this finding, do the following:
Step 1: Review finding details
- Open an Exfiltration: BigQuery Data Extraction finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the listed values in the following sections:
- What was detected:
- Principal email: the account used to exfiltrate the data.
- Exfiltration sources: details about the tables from which data was exfiltrated.
- Exfiltration targets: details about the tables where exfiltrated data was stored.
- Affected resource:
- Resource full name: the name of the BigQuery resource whose data was exfiltrated.
- Project full name: the Google Cloud project that contains the source BigQuery dataset.
- Related links:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
In the finding details panel, click the JSON tab.
In the JSON, note the following fields:
- sourceProperties:
  - evidence:
    - sourceLogId:
      - projectId: the Google Cloud project that contains the source BigQuery dataset.
  - properties:
    - extractionAttempt:
      - jobLink: the link to the BigQuery job that exfiltrated data.
Step 2: Review permissions and settings
In the Google Cloud console, go to the IAM page.
If necessary, select the project listed in the projectId field in the finding JSON (from Step 1).
On the page that appears, in the Filter box, enter the email address listed in Principal email (from Step 1) and check what permissions are assigned to the account.
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
- Find admin activity logs related to BigQuery jobs by using
the following filters:
protoPayload.methodName="Jobservice.insert"
protoPayload.methodName="google.cloud.bigquery.v2.JobService.InsertJob"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service: Exfiltration to Cloud Storage.
- Review related findings by clicking the link on the Related findings row in the Summary tab of the finding details. Related findings are the same finding type on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with exfiltrated data.
- Consider revoking permissions for the principal listed on the Principal email row in the Summary tab of the finding details until the investigation is completed.
- To stop further exfiltration, add restrictive IAM policies to the impacted BigQuery datasets that are identified in the Exfiltration sources field on the Summary tab of the finding details.
- To scan impacted datasets for sensitive information, use Sensitive Data Protection. You can also send Sensitive Data Protection data to Security Command Center. Depending on the quantity of information, Sensitive Data Protection costs can be significant. Follow best practices for keeping Sensitive Data Protection costs under control.
- To limit access to the BigQuery API, use VPC Service Controls.
- If you are the owner of the bucket, consider revoking public access permissions.
- To identify and fix overly permissive roles, use IAM Recommender.
Exfiltration: BigQuery Data to Google Drive
Data exfiltration from BigQuery is detected by examining audit logs for the following scenario:
- A resource is saved to a Google Drive folder.
To respond to this finding, do the following:
Step 1: Review finding details
- Open an Exfiltration: BigQuery Data to Google Drive finding, as directed in Reviewing findings. On the Summary tab of the finding details panel, review the information in the following sections:
- What was detected, including:
- Principal email: the account used to exfiltrate the data.
- Exfiltration sources: details about the BigQuery table from which data was exfiltrated.
- Exfiltration targets: details about the destination in Google Drive.
- Affected resource, including:
- Resource full name: the name of the BigQuery resource whose data was exfiltrated.
- Project full name: the Google Cloud project that contains the source BigQuery dataset.
- Related links, including:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
For additional information, click the JSON tab.
In the JSON, note the following fields:
- sourceProperties:
  - evidence:
    - sourceLogId:
      - projectId: the Google Cloud project that contains the source BigQuery dataset.
  - properties:
    - extractionAttempt:
      - jobLink: the link to the BigQuery job that exfiltrated data.
Step 2: Review permissions and settings
In the Google Cloud console, go to the IAM page.
If necessary, select the project listed in the projectId field in the finding JSON (from Step 1).
On the page that appears, in the Filter box, enter the email address listed in access.principalEmail (from Step 1) and check what permissions are assigned to the account.
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
- Find admin activity logs related to BigQuery jobs by using
the following filters:
protoPayload.methodName="Jobservice.insert"
protoPayload.methodName="google.cloud.bigquery.v2.JobService.InsertJob"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service: Exfiltration to Cloud Storage.
- Review related findings by clicking the link on the Related findings row in the Summary tab of the finding details. Related findings have the same finding type on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with exfiltrated data.
- Consider revoking permissions for the principal in the access.principalEmail field until the investigation is completed.
- To stop further exfiltration, add restrictive IAM policies to the impacted BigQuery datasets (exfiltration.sources).
- To scan impacted datasets for sensitive information, use Sensitive Data Protection. You can also send Sensitive Data Protection data to Security Command Center. Depending on the quantity of information, Sensitive Data Protection costs can be significant. Follow best practices for keeping Sensitive Data Protection costs under control.
- To limit access to the BigQuery API, use VPC Service Controls.
- To identify and fix overly permissive roles, use IAM Recommender.
Exfiltration: Cloud SQL Data Exfiltration
Data exfiltration from Cloud SQL is detected by examining audit logs for two scenarios:
- Live instance data exported to a Cloud Storage bucket outside the organization.
- Live instance data exported to a Cloud Storage bucket that is owned by the organization and is publicly accessible.
All Cloud SQL instance types are supported.
For project-level activations of the Security Command Center Premium tier, this finding is available only if the Standard tier is enabled in the parent organization.
To respond to this finding, do the following:
Step 1: Review finding details
- Open an
Exfiltration: Cloud SQL Data Exfiltration
finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account used to exfiltrate the data.
- Exfiltration sources: details about the Cloud SQL instance whose data was exfiltrated.
- Exfiltration targets: details about the Cloud Storage bucket the data was exported to.
- Affected resource, especially the following fields:
- Resource full name: the resource name of the Cloud SQL whose data was exfiltrated.
- Project full name: the Google Cloud project that contains the source Cloud SQL data.
- Related links, including:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Click the JSON tab.
In the JSON for the finding, note the following fields:
- sourceProperties:evidence:sourceLogId:projectId: the Google Cloud project that contains the source Cloud SQL instance.
- properties:bucketAccess: whether the Cloud Storage bucket is publicly accessible or external to the organization.
- properties:exportScope: how much of the data was exported (such as the whole instance, one or more databases, one or more tables, or a subset specified by a query).
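When processing these findings in bulk, the two properties above can drive triage priority: an export to a publicly accessible bucket usually warrants the most urgent response, and a whole-instance export exposes more data than a single table or query result. A sketch with illustrative values (the actual strings in real findings may use different casing or enums):

```python
# Illustrative finding properties; real values may differ in casing or wording.
properties = {"bucketAccess": "PUBLIC", "exportScope": "DATABASE"}

# Treat exports to public buckets as the most urgent case.
is_public_bucket = properties["bucketAccess"] == "PUBLIC"
priority = "critical" if is_public_bucket else "high"
```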
Step 2: Review permissions and settings
In the Google Cloud console, go to the IAM page.
If necessary, select the project of the instance listed in the projectId field in the finding JSON (from Step 1).
On the page that appears, in the Filter box, enter the email address listed on the Principal email row in the Summary tab of the finding details (from Step 1). Check what permissions are assigned to the account.
Step 3: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI (from Step 1). The Logs Explorer page includes all logs related to the relevant Cloud SQL instance.
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service: Exfiltration to Cloud Storage.
- Review related findings by clicking the link on the Related findings row (described in Step 1). Related findings have the same finding type on the same Cloud SQL instance.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with exfiltrated data.
- Consider revoking permissions for access.principalEmail until the investigation is completed.
- To stop further exfiltration, add restrictive IAM policies to the impacted Cloud SQL instances.
- To limit access to and export from the Cloud SQL Admin API, use VPC Service Controls.
- To identify and fix overly permissive roles, use IAM Recommender.
Exfiltration: Cloud SQL Restore Backup to External Organization
Data exfiltration from a Cloud SQL backup is detected by examining audit logs to determine whether data from the backup has been restored to a Cloud SQL instance outside the organization or project. All Cloud SQL instance and backup types are supported.
To respond to this finding, do the following:
Step 1: Review finding details
- Open an
Exfiltration: Cloud SQL Restore Backup to External Organization
finding, as directed in Reviewing findings. On the Summary tab of the finding details panel, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account used to exfiltrate the data.
- Exfiltration sources: details about the Cloud SQL instance the backup was created from.
- Exfiltration targets: details about the Cloud SQL instance the backup data was restored to.
- Affected resource, especially the following fields:
- Resource full name: the resource name of the backup that was restored.
- Project full name: the Google Cloud project that contains the Cloud SQL instance that the backup was created from.
Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Click the JSON tab.
In the JSON, note the following fields:
- resource:parent_name: the resource name of the Cloud SQL instance the backup was created from.
- evidence:sourceLogId:projectId: the Google Cloud project that contains the Cloud SQL instance that the backup was created from.
- properties:restoreToExternalInstance:backupId: the ID of the backup run that was restored.
Step 2: Review permissions and settings
In the Google Cloud console, go to the IAM page.
If necessary, select the project of the instance that is listed in the projectId field in the finding JSON (from Step 1).
On the page that appears, in the Filter box, enter the email address listed in Principal email (from Step 1) and check what permissions are assigned to the account.
Step 3: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI (from Step 1). The Logs Explorer page includes all logs related to the relevant Cloud SQL instance.
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service: Exfiltration to Cloud Storage.
- Review related findings by clicking the link on the Related findings row (from Step 1). Related findings have the same finding type on the same Cloud SQL instance.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with exfiltrated data.
- Consider revoking permissions for the principal that is listed on the Principal email row in the Summary tab of the finding details until the investigation is completed.
- To stop further exfiltration, add restrictive IAM policies to the impacted Cloud SQL instances.
- To limit access to the Cloud SQL Admin API, use VPC Service Controls.
- To identify and fix overly permissive roles, use IAM Recommender.
Exfiltration: Cloud SQL Over-Privileged Grant
Detects when all privileges over a PostgreSQL database (or all functions or procedures in a database) are granted to one or more database users.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Exfiltration: Cloud SQL Over-Privileged Grant
finding, as directed in Reviewing findings. On the Summary tab of the finding details panel, review the information in the following sections:
- What was detected, especially the following fields:
- Database display name: the name of the database in the Cloud SQL PostgreSQL instance that was affected.
- Database user name: the PostgreSQL user who granted excess privileges.
- Database query: the PostgreSQL query executed that granted the privileges.
- Database grantees: the grantees of the overbroad privileges.
- Affected resource, especially the following fields:
- Resource full name: the resource name of the Cloud SQL PostgreSQL instance that was affected.
- Parent full name: the resource name of the Cloud SQL PostgreSQL instance.
- Project full name: the Google Cloud project that contains the Cloud SQL PostgreSQL instance.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
To see the complete JSON for the finding, click the JSON tab.
Step 2: Review database privileges
- Connect to the PostgreSQL database.
- List and show access privileges for the following:
  - Databases. Use the \l or \list metacommand and check what privileges are assigned for the database listed in Database display name (from Step 1).
  - Functions or procedures. Use the \df metacommand and check what privileges are assigned for functions or procedures in the database listed in Database display name (from Step 1).
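The Access privileges column in \l output uses PostgreSQL's aclitem format, grantee=privileges/grantor, where an empty grantee means PUBLIC and, for a database, the letters C, T, and c stand for CREATE, TEMPORARY, and CONNECT. A small parser like the following (a sketch, not an official tool; verify the letter codes against your PostgreSQL version) can help scan dumped ACLs for all-privilege grants:

```python
def parse_aclitem(item: str) -> dict:
    """Parse a PostgreSQL aclitem string such as 'alice=CTc/postgres'.

    An empty grantee means PUBLIC. For databases, C=CREATE, T=TEMPORARY,
    c=CONNECT; holding all three is the full database-level privilege set.
    """
    grantee_privs, _, grantor = item.partition("/")
    grantee, _, privileges = grantee_privs.partition("=")
    return {"grantee": grantee or "PUBLIC", "privileges": privileges, "grantor": grantor}

acl = parse_aclitem("alice=CTc/postgres")
# Flag grantees that hold every database-level privilege.
has_all_database_privileges = set("CTc") <= set(acl["privileges"])
```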
Step 3: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI (from Step 1). The Logs Explorer page includes all logs related to the relevant Cloud SQL instance.
- In the Logs Explorer, check the PostgreSQL pgaudit logs, which record executed queries to the database, by using the following filter:
protoPayload.request.database="DATABASE"
Replace DATABASE with the database name listed in Database display name (from Step 1).
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service.
- To determine if additional remediation steps are necessary, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the instance with overprivileged grants.
- Consider revoking all permissions for the grantees that are listed in Database grantees until the investigation is completed.
- To limit access to the database (from Database display name in Step 1), revoke unnecessary permissions from the grantees (from Database grantees in Step 1).
Initial Access: Database Superuser Writes to User Tables
Detects when the Cloud SQL database superuser account (postgres
for PostgreSQL and root
for MySQL) writes to user
tables. The superuser (a role with very broad access) generally shouldn't be
used to write to user tables. A user account with more limited access should be used
for normal daily activity. When a superuser writes to a user table, that could
indicate that an attacker has escalated privileges or has compromised the
default database user and is modifying data. It could also indicate normal but
unsafe practices.
To respond to this finding, do the following:
Step 1: Review finding details
- Open an
Initial Access: Database Superuser Writes to User Tables
finding, as directed in Reviewing findings. On the Summary tab of the finding details panel, review the information in the following sections:
- What was detected, especially the following fields:
- Database display name: the name of the database in the Cloud SQL PostgreSQL or MySQL instance that was affected.
- Database user name: the superuser.
- Database query: the SQL query executed while writing to user tables.
- Affected resource, especially the following fields:
- Resource full name: the resource name of the Cloud SQL instance that was affected.
- Parent full name: the resource name of the Cloud SQL instance.
- Project full name: the Google Cloud project that contains the Cloud SQL instance.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
To see the complete JSON for the finding, click the JSON tab.
Step 2: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in cloudLoggingQueryURI (from Step 1). The Logs Explorer page includes all logs related to the relevant Cloud SQL instance.
- Check for PostgreSQL pgaudit logs or Cloud SQL for MySQL audit logs, which contain the queries executed by the superuser, by using the following filter:
protoPayload.request.user="SUPERUSER"
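If you export the matching log entries, a quick pass like the following can separate superuser writes from reads before you dig into individual statements. The entry shape below is illustrative, not the exact audit-log schema:

```python
WRITE_KEYWORDS = ("INSERT", "UPDATE", "DELETE", "TRUNCATE")
SUPERUSERS = {"postgres", "root"}  # default superusers named in this finding

def superuser_writes(entries):
    """Yield log entries for write statements issued by a superuser.

    `entries` loosely mirrors protoPayload.request fields; the shape is
    illustrative only.
    """
    for entry in entries:
        statement = entry.get("statement", "").lstrip().upper()
        if entry.get("user") in SUPERUSERS and statement.startswith(WRITE_KEYWORDS):
            yield entry

entries = [
    {"user": "postgres", "statement": "UPDATE accounts SET balance = 0"},
    {"user": "app_user", "statement": "SELECT * FROM accounts"},
]
hits = list(superuser_writes(entries))
```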
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service.
- To determine if additional remediation steps are necessary, combine your investigation results with MITRE research.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
Review the users allowed to connect to the database.
- For PostgreSQL, see Create and manage users
- For MySQL, see Manage users with built-in authentication
Consider changing the password for the superuser.
- For PostgreSQL, see Set the password for the default user
- For MySQL, see Set the password for the default user
Consider creating a new, limited access user for the different types of queries used on the instance.
Grant the new user only the necessary permissions needed to execute their queries.
- For PostgreSQL, see Grant (command)
- For MySQL, see Access Control and Account Management
Update the credentials for the clients that connect to the Cloud SQL instance.
Initial Access: Anonymous GKE resource created from the internet
Detects when a potentially malicious actor used one of the following Kubernetes default users or user groups to create a new Kubernetes resource in the cluster:
system:anonymous
system:authenticated
system:unauthenticated
These users and groups are effectively anonymous. A role-based access control (RBAC) binding in your cluster granted that user permission to create those resources in the cluster.
Review the created resource and the associated RBAC binding to ensure that the binding is necessary. If the binding isn't necessary, remove it. For more details, see the log message for this finding.
To mitigate this issue, see Avoid default roles and groups.
Initial Access: GKE resource modified anonymously from the internet
Detects when a potentially malicious actor used one of the following Kubernetes default users or user groups to modify a Kubernetes resource in the cluster:
system:anonymous
system:authenticated
system:unauthenticated
These users and groups are effectively anonymous. A role-based access control (RBAC) binding in your cluster granted that user permission to modify those resources in the cluster.
Review the modified resource and the associated RBAC binding to ensure that the binding is necessary. If the binding isn't necessary, remove it. For more details, see the log message for this finding.
To mitigate this issue, see Avoid default roles and groups.
Initial Access: Dormant Service Account Action
Detects events where a dormant user-managed service account triggered an action. In this context, a service account is considered dormant if it has been inactive for more than 180 days.
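The 180-day dormancy window can be checked against last-authentication timestamps that you obtain from tools such as Activity Analyzer. A minimal sketch of the comparison:

```python
from datetime import datetime, timedelta, timezone

DORMANCY_THRESHOLD = timedelta(days=180)  # the threshold used for this finding

def is_dormant(last_authentication, now=None):
    """Return True if the account has been inactive for more than 180 days."""
    now = now or datetime.now(timezone.utc)
    return now - last_authentication > DORMANCY_THRESHOLD

reference = datetime(2024, 7, 1, tzinfo=timezone.utc)
dormant = is_dormant(datetime(2023, 1, 1, tzinfo=timezone.utc), reference)        # True
recently_used = is_dormant(datetime(2024, 6, 1, tzinfo=timezone.utc), reference)  # False
```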
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Initial Access: Dormant Service Account Action
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the dormant service account that performed the action
- Service name: the API name of the Google Cloud service that was accessed by the service account
- Method name: the method that was called
Step 2: Research attack and response methods
- Use service account tools, like Activity Analyzer, to investigate the activity of the dormant service account.
- Contact the owner of the service account in the Principal email field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Consider deleting the potentially compromised service account and rotating and deleting all service account access keys for the potentially compromised project. After deletion, applications that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted applications and work with application owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Google Cloud Support.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
Initial Access: Dormant Service Account Key Created
Detects events where a dormant user-managed service account key is created. In this context, a service account is considered dormant if it has been inactive for more than 180 days.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Initial Access: Dormant Service Account Key Created
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the user who created the service account key
Under Affected resource:
- Resource display name: the newly created dormant service account key
- Project full name: the project where that dormant service account resides
Step 2: Research attack and response methods
- Use service account tools, like Activity Analyzer, to investigate the activity of the dormant service account.
- Contact the owner of the account in the Principal email field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- If the account in the Principal email field is compromised, remove its access.
- Invalidate the newly created service account key in the Service Accounts Page.
- Consider deleting the potentially compromised service account and rotating and deleting all service account access keys for the potentially compromised project. After deletion, applications that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted applications and work with application owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Cloud Customer Care.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM recommender.
Initial Access: Leaked Service Account Key Used
Detects events where a leaked service account key is used to authenticate an action. In this context, a leaked service account key is one that was posted on the public internet. For example, service account keys are often mistakenly posted in public GitHub repositories.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Initial Access: Leaked Service Account Key Used
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the service account used in this action
- Service name: the API name of the Google Cloud service that was accessed by the service account
- Method name: the method name of the action
- Service account key name: the leaked service account key used to authenticate this action
- Description: the description of what was detected, including the location on the public internet where the service account key can be found
Under Affected resource:
- Resource display name: the resource involved in the action
Step 2: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI.
- On the Google Cloud console toolbar, select your project or organization.
On the page that loads, find related logs by using the following filter:
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
protoPayload.authenticationInfo.serviceAccountKeyName="SERVICE_ACCOUNT_KEY_NAME"
Replace PRINCIPAL_EMAIL with the value that you noted in the Principal email field in the finding details. Replace SERVICE_ACCOUNT_KEY_NAME with the value that you noted in the Service account key name field in the finding details.
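The placeholder substitution described above is mechanical and easy to script when you investigate several keys. A sketch with hypothetical values (the email and key name below are illustrations, not real identifiers):

```python
FILTER_TEMPLATE = (
    'protoPayload.authenticationInfo.principalEmail="{principal_email}"\n'
    'protoPayload.authenticationInfo.serviceAccountKeyName="{key_name}"'
)

def build_filter(principal_email: str, key_name: str) -> str:
    """Substitute the finding's values into the Logs Explorer filter."""
    return FILTER_TEMPLATE.format(principal_email=principal_email, key_name=key_name)

# Hypothetical values for illustration only.
query = build_filter(
    "sa-name@example-project.iam.gserviceaccount.com",
    "projects/example-project/serviceAccounts/"
    "sa-name@example-project.iam.gserviceaccount.com/keys/1234abcd",
)
```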
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Revoke the service account key immediately in the Service Accounts page.
- Take down the web page or GitHub repository where the service account key is posted.
- Consider deleting the compromised service account.
- Rotate and delete all service account access keys for the potentially compromised project. After deletion, applications that use the service account for authentication lose access. Before deleting, your security team should identify all impacted applications and work with application owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Cloud Customer Care.
Initial Access: Excessive Permission Denied Actions
Detects events where a principal repeatedly triggers permission denied errors across multiple methods and services.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Initial Access: Excessive Permission Denied Actions
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the principal that triggered multiple permission denied errors
- Service name: the API name of the Google Cloud service where the last permission denied error occurred
- Method name: the method called when the last permission denied error occurred
In the finding details, on the Source Properties tab, note the values of the following fields in the JSON:
- properties.failedActions: the permission denied errors that occurred. For each entry, details include the service name, method name, number of failed attempts, and the time the error last occurred. A maximum of 10 entries are shown.
Step 2: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI.
- On the Google Cloud console toolbar, select your project.
On the page that loads, find related logs by using the following filter:
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
protoPayload.status.code=7
Replace PRINCIPAL_EMAIL with the value that you noted in the Principal email field in the finding details.
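Status code 7 is the gRPC PERMISSION_DENIED code. If you export the matching entries, you can rebuild a per-service, per-method summary similar to the finding's properties.failedActions field. The entry shape below is illustrative, not the exact audit-log schema:

```python
from collections import Counter

PERMISSION_DENIED = 7  # gRPC status code reported in protoPayload.status.code

def summarize_denials(entries):
    """Count permission-denied errors per (service, method) pair,
    mirroring the finding's failedActions summary."""
    counts = Counter()
    for entry in entries:
        if entry["status_code"] == PERMISSION_DENIED:
            counts[(entry["service"], entry["method"])] += 1
    return counts.most_common(10)  # the finding shows at most 10 entries

entries = [
    {"service": "storage.googleapis.com", "method": "storage.buckets.list", "status_code": 7},
    {"service": "storage.googleapis.com", "method": "storage.buckets.list", "status_code": 7},
    {"service": "compute.googleapis.com", "method": "v1.compute.instances.list", "status_code": 7},
]
top = summarize_denials(entries)
```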
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts: Cloud Accounts.
- To develop a response plan, combine your investigation results with MITRE research.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the account in the Principal email field. Confirm whether the legitimate owner conducted the action.
- Delete project resources created by that account, such as unfamiliar Compute Engine instances, snapshots, service accounts, and IAM users.
- Contact the owner of the project with the account, and potentially delete or disable the account.
Brute Force: SSH
Detection of successful brute force of SSH on a host. To respond to this finding, do the following:
Step 1: Review finding details
- Open a
Brute Force: SSH
finding, as directed in Reviewing findings. On the Summary tab of the finding details panel, review the information in the following sections:
What was detected, especially the following fields:
- Caller IP: the IP address that launched the attack.
- User name: the account that logged in.
Affected resource
Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- Chronicle: link to Google SecOps.
Click the JSON tab.
In the JSON, note the following fields:
- sourceProperties:evidence:sourceLogId: the project ID and timestamp to identify the log entry.
- sourceProperties:evidence:sourceLogId:projectId: the project that contains the finding.
- properties:attempts:Attempts: the number of login attempts.
- properties:attempts:username: the account that logged in.
- properties:attempts:vmName: the name of the virtual machine.
- properties:attempts:authResult: the SSH authentication result.
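When triaging several of these findings, the properties above can be filtered for attempts that actually succeeded, which are the highest-priority leads. This sketch assumes attempts is a list of entries; confirm the structure against your own finding JSON:

```python
# Illustrative finding properties matching the fields above; verify the
# exact structure against your own finding JSON.
finding_properties = {
    "attempts": [{
        "Attempts": 32,
        "username": "admin",
        "vmName": "web-frontend-1",
        "authResult": "SUCCESS",
    }]
}

# Entries where authentication succeeded warrant immediate investigation.
successful_logins = [
    attempt for attempt in finding_properties["attempts"]
    if attempt["authResult"] == "SUCCESS"
]
```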
Step 2: Investigate in Google Security Operations
You can use Google Security Operations to investigate this finding. Google SecOps is a Google Cloud service that lets you investigate threats and pivot through related entities in an easy to use timeline. Google SecOps enriches findings data, letting you identify indicators of interest and simplify investigations.
You can only use Google SecOps if you activate Security Command Center at the organization level.
Go to the Security Command Center Findings page in the Google Cloud console.
In the Quick filters panel, scroll down to Source display name.
In the Source display name section, select Event Threat Detection.
The table populates with findings for the source type you selected.
In the table, under Category, click a Brute Force: SSH finding. The details panel for the finding opens.
In the Related links section of the finding details panel, click Investigate in Chronicle.
Follow the instructions in the Google SecOps guided user interface.
Use the following guides to conduct investigations in Google SecOps:
Step 3: Review permissions and settings
In the Google Cloud console, go to the Dashboard.
Select the project that is specified in projectId.
Navigate to the Resources card and click Compute Engine.
Click the VM instance that matches the name and zone in vmName. Review instance details, including network and access settings.
In the navigation pane, click VPC Network, then click Firewall. Remove or disable overly permissive firewall rules on port 22.
Step 4: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI.
- On the page that loads, find logs related to the IP address that is listed on the Caller IP row in the Summary tab of the finding details by using the following filter:
logName="projects/projectId/logs/syslog"
labels."compute.googleapis.com/resource_name"="vmName"
Step 5: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts: Local Accounts.
- Review related findings by clicking the link on the Related findings row in the Summary tab of the finding details. Related findings have the same finding type on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 6: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the successful brute force attempt.
- Investigate the potentially compromised instance and remove any discovered malware. To assist with detection and removal, use an endpoint detection and response solution.
- Consider disabling SSH access to the VM. For information about disabling SSH keys, see Restrict SSH keys from VMs. This step could interrupt authorized access to the VM, so consider the needs of your organization before you proceed.
- Only use SSH authentication with authorized keys.
- Block the malicious IP addresses by updating firewall rules or by using Google Cloud Armor. You can enable Google Cloud Armor on the Security Command Center Integrated Services page. Depending on the quantity of information, Google Cloud Armor costs can be significant. See the Google Cloud Armor pricing guide for more information.
Credential Access: External Member Added To Privileged Group
This finding isn't available for project-level activations.
Detects when an external member is added to a privileged Google Group (a group granted sensitive roles or permissions). To respond to this finding, do the following:
Step 1: Review finding details
Open a Credential Access: External Member Added To Privileged Group finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the changes.
- Affected resource
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
In the detail panel, click the JSON tab.
In the JSON, note the following fields:
- groupName: the Google Group where the changes were made
- externalMember: the newly added external member
- sensitiveRoles: the sensitive roles associated with this group
Step 2: Review group members
Go to Google Groups.
Click the name of the group you want to review.
In the navigation menu, click Members.
If the newly added external member should not be in this group, click the checkbox next to the member's name, and then select Remove member or Ban member.
To remove or ban members, you must be a Google Workspace Admin, or be assigned the Owner or Manager role in the Google Group. For more information, see Assign roles to a group's members.
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
If necessary, select your project.
On the page that loads, check logs for Google Group membership changes using the following filters:
protoPayload.methodName="google.apps.cloudidentity.groups.v1.MembershipsService.UpdateMembership"
protoPayload.authenticationInfo.principalEmail="principalEmail"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts.
- To determine if additional remediation steps are necessary, combine your investigation results with MITRE research.
Credential Access: Failed Attempt to Approve Kubernetes Certificate Signing Request (CSR)
Someone attempted to manually approve a certificate signing request (CSR), but the action failed. Creating a certificate for cluster authentication is a common method for attackers to create persistent access to a compromised cluster. The permissions associated with the certificate vary depending on the subject included in the request, but can be highly privileged. For more details, see the log message for this alert. To respond to this finding, do the following:
- Review the audit logs in Cloud Logging and additional alerts for other CSR-related events to determine whether any CSR was approved and issued, and whether CSR-related actions are expected activity by the principal.
- Determine whether there are other signs of malicious activity by the principal in the audit logs in Cloud Logging. For example:
  - Was the principal who attempted to approve the CSR different from the one who created it?
  - Has the principal tried requesting, creating, approving, or deleting other CSRs?
- If a CSR approval was not expected, or is determined to be malicious, the cluster requires a credential rotation to invalidate the certificate. Review the guidance for performing a rotation of your cluster credentials.
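The creator-versus-approver check above can be sketched as follows, assuming audit-log entries reduced to simple dicts. The entry shape, CSR names, and principals are hypothetical illustrations, not the real Cloud Logging schema.

```python
# Hypothetical sketch: flag CSRs whose approver differs from their creator.
def csrs_approved_by_other_principal(entries):
    creators, approvers = {}, {}
    for e in entries:
        if e["action"] == "create":
            creators[e["csr"]] = e["principal"]
        elif e["action"] == "approve":
            approvers[e["csr"]] = e["principal"]
    # A mismatch between creator and approver warrants a closer look.
    return sorted(
        csr for csr, who in approvers.items()
        if creators.get(csr) not in (None, who)
    )

entries = [
    {"csr": "node-csr-1", "principal": "dev@example.com", "action": "create"},
    {"csr": "node-csr-1", "principal": "admin@example.com", "action": "approve"},
    {"csr": "node-csr-2", "principal": "ops@example.com", "action": "create"},
    {"csr": "node-csr-2", "principal": "ops@example.com", "action": "approve"},
]
print(csrs_approved_by_other_principal(entries))  # → ['node-csr-1']
```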
Credential Access: Manually Approved Kubernetes Certificate Signing Request (CSR)
Someone manually approved a certificate signing request (CSR). Creating a certificate for cluster authentication is a common method for attackers to create persistent access to a compromised cluster. The permissions associated with the certificate vary depending on the subject included in the request, but can be highly privileged. For more details, see the log message for this alert. To respond to this finding, do the following:
- Review the audit logs in Cloud Logging and additional alerts for other CSR related events to determine if CSR related actions are expected activity by the principal.
- Determine whether there are other signs of malicious activity by the principal in the audit logs in Cloud Logging. For example:
- Was the principal who approved the CSR different from the one who created it?
- Did the CSR specify a built-in signer, but ultimately need to be manually approved because it did not meet the signer's criteria?
- Has the principal tried requesting, creating, approving, or deleting other CSRs?
- If a CSR approval was not expected, or is determined to be malicious, the cluster requires a credential rotation to invalidate the certificate. Review the guidance for performing a rotation of your cluster credentials.
Credential Access: Privileged Group Opened To Public
This finding isn't available for project-level activations.
Detects when a privileged Google Group (a group granted sensitive roles or permissions) is changed to be accessible to the general public. To respond to this finding, do the following:
Step 1: Review finding details
Open a Credential Access: Privileged Group Opened To Public finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the changes, which might be compromised.
- Affected resource
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- Click the JSON tab.
- In the JSON, note the following fields:
  - groupName: the Google Group where the changes were made
  - sensitiveRoles: the sensitive roles associated with this group
  - whoCanJoin: the joinability setting of the group
Step 2: Review group access settings
Go to the Admin Console for Google Groups. You must be a Google Workspace Admin to sign in to the console.
In the navigation pane, click Directory, and then select Groups.
Click the name of the group you want to review.
Click Access Settings, and then, under Who can join the group, review the group's joinability setting.
In the drop-down menu, if needed, change the joinability setting.
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
If necessary, select your project.
On the page that loads, check logs for Google Group settings changes using the following filters:
protoPayload.methodName="google.admin.AdminService.changeGroupSetting"
protoPayload.authenticationInfo.principalEmail="principalEmail"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts.
- To determine if additional remediation steps are necessary, combine your investigation results with MITRE research.
Credential Access: Secrets Accessed in Kubernetes Namespace
Detects when a Pod's default Kubernetes service account was used to access Secret objects in the cluster. The default Kubernetes service account shouldn't have access to Secret objects unless you explicitly granted that access with a Role object or a ClusterRole object.
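As a rough illustration of that explicit-grant check, the following sketch inspects RBAC rules for read access to Secrets. The rule dicts mirror the shape of a Role's rules field, and the sample role is hypothetical.

```python
# Hypothetical sketch: does any rule in a Role/ClusterRole grant read
# access to Secret objects?
READ_VERBS = {"get", "list", "watch", "*"}

def grants_secret_read(rules) -> bool:
    for rule in rules:
        resources = rule.get("resources", [])
        verbs = rule.get("verbs", [])
        # A wildcard resource or verb also covers Secrets.
        if ("secrets" in resources or "*" in resources) and \
                READ_VERBS.intersection(verbs):
            return True
    return False

role_rules = [
    {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]},
    {"apiGroups": [""], "resources": ["secrets"], "verbs": ["get"]},
]
print(grants_secret_read(role_rules))  # → True
```

If no Role or ClusterRole bound to the default service account passes a check like this, access to Secrets by that account is unexpected.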
Credential Access: Sensitive Role Granted To Hybrid Group
Detects when sensitive roles or permissions are granted to a Google Group with external members. To respond to this finding, do the following:
Step 1: Review finding details
Open a Credential Access: Sensitive Role Granted To Hybrid Group finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the changes, which might be compromised.
- Affected resource, especially the following fields:
- Resource full name: the resource where the new role was granted.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- Click the JSON tab.
- In the JSON, note the following fields:
  - groupName: the Google Group where the changes were made
  - bindingDeltas: the sensitive roles that are newly granted to this group
Step 2: Review group permissions
Go to the IAM page in the Google Cloud console.
In the Filter field, enter the account name listed in groupName.
Review the sensitive roles granted to the group.
If the newly added sensitive role isn't needed, revoke the role.
You need specific permissions to manage roles in your organization or project. For more information, see Required permissions.
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
If necessary, select your project.
On the page that loads, check logs for Google Group settings changes using the following filters:
protoPayload.methodName="SetIamPolicy"
protoPayload.authenticationInfo.principalEmail="principalEmail"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts.
- To determine if additional remediation steps are necessary, combine your investigation results with MITRE research.
Malware: Cryptomining Bad IP
Malware is detected by examining VPC Flow Logs and Cloud DNS logs for connections to known command and control domains and IP addresses. To respond to this finding, do the following:
Step 1: Review finding details
Open a Malware: Cryptomining Bad IP finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Source IP: the suspected cryptomining IP address.
- Source port: the source port of the connection, if available.
- Destination IP: the target IP address.
- Destination port: the destination port of the connection, if available.
- Protocol: the IANA protocol that is associated with the connection.
- Affected resource
- Related links, including the following fields:
- Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- Flow Analyzer (Preview): link to the Flow Analyzer feature of Network Intelligence Center. This field displays only when VPC Flow Logs is enabled.
In the detail view of the finding, click the Source properties tab.
Expand properties and note the project and instance values in the following field:
- instanceDetails: the project ID and the name of the Compute Engine instance, which appear as shown in the following example:
  /projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME
To see the complete JSON for the finding, click the JSON tab.
Step 2: Review permissions and settings
In the Google Cloud console, go to the Dashboard page.
Select the project that is specified in properties_project_id.
Navigate to the Resources card and click Compute Engine.
Click the VM instance that matches properties_sourceInstance. Investigate the potentially compromised instance for malware.
In the navigation pane, click VPC Network, then click Firewall. Remove or disable overly permissive firewall rules.
Step 3: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select your project.
On the page that loads, find VPC Flow Logs related to properties_ip_0 by using the following filter:
logName="projects/properties_project_id/logs/compute.googleapis.com%2Fvpc_flows"
(jsonPayload.connection.src_ip="properties_ip_0" OR jsonPayload.connection.dest_ip="properties_ip_0")
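If you assemble this filter programmatically, a minimal sketch might look like the following; the project ID and IP address are placeholder values, not data from a real finding.

```python
# Hypothetical sketch: build the VPC Flow Logs filter from the project
# ID and the suspect IP address taken from the finding.
def vpc_flow_filter(project_id: str, ip: str) -> str:
    log_name = f"projects/{project_id}/logs/compute.googleapis.com%2Fvpc_flows"
    # Match flows where the IP appears as either source or destination.
    return (
        f'logName="{log_name}"\n'
        f'(jsonPayload.connection.src_ip="{ip}" OR '
        f'jsonPayload.connection.dest_ip="{ip}")'
    )

print(vpc_flow_filter("my-project", "203.0.113.5"))
```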
Step 4: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Resource Hijacking.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project containing malware.
- Investigate the potentially compromised instance and remove any discovered malware. To assist with detection and removal, use an endpoint detection and response solution.
- If necessary, stop the compromised instance and replace it with a new instance.
- Block the malicious IP addresses by updating firewall rules or by using Google Cloud Armor. You can enable Google Cloud Armor on the Security Command Center Integrated Services page. Depending on the data volume, Google Cloud Armor costs can be significant. See the Google Cloud Armor pricing guide for more information.
Initial Access: Log4j Compromise Attempt
This finding is generated when Java Naming and Directory Interface (JNDI) lookups within headers or URL parameters are detected. These lookups may indicate attempts at Log4Shell exploitation. To respond to this finding, do the following:
Step 1: Review finding details
Open an Initial Access: Log4j Compromise Attempt finding, as directed in Reviewing finding details. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected
- Affected resource
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- In the detail view of the finding, click the JSON tab.
- In the JSON, note the following fields under properties:
  - loadBalancerName: the name of the load balancer that received the JNDI lookup
  - requestUrl: the request URL of the HTTP request. If present, this contains a JNDI lookup.
  - requestUserAgent: the user agent that sent the HTTP request. If present, this contains a JNDI lookup.
  - refererUrl: the URL of the page that sent the HTTP request. If present, this contains a JNDI lookup.
Step 2: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field from step 1.
- On the page that loads, check the httpRequest fields for string tokens like ${jndi:ldap:// that may indicate possible exploitation attempts. See CVE-2021-44228: Detecting Log4Shell exploit in the Logging documentation for example strings to search for and for an example query.
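As a rough local illustration of that check, the following sketch scans httpRequest-style fields for JNDI tokens. The field names and sample entries are illustrative, and obfuscated variants (for example, ${${lower:j}ndi:...}) will not match this simple pattern, so still use the broader queries in the linked Logging documentation.

```python
import re

# Matches ${jndi:<scheme>:// for schemes commonly seen in Log4Shell probes.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def has_jndi_lookup(http_request: dict) -> bool:
    # Check the URL, user agent, and referer fields of a request.
    fields = ("requestUrl", "userAgent", "referer")
    return any(JNDI_PATTERN.search(http_request.get(f, "")) for f in fields)

suspicious = {"requestUrl": "/index?q=${jndi:ldap://198.51.100.7/a}"}
benign = {"requestUrl": "/index?q=hello", "userAgent": "curl/8.0"}
print(has_jndi_lookup(suspicious), has_jndi_lookup(benign))  # → True False
```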
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exploit Public-Facing Application.
- Review related findings by clicking the link in the Related findings row in the Summary tab of the finding details. Related findings are the same finding type on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Upgrade to the latest version of Log4j2.
- Follow Google Cloud's recommendations for investigating and responding to the "Apache Log4j 2" vulnerability.
- Implement the recommended mitigation techniques in Apache Log4j Security Vulnerabilities.
- If you use Google Cloud Armor, deploy the cve-canary rule into a new or existing Google Cloud Armor security policy. For more information, see Google Cloud Armor WAF rule to help mitigate Apache Log4j vulnerability.
Active Scan: Log4j Vulnerable to RCE
Supported Log4j vulnerability scanners inject obfuscated JNDI lookups in HTTP parameters, URLs, and text fields with callbacks to domains controlled by the scanners. This finding is generated when DNS queries for the unobfuscated domains are found. Such queries only occur if a JNDI lookup was successful, indicating an active Log4j vulnerability. To respond to this finding, do the following:
Step 1: Review finding details
Open an Active Scan: Log4j Vulnerable to RCE finding, as directed in Reviewing finding details. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected
- Affected resource, especially the following field:
- Resource full name: the full resource name of the Compute Engine instance that is vulnerable to the Log4j RCE.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
In the detail view of the finding, click the JSON tab.
In the JSON, note the following fields under properties:
- scannerDomain: the domain used by the scanner as part of the JNDI lookup. This tells you which scanner identified the vulnerability.
- sourceIp: the IP address used to make the DNS query
- vpcName: the name of the network on the instance where the DNS query was made
Step 2: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field from step 1.
- On the page that loads, check the httpRequest fields for string tokens like ${jndi:ldap:// that may indicate possible exploitation attempts. See CVE-2021-44228: Detecting Log4Shell exploit in the Logging documentation for example strings to search for and for an example query.
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exploitation of Remote Services.
- Review related findings by clicking the link in the Related findings row in the Summary tab of the finding details. Related findings are the same finding type on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Upgrade to the latest version of Log4j2.
- Follow Google Cloud's recommendations for investigating and responding to the "Apache Log4j 2" vulnerability.
- Implement the recommended mitigation techniques in Apache Log4j Security Vulnerabilities.
- If you use Google Cloud Armor, deploy the cve-canary rule into a new or existing Google Cloud Armor security policy. For more information, see Google Cloud Armor WAF rule to help mitigate Apache Log4j vulnerability.
Leaked credentials
This finding isn't available for project-level activations.
This finding is generated when Google Cloud service account credentials are accidentally leaked online or compromised. To respond to this finding, do the following:
Step 1: Review finding details
Open an account_has_leaked_credentials finding, as directed in Reviewing finding details. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected
- Affected resource
Click the Source properties tab and note the following fields:
- Compromised_account: the potentially compromised service account
- Project_identifier: the project that contains the potentially leaked account credentials
- URL: the link to the GitHub repository
To see the complete JSON for the finding, click the JSON tab.
Step 2: Review project and service account permissions
In the Google Cloud console, go to the IAM page.
If necessary, select the project listed in Project_identifier.
On the page that appears, in the Filter box, enter the account name listed in Compromised_account and check assigned permissions.
In the Google Cloud console, go to the Service Accounts page.
On the page that appears, in the Filter box, enter the name of the compromised service account and check the service account's keys and key creation dates.
Step 3: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select your project.
On the page that loads, check logs for activity from new or updated IAM resources using the following filters:
proto_payload.method_name="google.iam.admin.v1.CreateServiceAccount"
protoPayload.methodName="SetIamPolicy"
resource.type="gce_instance" AND log_name="projects/Project_identifier/logs/cloudaudit.googleapis.com%2Factivity"
protoPayload.methodName="InsertProjectOwnershipInvite"
protoPayload.authenticationInfo.principalEmail="Compromised_account"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts: Cloud Accounts.
- Review related findings by clicking the link in relatedFindingURI. Related findings are the same finding type on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with leaked credentials.
- Consider deleting the compromised service account and rotate and delete all service account access keys for the compromised project. After deletion, resources that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted resources and work with resource owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to notifications from Google Cloud Support.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
- Open the URL link and delete the leaked credentials. Gather more information about the compromised account and contact the owner.
Malware
Malware is detected by examining VPC Flow Logs and Cloud DNS logs for connections to known command and control domains and IP addresses. Currently, Event Threat Detection provides general malware detection (Malware: Bad IP and Malware: Bad Domain) and detectors particularly for Log4j-related malware (Log4j Malware: Bad IP and Log4j Malware: Bad Domain).
The following steps describe how to investigate and respond to IP-based findings. The remediation steps are similar for domain-based findings.
Step 1: Review finding details
Open the relevant malware finding, as directed in Reviewing findings. The following steps use the Malware: Bad IP finding. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
  - Indicator domain: for Bad domain findings, the domain that triggered the finding.
  - Indicator IP: for Bad IP findings, the IP address that triggered the finding.
  - Source IP: for Bad IP findings, a known malware command and control IP address.
  - Source port: for Bad IP findings, the source port of the connection.
  - Destination IP: for Bad IP findings, the target IP address of the malware.
  - Destination port: for Bad IP findings, the destination port of the connection.
  - Protocol: for Bad IP findings, the IANA protocol number associated with the connection.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the affected Compute Engine instance.
- Project full name: the full resource name of the project that contains the finding.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- Chronicle: link to Google SecOps.
- VirusTotal indicator: link to the VirusTotal analysis page.
- Flow Analyzer (Preview): link to the Flow Analyzer feature of Network Intelligence Center. This field displays only when VPC Flow Logs is enabled.
Click the JSON tab and note the following fields:
- evidence.sourceLogId.projectId: the ID of the project in which the issue was detected.
- properties.InstanceDetails: the resource address for the Compute Engine instance.
Step 2: Investigate in Google Security Operations
You can use Google Security Operations to investigate this finding. Google SecOps is a Google Cloud service that lets you investigate threats and pivot through related entities in an easy-to-use timeline. Google SecOps enriches findings data, letting you identify indicators of interest and simplify investigations.
You can only use Google SecOps if you activate Security Command Center at the organization level.
Go to the Security Command Center Findings page in the Google Cloud console.
In the Quick filters panel, scroll to Source display name.
In the Source display name section, select Event Threat Detection.
The table populates with findings for the source type you selected.
In the table, under category, click the Malware: Bad IP finding. The details panel for the finding opens.
In the Related links section of the finding details panel, click Investigate in Chronicle.
Follow the instructions in Google SecOps's guided user interface.
Use the following guides to conduct investigations in Google SecOps:
Step 3: Review permissions and settings
In the Google Cloud console, go to the Dashboard page.
Select the project that is specified in the Project full name row on the Summary tab.
Navigate to the Resources card and click Compute Engine.
Click the VM instance that matches the name and zone in Resource full name. Review instance details, including network and access settings.
In the navigation pane, click VPC Network, then click Firewall. Remove or disable overly permissive firewall rules.
Step 4: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
On the page that loads, find VPC Flow Logs related to the IP address in Source IP by using the following filter:
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND (jsonPayload.connection.src_ip="SOURCE_IP" OR jsonPayload.connection.dest_ip="SOURCE_IP")
Replace the following:
- PROJECT_ID: the ID of the project, listed in projectId on the JSON tab.
- SOURCE_IP: the IP address listed on the Source IP row in the Summary tab of the finding details.
Step 5: Check Flow Analyzer
You must enable VPC Flow Logs to perform the following process.
- Ensure that you have upgraded your log bucket to use Log Analytics. For instructions, see Upgrade a bucket to use Log Analytics. There is no additional cost to upgrade.
In the Google Cloud console, go to the Flow Analyzer page:
You can also access Flow Analyzer through the Flow Analyzer URL link in the Related Links section on the Summary tab of the Finding details pane.
To further investigate information pertaining to the Event Threat Detection finding, use the time range picker in the action bar to change the time period. The time period should reflect when the finding was first reported. For example, if the finding was reported within the last 2 hours, you might set the time period to Last 6 hours. This ensures the time period in Flow Analyzer includes the time when the finding was reported.
Filter Flow Analyzer to display the appropriate results for the IP address associated with the malicious IP finding:
- From the Filter menu in the Source row of the Query section, select IP.
In the Value field, enter the IP address associated with the finding and click Run New Query.
If Flow Analyzer doesn't display any results for the IP address, clear the filter from the Source row, and run the query again with the same filter in the Destination row.
Analyze the results. For additional information about a specific flow, click Details in the All data flows table to open the Flow details pane.
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Dynamic Resolution and Command and Control.
- Review related findings by clicking the link in the Related findings row in the Summary tab of the finding details. Related findings are the same finding type on the same instance and network.
- Check flagged URLs and domains on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with MITRE research.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project containing malware.
- Investigate the potentially compromised instance and remove any discovered malware. To assist with detection and removal, use an endpoint detection and response solution.
- To track activity and vulnerabilities that allowed the insertion of malware, check audit logs and syslogs associated with the compromised instance.
- If necessary, stop the compromised instance and replace it with a new instance.
- Block the malicious IP addresses by updating firewall rules or by using Google Cloud Armor. You can enable Google Cloud Armor on the Security Command Center Integrated Services page. Depending on data volume, Google Cloud Armor costs can be significant. See the Google Cloud Armor pricing guide for more information.
- To control access and use of VM images, use Shielded VM and Trusted Images IAM policy.
Persistence: IAM Anomalous Grant
Audit logs are examined to detect the addition of Identity and Access Management (IAM) role grants that might be considered suspicious.
The following are examples of anomalous grants:
- Inviting an external user, such as a gmail.com user, as a project owner from the Google Cloud console
- A service account granting sensitive permissions
- A custom role granting sensitive permissions
- A service account added from outside your organization or project
The IAM Anomalous Grant finding is unique in that it includes sub-rules that provide more specific information about each instance of this finding. The severity classification of this finding depends on the sub-rule. Each sub-rule might require a different response.
The following list shows all possible sub-rules and their severities:
- external_service_account_added_to_policy:
  - HIGH, if a highly sensitive role was granted or if a medium-sensitivity role was granted at the organization level. For more information, see Highly-sensitive roles.
  - MEDIUM, if a medium-sensitivity role was granted. For more information, see Medium-sensitivity roles.
- external_member_invited_to_policy: HIGH
- external_member_added_to_policy:
  - HIGH, if a highly sensitive role was granted or if a medium-sensitivity role was granted at the organization level. For more information, see Highly-sensitive roles.
  - MEDIUM, if a medium-sensitivity role was granted. For more information, see Medium-sensitivity roles.
- custom_role_given_sensitive_permissions: MEDIUM
- service_account_granted_sensitive_role_to_member: HIGH
- policy_modified_by_default_compute_service_account: HIGH
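For triage tooling, the sub-rule list above might be encoded as a lookup table like the following sketch. Because two sub-rules can be HIGH or MEDIUM depending on role sensitivity, this table records their highest possible severity; the fallback for unknown sub-rules is an assumption for illustration.

```python
# Map each IAM Anomalous Grant sub-rule to its maximum possible severity.
SUBRULE_MAX_SEVERITY = {
    "external_service_account_added_to_policy": "HIGH",
    "external_member_invited_to_policy": "HIGH",
    "external_member_added_to_policy": "HIGH",
    "custom_role_given_sensitive_permissions": "MEDIUM",
    "service_account_granted_sensitive_role_to_member": "HIGH",
    "policy_modified_by_default_compute_service_account": "HIGH",
}

def triage_severity(sub_rule: str) -> str:
    # Unknown sub-rules default to MEDIUM so they still get reviewed.
    return SUBRULE_MAX_SEVERITY.get(sub_rule, "MEDIUM")

print(triage_severity("external_member_invited_to_policy"))  # → HIGH
```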
To respond to this finding, do the following:
Step 1: Review finding details
Open a Persistence: IAM Anomalous Grant finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: email address for the user or service account that assigned the role.
- Affected resource
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- VirusTotal indicator: link to the VirusTotal analysis page.
- Chronicle: link to Google SecOps.
Click the JSON tab. The complete JSON of the finding is displayed.
In the JSON for the finding, note the following fields:
- `detectionCategory.subRuleName`: more specific information about the type of anomalous grant that occurred. The sub-rule determines the severity classification of this finding.
- `evidence.sourceLogId.projectId`: the ID of the project that contains the finding.
- `properties.sensitiveRoleGrant.bindingDeltas`:
  - `action`: the action taken by the user.
  - `role`: the role assigned to the user.
  - `member`: the email address of the user that received the role.
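These fields can be pulled out of the finding JSON programmatically, for example when triaging findings in bulk. The sketch below assumes the field paths noted above; the sample payload itself is invented for illustration.

```python
import json

# Pull the triage-relevant fields out of a finding's JSON, using the
# field paths noted above. The sample payload is illustrative only.
finding = json.loads("""
{
  "detectionCategory": {"subRuleName": "external_member_added_to_policy"},
  "evidence": [{"sourceLogId": {"projectId": "example-project"}}],
  "properties": {
    "sensitiveRoleGrant": {
      "bindingDeltas": [
        {"action": "ADD", "role": "roles/owner", "member": "user:eve@example.com"}
      ]
    }
  }
}
""")

sub_rule = finding["detectionCategory"]["subRuleName"]
project_id = finding["evidence"][0]["sourceLogId"]["projectId"]
deltas = finding["properties"]["sensitiveRoleGrant"]["bindingDeltas"]

for d in deltas:
    print(f"{sub_rule} in {project_id}: {d['action']} {d['role']} -> {d['member']}")
```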
Step 2: Investigate in Google Security Operations
You can use Google Security Operations to investigate this finding. Google SecOps is a Google Cloud service that lets you investigate threats and pivot through related entities in an easy-to-use timeline. Google SecOps enriches findings data, letting you identify indicators of interest and simplify investigations.
You can't investigate Security Command Center findings in Chronicle if you activate Security Command Center at the project level.
Go to the Security Command Center Findings page in the Google Cloud console.
In the Quick filters panel, scroll down to Source display name.
In the Source display name section, select Event Threat Detection.
The table populates with findings for the source type you selected.
In the table, under Category, click a `Persistence: IAM Anomalous Grant` finding. The details panel for the finding opens.
In the Related links section of the finding details panel, click Investigate in Chronicle.
Follow the instructions in Google SecOps's guided user interface.
Use the following guides to conduct investigations in Google SecOps:
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
- On the page that loads, look for new or updated IAM
resources using the following filters:
protoPayload.methodName="SetIamPolicy"
protoPayload.methodName="google.iam.admin.v1.UpdateRole"
protoPayload.methodName="google.iam.admin.v1.CreateRole"
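The three filters above can be combined into a single Logs Explorer query, optionally narrowed to the principal from the finding. The sketch below builds that query string; the principal address is a placeholder you would replace with the Principal email value from the finding.

```python
# Combine the three method-name filters above into one Logs Explorer
# query. Adding the principal narrows results to the account from the
# finding (placeholder address below).
methods = [
    "SetIamPolicy",
    "google.iam.admin.v1.UpdateRole",
    "google.iam.admin.v1.CreateRole",
]
method_clause = " OR ".join(
    f'protoPayload.methodName="{m}"' for m in methods
)
principal = "suspect@example.com"  # replace with the Principal email value
query = (f"({method_clause}) AND "
         f'protoPayload.authenticationInfo.principalEmail="{principal}"')
print(query)
```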
Step 4: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Valid Accounts: Cloud Accounts.
- Review related findings by clicking the link on the Related findings row in the Summary tab of the finding details. Related findings are of the same finding type, on the same instance and network.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised account.
- Delete the compromised service account and rotate and delete all service account access keys for the compromised project. After deletion, resources that use the service account for authentication lose access.
- Delete project resources created by unauthorized accounts, like unfamiliar Compute Engine instances, snapshots, service accounts, and IAM users.
- To restrict adding gmail.com users, use the Organization Policy.
- To identify and fix overly permissive roles, use IAM Recommender.
Persistence: Impersonation Role Granted for Dormant Service Account
Detects events where an impersonation role is granted to a principal that allows that principal to impersonate a dormant user-managed service account. In this finding, the dormant service account is the affected resource, and a service account is considered dormant if it has been inactive for more than 180 days.
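The 180-day dormancy criterion above can be expressed as a simple check. This is a sketch for reasoning about the rule, not detector code; the last-authentication timestamp would come from a tool such as Activity Analyzer.

```python
from datetime import datetime, timedelta, timezone

# A service account is considered dormant if it has been inactive for
# more than 180 days (the window used by this finding).
DORMANCY_WINDOW = timedelta(days=180)

def is_dormant(last_authenticated: datetime, now: datetime) -> bool:
    """True if the account's last activity is older than the window."""
    return now - last_authenticated > DORMANCY_WINDOW

now = datetime(2024, 7, 1, tzinfo=timezone.utc)
print(is_dormant(datetime(2024, 5, 1, tzinfo=timezone.utc), now))  # False
print(is_dormant(datetime(2023, 6, 1, tzinfo=timezone.utc), now))  # True
```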
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Persistence: Impersonation Role Granted for Dormant Service Account
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the user who conducted the granting action
- Offending access grants.Principal name: the principal who was granted the impersonation role
Under Affected resource:
- Resource display name: the dormant service account as a resource
- Project full name: the project where that dormant service account resides
Step 2: Research attack and response methods
- Use service account tools, like Activity Analyzer, to investigate the activity of the dormant service account.
- Contact the owner of the Principal email field. Confirm whether the legitimate owner conducted the action.
Step 3: Check logs
- On the Summary tab of the finding details panel, under Related links, click the Cloud Logging URI link to open the Logs Explorer.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Remove the access of the owner of the Principal email if it is compromised.
- Remove the newly granted impersonation role from the target member.
- Consider deleting the potentially compromised service account and rotate and delete all service account access keys for the potentially compromised project. After deletion, applications that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted applications and work with application owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Cloud Customer Care.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM recommender.
Persistence: Unmanaged Account Granted Sensitive Role
Detects events where a sensitive role is granted to an unmanaged account. Unmanaged accounts can't be controlled by system administrators. For example, when the corresponding employee leaves the company, the administrator can't delete the account. Therefore, granting sensitive roles to unmanaged accounts creates a potential security risk for the organization.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Persistence: Unmanaged Account Granted Sensitive Role
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the user who conducted the granting action
- Offending access grants.Principal name: the unmanaged account that received the grant
- Offending access grants.Role granted: the sensitive role granted
Step 2: Research attack and response methods
- Contact the owner of the Principal email field. Confirm whether the legitimate owner conducted the action.
- Check with the owner of the Offending access grants.Principal name field to understand the origin of the unmanaged account.
Step 3: Check logs
- On the Summary tab of the finding details panel, under Related links, click the Cloud Logging URI link to open the Logs Explorer.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Remove the access of the owner of the Principal email if it is compromised.
- Remove the newly granted sensitive role from the unmanaged account.
- Consider converting the unmanaged account into a managed account by using the transfer tool, and moving this account under the control of system administrators.
Persistence: New API Method
Anomalous admin activity by a potentially malicious actor was detected in an organization, folder, or project. Anomalous activity can be either of the following:
- New activity by a principal in an organization, folder, or project
- Activity that has not been seen in a while by a principal in an organization, folder, or project
Step 1: Review finding details
- Open the
Persistence: New API Method
finding as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields:
- Under What was detected:
- Principal email: the account that made the call
- Service name: the API name of the Google Cloud service used in the action
- Method name: the method that was called
- Under Affected resource:
- Resource display name: the name of the affected resource, which could be the same as the name of the organization, folder, or project
- Resource path: the location in the resource hierarchy where the activity took place
Step 2: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Persistence.
- Investigate whether the action was warranted in the organization, folder, or project and whether the action was taken by the legitimate owner of the account. The organization, folder, or project is displayed on the Resource path row and the account is displayed on the Principal email row.
- To develop a response plan, combine your investigation results with MITRE research.
Persistence: New Geography
This finding isn't available for project-level activations.
An IAM user or service account is accessing Google Cloud from an anomalous location, based on the geolocation of the requesting IP address.
Step 1: Review finding details
Open a `Persistence: New Geography` finding, as directed in Reviewing finding details earlier on this page. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the potentially compromised user account.
- Affected resource, especially the following fields:
- Project full name: the project that contains the potentially compromised user account.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- In the detail view of the finding, click the JSON tab.
In the JSON, note the following `sourceProperties` fields:
- `affectedResources.gcpResourceName`: the affected resource.
- `evidence.sourceLogId.projectId`: the ID of the project that contains the finding.
- `properties.anomalousLocation`:
  - `anomalousLocation`: the estimated current location of the user.
  - `callerIp`: the external IP address.
  - `notSeenInLast`: the time period used to establish a baseline for normal behavior.
  - `typicalGeolocations`: the locations where the user usually accesses Google Cloud resources.
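A quick way to reason about these fields is to compare the anomalous location against the typical geolocations. The field names below follow the finding JSON described above; the values are made up for illustration.

```python
# Compare the anomalous location from the finding against the user's
# typical geolocations (field names follow the finding JSON above;
# the values are invented for illustration).
props = {
    "anomalousLocation": {
        "anomalousLocation": "Sydney, Australia",
        "callerIp": "203.0.113.7",
        "notSeenInLast": "30 days",
        "typicalGeolocations": ["Toronto, Canada", "New York, US"],
    }
}

loc = props["anomalousLocation"]
is_new_location = loc["anomalousLocation"] not in loc["typicalGeolocations"]
print(f"Access from {loc['anomalousLocation']} ({loc['callerIp']}): "
      f"{'outside' if is_new_location else 'within'} the usual locations")
```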
Step 2: Review project and account permissions
In the Google Cloud console, go to the IAM page.
If necessary, select the project listed in the `projectId` field in the finding JSON.
On the page that appears, in the Filter box, enter the account name listed in Principal email and check granted roles.
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
- If necessary, select your project.
- On the page that loads, check logs for activity from new or updated
IAM resources using the following filters:
protoPayload.methodName="SetIamPolicy"
protoPayload.methodName="google.iam.admin.v1.UpdateRole"
protoPayload.methodName="google.iam.admin.v1.CreateRole"
protoPayload.authenticationInfo.principalEmail="principalEmail"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts: Cloud Accounts.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised account.
- Review the `anomalousLocation`, `typicalGeolocations`, and `notSeenInLast` fields to verify whether the access is abnormal and if the account has been compromised.
- Delete project resources created by unauthorized accounts, like unfamiliar Compute Engine instances, snapshots, service accounts, and IAM users.
- To restrict the creation of new resources to specific regions, see Restricting Resource Locations.
- To identify and fix overly permissive roles, use IAM Recommender.
Persistence: New User Agent
This finding isn't available for project-level activations.
An IAM service account is accessing Google Cloud using suspicious software, as indicated by an anomalous user agent.
Step 1: Review finding details
Open a `Persistence: New User Agent` finding, as directed in Reviewing finding details earlier on this page. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the potentially compromised service account.
- Affected resource, especially the following fields:
- Project full name: the project that contains the potentially compromised service account.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- In the detail view of the finding, click the JSON tab.
- In the JSON, note the following fields:
  - `projectId`: the project that contains the potentially compromised service account.
  - `callerUserAgent`: the anomalous user agent.
  - `anomalousSoftwareClassification`: the type of software.
  - `notSeenInLast`: the time period used to establish a baseline for normal behavior.
Step 2: Review project and account permissions
In the Google Cloud console, go to the IAM page.
If necessary, select the project listed in `projectId`.
On the page that appears, in the Filter box, enter the account name that is listed on the Principal email row in the Summary tab of the finding details and check granted roles.
In the Google Cloud console, go to the Service Accounts page.
On the page that appears, in the Filter box, enter the account name that is listed on the Principal email row in the Summary tab of the finding details.
Check the service account's keys and key creation dates.
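Once key creation dates are exported (for example with `gcloud iam service-accounts keys list`, whose output includes a `validAfterTime` timestamp), unusually old keys can be screened with a short script. The 90-day threshold below is an arbitrary example, not an official recommendation.

```python
from datetime import datetime, timezone

# Flag service account keys older than a chosen threshold. Timestamps
# are RFC 3339 strings, as in the validAfterTime field of a key
# listing; the 90-day threshold is an arbitrary example.
def old_keys(created: dict, now: datetime, max_age_days: int = 90) -> list:
    flagged = []
    for key_id, ts in created.items():
        created_at = datetime.fromisoformat(ts.replace("Z", "+00:00"))
        if (now - created_at).days > max_age_days:
            flagged.append(key_id)
    return flagged

keys = {
    "key-1": "2024-06-20T00:00:00Z",
    "key-2": "2023-01-15T00:00:00Z",
}
print(old_keys(keys, datetime(2024, 7, 1, tzinfo=timezone.utc)))  # ['key-2']
```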
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
- If necessary, select your project.
- On the page that loads, check logs for activity from new or updated
IAM resources using the following filters:
protoPayload.methodName="google.iam.admin.v1.CreateServiceAccount"
protoPayload.methodName="SetIamPolicy"
protoPayload.methodName="google.iam.admin.v1.UpdateRole"
protoPayload.methodName="google.iam.admin.v1.CreateRole"
protoPayload.authenticationInfo.principalEmail="principalEmail"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Valid Accounts: Cloud Accounts.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised account.
- Review the `anomalousSoftwareClassification`, `callerUserAgent`, and `behaviorPeriod` fields to verify whether the access is abnormal and if the account has been compromised.
- Delete project resources created by unauthorized accounts, like unfamiliar Compute Engine instances, snapshots, service accounts, and IAM users.
- To restrict the creation of new resources to specific regions, see Restricting Resource Locations.
- To identify and fix overly permissive roles, use IAM Recommender.
Privilege Escalation: Changes to sensitive Kubernetes RBAC objects
To escalate privilege, a potentially malicious actor attempted to modify a `ClusterRole`, `RoleBinding`, or `ClusterRoleBinding` role-based access control (RBAC) object of the sensitive `cluster-admin` role by using a `PUT` or `PATCH` request.
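The rule's conditions can be restated as a simple predicate over an audit log entry. The entry layout below is a minimal stand-in for illustration, not the exact Cloud Audit Logs schema.

```python
# Screen a (simplified) Kubernetes audit log entry for modifications to
# sensitive RBAC objects, mirroring the rule described above. The entry
# layout is a minimal stand-in, not the exact Cloud Audit Logs schema.
SENSITIVE_KINDS = {"ClusterRole", "RoleBinding", "ClusterRoleBinding"}
WRITE_VERBS = {"update", "patch"}  # PUT and PATCH requests

def is_sensitive_rbac_change(entry: dict) -> bool:
    return (entry.get("kind") in SENSITIVE_KINDS
            and entry.get("verb") in WRITE_VERBS
            and entry.get("roleRef") == "cluster-admin")

entry = {"kind": "ClusterRoleBinding", "verb": "patch", "roleRef": "cluster-admin"}
print(is_sensitive_rbac_change(entry))  # True
```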
Step 1: Review finding details
Open the `Privilege Escalation: Changes to sensitive Kubernetes RBAC objects` finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the call.
- Method name: the method that was called.
- Kubernetes bindings: the sensitive Kubernetes binding or `ClusterRoleBinding` that was modified.
- Affected resource, especially the following fields:
- Resource display name: the Kubernetes cluster where the action occurred.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
In the What was detected section, click on the name of the binding on the Kubernetes bindings row. The binding details are displayed.
In the displayed binding, note the binding details.
Step 2: Check logs
- On the Summary tab of the finding details in the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field.
If the value in Method name was a `PATCH` method, check the request body to see what properties were modified.
In update (`PUT`) calls, the whole object is sent in the request, so the changes aren't as clear.
Check for other actions taken by the principal by using the following filters:
resource.labels.cluster_name="CLUSTER_NAME"
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
Replace the following:
- `CLUSTER_NAME`: the value that you noted in the Resource display name field in the finding details.
- `PRINCIPAL_EMAIL`: the value that you noted in the Principal email field in the finding details.
Step 3: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Privilege Escalation
- Confirm the sensitivity of the object and if the modification is warranted.
- For bindings, you can check the subject and investigate whether the subject needs the role it is bound to.
- Determine whether there are other signs of malicious activity by the principal in the logs.
If the principal email isn't a service account, contact the owner of the account to confirm whether the legitimate owner conducted the action.
If the principal email is a service account (IAM or Kubernetes), identify the source of the modification to determine its legitimacy.
To develop a response plan, combine your investigation results with MITRE research.
Privilege Escalation: Create Kubernetes CSR for master cert
To escalate privilege, a potentially malicious actor created a Kubernetes master certificate signing request (CSR), which gives them `cluster-admin` access.
Step 1: Review finding details
Open the `Privilege Escalation: Create Kubernetes CSR for master cert` finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the call.
- Method name: the method that was called.
- Affected resource, especially the following fields:
- Resource display name: the Kubernetes cluster where the action occurred.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Check logs
- On the Summary tab of the finding details in the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field.
- Check the value in the `protoPayload.resourceName` field to identify the specific certificate signing request.
- Check for other actions taken by the principal by using the following filters:
resource.labels.cluster_name="CLUSTER_NAME"
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
Replace the following:
- `CLUSTER_NAME`: the value that you noted in the Resource display name field in the finding details.
- `PRINCIPAL_EMAIL`: the value that you noted in the Principal email field in the finding details.
Step 3: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Privilege Escalation.
- Investigate whether giving `cluster-admin` access was warranted.
- If the principal email isn't a service account, contact the owner of the account to confirm whether the legitimate owner conducted the action.
If the principal email is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
To develop a response plan, combine your investigation results with MITRE research.
Privilege Escalation: Creation of sensitive Kubernetes bindings
To escalate privilege, a potentially malicious actor attempted to create a new `RoleBinding` or `ClusterRoleBinding` object for the `cluster-admin` role.
Step 1: Review finding details
Open the `Privilege Escalation: Creation of sensitive Kubernetes bindings` finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the call.
- Kubernetes bindings: the sensitive Kubernetes binding or `ClusterRoleBinding` that was created.
- Affected resource, especially the following fields:
- Resource display name: the Kubernetes cluster where the action occurred.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Check logs
- On the Summary tab of the finding details in the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field.
Check for other actions taken by the principal by using the following filters:
resource.labels.cluster_name="CLUSTER_NAME"
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
Replace the following:
- `CLUSTER_NAME`: the value that you noted in the Resource display name field in the finding details.
- `PRINCIPAL_EMAIL`: the value that you noted in the Principal email field in the finding details.
Step 3: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Privilege Escalation.
- Confirm the sensitivity of the binding created and if the roles are necessary for the subjects.
- For bindings, you can check the subject and investigate whether the subject needs the role it is bound to.
- Determine whether there are other signs of malicious activity by the principal in the logs.
If the principal email isn't a service account, contact the owner of the account to confirm whether the legitimate owner conducted the action.
If the principal email is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
To develop a response plan, combine your investigation results with MITRE research.
Privilege Escalation: Effectively Anonymous Users Granted GKE Cluster Access
Someone created an RBAC binding that references one of the following users or groups:
system:anonymous
system:authenticated
system:unauthenticated
These users and groups are effectively anonymous and should be avoided when creating role bindings or cluster role bindings to any RBAC roles. Review the binding to ensure that it is necessary. If the binding isn't necessary, remove it. For more details, see the log message for this finding.
- Review any bindings created that granted permissions to the `system:anonymous` user, the `system:unauthenticated` group, or the `system:authenticated` group.
- Determine whether there are other signs of malicious activity by the principal in the audit logs in Cloud Logging.
If there are signs of malicious activity, review guidance for investigating and removing the bindings that allowed this access.
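A binding's subjects can be screened for these effectively anonymous identities with a short script. The binding dict below mirrors, in simplified form, the subjects list of a RoleBinding or ClusterRoleBinding manifest.

```python
# Scan RBAC binding subjects for the effectively anonymous identities
# listed above. The binding dict mirrors (in simplified form) the
# subjects list of a RoleBinding/ClusterRoleBinding manifest.
EFFECTIVELY_ANONYMOUS = {
    "system:anonymous",
    "system:authenticated",
    "system:unauthenticated",
}

def anonymous_subjects(binding: dict) -> list:
    return [s["name"] for s in binding.get("subjects", [])
            if s["name"] in EFFECTIVELY_ANONYMOUS]

binding = {
    "roleRef": {"name": "view"},
    "subjects": [
        {"kind": "Group", "name": "system:authenticated"},
        {"kind": "User", "name": "alice@example.com"},
    ],
}
print(anonymous_subjects(binding))  # ['system:authenticated']
```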
Privilege Escalation: Get Kubernetes CSR with compromised bootstrap credentials
To escalate privilege, a potentially malicious actor queried for a certificate signing request (CSR) with the `kubectl` command, using compromised bootstrap credentials.
The following is an example of a command that this rule detects:
kubectl --client-certificate kubelet.crt --client-key kubelet.key --server YOUR_SERVER get csr CSR_NAME
Step 1: Review finding details
Open the `Privilege Escalation: Get Kubernetes CSR with compromised bootstrap credentials` finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the call.
- Method name: the method that was called.
- Under Affected resource:
- Resource display name: the Kubernetes cluster where the action occurred.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Check logs
If the method name, which you noted in the Method name field in the finding details, is a `GET` method, do the following:
- On the Summary tab of the finding details in the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field.
- Check the value in the `protoPayload.resourceName` field to identify the specific certificate signing request.
Step 3: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Privilege Escalation.
- If the specific CSR is available in the log entry, investigate the sensitivity of the certificate and whether the action was warranted.
- To develop a response plan, combine your investigation results with MITRE research.
Privilege Escalation: Launch of privileged Kubernetes container
A potentially malicious actor created a Pod that contains privileged containers or containers with privilege escalation capabilities.
A privileged container has the `privileged` field set to `true`. A container with privilege escalation capabilities has the `allowPrivilegeEscalation` field set to `true`. For more information, see the `SecurityContext` v1 core API reference in the Kubernetes documentation.
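The two conditions above can be checked directly against a Pod spec. The spec below is a trimmed-down illustration of a Pod manifest, not a complete one.

```python
# Check a Pod spec for containers that are privileged or allow
# privilege escalation, per the two securityContext fields above.
# The spec is a trimmed-down illustration of a Pod manifest.
def risky_containers(pod_spec: dict) -> list:
    flagged = []
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged") or sc.get("allowPrivilegeEscalation"):
            flagged.append(c["name"])
    return flagged

pod_spec = {
    "containers": [
        {"name": "app", "securityContext": {"allowPrivilegeEscalation": False}},
        {"name": "debug", "securityContext": {"privileged": True}},
    ]
}
print(risky_containers(pod_spec))  # ['debug']
```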
Step 1: Review finding details
Open the `Privilege Escalation: Launch of privileged Kubernetes container` finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the account that made the call.
- Kubernetes pods: the newly created Pod with privileged containers.
- Affected resource, especially the following fields:
- Resource display name: the Kubernetes cluster where the action occurred.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
On the JSON tab, note the values of the finding fields:
- findings.kubernetes.pods[].containers: the privileged containers within the Pod.
Step 2: Check logs
- On the Summary tab of the finding details in the Google Cloud console, go to Logs Explorer by clicking the link in the Cloud Logging URI field.
Check for other actions taken by the principal by using the following filters:
resource.labels.cluster_name="CLUSTER_NAME"
protoPayload.authenticationInfo.principalEmail="PRINCIPAL_EMAIL"
Replace the following:
- `CLUSTER_NAME`: the value that you noted in the Resource display name field in the finding details.
- `PRINCIPAL_EMAIL`: the value that you noted in the Principal email field in the finding details.
Step 3: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Privilege Escalation.
- Confirm that the container created requires access to host resources and kernel capabilities.
- Determine whether there are other signs of malicious activity by the principal in the logs.
If the principal email isn't a service account, contact the owner of the account to confirm whether the legitimate owner conducted the action.
If the principal email is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
To develop a response plan, combine your investigation results with MITRE research.
Privilege Escalation: Dormant Service Account Granted Sensitive Role
Detects events where a sensitive IAM role is granted to a dormant user-managed service account. In this context, a service account is considered dormant if it has been inactive for more than 180 days.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Privilege Escalation: Dormant Service Account Granted Sensitive Role
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the user who conducted the granting action
- Offending access grants.Principal name: the dormant service account that received the sensitive role
- Offending access grants.Role granted: the sensitive IAM role that was granted
Under Affected resource:
- Resource display name: the organization, folder, or project in which the sensitive IAM role was granted to the dormant service account
Step 2: Research attack and response methods
- Use service account tools, like Activity Analyzer, to investigate the activity of the dormant service account.
- Contact the owner of the Principal email field. Confirm whether the legitimate owner conducted the action.
Step 3: Check logs
- On the Summary tab of the finding details panel, under Related links, click the Cloud Logging URI link to open the Logs Explorer.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Remove the access of the owner of the Principal email if it is compromised.
- Remove the newly assigned sensitive IAM role from the dormant service account.
- Consider deleting the potentially compromised service account and rotate and delete all service account access keys for the potentially compromised project. After deletion, resources that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted resources and work with resource owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Cloud Customer Care.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
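If your investigation confirms that the grant was malicious, revoking the role and disabling the dormant service account can be done from the gcloud CLI. This is a sketch: the project ID, service account email, and role are placeholders for the values you noted in the finding, and disabling (unlike deletion) is reversible.

```shell
# Revoke the sensitive role that was granted to the dormant service account.
gcloud projects remove-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:DORMANT_SA@PROJECT_ID.iam.gserviceaccount.com" \
    --role="ROLE_NAME"

# Disable the service account while the investigation is in progress.
gcloud iam service-accounts disable \
    DORMANT_SA@PROJECT_ID.iam.gserviceaccount.com
```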
Privilege Escalation: Anomalous Impersonation of Service Account for Admin Activity
Anomalous Impersonation of Service Account is detected by examining the Admin Activity Audit Logs to see if any anomaly occurred in a service account impersonation request.
To respond to this finding, do the following:
Step 1: Review finding details
Open the
Privilege Escalation: Anomalous Impersonation of Service Account for Admin Activity
finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the final service account in the impersonation request that was used to access Google Cloud.
- Service name: the API name of the Google Cloud service involved in the impersonation request.
- Method name: the method that was called.
- Service account delegation information: details of service accounts in the delegation chain; the principal at the bottom of the list is the caller of the impersonation request.
- Affected resource, especially the following fields:
- Resource full name: the full name of the affected resource.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Research attack and response methods
- Contact the owner of the service account in the Principal email field. Confirm whether the legitimate owner conducted the action.
- Investigate the principals in the delegation chain to verify whether the request is abnormal and if any account has been compromised.
- Contact the owner of the impersonation caller in the Service account delegation info list. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Consider deleting the potentially compromised service account and rotate and delete all service account access keys for the potentially compromised project. After deletion, resources that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted resources and work with resource owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Google Cloud Support.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
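When investigating the delegation chain, it can help to reconstruct the chain programmatically from the audit log entry behind the Cloud Logging URI. The following is a minimal Python sketch: the field names follow the Cloud Audit Logs AuthenticationInfo schema, but the sample entry and principal emails are fabricated placeholders, and you should verify the ordering of the chain against your own log entries.

```python
# Minimal sketch: extract the impersonation chain from a Cloud Audit Logs
# entry. The sample entry below is fabricated for illustration.
entry = {
    "protoPayload": {
        "authenticationInfo": {
            # The final service account used to access Google Cloud.
            "principalEmail": "final-sa@example-project.iam.gserviceaccount.com",
            # Identities in the delegation chain.
            "serviceAccountDelegationInfo": [
                {"firstPartyPrincipal": {"principalEmail": "user@example.com"}},
                {"firstPartyPrincipal": {
                    "principalEmail": "mid-sa@example-project.iam.gserviceaccount.com"}},
            ],
        }
    }
}

def delegation_chain(log_entry):
    """Return the principals involved in an impersonated call, caller first."""
    auth = log_entry["protoPayload"]["authenticationInfo"]
    chain = [
        info.get("firstPartyPrincipal", {}).get("principalEmail", "<unknown>")
        for info in auth.get("serviceAccountDelegationInfo", [])
    ]
    chain.append(auth["principalEmail"])  # the final impersonated account
    return chain

print(" -> ".join(delegation_chain(entry)))
```

Each principal in the resulting chain is a candidate for the ownership checks described in Step 2.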
Privilege Escalation: Anomalous Multistep Service Account Delegation for Admin Activity
Anomalous Multistep Service Account Delegation is detected by examining the Admin Activity Audit Logs to see if any anomaly occurred in a service account impersonation request.
To respond to this finding, do the following:
Step 1: Review finding details
Open the
Privilege Escalation: Anomalous Multistep Service Account Delegation for Admin Activity
finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the final service account in the impersonation request that was used to access Google Cloud.
- Service name: the API name of the Google Cloud service involved in the impersonation request.
- Method name: the method that was called.
- Service account delegation information: details of service accounts in the delegation chain; the principal at the bottom of the list is the caller of the impersonation request.
- Affected resource
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Research attack and response methods
- Contact the owner of the service account in the Principal email field. Confirm whether the legitimate owner conducted the action.
- Investigate the principals in the delegation chain to verify whether the request is abnormal and if any account has been compromised.
- Contact the owner of the impersonation caller in the Service account delegation info list. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Consider deleting the potentially compromised service account and rotate and delete all service account access keys for the potentially compromised project. After deletion, resources that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted resources and work with resource owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Google Cloud Support.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
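To find other recent admin actions that were performed through impersonation, you can filter the Admin Activity audit logs for entries that carry delegation information. A starting-point filter, assuming the standard Cloud Audit Logs field names:

```
logName:"cloudaudit.googleapis.com%2Factivity"
protoPayload.authenticationInfo.serviceAccountDelegationInfo.firstPartyPrincipal.principalEmail:*
```

The `:*` operator matches any entry where the field is present, so this surfaces every logged admin action that involved a delegation chain.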
Privilege Escalation: Anomalous Multistep Service Account Delegation for Data Access
Anomalous Multistep Service Account Delegation is detected by examining the Data Access Audit Logs to see if any anomaly occurred in a service account impersonation request.
To respond to this finding, do the following:
Step 1: Review finding details
Open the
Privilege Escalation: Anomalous Multistep Service Account Delegation for Data Access
finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Principal email: the final service account in the impersonation request that was used to access Google Cloud
- Service name: the API name of the Google Cloud service involved in the impersonation request
- Method name: the method that was called
- Service account delegation information: details of service accounts in the delegation chain; the principal at the bottom of the list is the caller of the impersonation request
- Affected resource
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Research attack and response methods
- Contact the owner of the service account in the Principal email field. Confirm whether the legitimate owner conducted the action.
- Investigate the principals in the delegation chain to verify whether the request is abnormal and if any account has been compromised.
- Contact the owner of the impersonation caller in the Service account delegation info list. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Consider deleting the potentially compromised service account and rotate and delete all service account access keys for the potentially compromised project. After deletion, resources that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted resources and work with resource owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Google Cloud Support.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
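The IAM Recommender check in the last step can also be run from the command line. A sketch with the gcloud CLI, where PROJECT_ID is a placeholder:

```shell
# List role recommendations for a project; each recommendation suggests
# removing or downgrading a role grant with unused permissions.
gcloud recommender recommendations list \
    --project=PROJECT_ID \
    --location=global \
    --recommender=google.iam.policy.Recommender
```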
Privilege Escalation: Anomalous Service Account Impersonator for Admin Activity
Anomalous Service Account Impersonator is detected by examining the Admin Activity Audit Logs to see if any anomaly occurred in a service account impersonation request.
To respond to this finding, do the following:
Step 1: Review finding details
Open the
Privilege Escalation: Anomalous Service Account Impersonator for Admin Activity
finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
What was detected, especially the following fields:
- Principal email: the final service account in the impersonation request that was used to access Google Cloud
- Service name: the API name of the Google Cloud service involved in the impersonation request
- Method name: the method that was called
- Service account delegation information: details of service accounts in the delegation chain; the principal at the bottom of the list is the caller of the impersonation request
Affected resource
Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
Step 2: Research attack and response methods
- Contact the owner of the service account in the Principal email field. Confirm whether the legitimate owner conducted the action.
- Investigate the principals in the delegation chain to verify whether the request is abnormal and if any account has been compromised.
- Contact the owner of the impersonation caller in the Service account delegation info list. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Consider deleting the potentially compromised service account and rotate and delete all service account access keys for the potentially compromised project. After deletion, resources that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted resources and work with resource owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Google Cloud Support.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
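To act on the Organization Policy Service suggestion above, you can enforce the built-in iam.disableServiceAccountCreation constraint on projects where no new service accounts are expected. A sketch with the gcloud CLI, where PROJECT_ID is a placeholder:

```shell
# Enforce the built-in constraint that blocks service account creation
# in the given project.
gcloud resource-manager org-policies enable-enforce \
    constraints/iam.disableServiceAccountCreation \
    --project=PROJECT_ID
```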
Privilege Escalation: Anomalous Service Account Impersonator for Data Access
Anomalous Service Account Impersonator is detected by examining the Data Access Audit Logs to see if any anomaly occurred in a service account impersonation request.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Privilege Escalation: Anomalous Service Account Impersonator for Data Access
finding, as directed in Reviewing findings. In the finding details, on the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the final service account in the impersonation request that was used to access Google Cloud
- Service name: the API name of the Google Cloud service involved in the impersonation request
- Method name: the method that was called
- Service account delegation information: details of service accounts in the delegation chain; the principal at the bottom of the list is the caller of the impersonation request
Step 2: Research attack and response methods
- Contact the owner of the service account in the Principal email field. Confirm whether the legitimate owner conducted the action.
- Investigate the principals in the delegation chain to verify whether the request is abnormal and if any account has been compromised.
- Contact the owner of the impersonation caller in the Service account delegation info list. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Consider deleting the potentially compromised service account and rotate and delete all service account access keys for the potentially compromised project. After deletion, resources that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted resources and work with resource owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Google Cloud Support.
- To limit who can create service accounts, use the Organization Policy Service.
- To identify and fix overly permissive roles, use IAM Recommender.
Privilege Escalation: ClusterRole with Privileged Verbs
Someone created an RBAC ClusterRole object that contains the bind, escalate, or impersonate verbs. A subject that's bound to a role with these verbs can impersonate other users with higher privileges, bind to additional Role or ClusterRole objects that contain additional permissions, or modify their own ClusterRole permissions. This might lead to those subjects gaining cluster-admin privileges. For more details, see the log message for this alert.
- Review the ClusterRole and associated ClusterRoleBindings to check whether the subjects actually require these permissions.
- If possible, avoid creating roles that involve the bind, escalate, or impersonate verbs.
- Determine whether there are other signs of malicious activity by the principal in the audit logs in Cloud Logging.
- When assigning permissions in an RBAC role, use the principle of least privilege and grant the minimum permissions needed to perform a task. Using the principle of least privilege reduces the potential for privilege escalation if your cluster is compromised, and reduces the likelihood that excessive access results in a security incident.
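To inventory existing ClusterRoles that already contain these verbs, you can combine kubectl with jq. A sketch, assuming cluster access and that jq is installed:

```shell
# List ClusterRoles whose rules include bind, escalate, or impersonate.
kubectl get clusterroles -o json \
  | jq -r '.items[]
      | select(any(.rules[]?.verbs[]?; . == "bind" or . == "escalate" or . == "impersonate"))
      | .metadata.name'
```

Review each role the command prints against the bindings that reference it.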
Privilege Escalation: ClusterRoleBinding to Privileged Role
Someone created an RBAC ClusterRoleBinding that references the default system:controller:clusterrole-aggregation-controller ClusterRole. This default ClusterRole has the escalate verb, which allows subjects to modify the privileges of their own roles, allowing for privilege escalation. For more details, see the log message for this alert.
- Review any ClusterRoleBinding that references the system:controller:clusterrole-aggregation-controller ClusterRole.
- Review any modifications to the system:controller:clusterrole-aggregation-controller ClusterRole.
- Determine whether there are other signs of malicious activity by the principal who created the ClusterRoleBinding in the audit logs in Cloud Logging.
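The binding review above can be scripted with kubectl and jq. A sketch, assuming cluster access and that jq is installed:

```shell
# List ClusterRoleBindings that reference the aggregation-controller role,
# along with the subjects they grant it to.
kubectl get clusterrolebindings -o json \
  | jq -r '.items[]
      | select(.roleRef.name == "system:controller:clusterrole-aggregation-controller")
      | "\(.metadata.name): \(.subjects // [] | map(.name) | join(", "))"'
```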
Privilege Escalation: Suspicious Kubernetes Container Names - Exploitation and Escape
Someone deployed a Pod with a naming convention similar to common tools used for container escapes or to execute other attacks on the cluster. For more details, see the log message for this alert.
- Confirm that the Pod is legitimate.
- Determine whether there are other signs of malicious activity from the Pod or principal in the audit logs in Cloud Logging.
- If the principal isn't a service account (IAM or Kubernetes), contact the owner of the account to confirm whether the legitimate owner conducted the action.
- If the principal is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
- If the Pod is not legitimate, remove it, along with any associated RBAC bindings and service accounts that the workload used and that allowed its creation.
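If you conclude the Pod is not legitimate, the removal described above might look like the following; the namespace and object names are placeholders for the values you identify in the audit logs:

```shell
# Remove the suspicious Pod and the RBAC objects that enabled it.
kubectl delete pod SUSPICIOUS_POD --namespace=NAMESPACE
kubectl delete rolebinding ASSOCIATED_BINDING --namespace=NAMESPACE
kubectl delete serviceaccount ASSOCIATED_SA --namespace=NAMESPACE
```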
Privilege Escalation: Workload Created with a Sensitive Host Path Mount
Someone created a workload that contains a hostPath volume mount to a sensitive path on the host node's file system. Access to these paths on the host file system can be used to access privileged or sensitive information on the node and for container escapes. If possible, don't allow any hostPath volumes in your cluster. For more details, see the log message for this alert.
- Review the workload to determine whether this hostPath volume is necessary for the intended functionality. If it is, ensure that the path points to the most specific directory possible, for example /etc/myapp/myfiles instead of / or /etc.
- Determine whether there are other signs of malicious activity related to this workload in the audit logs in Cloud Logging.
To block hostPath volume mounts in the cluster, see guidance for enforcing Pod Security Standards.
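One way to apply that guidance is the built-in Pod Security admission controller: labeling a namespace to enforce the restricted Pod Security Standard rejects workloads that mount hostPath volumes. A sketch, where the namespace name is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    # The restricted standard (a superset of baseline) disallows
    # hostPath volumes, among other privilege-escalation vectors.
    pod-security.kubernetes.io/enforce: restricted
```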
Privilege Escalation: Workload with shareProcessNamespace enabled
Someone deployed a workload with the shareProcessNamespace option set to true, allowing all containers to share the same Linux process namespace. This could allow an untrusted or compromised container to escalate privileges by accessing and controlling environment variables, memory, and other sensitive data from processes running in other containers. Some workloads might require this functionality for legitimate reasons, such as log-handling sidecar containers or debugging containers. For more details, see the log message for this alert.
- Confirm that the workload actually requires access to a shared process namespace for all containers in the workload.
- Check whether there are other signs of malicious activity by the principal in the audit logs in Cloud Logging.
- If the principal isn't a service account (IAM or Kubernetes), contact the owner of the account to confirm whether they conducted the action.
- If the principal is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
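To check whether other workloads in the cluster already set this option, you can query Pod specs with kubectl and jq. A sketch, assuming cluster access and that jq is installed:

```shell
# List Pods whose containers share a process namespace.
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.shareProcessNamespace == true)
      | "\(.metadata.namespace)/\(.metadata.name)"'
```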
Service account self-investigation
A service account credential is being used to investigate the roles and permissions associated with that same service account. This finding indicates that service account credentials might be compromised and immediate action should be taken.
Step 1: Review finding details
Open a
Discovery: Service Account Self-Investigation
finding, as directed in Reviewing finding details earlier on this page. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Severity: the risk level assigned to the finding. The severity is HIGH if the API call that triggered this finding was unauthorized, meaning the service account doesn't have permission to query its own IAM permissions with the projects.getIamPolicy API.
- Principal email: the potentially compromised service account.
- Caller IP: the internal or external IP address of the caller.
- Affected resource, especially the following fields:
- Resource full name: the full name of the affected resource.
- Project full name: the project that contains the potentially leaked account credentials.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- To see the complete JSON for the finding, click the JSON tab.
Step 2: Review project and service account permissions
In the Google Cloud console, go to the IAM page.
If necessary, select the project listed in the projectID field of the finding JSON. On the page that appears, in the Filter box, enter the account name listed in Principal email and check assigned permissions.
In the Google Cloud console, go to the Service Accounts page.
On the page that appears, in the Filter box, enter the name of the compromised service account and check the service account's keys and key creation dates.
Step 3: Check logs
- On the Summary tab of the finding details panel, click the Cloud Logging URI link to open the Logs Explorer.
- If necessary, select your project.
- On the page that loads, check logs for activity from new or updated
IAM resources using the following filters:
protoPayload.methodName="google.iam.admin.v1.CreateServiceAccount"
protoPayload.methodName="SetIamPolicy"
protoPayload.authenticationInfo.principalEmail="principalEmail"
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Permission Groups Discovery: Cloud Groups.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised account.
- Delete the compromised service account and rotate and delete all service account access keys for the compromised project. After deletion, resources that use the service account for authentication lose access.
- Delete project resources created by the compromised account, like unfamiliar Compute Engine instances, snapshots, service accounts, and IAM users.
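The key rotation and cleanup steps above might look like the following with the gcloud CLI; the service account email and key ID are placeholders for the values from your investigation:

```shell
# List the keys on the potentially compromised service account.
gcloud iam service-accounts keys list \
    --iam-account=COMPROMISED_SA@PROJECT_ID.iam.gserviceaccount.com

# Delete a specific key identified in the listing.
gcloud iam service-accounts keys delete KEY_ID \
    --iam-account=COMPROMISED_SA@PROJECT_ID.iam.gserviceaccount.com

# Once impacted resources are accounted for, delete the account itself.
gcloud iam service-accounts delete \
    COMPROMISED_SA@PROJECT_ID.iam.gserviceaccount.com
```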
Inhibit System Recovery: Deleted Google Cloud Backup and DR host
Event Threat Detection examines audit logs to detect the deletion of hosts that are running applications protected by the Backup and DR Service. After a host is deleted, applications that are associated with the host cannot be backed up.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the
Inhibit System Recovery: Deleted Google Cloud Backup and DR host
finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab. - On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Application name: the name of a database or VM connected to Backup and DR
- Host name: the name of a host connected to Backup and DR
- Principal subject: the user who successfully executed the action
- Affected resource
- Resource display name: the project in which the host was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- In the project where the action was taken, navigate to the management console.
- Confirm that the deleted host is no longer in the list of Backup and DR hosts.
- Select the Add Host option to re-add the deleted host.
Inhibit System Recovery: Google Cloud Backup and DR remove plan
Security Command Center examines audit logs to detect the anomalous deletion of a Backup and DR Service backup plan used to apply backup policies to an application.
Step 1: Review finding details
- Open the
Inhibit System Recovery: Google Cloud Backup and DR remove plan
finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab. - On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Application name: the name of a database or VM connected to Backup and DR
- Profile name: specifies the storage target for backups of application and VM data
- Template name: the name for a set of policies that define backup frequency, schedule, and retention time
- Affected resource
- Resource display name: the project in which the plan was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- In the project where the action was taken, navigate to the management console.
- In the App Manager tab, find the affected applications that are no longer protected and review backup policies for each.
Inhibit System Recovery: Google Cloud Backup and DR delete template
Security Command Center examines audit logs to detect the anomalous deletion of a template. A template is a base configuration for backups that can be applied to multiple applications.
Step 1: Review finding details
- Open the
Inhibit System Recovery: Google Cloud Backup and DR delete template
finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab. - On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Template name: the name for a set of policies that define backup frequency, schedule, and retention time
- Principal subject: the user who successfully executed the action
- Affected resource
- Resource display name: the project in which the template was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- In the project where the action was taken, navigate to the management console.
- In the App Manager tab, find the affected applications that are no longer protected and review backup policies for each.
- To re-add a template, navigate to the Backup Plans tab, select Templates, then select the Create Template option.
Inhibit System Recovery: Google Cloud Backup and DR delete policy
Audit logs are examined to detect the deletion of a policy. A policy defines how a backup is taken and where it is stored.
Step 1: Review finding details
- Open the
Inhibit System Recovery: Google Cloud Backup and DR delete policy
finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab. - On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Policy name: the name for a single policy, which defines backup frequency, schedule, and retention time
- Principal subject: the user who successfully executed the action
- Affected resource
- Resource display name: the project in which the policy was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
1. In the project where the action was taken, navigate to the management console.
2. In the App Manager tab, select the affected application and review the Policy settings applied to that application.
Inhibit System Recovery: Google Cloud Backup and DR delete profile
Audit logs are examined to detect the deletion of a profile. A profile defines which storage pools are used to store backups.
Step 1: Review finding details
- Open the
Inhibit System Recovery: Google Cloud Backup and DR delete profile
finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab. - On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Profile name: specifies the storage target for backups of application and VM data
- Principal subject: the user who successfully executed the action
- Affected resource
- Resource display name: the project in which the profile was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
1. In the project where the action was taken, navigate to the management console.
2. In the Backup Plans tab, select Profiles for a list of all profiles.
3. Review profiles to verify that all required profiles are in place.
4. If the deleted profile was removed in error, select Create Profile to define storage targets for your Backup and DR appliances.
Inhibit System Recovery: Google Cloud Backup and DR delete storage pool
Audit logs are examined to detect the deletion of a storage pool. A storage pool associates a Cloud Storage bucket with Backup and DR.
Step 1: Review finding details
- Open the
Inhibit System Recovery: Google Cloud Backup and DR delete storage pool
finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab. - On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Storage pool name: the name for storage buckets where backups are stored
- Principal subject: a user that has successfully executed an action
- Affected resource
- Resource display name: the project in which the storage pool was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the service account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.

1. In the project where the action was taken, navigate to the management console.
2. In the Manage tab, select Storage Pools for a list of all storage pools.
3. Review storage pool associations with Backup Appliances.
4. If an active appliance does not have an associated storage pool, select Add OnVault Pool to re-add it.
Data Destruction: Google Cloud Backup and DR expire image
A potentially malicious actor has requested to delete a backup image.
Step 1: Review finding details
- Open the Data Destruction: Google Cloud Backup and DR expire image finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab.
- On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Policy name: the name for a single policy, which defines backup frequency, schedule, and retention time
- Template name: the name for a set of policies that define backup frequency, schedule, and retention time
- Profile name: specifies the storage target for backups of application and VM data
- Principal subject: a user that has successfully executed an action
- Affected resource
- Resource display name: the project in which the backup image was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the service account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.

1. In the project where the action was taken, navigate to the management console.
2. Navigate to the Monitor tab and select Jobs to review the status of the delete backup job.
3. If a delete job is not authorized, navigate to IAM permissions to review users with access to backup data.
Data Destruction: Google Cloud Backup and DR expire all images
A potentially malicious actor has requested to delete all backup images associated with an application.
Step 1: Review finding details
- Open the Data Destruction: Google Cloud Backup and DR expire all images finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab.
- On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Policy name: the name for a single policy, which defines backup frequency, schedule, and retention time
- Template name: the name for a set of policies that define backup frequency, schedule, and retention time
- Profile name: specifies the storage target for backups of application and VM data
- Principal subject: a user that has successfully executed an action
- Affected resource
- Resource display name: the project in which the backup images were deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the service account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.

1. In the project where the action was taken, navigate to the management console.
2. Navigate to the Monitor tab and select Jobs to review the status of the delete backup job.
3. If a delete job is not authorized, navigate to IAM permissions to review users with access to backup data.
Data Destruction: Google Cloud Backup and DR remove appliance
Audit logs are examined to detect the removal of a backup and recovery appliance. A backup and recovery appliance is a critical component for backup operations.
Step 1: Review finding details
- Open the Data Destruction: Google Cloud Backup and DR remove appliance finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab.
- On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Appliance name: the name of a database or VM connected to Backup and DR
- Principal subject: a user that has successfully executed an action
- Affected resource
- Resource display name: the project in which the appliance was deleted
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation
- Logging URI: link to open the Logs Explorer
Step 2: Research attack and response methods
Contact the owner of the service account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.

1. In the project where the action was taken, navigate to the management console.
2. In the App Manager tab, find the affected applications that are no longer protected and review backup policies for each.
3. To create a new appliance and reapply protections to unprotected apps, navigate to Backup and DR in the Google Cloud console and select the Deploy another backup or recovery appliance option.
4. In the Storage menu, configure each new appliance with a storage target. After you configure an appliance, it appears as an option when you create a profile for your applications.
Impact: Google Cloud Backup and DR reduced backup expiration
Event Threat Detection examines audit logs to detect whether the expiration date for a backup on a Backup and DR Service appliance has been reduced.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the Impact: Google Cloud Backup and DR reduced backup expiration finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab.
- On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Description: information about the detection
- Principal subject: a user or service account that has successfully executed an action
- Affected resource
- Resource display name: the project in which the backup's expiration was reduced.
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Logging URI: link to open the Logs Explorer.
Step 2: Research attack and response methods
Contact the owner of the service account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- In the project where the action was taken, navigate to the management console.
- In the App Manager tab, find the affected application for which backup expiration was reduced and verify that the expiration was intended by the principal.
- To initiate a new backup of the application, select Manage Backup Configurations to create an on-demand backup or to schedule a new backup.
Impact: Google Cloud Backup and DR reduced backup frequency
Event Threat Detection examines audit logs to detect whether the backup plan has been modified to reduce backup frequency.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the Impact: Google Cloud Backup and DR reduced backup frequency finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab.
- On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Description: information about the detection
- Principal subject: a user or service account that has successfully executed an action
- Affected resource
- Resource display name: the project in which the backup frequency was reduced.
- Related links, especially the following fields:
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Logging URI: link to open the Logs Explorer.
Step 2: Research attack and response methods
Contact the owner of the service account in the Principal subject field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- In the project where the action was taken, navigate to the management console.
- In the App Manager tab, find the affected application for which backup frequency was reduced and verify that the change was intended by the principal.
- To initiate a new backup of the application, select Manage Backup Configurations to create an on-demand backup or to schedule a new backup.
Impact: Suspicious Kubernetes Container Names - Coin Mining
Someone deployed a Pod with a naming convention similar to common cryptocurrency coin miners. This may be an attempt by an attacker who has achieved initial access to the cluster to use the cluster's resources for cryptocurrency mining. For more details, see the log message for this alert.
- Confirm that the Pod is legitimate.
- Determine whether there are other signs of malicious activity from the Pod or principal in the audit logs in Cloud Logging.
- If the principal isn't a service account (IAM or Kubernetes), contact the owner of the account to confirm whether the legitimate owner conducted the action.
- If the principal is a service account (IAM or Kubernetes), identify the source of the action to determine its legitimacy.
- If the Pod is not legitimate, remove it, along with any associated RBAC bindings and service accounts that the workload used and that allowed its creation.
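If you conclude the Pod is malicious, the cleanup in the last step can be sketched as follows. All object names are hypothetical placeholders; confirm that each object was created by the attacker before deleting it:

```shell
# Placeholders for the suspicious workload and the RBAC objects it used.
NS="default"
POD="bitcoinminer-7f9c"
SA="miner-sa"
BINDING="miner-binding"
# Print the commands for review instead of executing them directly.
echo kubectl delete pod "$POD" --namespace "$NS"
echo kubectl delete rolebinding "$BINDING" --namespace "$NS"
echo kubectl delete serviceaccount "$SA" --namespace "$NS"
```

If the Pod is managed by a controller such as a Deployment or DaemonSet, delete the controller instead, or the Pod will be recreated.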
Lateral Movement: Modified Boot Disk Attached to Instance
Audit logs are examined to detect suspicious disk movements among Compute Engine instance resources. A potentially modified boot disk has been attached to one of your Compute Engine instances.
Step 1: Review finding details
- Open the Lateral Movement: Modified Boot Disk Attached to Instance finding, as detailed in Reviewing findings. The details panel for the finding opens to the Summary tab.
- On the Summary tab, note the values of the following fields.
Under What was detected:
- Principal email: the service account that performed the action
- Service name: the API name of the Google Cloud service that was accessed by the service account
- Method name: the method that was called
Step 2: Research attack and response methods
- Use service account tools, like Activity Analyzer, to investigate the activity of the associated service account.
- Contact the owner of the service account in the Principal email field. Confirm whether the legitimate owner conducted the action.
Step 3: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project where the action was taken.
- Consider using Secure Boot for your Compute Engine VM instances.
- Consider deleting the potentially compromised service account, and rotating and deleting all service account access keys for the potentially compromised project. After deletion, applications that use the service account for authentication lose access. Before proceeding, your security team should identify all impacted applications and work with application owners to ensure business continuity.
- Work with your security team to identify unfamiliar resources, including Compute Engine instances, snapshots, service accounts, and IAM users. Delete resources not created with authorized accounts.
- Respond to any notifications from Google Cloud Support.
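The Secure Boot suggestion above can be applied with gcloud on an instance that supports Shielded VM features; the instance must be stopped first. A sketch with placeholder names:

```shell
# Placeholder instance and zone; Secure Boot requires a Shielded VM image,
# and the instance must be stopped before it can be updated.
INSTANCE="suspect-vm"
ZONE="us-central1-a"
# Print the commands for review instead of executing them directly.
echo gcloud compute instances stop "$INSTANCE" --zone="$ZONE"
echo gcloud compute instances update "$INSTANCE" --zone="$ZONE" --shielded-secure-boot
echo gcloud compute instances start "$INSTANCE" --zone="$ZONE"
```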
Privilege Escalation: AlloyDB Over-Privileged Grant
Detects when all privileges over an AlloyDB for PostgreSQL database (or all functions or procedures in a database) are granted to one or more database users.
To respond to this finding, do the following:
Step 1: Review finding details
- Open the Privilege Escalation: AlloyDB Over-Privileged Grant finding, as directed in Reviewing findings.
- On the Summary tab of the finding details panel, review the information in the following sections:
- What was detected, especially the following fields:
- Database display name: the name of the database in the AlloyDB for PostgreSQL instance that was affected.
- Database user name: the PostgreSQL user who granted excess privileges.
- Database query: the PostgreSQL query executed that granted the privileges.
- Database grantees: the grantees of the overbroad privileges.
- Affected resource, especially the following fields:
- Resource full name: the resource name of the AlloyDB for PostgreSQL instance that was affected.
- Parent full name: the resource name of the AlloyDB for PostgreSQL cluster that contains the instance.
- Project full name: the Google Cloud project that contains the AlloyDB for PostgreSQL instance.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
To see the complete JSON for the finding, click the JSON tab.
Step 2: Review database privileges
- Connect to the AlloyDB for PostgreSQL instance.
- List and show access privileges for the following:
  - Databases. Use the \l or \list metacommand and check what privileges are assigned for the database listed in Database display name (from Step 1).
  - Functions or procedures. Use the \df metacommand and check what privileges are assigned for functions or procedures in the database listed in Database display name (from Step 1).
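The checks above can be collected into one psql script. A sketch in which mydb is a placeholder for the database from Database display name; \df+ is used because the extended form includes the Access privileges column:

```shell
# Assemble the psql metacommands; mydb is a placeholder database name.
PSQL_SCRIPT=$(cat <<'EOF'
\l
\c mydb
\df+
\dp
EOF
)
printf '%s\n' "$PSQL_SCRIPT"
```

Pipe the printed script into psql once you are connected to the instance.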
Step 3: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI (from Step 1). The Logs Explorer page includes all logs related to the relevant AlloyDB for PostgreSQL instance.
- In the Logs Explorer, check the PostgreSQL pgaudit logs, which record executed queries to the database, by using the following filter:

      protoPayload.request.database="database"

  Replace database with the name in Database display name (from Step 1).
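To avoid typos when adapting the filter, you can assemble it in a shell variable before pasting it into Logs Explorer. A minimal sketch; mydb is a placeholder for the affected database:

```shell
# mydb stands in for the Database display name from Step 1.
DB="mydb"
FILTER="protoPayload.request.database=\"${DB}\""
echo "$FILTER"   # prints: protoPayload.request.database="mydb"
```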
Step 4: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service.
- To determine if additional remediation steps are necessary, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the instance with overprivileged grants.
- Consider revoking all permissions for the grantees that are listed in Database grantees until the investigation is completed.
- To limit access to the database (from Database display name of Step 1), revoke unnecessary permissions from the grantees (from Database grantees of Step 1).
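The revocation can be expressed as two SQL statements. A sketch in which mydb and app_user are placeholders for the database and one of the grantees from the finding; run the statements only after confirming the grants are not legitimate:

```shell
# Placeholders for the affected database and one grantee.
DB="mydb"
GRANTEE="app_user"
REVOKE_DB="REVOKE ALL PRIVILEGES ON DATABASE ${DB} FROM ${GRANTEE};"
REVOKE_TABLES="REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM ${GRANTEE};"
# Print the statements; feed them to psql after review.
printf '%s\n%s\n' "$REVOKE_DB" "$REVOKE_TABLES"
```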
Privilege Escalation: AlloyDB Database Superuser Writes to User Tables
Detects when the AlloyDB for PostgreSQL database superuser account (postgres) writes to user tables. The superuser (a role with very broad access) generally
shouldn't be used to write to user tables. A user account with more limited access
should be used for normal daily activity. When a superuser writes to a user
table, that could indicate that an attacker has escalated privileges or has
compromised the default database user and is modifying data. It could also
indicate normal but unsafe practices.
To respond to this finding, do the following:
Step 1: Review finding details
- Open a Privilege Escalation: AlloyDB Database Superuser Writes to User Tables finding, as directed in Reviewing findings.
- On the Summary tab of the finding details panel, review the information in the following sections:
- What was detected, especially the following fields:
- Database display name: the name of the database in the AlloyDB for PostgreSQL instance that was affected.
- Database user name: the superuser.
- Database query: the SQL query executed while writing to user tables.
- Affected resource, especially the following fields:
- Resource full name: the resource name of the AlloyDB for PostgreSQL instance that was affected.
- Parent full name: the resource name of the AlloyDB for PostgreSQL instance.
- Project full name: the Google Cloud project that contains the AlloyDB for PostgreSQL instance.
- Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
To see the complete JSON for the finding, click the JSON tab.
Step 2: Check logs
- In the Google Cloud console, go to Logs Explorer by clicking the link in Cloud Logging URI (from Step 1). The Logs Explorer page includes all logs related to the relevant AlloyDB for PostgreSQL instance.
- Check the logs for PostgreSQL pgaudit entries, which contain the queries executed by the superuser, by using the following filter:

      protoPayload.request.user="postgres"
Step 3: Research attack and response methods
- Review the MITRE ATT&CK framework entry for this finding type: Exfiltration Over Web Service.
- To determine if additional remediation steps are necessary, combine your investigation results with MITRE research.
Step 4: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Review the users allowed to connect to the database.
- Consider changing the password for the superuser.
- Consider creating a new, limited-access user for the different types of queries used on the instance.
- Grant the new user only the permissions needed to execute their queries.
- Update the credentials for the clients that connect to the AlloyDB for PostgreSQL instance.
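A limited-access replacement for day-to-day superuser use might look like the following. The role name, database name, and grant list are illustrative placeholders; tailor the grants to the queries your application actually runs:

```shell
# Illustrative SQL for a limited role; all names are placeholders.
SETUP_SQL=$(cat <<'EOF'
CREATE ROLE app_writer LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE mydb TO app_writer;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO app_writer;
EOF
)
printf '%s\n' "$SETUP_SQL"
```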
Compute Engine Admin Metadata detections
Persistence: GCE Admin Added SSH Key
| Description | Actions |
|---|---|
| The ssh-keys Compute Engine instance metadata key was changed on an established instance. | The Compute Engine instance metadata key ssh-keys was modified on an instance that was created more than seven days ago. Verify whether the change was done intentionally by a member, or if it was implemented by an adversary to introduce new access to your organization. |

Check logs using the following filters:

Replace the following:

Research events that trigger this finding:
Persistence: GCE Admin Added Startup Script
| Description | Actions |
|---|---|
| The startup-script or startup-script-url Compute Engine instance metadata key was changed on an established instance. | The Compute Engine instance metadata key startup-script or startup-script-url was modified on an instance that was created more than seven days ago. Verify whether the change was done intentionally by a member, or if it was implemented by an adversary to introduce new access to your organization. |

Check logs using the following filters:

Replace the following:

Research events that trigger this finding:
Google Workspace logs detections
If you share your Google Workspace logs with Cloud Logging, Event Threat Detection generates findings for several Google Workspace threats. Because Google Workspace logs are at the organization level, Event Threat Detection can only scan them if you activate Security Command Center at the organization level.
Event Threat Detection enriches log events and writes findings to Security Command Center. The following table contains Google Workspace threats, relevant MITRE ATT&CK framework entries, and details about the events that trigger findings. You can also check logs using specific filters, and combine all of the information to respond to Google Workspace threats.
Initial Access: Disabled Password Leak
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| A member's account is disabled because a password leak was detected. | Reset passwords for affected accounts, and advise members to use strong, unique passwords for corporate accounts. |

Check logs using the following filters:

Research events that trigger this finding:
Initial Access: Suspicious Login Blocked
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| A suspicious login to a member's account was detected and blocked. | This account might be targeted by adversaries. Ensure the user account follows your organization's security guidelines for strong passwords and multifactor authentication. |

Check logs using the following filters:

Research events that trigger this finding:
Initial Access: Account Disabled Hijacked
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| A member's account was suspended due to suspicious activity. | This account was hijacked. Reset the account password and encourage users to create strong, unique passwords for corporate accounts. |

Check logs using the following filters:

Research events that trigger this finding:
Impair Defenses: Two Step Verification Disabled
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| A member disabled 2-step verification. | Verify if the user intended to disable 2-step verification. If your organization requires 2-step verification, ensure that the user enables it promptly. |

Check logs using the following filters:

Research events that trigger this finding:
Initial Access: Government Based Attack
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| Government-backed attackers might have tried to compromise a member account or computer. | This account might be targeted by adversaries. Ensure the user account follows your organization's security guidelines for strong passwords and multifactor authentication. |

Check logs using the following filters:

Research events that trigger this finding:
Persistence: SSO Enablement Toggle
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| The Enable SSO (single sign-on) setting on the admin account was disabled. | SSO settings for your organization were changed. Verify whether the change was done intentionally by a member or if it was implemented by an adversary to introduce new access to your organization. |

Check logs using the following filters:

Replace the following:

Research events that trigger this finding:
Persistence: SSO Settings Changed
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| The SSO settings for the admin account were changed. | SSO settings for your organization were changed. Verify whether the change was done intentionally by a member or if it was implemented by an adversary to introduce new access to your organization. |

Check logs using the following filters:

Replace the following:

Research events that trigger this finding:
Impair Defenses: Strong Authentication Disabled
This finding isn't available if you activate Security Command Center at the project level.
| Description | Actions |
|---|---|
| 2-step verification was disabled for the organization. | 2-step verification is no longer required for your organization. Find out if this was an intentional policy change by an administrator, or if this is an attempt by an adversary to make account hijacking easier. |

Check logs using the following filters:

Research events that trigger this finding:
Respond to Google Workspace threats
Findings for Google Workspace are only available for organization-level activations of Security Command Center. Google Workspace logs cannot be scanned for project-level activations.
If you're a Google Workspace Admin, you can use the service's security tools to resolve these threats:
The tools include alerts, a security dashboard, and security recommendations, and they can help you investigate and remediate threats.
If you're not a Google Workspace Admin, do the following:
- If you still have access to your account, change or reset your password and turn on 2-Step Verification.
- Reach out to your Google Workspace Admin or the team at your company that manages your Google Workspace account. Use these findings as an indication that an account might be compromised.
Cloud IDS threat detections
Cloud IDS: THREAT_ID
Cloud IDS findings are generated by Cloud IDS, which is a security service that monitors traffic to and from your Google Cloud resources for threats. When Cloud IDS detects a threat, it sends information about the threat, such as the source IP address, destination address, and port number, to Event Threat Detection, which then issues a threat finding.
Step 1: Review finding details
- Open the Cloud IDS: THREAT_ID finding, as directed in Reviewing findings.
- In the finding details, on the Summary tab, review the listed values in the following sections:
- What was detected, especially the following fields:
- Protocol: the network protocol used
- Event time: when the event occurred
- Description: more information about the finding
- Severity: the severity of the alert
- Destination IP: the target IP of the network traffic
- Destination Port: the target port of the network traffic
- Source IP: the source IP of the network traffic
- Source Port: the source port of the network traffic
- Affected resource, especially the following fields:
- Resource full name: The project containing the network with the threat
- Related links, especially the following fields:
- Cloud Logging URI: link to Cloud IDS Logging entries; these entries have the information necessary to search Palo Alto Networks' Threat Vault
- Detection Service
- Finding Category: the Cloud IDS threat name
To see the complete JSON for the finding, click the JSON tab.
Step 2: Look up attack and response methods
After you have reviewed the finding details, refer to the Cloud IDS documentation on investigating threat alerts to determine an appropriate response.
You can find more information about the detected event in the original log entry by clicking the link in the Cloud Logging URI field in the finding details.
Container Threat Detection responses
To learn more about Container Threat Detection, see how Container Threat Detection works.
Added Binary Executed
A binary that was not part of the original container image was
executed. Attackers commonly install exploitation tooling and malware after the initial compromise. Ensuring that your containers are immutable is an important best practice. This
is a low-severity finding, because your organization might not be following this
best practice. There are corresponding
Execution: Added Malicious Binary Executed
findings when the hash of the
binary is a known indicator of compromise (IoC). To respond to this finding,
do the following:
Step 1: Review finding details
- Open an Added Binary Executed finding, as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
- On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the absolute path of the added binary.
- Arguments: the arguments provided when invoking the added binary.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster including the project number, location, and cluster name.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:

- resource:
  - project_display_name: the name of the project that contains the cluster.
- sourceProperties:
  - Pod_Namespace: the name of the Pod's Kubernetes namespace.
  - Pod_Name: the name of the GKE Pod.
  - Container_Name: the name of the affected container.
  - Container_Image_Uri: the name of the container image being deployed.
  - VM_Instance_Name: the name of the GKE node where the Pod executed.
Identify other findings that occurred at a similar time for this container. Related findings might indicate that this activity was malicious, instead of a failure to follow best practices.
Step 2: Review cluster and node
1. In the Google Cloud console, go to the Kubernetes clusters page.
2. On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
3. Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.
4. Click the Nodes tab, and select the node listed in VM_Instance_Name.
5. Click the Details tab and note the container.googleapis.com/instance_id annotation.
Step 3: Review Pod
1. In the Google Cloud console, go to the Kubernetes Workloads page.
2. On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
3. Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details, and on the Pod namespace listed in Pod_Namespace, if necessary.
4. Select the Pod listed in Pod_Name. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary.Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for
Pod_Name
by using the following filter:resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
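The filters above contain placeholders (resource.project_display_name, location, cluster_name, and so on) that must be replaced with real values before the query returns results. A minimal sketch, using invented example values, of filling in the Pod-log filter before pasting it into Logs Explorer:

```shell
# Example values only; substitute the values from your finding's JSON.
project="example-project"
location="us-central1"
cluster="example-cluster"
namespace="example-namespace"
pod="example-pod"

# Print the completed filter, ready to paste into Logs Explorer.
cat <<EOF
resource.type="k8s_container"
resource.labels.project_id="$project"
resource.labels.location="$location"
resource.labels.cluster_name="$cluster"
resource.labels.namespace_name="$namespace"
resource.labels.pod_name="$pod"
EOF
```

The same substitution applies to the cluster audit log and node console filters.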
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project project_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project project_name
Replace the following:
- cluster_name: the cluster listed in resource.labels.cluster_name
- location: the location listed in resource.labels.location
- project_name: the project name listed in resource.project_display_name
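The cluster, location, and project values can also be derived from the cluster's Resource full name, which typically follows the pattern //container.googleapis.com/projects/PROJECT/locations/LOCATION/clusters/CLUSTER. A hedged sketch of splitting it apart; the resource name here is invented, and the command is printed rather than executed so you can review it first:

```shell
# Hypothetical resource name; real ones come from the finding's
# Resource full name row. Older zonal name forms may use "zones"
# instead of "locations".
full_name="//container.googleapis.com/projects/example-project/locations/us-central1/clusters/example-cluster"

project="$(echo "$full_name" | cut -d/ -f5)"
location="$(echo "$full_name" | cut -d/ -f7)"
cluster="$(echo "$full_name" | cut -d/ -f9)"

# Print the command; for zonal clusters use --zone instead of --region.
echo "gcloud container clusters get-credentials $cluster --region $location --project $project"
```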
Retrieve the added binary by running:
kubectl cp Pod_Namespace/Pod_Name:Process_Binary_Fullpath -c Container_Name local_file
Replace local_file with a local file path to store the added binary.
Connect to the container environment by running:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer, Native API.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
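After retrieving the binary in Step 5, you can confirm locally that the file you copied is the one flagged in the finding by comparing its SHA-256 hash with the value shown in the VirusTotal indicator. A minimal sketch; the empty placeholder file and its well-known hash stand in for your copied binary and the hash from the finding:

```shell
# Placeholder: in practice this is the local_file you saved with kubectl cp.
: > suspect_binary

# Hash from the finding's VirusTotal indicator (here, the SHA-256 of an
# empty file, matching the placeholder above).
expected_hash="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

actual_hash="$(sha256sum suspect_binary | awk '{print $1}')"
if [ "$actual_hash" = "$expected_hash" ]; then
  echo "match: this is the flagged binary"
else
  echo "mismatch: this is not the file the finding refers to"
fi
```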
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- If the binary was intended to be included in the container, rebuild the container image with the binary included. This way, the container image can remain immutable.
- Otherwise, contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Added Library Loaded
A library that was not part of the original container image was loaded. Attackers might load malicious libraries into existing programs to bypass code execution protections and to hide malicious code. Ensuring that your containers are immutable is an important best practice. This is a low-severity finding, because your organization might not be following this best practice. A corresponding Execution: Added Malicious Library Loaded finding is issued when the hash of the library is a known indicator of compromise (IoC). To respond to this finding, do the following:
Step 1: Review finding details
Open an Added Library Loaded finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the full path of the process binary that loaded the library.
- Libraries: details about the added library.
- Arguments: the arguments provided when invoking the process binary.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:
- resource:
  - project_display_name: the name of the project that contains the asset.
- sourceProperties:
  - Pod_Namespace: the name of the Pod's Kubernetes namespace.
  - Pod_Name: the name of the GKE Pod.
  - Container_Name: the name of the affected container.
  - Container_Image_Uri: the name of the container image being executed.
  - VM_Instance_Name: the name of the GKE node where the Pod executed.
Identify other findings that occurred at a similar time for this container. Related findings might indicate that this activity was malicious, instead of a failure to follow best practices.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Select the cluster listed in resource.name. Note any metadata about the cluster and its owner.
Click the Nodes tab. Select the node listed in VM_Instance_Name.
Click the Details tab and note the container.googleapis.com/instance_id annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details and the Pod namespace listed in Pod_Namespace, if necessary.
Select the Pod listed in Pod_Name. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for Pod_Name by using the following filter:
resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project resource.project_display_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project resource.project_display_name
Retrieve the added library by running:
kubectl cp Pod_Namespace/Pod_Name:Added_Library_Fullpath -c Container_Name local_file
Replace local_file with a local file path to store the added library.
Connect to the container environment by running:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
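Once connected to the container shell, one way to look for the added library is to list the shared objects mapped into the affected process. This sketch assumes a Linux /proc filesystem and demonstrates against the current shell's own process; inside the container you would use the PID of the process named in the Program binary field:

```shell
# List unique shared-object paths mapped into a process (here, this shell).
# The last whitespace-separated field of each /proc/PID/maps line is the
# mapped file's path, when one exists.
pid=$$
awk '$NF ~ /\.so/ {print $NF}' "/proc/$pid/maps" | sort -u
```

Compare the listed paths against the Libraries field of the finding to locate the added library on disk.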
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer, Shared Modules.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- If the library was intended to be included in the container, rebuild the container image with the library included. This way, the container can be immutable.
- Otherwise, contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Execution: Added Malicious Binary Executed
A malicious binary that was not part of the original container image was executed. Attackers commonly install exploitation tooling and malware after the initial compromise. To respond to this finding, do the following:
Step 1: Review finding details
Open an Execution: Added Malicious Binary Executed finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the absolute path of the added binary.
- Arguments: the arguments provided when invoking the added binary.
- Containers: the name of the affected container.
- Containers URI: the name of the container image being deployed.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster including the project number, location, and cluster name.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:
- sourceProperties:
  - VM_Instance_Name: the name of the GKE node where the Pod executed.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.
Click the Nodes tab. Select the node listed in VM_Instance_Name.
Click the Details tab and note the container.googleapis.com/instance_id annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details and the Pod namespace listed in Pod_Namespace, if necessary.
Select the Pod listed in Pod_Name. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for Pod_Name by using the following filter:
resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project project_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project project_name
Replace the following:
- cluster_name: the cluster listed in resource.labels.cluster_name
- location: the location listed in resource.labels.location
- project_name: the project name listed in resource.project_display_name
Retrieve the added malicious binary:
kubectl cp Pod_Namespace/Pod_Name:Process_Binary_Fullpath -c Container_Name local_file
Replace local_file with a local path to store the added malicious binary.
Connect to the container environment:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer, Native API.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
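How you stop and replace the container depends on how the workload is managed: a Pod owned by a Deployment is recreated automatically, so you typically scale the controller down or delete the Pod only after the image is fixed. A sketch that prints candidate commands for review rather than executing them; all names are invented examples:

```shell
# Example names; take the real ones from Pod_Namespace, Pod_Name, and
# the Pod's owning controller.
namespace="example-namespace"
pod="example-pod"
deployment="example-deployment"

# Print, don't execute: review each command before running it against
# a live cluster.
echo "kubectl delete pod $pod --namespace=$namespace"
echo "kubectl scale deployment $deployment --namespace=$namespace --replicas=0"
```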
Execution: Added Malicious Library Loaded
A malicious library that was not part of the original container image was loaded. Attackers might load malicious libraries into existing programs in order to bypass code execution protections and to hide malicious code. To respond to this finding, do the following:
Step 1: Review finding details
Open an Execution: Added Malicious Library Loaded finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the full path of the process binary that loaded the library.
- Libraries: details about the added library.
- Arguments: the arguments provided when invoking the process binary.
- Containers: the name of the affected container.
- Containers URI: the name of the container image being deployed.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:
- sourceProperties:
  - VM_Instance_Name: the name of the GKE node where the Pod executed.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.
Click the Nodes tab. Select the node listed in VM_Instance_Name.
Click the Details tab and note the container.googleapis.com/instance_id annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details and the Pod namespace listed in Pod_Namespace, if necessary.
Select the Pod listed in Pod_Name. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for Pod_Name by using the following filter:
resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project resource.project_display_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project resource.project_display_name
Retrieve the added malicious library:
kubectl cp Pod_Namespace/Pod_Name:Added_Library_Fullpath -c Container_Name local_file
Replace local_file with a local path to store the added malicious library.
Connect to the container environment:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer, Shared Modules.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Execution: Built in Malicious Binary Executed
A binary was executed that meets both of the following conditions:
- It was included in the original container image.
- It was identified as malicious based on threat intelligence.
This indicates that an attacker has control of the container image repository or the image build pipeline, and injected the malicious binary into the container image. To respond to this finding, do the following:
Step 1: Review finding details
Open an Execution: Built in Malicious Binary Executed finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the absolute path of the built-in binary.
- Arguments: the arguments provided when invoking the built-in binary.
- Containers: the name of the affected container.
- Containers URI: the name of the container image being deployed.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster including the project number, location, and cluster name.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:
- sourceProperties:
  - VM_Instance_Name: the name of the GKE node where the Pod executed.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.
Click the Nodes tab. Select the node listed in VM_Instance_Name.
Click the Details tab and note the container.googleapis.com/instance_id annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details and the Pod namespace listed in Pod_Namespace, if necessary.
Select the Pod listed in Pod_Name. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for Pod_Name by using the following filter:
resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project project_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project project_name
Replace the following:
- cluster_name: the cluster listed in resource.labels.cluster_name
- location: the location listed in resource.labels.location
- project_name: the project name listed in resource.project_display_name
Retrieve the built-in malicious binary:
kubectl cp Pod_Namespace/Pod_Name:Process_Binary_Fullpath -c Container_Name local_file
Replace local_file with a local path to store the built-in malicious binary.
Connect to the container environment:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer, Native API.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
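Because the malicious binary shipped inside the image itself, the image source and its build pipeline are also suspect and worth auditing. The following sketch prints, rather than runs, commands you might use to inspect the image named in Containers URI; the URI is invented, and which command applies depends on whether the image lives in Container Registry or Artifact Registry:

```shell
# Hypothetical image URI taken from the finding's Containers URI field.
image_uri="us-docker.pkg.dev/example-project/example-repo/example-image:latest"

# Print, don't execute: inspect the image's digest and metadata, then
# check your registry's audit logs for who pushed it and when.
echo "gcloud container images describe $image_uri"
echo "gcloud artifacts docker images describe $image_uri"
```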
Execution: Malicious Python executed
A machine learning model identified executed Python code as malicious. Attackers can use Python to transfer tools and execute commands without binaries. Ensuring that your containers are immutable is an important best practice. Using scripts to transfer tools can mimic the attacker technique of ingress tool transfer and result in unwanted detections.
To respond to this finding, do the following:
Step 1: Review finding details
Open an Execution: Malicious Python executed finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: details about the interpreter that invoked the script.
- Script: the absolute path of the script on disk. This attribute appears only for scripts written to disk, not for literal script execution (for example, python3 -c).
- Arguments: the arguments provided when invoking the script.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster, including the project number, location, and cluster name.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
In the detail view of the finding, click the JSON tab.
In the JSON, note the following fields.
- finding:
  - processes:
    - script:
      - contents: the contents of the executed script, which might be truncated for performance reasons; this can aid in your investigation.
      - sha256: the SHA-256 hash of script.contents.
- resource:
  - project_display_name: the name of the project that contains the asset.
- sourceProperties:
  - Pod_Namespace: the name of the Pod's Kubernetes namespace.
  - Pod_Name: the name of the GKE Pod.
  - Container_Name: the name of the affected container.
  - Container_Image_Uri: the name of the container image being executed.
  - VM_Instance_Name: the name of the GKE node where the Pod executed.
Identify other findings that occurred at a similar time for this container. For instance, if the script drops a binary, check for findings related to the binary.
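You can cross-check the finding locally: hashing the exact bytes of the captured script should reproduce the value in finding.processes.script.sha256 (if the contents were truncated, the hashes will not match). A minimal sketch with an invented one-line script standing in for script.contents:

```shell
# Invented stand-in for finding.processes.script.contents. printf '%s'
# avoids adding a trailing newline, which would change the hash.
printf '%s' 'print("hello")' > captured_script.py

# Hash the exact bytes; compare the output with
# finding.processes.script.sha256 from the finding JSON.
sha256sum captured_script.py | awk '{print $1}'
```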
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.
Click the Nodes tab. Select the node listed in VM_Instance_Name.
Click the Details tab and note the container.googleapis.com/instance_id annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Filter on the cluster listed in resource.name and the Pod namespace listed in Pod_Namespace, if necessary.
Select the Pod listed in Pod_Name. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name, if necessary.
Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for Pod_Name by using the following filter:
resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
In the Google Cloud console, go to the Kubernetes clusters page.
Click the name of the cluster shown in resource.labels.cluster_name.
On the Clusters page, click Connect, and then click Run in Cloud Shell.
Cloud Shell launches and adds commands for the cluster in the terminal.
Press Enter and, if the Authorize Cloud Shell dialog appears, click Authorize.
Connect to the container environment by running the following command:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Command and Scripting Interpreter, Ingress Tool Transfer.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- If Python was making intended changes to the container, rebuild the container image such that no changes are needed. This way, the container can be immutable.
- Otherwise, contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Execution: Modified Malicious Binary Executed
A binary was executed that meets all of the following conditions:
- It was included in the original container image.
- It was modified during the container runtime.
- It was identified as malicious based on threat intelligence.
Attackers commonly install exploitation tooling and malware after the initial compromise. To respond to this finding, do the following:
Step 1: Review finding details
Open an Execution: Modified Malicious Binary Executed finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the absolute path of the modified binary.
- Arguments: the arguments provided when invoking the modified binary.
- Containers: the name of the affected container.
- Containers URI: the name of the container image being deployed.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster including the project number, location, and cluster name.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:
sourceProperties
:VM_Instance_Name
: the name of the GKE node where the Pod executed.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.
Click the Nodes tab. Select the node listed in
VM_Instance_Name
. Click the Details tab and note the
container.googleapis.com/instance_id
annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details and the Pod namespace listed in
Pod_Namespace
, if necessary. Select the Pod listed in
Pod_Name
. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for
Pod_Name
by using the following filter:resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
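The filters above can also be run from the command line. The following is a minimal sketch using `gcloud logging read`; all values are hypothetical stand-ins for the placeholders from the finding.

```shell
# Hypothetical values; substitute the values from your finding.
PROJECT_ID="example-project"
CLUSTER_NAME="example-cluster"
LOCATION="us-central1"
POD_NAMESPACE="default"
POD_NAME="example-pod"

# Build the Pod-log filter from Step 4.
FILTER="resource.type=\"k8s_container\"
resource.labels.project_id=\"${PROJECT_ID}\"
resource.labels.location=\"${LOCATION}\"
resource.labels.cluster_name=\"${CLUSTER_NAME}\"
resource.labels.namespace_name=\"${POD_NAMESPACE}\"
resource.labels.pod_name=\"${POD_NAME}\""

# Query the most recent day of matching entries.
if command -v gcloud >/dev/null 2>&1; then
  gcloud logging read "$FILTER" --project="$PROJECT_ID" --freshness=1d --limit=50
else
  echo "gcloud not found; run this where the Google Cloud CLI is installed"
fi
```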
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project project_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project project_name
Replace the following:
cluster_name
: the cluster listed in resource.labels.cluster_name
location
: the location listed in resource.labels.location
project_name
: the project name listed in resource.project_display_name
Retrieve the modified malicious binary:
kubectl cp Pod_Namespace/Pod_Name:Process_Binary_Fullpath -c Container_Name local_file
Replace
local_file
with a local path to store the modified malicious binary. Connect to the container environment:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at
/bin/sh
.
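After you copy the binary out with `kubectl cp`, you can confirm that what you retrieved matches the finding before relying on the VirusTotal report. A minimal sketch; the `/tmp` file is a stand-in for the `local_file` path you chose.

```shell
# Stand-in file; in practice, hash the binary that you retrieved with `kubectl cp`.
printf 'stand-in binary content' > /tmp/retrieved_binary

# Compare this digest to the SHA-256 value shown in the finding's
# VirusTotal indicator.
DIGEST=$(sha256sum /tmp/retrieved_binary | awk '{print $1}')
echo "$DIGEST"
```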
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer, Native API.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Execution: Modified Malicious Library Loaded
A library that meets all of the following conditions was loaded:
- Included in the original container image.
- Modified during the container runtime.
- Identified as malicious based on threat intelligence.
Attackers might load malicious libraries into existing programs in order to bypass code execution protections and to hide malicious code. To respond to this finding, do the following:
Step 1: Review finding details
Open an
Execution: Modified Malicious Library Loaded
finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the full path of the process binary that loaded the library.
- Libraries: details about the modified library.
- Arguments: the arguments provided when invoking the process binary.
- Containers: the name of the affected container.
- Containers URI: the name of the container image being deployed.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:
sourceProperties
:VM_Instance_Name
: the name of the GKE node where the Pod executed.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Select the cluster listed in
resource.name
. Note any metadata about the cluster and its owner. Click the Nodes tab. Select the node listed in
VM_Instance_Name
. Click the Details tab and note the
container.googleapis.com/instance_id
annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Filter on the cluster listed on the Resource full name row in the Summary tab of the finding details and the Pod namespace listed in
Pod_Namespace
, if necessary. Select the Pod listed in
Pod_Name
. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for
Pod_Name
by using the following filter:resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project resource.project_display_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project resource.project_display_name
Retrieve the modified malicious library:
kubectl cp Pod_Namespace/Pod_Name:Added_Library_Fullpath -c Container_Name local_file
Replace local_file with a local path to store the modified malicious library.
Connect to the container environment:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at
/bin/sh
.
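From the container shell, you can check whether the modified library from the finding is actually mapped into a running process. The following is a sketch using the `/proc` filesystem; PID 1 is usually the container's entrypoint process, and reading another process's maps may require adequate permissions.

```shell
# Choose the process to inspect; inside most containers, PID 1 is the entrypoint.
PID=1

# Show which binary is running as that PID.
tr '\0' ' ' < "/proc/$PID/cmdline" 2>/dev/null; echo

# List the shared objects the process has mapped; look for the path shown in
# the finding's Libraries field.
awk '/\.so/ {print $NF}' "/proc/$PID/maps" 2>/dev/null | sort -u
```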
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer, Shared Modules.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Malicious Script Executed
A machine learning model identified executed Bash code as malicious. Attackers can use Bash to transfer tools and execute commands without binaries. Ensuring that your containers are immutable is an important best practice. Using scripts to transfer tools can mimic the attacker technique of ingress tool transfer and result in unwanted detections.
To respond to this finding, do the following:
Step 1: Review finding details
Open a
Malicious Script Executed
finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: details about the interpreter that invoked the script.
- Script: the absolute path of the script on disk; this
attribute appears only for scripts written to disk, not for literal
script execution, for example,
bash -c
. - Arguments: the arguments provided when invoking the script.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster, including the project number, location, and cluster name.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
In the detail view of the finding, click the JSON tab.
In the JSON, note the following fields.
finding
:processes
:script
:contents
: the contents of the executed script, which might be truncated for performance reasons; this can aid in your investigation.
sha256
: the SHA-256 hash of script.contents
resource
:project_display_name
: the name of the project that contains the asset.
sourceProperties
:Pod_Namespace
: the name of the Pod's Kubernetes namespace.
Pod_Name
: the name of the GKE Pod.
Container_Name
: the name of the affected container.
Container_Image_Uri
: the name of the container image being executed.
VM_Instance_Name
: the name of the GKE node where the Pod executed.
Identify other findings that occurred at a similar time for this container. For instance, if the script drops a binary, check for findings related to the binary.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Select the cluster listed on the Resource full name row in the Summary tab of the finding details. Note any metadata about the cluster and its owner.
Click the Nodes tab. Select the node listed in
VM_Instance_Name
. Click the Details tab and note the
container.googleapis.com/instance_id
annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Filter on the cluster listed in
resource.name
and the Pod namespace listed in Pod_Namespace
, if necessary. Select the Pod listed in
Pod_Name
. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for
Pod_Name
by using the following filter:resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
In the Google Cloud console, go to the Kubernetes clusters page.
Click the name of the cluster shown in
resource.labels.cluster_name
. On the Clusters page, click Connect, and then click Run in Cloud Shell.
Cloud Shell launches and adds commands for the cluster in the terminal.
Press enter and, if the Authorize Cloud Shell dialog appears, click Authorize.
Connect to the container environment by running the following command:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at
/bin/sh
.
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Command and Scripting Interpreter, Ingress Tool Transfer.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- If the script was making intended changes to the container, rebuild the container image such that no changes are needed. This way, the container can be immutable.
- Otherwise, contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
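If you rebuild the image so that no runtime changes are needed, you can also enforce immutability at the Kubernetes level. The following is a hedged sketch, not official remediation guidance: suspect-deployment, Pod_Namespace, and the container index 0 are placeholders for your own workload.

```shell
# Placeholder Deployment; apply to the workload that you rebuilt from a clean image.
# readOnlyRootFilesystem prevents processes in the container from modifying
# files on its root filesystem at runtime.
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch deployment suspect-deployment --namespace=Pod_Namespace \
    --type=json -p '[{"op": "add",
      "path": "/spec/template/spec/containers/0/securityContext",
      "value": {"readOnlyRootFilesystem": true}}]' || true
else
  echo "kubectl not found; run this where kubectl is configured for your cluster"
fi
```

Workloads that legitimately write to disk need an emptyDir or other volume mounted at their writable paths before this setting is safe to apply.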
Malicious URL Observed
Container Threat Detection observed a malicious URL in the argument list of an executable process. Attackers can load malware or malicious libraries through malicious URLs.
To respond to this finding, perform the following steps.
Step 1: Review finding details
Open a
Malicious URL Observed
finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- URI: the malicious URI observed.
- Added binary: the full path of the process binary that received the arguments that contain the malicious URL.
- Arguments: the arguments provided when invoking the process binary.
- Environment variables: the environment variables that were in effect when the process binary was invoked.
- Containers: the name of the container.
- Kubernetes pods: the pod name and namespace.
- Affected resource, especially the following fields:
- Resource display name: the name of the affected resource.
- Resource full name: the full resource name
of the cluster. The full resource name includes the following
information:
- The project that contains the cluster:
projects/PROJECT_ID
- The location in which the cluster is located: either
zones/ZONE
or locations/LOCATION
- The name of the cluster:
clusters/CLUSTER_NAME
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
On the JSON tab, in the
sourceProperties
attribute, note the value of theVM_Instance_Name
property.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project that appears in Resource full name (
resource.name
), if necessary. The project name appears after /projects/
in the full resource name. Click the cluster name that you noted in Resource display name (
resource.display_name
) of the finding summary. The Clusters page opens. In the Metadata section on the Cluster details page, note any user-defined information that might be helpful in resolving the threat, such as information that identifies the cluster owner.
Click the Nodes tab.
From the listed nodes, select the node that matches the value of
VM_Instance_Name
that you noted in the finding JSON earlier. On the Details tab of the Node details page, in the Annotations section, note the value of the
container.googleapis.com/instance_id
annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project that you noted in the Resource full name (
resource.name
) of the cluster in the finding summary, if necessary. Click Show system workloads.
Filter the list of workloads by the cluster name that you noted in Resource full name (
resource.name
) of the finding summary and, if necessary, the pod Namespace (kubernetes.pods.ns
) that you noted. Click the workload name that matches the value of the
VM_Instance_Name
property that you noted in the finding JSON earlier. The Pod details page opens. On the Pod details page, note any information about the Pod that might help you resolve the threat.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project that appears in Resource full name (
resource.name
), if necessary. Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for your pod (
kubernetes.pods.name
) by using the following filter:resource.type="k8s_container"
resource.labels.project_id="PROJECT_ID"
resource.labels.location="LOCATION"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="NAMESPACE_NAME"
resource.labels.pod_name="POD_NAME"
- Find cluster audit logs by using the following filter:
logName="projects/PROJECT_NAME/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="PROJECT_ID"
resource.labels.location="LOCATION_OR_ZONE"
resource.labels.cluster_name="CLUSTER_NAME"
POD_NAME
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="INSTANCE_ID"
Step 5: Investigate the running container
If the container is still running, it might be possible to investigate the container environment directly.
In the Google Cloud console, go to the Kubernetes clusters page.
Click the name of the cluster shown in
resource.labels.cluster_name
. On the Clusters page, click Connect, and then click Run in Cloud Shell.
Cloud Shell launches and adds commands for the cluster in the terminal.
Press enter and, if the Authorize Cloud Shell dialog appears, click Authorize.
Connect to the container environment by running the following command:
kubectl exec --namespace=POD_NAMESPACE -ti POD_NAME -c CONTAINER_NAME -- /bin/sh
Replace
CONTAINER_NAME
with the name of the container that you noted in the finding summary earlier. This command requires the container to have a shell installed at
/bin/sh
.
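From the container shell, you can check whether any process still carries the malicious URL in its argument list. The following is a sketch over the `/proc` filesystem; the URL value is a placeholder for the URI field from the finding.

```shell
# Placeholder; use the URI value from the finding's "What was detected" section.
URL="http://malicious.example/payload"

# Scan every process's argument list for the URL.
for p in /proc/[0-9]*; do
  if tr '\0' ' ' < "$p/cmdline" 2>/dev/null | grep -qF "$URL"; then
    echo "match: PID ${p#/proc/}: $(tr '\0' ' ' < "$p/cmdline" 2>/dev/null)"
  fi
done
```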
Step 6: Research attack and response methods
- Check Safe Browsing site status to get details on why the URL is classified as malicious.
- Review MITRE ATT&CK framework entries for this finding type: Ingress Tool Transfer.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Reverse Shell
A process started with stream redirection to a remote connected socket. Spawning a network-connected shell can allow an attacker to perform arbitrary actions after a limited initial compromise. To respond to this finding, do the following:
Step 1: Review finding details
Open a
Reverse Shell
finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Program binary: the absolute path of the process started with stream redirection to a remote socket.
- Arguments: the arguments provided when invoking the process binary.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the cluster.
- Project full name: the affected Google Cloud project.
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
In the detail view of the finding, click the JSON tab.
In the JSON, note the following fields.
resource
:project_display_name
: the name of the project that contains the asset.
sourceProperties
:Pod_Namespace
: the name of the Pod's Kubernetes namespace.
Pod_Name
: the name of the GKE Pod.
Container_Name
: the name of the affected container.
VM_Instance_Name
: the name of the GKE node where the Pod executed.
Reverse_Shell_Stdin_Redirection_Dst_Ip
: the remote IP address of the connection.
Reverse_Shell_Stdin_Redirection_Dst_Port
: the remote port.
Reverse_Shell_Stdin_Redirection_Src_Ip
: the local IP address of the connection.
Reverse_Shell_Stdin_Redirection_Src_Port
: the local port.
Container_Image_Uri
: the name of the container image being executed.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Select the cluster listed in
resource.name
. Note any metadata about the cluster and its owner. Click the Nodes tab. Select the node listed in
VM_Instance_Name
. Click the Details tab and note the
container.googleapis.com/instance_id
annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Filter on the cluster listed in
resource.name
and the Pod namespace listed in Pod_Namespace
, if necessary. Select the Pod listed in
Pod_Name
. Note any metadata about the Pod and its owner.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for
Pod_Name
by using the following filter:resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
gcloud container clusters get-credentials cluster_name --zone location --project resource.project_display_name
For regional clusters:
gcloud container clusters get-credentials cluster_name --region location --project resource.project_display_name
Launch a shell within the container environment by running:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at
/bin/sh
. To view all processes running in the container, run the following command in the container shell:
ps axjf
This command requires the container to have
/bin/ps
installed.
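From the same container shell, you can also look for the connection reported in the finding (Reverse_Shell_Stdin_Redirection_Dst_Ip and Reverse_Shell_Stdin_Redirection_Dst_Port). The following is a sketch; `ss` may be absent in minimal images, so `/proc/net/tcp` is the fallback.

```shell
# List established TCP connections; compare remote endpoints to the
# Reverse_Shell_Stdin_Redirection_Dst_Ip and _Dst_Port values from the finding.
if command -v ss >/dev/null 2>&1; then
  ss -tnp state established
else
  # Fallback: state 01 in /proc/net/tcp means ESTABLISHED; addresses are
  # hex-encoded ip:port pairs.
  awk 'NR > 1 && $4 == "01" {print $2, "->", $3}' /proc/net/tcp
fi
```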
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Command and Scripting Interpreter, Ingress Tool Transfer.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
Unexpected Child Shell
Container Threat Detection observed a process that unexpectedly spawned a child shell process. This event might indicate that an attacker is trying to abuse shell commands and scripts.
To respond to this finding, perform the following steps.
Step 1: Review finding details
Open an
Unexpected Child Shell
finding as directed in Reviewing findings. The details panel for the finding opens to the Summary tab. On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- Parent process: the process that unexpectedly created the child shell process.
- Child process: the child shell process.
- Arguments: the arguments provided to the child shell process binary.
- Environment variables: the environment variables of the child shell process binary.
- Containers: the name of the container.
- Containers URI: the image URI of the container.
- Kubernetes pods: the Pod name and namespace.
- Affected resource, especially the following fields:
- Resource display name: the name of the affected resource.
- Resource full name: the full resource name
of the cluster. The full resource name includes the following
information:
- The project that contains the cluster:
projects/PROJECT_ID
- The location in which the cluster is located: either
zones/ZONE
or locations/LOCATION
- The name of the cluster:
clusters/CLUSTER_NAME
- Related links, especially the following fields:
- VirusTotal indicator: link to the VirusTotal analysis page.
Click the JSON tab and note the following fields:
processes
: an array that contains all processes related to the finding, including the child shell process and the parent process.
resource
:project_display_name
: the name of the project that contains the asset.
sourceProperties
:VM_Instance_Name
: the name of the GKE node where the Pod executed.
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
, if necessary. Select the cluster listed in
resource.name
. Note any metadata about the cluster and its owner. Click the Nodes tab. Select the node listed in
VM_Instance_Name
. Click the Details tab and note the
container.googleapis.com/instance_id
annotation.
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project that you noted in the Resource full name (
resource.name
) of the cluster in the finding summary, if necessary. Click Show system workloads.
Filter the list of workloads by the cluster name that you noted in Resource full name (
resource.name
) of the finding summary and, if necessary, the pod Namespace (kubernetes.pods.ns
) that you noted. Click the workload name that matches the value of the
VM_Instance_Name
property that you noted in the finding JSON earlier. The Pod details page opens. On the Pod details page, note any information about the Pod that might help you resolve the threat.
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name
. Set Select time range to the period of interest.
On the page that loads, do the following:
- Find Pod logs for
Pod_Name
by using the following filter:resource.type="k8s_container"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
resource.labels.namespace_name="Pod_Namespace"
resource.labels.pod_name="Pod_Name"
- Find cluster audit logs by using the following filter:
logName="projects/resource.project_display_name/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="resource.project_display_name"
resource.labels.location="location"
resource.labels.cluster_name="cluster_name"
Pod_Name
- Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="instance_id"
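If you prefer to query from the command line, these same filters work with gcloud logging read. The following sketch assembles the Pod-log filter from placeholder values (my-project, my-cluster, suspicious-pod, and so on are illustrative; substitute the values you noted from the finding):

```shell
# Placeholder values -- substitute the project, location, cluster,
# namespace, and Pod name that you noted from the finding.
PROJECT_ID="my-project"
LOCATION="us-central1-c"
CLUSTER_NAME="my-cluster"
POD_NAMESPACE="default"
POD_NAME="suspicious-pod"

# Assemble the Pod-log filter shown above.
FILTER=$(cat <<EOF
resource.type="k8s_container"
resource.labels.project_id="${PROJECT_ID}"
resource.labels.location="${LOCATION}"
resource.labels.cluster_name="${CLUSTER_NAME}"
resource.labels.namespace_name="${POD_NAMESPACE}"
resource.labels.pod_name="${POD_NAME}"
EOF
)
echo "${FILTER}"

# Then read the logs (requires an authenticated gcloud session):
# gcloud logging read "${FILTER}" --project="${PROJECT_ID}" --freshness=7d --limit=50
```

The same pattern applies to the cluster audit log and node console filters: build the filter string, then pass it to gcloud logging read.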
Step 5: Investigate the running container
If the container is still running, it might be possible to investigate the container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in resource.project_display_name.
Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters, run the following:
gcloud container clusters get-credentials cluster_name --zone location --project resource.project_display_name
For regional clusters, run the following:
gcloud container clusters get-credentials cluster_name --region location --project resource.project_display_name
To launch a shell within the container environment, run the following:
kubectl exec --namespace=Pod_Namespace -ti Pod_Name -c Container_Name -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
To view all processes running in the container, run the following command in the container shell:
ps axjf
This command requires the container to have /bin/ps installed.
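If the container image doesn't include ps, you can approximate the listing by reading /proc directly. This is a sketch that works in most Linux environments, not an official substitute:

```shell
# Approximate a process listing without /bin/ps by walking /proc.
# Each numeric directory under /proc is a running process.
list_procs() {
  for pid in /proc/[0-9]*; do
    [ -r "$pid/cmdline" ] || continue
    # cmdline is NUL-separated argv; join it with spaces.
    cmd=$(tr '\0' ' ' < "$pid/cmdline" 2>/dev/null)
    # Kernel threads have an empty cmdline; fall back to comm.
    [ -n "$cmd" ] || cmd="[$(cat "$pid/comm" 2>/dev/null)]"
    printf '%s\t%s\n' "${pid#/proc/}" "$cmd"
  done
}
list_procs
```

Compare the output against the workload's expected processes; anything you don't recognize warrants closer inspection.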
Step 6: Research attack and response methods
- Review MITRE ATT&CK framework entries for this finding type: Command and Scripting Interpreter: Unix Shell.
- Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
- To develop a response plan, combine your investigation results with the MITRE research and VirusTotal analysis.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
- Contact the owner of the project with the compromised container.
- Stop or delete the compromised container and replace it with a new container.
VM Threat Detection response
To learn more about VM Threat Detection, see VM Threat Detection overview.
Defense Evasion: Rootkit
VM Threat Detection detected a combination of signals that match a known kernel-mode rootkit in a Compute Engine VM instance.
The Defense Evasion: Rootkit finding category is a superset of the following finding categories. Therefore, this section applies to these finding categories as well.
- Defense Evasion: Unexpected ftrace handler (Preview)
- Defense Evasion: Unexpected interrupt handler (Preview)
- Defense Evasion: Unexpected kernel code modification (Preview)
- Defense Evasion: Unexpected kernel modules (Preview)
- Defense Evasion: Unexpected kernel read-only data modification (Preview)
- Defense Evasion: Unexpected kprobe handler (Preview)
- Defense Evasion: Unexpected processes in runqueue (Preview)
- Defense Evasion: Unexpected system call handler (Preview)
To respond to these findings, do the following.
Step 1: Review finding details
Open the finding, as directed in Review findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
What was detected, especially the following fields:
- Kernel rootkit name: the family name of the rootkit that was detected—for example, Diamorphine.
. - Unexpected kernel code pages: whether kernel code pages are present in kernel or module code regions where they aren't expected.
- Unexpected system call handler: whether system call handlers are present in kernel or module code regions where they aren't expected.
Affected resource, especially the following field:
- Resource full name: the full resource name of the affected VM instance, including the ID of the project that contains it.
To see the complete JSON for this finding, in the detail view of the finding, click the JSON tab.
Step 2: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project that contains the VM instance, as specified on the Resource full name row in the Summary tab of the finding details.
Check the logs for signs of intrusion on the affected VM instance. For example, check for suspicious or unknown activities and signs of compromised credentials.
Step 3: Review permissions and settings
- On the Summary tab of the finding details, in the Resource full name field, click the link.
- Review the details of the VM instance, including the network and access settings.
Step 4: Inspect the affected VM
Follow the instructions in Inspect a VM for signs of kernel memory tampering.
Step 5: Research attack and response methods
- Review MITRE ATT&CK framework entries for Defense Evasion.
- To develop a response plan, combine your investigation results with MITRE research.
Step 6: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
Contact the owner of the VM.
If necessary, stop the compromised instance and replace it with a new instance.
For forensic analysis, consider backing up the virtual machines and persistent disks. For more information, see Data protection options in the Compute Engine documentation.
Delete the VM instance.
For further investigation, consider using incident response services like Mandiant.
Execution: Cryptocurrency Mining Hash Match
VM Threat Detection detected cryptocurrency mining activities by matching memory hashes of running programs against memory hashes of known cryptocurrency mining software.
To respond to these findings, do the following:
Step 1: Review finding details
Open an Execution: Cryptocurrency Mining Hash Match finding, as directed in Review findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
What was detected, especially the following fields:
- Binary family: the cryptocurrency application that was detected.
- Program binary: the absolute path of the process.
- Arguments: the arguments provided when invoking the process binary.
- Process names: the name of the process running in the VM instance that is associated with the detected signature matches.
VM Threat Detection can recognize kernel builds from major Linux distributions. If it can recognize the affected VM's kernel build, it can identify the application's process details and populate the processes field of the finding. If VM Threat Detection can't recognize the kernel—for example, if the kernel is custom built—the finding's processes field isn't populated.
Affected resource, especially the following fields:
- Resource full name: the full resource name of the affected VM instance, including the ID of the project that contains it.
To see the complete JSON for this finding, in the detail view of the finding, click the JSON tab.
In the JSON, note the following fields:
- indicator
  - signatures:
    - memory_hash_signature: a signature corresponding to memory page hashes.
      - detections
        - binary: the name of the cryptocurrency application's binary—for example, linux--x86-64_ethminer_0.19.0_alpha.0_cuda10.0.
        - percent_pages_matched: the percentage of pages in memory that match pages in known cryptocurrency applications in the page-hash database.
Step 2: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project that contains the VM instance, as specified on the Resource full name row in the Summary tab of the finding details.
Check the logs for signs of intrusion on the affected VM instance. For example, check for suspicious or unknown activities and signs of compromised credentials.
Step 3: Review permissions and settings
- On the Summary tab of the finding details, in the Resource full name field, click the link.
- Review the details of the VM instance, including the network and access settings.
Step 4: Research attack and response methods
- Review MITRE ATT&CK framework entries for Execution.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
To assist with detection and removal, use an endpoint detection and response solution.
- Contact the owner of the VM.
Confirm whether the application is a mining application:
If the detected application's process name and binary path are available, consider the values on the Program binary, Arguments, and Process names rows on the Summary tab of the finding details in your investigation.
If the process details aren't available, check if the binary name from the memory hash signature can provide clues. Consider a binary called linux-x86-64_xmrig_2.14.1. You can use the grep command to search for notable files in storage. Use a meaningful portion of the binary name in your search pattern, in this case, xmrig. Examine the search results.
Examine the running processes, especially the processes with high CPU usage, to see if there are any that you don't recognize. Determine whether the associated applications are miner applications.
Search the files in storage for common strings that mining applications use, such as btc.com, ethminer, xmrig, cpuminer, and randomx. For more examples of strings you can search for, see Software names and YARA rules and the related documentation for each software listed.
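One way to run that search, sketched against an illustrative scratch directory (the paths, file names, and string list are examples, not an exhaustive signature set):

```shell
# List files under a directory that contain common mining-related
# strings (case-insensitive, recursive).
scan_for_miner_strings() {
  grep -rliE 'btc\.com|ethminer|xmrig|cpuminer|randomx' "$1" 2>/dev/null
}

# Demonstration against a scratch directory with one suspicious
# and one benign file.
mkdir -p /tmp/scan_demo
printf 'pool config for xmrig\n' > /tmp/scan_demo/suspect.cfg
printf 'ordinary app notes\n' > /tmp/scan_demo/benign.txt
scan_for_miner_strings /tmp/scan_demo
```

On a real VM you'd point this at mounted filesystems of interest (for example, / or /home) and treat hits as leads to examine, not as confirmation by themselves.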
If you determine that the application is a miner application, and its process is still running, terminate the process. Locate the application's executable binary in the VM's storage, and delete it.
If necessary, stop the compromised instance and replace it with a new instance.
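The terminate-and-delete step above can be sketched end-to-end with a harmless stand-in process; in a real response, the PID and binary path come from your investigation, not from this example:

```shell
# Stand-in for a miner: copy `sleep` to a scratch path and run it.
cp "$(command -v sleep)" /tmp/fake_miner
/tmp/fake_miner 300 &
MINER_PID=$!

# Terminate the process, then delete its binary from storage.
kill -TERM "${MINER_PID}"
wait "${MINER_PID}" 2>/dev/null || true   # reap; exit status reflects the signal
rm -f /tmp/fake_miner

# Confirm the process is gone.
kill -0 "${MINER_PID}" 2>/dev/null || echo "terminated"
```

If the process respawns after termination, look for a persistence mechanism (cron entries, systemd units, startup scripts) before concluding the response.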
Execution: Cryptocurrency Mining YARA Rule
VM Threat Detection detected cryptocurrency mining activities by matching memory patterns, such as proof-of-work constants, known to be used by cryptocurrency mining software.
To respond to these findings, do the following:
Step 1: Review finding details
Open an Execution: Cryptocurrency Mining YARA Rule finding, as directed in Review findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
What was detected, especially the following fields:
- YARA rule name: the rule triggered for YARA detectors.
- Program binary: the absolute path of the process.
- Arguments: the arguments provided when invoking the process binary.
- Process names: the name of the processes running in the VM instance that is associated with the detected signature matches.
VM Threat Detection can recognize kernel builds from major Linux distributions. If it can recognize the affected VM's kernel build, it can identify the application's process details and populate the processes field of the finding. If VM Threat Detection can't recognize the kernel—for example, if the kernel is custom built—the finding's processes field isn't populated.
Affected resource, especially the following fields:
- Resource full name: the full resource name of the affected VM instance, including the ID of the project that contains it.
Related links, especially the following fields:
- Cloud Logging URI: link to Logging entries.
- MITRE ATT&CK method: link to the MITRE ATT&CK documentation.
- Related findings: links to any related findings.
- VirusTotal indicator: link to the VirusTotal analysis page.
- Chronicle: link to Google SecOps.
To see the complete JSON for this finding, in the detail view of the finding, click the JSON tab.
Step 2: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project that contains the VM instance, as specified on the Resource full name row in the Summary tab of the finding details.
Check the logs for signs of intrusion on the affected VM instance. For example, check for suspicious or unknown activities and signs of compromised credentials.
Step 3: Review permissions and settings
- On the Summary tab of the finding details, in the Resource full name field, click the link.
- Review the details of the VM instance, including the network and access settings.
Step 4: Research attack and response methods
- Review MITRE ATT&CK framework entries for Execution.
- To develop a response plan, combine your investigation results with MITRE research.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
To assist with detection and removal, use an endpoint detection and response solution.
- Contact the owner of the VM.
Confirm whether the application is a mining application:
If the detected application's process name and binary path are available, consider the values on the Program binary, Arguments, and Process names rows on the Summary tab of the finding details in your investigation.
Examine the running processes, especially the processes with high CPU usage, to see if there are any that you don't recognize. Determine whether the associated applications are miner applications.
Search the files in storage for common strings that mining applications use, such as btc.com, ethminer, xmrig, cpuminer, and randomx. For more examples of strings you can search for, see Software names and YARA rules and the related documentation for each software listed.
If you determine that the application is a miner application, and its process is still running, terminate the process. Locate the application's executable binary in the VM's storage, and delete it.
If necessary, stop the compromised instance and replace it with a new instance.
Execution: cryptocurrency mining combined detection
VM Threat Detection detected multiple categories of findings within a single day from a single source. A single application can simultaneously trigger Execution: Cryptocurrency Mining YARA Rule and Execution: Cryptocurrency Mining Hash Match findings.
To respond to a combined finding, follow the response instructions for both Execution: Cryptocurrency Mining YARA Rule and Execution: Cryptocurrency Mining Hash Match findings.
Malware: Malicious file on disk (YARA)
VM Threat Detection detected a potentially malicious file by scanning a VM's persistent disks for known malware signatures.
To respond to these findings, do the following:
Step 1: Review finding details
Open the Malware: Malicious file on disk (YARA) finding, as directed in Review findings. The details panel for the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
- What was detected, especially the following fields:
- YARA rule name: the YARA rule that was matched.
- Files: the partition UUID and the relative path of the potentially malicious file that was detected.
- Affected resource, especially the following fields:
- Resource full name: the full resource name of the affected VM instance, including the ID of the project that contains it.
- What was detected, especially the following fields:
To see the complete JSON for this finding, in the detail view of the finding, click the JSON tab.
In the JSON, note the following fields:
- indicator
  - signatures:
    - yaraRuleSignature: a signature corresponding to the YARA rule that was matched.
Step 2: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project that contains the VM instance, as specified on the Resource full name row in the Summary tab of the finding details.
Check the logs for signs of intrusion on the affected VM instance. For example, check for suspicious or unknown activities and signs of compromised credentials.
Step 3: Review permissions and settings
- On the Summary tab of the finding details, in the Resource full name field, click the link.
- Review the details of the VM instance, including the network and access settings.
Step 4: Research attack and response methods
Check the SHA-256 hash value for the binary flagged as malicious on VirusTotal by clicking the link in VirusTotal indicator. VirusTotal is an Alphabet-owned service that provides context on potentially malicious files, URLs, domains, and IP addresses.
Step 5: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.
Contact the owner of the VM.
If necessary, locate and delete the potentially malicious file. To get the partition UUID and relative path of the file, refer to the Files field on the Summary tab of the finding details. To assist with detection and removal, use an endpoint detection and response solution.
If necessary, stop the compromised instance and replace it with a new instance.
For forensic analysis, consider backing up the virtual machines and persistent disks. For more information, see Data protection options in the Compute Engine documentation.
For further investigation, consider using incident response services like Mandiant.
Fix related vulnerabilities
To help keep threats from reoccurring, review and fix related vulnerability and misconfiguration findings.
To find any related findings, follow these steps:
In the Google Cloud console, go to the Security Command Center Findings page.
Review the threat finding and copy the value of an attribute that is likely to appear in any related vulnerability or misconfiguration finding, such as the principal email address or the name of the affected resource.
On the Findings page, open the Query editor by clicking Edit query.
Click Add filter. The Select filter menu opens.
From the list of filter categories on the left side of the menu, select the category that contains the attribute that you noted in the threat finding.
For example, if you noted the full name of the affected resource, select Resource. The attribute types of the Resource category are displayed in the column to the right, including the Full name attribute.
From the displayed attributes, select the type of attribute that you noted in the threat finding. A search panel for attribute values opens to the right and displays all found values of the selected attribute type.
In the Filter field, paste the attribute value that you copied from the threat finding. The displayed list of values is updated to show only the values that match the pasted value.
From the list of displayed values, select one or more values and click Apply. The Findings query results panel updates to show only the matching findings.
If there are a lot of findings in the results, filter the findings by selecting additional filters from the Quick filters panel.
For example, to show only the Vulnerability and Misconfiguration class findings that contain the selected attribute values, scroll down to the Finding class section of the Quick filters panel and select Vulnerability and Misconfiguration.
In addition to Google-provided indicators of compromise, users who are customers of Palo Alto Networks can integrate Palo Alto Networks' AutoFocus Threat Intelligence with Event Threat Detection. AutoFocus is a threat intelligence service that provides information about network threats. To learn more, visit the AutoFocus page in Google Cloud console.
Remediating threats
Remediating Event Threat Detection and Container Threat Detection findings isn't as simple as fixing misconfigurations and vulnerabilities identified by Security Command Center.
Misconfigurations and compliance violations identify weaknesses in resources that could be exploited. Typically, misconfigurations have known, easily implemented fixes, like enabling a firewall or rotating an encryption key.
Threats differ from vulnerabilities in that they are dynamic and indicate a possible active exploit against one or more resources. A remediation recommendation might not be effective in securing your resources because the exact methods used to achieve the exploit might not be known.
For example, an Added Binary Executed finding indicates that an unauthorized binary was launched in a container. A basic remediation recommendation might advise you to quarantine the container and delete the binary, but that might not resolve the underlying root cause that allowed the attacker access to execute the binary. You need to find out how the container image was corrupted to fix the exploit. Determining whether the file was added through a misconfigured port or by some other means requires a thorough investigation. An analyst with expert-level knowledge of your system might need to review it for weaknesses.
Bad actors attack resources using different techniques, so applying a fix for a specific exploit might not be effective against variations of that attack. For example, in response to a Brute Force: SSH finding, you might lower permission levels for some user accounts to limit access to resources. However, weak passwords might still provide an attack path.
The breadth of attack vectors makes it difficult to provide remediation steps that work in all situations. Security Command Center's role in your cloud security plan is to identify impacted resources in near-real time, tell you what threats you face, and provide evidence and context to aid your investigations. However, your security personnel must use the extensive information in Security Command Center findings to determine the best ways to remediate issues and secure resources against future attacks.
What's next
See Event Threat Detection overview to learn more about the service and the threats it detects.
See Container Threat Detection overview to learn how the service works.
See VM Threat Detection overview to learn more about the service and the threats it detects.