Execution: Possible Remote Command Execution Detected
This document describes a threat finding type in Security Command Center. Threat findings are generated by
threat detectors when they detect
a potential threat in your cloud resources. For a full list of available threat findings, see Threat findings index.
Overview
A process was detected spawning common UNIX commands via a network socket,
potentially emulating a reverse shell. This behavior suggests an attempt to
establish unauthorized remote access to the system, granting the attacker the
ability to execute arbitrary commands as if they were directly interacting with
the compromised machine. Adversaries frequently use reverse shells to bypass
firewall restrictions and gain persistent control over a target. Command
execution initiated through a socket is a significant security risk because it
enables a wide range of malicious activities, including data exfiltration,
lateral movement, and further exploitation. Treat this as a critical finding
and investigate it immediately to identify the source of the connection and the
actions performed.
How to respond
To respond to this finding, do the following:
Step 1: Review finding details
Open an Execution: Possible Remote Command Execution Detected finding as
directed in Reviewing findings. The details panel for
the finding opens to the Summary tab.
On the Summary tab, review the information in the following sections:
What was detected, especially the following fields:
Program binary: the absolute path of the executed binary.
Arguments: the arguments passed during binary execution.
Affected resource, especially the following fields:
Resource full name: the full resource name
of the cluster including the project number, location, and cluster name.
In the detail view of the finding, click the JSON tab.
In the JSON, note the following fields; a sketch for extracting them from an exported copy of the finding follows this procedure.
resource:
project_display_name: the name of the project that contains
the cluster.
finding:
processes:
binary:
path: the full path of the executed binary.
args: the arguments that were provided while executing the binary.
sourceProperties:
Pod_Namespace: the name of the Pod's Kubernetes namespace.
Pod_Name: the name of the GKE Pod.
Container_Name: the name of the affected container.
Container_Image_Uri: the name of the container image being deployed.
VM_Instance_Name: the name of the GKE node where the
Pod executed.
Identify other findings that occurred at a similar time for this container.
Related findings might indicate that this activity was malicious, instead of
a failure to follow best practices.
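If you export the finding JSON (for example, by copying the contents of the JSON tab into a local file), you can pull out these fields from the command line. The following is a minimal sketch using jq; the file name finding.json is hypothetical, and the field paths assume the nesting listed above, which may differ slightly in your export.

    # finding.json: a local copy of the JSON shown on the finding's JSON tab.
    jq '{
      project:   .resource.project_display_name,
      binary:    .finding.processes[0].binary.path,
      args:      .finding.processes[0].args,
      namespace: .finding.sourceProperties.Pod_Namespace,
      pod:       .finding.sourceProperties.Pod_Name,
      container: .finding.sourceProperties.Container_Name,
      node:      .finding.sourceProperties.VM_Instance_Name
    }' finding.json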
Step 2: Review cluster and node
In the Google Cloud console, go to the Kubernetes clusters page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name, if necessary.
Select the cluster listed on the Resource full name row in the
Summary tab of the finding details. Note any metadata about the cluster and
its owner.
Click the Nodes tab. Select the node listed in VM_Instance_Name.
Click the Details tab and note the container.googleapis.com/instance_id
annotation.
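If you prefer the command line, you can inspect the same node with kubectl. This is a minimal sketch; it assumes you have already obtained cluster credentials (see the gcloud container clusters get-credentials commands in Step 5), and NODE_NAME is the node listed in VM_Instance_Name.

    # Node details, including labels, taints, and annotations such as
    # container.googleapis.com/instance_id.
    kubectl describe node NODE_NAME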
Step 3: Review Pod
In the Google Cloud console, go to the Kubernetes Workloads page.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name, if necessary.
Filter on the cluster listed on the Resource full name row in the
Summary tab of the finding details and the Pod namespace
listed in Pod_Namespace, if necessary.
Select the Pod listed in Pod_Name. Note any metadata about the Pod and
its owner.
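As with the node, the Pod can also be reviewed from the command line. A minimal sketch, again assuming cluster credentials are already configured:

    # Pod metadata, owner references, container images, and recent events.
    kubectl describe pod POD_NAME -n POD_NAMESPACE
    # The full Pod specification as stored in the cluster.
    kubectl get pod POD_NAME -n POD_NAMESPACE -o yaml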
Step 4: Check logs
In the Google Cloud console, go to Logs Explorer.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name, if necessary.
Set Select time range to the period of interest.
On the page that loads, do the following:
Find Pod logs for Pod_Name by using the following filter:
resource.type="k8s_container"
resource.labels.project_id="RESOURCE.PROJECT_DISPLAY_NAME"
resource.labels.location="LOCATION"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="POD_NAMESPACE"
resource.labels.pod_name="POD_NAME"
Find cluster audit logs by using the following filter:
logName="projects/RESOURCE.PROJECT_DISPLAY_NAME/logs/cloudaudit.googleapis.com%2Factivity"
resource.type="k8s_cluster"
resource.labels.project_id="RESOURCE.PROJECT_DISPLAY_NAME"
resource.labels.location="LOCATION"
resource.labels.cluster_name="CLUSTER_NAME"
POD_NAME
Find GKE node console logs by using the following filter:
resource.type="gce_instance"
resource.labels.instance_id="INSTANCE_ID"
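You can run the same Pod-log query from Cloud Shell with gcloud. The following is a minimal sketch; replace the placeholders with the values noted in Step 1 (PROJECT_ID is the ID of the project that contains the cluster), and adjust --freshness and --limit to your period of interest. Adjacent expressions in a Logging filter are implicitly ANDed.

    gcloud logging read '
      resource.type="k8s_container"
      resource.labels.cluster_name="CLUSTER_NAME"
      resource.labels.namespace_name="POD_NAMESPACE"
      resource.labels.pod_name="POD_NAME"
    ' \
      --project=PROJECT_ID \
      --freshness=7d \
      --limit=100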
Step 5: Investigate running container
If the container is still running, it might be possible to investigate the
container environment directly.
Go to the Google Cloud console.
On the Google Cloud console toolbar, select the project listed in
resource.project_display_name, if necessary.
Click Activate Cloud Shell.
Obtain GKE credentials for your cluster by running the following commands.
For zonal clusters:
    gcloud container clusters get-credentials CLUSTER_NAME \
        --zone LOCATION \
        --project PROJECT_NAME
For regional clusters:
    gcloud container clusters get-credentials CLUSTER_NAME \
        --region LOCATION \
        --project PROJECT_NAME
Replace the following:
CLUSTER_NAME: the cluster listed in resource.labels.cluster_name
LOCATION: the location listed in resource.labels.location
PROJECT_NAME: the project name listed in resource.project_display_name
Retrieve the executed binary:
    kubectl cp \
        POD_NAMESPACE/POD_NAME:PROCESS_BINARY_FULLPATH \
        -c CONTAINER_NAME \
        LOCAL_FILE
Replace LOCAL_FILE with a local file path to store the copied binary.
Connect to the container environment by running the following command:
    kubectl exec \
        --namespace=POD_NAMESPACE \
        -ti POD_NAME \
        -c CONTAINER_NAME \
        -- /bin/sh
This command requires the container to have a shell installed at /bin/sh.
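After copying the binary out of the container, you might triage it locally before acting on the finding. This is an optional sketch using standard command-line tools, not part of the documented procedure; LOCAL_FILE is the path you used with kubectl cp.

    # Hash the binary for evidence preservation and threat-intelligence lookups.
    sha256sum LOCAL_FILE
    # Quick first look at the file type and any printable strings.
    file LOCAL_FILE
    strings LOCAL_FILE | less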
Step 6: Research attack and response methods
Review the MITRE ATT&CK framework entry for this finding type: Command and
Scripting Interpreter (T1059).
To develop a response plan, combine your investigation results with MITRE
research.
Step 7: Implement your response
The following response plan might be appropriate for this finding, but might also impact operations.
Carefully evaluate the information you gather in your investigation to determine the best way to
resolve findings.
Contact the owner of the project with the compromised container.
Stop or delete the
compromised container and replace it with a
new container.
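If you decide to stop the compromised workload from the command line, one possible approach is sketched below. It assumes the Pod is managed by a Deployment (DEPLOYMENT_NAME is hypothetical; check the Pod's owner, which you noted in Step 3) and that a known-good image is available; adapt it to your own rollout process.

    # Delete the compromised Pod; its controller recreates it from the spec.
    kubectl delete pod POD_NAME -n POD_NAMESPACE
    # Or take the whole workload offline while you investigate.
    kubectl scale deployment DEPLOYMENT_NAME -n POD_NAMESPACE --replicas=0
    # Redeploy with a known-good image when you are ready.
    kubectl set image deployment/DEPLOYMENT_NAME \
        CONTAINER_NAME=CLEAN_IMAGE_URI -n POD_NAMESPACE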
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["| Premium and Enterprise [service tiers](/security-command-center/docs/service-tiers)\n\nThis document describes a threat finding type in Security Command Center. Threat findings are generated by\n[threat detectors](/security-command-center/docs/concepts-security-sources#threats) when they detect\na potential threat in your cloud resources. For a full list of available threat findings, see [Threat findings index](/security-command-center/docs/threat-findings-index).\n\nOverview\n\nA process was detected spawning common UNIX commands via a network socket,\npotentially emulating a reverse shell. This behavior suggests an attempt to\nestablish unauthorized remote access to the system, granting the attacker the\nability to execute arbitrary commands as if they were directly interacting with\nthe compromised machine. Adversaries frequently utilize reverse shells to bypass\nfirewall restrictions and gain persistent control over a target. The detection\nof command execution initiated through a socket signifies a significant security\nrisk, as it allows for a wide range of malicious activities, including data\nexfiltration, lateral movement, and further exploitation, making this a critical\nfinding that demands immediate investigation to identify the source of the\nconnection and the actions performed.\n\nHow to respond\n\nTo respond to this finding, do the following:\n\nStep 1: Review finding details\n\n1. Open an `Execution: Possible Remote Command Execution Detected` finding as\n directed in [Reviewing findings](/security-command-center/docs/how-to-investigate-threats#reviewing_findings). The details panel for\n the finding opens to the **Summary** tab.\n\n2. On the **Summary** tab, review the information in the following sections:\n\n - **What was detected** , especially the following fields:\n - **Program binary**: the absolute path of the executed binary.\n - **Arguments**: the arguments passed during binary execution.\n - **Affected resource** , especially the following fields:\n - **Resource full name** : the [full resource name](/apis/design/resource_names) of the cluster including the project number, location, and cluster name.\n3. In the detail view of the finding, click the **JSON** tab.\n\n4. In the JSON, note the following fields.\n\n - `resource`:\n - `project_display_name`: the name of the project that contains the cluster.\n - `finding`:\n - `processes`:\n - `binary`:\n - `path`: the full path of the executed binary.\n - `args`: the arguments that were provided while executing the binary.\n - `sourceProperties`:\n - `Pod_Namespace`: the name of the Pod's Kubernetes namespace.\n - `Pod_Name`: the name of the GKE Pod.\n - `Container_Name`: the name of the affected container.\n - `Container_Image_Uri`: the name of the container image being deployed.\n - `VM_Instance_Name`: the name of the GKE node where the Pod executed.\n5. 
Identify other findings that occurred at a similar time for this container.\n Related findings might indicate that this activity was malicious, instead of\n a failure to follow best practices.\n\nStep 2: Review cluster and node\n\n1. In the Google Cloud console, go to the **Kubernetes clusters** page.\n\n [Go to Kubernetes clusters](https://console.cloud.google.com/kubernetes/list)\n\n \u003cbr /\u003e\n\n2. On the Google Cloud console toolbar, select the project listed in\n `resource.project_display_name`, if necessary.\n\n3. Select the cluster listed on the **Resource full name** row in the\n **Summary** tab of the finding details. Note any metadata about\n the cluster and its owner.\n\n4. Click the **Nodes** tab. Select the node listed in `VM_Instance_Name`.\n\n5. Click the **Details** tab and note the\n `container.googleapis.com/instance_id` annotation.\n\nStep 3: Review Pod\n\n1. In the Google Cloud console, go to the **Kubernetes Workloads** page.\n\n [Go to Kubernetes Workloads](https://console.cloud.google.com/kubernetes/workload)\n\n \u003cbr /\u003e\n\n2. On the Google Cloud console toolbar, select the project listed in\n `resource.project_display_name`, if necessary.\n\n3. Filter on the cluster listed on the **Resource full name** row in the\n **Summary** tab of the finding details and the Pod namespace\n listed in `Pod_Namespace`, if necessary.\n\n4. Select the Pod listed in `Pod_Name`. Note any metadata about the Pod and\n its owner.\n\nStep 4: Check logs\n\n1. In the Google Cloud console, go to **Logs Explorer**.\n\n [Go to Logs Explorer](https://console.cloud.google.com/logs/query)\n2. On the Google Cloud console toolbar, select the project listed in\n `resource.project_display_name`, if necessary.\n\n3. Set **Select time range** to the period of interest.\n\n4. On the page that loads, do the following:\n\n 1. Find Pod logs for `Pod_Name` by using the following filter:\n - `resource.type=\"k8s_container\"`\n - `resource.labels.project_id=\"`\u003cvar class=\"edit\" translate=\"no\"\u003eRESOURCE.PROJECT_DISPLAY_NAME\u003c/var\u003e`\"`\n - `resource.labels.location=\"`\u003cvar class=\"edit\" translate=\"no\"\u003eLOCATION\u003c/var\u003e`\"`\n - `resource.labels.cluster_name=\"`\u003cvar class=\"edit\" translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e`\"`\n - `resource.labels.namespace_name=\"`\u003cvar class=\"edit\" translate=\"no\"\u003ePOD_NAMESPACE\u003c/var\u003e`\"`\n - `resource.labels.pod_name=\"`\u003cvar class=\"edit\" translate=\"no\"\u003ePOD_NAME\u003c/var\u003e`\"`\n 2. Find cluster audit logs by using the following filter:\n - `logName=\"projects/`\u003cvar class=\"edit\" translate=\"no\"\u003eRESOURCE.PROJECT_DISPLAY_NAME\u003c/var\u003e`/logs/cloudaudit.googleapis.com%2Factivity\"`\n - `resource.type=\"k8s_cluster\"`\n - `resource.labels.project_id=\"`\u003cvar class=\"edit\" translate=\"no\"\u003eRESOURCE.PROJECT_DISPLAY_NAME\u003c/var\u003e`\"`\n - `resource.labels.location=\"`\u003cvar class=\"edit\" translate=\"no\"\u003eLOCATION\u003c/var\u003e`\"`\n - `resource.labels.cluster_name=\"`\u003cvar class=\"edit\" translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e`\"`\n - \u003cvar class=\"edit\" translate=\"no\"\u003ePOD_NAME\u003c/var\u003e\n 3. 
Find GKE node console logs by using the following filter:\n - `resource.type=\"gce_instance\"`\n - `resource.labels.instance_id=\"`\u003cvar class=\"edit\" translate=\"no\"\u003eINSTANCE_ID\u003c/var\u003e`\"`\n\nStep 5: Investigate running container\n\nIf the container is still running, it might be possible to investigate the\ncontainer environment directly.\n\n1. Go to the Google Cloud console.\n\n [Open Google Cloud console](https://console.cloud.google.com/)\n2. On the Google Cloud console toolbar, select the project listed in\n `resource.project_display_name`, if necessary.\n\n3. Click **Activate Cloud Shell**\n\n4. Obtain GKE credentials for your cluster by running the\n following commands.\n\n For zonal clusters: \n\n gcloud container clusters get-credentials \u003cvar class=\"edit\" translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e \\\n --zone \u003cvar class=\"edit\" translate=\"no\"\u003eLOCATION\u003c/var\u003e \\\n --project \u003cvar class=\"edit\" translate=\"no\"\u003ePROJECT_NAME\u003c/var\u003e\n\n For regional clusters: \n\n gcloud container clusters get-credentials \u003cvar class=\"edit\" translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e \\\n --region \u003cvar class=\"edit\" translate=\"no\"\u003eLOCATION\u003c/var\u003e \\\n --project \u003cvar class=\"edit\" translate=\"no\"\u003ePROJECT_NAME\u003c/var\u003e\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e: the cluster listed in `resource.labels.cluster_name`\n - \u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e: the location listed in `resource.labels.location`\n - \u003cvar translate=\"no\"\u003ePROJECT_NAME\u003c/var\u003e: the project name listed in `resource.project_display_name`\n5. Retrieve the executed binary:\n\n kubectl cp \\\n \u003cvar class=\"edit\" translate=\"no\"\u003ePOD_NAMESPACE\u003c/var\u003e/\u003cvar class=\"edit\" translate=\"no\"\u003ePOD_NAME\u003c/var\u003e:\u003cvar class=\"edit\" translate=\"no\"\u003ePROCESS_BINARY_FULLPATH\u003c/var\u003e \\\n -c \u003cvar class=\"edit\" translate=\"no\"\u003eCONTAINER_NAME\u003c/var\u003e \\\n \u003cvar translate=\"no\"\u003eLOCAL_FILE\u003c/var\u003e\n\n Replace \u003cvar translate=\"no\"\u003elocal_file\u003c/var\u003e with a local file path to store the\n added binary.\n6. Connect to the container environment by running the following command:\n\n kubectl exec \\\n --namespace=\u003cvar class=\"edit\" translate=\"no\"\u003ePOD_NAMESPACE\u003c/var\u003e \\\n -ti \u003cvar class=\"edit\" translate=\"no\"\u003ePOD_NAME\u003c/var\u003e \\\n -c \u003cvar class=\"edit\" translate=\"no\"\u003eCONTAINER_NAME\u003c/var\u003e \\\n -- /bin/sh\n\n This command requires the container to have a shell installed at `/bin/sh`.\n\nStep 6: Research attack and response methods\n\n1. Review MITRE ATT\\&CK framework entries for this finding type: [Command and Scripting Interpreter](https://attack.mitre.org/techniques/T1059/).\n2. 
To develop a response plan, combine your investigation results with MITRE research.\n\nStep 7: Implement your response\n\n\nThe following response plan might be appropriate for this finding, but might also impact operations.\nCarefully evaluate the information you gather in your investigation to determine the best way to\nresolve findings.\n\n- Contact the owner of the project with the compromised container.\n- Stop or [delete](/container-registry/docs/managing#deleting_images) the compromised container and replace it with a [new container](/compute/docs/containers).\n\nWhat's next\n\n- Learn [how to work with threat\n findings in Security Command Center](/security-command-center/docs/how-to-investigate-threats).\n- Refer to the [Threat findings index](/security-command-center/docs/threat-findings-index).\n- Learn how to [review a\n finding](/security-command-center/docs/how-to-investigate-threats#reviewing_findings) through the Google Cloud console.\n- Learn about the [services that\n generate threat findings](/security-command-center/docs/concepts-security-sources#threats)."]]