Premium and Enterprise [service tiers](/security-command-center/docs/service-tiers)

This document describes a threat finding type in Security Command Center. Threat findings are generated by [threat detectors](/security-command-center/docs/concepts-security-sources#threats) when they detect a potential threat in your cloud resources. For a full list of available threat findings, see the [Threat findings index](/security-command-center/docs/threat-findings-index).
Overview
A process was identified performing bulk data deletion operations, which could indicate an attempt to erase forensic evidence, disrupt services, or execute a data-wiping attack. This activity is concerning because attackers may remove logs, databases, or important files to cover their tracks or sabotage the system. Data destruction is often part of ransomware attacks, insider threats, or advanced persistent threats (APTs) attempting to evade detection and cause operational damage.
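For context only, the following sketch shows the kind of bulk-deletion activity this description refers to. These are hypothetical examples of attacker behavior, not commands confirmed to trigger this specific detector, and they must not be run on a real system.

```sh
# Hypothetical examples of the bulk data deletion described above.
# Do NOT run these commands; they are shown only to illustrate the behavior.

rm -rf /var/log/*                      # wipe logs to erase forensic evidence
find /data -type f -delete             # recursively delete application data
mysql -e 'DROP DATABASE production;'   # drop a database (assumes a local MySQL client)
dd if=/dev/zero of=/dev/sdb bs=1M      # overwrite a disk in a data-wiping attack
```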
How to respond
To respond to this finding, do the following:
Step 1: Review finding details
1. Open an `Impact: Suspicious crypto mining activity using the Stratum Protocol` finding as directed in [Reviewing findings](/security-command-center/docs/how-to-investigate-threats#reviewing_findings). The details panel for the finding opens to the **Summary** tab.
2. On the **Summary** tab, review the information in the following sections:

   - **What was detected**, especially the following fields:
     - **Program binary**: the absolute path of the executed binary.
     - **Arguments**: the arguments passed during binary execution.
   - **Affected resource**, especially the following fields:
     - **Resource full name**: the [full resource name](/apis/design/resource_names) of the cluster, including the project number, location, and cluster name.
3. In the detail view of the finding, click the **JSON** tab.
4. In the JSON, note the following fields (a command-line sketch for reading these fields follows this procedure):
   - `resource`:
     - `project_display_name`: the name of the project that contains the cluster.
   - `finding`:
     - `processes`:
       - `binary`:
         - `path`: the full path of the executed binary.
       - `args`: the arguments that were provided while executing the binary.
     - `sourceProperties`:
       - `Pod_Namespace`: the name of the Pod's Kubernetes namespace.
       - `Pod_Name`: the name of the GKE Pod.
       - `Container_Name`: the name of the affected container.
       - `Container_Image_Uri`: the name of the container image being deployed.
       - `VM_Instance_Name`: the name of the GKE node where the Pod executed.
5. Identify other findings that occurred at a similar time for this container. Related findings might indicate that this activity was malicious, rather than a failure to follow best practices.
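If you prefer to work from the command line, here is a minimal sketch for pulling the same fields out of the finding JSON with `jq`. It assumes you have saved the contents of the **JSON** tab to a local file named `finding.json` (a hypothetical file name) and that the fields are nested as listed above; adjust the paths if your finding's structure differs.

```sh
# Minimal sketch: read key fields from a locally saved copy of the finding JSON.
# finding.json is a hypothetical local file containing the JSON tab contents.
jq -r '.resource.project_display_name' finding.json      # project that contains the cluster
jq -r '.finding.processes[].binary.path' finding.json    # full path of the executed binary
jq -r '.finding.processes[].args[]?' finding.json        # arguments passed to the binary
jq -r '.finding.sourceProperties
       | .Pod_Namespace, .Pod_Name, .Container_Name,
         .Container_Image_Uri, .VM_Instance_Name' finding.json
```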
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-05(UTC)"],[],[],null,["| Premium and Enterprise [service tiers](/security-command-center/docs/service-tiers)\n\nThis document describes a threat finding type in Security Command Center. Threat findings are generated by\n[threat detectors](/security-command-center/docs/concepts-security-sources#threats) when they detect\na potential threat in your cloud resources. For a full list of available threat findings, see [Threat findings index](/security-command-center/docs/threat-findings-index).\n\nOverview\n\nA process was identified performing bulk data deletion operations, which could\nindicate an attempt to erase forensic evidence, disrupt services, or execute a\ndata-wiping attack. This activity is concerning because attackers may remove\nlogs, databases, or important files to cover their tracks or sabotage the\nsystem. Data destruction is often part of ransomware attacks, insider threats,\nor advanced persistent threats (APTs) attempting to evade detection and cause\noperational damage.\n\nHow to respond\n\nTo respond to this finding, do the following:\n\nStep 1: Review finding details\n\n1. Open an `Impact: Suspicious crypto mining activity using the Stratum\n Protocol` finding as directed in [Reviewing findings](/security-command-center/docs/how-to-investigate-threats#reviewing_findings).\n The details panel for the finding opens to the **Summary** tab.\n\n2. On the **Summary** tab, review the information in the following sections:\n\n - **What was detected** , especially the following fields:\n - **Program binary**: the absolute path of the executed binary.\n - **Arguments**: the arguments passed during binary execution.\n - **Affected resource** , especially the following fields:\n - **Resource full name** : the [full resource name](/apis/design/resource_names) of the cluster including the project number, location, and cluster name.\n3. In the detail view of the finding, click the **JSON** tab.\n\n4. In the JSON, note the following fields.\n\n - `resource`:\n - `project_display_name`: the name of the project that contains the cluster.\n - `finding`:\n - `processes`:\n - `binary`:\n - `path`: the full path of the executed binary.\n - `args`: the arguments that were provided while executing the binary.\n - `sourceProperties`:\n - `Pod_Namespace`: the name of the Pod's Kubernetes namespace.\n - `Pod_Name`: the name of the GKE Pod.\n - `Container_Name`: the name of the affected container.\n - `Container_Image_Uri`: the name of the container image being deployed.\n - `VM_Instance_Name`: the name of the GKE node where the Pod executed.\n5. Identify other findings that occurred at a similar time for this container.\n Related findings might indicate that this activity was malicious, instead of\n a failure to follow best practices.\n\nStep 2: Review cluster and node\n\n1. In the Google Cloud console, go to the **Kubernetes clusters** page.\n\n [Go to Kubernetes clusters](https://console.cloud.google.com/kubernetes/list)\n\n \u003cbr /\u003e\n\n2. On the Google Cloud console toolbar, select the project listed in\n `resource.project_display_name`, if necessary.\n\n3. 
Step 3: Review Pod

1. In the Google Cloud console, go to the **Kubernetes Workloads** page.

   [Go to Kubernetes Workloads](https://console.cloud.google.com/kubernetes/workload)

2. On the Google Cloud console toolbar, select the project listed in `resource.project_display_name`, if necessary.

3. Filter on the cluster listed on the **Resource full name** row in the **Summary** tab of the finding details and the Pod namespace listed in `Pod_Namespace`, if necessary.

4. Select the Pod listed in `Pod_Name`. Note any metadata about the Pod and its owner.

Step 4: Check logs

1. In the Google Cloud console, go to **Logs Explorer**.

   [Go to Logs Explorer](https://console.cloud.google.com/logs/query)

2. On the Google Cloud console toolbar, select the project listed in `resource.project_display_name`, if necessary.

3. Set **Select time range** to the period of interest.

4. On the page that loads, do the following:

   1. Find Pod logs for `Pod_Name` by using the following filter:
      - `resource.type="k8s_container"`
      - `resource.labels.project_id="RESOURCE.PROJECT_DISPLAY_NAME"`
      - `resource.labels.location="LOCATION"`
      - `resource.labels.cluster_name="CLUSTER_NAME"`
      - `resource.labels.namespace_name="POD_NAMESPACE"`
      - `resource.labels.pod_name="POD_NAME"`
   2. Find cluster audit logs by using the following filter:
      - `logName="projects/RESOURCE.PROJECT_DISPLAY_NAME/logs/cloudaudit.googleapis.com%2Factivity"`
      - `resource.type="k8s_cluster"`
      - `resource.labels.project_id="RESOURCE.PROJECT_DISPLAY_NAME"`
      - `resource.labels.location="LOCATION"`
      - `resource.labels.cluster_name="CLUSTER_NAME"`
      - `POD_NAME`
   3. Find GKE node console logs by using the following filter:
      - `resource.type="gce_instance"`
      - `resource.labels.instance_id="INSTANCE_ID"`
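As an alternative to the Logs Explorer UI, here is a minimal sketch of the Pod-log query using `gcloud logging read`; the uppercase values are placeholders from the finding, and the `--freshness` window is an assumption you should adjust to your period of interest.

```sh
# Sketch: query the Pod's container logs from the CLI.
# Conditions on separate lines are ANDed together by Cloud Logging.
gcloud logging read '
  resource.type="k8s_container"
  resource.labels.project_id="PROJECT_ID"
  resource.labels.location="LOCATION"
  resource.labels.cluster_name="CLUSTER_NAME"
  resource.labels.namespace_name="POD_NAMESPACE"
  resource.labels.pod_name="POD_NAME"
' \
    --project=PROJECT_ID \
    --freshness=7d \
    --limit=200
```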
Step 5: Investigate running container

If the container is still running, it might be possible to investigate the container environment directly.

1. Go to the Google Cloud console.

   [Open Google Cloud console](https://console.cloud.google.com/)

2. On the Google Cloud console toolbar, select the project listed in `resource.project_display_name`, if necessary.

3. Click **Activate Cloud Shell**.

4. Obtain GKE credentials for your cluster by running the following commands.

   For zonal clusters:

       gcloud container clusters get-credentials CLUSTER_NAME \
           --zone LOCATION \
           --project PROJECT_NAME

   For regional clusters:

       gcloud container clusters get-credentials CLUSTER_NAME \
           --region LOCATION \
           --project PROJECT_NAME

   Replace the following:

   - CLUSTER_NAME: the cluster listed in `resource.labels.cluster_name`
   - LOCATION: the location listed in `resource.labels.location`
   - PROJECT_NAME: the project name listed in `resource.project_display_name`

5. Retrieve the executed binary:

       kubectl cp \
           POD_NAMESPACE/POD_NAME:PROCESS_BINARY_FULLPATH \
           -c CONTAINER_NAME \
           LOCAL_FILE

   Replace LOCAL_FILE with a local file path to store the copied binary.

6. Connect to the container environment by running the following command:

       kubectl exec \
           --namespace=POD_NAMESPACE \
           -ti POD_NAME \
           -c CONTAINER_NAME \
           -- /bin/sh

   This command requires the container to have a shell installed at `/bin/sh`.

Step 6: Research attack and response methods

1. Review MITRE ATT&CK framework entries for this finding type: [Resource Hijacking](https://attack.mitre.org/techniques/T1496/).
2. To develop a response plan, combine your investigation results with MITRE research.

Step 7: Implement your response

The following response plan might be appropriate for this finding, but might also impact operations. Carefully evaluate the information you gather in your investigation to determine the best way to resolve findings.

- Contact the owner of the project with the compromised container.
- Stop or [delete](/container-registry/docs/managing#deleting_images) the compromised container and replace it with a [new container](/compute/docs/containers).

What's next

- Learn [how to work with threat findings in Security Command Center](/security-command-center/docs/how-to-investigate-threats).
- Refer to the [Threat findings index](/security-command-center/docs/threat-findings-index).
- Learn how to [review a finding](/security-command-center/docs/how-to-investigate-threats#reviewing_findings) through the Google Cloud console.
- Learn about the [services that generate threat findings](/security-command-center/docs/concepts-security-sources#threats).