This page describes how to configure and view storage batch operations logs by using Cloud Logging. A storage batch operations job can be configured to generate a Cloud Logging log entry for each transformation it performs. Each log entry corresponds to the attempted transformation of an object.
Storage batch operations support logging to both Cloud Logging and Cloud Storage Cloud Audit Logs. While both options capture storage batch operations actions, we recommend using Cloud Logging. Cloud Logging provides a centralized platform for log analysis, real-time monitoring, and advanced filtering, offering a robust solution for managing and understanding your batch operation activity.
Before you begin
Verify that you have access to Cloud Logging. To use Cloud Logging, we recommend granting the Logs Viewer (`roles/logging.viewer`) Identity and Access Management role, which provides the IAM permissions required to view your Cloud Logging data.
For more information about Logging access permissions, see Access control with IAM.
Storage batch operations log format

All storage batch operations-specific fields are contained within a `jsonPayload` object. While the exact content of `jsonPayload` varies based on the job type, there is a common structure shared across all `TransformActivityLog` entries. This section outlines the common log fields and then details the operation-specific fields.
Common log fields

The following fields appear in all logs:
```
jsonPayload: {
  "@type": "type.googleapis.com/google.cloud.storagebatchoperations.logging.TransformActivityLog",
  "completeTime": "YYYY-MM-DDTHH:MM:SS.SSSSSSSSSZ",
  "status": {
    "errorMessage": "String indicating error",
    "errorType": "ENUM_VALUE",
    "statusCode": "ENUM_VALUE"
  },
  "logName": "projects/PROJECT_ID/logs/storagebatchoperations.googleapis.com%2Ftransform_activity",
  "receiveTimestamp": "YYYY-MM-DDTHH:MM:SS.SSSSSSSSSZ",
  "resource": {
    "labels": {
      "location": "us-central1",
      "job_id": "BATCH_JOB_ID",
      "resource_container": "RESOURCE_CONTAINER",
      // ... other labels
    },
    "type": "storagebatchoperations.googleapis.com/Job"
  },
  // Operation-specific details will be nested here (for example,
  // "DeleteObject", "PutObjectHold", "RewriteObject", "PutMetadata").
  // Each operation-specific object will also contain the following
  // object:
  // "objectMetadataBefore": {
  //   "gcsObject": {
  //     "bucket": "BUCKET_NAME",
  //     "generation": "GENERATION_NUMBER",
  //     "objectKey": "OBJECT_PATH"
  //   }
  // }
}
```
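To make the structure concrete, here is a minimal Python sketch that reads the common fields out of one parsed log entry. The sample entry is fabricated for illustration and is not real log output:

```python
# Minimal sketch: reading the common fields from a parsed
# TransformActivityLog entry. The entry below is a fabricated sample.
sample_entry = {
    "jsonPayload": {
        "@type": "type.googleapis.com/google.cloud.storagebatchoperations.logging.TransformActivityLog",
        "completeTime": "2025-06-27T10:00:00.000000000Z",
        "status": {"statusCode": "OK"},
    },
    "resource": {
        "labels": {"location": "us-central1", "job_id": "my-batch-job"},
        "type": "storagebatchoperations.googleapis.com/Job",
    },
}

def summarize(entry):
    """Return (job_id, location, status_code, complete_time) for an entry."""
    labels = entry["resource"]["labels"]
    payload = entry["jsonPayload"]
    return (
        labels["job_id"],
        labels["location"],
        payload["status"].get("statusCode"),
        payload["completeTime"],
    )

print(summarize(sample_entry))
```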
The following table describes the operation-specific log fields:

| Field | Type | Description |
| --- | --- | --- |
| `PutObjectHold.temporaryHoldAfter` | Boolean | When `True`, indicates that a temporary hold was applied to the object after the storage batch operations job completed. Valid values are `True` or `False`. |
| `PutObjectHold.eventBasedHoldAfter` | Boolean | When `True`, indicates that an event-based hold was applied to the object after the storage batch operations job completed. Valid values are `True` or `False`. |
| `RewriteObject` | Object | Represents a rewrite operation on an object. |
| `RewriteObject.kmsKeyVersionAfter` | String | The Cloud Key Management Service key version used after the rewrite operation. The `kmsKeyVersionAfter` field is populated when the rewrite changes the object's encryption key. Because it is an optional field, it might not appear if the Cloud KMS key version is unchanged after the rewrite. |
| `PutMetadata` | Object | Represents a metadata update operation on an object. |
| `PutMetadata.content_disposition_after` | String | Specifies the `Content-Disposition` header value after the `PutMetadata` operation completes. This optional field is populated only when the content disposition is set or modified. |
| `PutMetadata.content_encoding_after` | String | Specifies the `Content-Encoding` header value after the `PutMetadata` operation completes. This optional field is populated only when the content encoding is set or modified. |
| `PutMetadata.content_language_after` | String | Specifies the `Content-Language` header value after the `PutMetadata` operation completes. This optional field is populated only when the content language is set or modified. |
| `PutMetadata.content_type_after` | String | Specifies the `Content-Type` header value after the `PutMetadata` operation completes. This optional field is populated only when the content type is set or modified. |
| `PutMetadata.cache_control_after` | String | Specifies the `Cache-Control` header value after the `PutMetadata` operation completes. This optional field is populated only when the cache control is set or modified. |
| `PutMetadata.custom_time_after` | String | Specifies the `Custom-Time` header value after the `PutMetadata` operation completes. This optional field is populated only when the custom time is set or modified. |
| `PutMetadata.custom_metadata_after` | Map (key: String, value: String) | Contains the map of Custom-Metadata key-value pairs after the transformation. This field holds the user-defined metadata that was set or modified on the object, allowing flexible storage of additional metadata. |
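Because the optional `*_after` fields appear only when the corresponding metadata was set or modified, code that consumes these entries should not assume they are present. A minimal Python sketch, using a fabricated payload rather than real log output:

```python
# Minimal sketch: extracting PutMetadata fields from a jsonPayload.
# Optional *_after fields exist only when that metadata was set or
# modified, so .get() is used throughout. The payload is fabricated.
payload = {
    "PutMetadata": {
        "objectMetadataBefore": {
            "gcsObject": {
                "bucket": "test-bucket",
                "generation": "1678912345678901",
                "objectKey": "test_object.txt",
            }
        },
        "content_type_after": "application/pdf",
        "custom_metadata_after": {"project": "marketing", "version": "2.0"},
    }
}

op = payload["PutMetadata"]
obj = op["objectMetadataBefore"]["gcsObject"]
print(obj["bucket"], obj["objectKey"])
# Fields absent from the entry (not modified) come back as None.
print(op.get("content_type_after"), op.get("cache_control_after"))
print(op.get("custom_metadata_after", {}))
```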
Last updated: 2025-09-05 (UTC)

To verify and grant the IAM permissions, complete the following steps:

- [View current access](/iam/docs/granting-changing-revoking-access#view-access) to verify the access that each principal has.
- [Grant a role](/iam/docs/granting-changing-revoking-access#single-role) to relevant principals in your project.

Understand logging details

When logging is enabled, storage batch operations capture the following details:

- Loggable action: The loggable action value is always `transform`.
- Loggable states: For each action, you can choose to log one or both of the following states:
  - `SUCCEEDED`: The action was successful.
  - `FAILED`: The action failed.

Enable logging

To enable logging, specify the actions and the states to log.

Command line

When creating a storage batch operations job with `gcloud storage batch-operations jobs create`, use the `--log-actions` and `--log-action-states` flags to enable logging.

```
gcloud storage batch-operations jobs create JOB_NAME \
    --manifest-location=MANIFEST_LOCATION \
    --delete-object \
    --log-actions=transform \
    --log-action-states=LOG_ACTION_STATES
```

Where:

- JOB_NAME is the name you want to give your job. For example, `my-job`.
- MANIFEST_LOCATION is the location of your manifest. For example, `gs://my-bucket/manifest.csv`.
- LOG_ACTION_STATES is a comma-separated list of states to log. For example, `succeeded,failed`.

REST API

[Create a storage batch operations job](/storage/docs/batch-operations/create-manage-batch-operation-jobs#rest-apis) with a [LoggingConfig](/storage/docs/storagebatchoperations/reference/rest/v1/projects.locations.jobs#LoggingConfig).

```json
{
  "loggingConfig": {
    "logActions": ["TRANSFORM"],
    "logActionStates": ["LOG_ACTION_STATES"]
  }
}
```

Where: LOG_ACTION_STATES is a comma-separated list of states to log. For example, `"SUCCEEDED","FAILED"`.

View logs

To view storage batch operations logs, do the following:

Console

1. Go to the Google Cloud navigation menu and select Logging > Logs Explorer: [Go to the Logs Explorer](https://console.cloud.google.com/logs/query)
2. Select a Google Cloud project.
3. From the Upgrade menu, switch from Legacy Logs Viewer to Logs Explorer.
4. To filter your logs to show only storage batch operations entries, type `storage_batch_operations_job` into the query field and click Run query.
5. In the Query results pane, click Edit time to change the time period for which to return results.

For more information on using the Logs Explorer, see [Using the Logs Explorer](/logging/docs/view/logs-viewer-interface).

Command line

To use the gcloud CLI to search for storage batch operations logs, use the [gcloud logging read](/logging/docs/reference/tools/gcloud-logging#reading_log_entries) command. Specify a filter to limit your results to storage batch operations logs.

```
gcloud logging read "resource.type=storage_batch_operations_job"
```

REST API

Use the [entries.list](/logging/docs/reference/v2/rest/v2/entries/list) Cloud Logging API method. To filter your results to include only storage batch operations-related entries, use the `filter` field. The following is a sample JSON request object:

```json
{
  "resourceNames": [
    "projects/my-project-name"
  ],
  "orderBy": "timestamp desc",
  "filter": "resource.type=\"storage_batch_operations_job\""
}
```

Where: my-project-name is the name of your project.

Operation-specific jsonPayload contents

The difference between log entries for different batch operations lies in the top-level object nested within the `jsonPayload`. Only one of the following objects is available in a given log entry, corresponding to the specific batch operation performed:

Delete object (`DeleteObject`)

```
jsonPayload:
{
  "DeleteObject": {
    "objectMetadataBefore": {
      "gcsObject": {
        "bucket": "test-bucket",
        "generation": "1678912345678901",
        "objectKey": "test_object.txt"
      }
    }
  }
}
```

Put object hold (`PutObjectHold`)

```
jsonPayload:
{
  "PutObjectHold": {
    "objectMetadataBefore": {
      "gcsObject": {
        "bucket": "test-bucket",
        "generation": "1678912345678901",
        "objectKey": "test_object.txt"
      }
    },
    "temporaryHoldAfter": True,
    "eventBasedHoldAfter": True
  }
}
```

Rewrite object (`RewriteObject`)

```
jsonPayload:
{
  "RewriteObject": {
    "objectMetadataBefore": {
      "gcsObject": {
        "bucket": "test-bucket",
        "generation": "1678912345678901",
        "objectKey": "test_object.txt"
      }
    },
    "kmsKeyVersionAfter": "projects/my-gcp-project/locations/us-central1/keyRings/my-keyring-01/cryptoKeys/my-encryption-key/cryptoKeyVersions/1"
  }
}
```

Put metadata (`PutMetadata`)

```
jsonPayload:
{
  "PutMetadata": {
    "objectMetadataBefore": {
      "gcsObject": {
        "bucket": "test-bucket",
        "generation": "1678912345678901",
        "objectKey": "test_object.txt"
      }
    },
    "content_disposition_after": "attachment; filename=\"report_final.pdf\"",
    "content_encoding_after": "gzip",
    "content_language_after": "en-US",
    "content_type_after": "application/pdf",
    "cache_control_after": "public, max-age=3600",
    "custom_time_after": "2025-06-27T10:00:00Z",
    "custom_metadata_after": {
      "project": "marketing",
      "version": "2.0",
      "approvedBy": "Admin"
    }
  }
}
```
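When querying logs programmatically, the `resource.type=storage_batch_operations_job` filter used for viewing logs can be narrowed further by resource labels. A minimal Python sketch of a helper that assembles such a filter string; the helper name and placeholder values are hypothetical, but the filter syntax follows the standard Cloud Logging query language:

```python
# Minimal sketch: build a Cloud Logging filter string that limits
# results to storage batch operations entries, optionally narrowed
# to one job and location. Helper name and values are hypothetical.
def batch_operations_filter(job_id=None, location=None):
    clauses = ['resource.type="storage_batch_operations_job"']
    if job_id:
        clauses.append(f'resource.labels.job_id="{job_id}"')
    if location:
        clauses.append(f'resource.labels.location="{location}"')
    return " AND ".join(clauses)

print(batch_operations_filter(job_id="my-batch-job", location="us-central1"))
```

The resulting string can be passed to `gcloud logging read` or to the `filter` field of an `entries.list` request.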