# Jobs and job triggers

**Note:** Cloud Data Loss Prevention (Cloud DLP) is now part of Sensitive Data Protection. The API name remains the same: Cloud Data Loss Prevention API (DLP API). For information about the services that make up Sensitive Data Protection, see the Sensitive Data Protection overview.
A *job* is an action that Sensitive Data Protection runs to either scan content for sensitive data or calculate the risk of re-identification. Sensitive Data Protection creates and runs a job resource whenever you tell it to inspect your data.
There are currently two types of Sensitive Data Protection jobs:
- *Inspection jobs* inspect your content for sensitive data according to your criteria and generate summary reports of where and what type of sensitive data exists.
- *Risk analysis jobs* analyze de-identified data and return metrics about the likelihood that the data can be re-identified.
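As a concrete example, the following is a minimal sketch of starting an inspection job with the Python client library (`google-cloud-dlp`); the project ID and bucket name are placeholders, and the infoType list is only illustrative:

```python
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # placeholder project ID

inspect_job = {
    # What to scan: every object in a Cloud Storage bucket (placeholder name).
    "storage_config": {
        "cloud_storage_options": {
            "file_set": {"url": "gs://my-example-bucket/**"}
        }
    },
    # What to look for: an explicit list of infoTypes.
    "inspect_config": {
        "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
        "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
    },
}

# Starts the job; Sensitive Data Protection runs it asynchronously.
job = client.create_dlp_job(request={"parent": parent, "inspect_job": inspect_job})
print(f"Started inspection job: {job.name}")
```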
You can schedule when Sensitive Data Protection runs jobs by creating *job triggers*. A job trigger is an event that automates the creation of Sensitive Data Protection jobs to scan Google Cloud storage repositories, including Cloud Storage buckets, BigQuery tables, and Datastore kinds.
Job triggers enable you to schedule scan jobs by setting the interval at which each trigger goes off. They can be configured to look for new findings since the last scan ran, to help you monitor changes or additions to content, or to generate up-to-date findings reports. Scheduled triggers run on an interval that you set, from 1 day to 60 days.

**Note:** Prematurely canceling an operation midway through a job still incurs costs for the portion of the job that was completed. For more information about billing, see [Sensitive Data Protection pricing](https://cloud.google.com/sensitive-data-protection/pricing).
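For illustration, here is a minimal sketch of creating a job trigger that scans the same placeholder bucket once a day with the Python client; the field names mirror the `JobTrigger` REST resource described later in this page:

```python
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # placeholder project ID

job_trigger = {
    "display_name": "daily-bucket-scan",  # hypothetical name
    "inspect_job": {
        "storage_config": {
            "cloud_storage_options": {
                "file_set": {"url": "gs://my-example-bucket/**"}
            }
        },
        "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
    },
    # One Trigger containing a Schedule; recurrence is expressed in
    # seconds (86400 s = 1 day; the allowed range is 1 to 60 days).
    "triggers": [
        {"schedule": {"recurrence_period_duration": {"seconds": 86400}}}
    ],
    "status": dlp_v2.JobTrigger.Status.HEALTHY,
}

trigger = client.create_job_trigger(
    request={"parent": parent, "job_trigger": job_trigger}
)
print(f"Created trigger: {trigger.name}")
```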
Next steps
----------

For more information about how to create, edit, and run jobs and job triggers, see the following topics:

- [Creating Sensitive Data Protection inspection jobs and job triggers](/sensitive-data-protection/docs/creating-job-triggers)
- [Measuring re-identification and disclosure risk](/sensitive-data-protection/docs/compute-risk-analysis) (Covers risk analysis jobs.)

In addition, the following quickstart is available:

- [Quickstart: Creating a Sensitive Data Protection job trigger](/sensitive-data-protection/docs/quickstart-create-job-trigger)

The `JobTrigger` object
-----------------------

A job trigger is represented in the DLP API by the [`JobTrigger`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers) object.

### Job trigger configuration fields

Each `JobTrigger` contains several configuration fields, including:

- The trigger's name and display name, and a description.
- A collection of [`Trigger`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers#trigger) objects, each of which contains a [`Schedule`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers#schedule) object, which defines the scan recurrence in seconds.
- An [`InspectJobConfig`](/sensitive-data-protection/docs/reference/rest/v2/InspectJobConfig) object, which contains the configuration information for the triggered job.
- A [`Status`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers#status) enumeration, which indicates whether the trigger is currently active.
- Timestamp fields representing creation, update, and last run times.
- A collection of [`Error`](/sensitive-data-protection/docs/reference/rest/v2/Error) objects, if any were encountered when the trigger was activated.

### Job trigger methods

Each `JobTrigger` object also includes several built-in methods. Using these methods you can:

- Create a new job trigger: [`projects.jobTriggers.create`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/create)
- Update an existing job trigger: [`projects.jobTriggers.patch`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/patch)
- Delete an existing job trigger: [`projects.jobTriggers.delete`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/delete)
- Retrieve an existing job trigger, including its configuration and status: [`projects.jobTriggers.get`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/get)
- List all existing job triggers: [`projects.jobTriggers.list`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/list)
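As a brief illustration of the `list` method, the following sketch pages through the triggers in a placeholder project and prints the status and last run time of each:

```python
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # placeholder project ID

# The client pages through results automatically.
for trigger in client.list_job_triggers(request={"parent": parent}):
    print(trigger.name, trigger.status, trigger.last_run_time)
```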
Job latency
-----------

There are no service level objectives (SLOs) guaranteed for jobs and job triggers. Latency is affected by several factors, including the amount of data to scan, the storage repository being scanned, the type and number of infoTypes you are scanning for, the region where the job is processed, and the computing resources available in that region. Therefore, the latency of inspection jobs can't be determined in advance.

To help reduce job latency, you can try the following:

- If [sampling](/sensitive-data-protection/docs/inspecting-storage#sampling) is available for your job or job trigger, enable it (see the sketch after this section).
- Avoid enabling infoTypes that you don't need. Although the following are useful in certain scenarios, these infoTypes can make requests run much more slowly than requests that don't include them:
  - `PERSON_NAME`
  - `FEMALE_NAME`
  - `MALE_NAME`
  - `FIRST_NAME`
  - `LAST_NAME`
  - `DATE_OF_BIRTH`
  - `LOCATION`
  - `STREET_ADDRESS`
  - `ORGANIZATION_NAME`
- Always specify infoTypes explicitly. Do not use an empty infoTypes list.
- If possible, use a different processing region.

If you're still having latency issues with jobs after trying these techniques, consider using [`content.inspect`](/sensitive-data-protection/docs/reference/rest/v2/projects.content/inspect) or [`content.deidentify`](/sensitive-data-protection/docs/reference/rest/v2/projects.content/deidentify) requests instead of jobs. These methods are covered under the Service Level Agreement. For more information, see the [Sensitive Data Protection Service Level Agreement](/sensitive-data-protection/sla-20201006).
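The following sketch applies the first two tips to a Cloud Storage inspection job: it enables sampling by capping the bytes read per file and names a single infoType explicitly. The project and bucket names are placeholders:

```python
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # placeholder project ID

inspect_job = {
    "storage_config": {
        "cloud_storage_options": {
            "file_set": {"url": "gs://my-example-bucket/**"},
            # Sample instead of reading every byte of every object.
            "bytes_limit_per_file": 1048576,  # 1 MiB per file
            "sample_method": dlp_v2.CloudStorageOptions.SampleMethod.RANDOM_START,
        }
    },
    # Explicit, minimal infoType list; omits the slower name and
    # location infoTypes listed above.
    "inspect_config": {"info_types": [{"name": "CREDIT_CARD_NUMBER"}]},
}

client.create_dlp_job(request={"parent": parent, "inspect_job": inspect_job})
```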
Limit scans to only new content
-------------------------------

You can configure your job trigger to automatically set the timespan date for files stored in [Cloud Storage](/storage) or [BigQuery](/bigquery). When you set the [`TimespanConfig`](/sensitive-data-protection/docs/reference/rest/v2/InspectJobConfig#timespanconfig) object to auto-populate, Sensitive Data Protection scans only data that was added or modified since the trigger last ran:

    ...
    timespan_config {
      enable_auto_population_of_timespan_config: true
    }
    ...
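The same setting in a Python request dictionary sits on the job's `storage_config`; this sketch (placeholder bucket name) would be passed as the `inspect_job` of a trigger, as shown earlier:

```python
inspect_job = {
    "storage_config": {
        "cloud_storage_options": {
            "file_set": {"url": "gs://my-example-bucket/**"}
        },
        # Scan only content added or changed since the trigger last ran.
        "timespan_config": {
            "enable_auto_population_of_timespan_config": True
        },
    },
    "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
}
```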
For BigQuery inspection, only rows that are at least three hours old are included in the scan. See the [known issue](/sensitive-data-protection/docs/known-issues#bq-timespan) related to this operation.
Trigger jobs at file upload
---------------------------

In addition to the job trigger support that is built into Sensitive Data Protection, Google Cloud also has a variety of other components that you can use to integrate or trigger Sensitive Data Protection jobs. For example, you can use [Cloud Run functions](/functions) to trigger a Sensitive Data Protection scan every time a file is uploaded to Cloud Storage.
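As an illustrative sketch only (the guide linked below is the authoritative walkthrough), such a function might look like the following in Python. The function name, project ID, and deployment wiring (an Eventarc trigger on the `google.cloud.storage.object.v1.finalized` event) are assumptions:

```python
import functions_framework
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()

@functions_framework.cloud_event
def inspect_uploaded_file(cloud_event):  # hypothetical function name
    # The Cloud Storage event payload carries the bucket and object name.
    data = cloud_event.data
    gcs_url = f"gs://{data['bucket']}/{data['name']}"

    inspect_job = {
        "storage_config": {
            "cloud_storage_options": {"file_set": {"url": gcs_url}}
        },
        "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
    }
    client.create_dlp_job(
        request={
            "parent": "projects/my-project/locations/global",  # placeholder
            "inspect_job": inspect_job,
        }
    )
```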
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[],[],null,["# Jobs and job triggers\n\nA *job* is an action that Sensitive Data Protection runs to either scan content for\nsensitive data or calculate the risk of re-identification.\nSensitive Data Protection creates and runs a job resource whenever you tell it to\ninspect your data.\n\nThere are currently two types of Sensitive Data Protection jobs:\n\n- *Inspection jobs* inspect your content for sensitive data according to your criteria and generate summary reports of where and what type of sensitive data exists.\n- *Risk analysis jobs* analyze de-identified data and return metrics about the likelihood that the data can be re-identified.\n\nYou can schedule when Sensitive Data Protection runs jobs by creating *job\ntriggers*. A job trigger is an event that automates the creation of\nSensitive Data Protection jobs to scan Google Cloud storage repositories,\nincluding Cloud Storage buckets, BigQuery tables, and\nDatastore kinds.\n\nJob triggers enable you to schedule scan jobs by setting intervals at which\neach trigger goes off. They can be configured to look for new findings since\nthe last scan run to help monitor changes or additions to content, or to\ngenerate up-to-date findings reports. Scheduled triggers run on an interval\nthat you set, from 1 day to 60 days.\n\n\u003cbr /\u003e\n\n| **Note:** Prematurely canceling an operation midway through a job still incurs costs for the portion of the job that was completed. 
For more information about billing, see [Sensitive Data Protection pricing](https://cloud.google.com/sensitive-data-protection/pricing).\n\n\u003cbr /\u003e\n\nNext steps\n----------\n\nMore information about how to create, edit, and run jobs and job triggers in\nthe following topics:\n\n- [Creating Sensitive Data Protection inspection jobs and job\n triggers](/sensitive-data-protection/docs/creating-job-triggers)\n- [Measuring re-identification and disclosure risk](/sensitive-data-protection/docs/compute-risk-analysis) (Covers risk analysis jobs.)\n\nIn addition, the following quickstart is available:\n\n- [Quickstart creating a Sensitive Data Protection job\n trigger](/sensitive-data-protection/docs/quickstart-create-job-trigger)\n\nThe `JobTrigger` object\n-----------------------\n\nA job trigger is represented in the DLP API by the\n[`JobTrigger`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers)\nobject.\n\n### Job trigger configuration fields\n\nEach `JobTrigger` contains several configuration fields, including:\n\n- The trigger's name and display name, and a description.\n- A collection of [`Trigger`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers#trigger) objects, each of which contains a [`Schedule`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers#schedule) object, which defines the scan recurrence in seconds.\n- An [`InspectJobConfig`](/sensitive-data-protection/docs/reference/rest/v2/InspectJobConfig) object, which contains the configuration information for the triggered job.\n- A [`Status`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers#status) enumeration, which indicates whether the trigger is currently active.\n- Timestamp fields representing creation, update, and last run times.\n- A collection of [`Error`](/sensitive-data-protection/docs/reference/rest/v2/Error) objects, if any were encountered when the trigger was activated.\n\n### Job trigger methods\n\nEach `JobTrigger` object also includes several built-in methods. Using these\nmethods you can:\n\n- Create a new job trigger: [`projects.jobTriggers.create`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/create)\n- Update an existing job trigger: [`projects.jobTriggers.patch`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/patch)\n- Delete an existing job trigger: [`projects.jobTriggers.delete`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/delete)\n- Retrieve an existing job trigger, including its configuration and status: [`projects.jobTriggers.get`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/get)\n- List all existing job triggers: [`projects.jobTriggers.list`](/sensitive-data-protection/docs/reference/rest/v2/projects.jobTriggers/list)\n\nJob latency\n-----------\n\nThere are no service level objectives (SLO) guaranteed for jobs and job\ntriggers. Latency is affected by several factors, including the amount of data\nto scan, the storage repository being scanned, the type and number of infoTypes\nyou are scanning for, the region where the job is processed, and the computing\nresources available in that region. Therefore, the latency of inspection jobs\ncan't be determined in advance.\n\nTo help reduce job latency, you can try the following:\n\n- If [sampling](/sensitive-data-protection/docs/inspecting-storage#sampling) is available for your job or job trigger, enable it.\n- Avoid enabling infoTypes that you don't need. 
Although the following are\n useful in certain scenarios, these infoTypes can make requests run much more\n slowly than requests that don't include them:\n\n - `PERSON_NAME`\n - `FEMALE_NAME`\n - `MALE_NAME`\n - `FIRST_NAME`\n - `LAST_NAME`\n - `DATE_OF_BIRTH`\n - `LOCATION`\n - `STREET_ADDRESS`\n - `ORGANIZATION_NAME`\n- Always specify infoTypes explicitly. Do not use an empty infoTypes list.\n\n- If possible, use a different processing region.\n\nIf you're still having latency issues with jobs after trying these techniques,\nconsider using\n[`content.inspect`](/sensitive-data-protection/docs/reference/rest/v2/projects.content/inspect) or\n[`content.deidentify`](/sensitive-data-protection/docs/reference/rest/v2/projects.content/deidentify)\nrequests instead of jobs. These methods are covered under the Service Level\nAgreement. For more information, see [Sensitive Data Protection Service Level\nAgreement](/sensitive-data-protection/sla-20201006).\n\nLimit scans to only new content\n-------------------------------\n\nYou can configure your job trigger to automatically set the timespan date for\nfiles stored in [Cloud Storage](/storage) or\n[BigQuery](/bigquery). When you set the\n[`TimespanConfig`](/sensitive-data-protection/docs/reference/rest/v2/InspectJobConfig#timespanconfig)\nobject to auto-populate, Sensitive Data Protection only scans data that was\nadded or modified since the trigger last ran: \n\n ...\n timespan_config {\n enable_auto_population_of_timespan_config: true\n }\n ...\n\nFor BigQuery inspection, only rows that are at least three hours old\nare included in the scan. See the [known\nissue](/sensitive-data-protection/docs/known-issues#bq-timespan) related to this\noperation.\n\nTrigger jobs at file upload\n---------------------------\n\nIn addition to the support for job triggers---which is built into\nSensitive Data Protection---Google Cloud also has a variety of other\ncomponents that you can use to integrate or trigger Sensitive Data Protection\njobs. For example, you can use [Cloud Run functions](/functions) to\ntrigger a Sensitive Data Protection scan every time a file is uploaded to\nCloud Storage.\n\nFor information about how to set up this operation, see [Automating the\nclassification of data uploaded to\nCloud Storage](/sensitive-data-protection/docs/automating-classification-of-data-uploaded-to-cloud-storage)."]]