[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Moderate text\n\n**Text moderation** analyzes a document against a list of\nsafety attributes, which include \"harmful categories\" and topics that may be considered sensitive. To moderate\nthe text in a document, call the [`moderateText`](/natural-language/reference/rest/v1/documents/moderateText) method.\n\nA complete list of categories returned for the [`moderateText`](/natural-language/reference/rest/v1/documents/moderateText)\nmethod are found here:\n\n#### Safety attribute confidence scores\n\nEach safety attribute has an associated confidence score\nbetween 0.00 and 1.00, reflecting the likelihood of\nthe input or response belonging to a given category.\n\n**Sample response** \n\n {\n \"moderationCategories\": [\n {\n \"name\": \"Toxic\",\n \"confidence\": 0.10\n },\n {\n \"name\": \"Insult\",\n \"confidence\": 0.12\n },\n {\n \"name\": \"Profanity\",\n \"confidence\": 0.07\n },\n {\n \"name\": \"Derogatory\",\n \"confidence\": 0.04\n },\n {\n \"name\": \"Sexual\",\n \"confidence\": 0.00\n },\n {\n \"name\": \"Death, Harm & Tragedy\",\n \"confidence\": 0.00\n },\n {\n \"name\": \"Violent\",\n \"confidence\": 0.00\n },\n {\n \"name\": \"Firearms & Weapons\",\n \"confidence\": 0.00\n },\n {\n \"name\": \"Public Safety\",\n \"confidence\": 0.01\n },\n {\n \"name\": \"Health\",\n \"confidence\": 0.01\n },\n {\n \"name\": \"Religion & Belief\",\n \"confidence\": 0.00\n },\n {\n \"name\": \"Illicit Drugs\",\n \"confidence\": 0.01\n },\n {\n \"name\": \"War & Conflict\",\n \"confidence\": 0.02\n },\n {\n \"name\": \"Politics\",\n \"confidence\": 0.01\n },\n {\n \"name\": \"Finance\",\n \"confidence\": 0.00\n },\n {\n \"name\": \"Legal\",\n \"confidence\": 0.00\n }\n ]\n }\n\n**Test your confidence thresholds**\n\nYou can test Google's safety filters and define confidence thresholds that are\nright for your business. By using these thresholds, you can take comprehensive\nmeasures to detect content that violates Google's usage policies or terms of\nservice and take appropriate action.\n\nThe confidence scores are only predictions. You shouldn't depend on the\nscores for reliability or accuracy. Google is not responsible for interpreting\nor using these scores for business decisions.\n\n**Difference between probability and severity**\n\nThe confidence scores indicate that\nthe content belongs to the specified category and not the severity. This is important to\nconsider because some content can have low probability of being unsafe even\nthough the severity of harm could still be high. For example, comparing the\nsentences:\n\n1. The robot punched me.\n2. The robot slashed me up.\n\nSentence 1 might cause a higher probability of being unsafe but you might\nconsider sentence 2 to be a higher severity in terms of violence.\n\nTherefore, it is important for you to carefully test and consider what the\nappropriate level of blocking is for your use cases while\nminimizing harm to end users.\n\n**Language support**\n\n| **Note:** The API does not have a hard restriction on language support, so unlisted languages will still return predictions. 
**Difference between probability and severity**

The confidence scores indicate how likely it is that the content belongs to
the specified category, not how severe the content is. This distinction
matters because some content can have a low probability of being unsafe even
though the severity of harm could still be high. For example, compare the
sentences:

1. The robot punched me.
2. The robot slashed me up.

Sentence 1 might receive a higher probability of being unsafe, but you might
consider sentence 2 to be of higher severity in terms of violence.

Therefore, it is important to carefully test and consider what the appropriate
level of blocking is for your use cases while minimizing harm to end users.

**Language support**

> **Note:** The API does not have a hard restriction on language support, so
> unlisted languages will still return predictions. We cannot verify the
> quality of predictions for unlisted languages. Similarly, "limited" support
> means that results may vary by type of text (for example, webpage or chat
> message) for some attributes. In all cases, we recommend thorough evaluation
> for your use case.

### How to moderate text

This section demonstrates how to moderate text in a document. You need to
submit a separate request for each document.

Here is an example of moderating text provided as a string:

### Protocol

To moderate content from a document, make a `POST` request to the
[`documents:moderateText`](/natural-language/docs/reference/rest/v1/documents/moderateText)
REST method and provide the appropriate request body as shown in the following
example.

The example uses the `gcloud auth application-default print-access-token`
command to obtain an access token for a service account set up for the project
using the Google Cloud [gcloud CLI](https://cloud.google.com/sdk/). For
instructions on installing the gcloud CLI and setting up a project with a
service account, see the [Quickstart](/natural-language/docs/setup).

```bash
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  --data "{
  'document':{
    'type':'PLAIN_TEXT',
    'content':'Shut up!'
  }
}" "https://language.googleapis.com/v1/documents:moderateText"
```

#### Moderate text from Cloud Storage

Here is an example of moderating text stored in a text file on Cloud Storage:

### Protocol

To moderate text from a document stored in Cloud Storage, make a `POST`
request to the
[`documents:moderateText`](/natural-language/docs/reference/rest/v1/documents/moderateText)
REST method and provide the appropriate request body with the path to the
document as shown in the following example.

```bash
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  https://language.googleapis.com/v1/documents:moderateText -d "{
  'document':{
    'type':'PLAIN_TEXT',
    'gcsContentUri':'gs://<bucket-name>/<object-name>'
  }
}"
```
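As an alternative to calling the REST method directly, the following is a
minimal sketch that uses the `google-cloud-language` Python client library.
It assumes a library version that exposes a `moderate_text` method; verify
this against the version you install. The bucket and object names are
placeholders.

```python
# Minimal sketch using the google-cloud-language Python client.
# Assumes a library version that exposes moderate_text; verify
# against your installed version.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# Moderate text provided as a string.
document = language_v1.Document(
    content="Shut up!",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.moderate_text(document=document)
for category in response.moderation_categories:
    print(f"{category.name}: {category.confidence:.2f}")

# Moderate text stored in Cloud Storage. The bucket and object names
# below are placeholders.
gcs_document = language_v1.Document(
    gcs_content_uri="gs://<bucket-name>/<object-name>",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
gcs_response = client.moderate_text(document=gcs_document)
```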