Generate text with safety settings
This sample demonstrates how to use the Gemini model with safety settings to generate text.
Explore further
For detailed documentation that includes this code sample, see the following:

- [Safety and content filters](/vertex-ai/generative-ai/docs/multimodal/configure-safety-filters)
Code sample
### Python

Before trying this sample, follow the Python setup instructions in the [Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the [Vertex AI Python API reference documentation](/python/docs/reference/aiplatform/latest).

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see [Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```python
from google import genai
from google.genai.types import (
    GenerateContentConfig,
    HarmBlockThreshold,
    HarmCategory,
    HttpOptions,
    SafetySetting,
)

client = genai.Client(http_options=HttpOptions(api_version="v1"))

system_instruction = "Be as mean as possible."

prompt = """
Write a list of 5 disrespectful things that I might say to the universe after stubbing my toe in the dark.
"""

safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
]

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config=GenerateContentConfig(
        system_instruction=system_instruction,
        safety_settings=safety_settings,
    ),
)

# `response.text` will be None if the response is blocked.
print(response.text)
# Example response:
# None

# The finish reason will be SAFETY if the response is blocked.
print(response.candidates[0].finish_reason)
# Example response:
# FinishReason.SAFETY

# Inspect the per-category safety ratings in the response.
for rating in response.candidates[0].safety_ratings:
    print("\nCategory:", rating.category)
    print("Is Blocked:", bool(rating.blocked))
    print("Probability:", rating.probability)
    print("Probability Score:", rating.probability_score)
    print("Severity:", rating.severity)
    print("Severity Score:", rating.severity_score)
# Example response:
#
# Category: HarmCategory.HARM_CATEGORY_HATE_SPEECH
# Is Blocked: False
# Probability: HarmProbability.NEGLIGIBLE
# Probability Score: 2.547714e-05
# Severity: HarmSeverity.HARM_SEVERITY_NEGLIGIBLE
# Severity Score: None
#
# Category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT
# Is Blocked: False
# Probability: HarmProbability.NEGLIGIBLE
# Probability Score: 3.6103818e-06
# Severity: HarmSeverity.HARM_SEVERITY_NEGLIGIBLE
# Severity Score: None
#
# Category: HarmCategory.HARM_CATEGORY_HARASSMENT
# Is Blocked: True
# Probability: HarmProbability.MEDIUM
# Probability Score: 0.71599233
# Severity: HarmSeverity.HARM_SEVERITY_MEDIUM
# Severity Score: 0.30782545
#
# Category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT
# Is Blocked: False
# Probability: HarmProbability.NEGLIGIBLE
# Probability Score: 1.5624657e-05
# Severity: HarmSeverity.HARM_SEVERITY_NEGLIGIBLE
# Severity Score: None
```

What's next
-----------

To search and filter code samples for other Google Cloud products, see the [Google Cloud sample browser](/docs/samples?product=googlegenaisdk).
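As the sample's output comments note, `response.text` comes back as `None` when generation is blocked, and the candidate's finish reason is `SAFETY`. A caller will usually want to branch on the finish reason and collect the blocking categories before touching the text. Below is a minimal sketch of that handling pattern; the `response` value here is a plain dict standing in for the SDK's response object, so the field access is illustrative and differs from the real `google-genai` types:

```python
def extract_text(response):
    """Return (text, blocked_categories); text is None when the response was blocked."""
    candidate = response["candidates"][0]
    if candidate["finish_reason"] == "SAFETY":
        # Collect only the categories whose rating actually triggered the block.
        blocked = [
            r["category"]
            for r in candidate["safety_ratings"]
            if r.get("blocked")
        ]
        return None, blocked
    return candidate["text"], []


# Stand-in for a blocked response (mirrors the sample's example output).
blocked_response = {
    "candidates": [{
        "finish_reason": "SAFETY",
        "safety_ratings": [
            {"category": "HARM_CATEGORY_HARASSMENT", "blocked": True},
            {"category": "HARM_CATEGORY_HATE_SPEECH", "blocked": False},
        ],
    }]
}

text, categories = extract_text(blocked_response)
print(text, categories)  # None ['HARM_CATEGORY_HARASSMENT']
```

With a successful (unblocked) response the same helper returns the generated text and an empty list, so callers can treat the two cases uniformly.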