Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
Generate text with safety settings
This sample demonstrates how to use the Gemini model with safety settings to generate text.
Explore further
For detailed documentation that includes this code sample, see the following:
- Safety and content filters
Code sample
Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google import genai
from google.genai.types import (
    GenerateContentConfig,
    HarmCategory,
    HarmBlockThreshold,
    HttpOptions,
    SafetySetting,
)

client = genai.Client(http_options=HttpOptions(api_version="v1"))

system_instruction = "Be as mean as possible."

prompt = """
Write a list of 5 disrespectful things that I might say to the universe after stubbing my toe in the dark.
"""

safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
]

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config=GenerateContentConfig(
        system_instruction=system_instruction,
        safety_settings=safety_settings,
    ),
)

# Response will be `None` if it is blocked.
print(response.text)
# Example response:
# None

# Finish Reason will be `SAFETY` if it is blocked.
print(response.candidates[0].finish_reason)
# Example response:
# FinishReason.SAFETY

# For details on all the fields in the response
for each in response.candidates[0].safety_ratings:
    print('\nCategory: ', str(each.category))
    print('Is Blocked:', True if each.blocked else False)
    print('Probability: ', each.probability)
    print('Probability Score: ', each.probability_score)
    print('Severity:', each.severity)
    print('Severity Score:', each.severity_score)
# Example response:
#
# Category: HarmCategory.HARM_CATEGORY_HATE_SPEECH
# Is Blocked: False
# Probability: HarmProbability.NEGLIGIBLE
# Probability Score: 2.547714e-05
# Severity: HarmSeverity.HARM_SEVERITY_NEGLIGIBLE
# Severity Score: None
#
# Category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT
# Is Blocked: False
# Probability: HarmProbability.NEGLIGIBLE
# Probability Score: 3.6103818e-06
# Severity: HarmSeverity.HARM_SEVERITY_NEGLIGIBLE
# Severity Score: None
#
# Category: HarmCategory.HARM_CATEGORY_HARASSMENT
# Is Blocked: True
# Probability: HarmProbability.MEDIUM
# Probability Score: 0.71599233
# Severity: HarmSeverity.HARM_SEVERITY_MEDIUM
# Severity Score: 0.30782545
#
# Category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT
# Is Blocked: False
# Probability: HarmProbability.NEGLIGIBLE
# Probability Score: 1.5624657e-05
# Severity: HarmSeverity.HARM_SEVERITY_NEGLIGIBLE
# Severity Score: None
What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.