Generate text with safety settings

This sample demonstrates how to apply safety settings when generating text with a Gemini model on Vertex AI.

Code sample

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
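
The sample below typically reads the Google Cloud project and location from environment variables. As a minimal sketch, assuming a placeholder project ID and region, you can also target Vertex AI explicitly when constructing the client:

from google import genai
from google.genai.types import HttpOptions

# A sketch of explicit client construction; "my-project" and "us-central1"
# are placeholders for your own project ID and region.
client = genai.Client(
    vertexai=True,
    project="my-project",
    location="us-central1",
    http_options=HttpOptions(api_version="v1"),
)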

from google import genai
from google.genai.types import (
    GenerateContentConfig,
    HarmCategory,
    HarmBlockThreshold,
    HttpOptions,
    SafetySetting,
)

client = genai.Client(http_options=HttpOptions(api_version="v1"))

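# A deliberately harsh system instruction, used here to provoke the safety filters.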
system_instruction = "Be as mean as possible."

prompt = """
    Write a list of 5 disrespectful things that I might say to the universe after stubbing my toe in the dark.
"""

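# Apply the strictest blocking threshold, BLOCK_LOW_AND_ABOVE, to four harm categories.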
safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
]

response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents=prompt,
    config=GenerateContentConfig(
        system_instruction=system_instruction,
        safety_settings=safety_settings,
    ),
)

# `response.text` is `None` if the response was blocked.
print(response.text)
# The finish reason is `SAFETY` if the response was blocked.
print(response.candidates[0].finish_reason)
# Safety ratings report the probability and severity scored for each harm category.
for safety_rating in response.candidates[0].safety_ratings:
    print(safety_rating)

# Example response:
# None
# FinishReason.SAFETY
# blocked=None category=<HarmCategory.HARM_CATEGORY_HATE_SPEECH: 'HARM_CATEGORY_HATE_SPEECH'> probability=<HarmProbability.NEGLIGIBLE: 'NEGLIGIBLE'> probability_score=1.767152e-05 severity=<HarmSeverity.HARM_SEVERITY_NEGLIGIBLE: 'HARM_SEVERITY_NEGLIGIBLE'> severity_score=None
# blocked=None category=<HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: 'HARM_CATEGORY_DANGEROUS_CONTENT'> probability=<HarmProbability.NEGLIGIBLE: 'NEGLIGIBLE'> probability_score=2.399503e-06 severity=<HarmSeverity.HARM_SEVERITY_NEGLIGIBLE: 'HARM_SEVERITY_NEGLIGIBLE'> severity_score=0.061250776
# blocked=True category=<HarmCategory.HARM_CATEGORY_HARASSMENT: 'HARM_CATEGORY_HARASSMENT'> probability=<HarmProbability.MEDIUM: 'MEDIUM'> probability_score=0.64129734 severity=<HarmSeverity.HARM_SEVERITY_MEDIUM: 'HARM_SEVERITY_MEDIUM'> severity_score=0.2686556
# blocked=None category=<HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: 'HARM_CATEGORY_SEXUALLY_EXPLICIT'> probability=<HarmProbability.NEGLIGIBLE: 'NEGLIGIBLE'> probability_score=5.2786977e-06 severity=<HarmSeverity.HARM_SEVERITY_NEGLIGIBLE: 'HARM_SEVERITY_NEGLIGIBLE'> severity_score=0.043162167
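
In application code you would typically branch on the finish reason rather than printing a possibly-`None` text field. A minimal sketch of that check, reusing the `response` object from the sample above:

from google.genai.types import FinishReason

candidate = response.candidates[0]
if candidate.finish_reason == FinishReason.SAFETY:
    # The response was blocked; report which categories triggered it.
    for rating in candidate.safety_ratings or []:
        if rating.blocked:
            print(f"Blocked by {rating.category} (probability: {rating.probability})")
else:
    print(response.text)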

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.