Class SafetySetting (1.48.0)

SafetySetting(
    *,
    category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
    threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
    method: typing.Optional[
        google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod
    ] = None
)

Safety settings.
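
A minimal sketch of constructing this message from the v1beta1 types module; the category and threshold values are illustrative, not prescribed by this page:

from google.cloud.aiplatform_v1beta1.types import content

# Block hate speech at medium probability and above (illustrative values).
setting = content.SafetySetting(
    category=content.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold=content.SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
)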

Classes

HarmBlockMethod

HarmBlockMethod(value)

Determines whether blocking is based on the probability score alone or on both probability and severity scores.

Values:
    HARM_BLOCK_METHOD_UNSPECIFIED (0):
        The harm block method is unspecified.
    SEVERITY (1):
        The harm block method uses both probability and severity scores.
    PROBABILITY (2):
        The harm block method uses the probability score.
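
As a sketch, the optional method field can pin the blocking method explicitly (same content import as the first example; the enum values chosen here are illustrative):

setting = content.SafetySetting(
    category=content.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
    threshold=content.SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    # SEVERITY blocks on both probability and severity scores.
    method=content.SafetySetting.HarmBlockMethod.SEVERITY,
)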

HarmBlockThreshold

HarmBlockThreshold(value)

Probability-based thresholds for blocking.

Values:
    HARM_BLOCK_THRESHOLD_UNSPECIFIED (0):
        Unspecified harm block threshold.
    BLOCK_LOW_AND_ABOVE (1):
        Block low threshold and above (i.e. block more).
    BLOCK_MEDIUM_AND_ABOVE (2):
        Block medium threshold and above.
    BLOCK_ONLY_HIGH (3):
        Block only high threshold (i.e. block less).
    BLOCK_NONE (4):
        Block none.

HarmCategory

HarmCategory(value)

Harm categories that can cause content to be blocked.

Values:
    HARM_CATEGORY_UNSPECIFIED (0):
        The harm category is unspecified.
    HARM_CATEGORY_HATE_SPEECH (1):
        The harm category is hate speech.
    HARM_CATEGORY_DANGEROUS_CONTENT (2):
        The harm category is dangerous content.
    HARM_CATEGORY_HARASSMENT (3):
        The harm category is harassment.
    HARM_CATEGORY_SEXUALLY_EXPLICIT (4):
        The harm category is sexually explicit content.
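
A common pattern is one setting per harm category at a shared threshold; a sketch, reusing the content import from the first example:

settings = [
    content.SafetySetting(
        category=category,
        threshold=content.SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    )
    for category in (
        content.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        content.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        content.HarmCategory.HARM_CATEGORY_HARASSMENT,
        content.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
    )
]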

Methods

SafetySetting

SafetySetting(
    *,
    category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
    threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
    method: typing.Optional[
        google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod
    ] = None
)

Safety settings.
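
An end-to-end sketch, assuming the settings list built above: safety settings are passed to generation via the safety_settings field of GenerateContentRequest. The project, location, and model name below are placeholders:

from google.cloud import aiplatform_v1beta1

# generate_content is served from a regional endpoint (assumed region here).
client = aiplatform_v1beta1.PredictionServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
request = aiplatform_v1beta1.GenerateContentRequest(
    # Placeholder resource name; substitute your project, location, and model.
    model="projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-1.0-pro",
    contents=[
        aiplatform_v1beta1.Content(
            role="user",
            parts=[aiplatform_v1beta1.Part(text="Hello")],
        )
    ],
    safety_settings=settings,  # the list built above
)
response = client.generate_content(request=request)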