Reference documentation and code samples for the Google Cloud AI Platform V1 Client class HarmCategory.

Harm categories that will block the content.

Protobuf type `google.cloud.aiplatform.v1.HarmCategory`

Namespace: Google \ Cloud \ AIPlatform \ V1

Methods
static::name

Parameter

| Name | Description |
| --- | --- |
| value | `mixed` |
static::value

Parameter

| Name | Description |
| --- | --- |
| name | `mixed` |
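The `name()` and `value()` helpers convert between an enum constant's name and its integer value. Below is a minimal sketch of both conversions; the exception behavior noted in the comments is the standard behavior of protoc-generated PHP enum classes.

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\HarmCategory;

// value() resolves a constant name to its integer value.
$value = HarmCategory::value('HARM_CATEGORY_HATE_SPEECH');
// $value === 1

// name() resolves an integer value back to its constant name.
$name = HarmCategory::name(HarmCategory::HARM_CATEGORY_HARASSMENT);
// $name === 'HARM_CATEGORY_HARASSMENT'

// An unknown name or value raises an UnexpectedValueException.
```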
Constants

HARM_CATEGORY_UNSPECIFIED
Value: 0
The harm category is unspecified.
Generated from protobuf enum `HARM_CATEGORY_UNSPECIFIED = 0;`

HARM_CATEGORY_HATE_SPEECH
Value: 1
The harm category is hate speech.
Generated from protobuf enum `HARM_CATEGORY_HATE_SPEECH = 1;`

HARM_CATEGORY_DANGEROUS_CONTENT
Value: 2
The harm category is dangerous content.
Generated from protobuf enum `HARM_CATEGORY_DANGEROUS_CONTENT = 2;`

HARM_CATEGORY_HARASSMENT
Value: 3
The harm category is harassment.
Generated from protobuf enum `HARM_CATEGORY_HARASSMENT = 3;`

HARM_CATEGORY_SEXUALLY_EXPLICIT
Value: 4
The harm category is sexually explicit content.
Generated from protobuf enum `HARM_CATEGORY_SEXUALLY_EXPLICIT = 4;`

HARM_CATEGORY_CIVIC_INTEGRITY
Value: 5
The harm category is civic integrity.
Generated from protobuf enum `HARM_CATEGORY_CIVIC_INTEGRITY = 5;`
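To illustrate where these constants are used, here is a minimal sketch that pairs a HarmCategory with a blocking threshold. It assumes the companion SafetySetting message and its nested HarmBlockThreshold enum from the same V1 package.

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\HarmCategory;
use Google\Cloud\AIPlatform\V1\SafetySetting;
use Google\Cloud\AIPlatform\V1\SafetySetting\HarmBlockThreshold;

// Block hate speech at the low-and-above threshold; the generated
// setters accept the enums' integer values.
$setting = (new SafetySetting())
    ->setCategory(HarmCategory::HARM_CATEGORY_HATE_SPEECH)
    ->setThreshold(HarmBlockThreshold::BLOCK_LOW_AND_ABOVE);
```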