Class SafeSearchAnnotation (2.0.0)

SafeSearchAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).

Attributes

| Name | Type | Description |
| --- | --- | --- |
| `adult` | `.image_annotator.Likelihood` | Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities. |
| `spoof` | `.image_annotator.Likelihood` | Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive. |
| `medical` | `.image_annotator.Likelihood` | Likelihood that this is a medical image. |
| `violence` | `.image_annotator.Likelihood` | Likelihood that this image contains violent content. |
| `racy` | `.image_annotator.Likelihood` | Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas. |
| `adult_confidence` | `float` | Confidence of adult_score. Range [0, 1]; 0 means not confident, 1 means very confident. |
| `spoof_confidence` | `float` | Confidence of spoof_score. Range [0, 1]; 0 means not confident, 1 means very confident. |
| `medical_confidence` | `float` | Confidence of medical_score. Range [0, 1]; 0 means not confident, 1 means very confident. |
| `violence_confidence` | `float` | Confidence of violence_score. Range [0, 1]; 0 means not confident, 1 means very confident. |
| `racy_confidence` | `float` | Confidence of racy_score. Range [0, 1]; 0 means not confident, 1 means very confident. |
| `nsfw_confidence` | `float` | Confidence of nsfw_score. Range [0, 1]; 0 means not confident, 1 means very confident. |