Class SafeSearchAnnotation (2.0.0)

SafeSearchAnnotation(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).


Attributes

adult (.image_annotator.Likelihood)
    Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.

spoof (.image_annotator.Likelihood)
    Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make it appear funny or offensive.

medical (.image_annotator.Likelihood)
    Likelihood that this is a medical image.

violence (.image_annotator.Likelihood)
    Likelihood that this image contains violent content.

racy (.image_annotator.Likelihood)
    Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.