Detecting explicit content in videos

Explicit Content Detection detects adult content in videos. Adult content is generally inappropriate for those under 18 years of age and includes, but is not limited to, nudity, sexual activities, and pornography. Such content is also identified when it appears in cartoons or anime.

The response includes a bucketized likelihood value, from VERY_UNLIKELY to VERY_LIKELY.

When Explicit Content Detection evaluates a video, it does so on a per-frame basis and considers visual content only. The audio component of the video is not used to evaluate explicit content level.
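Since each frame carries one of the bucketized likelihood values, a common follow-up step is to filter frames at or above a chosen threshold. The following is a minimal illustrative sketch, not part of the API or any client library: the `flag_frames` helper name is an assumption, while the dict keys mirror the REST response fields (`timeOffset`, `pornographyLikelihood`).

```python
# Ordered likelihood buckets as the REST API returns them.
LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
               "POSSIBLE", "LIKELY", "VERY_LIKELY"]


def flag_frames(frames, threshold="POSSIBLE"):
    """Return frames whose pornographyLikelihood is at or above threshold.

    `frames` is a list of dicts shaped like the REST response, e.g.
    {"timeOffset": "1.16s", "pornographyLikelihood": "VERY_UNLIKELY"}.
    """
    min_rank = LIKELIHOODS.index(threshold)
    return [f for f in frames
            if LIKELIHOODS.index(f["pornographyLikelihood"]) >= min_rank]


frames = [
    {"timeOffset": "0.056149s", "pornographyLikelihood": "VERY_UNLIKELY"},
    {"timeOffset": "1.166841s", "pornographyLikelihood": "LIKELY"},
]
print(flag_frames(frames))  # only the LIKELY frame remains
```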

The following example performs video analysis for the Explicit Content Detection feature on a file located in Cloud Storage.


Send video annotation request

The following shows how to send a POST request to the videos:annotate method. The example uses the access token for a service account set up for the project using the Cloud SDK. For instructions on installing the Cloud SDK, setting up a project with a service account, and obtaining an access token, see the Video Intelligence API Quickstart.

Before using any of the request data below, make the following replacements:

  • input-uri: the Cloud Storage location of the file you want to annotate, including the bucket and file name. Must start with gs://.
    For example: "inputUri": "gs://cloud-videointelligence-demo/assistant.mp4",
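For illustration, the request body can be assembled and validated programmatically before sending it. This is a sketch under stated assumptions: `build_annotate_request` is a hypothetical helper, not part of any client library; it only checks the gs:// prefix and emits the JSON body used by videos:annotate.

```python
import json


def build_annotate_request(input_uri):
    """Build the videos:annotate JSON body for Explicit Content Detection.

    `input_uri` must be a Cloud Storage path starting with gs://.
    """
    if not input_uri.startswith("gs://"):
        raise ValueError("input_uri must start with gs://")
    return json.dumps({
        "inputUri": input_uri,
        "features": ["EXPLICIT_CONTENT_DETECTION"],
    })


print(build_annotate_request("gs://cloud-videointelligence-demo/assistant.mp4"))
```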

HTTP method and URL:

POST https://videointelligence.googleapis.com/v1/videos:annotate

Request JSON body:

{
  "inputUri": "input-uri",
  "features": ["EXPLICIT_CONTENT_DETECTION"]
}


You should receive a JSON response similar to the following:

{
  "name": "projects/project-number/locations/location-id/operations/operation-id"
}

If the response is successful, the Video Intelligence API returns the name for your operation. The above shows an example of such a response, where:

  • project-number: the number of your project.
  • location-id: the Cloud region where annotation should take place. Supported cloud regions are: us-east1, us-west1, europe-west1, asia-east1. If no region is specified, a region will be determined based on video file location.
  • operation-id: the ID of the long running operation created for the request and provided in the response when you started the operation, for example 12345...
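If you need the individual components of the returned operation name, they can be recovered with a regular expression. `parse_operation_name` below is an illustrative helper, not part of the client libraries, and the example values are placeholders:

```python
import re

# Matches names of the form projects/.../locations/.../operations/...
OPERATION_NAME_RE = re.compile(
    r"^projects/(?P<project_number>[^/]+)"
    r"/locations/(?P<location_id>[^/]+)"
    r"/operations/(?P<operation_id>[^/]+)$"
)


def parse_operation_name(name):
    """Split an operation name into project, location, and operation parts."""
    match = OPERATION_NAME_RE.match(name)
    if match is None:
        raise ValueError(f"unexpected operation name: {name!r}")
    return match.groupdict()


parts = parse_operation_name(
    "projects/123456/locations/us-east1/operations/987654")
print(parts["location_id"])  # us-east1
```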

Get annotation results

To retrieve the result of the operation, make a GET request, using the operation name returned from the call to videos:annotate, as shown in the following example.

Before using any of the request data below, make the following replacements:

  • operation-name: the name of the operation as returned by Video Intelligence API. The operation name has the format projects/project-number/locations/location-id/operations/operation-id
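The retrieve step is typically repeated until the operation completes. The sketch below shows one way to structure that polling loop; `fetch` is a placeholder for whatever function issues the authenticated GET request and decodes the JSON, and the helper name is an assumption, not library API:

```python
import time


def poll_operation(fetch, operation_name, interval=1.0, max_attempts=60):
    """Poll a long-running operation until its `done` field appears.

    `fetch` is any callable that takes the operation name and returns the
    decoded JSON response; in real code it would issue the authenticated
    GET request described in this section.
    """
    for _ in range(max_attempts):
        operation = fetch(operation_name)
        if operation.get("done"):
            return operation
        time.sleep(interval)
    raise TimeoutError(f"{operation_name} did not complete")
```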

HTTP method and URL:

GET https://videointelligence.googleapis.com/v1/operation-name


You should receive a JSON response similar to the following:

  "name": "projects/project-number/locations/location-id/operations/operation-id",
  "metadata": {
    "@type": "",
    "annotationProgress": [
      "inputUri": "/demomaker/gbikes_dinosaur.mp4",
      "progressPercent": 100,
      "startTime": "2020-03-26T00:16:35.112404Z",
      "updateTime": "2020-03-26T00:16:55.937889Z"
   "done": true,
   "response": {
    "@type": "",
    "annotationResults": [
      "inputUri": "/demomaker/gbikes_dinosaur.mp4",
      "explicitAnnotation": {
       "frames": [
         "timeOffset": "0.056149s",
         "pornographyLikelihood": "VERY_UNLIKELY"
         "timeOffset": "1.166841s",
         "pornographyLikelihood": "VERY_UNLIKELY"
         "timeOffset": "41.678209s",
         "pornographyLikelihood": "VERY_UNLIKELY"
         "timeOffset": "42.596413s",
         "pornographyLikelihood": "VERY_UNLIKELY"
Explicit content annotations are returned as an explicitAnnotation element containing a list of frames. Note: The done field is only returned when its value is true. It is not included in responses for operations that have not completed.
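When post-processing the raw REST response, two details matter: timeOffset values are strings such as "41.678209s", and the done field is absent until the operation finishes. The helpers below are an illustrative sketch, not part of any client library:

```python
def offset_seconds(time_offset):
    """Convert a REST timeOffset string such as "41.678209s" to float seconds."""
    if not time_offset.endswith("s"):
        raise ValueError(f"unexpected timeOffset: {time_offset!r}")
    return float(time_offset[:-1])


def extract_frames(operation):
    """Pull explicit-content frames out of a completed operation dict.

    Returns None while the operation is still running (no `done` field).
    """
    if not operation.get("done"):
        return None
    results = operation["response"]["annotationResults"][0]
    return results["explicitAnnotation"]["frames"]


print(offset_seconds("41.678209s"))  # 41.678209
```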

Download annotation results

Copy the annotation from the source bucket to the destination bucket (see Copy files and objects):

gsutil cp gcs_uri gs://my-bucket

Note: If you provide an output Cloud Storage URI in the request, the annotation is stored at that URI instead.


C#

public static object AnalyzeExplicitContentGcs(string uri)
{
    var client = VideoIntelligenceServiceClient.Create();
    var request = new AnnotateVideoRequest()
    {
        InputUri = uri,
        Features = { Feature.ExplicitContentDetection }
    };
    var op = client.AnnotateVideo(request).PollUntilCompleted();
    foreach (var result in op.Result.AnnotationResults)
    {
        foreach (var frame in result.ExplicitAnnotation.Frames)
        {
            Console.WriteLine("Time Offset: {0}", frame.TimeOffset);
            Console.WriteLine("Pornography Likelihood: {0}", frame.PornographyLikelihood);
        }
    }
    return 0;
}


Go

func explicitContentURI(w io.Writer, file string) error {
	ctx := context.Background()
	client, err := video.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	op, err := client.AnnotateVideo(ctx, &videopb.AnnotateVideoRequest{
		Features: []videopb.Feature{
			videopb.Feature_EXPLICIT_CONTENT_DETECTION,
		},
		InputUri: file,
	})
	if err != nil {
		return err
	}
	resp, err := op.Wait(ctx)
	if err != nil {
		return err
	}

	// A single video was processed. Get the first result.
	result := resp.AnnotationResults[0].ExplicitAnnotation

	for _, frame := range result.Frames {
		offset, _ := ptypes.Duration(frame.TimeOffset)
		fmt.Fprintf(w, "%s - %s\n", offset, frame.PornographyLikelihood.String())
	}

	return nil
}


Java

// Instantiate a VideoIntelligenceServiceClient
try (VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.create()) {
  // Create an operation that will contain the response when the operation completes.
  AnnotateVideoRequest request =
      AnnotateVideoRequest.newBuilder()
          .setInputUri(gcsUri)
          .addFeatures(Feature.EXPLICIT_CONTENT_DETECTION)
          .build();

  OperationFuture<AnnotateVideoResponse, AnnotateVideoProgress> response =
      client.annotateVideoAsync(request);

  System.out.println("Waiting for operation to complete...");
  // Print detected annotations and their positions in the analyzed video.
  for (VideoAnnotationResults result : response.get().getAnnotationResultsList()) {
    for (ExplicitContentFrame frame : result.getExplicitAnnotation().getFramesList()) {
      double frameTime =
          frame.getTimeOffset().getSeconds() + frame.getTimeOffset().getNanos() / 1e9;
      System.out.printf("Location: %.3fs\n", frameTime);
      System.out.println("Adult: " + frame.getPornographyLikelihood());
    }
  }
}


Node.js

// Imports the Google Cloud Video Intelligence library
const video = require('@google-cloud/video-intelligence').v1;

// Creates a client
const client = new video.VideoIntelligenceServiceClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const gcsUri = 'GCS URI of video to analyze, e.g. gs://my-bucket/my-video.mp4';

async function analyzeExplicitContent() {
  const request = {
    inputUri: gcsUri,
    features: ['EXPLICIT_CONTENT_DETECTION'],
  };

  // Human-readable likelihoods
  const likelihoods = [
    'UNKNOWN',
    'VERY_UNLIKELY',
    'UNLIKELY',
    'POSSIBLE',
    'LIKELY',
    'VERY_LIKELY',
  ];

  // Detects unsafe content
  const [operation] = await client.annotateVideo(request);
  console.log('Waiting for operation to complete...');
  const [operationResult] = await operation.promise();
  // Gets unsafe content
  const explicitContentResults =
    operationResult.annotationResults[0].explicitAnnotation;
  console.log('Explicit annotation results:');
  explicitContentResults.frames.forEach(result => {
    if (result.timeOffset === undefined) {
      result.timeOffset = {};
    }
    if (result.timeOffset.seconds === undefined) {
      result.timeOffset.seconds = 0;
    }
    if (result.timeOffset.nanos === undefined) {
      result.timeOffset.nanos = 0;
    }
    console.log(
      `\tTime: ${result.timeOffset.seconds}` +
        `.${(result.timeOffset.nanos / 1e6).toFixed(0)}s`
    );
    console.log(
      `\t\tPornography likelihood: ${likelihoods[result.pornographyLikelihood]}`
    );
  });
}

analyzeExplicitContent();


For more information on installing and using the Cloud Video Intelligence API Client Library for Python, refer to Cloud Video Intelligence API Client Libraries.
""" Detects explicit content from the GCS path to a video. """
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.Feature.EXPLICIT_CONTENT_DETECTION]

operation = video_client.annotate_video(
    request={"features": features, "input_uri": path}
print("\nProcessing video for explicit content annotations:")

result = operation.result(timeout=90)
print("\nFinished processing.")

# Retrieve first result because a single video was processed
for frame in result.annotation_results[0].explicit_annotation.frames:
    likelihood = videointelligence.Likelihood(frame.pornography_likelihood)
    frame_time = frame.time_offset.seconds + frame.time_offset.microseconds / 1e6
    print("Time: {}s".format(frame_time))
    print("\tpornography: {}".format(


PHP

use Google\Cloud\VideoIntelligence\V1\VideoIntelligenceServiceClient;
use Google\Cloud\VideoIntelligence\V1\Feature;
use Google\Cloud\VideoIntelligence\V1\Likelihood;

/** Uncomment and populate these variables in your code */
// $uri = 'The cloud storage object to analyze (gs://your-bucket-name/your-object-name)';
// $options = []; // Optional, can be used to increase "pollingIntervalSeconds"

$video = new VideoIntelligenceServiceClient();

# Execute a request.
$operation = $video->annotateVideo([
    'inputUri' => $uri,
    'features' => [Feature::EXPLICIT_CONTENT_DETECTION]
]);

# Wait for the request to complete.
$operation->pollUntilComplete($options);

# Print the result.
if ($operation->operationSucceeded()) {
    $results = $operation->getResult()->getAnnotationResults()[0];
    $explicitAnnotation = $results->getExplicitAnnotation();
    foreach ($explicitAnnotation->getFrames() as $frame) {
        $time = $frame->getTimeOffset();
        printf('At %ss:' . PHP_EOL, $time->getSeconds() + $time->getNanos()/1000000000.0);
        printf('  pornography: ' . Likelihood::name($frame->getPornographyLikelihood()) . PHP_EOL);
    }
} else {
    print_r($operation->getError());
}


Ruby

# path = "Path to a video file on Google Cloud Storage: gs://bucket/video.mp4"

require "google/cloud/video_intelligence"

video = Google::Cloud::VideoIntelligence.video_intelligence_service

# Register a callback during the method call
operation = video.annotate_video features: [:EXPLICIT_CONTENT_DETECTION], input_uri: path

puts "Processing video for explicit content annotations:"

# Wait until the long-running operation is complete
operation.wait_until_done!

raise operation.results.message? if operation.error?
puts "Finished Processing."

explicit_annotation = operation.results.annotation_results.first.explicit_annotation

explicit_annotation.frames.each do |frame|
  frame_time = frame.time_offset.seconds + frame.time_offset.nanos / 1e9
  puts "Time: #{frame_time}s"
  puts "pornography: #{frame.pornography_likelihood}"
end