Get predictions from a video object tracking model

This page shows you how to get batch predictions from your trained video object tracking model using the Google Cloud console or the Vertex AI API. Batch predictions are asynchronous requests: you request batch predictions directly from the model resource without needing to deploy the model to an endpoint.

AutoML video models do not support online predictions.

Get batch predictions

To make a batch prediction request, you specify an input source and an output location where Vertex AI stores the prediction results.

Input data requirements

The input for batch requests specifies the items to send to your model for prediction. Batch predictions for the AutoML video model type use a JSON Lines file to specify a list of videos to make predictions on, and then store the JSON Lines file in a Cloud Storage bucket. You can specify Infinity for the timeSegmentEnd field to indicate the end of the video. The following example shows a single line in an input JSON Lines file.

{"content": "gs://sourcebucket/datasets/videos/source_video.mp4", "mimeType": "video/mp4", "timeSegmentStart": "0.0s", "timeSegmentEnd": "2.366667s"}
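As a sketch, the input file can also be generated programmatically. The following Python snippet writes one JSON Lines entry per video; the bucket paths and file names are hypothetical placeholders, not values from this page:

```python
import json

# Hypothetical source videos; replace with your own Cloud Storage URIs.
videos = [
    {"uri": "gs://sourcebucket/datasets/videos/source_video.mp4", "end": "2.366667s"},
    {"uri": "gs://sourcebucket/datasets/videos/other_video.mp4", "end": "Infinity"},
]

# Write one JSON object per line, as the JSON Lines format requires.
with open("input.jsonl", "w") as f:
    for video in videos:
        entry = {
            "content": video["uri"],
            "mimeType": "video/mp4",
            "timeSegmentStart": "0.0s",
            # "Infinity" means predictions run to the end of the video.
            "timeSegmentEnd": video["end"],
        }
        f.write(json.dumps(entry) + "\n")
```

Upload the resulting file to the Cloud Storage bucket that you reference in the batch prediction request.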

Request a batch prediction

For batch prediction requests, you can use the Google Cloud console or the Vertex AI API. Depending on the number of input items that you submit, a batch prediction task can take some time to complete.

Google Cloud console

Use the Google Cloud console to request a batch prediction.

  1. In the Google Cloud console, in the Vertex AI section, go to the Batch predictions page.

    Go to the Batch predictions page

  2. Click Create to open the New batch prediction window and complete the following steps:

    1. Enter a name for the batch prediction.
    2. For Model name, select the name of the model to use for this batch prediction.
    3. For Source path, specify the Cloud Storage location where your JSON Lines input file is located.
    4. For Destination path, specify a Cloud Storage location where the batch prediction results are stored. The output format is determined by your model's objective. AutoML models for video objectives output JSON Lines files.

API

Use the Vertex AI API to send batch prediction requests.

REST

Before using any of the request data, make the following replacements:

  • LOCATION_ID: Region where the Model is stored and the batch prediction job is executed. For example, us-central1.
  • PROJECT_ID: Your project ID.
  • BATCH_JOB_NAME: Display name for the batch job.
  • MODEL_ID: The ID of the model used for making predictions.
  • THRESHOLD_VALUE (optional): Vertex AI returns only predictions whose confidence scores are at least this value. The default is 0.0.
  • URI: The Cloud Storage URI where your input JSON Lines file is located.
  • BUCKET: Your Cloud Storage bucket.
  • PROJECT_NUMBER: Your project's automatically generated project number.

HTTP method and URL:

POST https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs

Request JSON body:

{
    "displayName": "BATCH_JOB_NAME",
    "model": "projects/PROJECT_ID/locations/LOCATION_ID/models/MODEL_ID",
    "modelParameters": {
      "confidenceThreshold": THRESHOLD_VALUE,
    },
    "inputConfig": {
        "instancesFormat": "jsonl",
        "gcsSource": {
            "uris": ["URI"],
        },
    },
    "outputConfig": {
        "predictionsFormat": "jsonl",
        "gcsDestination": {
            "outputUriPrefix": "BUCKET",
        },
    },
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION_ID-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION_ID/batchPredictionJobs" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "name": "projects/PROJECT_NUMBER/locations/us-central1/batchPredictionJobs/BATCH_JOB_ID",
  "displayName": "BATCH_JOB_NAME",
  "model": "projects/PROJECT_NUMBER/locations/us-central1/models/MODEL_ID",
  "inputConfig": {
    "instancesFormat": "jsonl",
    "gcsSource": {
      "uris": [
        "CONTENT"
      ]
    }
  },
  "outputConfig": {
    "predictionsFormat": "jsonl",
    "gcsDestination": {
      "outputUriPrefix": "BUCKET"
    }
  },
  "state": "JOB_STATE_PENDING",
  "createTime": "2020-05-30T02:58:44.341643Z",
  "updateTime": "2020-05-30T02:58:44.341643Z",
  "modelDisplayName": "MODEL_NAME",
  "modelObjective": "MODEL_OBJECTIVE"
}

You can poll for the status of the batch job using BATCH_JOB_ID until the job state becomes JOB_STATE_SUCCEEDED.
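A minimal polling loop might look like the following Python sketch. Here `get_job_state` is a stand-in for the actual API call (for example, a GET on the batchPredictionJobs resource, or the Python client's job accessor); the stubbed state sequence below replaces a live request so the logic can be shown self-contained:

```python
import time

# Terminal states for a batch prediction job.
TERMINAL_STATES = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}


def wait_for_job(get_job_state, poll_interval=30.0, sleep=time.sleep):
    """Polls get_job_state() until the job reaches a terminal state.

    get_job_state is a caller-supplied function returning the job's
    current `state` string (a stand-in for the real API call).
    """
    while True:
        state = get_job_state()
        if state in TERMINAL_STATES:
            return state
        sleep(poll_interval)


# Example with a stubbed state sequence instead of a live API call.
states = iter(["JOB_STATE_PENDING", "JOB_STATE_RUNNING", "JOB_STATE_SUCCEEDED"])
final_state = wait_for_job(lambda: next(states), sleep=lambda _: None)
print(final_state)  # JOB_STATE_SUCCEEDED
```

In a real script, `get_job_state` would fetch the job by its resource name and return its `state` field, and the loop would keep the 30-second sleep between requests.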

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.aiplatform.util.ValueConverter;
import com.google.cloud.aiplatform.v1.BatchDedicatedResources;
import com.google.cloud.aiplatform.v1.BatchPredictionJob;
import com.google.cloud.aiplatform.v1.BatchPredictionJob.InputConfig;
import com.google.cloud.aiplatform.v1.BatchPredictionJob.OutputConfig;
import com.google.cloud.aiplatform.v1.BatchPredictionJob.OutputInfo;
import com.google.cloud.aiplatform.v1.BigQueryDestination;
import com.google.cloud.aiplatform.v1.BigQuerySource;
import com.google.cloud.aiplatform.v1.CompletionStats;
import com.google.cloud.aiplatform.v1.GcsDestination;
import com.google.cloud.aiplatform.v1.GcsSource;
import com.google.cloud.aiplatform.v1.JobServiceClient;
import com.google.cloud.aiplatform.v1.JobServiceSettings;
import com.google.cloud.aiplatform.v1.LocationName;
import com.google.cloud.aiplatform.v1.MachineSpec;
import com.google.cloud.aiplatform.v1.ManualBatchTuningParameters;
import com.google.cloud.aiplatform.v1.ModelName;
import com.google.cloud.aiplatform.v1.ResourcesConsumed;
import com.google.cloud.aiplatform.v1.schema.predict.params.VideoObjectTrackingPredictionParams;
import com.google.protobuf.Any;
import com.google.protobuf.Value;
import com.google.rpc.Status;
import java.io.IOException;
import java.util.List;

public class CreateBatchPredictionJobVideoObjectTrackingSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String batchPredictionDisplayName = "YOUR_VIDEO_OBJECT_TRACKING_DISPLAY_NAME";
    String modelId = "YOUR_MODEL_ID";
    String gcsSourceUri =
        "gs://YOUR_GCS_SOURCE_BUCKET/path_to_your_video_source/[file.csv/file.jsonl]";
    String gcsDestinationOutputUriPrefix =
        "gs://YOUR_GCS_SOURCE_BUCKET/destination_output_uri_prefix/";
    String project = "YOUR_PROJECT_ID";
    batchPredictionJobVideoObjectTracking(
        batchPredictionDisplayName, modelId, gcsSourceUri, gcsDestinationOutputUriPrefix, project);
  }

  static void batchPredictionJobVideoObjectTracking(
      String batchPredictionDisplayName,
      String modelId,
      String gcsSourceUri,
      String gcsDestinationOutputUriPrefix,
      String project)
      throws IOException {
    JobServiceSettings jobServiceSettings =
        JobServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the close method on the client to safely clean up any remaining background resources.
    try (JobServiceClient jobServiceClient = JobServiceClient.create(jobServiceSettings)) {
      String location = "us-central1";
      LocationName locationName = LocationName.of(project, location);
      ModelName modelName = ModelName.of(project, location, modelId);

      VideoObjectTrackingPredictionParams modelParamsObj =
          VideoObjectTrackingPredictionParams.newBuilder()
              .setConfidenceThreshold(((float) 0.5))
              .build();

      Value modelParameters = ValueConverter.toValue(modelParamsObj);

      GcsSource.Builder gcsSource = GcsSource.newBuilder();
      gcsSource.addUris(gcsSourceUri);
      InputConfig inputConfig =
          InputConfig.newBuilder().setInstancesFormat("jsonl").setGcsSource(gcsSource).build();

      GcsDestination gcsDestination =
          GcsDestination.newBuilder().setOutputUriPrefix(gcsDestinationOutputUriPrefix).build();
      OutputConfig outputConfig =
          OutputConfig.newBuilder()
              .setPredictionsFormat("jsonl")
              .setGcsDestination(gcsDestination)
              .build();

      BatchPredictionJob batchPredictionJob =
          BatchPredictionJob.newBuilder()
              .setDisplayName(batchPredictionDisplayName)
              .setModel(modelName.toString())
              .setModelParameters(modelParameters)
              .setInputConfig(inputConfig)
              .setOutputConfig(outputConfig)
              .build();
      BatchPredictionJob batchPredictionJobResponse =
          jobServiceClient.createBatchPredictionJob(locationName, batchPredictionJob);

      System.out.println("Create Batch Prediction Job Video Object Tracking Response");
      System.out.format("\tName: %s\n", batchPredictionJobResponse.getName());
      System.out.format("\tDisplay Name: %s\n", batchPredictionJobResponse.getDisplayName());
      System.out.format("\tModel %s\n", batchPredictionJobResponse.getModel());
      System.out.format(
          "\tModel Parameters: %s\n", batchPredictionJobResponse.getModelParameters());

      System.out.format("\tState: %s\n", batchPredictionJobResponse.getState());
      System.out.format("\tCreate Time: %s\n", batchPredictionJobResponse.getCreateTime());
      System.out.format("\tStart Time: %s\n", batchPredictionJobResponse.getStartTime());
      System.out.format("\tEnd Time: %s\n", batchPredictionJobResponse.getEndTime());
      System.out.format("\tUpdate Time: %s\n", batchPredictionJobResponse.getUpdateTime());
      System.out.format("\tLabels: %s\n", batchPredictionJobResponse.getLabelsMap());

      InputConfig inputConfigResponse = batchPredictionJobResponse.getInputConfig();
      System.out.println("\tInput Config");
      System.out.format("\t\tInstances Format: %s\n", inputConfigResponse.getInstancesFormat());

      GcsSource gcsSourceResponse = inputConfigResponse.getGcsSource();
      System.out.println("\t\tGcs Source");
      System.out.format("\t\t\tUris %s\n", gcsSourceResponse.getUrisList());

      BigQuerySource bigQuerySource = inputConfigResponse.getBigquerySource();
      System.out.println("\t\tBigquery Source");
      System.out.format("\t\t\tInput_uri: %s\n", bigQuerySource.getInputUri());

      OutputConfig outputConfigResponse = batchPredictionJobResponse.getOutputConfig();
      System.out.println("\tOutput Config");
      System.out.format(
          "\t\tPredictions Format: %s\n", outputConfigResponse.getPredictionsFormat());

      GcsDestination gcsDestinationResponse = outputConfigResponse.getGcsDestination();
      System.out.println("\t\tGcs Destination");
      System.out.format(
          "\t\t\tOutput Uri Prefix: %s\n", gcsDestinationResponse.getOutputUriPrefix());

      BigQueryDestination bigQueryDestination = outputConfigResponse.getBigqueryDestination();
      System.out.println("\t\tBig Query Destination");
      System.out.format("\t\t\tOutput Uri: %s\n", bigQueryDestination.getOutputUri());

      BatchDedicatedResources batchDedicatedResources =
          batchPredictionJobResponse.getDedicatedResources();
      System.out.println("\tBatch Dedicated Resources");
      System.out.format(
          "\t\tStarting Replica Count: %s\n", batchDedicatedResources.getStartingReplicaCount());
      System.out.format(
          "\t\tMax Replica Count: %s\n", batchDedicatedResources.getMaxReplicaCount());

      MachineSpec machineSpec = batchDedicatedResources.getMachineSpec();
      System.out.println("\t\tMachine Spec");
      System.out.format("\t\t\tMachine Type: %s\n", machineSpec.getMachineType());
      System.out.format("\t\t\tAccelerator Type: %s\n", machineSpec.getAcceleratorType());
      System.out.format("\t\t\tAccelerator Count: %s\n", machineSpec.getAcceleratorCount());

      ManualBatchTuningParameters manualBatchTuningParameters =
          batchPredictionJobResponse.getManualBatchTuningParameters();
      System.out.println("\tManual Batch Tuning Parameters");
      System.out.format("\t\tBatch Size: %s\n", manualBatchTuningParameters.getBatchSize());

      OutputInfo outputInfo = batchPredictionJobResponse.getOutputInfo();
      System.out.println("\tOutput Info");
      System.out.format("\t\tGcs Output Directory: %s\n", outputInfo.getGcsOutputDirectory());
      System.out.format("\t\tBigquery Output Dataset: %s\n", outputInfo.getBigqueryOutputDataset());

      Status status = batchPredictionJobResponse.getError();
      System.out.println("\tError");
      System.out.format("\t\tCode: %s\n", status.getCode());
      System.out.format("\t\tMessage: %s\n", status.getMessage());
      List<Any> details = status.getDetailsList();

      for (Status partialFailure : batchPredictionJobResponse.getPartialFailuresList()) {
        System.out.println("\tPartial Failure");
        System.out.format("\t\tCode: %s\n", partialFailure.getCode());
        System.out.format("\t\tMessage: %s\n", partialFailure.getMessage());
        List<Any> partialFailureDetailsList = partialFailure.getDetailsList();
      }

      ResourcesConsumed resourcesConsumed = batchPredictionJobResponse.getResourcesConsumed();
      System.out.println("\tResources Consumed");
      System.out.format("\t\tReplica Hours: %s\n", resourcesConsumed.getReplicaHours());

      CompletionStats completionStats = batchPredictionJobResponse.getCompletionStats();
      System.out.println("\tCompletion Stats");
      System.out.format("\t\tSuccessful Count: %s\n", completionStats.getSuccessfulCount());
      System.out.format("\t\tFailed Count: %s\n", completionStats.getFailedCount());
      System.out.format("\t\tIncomplete Count: %s\n", completionStats.getIncompleteCount());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const batchPredictionDisplayName = 'YOUR_BATCH_PREDICTION_DISPLAY_NAME';
// const modelId = 'YOUR_MODEL_ID';
// const gcsSourceUri = 'YOUR_GCS_SOURCE_URI';
// const gcsDestinationOutputUriPrefix = 'YOUR_GCS_DEST_OUTPUT_URI_PREFIX';
//    eg. "gs://<your-gcs-bucket>/destination_path"
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';
const aiplatform = require('@google-cloud/aiplatform');
const {params} = aiplatform.protos.google.cloud.aiplatform.v1.schema.predict;

// Imports the Google Cloud Job Service Client library
const {JobServiceClient} = require('@google-cloud/aiplatform').v1;

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const jobServiceClient = new JobServiceClient(clientOptions);

async function createBatchPredictionJobVideoObjectTracking() {
  // Configure the parent resource
  const parent = `projects/${project}/locations/${location}`;
  const modelName = `projects/${project}/locations/${location}/models/${modelId}`;

  // For more information on how to configure the model parameters object, see
  // https://cloud.google.com/ai-platform-unified/docs/predictions/batch-predictions
  const modelParamsObj = new params.VideoObjectTrackingPredictionParams({
    confidenceThreshold: 0.5,
  });

  const modelParameters = modelParamsObj.toValue();

  const inputConfig = {
    instancesFormat: 'jsonl',
    gcsSource: {uris: [gcsSourceUri]},
  };
  const outputConfig = {
    predictionsFormat: 'jsonl',
    gcsDestination: {outputUriPrefix: gcsDestinationOutputUriPrefix},
  };
  const batchPredictionJob = {
    displayName: batchPredictionDisplayName,
    model: modelName,
    modelParameters,
    inputConfig,
    outputConfig,
  };
  const request = {
    parent,
    batchPredictionJob,
  };

  // Create batch prediction job request
  const [response] = await jobServiceClient.createBatchPredictionJob(request);

  console.log('Create batch prediction job video object tracking response');
  console.log(`Name : ${response.name}`);
  console.log('Raw response:');
  console.log(JSON.stringify(response, null, 2));
}
createBatchPredictionJobVideoObjectTracking();

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

from typing import Sequence, Union

from google.cloud import aiplatform


def create_batch_prediction_job_sample(
    project: str,
    location: str,
    model_resource_name: str,
    job_display_name: str,
    gcs_source: Union[str, Sequence[str]],
    gcs_destination: str,
    sync: bool = True,
):
    aiplatform.init(project=project, location=location)

    my_model = aiplatform.Model(model_resource_name)

    batch_prediction_job = my_model.batch_predict(
        job_display_name=job_display_name,
        gcs_source=gcs_source,
        gcs_destination_prefix=gcs_destination,
        sync=sync,
    )

    batch_prediction_job.wait()

    print(batch_prediction_job.display_name)
    print(batch_prediction_job.resource_name)
    print(batch_prediction_job.state)
    return batch_prediction_job

Retrieve batch prediction results

Vertex AI sends batch prediction output to the destination that you specified.

When a batch prediction task is complete, the output of the prediction is stored in the Cloud Storage bucket that you specified in your request.

Example batch prediction results

The following is an example of batch prediction results from a video object tracking model.

{
  "instance": {
    "content": "gs://bucket/video.mp4",
    "mimeType": "video/mp4",
    "timeSegmentStart": "1s",
    "timeSegmentEnd": "5s"
  },
  "prediction": [{
    "id": "1",
    "displayName": "cat",
    "timeSegmentStart": "1.2s",
    "timeSegmentEnd": "3.4s",
    "frames": [{
      "timeOffset": "1.2s",
      "xMin": 0.1,
      "xMax": 0.2,
      "yMin": 0.3,
      "yMax": 0.4
    }, {
      "timeOffset": "3.4s",
      "xMin": 0.2,
      "xMax": 0.3,
      "yMin": 0.4,
      "yMax": 0.5
    }],
    "confidence": 0.7
  }, {
    "id": "1",
    "displayName": "cat",
    "timeSegmentStart": "4.8s",
    "timeSegmentEnd": "4.8s",
    "frames": [{
      "timeOffset": "4.8s",
      "xMin": 0.2,
      "xMax": 0.3,
      "yMin": 0.4,
      "yMax": 0.5
    }],
    "confidence": 0.6
  }, {
    "id": "2",
    "displayName": "dog",
    "timeSegmentStart": "1.2s",
    "timeSegmentEnd": "3.4s",
    "frames": [{
      "timeOffset": "1.2s",
      "xMin": 0.1,
      "xMax": 0.2,
      "yMin": 0.3,
      "yMax": 0.4
    }, {
      "timeOffset": "3.4s",
      "xMin": 0.2,
      "xMax": 0.3,
      "yMin": 0.4,
      "yMax": 0.5
    }],
    "confidence": 0.5
  }]
}
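Once downloaded from Cloud Storage, results in this shape can be processed with ordinary JSON tooling. The following Python sketch, using field names from the example above and a truncated inline copy of a result object, keeps only the tracked objects whose confidence meets a threshold:

```python
import json

# A result object in the shape shown above, truncated to the relevant fields.
result_json = """
{
  "instance": {"content": "gs://bucket/video.mp4", "mimeType": "video/mp4"},
  "prediction": [
    {"id": "1", "displayName": "cat", "confidence": 0.7,
     "timeSegmentStart": "1.2s", "timeSegmentEnd": "3.4s"},
    {"id": "2", "displayName": "dog", "confidence": 0.5,
     "timeSegmentStart": "1.2s", "timeSegmentEnd": "3.4s"}
  ]
}
"""


def tracks_above(result, threshold):
    # Keep only tracked objects whose confidence meets the threshold,
    # mirroring the server-side THRESHOLD_VALUE filter.
    return [
        (p["displayName"], p["timeSegmentStart"], p["timeSegmentEnd"])
        for p in result["prediction"]
        if p["confidence"] >= threshold
    ]


result = json.loads(result_json)
print(tracks_above(result, 0.6))  # [('cat', '1.2s', '3.4s')]
```

In practice you would iterate over each line of the JSON Lines output files in the destination bucket and apply the same filter per result.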