Create a training pipeline for video object tracking

Create a training pipeline for video object tracking by using the create_training_pipeline method.

Explore further

For detailed documentation that includes this code sample, see the following:

Code sample

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.aiplatform.util.ValueConverter;
import com.google.cloud.aiplatform.v1.FilterSplit;
import com.google.cloud.aiplatform.v1.FractionSplit;
import com.google.cloud.aiplatform.v1.InputDataConfig;
import com.google.cloud.aiplatform.v1.LocationName;
import com.google.cloud.aiplatform.v1.Model;
import com.google.cloud.aiplatform.v1.PipelineServiceClient;
import com.google.cloud.aiplatform.v1.PipelineServiceSettings;
import com.google.cloud.aiplatform.v1.PredefinedSplit;
import com.google.cloud.aiplatform.v1.TimestampSplit;
import com.google.cloud.aiplatform.v1.TrainingPipeline;
import com.google.cloud.aiplatform.v1.schema.trainingjob.definition.AutoMlVideoObjectTrackingInputs;
import com.google.cloud.aiplatform.v1.schema.trainingjob.definition.AutoMlVideoObjectTrackingInputs.ModelType;
import com.google.rpc.Status;
import java.io.IOException;

public class CreateTrainingPipelineVideoObjectTrackingSample {

  public static void main(String[] args) throws IOException {
    String trainingPipelineVideoObjectTracking =
        "YOUR_TRAINING_PIPELINE_VIDEO_OBJECT_TRACKING_DISPLAY_NAME";
    String datasetId = "YOUR_DATASET_ID";
    String modelDisplayName = "YOUR_MODEL_DISPLAY_NAME";
    String project = "YOUR_PROJECT_ID";
    createTrainingPipelineVideoObjectTracking(
        trainingPipelineVideoObjectTracking, datasetId, modelDisplayName, project);
  }

  static void createTrainingPipelineVideoObjectTracking(
      String trainingPipelineVideoObjectTracking,
      String datasetId,
      String modelDisplayName,
      String project)
      throws IOException {
    PipelineServiceSettings pipelineServiceSettings =
        PipelineServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the close method on the client to safely clean up any remaining background resources.
    try (PipelineServiceClient pipelineServiceClient =
        PipelineServiceClient.create(pipelineServiceSettings)) {
      String location = "us-central1";
      String trainingTaskDefinition =
          "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
              + "automl_video_object_tracking_1.0.0.yaml";
      LocationName locationName = LocationName.of(project, location);

      AutoMlVideoObjectTrackingInputs trainingTaskInputs =
          AutoMlVideoObjectTrackingInputs.newBuilder().setModelType(ModelType.CLOUD).build();

      InputDataConfig inputDataConfig =
          InputDataConfig.newBuilder().setDatasetId(datasetId).build();
      Model modelToUpload = Model.newBuilder().setDisplayName(modelDisplayName).build();
      TrainingPipeline trainingPipeline =
          TrainingPipeline.newBuilder()
              .setDisplayName(trainingPipelineVideoObjectTracking)
              .setTrainingTaskDefinition(trainingTaskDefinition)
              .setTrainingTaskInputs(ValueConverter.toValue(trainingTaskInputs))
              .setInputDataConfig(inputDataConfig)
              .setModelToUpload(modelToUpload)
              .build();

      TrainingPipeline createTrainingPipelineResponse =
          pipelineServiceClient.createTrainingPipeline(locationName, trainingPipeline);

      System.out.println("Create Training Pipeline Video Object Tracking Response");
      System.out.format("Name: %s\n", createTrainingPipelineResponse.getName());
      System.out.format("Display Name: %s\n", createTrainingPipelineResponse.getDisplayName());

      System.out.format(
          "Training Task Definition %s\n",
          createTrainingPipelineResponse.getTrainingTaskDefinition());
      System.out.format(
          "Training Task Inputs: %s\n",
          createTrainingPipelineResponse.getTrainingTaskInputs().toString());
      System.out.format(
          "Training Task Metadata: %s\n",
          createTrainingPipelineResponse.getTrainingTaskMetadata().toString());

      System.out.format("State: %s\n", createTrainingPipelineResponse.getState().toString());
      System.out.format(
          "Create Time: %s\n", createTrainingPipelineResponse.getCreateTime().toString());
      System.out.format("StartTime %s\n", createTrainingPipelineResponse.getStartTime().toString());
      System.out.format("End Time: %s\n", createTrainingPipelineResponse.getEndTime().toString());
      System.out.format(
          "Update Time: %s\n", createTrainingPipelineResponse.getUpdateTime().toString());
      System.out.format("Labels: %s\n", createTrainingPipelineResponse.getLabelsMap().toString());

      InputDataConfig inputDataConfigResponse = createTrainingPipelineResponse.getInputDataConfig();
      System.out.println("Input Data config");
      System.out.format("Dataset Id: %s\n", inputDataConfigResponse.getDatasetId());
      System.out.format("Annotations Filter: %s\n", inputDataConfigResponse.getAnnotationsFilter());

      FractionSplit fractionSplit = inputDataConfigResponse.getFractionSplit();
      System.out.println("Fraction split");
      System.out.format("Training Fraction: %s\n", fractionSplit.getTrainingFraction());
      System.out.format("Validation Fraction: %s\n", fractionSplit.getValidationFraction());
      System.out.format("Test Fraction: %s\n", fractionSplit.getTestFraction());

      FilterSplit filterSplit = inputDataConfigResponse.getFilterSplit();
      System.out.println("Filter Split");
      System.out.format("Training Filter: %s\n", filterSplit.getTrainingFilter());
      System.out.format("Validation Filter: %s\n", filterSplit.getValidationFilter());
      System.out.format("Test Filter: %s\n", filterSplit.getTestFilter());

      PredefinedSplit predefinedSplit = inputDataConfigResponse.getPredefinedSplit();
      System.out.println("Predefined Split");
      System.out.format("Key: %s\n", predefinedSplit.getKey());

      TimestampSplit timestampSplit = inputDataConfigResponse.getTimestampSplit();
      System.out.println("Timestamp Split");
      System.out.format("Training Fraction: %s\n", timestampSplit.getTrainingFraction());
      System.out.format("Validation Fraction: %s\n", timestampSplit.getValidationFraction());
      System.out.format("Test Fraction: %s\n", timestampSplit.getTestFraction());
      System.out.format("Key: %s\n", timestampSplit.getKey());

      Model modelResponse = createTrainingPipelineResponse.getModelToUpload();
      System.out.println("Model To Upload");
      System.out.format("Name: %s\n", modelResponse.getName());
      System.out.format("Display Name: %s\n", modelResponse.getDisplayName());
      System.out.format("Description: %s\n", modelResponse.getDescription());
      System.out.format("Metadata Schema Uri: %s\n", modelResponse.getMetadataSchemaUri());
      System.out.format("Metadata: %s\n", modelResponse.getMetadata());

      System.out.format("Training Pipeline: %s\n", modelResponse.getTrainingPipeline());
      System.out.format("Artifact Uri: %s\n", modelResponse.getArtifactUri());

      System.out.format(
          "Supported Deployment Resources Types: %s\n",
          modelResponse.getSupportedDeploymentResourcesTypesList().toString());
      System.out.format(
          "Supported Input Storage Formats: %s\n",
          modelResponse.getSupportedInputStorageFormatsList().toString());
      System.out.format(
          "Supported Output Storage Formats: %s\n",
          modelResponse.getSupportedOutputStorageFormatsList().toString());

      System.out.format("Create Time: %s\n", modelResponse.getCreateTime());
      System.out.format("Update Time: %s\n", modelResponse.getUpdateTime());
      System.out.format("Labels: %s\n", modelResponse.getLabelsMap());

      Status status = createTrainingPipelineResponse.getError();
      System.out.println("Error");
      System.out.format("Code: %s\n", status.getCode());
      System.out.format("Message: %s\n", status.getMessage());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const datasetId = 'YOUR_DATASET_ID';
// const modelDisplayName = 'YOUR_MODEL_DISPLAY_NAME';
// const trainingPipelineDisplayName = 'YOUR_TRAINING_PIPELINE_DISPLAY_NAME';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';
const aiplatform = require('@google-cloud/aiplatform');
const {definition} =
  aiplatform.protos.google.cloud.aiplatform.v1.schema.trainingjob;
const ModelType = definition.AutoMlVideoObjectTrackingInputs.ModelType;

// Imports the Google Cloud Pipeline Service Client library
const {PipelineServiceClient} = aiplatform.v1;

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const pipelineServiceClient = new PipelineServiceClient(clientOptions);

async function createTrainingPipelineVideoObjectTracking() {
  // Configure the parent resource
  const parent = `projects/${project}/locations/${location}`;

  const trainingTaskInputsObj =
    new definition.AutoMlVideoObjectTrackingInputs({
      modelType: ModelType.CLOUD,
    });
  const trainingTaskInputs = trainingTaskInputsObj.toValue();

  const modelToUpload = {displayName: modelDisplayName};
  const inputDataConfig = {datasetId: datasetId};
  const trainingPipeline = {
    displayName: trainingPipelineDisplayName,
    trainingTaskDefinition:
      'gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_object_tracking_1.0.0.yaml',
    trainingTaskInputs,
    inputDataConfig,
    modelToUpload,
  };
  const request = {
    parent,
    trainingPipeline,
  };

  // Create training pipeline request
  const [response] =
    await pipelineServiceClient.createTrainingPipeline(request);

  console.log('Create training pipeline video object tracking response');
  console.log(`Name : ${response.name}`);
  console.log('Raw response:');
  console.log(JSON.stringify(response, null, 2));
}
createTrainingPipelineVideoObjectTracking();

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from google.cloud import aiplatform
from google.cloud.aiplatform.gapic.schema import trainingjob


def create_training_pipeline_video_object_tracking_sample(
    project: str,
    display_name: str,
    dataset_id: str,
    model_display_name: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.PipelineServiceClient(client_options=client_options)
    training_task_inputs = trainingjob.definition.AutoMlVideoObjectTrackingInputs(
        model_type="CLOUD",
    ).to_value()

    training_pipeline = {
        "display_name": display_name,
        "training_task_definition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_object_tracking_1.0.0.yaml",
        "training_task_inputs": training_task_inputs,
        "input_data_config": {"dataset_id": dataset_id},
        "model_to_upload": {"display_name": model_display_name},
    }
    parent = f"projects/{project}/locations/{location}"
    response = client.create_training_pipeline(
        parent=parent, training_pipeline=training_pipeline
    )
    print("response:", response)
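The `training_pipeline` request body in the Python sample is plain data, so you can build and inspect it before sending any request. The sketch below (a hypothetical helper, not part of the sample; it assumes the same field names and schema URI shown above) separates spec construction from the API call, which makes the request easy to unit-test:

```python
# Build the same training pipeline spec as the sample, without calling the API.
# Field names and the training_task_definition URI mirror the Python sample above.


def build_training_pipeline_spec(
    project: str,
    location: str,
    display_name: str,
    dataset_id: str,
    model_display_name: str,
) -> tuple[str, dict]:
    """Return the (parent, training_pipeline) pair used by create_training_pipeline."""
    parent = f"projects/{project}/locations/{location}"
    training_pipeline = {
        "display_name": display_name,
        "training_task_definition": (
            "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
            "automl_video_object_tracking_1.0.0.yaml"
        ),
        "input_data_config": {"dataset_id": dataset_id},
        "model_to_upload": {"display_name": model_display_name},
    }
    return parent, training_pipeline


parent, spec = build_training_pipeline_spec(
    "my-project", "us-central1", "my-pipeline", "1234", "my-model"
)
print(parent)  # projects/my-project/locations/us-central1
```

The returned values plug directly into `client.create_training_pipeline(parent=parent, training_pipeline=spec)`; the `training_task_inputs` field would still need to be added as in the sample.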

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.