# Batch prediction components

To learn more, run the "Learn how to use prebuilt Pipeline Components to train a custom model" notebook in one of the following environments: [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb) | [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fpipelines%2Fcustom_model_training_and_batch_prediction.ipynb) | [Open in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fpipelines%2Fcustom_model_training_and_batch_prediction.ipynb) | [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb)
The `BatchPredictionJob` resource lets you run an asynchronous prediction request. Request batch predictions directly from the `model` resource; you don't need to deploy the model to an `endpoint`. For data types that support both batch and online predictions, you can use batch predictions. This is useful when you don't need an immediate response and want to process accumulated data with a single request.
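As a minimal sketch (assuming the Vertex AI SDK for Python, and with placeholder project, model, and Cloud Storage values), a batch prediction job can be requested from an existing model like this:

```python
from google.cloud import aiplatform

# Placeholder project and region.
aiplatform.init(project="my-project", location="us-central1")

# Reference an existing model resource; no endpoint deployment is needed.
model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# Start an asynchronous batch prediction job that reads JSON Lines instances
# from Cloud Storage and writes the results back to a Cloud Storage prefix.
batch_job = model.batch_predict(
    job_display_name="example-batch-prediction",
    gcs_source="gs://my-bucket/input/instances.jsonl",
    gcs_destination_prefix="gs://my-bucket/output/",
    instances_format="jsonl",
    predictions_format="jsonl",
    sync=False,  # return immediately; the job runs in the background
)
```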
To make a batch prediction, specify an input source and an output location where Vertex AI stores the prediction results. The inputs and outputs depend on the type of `model` that you're working with. For example, batch predictions for the AutoML image model type require an input [JSON Lines](https://jsonlines.org/) file and the name of a Cloud Storage bucket to store the output.

For more information about batch prediction, see [Get batch predictions](/vertex-ai/docs/predictions/batch-predictions).
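For illustration, an input JSON Lines file for an AutoML image model might look like the following sketch; the bucket and object names are hypothetical, and the exact fields accepted depend on your model type (see the batch predictions guide linked above):

```json
{"content": "gs://my-bucket/images/image_001.jpg", "mimeType": "image/jpeg"}
{"content": "gs://my-bucket/images/image_002.png", "mimeType": "image/png"}
```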
You can use the [`ModelBatchPredictOp`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/batch_predict_job.html#v1.batch_predict_job.ModelBatchPredictOp) component to access this resource through Vertex AI Pipelines.
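A rough sketch of wiring the component into a Kubeflow Pipelines definition is shown below; the project, model resource name, and Cloud Storage paths are placeholder assumptions, and the full parameter list is in the component reference.

```python
from kfp import dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.batch_predict_job import ModelBatchPredictOp

# Placeholder values; replace with your own project, model, and bucket.
PROJECT = "my-project"
LOCATION = "us-central1"
MODEL_NAME = "projects/my-project/locations/us-central1/models/1234567890"


@dsl.pipeline(name="batch-prediction-example")
def batch_prediction_pipeline():
    # Import an existing Vertex AI Model resource as a pipeline artifact.
    model_importer = dsl.importer(
        artifact_uri=f"https://{LOCATION}-aiplatform.googleapis.com/v1/{MODEL_NAME}",
        artifact_class=artifact_types.VertexModel,
        metadata={"resourceName": MODEL_NAME},
    )

    # Launch the asynchronous batch prediction job from the imported model.
    ModelBatchPredictOp(
        project=PROJECT,
        location=LOCATION,
        job_display_name="pipeline-batch-prediction",
        model=model_importer.output,
        gcs_source_uris=["gs://my-bucket/input/instances.jsonl"],
        gcs_destination_output_uri_prefix="gs://my-bucket/output/",
        instances_format="jsonl",
        predictions_format="jsonl",
    )
```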
[[["Fácil de comprender","easyToUnderstand","thumb-up"],["Resolvió mi problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Información o código de muestra incorrectos","incorrectInformationOrSampleCode","thumb-down"],["Faltan la información o los ejemplos que necesito","missingTheInformationSamplesINeed","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Otro","otherDown","thumb-down"]],["Última actualización: 2025-09-04 (UTC)"],[],[],null,["# Batch prediction components\n\n| To learn more,\n| run the \"Learn how to use prebuilt Pipeline Components to train a custom model\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fpipelines%2Fcustom_model_training_and_batch_prediction.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fpipelines%2Fcustom_model_training_and_batch_prediction.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb)\n\nThe `BatchPredictionJob` resource lets you run an asynchronous\nprediction request. Request batch predictions directly from the `model`\nresource. You don't need to deploy the model to an `endpoint`. For data types\nthat support both batch and online predictions you can use batch predictions.\nThis is useful when you don't require an immediate response and want to process\naccumulated data by using a single request.\n\nTo make a batch prediction, specify an input source and an output location\nfor Vertex AI to store predictions results. The inputs and outputs\ndepend on the `model` type that you're working with. 
For example, batch\npredictions for the AutoML image model type require an input\n[JSON Lines](https://jsonlines.org/)\nfile and the name of a Cloud Storage bucket to store the output.\nFor more information about batch prediction, see\n[Get batch predictions](/vertex-ai/docs/predictions/batch-predictions).\n\nYou can use the [`ModelBatchPredictOp`](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/batch_predict_job.html#v1.batch_predict_job.ModelBatchPredictOp) component to access this resource through Vertex AI Pipelines.\n\nAPI reference\n-------------\n\n- For component reference, see the [Google Cloud SDK reference for Batch prediction components](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/api/v1/batch_predict_job.html).\n- For Vertex AI API reference, see the [`BatchPredictionJob` resource](/vertex-ai/docs/reference/rest/v1/projects.locations.batchPredictionJobs) page.\n\nTutorials\n---------\n\n- [Custom training with prebuilt Google Cloud Pipeline Components](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb)\n\n### Version history and release notes\n\nTo learn more about the version history and changes to the Google Cloud Pipeline Components SDK, see the [Google Cloud Pipeline Components SDK Release Notes](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-2.19.0/release.html).\n\n### Technical support contacts\n\nIf you have any questions, reach out to\n[kubeflow-pipelines-components@google.com](mailto: kubeflow-pipelines-components@google.com)."]]