# Deploy a model by using the Google Cloud console

In the Google Cloud console, you can create a
[public endpoint](/vertex-ai/docs/predictions/choose-endpoint-type)
and deploy a model to it.

Models can be deployed from the Online prediction page or the
Model Registry page.

Deploy a model from the Online prediction page
----------------------------------------------

In the Online prediction page, you can create an endpoint and deploy
one or more models to it as follows:

1. In the Google Cloud console, in the Vertex AI section, go
   to the **Online prediction** page.

   [Go to the Online prediction page](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints)

2. Click **Create**.

3. In the **New endpoint** pane:

   1. Enter the **Endpoint name**.
   2. Select **Standard** for the access type.
   3. To create a dedicated (not shared) public endpoint, select the
      **Enable dedicated DNS** checkbox.
   4. Click **Continue**.

4. In the **Model settings** pane:

   1. Select your model from the drop-down list.
   2. Choose the model version from the drop-down list.
   3. Enter the **Traffic split** percentage for the model.
   4. Click **Done**.
   5. Repeat these steps for any additional models to be deployed.

Deploy a model from the Model Registry page
-------------------------------------------

In the Model Registry page, you can deploy a model to one or more new or
existing endpoints as follows:

1. In the Google Cloud console, in the Vertex AI section, go
   to the **Models** page.

   [Go to the Models page](https://console.cloud.google.com/vertex-ai/models)

2. Click the name and version ID of the model you want to deploy to open
   its details page.

3. Select the **Deploy & Test** tab.

   If your model is already deployed to any endpoints, they are listed in
   the **Deploy your model** section.

4. Click **Deploy to endpoint**.

5. To deploy your model to a new endpoint:

   1. Select **Create new endpoint**.
   2. Provide a name for the new endpoint.
   3. To create a dedicated (not shared) public endpoint, select the
      **Enable dedicated DNS** checkbox.
   4. Click **Continue**.
   To deploy your model to an existing endpoint:

   1. Select **Add to existing endpoint**.
   2. Select the endpoint from the drop-down list.
   3. Click **Continue**.
   You can deploy multiple models to an endpoint, or you can deploy the
   same model to multiple endpoints.
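   For reference, the following is a minimal sketch of the equivalent
   endpoint setup with the Vertex AI SDK for Python. The project, region,
   and display names are placeholder values, and the
   `dedicated_endpoint_enabled` flag assumes a recent SDK version:

   ```python
   from google.cloud import aiplatform

   # Placeholder project and region; replace with your own values.
   aiplatform.init(project="my-project", location="us-central1")

   # Create a new dedicated (not shared) public endpoint.
   endpoint = aiplatform.Endpoint.create(
       display_name="my-endpoint",
       dedicated_endpoint_enabled=True,  # omit for a shared public endpoint
   )

   # Or look up an existing endpoint by display name instead.
   existing = aiplatform.Endpoint.list(filter='display_name="my-endpoint"')
   ```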
6. If you deploy your model to an existing endpoint that has one or more
   models deployed to it, you must update the **Traffic split** percentage
   for the model you are deploying and for the models that are already
   deployed, so that all of the percentages add up to 100%.
7. If you're deploying your model to a new endpoint, accept 100 for the
   **Traffic split**. Otherwise, adjust the traffic split values for all
   models on the endpoint so that they add up to 100.
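   The traffic split can also be set programmatically at deployment time.
   This sketch continues the earlier example and uses a hypothetical model
   resource name and deployed-model ID; the key `"0"` refers to the model
   being deployed by this call:

   ```python
   # Split traffic 70/30 between the new model ("0") and a model that is
   # already deployed. All of the values must add up to 100.
   model = aiplatform.Model(
       "projects/my-project/locations/us-central1/models/123"  # placeholder
   )
   model.deploy(
       endpoint=endpoint,
       traffic_split={"0": 70, "4567": 30},  # "4567" is a hypothetical deployed model ID
   )
   ```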
8. Enter the **Minimum number of compute nodes** that you want to provide
   for the model.

   This is the number of nodes that must be available to the model at all
   times.

   You are charged for the nodes used, whether to handle inference load or
   as standby (minimum) nodes, even without inference traffic. See the
   [pricing page](/vertex-ai/pricing).

   The number of compute nodes can increase if needed to handle inference
   traffic, but it never exceeds the maximum number of nodes.
9. To use autoscaling, enter the **Maximum number of compute nodes** that
   you want Vertex AI to scale up to.
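   In the SDK sketch, the minimum and maximum node counts correspond to the
   replica-count parameters of `deploy()`:

   ```python
   # Keep at least 1 node available at all times; autoscale up to 5 nodes
   # under inference load. The minimum nodes are billed even when there is
   # no inference traffic.
   model.deploy(
       endpoint=endpoint,
       min_replica_count=1,
       max_replica_count=5,
   )
   ```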
10. Select your **Machine type**.

    Larger machine resources increase your inference performance and
    increase costs.
    [Compare the available machine types](/vertex-ai/docs/predictions/configure-compute#machine_type_comparison).

11. Select an **Accelerator type** and an **Accelerator count**.

    If you enabled accelerator use when you
    [imported](/vertex-ai/docs/model-registry/import-model) or created the
    model, this option is displayed.

    For the accelerator count, refer to the
    [GPU table](/vertex-ai/docs/predictions/configure-compute#gpus) to check
    the valid numbers of GPUs that you can use with each CPU machine type.
    The accelerator count refers to the number of accelerators per node, not
    to the total number of accelerators in your deployment.
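    In the SDK sketch, the machine type and accelerators are also `deploy()`
    parameters. The values below are placeholders; check the GPU table for
    valid machine type and GPU combinations:

    ```python
    # One NVIDIA T4 GPU per node on an n1-standard-8 host machine. The
    # accelerator count is per node, not per deployment.
    model.deploy(
        endpoint=endpoint,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=1,
        min_replica_count=1,
        max_replica_count=5,
    )
    ```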
12. If you want to use a
    [custom service account](/vertex-ai/docs/general/custom-service-account)
    for the deployment, select a service account in the **Service account**
    drop-down box.
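    In the SDK sketch, this corresponds to the `service_account` parameter
    of `deploy()`; the account shown is a placeholder:

    ```python
    # Run the deployed model's container as a custom service account.
    model.deploy(
        endpoint=endpoint,
        service_account="my-sa@my-project.iam.gserviceaccount.com",
    )
    ```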
[[["Fácil de comprender","easyToUnderstand","thumb-up"],["Resolvió mi problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Información o código de muestra incorrectos","incorrectInformationOrSampleCode","thumb-down"],["Faltan la información o los ejemplos que necesito","missingTheInformationSamplesINeed","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Otro","otherDown","thumb-down"]],["Última actualización: 2025-09-04 (UTC)"],[],[],null,["# Deploy a model by using the Google Cloud console\n\nIn the Google Cloud console, you can create a\n[public endpoint](/vertex-ai/docs/predictions/choose-endpoint-type)\nand deploy a model to it.\n\nModels can be deployed from the\nOnline prediction page or the Model Registry\npage.\n\nDeploy a model from the Online prediction page\n----------------------------------------------\n\nIn the Online prediction page, you can create an endpoint and deploy\none or more models to it as follows:\n\n1. In the Google Cloud console, in the Vertex AI section, go\n to the **Online prediction** page.\n\n [Go to the Online prediction page](https://console.cloud.google.com/vertex-ai/online-prediction/endpoints)\n2. Click add **Create**.\n\n3. In the **New endpoint** pane:\n\n 1. Enter the **Endpoint name**.\n\n 2. Select **Standard** for the access type.\n\n 3. To create a dedicated (not shared) public endpoint, select the\n **Enable dedicated DNS** checkbox.\n\n 4. Click **Continue**.\n\n4. In the **Model settings** pane:\n\n 1. Select your model from the drop-down list.\n\n 2. Choose the model version from the drop-down list.\n\n 3. Enter the **Traffic split** percentage for the model.\n\n 4. Click **Done**.\n\n 5. Repeat these steps for any additional models to be deployed.\n\nDeploy a model from the Model Registry page\n-------------------------------------------\n\nIn the Model Registry page, you can deploy a model to one\nor more new or existing endpoints as follows:\n\n1. In the Google Cloud console, in the Vertex AI section, go\n to the **Models** page.\n\n [Go to the Models page](https://console.cloud.google.com/vertex-ai/models)\n2. Click the name and version ID of the model you want to deploy to open\n its details page.\n\n3. Select the **Deploy \\& Test** tab.\n\n If your model is already deployed to any endpoints, they are listed in the\n **Deploy your model** section.\n4. Click **Deploy to endpoint**.\n\n5. To deploy your model to a new endpoint:\n\n 1. Select radio_button_checked**Create new endpoint**\n 2. Provide a name for the new endpoint.\n 3. To create a dedicated (not shared) public endpoint, select the **Enable dedicated DNS** checkbox.\n 4. Click **Continue**.\n\n To deploy your model to an existing endpoint:\n 1. Select radio_button_checked**Add to existing endpoint**.\n 2. Select the endpoint from the drop-down list.\n 3. Click **Continue**.\n\n You can deploy multiple models to an endpoint, or you can deploy the\n same model to multiple endpoints.\n6. If you deploy your model to an existing endpoint that has one or more\n models deployed to it, you must update the **Traffic split** percentage\n for the model you are deploying and the already deployed models so that all\n of the percentages add up to 100%.\n\n7.\n If you're deploying your model to a new endpoint, accept 100 for the\n **Traffic split**. Otherwise, adjust the traffic split values for\n all models on the endpoint so they add up to 100.\n\n8. 
Enter the **Minimum number of compute nodes** you want to provide for\n your model.\n\n This is the number of nodes that need to be available to the model at all times.\n\n You are charged for the nodes used, whether to handle inference load or for\n standby (minimum) nodes, even without inference traffic. See the\n [pricing page](/vertex-ai/pricing).\n\n The number of compute nodes can increase if needed to handle inference\n traffic, but it will never go higher than the maximum number of nodes.\n9. To use autoscaling, enter the **Maximum number of compute nodes** you\n want Vertex AI to scale up to.\n\n10. Select your **Machine type**.\n\n Larger machine resources increase your inference performance and\n increase costs.\n [Compare the available machine types](/vertex-ai/docs/predictions/configure-compute#machine_type_comparison).\n11. Select an **Accelerator type** and an **Accelerator count**.\n\n If you enabled accelerator use when you [imported](/vertex-ai/docs/model-registry/import-model)\n or created the model, this option displays.\n\n For the accelerator count, refer to the [GPU\n table](/vertex-ai/docs/predictions/configure-compute#gpus) to check for valid numbers\n of GPUs that you can use with each CPU machine type. The accelerator\n count refers to the number of accelerators per node, not the total\n number of accelerators in your deployment.\n12. If you want to use a [custom service\n account](/vertex-ai/docs/general/custom-service-account) for the deployment, select\n a service account in the **Service account** drop-down box.\n\n13.\n Learn how to [change the\n default settings for inference logging](/vertex-ai/docs/predictions/online-prediction-logging#enabling-and-disabling).\n\n14.\n Click **Done** for your model, and when all the **Traffic split**\n percentages are correct, click **Continue**.\n\n The region where your model deploys is displayed. This\n must be the region where you created your model.\n\n \u003cbr /\u003e\n\n15.\n Click **Deploy** to deploy your model to the endpoint.\n\nWhat's next\n-----------\n\n- Learn how to [get an online inference](/vertex-ai/docs/predictions/get-online-predictions).\n- Learn how to [change the\n default settings for inference logging](/vertex-ai/docs/predictions/online-prediction-logging#enabling-and-disabling)."]]
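To tie the steps together, the following is a minimal sketch of requesting
an online inference from the endpoint used in the sketches above; the
instance format is a placeholder and depends on your model's input schema:

```python
# Send one instance for online inference and print the results.
prediction = endpoint.predict(instances=[{"feature_a": 1.0, "feature_b": "x"}])
print(prediction.predictions)
```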