Class ModelContainerSpec (0.7.1)

ModelContainerSpec(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Specification of a container for serving predictions. This message is a subset of the Kubernetes `Container v1 core specification <https://tinyurl.com/k8s-io-api/v1.18/#container-v1-core>`__.

Attributes

Name | Description
image_uri str
Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the container publishing requirements, including permissions requirements for the AI Platform Service Agent, in the container requirements documentation.
command Sequence[str]
Immutable. Specifies the command that runs when the container starts. This overrides the container's ``ENTRYPOINT`` instruction.
args Sequence[str]
Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's ``CMD`` instruction.
env Sequence[google.cloud.aiplatform_v1beta1.types.EnvVar]
Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the ``command`` and ``args`` fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable ``VAR_2`` to the value ``foo bar``:

.. code:: json

    [
      {
        "name": "VAR_1",
        "value": "foo"
      },
      {
        "name": "VAR_2",
        "value": "$(VAR_1) bar"
      }
    ]

If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the ``env`` field of the Kubernetes Containers v1 core API.
ports Sequence[google.cloud.aiplatform_v1beta1.types.Port]
Immutable. List of ports to expose from the container. AI Platform sends any prediction requests that it receives to the first port on this list. AI Platform also sends liveness and health checks to this port.
predict_route str
Immutable. HTTP path on the container to send prediction requests to. AI Platform forwards requests sent using ``projects.locations.endpoints.predict`` to this path on the container's IP address and port. AI Platform then returns the container's response in the API response. For example, if you set this field to ``/foo``, then when AI Platform receives a prediction request, it forwards the request body in a POST request to the ``/foo`` path on the port of your container specified by the first value of this ``ModelContainerSpec``'s ``ports`` field. If you don't specify this field, it defaults to the following value when you [deploy this Model to an Endpoint][google.cloud.aiplatform.v1beta1.EndpointService.DeployModel]: ``/v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict`` The placeholders in this value are replaced as follows: - ENDPOINT: The last segment (following ``endpoints/``) of the ``Endpoint.name`` field of the Endpoint where this Model has been deployed. (AI Platform makes this value available to your container code as the ``AIP_ENDPOINT_ID`` environment variable.)
health_route str
Immutable. HTTP path on the container to send health checks to. AI Platform intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks in the custom container documentation.
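Taken together, the attributes above can be sketched as a plain JSON-style mapping that mirrors the ``ModelContainerSpec`` fields. This is an illustrative shape only; the image URI, routes, and port values below are hypothetical placeholders, not defaults.

```python
# Hypothetical values mirroring the ModelContainerSpec field names
# documented above; a real spec would use your own image and routes.
container_spec = {
    "image_uri": "us-docker.pkg.dev/my-project/my-repo/my-server:latest",
    "command": ["python3", "server.py"],        # overrides ENTRYPOINT
    "args": ["--model-dir=/models"],            # overrides CMD
    "env": [{"name": "VAR_1", "value": "foo"}],
    "ports": [{"container_port": 8080}],        # first port receives predictions
    "predict_route": "/foo",
    "health_route": "/health",
}

print(container_spec["predict_route"])
```

The same keyword names can be passed to the ``ModelContainerSpec(...)`` constructor shown at the top of this page, since proto-plus messages accept field values as keyword arguments.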

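The order-sensitive ``$(VAR_NAME)`` expansion described for the ``env`` field can be simulated in plain Python. The ``expand_env`` helper below is a hypothetical sketch of the documented behavior, not part of the library API: a reference is expanded only when the referenced variable appears earlier in the list.

```python
import re

def expand_env(env):
    """Sketch of the documented env expansion: $(NAME) references are
    resolved using earlier entries only; unknown names are left as-is."""
    resolved = {}
    for entry in env:
        value = re.sub(
            r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
            lambda m: resolved.get(m.group(1), m.group(0)),
            entry["value"],
        )
        resolved[entry["name"]] = value
    return resolved

# VAR_1 precedes VAR_2, so the reference expands.
env = [
    {"name": "VAR_1", "value": "foo"},
    {"name": "VAR_2", "value": "$(VAR_1) bar"},
]
print(expand_env(env)["VAR_2"])  # foo bar
```

Reversing the two entries leaves ``VAR_2`` as the literal ``$(VAR_1) bar``, matching the note that swapping the order prevents the expansion.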
Inheritance

builtins.object > proto.message.Message > ModelContainerSpec