Use the upload_model method to upload a model that explains tabular data and is served from a managed container.
Code sample
Python
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
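Before running the sample below, you can optionally confirm that Application Default Credentials resolve on your machine. This is a minimal sketch (not part of the sample itself), assuming the google-auth library is installed; the printed project ID depends on your local configuration.
# Minimal sketch: confirm Application Default Credentials can be resolved locally.
# Raises google.auth.exceptions.DefaultCredentialsError if ADC is not configured.
import google.auth

credentials, adc_project = google.auth.default()
print("ADC resolved for project:", adc_project)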
from google.cloud import aiplatform_v1beta1
def upload_model_explain_tabular_managed_container_sample(
    project: str,
    display_name: str,
    container_spec_image_uri: str,
    artifact_uri: str,
    input_tensor_name: str,
    output_tensor_name: str,
    feature_names: list,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
    timeout: int = 300,
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform_v1beta1.ModelServiceClient(client_options=client_options)
    # Container specification for deploying the model
    container_spec = {"image_uri": container_spec_image_uri, "command": [], "args": []}
    # The explainability method and corresponding parameters
    parameters = aiplatform_v1beta1.ExplanationParameters(
        {"xrai_attribution": {"step_count": 1}}
    )
    # The input tensor for feature attribution to the output
    # For a single input model, y = f(x), this will be the serving input layer.
    input_metadata = aiplatform_v1beta1.ExplanationMetadata.InputMetadata(
        {
            "input_tensor_name": input_tensor_name,
            # Input is tabular data
            "modality": "numeric",
            # Assign feature names to the inputs for explanation
            "encoding": "BAG_OF_FEATURES",
            "index_feature_mapping": feature_names,
        }
    )
    # The output tensor to explain
    # For a single output model, y = f(x), this will be the serving output layer.
    output_metadata = aiplatform_v1beta1.ExplanationMetadata.OutputMetadata(
        {"output_tensor_name": output_tensor_name}
    )
    # Assemble the explanation metadata
    metadata = aiplatform_v1beta1.ExplanationMetadata(
        inputs={"features": input_metadata}, outputs={"prediction": output_metadata}
    )
    # Assemble the explanation specification
    explanation_spec = aiplatform_v1beta1.ExplanationSpec(
        parameters=parameters, metadata=metadata
    )
    model = aiplatform_v1beta1.Model(
        display_name=display_name,
        # The Cloud Storage location of the custom model
        artifact_uri=artifact_uri,
        explanation_spec=explanation_spec,
        container_spec=container_spec,
    )
    parent = f"projects/{project}/locations/{location}"
    response = client.upload_model(parent=parent, model=model)
    print("Long running operation:", response.operation.name)
    upload_model_response = response.result(timeout=timeout)
    print("upload_model_response:", upload_model_response)
What's next
To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.