Vertex AI SDK
The vertexai module.
vertexai.init(*, project: Optional[str] = None, location: Optional[str] = None, experiment: Optional[str] = None, experiment_description: Optional[str] = None, experiment_tensorboard: Optional[Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None, staging_bucket: Optional[str] = None, credentials: Optional[google.auth.credentials.Credentials] = None, encryption_spec_key_name: Optional[str] = None, network: Optional[str] = None, service_account: Optional[str] = None)
Updates common initialization parameters with provided options.
Parameters
project (str) – The default project to use when making API calls.
location (str) – The default location to use when making API calls. If not set, defaults to us-central1.
experiment (str) – Optional. The experiment name.
experiment_description (str) – Optional. The description of the experiment.
experiment_tensorboard (Union[str, tensorboard_resource.Tensorboard]) – Optional. The Vertex AI TensorBoard instance, Tensorboard resource name, or Tensorboard resource ID to use as a backing Tensorboard for the provided experiment.
Example tensorboard resource name format: “projects/123/locations/us-central1/tensorboards/456”
If experiment_tensorboard is provided and experiment is not, the provided experiment_tensorboard will be set as the global Tensorboard. Any subsequent calls to aiplatform.init() with experiment and without experiment_tensorboard will automatically assign the global Tensorboard to the experiment.
staging_bucket (str) – The default staging bucket to use to stage artifacts when making API calls. In the form gs://…
credentials (google.auth.credentials.Credentials) – The default custom credentials to use when making API calls. If not provided credentials will be ascertained from the environment.
encryption_spec_key_name (Optional[str]) – Optional. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. If set, this resource and all sub-resources will be secured by this key.
network (str) – Optional. The full name of the Compute Engine network to which jobs and resources should be peered. E.g. “projects/12345/global/networks/myVPC”. Private services access must already be configured for the network. If specified, all eligible jobs and resources created will be peered with this VPC.
service_account (str) – Optional. The service account used to launch jobs and deploy models. Jobs that use service_account: BatchPredictionJob, CustomJob, PipelineJob, HyperparameterTuningJob, CustomTrainingJob, CustomPythonPackageTrainingJob, CustomContainerTrainingJob, ModelEvaluationJob.
Raises
ValueError – If experiment_description is provided but experiment is not.
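Example (the project ID and staging bucket below are placeholder values):

import vertexai

vertexai.init(
    project="my-project",             # placeholder project ID
    location="us-central1",
    staging_bucket="gs://my-bucket",  # placeholder staging bucket
)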
Classes for working with language models.
class vertexai.language_models.ChatMessage(content: str, author: str)
Bases: object
A chat message.
content()
Content of the message.
Type
str
author()
Author of the message.
Type
str
class vertexai.language_models.ChatModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai.language_models._language_models._ChatModelBase
ChatModel represents a language model that is capable of chat.
Examples:
chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat(
    context="My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.",
    examples=[
        InputOutputTextPair(
            input_text="Who do you work for?",
            output_text="I work for Ned.",
        ),
        InputOutputTextPair(
            input_text="What do I like?",
            output_text="Ned likes watching movies.",
        ),
    ],
    temperature=0.3,
)
chat.send_message("Do you know any cool events this weekend?")
Creates a LanguageModel.
This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Vertex LLM. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
start_chat(*, context: Optional[str] = None, examples: Optional[List[vertexai.language_models.InputOutputTextPair]] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)
Starts a chat session with the model.
Parameters
context – Context shapes how the model responds throughout the conversation. For example, you can use context to specify words the model can or cannot use, topics to focus on or avoid, or the response format or style.
examples – List of structured messages to the model to learn how to respond to the conversation. A list of InputOutputTextPair objects.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95.
message_history – A list of previously sent and received messages.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A ChatSession object.
class vertexai.language_models.ChatSession(model: vertexai.language_models.ChatModel, context: Optional[str] = None, examples: Optional[List[vertexai.language_models.InputOutputTextPair]] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)
Bases: vertexai.language_models._language_models._ChatSessionBase
ChatSession represents a chat session with a language model.
Within a chat session, the model keeps context and remembers the previous conversation.
property message_history: List[vertexai.language_models.ChatMessage]
List of previous messages.
send_message(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the language model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
async send_message_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Asynchronously sends message to the language model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
send_message_streaming(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the language model and gets a streamed response.
The response is only added to the history once it’s fully read.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling ChatModel.start_chat by default.
Yields
A stream of TextGenerationResponse objects that contain partial responses produced by the model.
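Example of streaming a chat response (illustrative prompt; model name as in the ChatModel example above):

chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat()
for chunk in chat.send_message_streaming("Tell me a short story about a robot."):
    print(chunk.text, end="")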
class vertexai.language_models.CodeChatModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai.language_models._language_models._ChatModelBase
CodeChatModel represents a model that is capable of completing code.
Examples
code_chat_model = CodeChatModel.from_pretrained("codechat-bison@001")
code_chat = code_chat_model.start_chat(
    context="I'm writing a large-scale enterprise application.",
    max_output_tokens=128,
    temperature=0.2,
)
code_chat.send_message("Please help write a function to calculate the min of two numbers")
Creates a LanguageModel.
This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Vertex LLM. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
start_chat(*, context: Optional[str] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)
Starts a chat session with the code chat model.
Parameters
context – Context shapes how the model responds throughout the conversation. For example, you can use context to specify words the model can or cannot use, topics to focus on or avoid, or the response format or style.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].
temperature – Controls the randomness of predictions. Range: [0, 1].
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A ChatSession object.
class vertexai.language_models.CodeChatSession(model: vertexai.language_models.CodeChatModel, context: Optional[str] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)
Bases: vertexai.language_models._language_models._ChatSessionBase
CodeChatSession represents a chat session with code chat language model.
Within a code chat session, the model keeps context and remembers the previous conversation.
property message_history: List[vertexai.language_models.ChatMessage]
List of previous messages.
send_message(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the code chat model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
async send_message_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None)
Asynchronously sends message to the code chat model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.
Returns
A TextGenerationResponse object that contains the text produced by the model.
send_message_streaming(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the language model and gets a streamed response.
The response is only added to the history once it’s fully read.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling CodeChatModel.start_chat by default.
Yields
A stream of TextGenerationResponse objects that contain partial responses produced by the model.
class vertexai.language_models.CodeGenerationModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai.language_models._language_models._LanguageModel
A language model that generates code.
Examples
Getting answers:

generation_model = CodeGenerationModel.from_pretrained("code-bison@001")
print(generation_model.predict(
    prefix="Write a function that checks if a year is a leap year.",
))

completion_model = CodeGenerationModel.from_pretrained("code-gecko@001")
print(completion_model.predict(
    prefix="def reverse_string(s):",
))
Creates a LanguageModel.
This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Vertex LLM. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
predict(prefix: str, suffix: Optional[str] = None, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Gets model response for a single prompt.
Parameters
prefix – Code before the current point.
suffix – Code after the current point.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].
temperature – Controls the randomness of predictions. Range: [0, 1].
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
async predict_async(prefix: str, suffix: Optional[str] = None, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Asynchronously gets model response for a single prompt.
Parameters
prefix – Code before the current point.
suffix – Code after the current point.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].
temperature – Controls the randomness of predictions. Range: [0, 1].
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
predict_streaming(prefix: str, suffix: Optional[str] = None, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Predicts the code based on previous code.
The result is a stream (generator) of partial responses.
Parameters
prefix – Code before the current point.
suffix – Code after the current point.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000].
temperature – Controls the randomness of predictions. Range: [0, 1].
stop_sequences – Customized stop sequences to stop the decoding process.
Yields
A stream of TextGenerationResponse objects that contain partial responses produced by the model.
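Example of streaming code completion (illustrative prefix; model name as in the examples above):

completion_model = CodeGenerationModel.from_pretrained("code-gecko@001")
for response in completion_model.predict_streaming(
    prefix="def fibonacci(n):",
    max_output_tokens=128,
):
    print(response.text, end="")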
class vertexai.language_models.InputOutputTextPair(input_text: str, output_text: str)
Bases: object
InputOutputTextPair represents a pair of input and output texts.
class vertexai.language_models.TextEmbedding(values: List[float], statistics: Optional[vertexai.language_models.TextEmbeddingStatistics] = None, _prediction_response: Optional[google.cloud.aiplatform.models.Prediction] = None)
Bases: object
Text embedding vector and statistics.
class vertexai.language_models.TextEmbeddingInput(text: str, task_type: Optional[str] = None, title: Optional[str] = None)
Bases: object
Structural text embedding input.
text()
The main text content to embed.
Type
str
task_type()
The name of the downstream task the embeddings will be used for. Valid values:
RETRIEVAL_QUERY – Specifies the given text is a query in a search/retrieval setting.
RETRIEVAL_DOCUMENT – Specifies the given text is a document from the corpus being searched.
SEMANTIC_SIMILARITY – Specifies the given text will be used for semantic textual similarity (STS).
CLASSIFICATION – Specifies that the given text will be classified.
CLUSTERING – Specifies that the embeddings will be used for clustering.
Type
Optional[str]
title()
Optional identifier of the text content.
Type
Optional[str]
class vertexai.language_models.TextEmbeddingModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai.language_models._language_models._LanguageModel
TextEmbeddingModel class calculates embeddings for the given texts.
Examples:
# Getting embedding:
model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
embeddings = model.get_embeddings(["What is life?"])
for embedding in embeddings:
    vector = embedding.values
    print(len(vector))
Creates a LanguageModel.
This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Vertex LLM. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
get_embeddings(texts: List[Union[str, vertexai.language_models.TextEmbeddingInput]], *, auto_truncate: bool = True)
Calculates embeddings for the given texts.
Parameters
texts – A list of texts or TextEmbeddingInput objects to get embeddings for.
auto_truncate – Whether to automatically truncate long texts. Default: True.
Returns
A list of TextEmbedding objects.
async get_embeddings_async(texts: List[Union[str, vertexai.language_models.TextEmbeddingInput]], *, auto_truncate: bool = True)
Asynchronously calculates embeddings for the given texts.
Parameters
texts – A list of texts or TextEmbeddingInput objects to get embeddings for.
auto_truncate – Whether to automatically truncate long texts. Default: True.
Returns
A list of TextEmbedding objects.
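Example of embedding with a task type (illustrative text; model name as in the example above):

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
inputs = [
    TextEmbeddingInput(
        text="What is life?",
        task_type="RETRIEVAL_QUERY",
    ),
]
embeddings = model.get_embeddings(inputs)
print(len(embeddings[0].values))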
class vertexai.language_models.TextGenerationModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai.language_models._language_models._TextGenerationModel, vertexai.language_models._language_models._TunableTextModelMixin, vertexai.language_models._language_models._ModelWithBatchPredict
Creates a LanguageModel.
This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Vertex LLM. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
batch_predict(*, dataset: Union[str, List[str]], destination_uri_prefix: str, model_parameters: Optional[Dict] = None)
Starts a batch prediction job with the model.
Parameters
dataset – The location of the dataset. gs:// and bq:// URIs are supported.
destination_uri_prefix – The URI prefix for the prediction. gs:// and bq:// URIs are supported.
model_parameters – Model-specific parameters to send to the model.
Returns
A BatchPredictionJob object
Raises
ValueError – When source or destination URI is not supported.
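Example of starting a batch prediction job (the GCS URIs below are placeholder values):

model = TextGenerationModel.from_pretrained("text-bison@001")
job = model.batch_predict(
    dataset="gs://my-bucket/prompts.jsonl",          # placeholder source URI
    destination_uri_prefix="gs://my-bucket/output",  # placeholder destination prefix
    model_parameters={"temperature": 0.2},
)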
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
classmethod get_tuned_model(tuned_model_name: str)
Loads the specified tuned language model.
list_tuned_model_names()
Lists the names of tuned models.
Returns
A list of tuned models that can be used with the get_tuned_model method.
predict(prompt: str, *, max_output_tokens: Optional[int] = 128, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Gets model response for a single prompt.
Parameters
prompt – Question to ask the model.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
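Example of a single prediction (illustrative prompt):

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Give me ten interview questions for the role of program manager.",
    max_output_tokens=256,
    temperature=0.2,
)
print(response.text)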
async predict_async(prompt: str, *, max_output_tokens: Optional[int] = 128, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Asynchronously gets model response for a single prompt.
Parameters
prompt – Question to ask the model.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
predict_streaming(prompt: str, *, max_output_tokens: int = 128, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Gets a streaming model response for a single prompt.
The result is a stream (generator) of partial responses.
Parameters
prompt – Question to ask the model.
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024].
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95.
stop_sequences – Customized stop sequences to stop the decoding process.
Yields
A stream of TextGenerationResponse objects that contain partial responses produced by the model.
tune_model(training_data: Union[str, pandas.core.frame.DataFrame], *, train_steps: Optional[int] = None, learning_rate_multiplier: Optional[float] = None, tuning_job_location: Optional[str] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, tuning_evaluation_spec: Optional[TuningEvaluationSpec] = None)
Tunes a model based on training data.
This method launches and returns an asynchronous model tuning job. Usage:

tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete
Parameters
training_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models#dataset_format
train_steps – Number of training batches to tune on (batch size is 8 samples).
learning_rate_multiplier – Learning rate multiplier to use in tuning.
tuning_job_location – GCP location where the tuning job should be run. Only “europe-west4” and “us-central1” locations are supported for now.
tuned_model_location – GCP location where the tuned model should be deployed. Only “us-central1” is supported for now.
model_display_name – Custom display name for the tuned model.
tuning_evaluation_spec – Specification for the model evaluation during tuning.
Returns
A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.
Raises
ValueError – If the “tuning_job_location” value is not supported
ValueError – If the “tuned_model_location” value is not supported
RuntimeError – If the model does not support tuning
class vertexai.language_models.TextGenerationResponse(text: str, _prediction_response: typing.Any, is_blocked: bool = False, safety_attributes: typing.Dict[str, float] = <factory>)
Bases: object
TextGenerationResponse represents a response of a language model.
text()
The generated text.
Type
str
is_blocked()
Whether the request was blocked.
Type
bool
safety_attributes()
Scores for safety attributes. Learn more about the safety attributes here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_descriptions
Type
Dict[str, float]
property raw_prediction_response: google.cloud.aiplatform.models.Prediction
Raw prediction response.
Classes for working with language models.
class vertexai.language_models._language_models._TunableModelMixin(model_id: str, endpoint_name: Optional[str] = None)
Model that can be tuned.
Creates a LanguageModel.
This constructor should not be called directly. Use LanguageModel.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Vertex LLM. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod get_tuned_model(tuned_model_name: str)
Loads the specified tuned language model.
list_tuned_model_names()
Lists the names of tuned models.
Returns
A list of tuned models that can be used with the get_tuned_model method.
tune_model(training_data: Union[str, pandas.core.frame.DataFrame], *, train_steps: Optional[int] = None, learning_rate: Optional[float] = None, learning_rate_multiplier: Optional[float] = None, tuning_job_location: Optional[str] = None, tuned_model_location: Optional[str] = None, model_display_name: Optional[str] = None, tuning_evaluation_spec: Optional[TuningEvaluationSpec] = None, default_context: Optional[str] = None)
Tunes a model based on training data.
This method launches and returns an asynchronous model tuning job.
Usage:
tuning_job = model.tune_model(...)
# ... do some other work
tuned_model = tuning_job.get_tuned_model()  # Blocks until tuning is complete
Parameters
training_data – A Pandas DataFrame or a URI pointing to data in JSON lines format. The dataset schema is model-specific. See https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models#dataset_format
train_steps – Number of training batches to tune on (batch size is 8 samples).
learning_rate – Deprecated. Use learning_rate_multiplier instead. Learning rate to use in tuning.
learning_rate_multiplier – Learning rate multiplier to use in tuning.
tuning_job_location – GCP location where the tuning job should be run. Only “europe-west4” and “us-central1” locations are supported for now.
tuned_model_location – GCP location where the tuned model should be deployed. Only “us-central1” is supported for now.
model_display_name – Custom display name for the tuned model.
tuning_evaluation_spec – Specification for the model evaluation during tuning.
default_context – The context to use for all training samples by default.
Returns
A LanguageModelTuningJob object that represents the tuning job. Calling job.result() blocks until the tuning is complete and returns a LanguageModel object.
Raises
ValueError – If the “tuning_job_location” value is not supported
ValueError – If the “tuned_model_location” value is not supported
RuntimeError – If the model does not support tuning
class vertexai.preview.VertexModel()
Bases: object
Mixin class that can be used to add Vertex AI remote execution to a custom model.
vertexai.preview.end_run(state: google.cloud.aiplatform_v1.types.execution.Execution.State = State.COMPLETE)
Ends the current experiment run.

aiplatform.start_run('my-run')
...
aiplatform.end_run()
vertexai.preview.from_pretrained(*, model_name: Optional[str] = None, custom_job_name: Optional[str] = None)
Pulls a model from Model Registry or from a CustomJob ID for retraining.
The returned model is wrapped with a Vertex wrapper for running remote jobs on Vertex, unless an unwrapped model was registered to Model Registry.
Parameters
model_name (str) – Optional. The resource ID or fully qualified resource name of a registered model. Format: “12345678910” or “projects/123/locations/us-central1/models/12345678910@1”. One of model_name or custom_job_name is required.
custom_job_name (str) – Optional. The resource ID or fully qualified resource name of a CustomJob created with Vertex SDK remote training. If the job has completed successfully, this will load the trained model created in the CustomJob. One of model_name or custom_job_name is required.
Returns
local model for uptraining.
Return type
model
Raises
ValueError – If the registered model was not registered through vertexai.preview.register, if the custom job was not created with Vertex SDK remote training, or if both or neither of model_name and custom_job_name are provided.
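Example of pulling a registered model for retraining (the project and model resource ID below are placeholder values):

import vertexai
from vertexai.preview import from_pretrained

vertexai.init(project="my-project", location="us-central1")  # placeholder project
model = from_pretrained(model_name="12345678910")  # placeholder Model Registry resource ID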
vertexai.preview.get_experiment_df(experiment: Optional[str] = None)
Returns a Pandas DataFrame of the parameters and metrics associated with one experiment.
Example:
aiplatform.init(experiment='exp-1')
aiplatform.start_run(run='run-1')
aiplatform.log_params({'learning_rate': 0.1})
aiplatform.log_metrics({'accuracy': 0.9})

aiplatform.start_run(run='run-2')
aiplatform.log_params({'learning_rate': 0.2})
aiplatform.log_metrics({'accuracy': 0.95})

aiplatform.get_experiment_df()

experiment_name | run_name | param.learning_rate | metric.accuracy
exp-1           | run-1    | 0.1                 | 0.9
exp-1           | run-2    | 0.2                 | 0.95
Parameters
experiment (str) – Name of the Experiment to filter results. If not set, returns results of the current active experiment.
Returns
Pandas Dataframe of Experiment with metrics and parameters.
Raises
NotFound – If the experiment does not exist.
ValueError – If the given experiment is associated with a wrong schema.
vertexai.preview.init(*, remote: Optional[bool] = None, autolog: Optional[bool] = None)
Updates preview global parameters for Vertex remote execution.
Parameters
remote (bool) – Optional. A global flag to indicate whether or not a method will be executed remotely. Default is False. The method level remote flag has higher priority than this global flag.
autolog (bool) – Optional. Whether or not to turn on autologging feature for remote execution. To learn more about the autologging feature, see https://cloud.google.com/vertex-ai/docs/experiments/autolog-data.
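Example of enabling remote execution and autologging globally (placeholder project):

import vertexai

vertexai.init(project="my-project", location="us-central1")  # placeholder project
vertexai.preview.init(remote=True, autolog=True)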
vertexai.preview.log_classification_metrics(*, labels: Optional[List[str]] = None, matrix: Optional[List[List[int]]] = None, fpr: Optional[List[float]] = None, tpr: Optional[List[float]] = None, threshold: Optional[List[float]] = None, display_name: Optional[str] = None)
Create an artifact for classification metrics and log to ExperimentRun. Currently support confusion matrix and ROC curve.
my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
classification_metrics = my_run.log_classification_metrics(
    display_name='my-classification-metrics',
    labels=['cat', 'dog'],
    matrix=[[9, 1], [1, 9]],
    fpr=[0.1, 0.5, 0.9],
    tpr=[0.1, 0.7, 0.9],
    threshold=[0.9, 0.5, 0.1],
)
Parameters
labels (List[str]) – Optional. List of label names for the confusion matrix. Must be set if ‘matrix’ is set.
matrix (List[List[int]]) – Optional. Values for the confusion matrix. Must be set if ‘labels’ is set.
fpr (List[float]) – Optional. List of false positive rates for the ROC curve. Must be set if ‘tpr’ or ‘thresholds’ is set.
tpr (List[float]) – Optional. List of true positive rates for the ROC curve. Must be set if ‘fpr’ or ‘thresholds’ is set.
threshold (List[float]) – Optional. List of thresholds for the ROC curve. Must be set if ‘fpr’ or ‘tpr’ is set.
display_name (str) – Optional. The user-defined name for the classification metric artifact.
Raises
ValueError – If ‘labels’ and ‘matrix’ are not set together, if ‘labels’ and ‘matrix’ do not have the same length, if ‘fpr’, ‘tpr’ and ‘threshold’ are not set together, or if ‘fpr’, ‘tpr’ and ‘threshold’ do not have the same length.
vertexai.preview.log_metrics(metrics: Dict[str, Union[float, int, str]])
Log single or multiple Metrics with specified key and value pairs.
Metrics with the same key will be overwritten.
aiplatform.start_run('my-run', experiment='my-experiment')
aiplatform.log_metrics({'accuracy': 0.9, 'recall': 0.8})
Parameters
metrics (Dict[str, Union[float, int, str]]) – Required. Metrics key/value pairs.
vertexai.preview.log_params(params: Dict[str, Union[float, int, str]])
Log single or multiple parameters with specified key and value pairs.
Parameters with the same key will be overwritten.
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})
Parameters
params (Dict[str, Union[float, int, str]]) – Required. Parameter key/value pairs.
vertexai.preview.log_time_series_metrics(metrics: Dict[str, float], step: Optional[int] = None, wall_time: Optional[google.protobuf.timestamp_pb2.Timestamp] = None)
Logs time series metrics to this Experiment Run.
Requires the experiment or experiment run has a backing Vertex Tensorboard resource.
my_tensorboard = aiplatform.Tensorboard(...)
aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
aiplatform.start_run('my-run')

# Increments steps as logged
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss})

# Explicitly log steps
for i in range(10):
    aiplatform.log_time_series_metrics({'loss': loss}, step=i)
Parameters
metrics (Dict[str, Union[str, float]]) – Required. Dictionary where keys are metric names and values are metric values.
step (int) – Optional. Step index of this data point within the run.
If not provided, the latest step amongst all time series metrics already logged will be used.
wall_time (timestamp_pb2.Timestamp) – Optional. Wall clock timestamp when this data point is generated by the end user.
If not provided, this will be generated based on the value from time.time()
Raises
RuntimeError – If current experiment run doesn’t have a backing Tensorboard resource.
vertexai.preview.register(model: Union[sklearn.base.BaseEstimator, tf.Module, torch.nn.Module], use_gpu: bool = False)
Registers a model and returns a Model representing the registered Model resource.
Parameters
model (Union[sklearn.base.BaseEstimator, tensorflow.Module, torch.nn.Module]) – Required. An OSS model. Supported frameworks: sklearn, tensorflow, pytorch.
use_gpu (bool) – Optional. Whether to use GPU for model serving. Defaults to False.
Returns
Instantiated representation of the registered model resource.
Return type
vertex_model (aiplatform.Model)
Raises
ValueError – if default staging bucket is not set or if the framework is not supported.
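Example of registering a locally trained sklearn model (the project and staging bucket below are placeholder values; register requires a default staging bucket):

import numpy as np
import vertexai
from sklearn.linear_model import LogisticRegression

vertexai.init(
    project="my-project",                     # placeholder project ID
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",  # placeholder; register requires this
)

local_model = LogisticRegression().fit(
    np.random.rand(20, 4), np.random.randint(0, 2, 20)
)
registered_model = vertexai.preview.register(local_model)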
vertexai.preview.remote(cls_or_method: Any)
Takes a class or method and adds Vertex remote execution support.
LogisticRegression = vertexai.preview.remote(LogisticRegression)
model = LogisticRegression()
model.fit.vertex.remote_config.staging_bucket = REMOTE_JOB_BUCKET
model.fit.vertex.remote = True
model.fit(X_train, y_train)
Parameters
cls_or_method (Any) – Required. A class or method to which Vertex remote execution support will be added.
Returns
A class or method that can be executed remotely.
vertexai.preview.start_run(run: str, *, tensorboard: Optional[Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None, resume=False)
Starts a run in the current session.
aiplatform.init(experiment='my-experiment')
aiplatform.start_run('my-run')
aiplatform.log_params({'learning_rate': 0.1})
Use as context manager. Run will be ended on context exit:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run') as my_run:
    my_run.log_params({'learning_rate': 0.1})
Resume a previously started run:
aiplatform.init(experiment='my-experiment')
with aiplatform.start_run('my-run', resume=True) as my_run:
    my_run.log_params({'learning_rate': 0.1})
Parameters
run (str) – Required. Name of the run to assign current session with.
tensorboard (Union[str, tensorboard_resource.Tensorboard]) – Optional. Backing Tensorboard Resource to enable and store time series metrics logged to this Experiment Run using log_time_series_metrics. If not provided, the default backing Tensorboard of the currently set experiment will be used.
resume (bool) – Whether to resume this run. If False a new run will be created.
Raises
ValueError – If experiment is not set, or if run execution or metrics artifact is already created but with a different schema.
Classes for working with language models.
class vertexai.preview.language_models.ChatMessage(content: str, author: str)
Bases: object
A chat message.
content()
Content of the message.
Type
str
author()
Author of the message.
Type
str
vertexai.preview.language_models.ChatModel()
alias of vertexai.preview.language_models._PreviewChatModel
class vertexai.preview.language_models.ChatSession(model: vertexai.language_models.ChatModel, context: Optional[str] = None, examples: Optional[List[vertexai.language_models.InputOutputTextPair]] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)
Bases: vertexai.language_models._language_models._ChatSessionBase
ChatSession represents a chat session with a language model.
Within a chat session, the model keeps context and remembers the previous conversation.
property message_history: List[vertexai.language_models.ChatMessage]
List of previous messages.
send_message(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the language model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
async send_message_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Asynchronously sends message to the language model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
send_message_streaming(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the language model and gets a streamed response.
The response is only added to the history once it’s fully read.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1024]. Uses the value specified when calling ChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Default: 0. Uses the value specified when calling ChatModel.start_chat by default.
top_k – The number of highest probability vocabulary tokens to keep for top-k-filtering. Range: [1, 40]. Default: 40. Uses the value specified when calling ChatModel.start_chat by default.
top_p – The cumulative probability threshold for nucleus sampling: the smallest set of most probable tokens whose probabilities sum to top_p is kept. Range: [0, 1]. Default: 0.95. Uses the value specified when calling ChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling ChatModel.start_chat by default.
Yields
A stream of TextGenerationResponse objects that contain partial responses produced by the model.
vertexai.preview.language_models.CodeChatModel()
alias of vertexai.preview.language_models._PreviewCodeChatModel
class vertexai.preview.language_models.CodeChatSession(model: vertexai.language_models.CodeChatModel, context: Optional[str] = None, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, message_history: Optional[List[vertexai.language_models.ChatMessage]] = None, stop_sequences: Optional[List[str]] = None)
Bases: vertexai.language_models._language_models._ChatSessionBase
CodeChatSession represents a chat session with code chat language model.
Within a code chat session, the model keeps context and remembers the previous conversation.
property message_history: List[vertexai.language_models.ChatMessage]
List of previous messages.
send_message(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the code chat model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process.
Returns
A TextGenerationResponse object that contains the text produced by the model.
async send_message_async(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None)
Asynchronously sends message to the code chat model and gets a response.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.
Returns
A TextGenerationResponse object that contains the text produced by the model.
send_message_streaming(message: str, *, max_output_tokens: Optional[int] = None, temperature: Optional[float] = None, stop_sequences: Optional[List[str]] = None)
Sends message to the language model and gets a streamed response.
The response is only added to the history once it’s fully read.
Parameters
message – Message to send to the model
max_output_tokens – Max length of the output text in tokens. Range: [1, 1000]. Uses the value specified when calling CodeChatModel.start_chat by default.
temperature – Controls the randomness of predictions. Range: [0, 1]. Uses the value specified when calling CodeChatModel.start_chat by default.
stop_sequences – Customized stop sequences to stop the decoding process. Uses the value specified when calling CodeChatModel.start_chat by default.
Yields
A stream of TextGenerationResponse objects that contain partial responses produced by the model.
vertexai.preview.language_models.CodeGenerationModel()
alias of vertexai.preview.language_models._PreviewCodeGenerationModel
class vertexai.preview.language_models.EvaluationClassificationMetric(label_name: Optional[str] = None, auPrc: Optional[float] = None, auRoc: Optional[float] = None, logLoss: Optional[float] = None, confidenceMetrics: Optional[List[Dict[str, Any]]] = None, confusionMatrix: Optional[Dict[str, Any]] = None)
Bases: vertexai.language_models._evaluatable_language_models._EvaluationMetricBase
The evaluation metric response for classification metrics.
Parameters
label_name (str) – Optional. The name of the label associated with the metrics. This is only returned when only_summary_metrics=False is passed to evaluate().
auPrc (float) – Optional. The area under the precision recall curve.
auRoc (float) – Optional. The area under the receiver operating characteristic curve.
logLoss (float) – Optional. Logarithmic loss.
confidenceMetrics (List[Dict[str, **Any]]) – Optional. This is only returned when only_summary_metrics=False is passed to evaluate().
confusionMatrix (Dict[str, **Any]) – Optional. This is only returned when only_summary_metrics=False is passed to evaluate().
property input_dataset_paths: str
The Google Cloud Storage paths to the dataset used for this evaluation.
property task_name: str
The type of evaluation task for the evaluation.
class vertexai.preview.language_models.EvaluationMetric(bleu: Optional[float] = None, rougeLSum: Optional[float] = None)
Bases: vertexai.language_models._evaluatable_language_models._EvaluationMetricBase
The evaluation metric response.
Parameters
bleu (float) – Optional. BLEU score.
rougeLSum (float) – Optional. ROUGE-L Sum score.
property input_dataset_paths: str
The Google Cloud Storage paths to the dataset used for this evaluation.
property task_name: str
The type of evaluation task for the evaluation.
class vertexai.preview.language_models.EvaluationQuestionAnsweringSpec(ground_truth_data: Union[List[str], str, pandas.DataFrame], task_name: str = 'question-answering')
Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec
Spec for question answering model evaluation tasks.
class vertexai.preview.language_models.EvaluationTextClassificationSpec(ground_truth_data: Union[List[str], str, pandas.DataFrame], target_column_name: str, class_names: List[str])
Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec
Spec for text classification model evaluation tasks.
Parameters
ground_truth_data – The ground truth data to use for this evaluation task. A list of strings, a URI, or a Pandas DataFrame.
target_column_name – The name of the column containing the ground truth labels.
class_names – The list of possible class names.
class vertexai.preview.language_models.EvaluationTextGenerationSpec(ground_truth_data: Union[List[str], str, pandas.DataFrame])
Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec
Spec for text generation model evaluation tasks.
class vertexai.preview.language_models.EvaluationTextSummarizationSpec(ground_truth_data: Union[List[str], str, pandas.DataFrame], task_name: str = 'summarization')
Bases: vertexai.language_models._evaluatable_language_models._EvaluationTaskSpec
Spec for text summarization model evaluation tasks.
class vertexai.preview.language_models.InputOutputTextPair(input_text: str, output_text: str)
Bases: object
InputOutputTextPair represents a pair of input and output texts.
class vertexai.preview.language_models.TextEmbedding(values: List[float], statistics: Optional[vertexai.language_models.TextEmbeddingStatistics] = None, _prediction_response: Optional[google.cloud.aiplatform.models.Prediction] = None)
Bases: object
Text embedding vector and statistics.
class vertexai.preview.language_models.TextEmbeddingInput(text: str, task_type: Optional[str] = None, title: Optional[str] = None)
Bases: object
Structural text embedding input.
text()
The main text content to embed.
Type
str
task_type()
The name of the downstream task the embeddings will be used for. Valid values:
RETRIEVAL_QUERY – Specifies the given text is a query in a search/retrieval setting.
RETRIEVAL_DOCUMENT – Specifies the given text is a document from the corpus being searched.
SEMANTIC_SIMILARITY – Specifies the given text will be used for semantic textual similarity (STS).
CLASSIFICATION – Specifies that the given text will be classified.
CLUSTERING – Specifies that the embeddings will be used for clustering.
Type
Optional[str]
title()
Optional identifier of the text content.
Type
Optional[str]
vertexai.preview.language_models.TextEmbeddingModel()
alias of vertexai.preview.language_models._PreviewTextEmbeddingModel
vertexai.preview.language_models.TextGenerationModel()
alias of vertexai.preview.language_models._PreviewTextGenerationModel
class vertexai.preview.language_models.TextGenerationResponse(text: str, _prediction_response: typing.Any, is_blocked: bool = False, safety_attributes: typing.Dict[str, float] = <factory>)
Bases: object
TextGenerationResponse represents a response of a language model.
text()
The generated text.
Type
str
is_blocked()
Whether the request was blocked.
Type
bool
safety_attributes()
Scores for safety attributes. Learn more about the safety attributes here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_descriptions
Type
Dict[str, float]
property raw_prediction_response: google.cloud.aiplatform.models.Prediction
Raw prediction response.
class vertexai.preview.language_models.TuningEvaluationSpec(evaluation_data: str, evaluation_interval: Optional[int] = None, enable_early_stopping: Optional[bool] = None, tensorboard: Optional[Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None)
Bases: object
Specification for model evaluation to perform during tuning.
evaluation_data()
GCS URI of the evaluation dataset. This will run model evaluation as part of the tuning job.
Type
str
evaluation_interval()
The evaluation will run at every evaluation_interval tuning steps. Default: 20.
Type
Optional[int]
enable_early_stopping()
If True, the tuning may stop early before completing all the tuning steps. Requires evaluation_data.
Type
Optional[bool]
tensorboard()
Vertex Tensorboard where to write the evaluation metrics. The Tensorboard must be in the same location as the tuning job.
Type
Optional[Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]]
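Example of wiring an evaluation spec into tuning (the GCS URIs below are placeholder values):

from vertexai.language_models import TextGenerationModel
from vertexai.preview.language_models import TuningEvaluationSpec

eval_spec = TuningEvaluationSpec(
    evaluation_data="gs://my-bucket/eval-data.jsonl",  # placeholder URI
    evaluation_interval=20,
    enable_early_stopping=True,
)

model = TextGenerationModel.from_pretrained("text-bison@001")
tuning_job = model.tune_model(
    training_data="gs://my-bucket/train-data.jsonl",  # placeholder URI
    train_steps=100,
    tuning_job_location="europe-west4",
    tuned_model_location="us-central1",
    tuning_evaluation_spec=eval_spec,
)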
Classes for working with vision models.
class vertexai.vision_models.Image(image_bytes: bytes)
Bases: object
Image.
Creates an Image object.
Parameters
image_bytes – Image file bytes. Image can be in PNG or JPEG format.
static load_from_file(location: str)
Loads image from file.
Parameters
location – Local path from where to load the image.
Returns
Loaded image as an Image object.
save(location: str)
Saves image to a file.
Parameters
location – Local path where to save the image.
show()
Shows the image.
This method only works when in a notebook environment.
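Example of loading and saving an image (the file paths below are placeholder values):

from vertexai.vision_models import Image

image = Image.load_from_file("input.png")  # placeholder path
image.save("copy_of_input.png")            # placeholder path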
class vertexai.vision_models.ImageCaptioningModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai._model_garden._model_garden_models._ModelGardenModel
Generates captions from image.
Examples:
model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)
Creates a _ModelGardenModel.
This constructor should not be called directly. Use ImageCaptioningModel.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
get_captions(image: vertexai.vision_models.Image, *, number_of_results: int = 1, language: str = 'en')
Generates captions for a given image.
Parameters
image – The image to get captions for. Size limit: 10 MB.
number_of_results – Number of captions to produce. Range: 1-3.
language – Language to use for captions. Supported languages: “en”, “fr”, “de”, “it”, “es”
Returns
A list of image caption strings.
class vertexai.vision_models.ImageQnAModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai._model_garden._model_garden_models._ModelGardenModel
Answers questions about an image.
Examples:
model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
Creates a _ModelGardenModel.
This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
ask_question(image: vertexai.vision_models.Image, question: str, *, number_of_results: int = 1)
Answers questions about an image.
Parameters
image – The image to ask a question about. Size limit: 10 MB.
question – Question to ask about the image.
number_of_results – Number of answers to produce. Range: 1-3.
Returns
A list of answers.
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
class vertexai.vision_models.ImageTextModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai.vision_models.ImageCaptioningModel, vertexai.vision_models.ImageQnAModel
Generates text from images.
Examples:
model = ImageTextModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
Creates a _ModelGardenModel.
This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
ask_question(image: vertexai.vision_models.Image, question: str, *, number_of_results: int = 1)
Answers questions about an image.
Parameters
image – The image to ask a question about. Size limit: 10 MB.
question – Question to ask about the image.
number_of_results – Number of answers to produce. Range: 1-3.
Returns
A list of answers.
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
get_captions(image: vertexai.vision_models.Image, *, number_of_results: int = 1, language: str = 'en')
Generates captions for a given image.
Parameters
image – The image to get captions for. Size limit: 10 MB.
number_of_results – Number of captions to produce. Range: 1-3.
language – Language to use for captions. Supported languages: “en”, “fr”, “de”, “it”, “es”
Returns
A list of image caption strings.
class vertexai.vision_models.MultiModalEmbeddingModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai._model_garden._model_garden_models._ModelGardenModel
Generates embedding vectors from images.
Examples:
model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
embeddings = model.get_embeddings(
    image=image,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
text_embedding = embeddings.text_embedding
Creates a _ModelGardenModel.
This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
get_embeddings(image: Optional[vertexai.vision_models.Image] = None, contextual_text: Optional[str] = None)
Gets embedding vectors for the provided image and/or contextual text.
Parameters
image (Image) – Optional. The image to generate embeddings for. One of image or contextual_text is required.
contextual_text (str) – Optional. Contextual text for your input image. If provided, the model will also generate an embedding vector for the provided contextual text. The returned image and text embedding vectors are in the same semantic space with the same dimensionality, and the vectors can be used interchangeably for use cases like searching image by text or searching text by image. One of image or contextual_text is required.
Returns
The image and text embedding vectors.
Return type
MultiModalEmbeddingResponse
class vertexai.vision_models.MultiModalEmbeddingResponse(_prediction_response: Any, image_embedding: Optional[List[float]] = None, text_embedding: Optional[List[float]] = None)
Bases: object
The multimodal embedding response.
image_embedding()
Optional. The embedding vector generated from your image.
Type
List[float]
text_embedding()
Optional. The embedding vector generated from the contextual text provided for your image.
Type
List[float]
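Because the image and text embedding vectors share the same semantic space and dimensionality, they can be compared directly. A minimal sketch of cosine similarity between the two vectors (plain Python; embeddings is assumed to be a MultiModalEmbeddingResponse returned by get_embeddings):
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# embeddings: MultiModalEmbeddingResponse from get_embeddings(...)
score = cosine_similarity(embeddings.image_embedding, embeddings.text_embedding)
print(f"image-text similarity: {score:.3f}")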
Classes for working with vision models.
class vertexai.preview.vision_models.GeneratedImage(image_bytes: bytes, generation_parameters: Dict[str, Any])
Bases: vertexai.vision_models.Image
Generated image.
Creates a GeneratedImage object.
Parameters
image_bytes – Image file bytes. Image can be in PNG or JPEG format.
generation_parameters – Image generation parameter values.
property generation_parameters
Image generation parameters as a dictionary.
static load_from_file(location: str)
Loads image from file.
Parameters
location – Local path from where to load the image.
Returns
Loaded image as a GeneratedImage object.
save(location: str, include_generation_parameters: bool = True)
Saves image to a file.
Parameters
location – Local path where to save the image.
include_generation_parameters – Whether to include the image generation parameters in the image’s EXIF metadata.
show()
Shows the image.
This method only works when in a notebook environment.
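A short sketch that saves a generated image with its generation parameters embedded in EXIF metadata, then reloads it (paths are placeholders; generated is assumed to come from an ImageGenerationResponse):
from vertexai.preview.vision_models import GeneratedImage

# generated = response[0] from a generate_images(...) call
generated.save("out.png", include_generation_parameters=True)
print(generated.generation_parameters)  # the recorded generation parameters

reloaded = GeneratedImage.load_from_file("out.png")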
class vertexai.preview.vision_models.Image(image_bytes: bytes)
Bases: object
Image.
Creates an Image object.
Parameters
image_bytes – Image file bytes. Image can be in PNG or JPEG format.
static load_from_file(location: str)
Loads image from file.
Parameters
location – Local path from where to load the image.
Returns
Loaded image as an Image object.
save(location: str)
Saves image to a file.
Parameters
location – Local path where to save the image.
show()
Shows the image.
This method only works when in a notebook environment.
class vertexai.preview.vision_models.ImageCaptioningModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai._model_garden._model_garden_models._ModelGardenModel
Generates captions from an image.
Examples:
model = ImageCaptioningModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
captions = model.get_captions(
    image=image,
    # Optional:
    number_of_results=1,
    language="en",
)
Creates a _ModelGardenModel.
This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
get_captions(image: vertexai.vision_models.Image, *, number_of_results: int = 1, language: str = 'en')
Generates captions for a given image.
Parameters
image – The image to get captions for. Size limit: 10 MB.
number_of_results – Number of captions to produce. Range: 1-3.
language – Language to use for captions. Supported languages: “en”, “fr”, “de”, “it”, “es”
Returns
A list of image caption strings.
class vertexai.preview.vision_models.ImageGenerationModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai._model_garden._model_garden_models._ModelGardenModel
Generates images from a text prompt.
Examples:
model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
    # Optional:
    number_of_images=1,
    seed=0,
)
response[0].show()
response[0].save("image1.png")
Creates a _ModelGardenModel.
This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
edit_image(*, prompt: str, base_image: vertexai.vision_models.Image, mask: Optional[vertexai.vision_models.Image] = None, negative_prompt: Optional[str] = None, number_of_images: int = 1, guidance_scale: Optional[float] = None, seed: Optional[int] = None)
Edits an existing image based on a text prompt.
Parameters
prompt – Text prompt for the image.
base_image – Base image from which to generate the new image.
mask – Mask for the base image.
negative_prompt – A description of what you want to omit in the generated images.
number_of_images – Number of images to generate. Range: 1..8.
guidance_scale – Controls the strength of the prompt. Suggested values: 0-9 (low strength), 10-20 (medium strength), 21+ (high strength).
seed – Image generation random seed.
Returns
An ImageGenerationResponse object.
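A hedged sketch of an edit_image call based on the signature above (file names and prompt are placeholders; the mask is optional):
from vertexai.preview.vision_models import Image, ImageGenerationModel

model = ImageGenerationModel.from_pretrained("imagegeneration@002")
base = Image.load_from_file("photo.png")  # placeholder path
mask = Image.load_from_file("mask.png")   # optional region to edit
response = model.edit_image(
    prompt="Replace the sky with a sunset",
    base_image=base,
    mask=mask,
    number_of_images=1,
)
response[0].save("edited.png")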
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
generate_images(prompt: str, *, negative_prompt: Optional[str] = None, number_of_images: int = 1, guidance_scale: Optional[float] = None, seed: Optional[int] = None)
Generates images from a text prompt.
Parameters
prompt – Text prompt for the image.
negative_prompt – A description of what you want to omit in the generated images.
number_of_images – Number of images to generate. Range: 1..8.
guidance_scale – Controls the strength of the prompt. Suggested values: 0-9 (low strength), 10-20 (medium strength), 21+ (high strength).
seed – Image generation random seed.
Returns
An ImageGenerationResponse object.
upscale_image(image: Union[vertexai.vision_models.Image, vertexai.preview.vision_models.GeneratedImage], new_size: Optional[int] = 2048)
Upscales an image.
This supports upscaling images generated through the generate_images() method, or upscaling a new image that is 1024x1024.
Examples:
# Upscale a generated image
model = ImageGenerationModel.from_pretrained("imagegeneration@002")
response = model.generate_images(
    prompt="Astronaut riding a horse",
)
model.upscale_image(image=response[0])
# Upscale a new 1024x1024 image
my_image = Image.load_from_file("my-image.png")
model.upscale_image(image=my_image)
Parameters
image (Union[GeneratedImage, Image]) – Required. The image to upscale.
new_size (int) – The size of the longest dimension of the upscaled image. Only 2048 and 4096 are currently supported, producing a 2048x2048 or 4096x4096 image. Defaults to 2048 if not provided.
Returns
An Image object.
class vertexai.preview.vision_models.ImageGenerationResponse(images: List[GeneratedImage])
Bases: object
Image generation response.
images()
The list of generated images.
Type
List[vertexai.preview.vision_models.GeneratedImage]
__getitem__(idx: int)
Gets the generated image by index.
__iter__()
Iterates through the generated images.
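Because the response supports indexing and iteration, the generated images can be handled like a list; a brief sketch (response is assumed to come from generate_images):
first = response[0]  # __getitem__
for i, image in enumerate(response):  # __iter__
    image.save(f"image_{i}.png")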
class vertexai.preview.vision_models.ImageQnAModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai._model_garden._model_garden_models._ModelGardenModel
Answers questions about an image.
Examples:
model = ImageQnAModel.from_pretrained("imagetext@001")
image = Image.load_from_file("image.png")
answers = model.ask_question(
    image=image,
    question="What color is the car in this image?",
    # Optional:
    number_of_results=1,
)
Creates a _ModelGardenModel.
This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
ask_question(image: vertexai.vision_models.Image, question: str, *, number_of_results: int = 1)
Answers questions about an image.
Parameters
image – The image to ask a question about. Size limit: 10 MB.
question – Question to ask about the image.
number_of_results – Number of answers to produce. Range: 1-3.
Returns
A list of answers.
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
class vertexai.preview.vision_models.MultiModalEmbeddingModel(model_id: str, endpoint_name: Optional[str] = None)
Bases: vertexai._model_garden._model_garden_models._ModelGardenModel
Generates embedding vectors from images.
Examples:
model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
image = Image.load_from_file("image.png")
embeddings = model.get_embeddings(
    image=image,
    contextual_text="Hello world",
)
image_embedding = embeddings.image_embedding
text_embedding = embeddings.text_embedding
Creates a _ModelGardenModel.
This constructor should not be called directly. Use {model_class}.from_pretrained(model_name=…) instead.
Parameters
model_id – Identifier of a Model Garden Model. Example: “text-bison@001”
endpoint_name – Vertex Endpoint resource name for the model
classmethod from_pretrained(model_name: str)
Loads a _ModelGardenModel.
Parameters
model_name – Name of the model.
Returns
An instance of a class derived from _ModelGardenModel.
Raises
ValueError – If model_name is unknown.
ValueError – If model does not support this class.
get_embeddings(image: Optional[vertexai.vision_models.Image] = None, contextual_text: Optional[str] = None)
Gets embedding vectors for the provided image and/or contextual text.
Parameters
image (Image) – Optional. The image to generate embeddings for. One of image or contextual_text is required.
contextual_text (str) – Optional. Contextual text for your input image. If provided, the model will also generate an embedding vector for the provided contextual text. The returned image and text embedding vectors are in the same semantic space with the same dimensionality, and the vectors can be used interchangeably for use cases like searching image by text or searching text by image. One of image or contextual_text is required.
Returns
The image and text embedding vectors.
Return type
MultiModalEmbeddingResponse
class vertexai.preview.vision_models.MultiModalEmbeddingResponse(_prediction_response: Any, image_embedding: Optional[List[float]] = None, text_embedding: Optional[List[float]] = None)
Bases: object
The multimodal embedding response.
image_embedding()
Optional. The embedding vector generated from your image.
Type
List[float]
text_embedding()
Optional. The embedding vector generated from the contextual text provided for your image.
Type
List[float]