Claude3TextGenerator(
*,
model_name: typing.Literal[
"claude-3-sonnet", "claude-3-haiku", "claude-3-5-sonnet", "claude-3-opus"
] = "claude-3-sonnet",
session: typing.Optional[bigframes.session.Session] = None,
connection_name: typing.Optional[str] = None
)
Claude3 text generator LLM model.
Go to the Google Cloud Console -> Vertex AI -> Model Garden page to enable the models before use. You must have the Consumer Procurement Entitlement Manager Identity and Access Management (IAM) role to enable the models. See https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#grant-permissions
The models are only available in specific regions. Check https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#regions for details.

Parameters

| Name | Description |
| --- | --- |
| model_name | str, default "claude-3-sonnet". The model for natural language tasks. Possible values are "claude-3-sonnet", "claude-3-haiku", "claude-3-5-sonnet" and "claude-3-opus". "claude-3-sonnet" is Anthropic's dependable combination of skills and speed, engineered to be reliable for scaled AI deployments across a variety of use cases. "claude-3-haiku" is Anthropic's fastest, most compact vision and text model for near-instant responses to simple queries, meant for seamless AI experiences mimicking human interactions. "claude-3-5-sonnet" is Anthropic's most powerful AI model, and it maintains the speed and cost of the mid-tier Claude 3 Sonnet. "claude-3-opus" is Anthropic's second-most powerful AI model, with strong performance on highly complex tasks. See https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#available-claude-models. |
| session | bigframes.Session or None. BQ session used to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a string of the format `<PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>`. |
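A minimal construction sketch, assuming a BigQuery project and a BigQuery connection that already has Vertex AI access; all resource names below are placeholders, and the location must be a region where the chosen Claude model is offered.

```python
import bigframes.pandas as bpd
from bigframes.ml.llm import Claude3TextGenerator

bpd.options.bigquery.project = "my-project"   # placeholder project ID
bpd.options.bigquery.location = "us-east5"    # a region where Claude models are offered

model = Claude3TextGenerator(
    model_name="claude-3-haiku",
    connection_name="my-project.us-east5.my-connection",  # placeholder connection
)
```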
Methods
__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
Parameters

| Name | Description |
| --- | --- |
| deep | bool, default True. If True, will return the parameters for this estimator and contained subobjects that are estimators. |

Returns

| Type | Description |
| --- | --- |
| Dictionary | A dictionary of parameter names mapped to their values. |
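For example, to inspect the constructor arguments of the `model` instance from the sketch above:

```python
# Returns a dict of constructor parameter names mapped to their current values.
params = model.get_params()
print(params["model_name"])  # -> "claude-3-haiku"
```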
predict
predict(
X: typing.Union[
bigframes.dataframe.DataFrame,
bigframes.series.Series,
pandas.core.frame.DataFrame,
pandas.core.series.Series,
],
*,
max_output_tokens: int = 128,
top_k: int = 40,
top_p: float = 0.95
) -> bigframes.dataframe.DataFrame
Predict the result from the input DataFrame.
Parameters

| Name | Description |
| --- | --- |
| X | bigframes.dataframe.DataFrame or bigframes.series.Series or pandas.core.frame.DataFrame or pandas.core.series.Series. Input DataFrame or Series, which can contain one or more columns. If the DataFrame has multiple columns, it must contain a "prompt" column for prediction. Prompts can include preamble, questions, suggestions, instructions, or examples. |
| max_output_tokens | int, default 128. Maximum number of tokens that can be generated in the response. Specify a lower value for shorter responses and a higher value for longer responses. A token may be smaller than a word; a token is approximately four characters, and 100 tokens correspond to roughly 60-80 words. Possible values are in the range [1, 4096]. |
| top_k | int, default 40. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). For each token-selection step, the top-k tokens with the highest probabilities are sampled, then tokens are further filtered based on top-p, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Possible values are in the range [1, 40]. |
| top_p | float, default 0.95. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top-k; see the top_k parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and not consider C at all. Specify a lower value for less random responses and a higher value for more random responses. Possible values are in the range [0.0, 1.0]. |

Returns

| Type | Description |
| --- | --- |
| bigframes.dataframe.DataFrame | DataFrame of shape (n_samples, n_input_columns + n_prediction_columns). Returns predicted values. |
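A hedged usage sketch, reusing the `model` instance from the constructor example; the name of the generated-text output column is an assumption based on BQML's ML.GENERATE_TEXT convention.

```python
import bigframes.pandas as bpd

# One-column input: the column is treated as the prompt.
df = bpd.DataFrame({"prompt": [
    "What is BigQuery?",
    "Write a haiku about dataframes.",
]})

result = model.predict(df, max_output_tokens=256, top_k=40, top_p=0.95)
# Assumed output column name, per BQML's ML.GENERATE_TEXT:
print(result["ml_generate_text_llm_result"])
```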
to_gbq
to_gbq(
model_name: str, replace: bool = False
) -> bigframes.ml.llm.Claude3TextGenerator
Save the model to BigQuery.
Parameters

| Name | Description |
| --- | --- |
| model_name | str. The name of the model. |
| replace | bool, default False. Whether to replace the model if it already exists. |

Returns

| Type | Description |
| --- | --- |
| Claude3TextGenerator | Saved model. |
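For instance, to persist the model under a hypothetical dataset-qualified name:

```python
# "my_dataset.my_claude_model" is a placeholder BigQuery model path.
saved = model.to_gbq("my_dataset.my_claude_model", replace=True)
```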