LLM models.
Classes
GeminiTextGenerator
GeminiTextGenerator(
*,
model_name: typing.Literal[
"gemini-pro", "gemini-1.5-pro-preview-0514", "gemini-1.5-flash-preview-0514"
] = "gemini-pro",
session: typing.Optional[bigframes.session.Session] = None,
connection_name: typing.Optional[str] = None,
max_iterations: int = 300
)
Gemini text generator LLM model.
Parameters

| Name | Description |
|---|---|
| model_name | str, defaults to "gemini-pro". The model for natural language tasks. Accepted values are "gemini-pro", "gemini-1.5-pro-preview-0514", and "gemini-1.5-flash-preview-0514". |
| session | bigframes.Session or None. BQ session used to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>. |
| max_iterations | Optional[int], defaults to 300. The number of steps to run when performing supervised tuning. |
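A minimal usage sketch, assuming a configured BigQuery session and an existing Cloud resource connection. The connection name, the "prompt" input column, and the result column name follow common bigframes conventions but should be checked against your installed version; the connection string below is a placeholder.

```python
import bigframes.pandas as bpd
from bigframes.ml.llm import GeminiTextGenerator

# Prompts go in a one-column DataFrame; predict() returns a
# DataFrame with the generated text appended as a result column.
df = bpd.DataFrame({"prompt": ["What is BigQuery?", "Name three uses of SQL."]})

# "my-project.us.my-connection" is a placeholder connection name.
model = GeminiTextGenerator(
    model_name="gemini-pro",
    connection_name="my-project.us.my-connection",
)

predictions = model.predict(df)
print(predictions)
```

Running this issues remote calls billed to the connection's project, so it requires valid credentials and cannot run offline.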
PaLM2TextEmbeddingGenerator
PaLM2TextEmbeddingGenerator(
*,
model_name: typing.Literal[
"textembedding-gecko", "textembedding-gecko-multilingual"
] = "textembedding-gecko",
version: typing.Optional[str] = None,
session: typing.Optional[bigframes.session.Session] = None,
connection_name: typing.Optional[str] = None
)
PaLM2 text embedding generator LLM model.
Parameters

| Name | Description |
|---|---|
| model_name | str, defaults to "textembedding-gecko". The model for text embedding. "textembedding-gecko" returns model embeddings for text inputs; "textembedding-gecko-multilingual" returns model embeddings for text inputs and supports over 100 languages. |
| version | str or None. Model version. Accepted values are "001", "002", "003", "latest", etc. Uses the default version if unset. See https://cloud.google.com/vertex-ai/docs/generative-ai/learn/model-versioning for details. |
| session | bigframes.Session or None. BQ session used to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>. |
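A hedged sketch of generating embeddings, under the same assumptions as above (valid session and connection; the "content" input column name is the usual bigframes convention but may differ by version; the connection string is a placeholder).

```python
import bigframes.pandas as bpd
from bigframes.ml.llm import PaLM2TextEmbeddingGenerator

# Texts to embed go in a one-column DataFrame.
df = bpd.DataFrame({"content": ["hello world", "bonjour le monde"]})

# The multilingual variant supports 100+ input languages.
model = PaLM2TextEmbeddingGenerator(
    model_name="textembedding-gecko-multilingual",
    connection_name="my-project.us.my-connection",  # placeholder
)

# predict() returns the input rows with an embedding column appended.
embeddings = model.predict(df)
print(embeddings)
```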
PaLM2TextGenerator
PaLM2TextGenerator(
*,
model_name: typing.Literal["text-bison", "text-bison-32k"] = "text-bison",
session: typing.Optional[bigframes.session.Session] = None,
connection_name: typing.Optional[str] = None,
max_iterations: int = 300
)
PaLM2 text generator LLM model.
Parameters

| Name | Description |
|---|---|
| model_name | str, defaults to "text-bison". The model for natural language tasks. "text-bison" is fine-tuned to follow natural language instructions and is suitable for a variety of language tasks; "text-bison-32k" supports up to 32k tokens per request. |
| session | bigframes.Session or None. BQ session used to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>. |
| max_iterations | Optional[int], defaults to 300. The number of steps to run when performing supervised tuning. |
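Since max_iterations only matters for supervised tuning, a sketch of the tuning path may help. This assumes the scikit-learn-style fit()/predict() pattern used elsewhere in bigframes.ml; the column names and the training data are illustrative, and the connection string is a placeholder.

```python
import bigframes.pandas as bpd
from bigframes.ml.llm import PaLM2TextGenerator

# max_iterations bounds the number of supervised-tuning steps.
model = PaLM2TextGenerator(
    model_name="text-bison",
    max_iterations=500,
    connection_name="my-project.us.my-connection",  # placeholder
)

# Illustrative prompt/label pairs; real tuning needs a larger dataset.
train = bpd.DataFrame(
    {
        "prompt": ["Classify the sentiment: I loved this product."],
        "label": ["positive"],
    }
)

# fit() runs supervised tuning, then the tuned model serves predict().
model.fit(X=train[["prompt"]], y=train[["label"]])
result = model.predict(bpd.DataFrame({"prompt": ["Classify the sentiment: it broke on day one."]}))
print(result)
```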