Class GeminiTextGenerator (1.0.0)

GeminiTextGenerator(
    *,
    session: typing.Optional[bigframes.session.Session] = None,
    connection_name: typing.Optional[str] = None
)

Gemini text generator LLM model.

Parameters

Name | Description
session bigframes.Session or None

BigQuery session in which to create the model. If None, the global default session is used.

connection_name str or None

Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>. If None, uses the default connection in the session context.
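
For example, a minimal construction sketch (the connection string below is a placeholder of the documented format, not a real connection):

    from bigframes.ml.llm import GeminiTextGenerator

    # session=None uses the global default session; the connection name
    # "my-project.us.my-connection" is a hypothetical placeholder.
    model = GeminiTextGenerator(connection_name="my-project.us.my-connection")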

Methods

__repr__

__repr__()

Print the estimator's constructor with all non-default parameter values.

get_params

get_params(deep: bool = True) -> typing.Dict[str, typing.Any]

Get parameters for this estimator.

Parameter
Name | Description
deep bool, default True

If True, returns the parameters for this estimator and contained subobjects that are estimators.

Returns
Type | Description
Dictionary | A dictionary of parameter names mapped to their values.
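
For illustration, inspecting the parameters of a constructed model (reusing the sketch above):

    # Returns the constructor parameters as a plain dict, e.g. for logging
    # or for re-creating the estimator with the same configuration.
    params = model.get_params()
    print(params)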

predict

predict(
    X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
    *,
    temperature: float = 0.9,
    max_output_tokens: int = 8192,
    top_k: int = 40,
    top_p: float = 1.0
) -> bigframes.dataframe.DataFrame

Predict the result from the input DataFrame.

Parameters
Name | Description
X bigframes.dataframe.DataFrame or bigframes.series.Series

Input DataFrame or Series, which contains only one column of prompts. Prompts can include preamble, questions, suggestions, instructions, or examples.

temperature float, default 0.9

The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic: the highest-probability response is always selected. Possible values [0.0, 1.0].

max_output_tokens int, default 8192

Maximum number of tokens that can be generated in the response. A token is approximately four characters; 100 tokens correspond to roughly 60-80 words. Specify a lower value for shorter responses and a higher value for potentially longer responses. Possible values are in the range [1, 8192].

top_k int, default 40

Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature. For each token-selection step, the top-K tokens with the highest probabilities are sampled; tokens are then further filtered based on top-P, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Possible values [1, 40].

top_p float, default 1.0

Top-P changes how the model selects tokens for output. Tokens are selected from the most probable (see top-K) to the least probable until the sum of their probabilities equals the top-P value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-P value is 0.5, then the model selects either A or B as the next token (by using temperature) and excludes C as a candidate. Specify a lower value for less random responses and a higher value for more random responses. Possible values [0.0, 1.0].
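
As an illustration of how these three settings interact (the actual decoder runs server-side and is not part of the bigframes API; this sketch only mirrors the order of operations described above):

    import numpy as np

    def sample_next_token(logits, temperature=0.9, top_k=40, top_p=1.0):
        # Temperature 0 is deterministic: always pick the most probable token.
        if temperature == 0:
            return int(np.argmax(logits))

        # Temperature scaling: lower values sharpen the distribution.
        # Subtracting the max keeps the exponential numerically stable.
        scaled = (logits - logits.max()) / temperature
        probs = np.exp(scaled)
        probs /= probs.sum()

        # Top-K: keep only the K highest-probability tokens.
        candidates = np.argsort(probs)[::-1][:top_k]

        # Top-P: keep the smallest prefix of candidates whose cumulative
        # probability reaches top_p; later candidates are excluded.
        cumulative = np.cumsum(probs[candidates])
        cutoff = int(np.searchsorted(cumulative, top_p)) + 1
        kept = candidates[:cutoff]

        # Renormalize over the kept tokens and draw the final token.
        kept_probs = probs[kept] / probs[kept].sum()
        return int(np.random.choice(kept, p=kept_probs))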

Returns
Type | Description
bigframes.dataframe.DataFrame | DataFrame of shape (n_samples, n_input_columns + n_prediction_columns). Returns predicted values.
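
A usage sketch, assuming the model constructed earlier and a single-column DataFrame of prompts (the column name is arbitrary):

    import bigframes.pandas as bpd

    df = bpd.DataFrame(
        {"prompt": ["What is BigQuery?", "Summarize BigQuery DataFrames in one line."]}
    )

    # A lower temperature makes the answers more deterministic; the other
    # arguments keep their defaults.
    result = model.predict(df, temperature=0.2)

    # result contains the input column(s) plus the prediction columns.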

to_gbq

to_gbq(
    model_name: str, replace: bool = False
) -> bigframes.ml.llm.GeminiTextGenerator

Save the model to BigQuery.

Parameters
Name | Description
model_name str

The name of the model.

replace bool, default False

Whether to replace the model if it already exists.

Returns
Type | Description
GeminiTextGenerator | Saved model.
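
A save sketch; the dataset and model names below are placeholders:

    # Persist the model as a BigQuery model resource. replace=True
    # overwrites an existing model with the same name.
    saved_model = model.to_gbq("my_dataset.my_gemini_model", replace=True)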