```python
ChatSession(
    model: vertexai.language_models.ChatModel,
    context: typing.Optional[str] = None,
    examples: typing.Optional[
        typing.List[vertexai.language_models.InputOutputTextPair]
    ] = None,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    top_k: typing.Optional[int] = None,
    top_p: typing.Optional[float] = None,
    message_history: typing.Optional[
        typing.List[vertexai.language_models.ChatMessage]
    ] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None,
)
```
ChatSession represents a chat session with a language model.
Within a chat session, the model keeps context and remembers the previous conversation.
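For example, a `ChatSession` is typically obtained from `ChatModel.start_chat` rather than constructed directly. The following is a minimal sketch; the project, location, and model name (`chat-bison@002`) are placeholder assumptions to substitute with your own values.

```python
import vertexai
from vertexai.language_models import ChatModel, InputOutputTextPair

# Placeholder project and location; substitute your own values.
vertexai.init(project="my-project", location="us-central1")

chat_model = ChatModel.from_pretrained("chat-bison@002")

# start_chat returns a ChatSession configured with the given context,
# examples, and default sampling parameters.
chat = chat_model.start_chat(
    context="You are a helpful travel assistant.",
    examples=[
        InputOutputTextPair(
            input_text="Suggest a city for a beach holiday.",
            output_text="Consider Lisbon: mild weather and beaches nearby.",
        )
    ],
    temperature=0.2,
    max_output_tokens=256,
)

response = chat.send_message("What should I pack for a week there?")
print(response.text)
```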
Properties
message_history
List of previous messages.
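For illustration, each entry in `message_history` is a `ChatMessage` carrying the message text and its author, so a prior exchange can be inspected or replayed. A short sketch, assuming `chat` is an existing `ChatSession`:

```python
# Assumes `chat` is an existing ChatSession with at least one exchange.
for msg in chat.message_history:
    # Each ChatMessage exposes the author (typically "user" or "bot")
    # and the message text.
    print(f"{msg.author}: {msg.content}")
```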
Methods
send_message
```python
send_message(
    message: str,
    *,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    top_k: typing.Optional[int] = None,
    top_p: typing.Optional[float] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None,
    candidate_count: typing.Optional[int] = None,
    grounding_source: typing.Optional[
        typing.Union[
            vertexai.language_models._language_models.WebSearch,
            vertexai.language_models._language_models.VertexAISearch,
            vertexai.language_models._language_models.InlineContext,
        ]
    ] = None
) -> vertexai.language_models.MultiCandidateTextGenerationResponse
```
Sends a message to the language model and gets a response.
Parameter

| Name | Description |
| --- | --- |
| `message` | `str`. Message to send to the model. |
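A usage sketch, assuming `chat` is an existing `ChatSession` (see the example above): per-call sampling parameters override the session defaults for that single call, and the returned `MultiCandidateTextGenerationResponse` exposes the text of the top candidate as well as all candidates.

```python
# Assumes `chat` is an existing ChatSession.
response = chat.send_message(
    "Summarize our conversation so far in one sentence.",
    temperature=0.0,          # overrides the session default for this call only
    max_output_tokens=128,
    stop_sequences=["\n\n"],
)
print(response.text)          # text of the first candidate
for candidate in response.candidates:
    print(candidate.text)
```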
send_message_async
```python
send_message_async(
    message: str,
    *,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    top_k: typing.Optional[int] = None,
    top_p: typing.Optional[float] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None,
    candidate_count: typing.Optional[int] = None,
    grounding_source: typing.Optional[
        typing.Union[
            vertexai.language_models._language_models.WebSearch,
            vertexai.language_models._language_models.VertexAISearch,
            vertexai.language_models._language_models.InlineContext,
        ]
    ] = None
) -> vertexai.language_models.MultiCandidateTextGenerationResponse
```
Asynchronously sends a message to the language model and gets a response.
Parameter

| Name | Description |
| --- | --- |
| `message` | `str`. Message to send to the model. |
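A minimal async sketch, assuming `chat` is an existing `ChatSession`; the coroutine is awaited inside an asyncio event loop.

```python
import asyncio

async def ask(chat, question: str) -> str:
    # Awaits the model response without blocking the event loop.
    response = await chat.send_message_async(question, temperature=0.2)
    return response.text

# Example driver; `chat` would be created with ChatModel.start_chat as above.
# print(asyncio.run(ask(chat, "Name three packing essentials.")))
```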
send_message_streaming
```python
send_message_streaming(
    message: str,
    *,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    top_k: typing.Optional[int] = None,
    top_p: typing.Optional[float] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None
) -> typing.Iterator[vertexai.language_models.TextGenerationResponse]
```
Sends a message to the language model and gets a streamed response.
The response is added to the message history only once it has been fully read.
Parameter

| Name | Description |
| --- | --- |
| `message` | `str`. Message to send to the model. |
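A usage sketch, assuming `chat` is an existing `ChatSession`: each yielded `TextGenerationResponse` carries an incremental chunk of text, and the full reply becomes part of `message_history` only after the iterator is exhausted.

```python
# Assumes `chat` is an existing ChatSession.
chunks = []
for chunk in chat.send_message_streaming("Write a short haiku about the sea."):
    chunks.append(chunk.text)
    print(chunk.text, end="", flush=True)   # print partial text as it arrives

full_reply = "".join(chunks)
# Only now, after the stream has been fully read, is the reply
# part of chat.message_history.
```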
send_message_streaming_async
```python
send_message_streaming_async(
    message: str,
    *,
    max_output_tokens: typing.Optional[int] = None,
    temperature: typing.Optional[float] = None,
    top_k: typing.Optional[int] = None,
    top_p: typing.Optional[float] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None
) -> typing.AsyncIterator[vertexai.language_models.TextGenerationResponse]
```
Asynchronously sends a message to the language model and gets a streamed response.
The response is added to the message history only once it has been fully read.
Parameter

| Name | Description |
| --- | --- |
| `message` | `str`. Message to send to the model. |
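A minimal async-streaming sketch, assuming `chat` is an existing `ChatSession`; the async iterator is consumed with `async for`.

```python
import asyncio

async def stream_reply(chat, question: str) -> str:
    parts = []
    # Consume the async iterator; each item is a partial TextGenerationResponse.
    async for chunk in chat.send_message_streaming_async(question):
        parts.append(chunk.text)
    # The history is updated only once the stream has been fully read.
    return "".join(parts)

# Example driver; `chat` would be created with ChatModel.start_chat as above.
# print(asyncio.run(stream_reply(chat, "Tell me a short story.")))
```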