Classes for working with the Gemini models.
Classes
AutomaticFunctionCallingResponder
```
AutomaticFunctionCallingResponder(max_automatic_function_calls: int = 1)
```
Responder that automatically responds to the model's function calls.
CallableFunctionDeclaration
```
CallableFunctionDeclaration(
    name: str,
    function: typing.Callable[[...], typing.Any],
    parameters: typing.Dict[str, typing.Any],
    description: typing.Optional[str] = None,
)
```
A function declaration paired with the callable function that implements it.
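Because `CallableFunctionDeclaration` bundles the declaration with the callable itself, it pairs naturally with `AutomaticFunctionCallingResponder`: the responder invokes the callable and sends the function response back on your behalf. A minimal sketch; the `get_current_weather` helper and its return value are illustrative, and the responder is passed to `GenerativeModel.start_chat` via the `responder` parameter shown in the `ChatSession` signature below:
```
from vertexai.generative_models import (
    AutomaticFunctionCallingResponder,
    CallableFunctionDeclaration,
    GenerativeModel,
    Tool,
)

def get_current_weather(location: str):
    # Hypothetical implementation; replace with a real weather lookup.
    return {"weather_there": "super nice"}

weather_func = CallableFunctionDeclaration(
    name="get_current_weather",
    function=get_current_weather,
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
    description="Get the current weather in a given location",
)

model = GenerativeModel(
    "gemini-pro",
    tools=[Tool(function_declarations=[weather_func])],
)
chat = model.start_chat(
    responder=AutomaticFunctionCallingResponder(max_automatic_function_calls=1)
)
print(chat.send_message("What is the weather like in Boston?"))
```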
Candidate
```
Candidate()
```
A response candidate generated by the model.
ChatSession
```
ChatSession(
    model: vertexai.generative_models._generative_models._GenerativeModel,
    *,
    history: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Content]
    ] = None,
    response_validation: bool = True,
    responder: typing.Optional[
        vertexai.generative_models._generative_models.AutomaticFunctionCallingResponder
    ] = None,
    raise_on_blocked: typing.Optional[bool] = None
)
```
A chat session that holds the chat history.
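`ChatSession` objects are usually obtained from `GenerativeModel.start_chat` rather than constructed directly. A sketch of resuming a conversation from saved history (the history turns are illustrative):
```
from vertexai.generative_models import Content, GenerativeModel, Part

model = GenerativeModel("gemini-pro")
chat = model.start_chat(history=[
    Content(role="user", parts=[Part.from_text("Hello")]),
    Content(role="model", parts=[Part.from_text("Hi! How can I help you?")]),
])
print(chat.send_message("What did I just say?"))
print(chat.history)  # Now also includes the new turns.
```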
Content
```
Content(
    *,
    parts: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Part]
    ] = None,
    role: typing.Optional[str] = None
)
```
The multi-part content of a message.
Usage:
```
response = model.generate_content(contents=[
    Content(role="user", parts=[Part.from_text("Why is the sky blue?")])
])
```
FinishReason
```
FinishReason(value)
```
The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.
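A sketch of checking why generation stopped; `STOP` and `MAX_TOKENS` are among the enum's members:
```
from vertexai.generative_models import FinishReason, GenerativeModel

model = GenerativeModel("gemini-pro")
response = model.generate_content("Tell me a story")
candidate = response.candidates[0]
if candidate.finish_reason == FinishReason.MAX_TOKENS:
    print("Response was truncated; consider raising max_output_tokens.")
```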
FunctionDeclaration
```
FunctionDeclaration(
    *,
    name: str,
    parameters: typing.Dict[str, typing.Any],
    description: typing.Optional[str] = None
)
```
A representation of a function declaration.
Usage: Create function declaration and tool:
```
get_current_weather_func = generative_models.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
)

weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```
Use tool in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```
Use tool in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```
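In the manual flow above, the model's first reply contains a function call rather than text. A sketch of reading the call before computing and sending the function response (a hedged illustration; the `args` field is a proto map, so it is converted with `dict()` here):
```
response = chat.send_message("What is the weather like in Boston?")
function_call = response.candidates[0].content.parts[0].function_call
print(function_call.name)        # "get_current_weather"
print(dict(function_call.args))  # e.g. {"location": "Boston, MA"}
```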
GenerationConfig
```
GenerationConfig(
    *,
    temperature: typing.Optional[float] = None,
    top_p: typing.Optional[float] = None,
    top_k: typing.Optional[int] = None,
    candidate_count: typing.Optional[int] = None,
    max_output_tokens: typing.Optional[int] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None,
    presence_penalty: typing.Optional[float] = None,
    frequency_penalty: typing.Optional[float] = None,
    response_mime_type: typing.Optional[str] = None,
    response_schema: typing.Optional[typing.Dict[str, typing.Any]] = None
)
```
Parameters for the generation.
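A sketch of applying a config to a single request; the values are illustrative, and per the `GenerativeModel` signature a plain dict with the same keys is also accepted:
```
from vertexai.generative_models import GenerationConfig, GenerativeModel

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "List three primary colors.",
    generation_config=GenerationConfig(
        temperature=0.2,
        max_output_tokens=256,
    ),
)
print(response.text)
```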
GenerationResponse
```
GenerationResponse()
```
The response from the model.
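Commonly used accessors, as a sketch; note that `response.text` is a convenience property that raises when the candidate has no text (for example, when the response was blocked):
```
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-pro")
response = model.generate_content("Hello")
print(response.text)            # Shortcut to the single candidate's text.
for candidate in response.candidates:
    print(candidate.finish_reason)
print(response.usage_metadata)  # Token counts, when populated by the service.
```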
GenerativeModel
```
GenerativeModel(
    model_name: str,
    *,
    generation_config: typing.Optional[
        typing.Union[
            vertexai.generative_models._generative_models.GenerationConfig,
            typing.Dict[str, typing.Any],
        ]
    ] = None,
    safety_settings: typing.Optional[
        typing.Union[
            typing.List[vertexai.generative_models._generative_models.SafetySetting],
            typing.Dict[
                google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
                google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
            ],
        ]
    ] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None,
    system_instruction: typing.Optional[
        typing.Union[
            str,
            vertexai.generative_models._generative_models.Image,
            vertexai.generative_models._generative_models.Part,
            typing.List[
                typing.Union[
                    str,
                    vertexai.generative_models._generative_models.Image,
                    vertexai.generative_models._generative_models.Part,
                ]
            ],
        ]
    ] = None
)
```
Initializes GenerativeModel.
Usage:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```
Parameters
Name | Description
---|---
model_name (str) | Model Garden model resource name. Alternatively, a tuned model endpoint resource name can be provided.
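A sketch combining several constructor arguments; the model name and instruction text are illustrative:
```
from vertexai.generative_models import GenerationConfig, GenerativeModel

model = GenerativeModel(
    "gemini-pro",
    system_instruction="You are a concise assistant.",
    generation_config=GenerationConfig(temperature=0.0),
)
print(model.generate_content("Hello").text)
```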
HarmBlockThreshold
```
HarmBlockThreshold(value)
```
Probability-based threshold levels for blocking content.
HarmCategory
```
HarmCategory(value)
```
Harm categories for which content can be blocked.
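The two enums are typically used together to configure safety filtering, either as a dict (per the `GenerativeModel` signature) or as a list of `SafetySetting` objects (shown below). A dict-based sketch with illustrative thresholds:
```
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
)

model = GenerativeModel(
    "gemini-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
)
```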
Image
```
Image()
```
The image that can be sent to a generative model.
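A sketch of constructing an image and sending it with a prompt; `load_from_file` also appears in the `Part` usage below, and the model name here is illustrative:
```
from vertexai.generative_models import GenerativeModel, Image, Part

image = Image.load_from_file("image.jpg")  # From a local path.
# image = Image.from_bytes(raw_bytes)      # From in-memory bytes, if already loaded.

model = GenerativeModel("gemini-pro-vision")
print(model.generate_content([Part.from_image(image), "Describe this image."]).text)
```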
Part
```
Part()
```
A part of a multi-part Content message.
Usage:
```
text_part = Part.from_text("Why is the sky blue?")
image_part = Part.from_image(Image.load_from_file("image.jpg"))
video_part = Part.from_uri(uri="gs://.../video.mp4", mime_type="video/mp4")
function_response_part = Part.from_function_response(
    name="get_current_weather",
    response={
        "content": {"weather_there": "super nice"},
    }
)

response1 = model.generate_content([text_part, image_part])
response2 = model.generate_content(video_part)
response3 = chat.send_message(function_response_part)
```
ResponseBlockedError
```
ResponseBlockedError(
    message: str,
    request_contents: typing.List[
        vertexai.generative_models._generative_models.Content
    ],
    responses: typing.List[
        vertexai.generative_models._generative_models.GenerationResponse
    ],
)
```
Exception raised when the model response was blocked.
ResponseValidationError
```
ResponseValidationError(
    message: str,
    request_contents: typing.List[
        vertexai.generative_models._generative_models.Content
    ],
    responses: typing.List[
        vertexai.generative_models._generative_models.GenerationResponse
    ],
)
```
Exception raised when the model response fails validation.
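Both exception classes are constructed with the request contents and the (partial) responses, which can be inspected in a handler. A sketch of catching a validation failure during chat (raised by `send_message` when `response_validation` is enabled):
```
from vertexai.generative_models import GenerativeModel, ResponseValidationError

chat = GenerativeModel("gemini-pro").start_chat(response_validation=True)
try:
    print(chat.send_message("Hello"))
except ResponseValidationError as e:
    # Inspect what came back; the failed turn is not kept in chat.history.
    for response in e.responses:
        print(response.candidates[0].finish_reason)
```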
SafetySetting
```
SafetySetting(
    *,
    category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
    threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
    method: typing.Optional[
        google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod
    ] = None
)
```
Safety filter settings for content generation.
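The list-based counterpart of the dict example above, as a sketch; the optional `method` parameter selects severity- versus probability-based blocking, per the signature:
```
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

model = GenerativeModel(
    "gemini-pro",
    safety_settings=[
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_HARASSMENT,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        ),
    ],
)
```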
Tool
```
Tool(
    function_declarations: typing.List[
        vertexai.generative_models._generative_models.FunctionDeclaration
    ],
)
```
A collection of functions that the model may use to generate a response.
Usage: Create tool from function declarations:
```
get_current_weather_func = generative_models.FunctionDeclaration(...)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```
Use tool in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```
Use tool in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```
ToolConfig
```
ToolConfig(
    function_calling_config: vertexai.generative_models._generative_models.ToolConfig.FunctionCallingConfig,
)
```
Config shared for all tools provided in the request.
Usage: Create ToolConfig:
```
tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
        allowed_function_names=["get_current_weather"],
    )
)
```
Use ToolConfig in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
))
```
Use ToolConfig in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```
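Besides `ANY`, the `Mode` enum also provides `AUTO` (the model decides whether to call a function) and `NONE` (function calling disabled). A sketch that disables function calling for a request; member names follow the underlying API:
```
from vertexai.generative_models import ToolConfig

text_only_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.NONE,
    )
)
```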