Module generative_models (1.53.0)

Classes for working with the Gemini models.

Classes

AutomaticFunctionCallingResponder

AutomaticFunctionCallingResponder(max_automatic_function_calls: int = 1)

Responder that automatically responds to the model's function calls.

CallableFunctionDeclaration

CallableFunctionDeclaration(
    name: str,
    function: typing.Callable[[...], typing.Any],
    parameters: typing.Dict[str, typing.Any],
    description: typing.Optional[str] = None,
)

A function declaration paired with the Python callable that implements it.
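
A minimal sketch of how this class and AutomaticFunctionCallingResponder work together. The weather helper, its parameter schema, and the model name are placeholders, and it assumes `start_chat` accepts the `responder` argument shown in the ChatSession signature below:

```
from vertexai.generative_models import (
    AutomaticFunctionCallingResponder,
    CallableFunctionDeclaration,
    GenerativeModel,
    Tool,
)

# Placeholder helper the SDK may call on the model's behalf.
def get_current_weather(location: str):
    return {"weather_there": "super nice"}

weather_tool = Tool(function_declarations=[
    CallableFunctionDeclaration(
        name="get_current_weather",
        function=get_current_weather,
        parameters={
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
        description="Get the current weather in a given location",
    )
])

model = GenerativeModel("gemini-pro", tools=[weather_tool])
chat = model.start_chat(
    responder=AutomaticFunctionCallingResponder(max_automatic_function_calls=1)
)
# The responder runs get_current_weather and feeds its result back to the
# model automatically, up to max_automatic_function_calls times per turn.
print(chat.send_message("What is the weather like in Boston?"))
```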

Candidate

Candidate()

A response candidate generated by the model.
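
Candidates are read from a GenerationResponse rather than constructed directly. A brief sketch of inspecting them, with a placeholder model name:

```
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-pro")
response = model.generate_content("Why is the sky blue?")
for candidate in response.candidates:
    # Each candidate carries its own content parts and stop reason.
    print(candidate.finish_reason)
    for part in candidate.content.parts:
        print(part.text)
```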

ChatSession

ChatSession(
    model: vertexai.generative_models._generative_models._GenerativeModel,
    *,
    history: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Content]
    ] = None,
    response_validation: bool = True,
    responder: typing.Optional[
        vertexai.generative_models._generative_models.AutomaticFunctionCallingResponder
    ] = None,
    raise_on_blocked: typing.Optional[bool] = None
)

A chat session that holds the chat history.
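
Chat sessions are usually created through `GenerativeModel.start_chat` rather than by calling this constructor directly. A minimal sketch:

```
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-pro")
chat = model.start_chat()
print(chat.send_message("Hello"))
print(chat.send_message("What did I just say?"))
# The accumulated Content objects are available on chat.history.
print(len(chat.history))
```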

Content

Content(
    *,
    parts: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Part]
    ] = None,
    role: typing.Optional[str] = None
)

The multi-part content of a message.

Usage:

```
response = model.generate_content(contents=[
    Content(role="user", parts=[Part.from_text("Why is sky blue?")])
])
```

FinishReason

FinishReason(value)

The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.
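
A brief sketch of checking why generation stopped, assuming the enum exposes members such as STOP and MAX_TOKENS (mirroring the underlying API enum):

```
from vertexai.generative_models import FinishReason, GenerativeModel

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Write a long story.", generation_config={"max_output_tokens": 32}
)
reason = response.candidates[0].finish_reason
if reason == FinishReason.MAX_TOKENS:
    print("Output was truncated; consider raising max_output_tokens.")
elif reason == FinishReason.STOP:
    print("The model stopped on its own.")
```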

FunctionDeclaration

FunctionDeclaration(
    *,
    name: str,
    parameters: typing.Dict[str, typing.Any],
    description: typing.Optional[str] = None
)

A representation of a function declaration.

Usage: Create function declaration and tool:

```
get_current_weather_func = generative_models.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```
Use tool in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```
Use tool in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

GenerationConfig

GenerationConfig(
    *,
    temperature: typing.Optional[float] = None,
    top_p: typing.Optional[float] = None,
    top_k: typing.Optional[int] = None,
    candidate_count: typing.Optional[int] = None,
    max_output_tokens: typing.Optional[int] = None,
    stop_sequences: typing.Optional[typing.List[str]] = None,
    presence_penalty: typing.Optional[float] = None,
    frequency_penalty: typing.Optional[float] = None,
    response_mime_type: typing.Optional[str] = None,
    response_schema: typing.Optional[typing.Dict[str, typing.Any]] = None
)

Parameters for the generation.
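
A minimal sketch of passing a per-request configuration; the values are illustrative, and the same fields can also be supplied as a plain dict:

```
from vertexai.generative_models import GenerationConfig, GenerativeModel

model = GenerativeModel("gemini-pro")
config = GenerationConfig(
    temperature=0.2,
    top_p=0.95,
    max_output_tokens=256,
    stop_sequences=["\n\n"],
)
print(model.generate_content("Summarize the water cycle.", generation_config=config))
```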

GenerationResponse

GenerationResponse()

The response from the model.
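
Responses are returned by `generate_content` and `send_message`; a brief sketch of reading one, assuming the `text` shortcut and `candidates` list accessors:

```
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-pro")
response = model.generate_content("Hello")
print(response.text)                         # text of the first candidate
print(response.candidates[0].finish_reason)  # why that candidate stopped
```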

GenerativeModel

GenerativeModel(
    model_name: str,
    *,
    generation_config: typing.Optional[
        typing.Union[
            vertexai.generative_models._generative_models.GenerationConfig,
            typing.Dict[str, typing.Any],
        ]
    ] = None,
    safety_settings: typing.Optional[
        typing.Union[
            typing.List[vertexai.generative_models._generative_models.SafetySetting],
            typing.Dict[
                google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
                google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
            ],
        ]
    ] = None,
    tools: typing.Optional[
        typing.List[vertexai.generative_models._generative_models.Tool]
    ] = None,
    tool_config: typing.Optional[
        vertexai.generative_models._generative_models.ToolConfig
    ] = None,
    system_instruction: typing.Optional[
        typing.Union[
            str,
            vertexai.generative_models._generative_models.Image,
            vertexai.generative_models._generative_models.Part,
            typing.List[
                typing.Union[
                    str,
                    vertexai.generative_models._generative_models.Image,
                    vertexai.generative_models._generative_models.Part,
                ]
            ],
        ]
    ] = None
)

Initializes GenerativeModel.

Usage:

model = GenerativeModel("gemini-pro")
print(model.generate_content("Hello"))
```
Parameters

model_name (str)
    Model Garden model resource name. Alternatively, a tuned model endpoint resource name can be provided.
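
A slightly fuller construction sketch; the model name, system instruction, and settings below are illustrative placeholders:

```
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
)

model = GenerativeModel(
    "gemini-pro",
    system_instruction=["You are a concise assistant."],
    generation_config={"temperature": 0.2, "max_output_tokens": 256},
    # safety_settings also accepts a dict of HarmCategory -> HarmBlockThreshold.
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
)
print(model.generate_content("Hello"))
```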

HarmBlockThreshold

HarmBlockThreshold(value)

Probability-based threshold levels for blocking content.

HarmCategory

HarmCategory(value)

Harm categories that can cause content to be blocked.

Image

Image()

An image that can be sent to a generative model.
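
Images are typically created via the class's load methods rather than the bare constructor; a minimal sketch using `load_from_file` (also shown in the Part example below), with placeholder file and model names:

```
from vertexai.generative_models import GenerativeModel, Image

model = GenerativeModel("gemini-pro-vision")
image = Image.load_from_file("image.jpg")
print(model.generate_content([image, "Describe this image."]))
```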

Part

Part()

A part of a multi-part Content message.

Usage:

```
text_part = Part.from_text("Why is sky blue?")
image_part = Part.from_image(Image.load_from_file("image.jpg"))
video_part = Part.from_uri(uri="gs://.../video.mp4", mime_type="video/mp4")
function_response_part = Part.from_function_response(
    name="get_current_weather",
    response={
        "content": {"weather_there": "super nice"},
    }
)

response1 = model.generate_content([text_part, image_part])
response2 = model.generate_content(video_part)
response3 = chat.send_message(function_response_part)
```

ResponseBlockedError

ResponseBlockedError(
    message: str,
    request_contents: typing.List[
        vertexai.generative_models._generative_models.Content
    ],
    responses: typing.List[
        vertexai.generative_models._generative_models.GenerationResponse
    ],
)

Raised when the model's response is blocked.

ResponseValidationError

ResponseValidationError(
    message: str,
    request_contents: typing.List[
        vertexai.generative_models._generative_models.Content
    ],
    responses: typing.List[
        vertexai.generative_models._generative_models.GenerationResponse
    ],
)

Raised when the model's response fails validation (see the `response_validation` option on `ChatSession`).
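
A brief sketch of handling these errors around a chat call; whether validation errors are raised is controlled by the `response_validation` flag, and the prompt is a placeholder:

```
from vertexai.generative_models import (
    GenerativeModel,
    ResponseValidationError,
)

model = GenerativeModel("gemini-pro")
chat = model.start_chat(response_validation=True)
try:
    print(chat.send_message("Tell me something that might get blocked"))
except ResponseValidationError as err:
    # The error message describes why the response was rejected.
    print(f"Response failed validation: {err}")
```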

SafetySetting

SafetySetting(
    *,
    category: google.cloud.aiplatform_v1beta1.types.content.HarmCategory,
    threshold: google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockThreshold,
    method: typing.Optional[
        google.cloud.aiplatform_v1beta1.types.content.SafetySetting.HarmBlockMethod
    ] = None
)

Safety settings that control content filtering during generation.
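
A minimal sketch of building explicit safety settings; the category/threshold pairs are illustrative and use enum members from the underlying API:

```
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_ONLY_HIGH,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    ),
]

# Settings can be applied at model construction or per request.
model = GenerativeModel("gemini-pro", safety_settings=safety_settings)
print(model.generate_content("Hello", safety_settings=safety_settings))
```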

Tool

Tool(
    function_declarations: typing.List[
        vertexai.generative_models._generative_models.FunctionDeclaration
    ],
)

A collection of functions that the model may use to generate a response.

Usage: Create tool from function declarations:

```
get_current_weather_func = generative_models.FunctionDeclaration(...)
weather_tool = generative_models.Tool(
    function_declarations=[get_current_weather_func],
)
```
Use tool in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
))
```
Use tool in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```

ToolConfig

ToolConfig(
    function_calling_config: vertexai.generative_models._generative_models.ToolConfig.FunctionCallingConfig,
)

Config shared for all tools provided in the request.

Usage: Create ToolConfig

```
tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
        allowed_function_names=["get_current_weather"],
    )
)
```
Use ToolConfig in `GenerativeModel.generate_content`:
```
model = GenerativeModel("gemini-pro")
print(model.generate_content(
    "What is the weather like in Boston?",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
))
```
Use ToolConfig in chat:
```
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
    tool_config=tool_config,
)
chat = model.start_chat()
print(chat.send_message("What is the weather like in Boston?"))
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather_there": "super nice"},
        }
    ),
))
```