Supported models
The following table lists the models that support video understanding:
Model | Video modality details
---|---
Gemini 1.5 Flash (see the Gemini 1.5 Flash model card) | Maximum videos per prompt: 10
Gemini 1.5 Pro (see the Gemini 1.5 Pro model card) | Maximum videos per prompt: 10
Gemini 1.0 Pro Vision (see the Gemini 1.0 Pro Vision model card) | Maximum video length: 2 minutes. Maximum videos per prompt: 1. Audio in the video is ignored.
For a list of languages supported by Gemini models, see model information Google models. To learn more about how to design multimodal prompts, see Design multimodal prompts. If you're looking for a way to use Gemini directly from your mobile and web apps, see the Vertex AI in Firebase SDKs for Android, Swift, web, and Flutter apps.
Add videos to a request
You can add a single video or multiple videos to your request to Gemini, and the video can include audio.
Single video
The sample code in each of the following tabs shows a different way to identify what's in a video. This sample works with all Gemini multimodal models.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `stream` parameter in `generate_content`.

response = model.generate_content(contents=[...], stream=True)

For a non-streaming response, remove the parameter or set it to `False`.
Sample code
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Java SDK for Gemini reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `generateContentStream` method.
public ResponseStream<GenerateContentResponse> generateContentStream(Content content)
For a non-streaming response, use the `generateContent` method.
public GenerateContentResponse generateContent(Content content)
Sample code
Node.js
Before trying this sample, follow the Node.js setup instructions in the Generative AI quickstart using the Node.js SDK. For more information, see the Node.js SDK for Gemini reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `generateContentStream` method.
const streamingResp = await generativeModel.generateContentStream(request);
For a non-streaming response, use the `generateContent` method.

const response = await generativeModel.generateContent(request);
Sample code
Go
Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Go SDK for Gemini reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `GenerateContentStream` method.
iter := model.GenerateContentStream(ctx, genai.Text("Tell me a story about a lumberjack and his giant ox. Keep it very short."))
For a non-streaming response, use the `GenerateContent` method.
resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
Sample code
C#
Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI C# reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `StreamGenerateContent` method.
public virtual PredictionServiceClient.StreamGenerateContentStream StreamGenerateContent(GenerateContentRequest request)
For a non-streaming response, use the `GenerateContentAsync` method.
public virtual Task<GenerateContentResponse> GenerateContentAsync(GenerateContentRequest request)
For more information on how the server can stream responses, see Streaming RPCs.
Sample code
REST
After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.
Before using any of the request data, make the following replacements:
- `LOCATION`: The region to process the request. Enter a supported region. For the full list of supported regions, see Available locations. A partial list of available regions: `us-central1`, `us-west4`, `northamerica-northeast1`, `us-east4`, `us-west1`, `asia-northeast3`, `asia-southeast1`, `asia-northeast1`.
- `PROJECT_ID`: Your project ID.
- `FILE_URI`: The URI or URL of the file to include in the prompt. Acceptable values include the following:
  - Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For `gemini-1.5-pro` and `gemini-1.5-flash`, the size limit is 2 GB. For `gemini-1.0-pro-vision`, the size limit is 20 MB.
  - HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
  - YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

  When specifying a `fileUri`, you must also specify the media type (`mimeType`) of the file. If you don't have a video file in Cloud Storage, you can use the following publicly available file: `gs://cloud-samples-data/video/animals.mp4` with a MIME type of `video/mp4`. To view this video, open the sample MP4 file.
- `MIME_TYPE`: The media type of the file specified in the `data` or `fileUri` fields. Acceptable values include the following: `application/pdf`, `audio/mpeg`, `audio/mp3`, `audio/wav`, `image/png`, `image/jpeg`, `image/webp`, `text/plain`, `video/mov`, `video/mpeg`, `video/mp4`, `video/mpg`, `video/avi`, `video/wmv`, `video/mpegps`, `video/flv`
- `TEXT`: The text instructions to include in the prompt. For example, `What is in the video?`
To send your request, choose one of these options:
curl
Save the request body in a file named `request.json`. Run the following command in the terminal to create or overwrite this file in the current directory:

cat > request.json << 'EOF'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
EOF
Then execute the following command to send your REST request:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent"
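If you're scripting these calls, the request body can also be assembled programmatically before it's written to `request.json`. A minimal sketch using only the Python standard library (the placeholder values are the same ones described above):

```python
import json

def build_request_body(file_uri: str, mime_type: str, prompt: str) -> str:
    """Build the request.json payload for a single-video prompt."""
    body = {
        "contents": {
            "role": "USER",
            "parts": [
                # The media part: a Cloud Storage URI (or HTTP/YouTube URL)
                # together with its MIME type.
                {"fileData": {"fileUri": file_uri, "mimeType": mime_type}},
                # The text part: the instructions for the model.
                {"text": prompt},
            ],
        }
    }
    return json.dumps(body, indent=2)

payload = build_request_body(
    "gs://cloud-samples-data/video/animals.mp4",
    "video/mp4",
    "What is in the video?",
)
```

Write `payload` to `request.json` and send it with the same curl command shown above.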
PowerShell
Save the request body in a file named `request.json`. Run the following command in the terminal to create or overwrite this file in the current directory:

@'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
'@ | Out-File -FilePath request.json -Encoding utf8
Then execute the following command to send your REST request:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Note the following in the URL for this sample:
- Use the `generateContent` method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the `streamGenerateContent` method.
- The multimodal model ID is located at the end of the URL, before the method (for example, `gemini-1.5-flash` or `gemini-1.0-pro-vision`). This sample may support other models as well.
Console
To send a multimodal prompt by using the Google Cloud console, do the following:

In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
Click Open freeform.
Optional: Configure the model and parameters:
- Model: Select a model.
- Region: Select the region that you want to use.
Temperature: Use the slider or textbox to enter a value for temperature. The temperature is used for sampling during response generation, which occurs when `topP` and `topK` are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of `0` means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature.
Output token limit: Use the slider or textbox to enter a value for the max output limit.
Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.
Specify a lower value for shorter responses and a higher value for potentially longer responses.
Add stop sequence: Optional. Enter a stop sequence, which is a series of characters that includes spaces. If the model encounters a stop sequence, the response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.
Optional: To configure advanced parameters, click Advanced and configure as follows:
Click to expand advanced configurations
Top-K: Use the slider or textbox to enter a value for top-K (not supported for Gemini 1.5). Top-K changes how the model selects tokens for output. A top-K of `1` means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of `3` means that the next token is selected from among the three most probable tokens by using temperature.

For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling.

Specify a lower value for less random responses and a higher value for more random responses.
- Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from the most probable to the least probable until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to `0`.
- Max responses: Use the slider or textbox to enter a value for the number of responses to generate.
- Streaming responses: Enable to print responses as they're generated.
- Safety filter threshold: Select the threshold of how likely you are to see responses that could be harmful.
- Enable Grounding: Grounding isn't supported for multimodal prompts.
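The way temperature, top-K, and top-P interact can be sketched in a few lines. This is an illustrative toy model of the selection pipeline described above, not the service's actual implementation:

```python
import math
import random

def sample_token(probs, temperature=1.0, top_k=None, top_p=None, seed=0):
    """Toy token selection: apply top-K, then top-P, then temperature sampling.

    probs is a list of token probabilities indexed by token ID.
    """
    # Rank token IDs from most to least probable.
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]          # keep only the K most probable tokens
    if top_p is not None:
        kept, total = [], 0.0
        for token_id, p in ranked:       # keep tokens until mass reaches top-P
            kept.append((token_id, p))
            total += p
            if total >= top_p:
                break
        ranked = kept
    if temperature == 0:
        return ranked[0][0]              # greedy decoding: always the top token
    # Rescale the surviving probabilities by temperature and sample one token.
    weights = [math.exp(math.log(p) / temperature) for _, p in ranked]
    rng = random.Random(seed)
    return rng.choices([token_id for token_id, _ in ranked], weights=weights)[0]
```

With `temperature=0` or `top_k=1` the most probable token is always chosen, and with `top_p=0` only the single most probable token survives filtering, which matches the "least variable results" note above.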
Click Insert Media, and select a source for your file.
Upload
Select the file that you want to upload and click Open.
By URL
Enter the URL of the file that you want to use and click Insert.
YouTube
Enter the URL of the YouTube video that you want to use and click Insert.
You can use any public video or a video that's owned by the account that you used to sign in to the Google Cloud console.
Cloud Storage
Select the bucket and then the file from the bucket that you want to import and click Select.
Google Drive
- Choose an account and give consent to Vertex AI Studio to access your account the first time you select this option. You can upload multiple files that have a total size of up to 10 MB. A single file can't exceed 7 MB.
- Click the file that you want to add.
Click Select.
The file thumbnail displays in the Prompt pane. The total number of tokens also displays. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.
Enter your text prompt in the Prompt pane.
Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.
Click Submit.
Optional: To save your prompt to My prompts, click Save.

Optional: To get the Python code or a curl command for your prompt, click Get code.
Video with audio
The following shows you how to summarize a video file with audio and return chapters with timestamps. This sample works with Gemini 1.5 Pro only.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `stream` parameter in `generate_content`.

response = model.generate_content(contents=[...], stream=True)

For a non-streaming response, remove the parameter or set it to `False`.
Sample code
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Java SDK for Gemini reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `generateContentStream` method.
public ResponseStream<GenerateContentResponse> generateContentStream(Content content)
For a non-streaming response, use the `generateContent` method.
public GenerateContentResponse generateContent(Content content)
Sample code
Node.js
Before trying this sample, follow the Node.js setup instructions in the Generative AI quickstart using the Node.js SDK. For more information, see the Node.js SDK for Gemini reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `generateContentStream` method.
const streamingResp = await generativeModel.generateContentStream(request);
For a non-streaming response, use the `generateContent` method.

const response = await generativeModel.generateContent(request);
Sample code
Go
Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI Go SDK for Gemini reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `GenerateContentStream` method.
iter := model.GenerateContentStream(ctx, genai.Text("Tell me a story about a lumberjack and his giant ox. Keep it very short."))
For a non-streaming response, use the `GenerateContent` method.
resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
Sample code
C#
Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart. For more information, see the Vertex AI C# reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Streaming and non-streaming responses
You can choose whether the model generates streaming responses or non-streaming responses. For streaming responses, you receive each response as soon as its output token is generated. For non-streaming responses, you receive all responses after all of the output tokens are generated.
For a streaming response, use the `StreamGenerateContent` method.
public virtual PredictionServiceClient.StreamGenerateContentStream StreamGenerateContent(GenerateContentRequest request)
For a non-streaming response, use the `GenerateContentAsync` method.
public virtual Task<GenerateContentResponse> GenerateContentAsync(GenerateContentRequest request)
For more information on how the server can stream responses, see Streaming RPCs.
Sample code
REST
After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.
Before using any of the request data, make the following replacements:
- `LOCATION`: The region to process the request. Enter a supported region. For the full list of supported regions, see Available locations. A partial list of available regions: `us-central1`, `us-west4`, `northamerica-northeast1`, `us-east4`, `us-west1`, `asia-northeast3`, `asia-southeast1`, `asia-northeast1`.
- `PROJECT_ID`: Your project ID.
- `FILE_URI`: The URI or URL of the file to include in the prompt. Acceptable values include the following:
  - Cloud Storage bucket URI: The object must either be publicly readable or reside in the same Google Cloud project that's sending the request. For `gemini-1.5-pro` and `gemini-1.5-flash`, the size limit is 2 GB. For `gemini-1.0-pro-vision`, the size limit is 20 MB.
  - HTTP URL: The file URL must be publicly readable. You can specify one video file, one audio file, and up to 10 image files per request. Audio files, video files, and documents can't exceed 15 MB.
  - YouTube video URL: The YouTube video must either be owned by the account that you used to sign in to the Google Cloud console or be public. Only one YouTube video URL is supported per request.

  When specifying a `fileUri`, you must also specify the media type (`mimeType`) of the file. If you don't have a video file in Cloud Storage, you can use the following publicly available file: `gs://cloud-samples-data/generative-ai/video/pixel8.mp4` with a MIME type of `video/mp4`. To view this video, open the sample MP4 file.
- `MIME_TYPE`: The media type of the file specified in the `data` or `fileUri` fields. Acceptable values include the following: `application/pdf`, `audio/mpeg`, `audio/mp3`, `audio/wav`, `image/png`, `image/jpeg`, `image/webp`, `text/plain`, `video/mov`, `video/mpeg`, `video/mp4`, `video/mpg`, `video/avi`, `video/wmv`, `video/mpegps`, `video/flv`
- `TEXT`: The text instructions to include in the prompt. For example, `Provide a description of the video. The description should also contain anything important which people say in the video.`
To send your request, choose one of these options:
curl
Save the request body in a file named `request.json`. Run the following command in the terminal to create or overwrite this file in the current directory:

cat > request.json << 'EOF'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
EOF
Then execute the following command to send your REST request:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent"
PowerShell
Save the request body in a file named `request.json`. Run the following command in the terminal to create or overwrite this file in the current directory:

@'
{
  "contents": {
    "role": "USER",
    "parts": [
      {
        "fileData": {
          "fileUri": "FILE_URI",
          "mimeType": "MIME_TYPE"
        }
      },
      {
        "text": "TEXT"
      }
    ]
  }
}
'@ | Out-File -FilePath request.json -Encoding utf8
Then execute the following command to send your REST request:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/gemini-1.5-flash:generateContent" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Note the following in the URL for this sample:
- Use the `generateContent` method to request that the response is returned after it's fully generated. To reduce the perception of latency to a human audience, stream the response as it's being generated by using the `streamGenerateContent` method.
- The multimodal model ID is located at the end of the URL, before the method (for example, `gemini-1.5-flash` or `gemini-1.0-pro-vision`). This sample may support other models as well.
Console
To send a multimodal prompt by using the Google Cloud console, do the following:

In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
Click Open freeform.
Optional: Configure the model and parameters:
- Model: Select a model.
- Region: Select the region that you want to use.
Temperature: Use the slider or textbox to enter a value for temperature. The temperature is used for sampling during response generation, which occurs when `topP` and `topK` are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of `0` means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible.

If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature.
Output token limit: Use the slider or textbox to enter a value for the max output limit.
Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.
Specify a lower value for shorter responses and a higher value for potentially longer responses.
Add stop sequence: Optional. Enter a stop sequence, which is a series of characters that includes spaces. If the model encounters a stop sequence, the response generation stops. The stop sequence isn't included in the response, and you can add up to five stop sequences.
Optional: To configure advanced parameters, click Advanced and configure as follows:
Click to expand advanced configurations
Top-K: Use the slider or textbox to enter a value for top-K (not supported for Gemini 1.5). Top-K changes how the model selects tokens for output. A top-K of `1` means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of `3` means that the next token is selected from among the three most probable tokens by using temperature.

For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling.

Specify a lower value for less random responses and a higher value for more random responses.
- Top-P: Use the slider or textbox to enter a value for top-P. Tokens are selected from the most probable to the least probable until the sum of their probabilities equals the value of top-P. For the least variable results, set top-P to `0`.
- Max responses: Use the slider or textbox to enter a value for the number of responses to generate.
- Streaming responses: Enable to print responses as they're generated.
- Safety filter threshold: Select the threshold of how likely you are to see responses that could be harmful.
- Enable Grounding: Grounding isn't supported for multimodal prompts.
Click Insert Media, and select a source for your file.
Upload
Select the file that you want to upload and click Open.
By URL
Enter the URL of the file that you want to use and click Insert.
YouTube
Enter the URL of the YouTube video that you want to use and click Insert.
You can use any public video or a video that's owned by the account that you used to sign in to the Google Cloud console.
Cloud Storage
Select the bucket and then the file from the bucket that you want to import and click Select.
Google Drive
- Choose an account and give consent to Vertex AI Studio to access your account the first time you select this option. You can upload multiple files that have a total size of up to 10 MB. A single file can't exceed 7 MB.
- Click the file that you want to add.
Click Select.
The file thumbnail displays in the Prompt pane. The total number of tokens also displays. If your prompt data exceeds the token limit, the tokens are truncated and aren't included in processing your data.
Enter your text prompt in the Prompt pane.
Optional: To view the Token ID to text and Token IDs, click the tokens count in the Prompt pane.
Click Submit.
Optional: To save your prompt to My prompts, click Save.

Optional: To get the Python code or a curl command for your prompt, click Get code.
Set optional model parameters
Each model has a set of optional parameters that you can set. For more information, see Content generation parameters.
Video requirements
Gemini multimodal models support the following video MIME types:
Video MIME type | Gemini 1.5 Flash | Gemini 1.5 Pro | Gemini 1.0 Pro Vision |
---|---|---|---|
FLV - video/x-flv |
|||
MOV - video/quicktime |
|||
MPEG - video/mpeg |
|||
MPEGPS - video/mpegps |
|||
MPG - video/mpg |
|||
MP4 - video/mp4 |
|||
WEBM - video/webm |
|||
WMV - video/wmv |
|||
3GPP - video/3gpp |
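One practical use of this list is validating a file's MIME type client-side before building a request. A small sketch (the set simply mirrors the list above):

```python
# Video MIME types supported by Gemini multimodal models, per the list above.
SUPPORTED_VIDEO_MIME_TYPES = {
    "video/x-flv", "video/quicktime", "video/mpeg", "video/mpegps",
    "video/mpg", "video/mp4", "video/webm", "video/wmv", "video/3gpp",
}

def is_supported_video(mime_type: str) -> bool:
    """Return True if the MIME type is in the supported-video list."""
    return mime_type.strip().lower() in SUPPORTED_VIDEO_MIME_TYPES
```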
Here's the maximum number of video files allowed in a prompt request:
- Gemini 1.0 Pro Vision: 1 video file
- Gemini 1.5 Flash and Gemini 1.5 Pro: 10 video files
Here's how tokens are calculated for video:
- All Gemini multimodal models: Videos are sampled at 1 frame per second (fps). Each video frame accounts for 258 tokens.
- Gemini 1.5 Flash and Gemini 1.5 Pro: The audio track is encoded with the video frames. The audio track is also broken down into 1-second chunks that each account for 32 tokens. The video frame and audio tokens are interleaved together with their timestamps. The timestamps are represented as 7 tokens.
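Those rules can be turned into a rough per-video token estimate. A sketch that assumes one timestamp per sampled second (the exact timestamp cadence isn't stated above, so treat the result as an approximation):

```python
FRAME_TOKENS = 258       # per sampled frame (videos are sampled at 1 fps)
AUDIO_CHUNK_TOKENS = 32  # per 1-second audio chunk (Gemini 1.5 models only)
TIMESTAMP_TOKENS = 7     # per timestamp marker (assumed one per second)

def estimate_video_tokens(duration_s: int, with_audio: bool = True) -> int:
    """Rough token estimate for one video under the assumptions above."""
    per_second = FRAME_TOKENS + TIMESTAMP_TOKENS
    if with_audio:
        per_second += AUDIO_CHUNK_TOKENS
    return duration_s * per_second
```

A 2-minute clip with audio then comes out to 120 × (258 + 32 + 7) = 35,640 tokens under these assumptions.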
Best practices
When using video, use the following best practices and information for the best results:
- If your prompt contains a single video, place the video before the text prompt.
- If you need timestamp localization in a video with audio, ask the model to generate timestamps in the `MM:SS` format, where the first two digits represent minutes and the last two digits represent seconds. Use the same format for questions that ask about a timestamp.
- Note the following if you're using Gemini 1.0 Pro Vision:
- Use no more than one video per prompt.
- The model only processes the information in the first two minutes of the video.
- The model processes videos as non-contiguous image frames from the video. Audio isn't included. If you notice the model missing some content from the video, try making the video shorter so that the model captures a greater portion of the video content.
- The model does not process any audio information or timestamp metadata. Because of this, the model might not perform well in use cases that require audio input, such as captioning audio, or time-related information, such as speed or rhythm.
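When asking for timestamps in the `MM:SS` format recommended above, it helps to phrase your own time offsets the same way. A trivial helper:

```python
def to_mmss(seconds: int) -> str:
    """Format a time offset as MM:SS (two digits each for minutes and seconds)."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}"
```

For example, `to_mmss(95)` produces `"01:35"`, which you could drop into a prompt such as "What happens at 01:35?".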
Limitations
While Gemini multimodal models are powerful in many multimodal use cases, it's important to understand the limitations of the models:
- Content moderation: The models refuse to provide answers on videos that violate our safety policies.
- Non-speech sound recognition: The models that support audio might make mistakes recognizing sound that's not speech.
- High-speed motion: The models might make mistakes understanding high-speed motion in video due to the fixed 1 frame per second (fps) sampling rate.
- Transcription punctuation: (if using Gemini 1.5 Flash) The models might return transcriptions that don't include punctuation.
What's next
- Start building with Gemini multimodal models - new customers get $300 in free Google Cloud credits to explore what they can do with Gemini.
- Learn how to send chat prompt requests.
- Learn about responsible AI best practices and Vertex AI's safety filters.