{"contents":[{"role":"user","parts":[{"fileData":{"fileUri":"gs://<path to the mp4 video file>","mimeType":"video/mp4"},},{"text":" You are a video analysis expert. Detect which animal appears in the video.The video can only have one of the following animals: dog, cat, rabbit.\n Output Format:\n Generate output in the following JSON format:\n [{\n \"animal_name\": \"<CATEGORY>\",\n }]\n"}]},{"role":"model","parts":[{"text":"```json\n[{\"animal_name\": \"dog\"}]\n```"}]},],"generationConfig":{"mediaResolution":"MEDIA_RESOLUTION_LOW"}}
JSON 結構定義範例,適用於訓練和驗證時使用影片片段的情況
這個結構定義會以單行形式新增至 JSONL 檔案。
{"contents":[{"role":"user","parts":[{"fileData":{"fileUri":"gs://<path to the mp4 video file>","mimeType":"video/mp4"},"videoMetadata":{"startOffset":"5s","endOffset":"25s"}},{"text":" You are a video analysis expert. Detect which animal appears in the video.The video can only have one of the following animals: dog, cat, rabbit.\n Output Format:\n Generate output in the following JSON format:\n [{\n \"animal_name\": \"<CATEGORY>\",\n }]\n"}]},{"role":"model","parts":[{"text":"```json\n[{\"animal_name\": \"dog\"}]\n```"}]},],"generationConfig":{"mediaResolution":"MEDIA_RESOLUTION_LOW"}}
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Video tuning\n\nThis page provides prerequisites and detailed instructions for fine-tuning\nGemini on video data using supervised learning.\n\nSupported models\n----------------\n\nThe following Gemini models support video tuning:\n\n- Gemini 2.5 Flash\n- Gemini 2.5 Flash-Lite\n- Gemini 2.5 Pro\n\nUse cases\n---------\n\nFine-tuning lets you adapt base Gemini models for specialized tasks.\nHere are some video use cases:\n\n- **Automated video summarization**: Tuning LLMs to generate concise and\n coherent summaries of long videos, capturing the main themes, events, and\n narratives. This is useful for content discovery, archiving, and quick\n reviews.\n\n- **Detailed event recognition and localization**: Fine-tuning allows LLMs to\n identify and pinpoint specific actions, events, or objects within a video\n timeline with greater accuracy. For example, identifying all instances of a\n particular product in a marketing video or a specific action in sports\n footage.\n\n- **Content moderation**: Specialized tuning can improve an LLM's ability to\n detect sensitive, inappropriate, or policy-violating content within videos,\n going beyond simple object detection to understand context and nuance.\n\n- **Video captioning and subtitling**: While already a common application,\n tuning can improve the accuracy, fluency, and context-awareness of\n automatically generated captions and subtitles, including descriptions of\n nonverbal cues.\n\nLimitations\n-----------\n\n- **Maximum video file size** : 100MB. This may not be sufficient for large video files. Some recommended workarounds are as follows:\n - If there are very few large files, drop those files from including those in the JSONL files.\n - If there are many large files in your dataset and cannot be ignored, reduce visual resolution of the files. This may hurt performance.\n - Chunk the videos to limit the files size to 100MB and use the chunked videos for tuning. Make sure to change any timestamp annotations corresponding to the original video to the new (chunked) video timeline.\n- **Maximum video length per example** : 5 minutes with `MEDIA_RESOLUTION_MEDIUM` and 20 minutes with `MEDIA_RESOLUTION_LOW`.\n- **Dropped examples**: If an example contains video that is longer than the supported maximum length, that example is dropped from the dataset. Dropped examples are not billed or used for training. If more than 10% of the dataset is dropped, the job will fail with an error message before the start of training.\n- **Mixing different media resolutions isn't supported** : The value of `mediaResolution` for each example in the entire training dataset must be consistent. All lines in the JSONL files used for training and validation should have the same value of `mediaResolution`.\n\nDataset format\n--------------\n\nThe `fileUri` field specifies the location of your dataset. 
Dataset format
--------------

The `fileUri` field specifies the location of your dataset. It can be the URI
for a file in a Cloud Storage bucket, or it can be a publicly available HTTP
or HTTPS URL.

The `mediaResolution` field specifies the token count per frame for the input
videos, as one of the following values:

- `MEDIA_RESOLUTION_LOW`: 64 tokens per frame
- `MEDIA_RESOLUTION_MEDIUM`: 256 tokens per frame

Model tuning with `MEDIA_RESOLUTION_LOW` is roughly four times faster than
tuning with `MEDIA_RESOLUTION_MEDIUM`, with only a minimal difference in model
quality.

When a video segment is used for training and validation, the segment is
specified in the `videoMetadata` field. During tuning, the data point is
decoded to contain only the information from the segment of the specified
video file that starts at timestamp `startOffset` (the start offset, in
seconds) and ends at `endOffset`.

To see the generic format example, see
[Dataset example for Gemini](/vertex-ai/generative-ai/docs/models/gemini-supervised-tuning-prepare#dataset-example).

The following sections present video dataset format examples.

### JSON schema example for cases where the full video is used for training and validation

This schema is added as a single line in the JSONL file.

    {
      "contents": [
        {
          "role": "user",
          "parts": [
            {
              "fileData": {
                "fileUri": "gs://<path to the mp4 video file>",
                "mimeType": "video/mp4"
              }
            },
            {
              "text": "You are a video analysis expert. Detect which animal appears in the video. The video can only have one of the following animals: dog, cat, rabbit.\n Output Format:\n Generate output in the following JSON format:\n [{\n \"animal_name\": \"<CATEGORY>\",\n }]\n"
            }
          ]
        },
        {
          "role": "model",
          "parts": [
            {
              "text": "```json\n[{\"animal_name\": \"dog\"}]\n```"
            }
          ]
        }
      ],
      "generationConfig": {
        "mediaResolution": "MEDIA_RESOLUTION_LOW"
      }
    }

### JSON schema example for cases where a video segment is used for training and validation

This schema is added as a single line in the JSONL file.

    {
      "contents": [
        {
          "role": "user",
          "parts": [
            {
              "fileData": {
                "fileUri": "gs://<path to the mp4 video file>",
                "mimeType": "video/mp4"
              },
              "videoMetadata": {
                "startOffset": "5s",
                "endOffset": "25s"
              }
            },
            {
              "text": "You are a video analysis expert. Detect which animal appears in the video. The video can only have one of the following animals: dog, cat, rabbit.\n Output Format:\n Generate output in the following JSON format:\n [{\n \"animal_name\": \"<CATEGORY>\",\n }]\n"
            }
          ]
        },
        {
          "role": "model",
          "parts": [
            {
              "text": "```json\n[{\"animal_name\": \"dog\"}]\n```"
            }
          ]
        }
      ],
      "generationConfig": {
        "mediaResolution": "MEDIA_RESOLUTION_LOW"
      }
    }
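As a convenience, here is a minimal sketch of generating such JSONL lines
programmatically. Only the JSON structure mirrors the schema above; the
prompt, labels, bucket paths, and helper names are hypothetical placeholders.

    import json

    PROMPT = (
        "You are a video analysis expert. Detect which animal appears in the video. "
        "The video can only have one of the following animals: dog, cat, rabbit.\n"
        " Output Format:\n Generate output in the following JSON format:\n"
        " [{\n \"animal_name\": \"<CATEGORY>\",\n }]\n"
    )

    def make_example(file_uri, label, start_offset=None, end_offset=None,
                     media_resolution="MEDIA_RESOLUTION_LOW"):
        """Build one training example; add videoMetadata only when a segment is given."""
        file_part = {"fileData": {"fileUri": file_uri, "mimeType": "video/mp4"}}
        if start_offset is not None and end_offset is not None:
            file_part["videoMetadata"] = {"startOffset": start_offset,
                                          "endOffset": end_offset}
        return {
            "contents": [
                {"role": "user", "parts": [file_part, {"text": PROMPT}]},
                {"role": "model",
                 "parts": [{"text": f"```json\n[{{\"animal_name\": \"{label}\"}}]\n```"}]},
            ],
            "generationConfig": {"mediaResolution": media_resolution},
        }

    # Hypothetical examples; every line in one dataset must use the same mediaResolution.
    examples = [
        make_example("gs://my-bucket/videos/clip_001.mp4", "dog"),
        make_example("gs://my-bucket/videos/clip_002.mp4", "cat",
                     start_offset="5s", end_offset="25s"),
    ]

    with open("train.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")  # one JSON object per line

Writing each example with `json.dumps` guarantees the one-line-per-example
format, and sharing a single `media_resolution` default makes it easy to
satisfy the consistency requirement from the Limitations section.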
What's next
-----------

- To learn more about video tuning, see [How to fine-tune Gemini 2.5 using videos via Vertex AI](https://cloud.google.com/blog/products/ai-machine-learning/how-to-fine-tune-video-outputs-using-vertex-ai?e=48754805).
- To learn more about the image understanding capability of Gemini, see our [Image understanding](/vertex-ai/generative-ai/docs/multimodal/image-understanding) documentation.
- To start tuning, see [Tune Gemini models by using supervised fine-tuning](/vertex-ai/generative-ai/docs/models/gemini-use-supervised-tuning). A minimal SDK sketch follows this list.
- To learn how supervised fine-tuning can be used in a solution that builds a generative AI knowledge base, see [Jump Start Solution: Generative AI knowledge base](/architecture/ai-ml/generative-ai-knowledge-base).
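For orientation only, here is a minimal sketch of launching a supervised
tuning job on a video JSONL dataset with the Vertex AI SDK for Python. The
project ID, bucket paths, model ID, and display name are placeholder
assumptions; see the supervised fine-tuning guide linked above for the
authoritative parameters.

    import vertexai
    from vertexai.tuning import sft

    # Placeholder project and dataset locations; replace with your own.
    vertexai.init(project="my-project", location="us-central1")

    tuning_job = sft.train(
        source_model="gemini-2.5-flash",  # assumed ID for a model listed under Supported models
        train_dataset="gs://my-bucket/video_tuning/train.jsonl",
        validation_dataset="gs://my-bucket/video_tuning/validation.jsonl",
        tuned_model_display_name="video-animal-classifier",
    )

    print(tuning_job.resource_name)

The per-example `mediaResolution` is set in the dataset itself, as shown in
the schema examples above, rather than on the tuning job.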