# Summarize a video file with audio with Gemini Multimodal
This sample shows you how to summarize a video file with audio and return chapters with timestamps.
Explore further
---------------

For detailed documentation that includes this code sample, see the following:

- [Quickstart: Generate text using the Vertex AI Gemini API](/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal)

Code sample
-----------
### Go

Before trying this sample, follow the Go setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI Go API reference documentation](/go/docs/reference/cloud.google.com/go/aiplatform/latest/apiv1).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```go
import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithVideo shows how to generate text using a video input.
func generateWithVideo(w io.Writer) error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: `Analyze the provided video file, including its audio.
Summarize the main points of the video concisely.
Create a chapter breakdown with timestamps for key sections or topics discussed.`},
				{FileData: &genai.FileData{
					FileURI:  "gs://cloud-samples-data/generative-ai/video/pixel8.mp4",
					MIMEType: "video/mp4",
				}},
			},
			Role: "user",
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()

	fmt.Fprintln(w, respText)

	// Example response:
	// Here's an analysis of the provided video file:
	//
	// **Summary**
	//
	// The video features Saeka Shimada, a photographer in Tokyo, who uses the new Pixel phone ...
	//
	// **Chapter Breakdown**
	//
	// * **0:00-0:05**: Introduction to Saeka Shimada and her work as a photographer in Tokyo.
	// ...

	return nil
}
```

### Node.js

Before trying this sample, follow the Node.js setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI Node.js API reference documentation](/nodejs/docs/reference/aiplatform/latest).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```javascript
const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const ai = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const prompt = `
  Analyze the provided video file, including its audio.
  Summarize the main points of the video concisely.
  Create a chapter breakdown with timestamps for key sections or topics discussed.
  `;

  const video = {
    fileData: {
      fileUri: 'gs://cloud-samples-data/generative-ai/video/pixel8.mp4',
      mimeType: 'video/mp4',
    },
  };

  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: [video, prompt],
  });

  console.log(response.text);

  return response.text;
}
```

### Python

Before trying this sample, follow the Python setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI Python API reference documentation](/python/docs/reference/aiplatform/latest).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```python
from google import genai
from google.genai.types import HttpOptions, Part

client = genai.Client(http_options=HttpOptions(api_version="v1"))
prompt = """
Analyze the provided video file, including its audio.
Summarize the main points of the video concisely.
Create a chapter breakdown with timestamps for key sections or topics discussed.
"""
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        Part.from_uri(
            file_uri="gs://cloud-samples-data/generative-ai/video/pixel8.mp4",
            mime_type="video/mp4",
        ),
        prompt,
    ],
)

print(response.text)
# Example response:
# Here's a breakdown of the video:
#
# **Summary:**
#
# Saeka Shimada, a photographer in Tokyo, uses the Google Pixel 8 Pro's "Video Boost" feature to ...
#
# **Chapter Breakdown with Timestamps:**
#
# * **[00:00-00:12] Introduction & Tokyo at Night:** Saeka Shimada introduces herself ...
# ...
```

What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=googlegenaisdk).
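The example responses return the chapter breakdown as markdown bullet lines containing timestamp ranges. If you need that text as structured data rather than prose, it can be post-processed with a small parser. The sketch below assumes the `**0:00-0:05**: title` bullet format shown in the Go example's response; real model output varies, so treat the regular expression as a starting point, not a guarantee.

```python
import re

# Matches chapter bullets of the form "**0:00-0:05**: Some title".
# This format is an assumption based on the example response above;
# adjust the pattern if the model returns a different layout.
CHAPTER_RE = re.compile(r"\*\*(\d+:\d{2})-(\d+:\d{2})\*\*:?\s*(.+)")

def parse_chapters(text):
    """Extract start/end/title records from a chapter-breakdown response."""
    chapters = []
    for line in text.splitlines():
        m = CHAPTER_RE.search(line)
        if m:
            start, end, title = m.groups()
            chapters.append({"start": start, "end": end, "title": title.strip()})
    return chapters

# Demo input mimicking the example response format.
demo = """
**Chapter Breakdown**

* **0:00-0:05**: Introduction to Saeka Shimada and her work as a photographer in Tokyo.
* **0:06-0:12**: Tokyo at night.
"""

print(parse_chapters(demo))
```

Because the model is not constrained to a fixed output schema here, a more robust option is to request structured output (for example, JSON) in the prompt itself and parse that instead.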