As of April 29, 2025, the Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have never used them, including new projects. For details, see Model versions and lifecycle.

# Generate content from multimodal data using Generative AI

This sample demonstrates the capability to generate content from a combination of text, image, and video.

Code sample
-----------
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],[],[],[],null,["# Generate content from multimodal data using Generative AI\n\nThis sample demonstrates the capability to generate content from a combination of text, image, and video.\n\nCode sample\n-----------\n\n### Java\n\n\nBefore trying this sample, follow the Java setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Java API\nreference documentation](/java/docs/reference/google-cloud-aiplatform/latest/com.google.cloud.aiplatform.v1).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import com.google.cloud.vertexai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.VertexAI.html;\n import com.google.cloud.vertexai.api.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.api.GenerateContentResponse.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.ContentMaker.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.GenerativeModel.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.PartMaker.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.ResponseHandler.html;\n\n public class Multimodal {\n public static void main(String[] args) throws Exception {\n // TODO(developer): Replace these variables before running the sample.\n String projectId = \"your-google-cloud-project-id\";\n String location = \"us-central1\";\n String modelName = \"gemini-2.0-flash-001\";\n\n String output = nonStreamingMultimodal(projectId, location, modelName);\n System.out.println(output);\n }\n\n // Ask a simple question and get the response.\n public static String nonStreamingMultimodal(String projectId, String location, String modelName)\n throws Exception {\n // Initialize client that will be used to send requests.\n // This client only needs to be created once, and can be reused for multiple requests.\n try (https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.VertexAI.html vertexAI = new https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.VertexAI.html(projectId, location)) {\n https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.GenerativeModel.html model = new 

### Node.js

Before trying this sample, follow the Node.js setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI Node.js API reference documentation](/nodejs/docs/reference/aiplatform/latest).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

    const {VertexAI} = require('@google-cloud/vertexai');

    /**
     * TODO(developer): Update these variables before running the sample.
     */
    const PROJECT_ID = process.env.CAIP_PROJECT_ID;
    const LOCATION = 'us-central1';
    const MODEL = 'gemini-2.0-flash-001';

    async function generateContent() {
      // Initialize Vertex AI
      const vertexAI = new VertexAI({project: PROJECT_ID, location: LOCATION});
      const generativeModel = vertexAI.getGenerativeModel({model: MODEL});

      const request = {
        contents: [
          {
            role: 'user',
            parts: [
              {
                file_data: {
                  file_uri: 'gs://cloud-samples-data/video/animals.mp4',
                  mime_type: 'video/mp4',
                },
              },
              {
                file_data: {
                  file_uri:
                    'gs://cloud-samples-data/generative-ai/image/character.jpg',
                  mime_type: 'image/jpeg',
                },
              },
              {text: 'Are this video and image correlated?'},
            ],
          },
        ],
      };

      const result = await generativeModel.generateContent(request);

      console.log(result.response.candidates[0].content.parts[0].text);
    }

What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai).