# Generate content from multimodal data using Generative AI
This sample demonstrates the capability to generate content from a combination of text, image, and video.
Code sample
-----------
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],[],[],[],null,["# Generate content from multimodal data using Generative AI\n\nThis sample demonstrates the capability to generate content from a combination of text, image, and video.\n\nCode sample\n-----------\n\n### Java\n\n\nBefore trying this sample, follow the Java setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Java API\nreference documentation](/java/docs/reference/google-cloud-aiplatform/latest/com.google.cloud.aiplatform.v1).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import com.google.cloud.vertexai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.VertexAI.html;\n import com.google.cloud.vertexai.api.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.api.GenerateContentResponse.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.ContentMaker.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.GenerativeModel.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.PartMaker.html;\n import com.google.cloud.vertexai.generativeai.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.ResponseHandler.html;\n\n public class Multimodal {\n public static void main(String[] args) throws Exception {\n // TODO(developer): Replace these variables before running the sample.\n String projectId = \"your-google-cloud-project-id\";\n String location = \"us-central1\";\n String modelName = \"gemini-2.0-flash-001\";\n\n String output = nonStreamingMultimodal(projectId, location, modelName);\n System.out.println(output);\n }\n\n // Ask a simple question and get the response.\n public static String nonStreamingMultimodal(String projectId, String location, String modelName)\n throws Exception {\n // Initialize client that will be used to send requests.\n // This client only needs to be created once, and can be reused for multiple requests.\n try (https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.VertexAI.html vertexAI = new https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.VertexAI.html(projectId, location)) {\n https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.GenerativeModel.html model = new https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.GenerativeModel.html(modelName, vertexAI);\n\n String videoUri = 
\"gs://cloud-samples-data/video/animals.mp4\";\n String imgUri = \"gs://cloud-samples-data/generative-ai/image/character.jpg\";\n\n // Get the response from the model.\n https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.api.GenerateContentResponse.html response = model.https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.GenerativeModel.html#com_google_cloud_vertexai_generativeai_GenerativeModel_generateContent_com_google_cloud_vertexai_api_Content_(\n ContentMaker.fromMultiModalData(\n PartMaker.fromMimeTypeAndData(\"video/mp4\", videoUri),\n PartMaker.fromMimeTypeAndData(\"image/jpeg\", imgUri),\n \"Are this video and image correlated?\"\n ));\n\n // Extract the generated text from the model's response.\n String output = https://cloud.google.com/vertex-ai/generative-ai/docs/reference/java/latest/com.google.cloud.vertexai.generativeai.ResponseHandler.html.getText(response);\n return output;\n }\n }\n }\n\n### Node.js\n\n\nBefore trying this sample, follow the Node.js setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Node.js API\nreference documentation](/nodejs/docs/reference/aiplatform/latest).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n const {VertexAI} = require('https://cloud.google.com/nodejs/docs/reference/vertexai/latest/overview.html');\n\n /**\n * TODO(developer): Update these variables before running the sample.\n */\n const PROJECT_ID = process.env.CAIP_PROJECT_ID;\n const LOCATION = 'us-central1';\n const MODEL = 'gemini-2.0-flash-001';\n\n async function generateContent() {\n // Initialize Vertex AI\n const vertexAI = new https://cloud.google.com/nodejs/docs/reference/vertexai/latest/vertexai/vertexai.html({project: PROJECT_ID, location: LOCATION});\n const generativeModel = vertexAI.https://cloud.google.com/nodejs/docs/reference/vertexai/latest/vertexai/vertexai.html({model: MODEL});\n\n const request = {\n contents: [\n {\n role: 'user',\n parts: [\n {\n file_data: {\n file_uri: 'gs://cloud-samples-data/video/animals.mp4',\n mime_type: 'video/mp4',\n },\n },\n {\n file_data: {\n file_uri:\n 'gs://cloud-samples-data/generative-ai/image/character.jpg',\n mime_type: 'image/jpeg',\n },\n },\n {text: 'Are this video and image correlated?'},\n ],\n },\n ],\n };\n\n const result = await generativeModel.generateContent(request);\n\n console.log(result.response.https://cloud.google.com/nodejs/docs/reference/vertexai/latest/vertexai/generatecontentresponse.html[0].https://cloud.google.com/nodejs/docs/reference/vertexai/latest/vertexai/generatecontentcandidate.html.https://cloud.google.com/nodejs/docs/reference/vertexai/latest/vertexai/content.html[0].text);\n }\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai)."]]