# Generate text using images from local storage and Google Cloud Storage
This example demonstrates how to generate text using a local image and an image stored in Google Cloud Storage.
Explore further
---------------

For detailed documentation that includes this code sample, see the following:

- [Image understanding](/vertex-ai/generative-ai/docs/multimodal/image-understanding)
Code sample
-----------
[[["Leicht verständlich","easyToUnderstand","thumb-up"],["Mein Problem wurde gelöst","solvedMyProblem","thumb-up"],["Sonstiges","otherUp","thumb-up"]],[["Schwer verständlich","hardToUnderstand","thumb-down"],["Informationen oder Beispielcode falsch","incorrectInformationOrSampleCode","thumb-down"],["Benötigte Informationen/Beispiele nicht gefunden","missingTheInformationSamplesINeed","thumb-down"],["Problem mit der Übersetzung","translationIssue","thumb-down"],["Sonstiges","otherDown","thumb-down"]],[],[],[],null,["# Generate text using images from a local and Google Cloud Storage\n\nThis example demonstrates how to generate text using a local image and an image in Google Cloud Storage\n\nExplore further\n---------------\n\n\nFor detailed documentation that includes this code sample, see the following:\n\n- [Image understanding](/vertex-ai/generative-ai/docs/multimodal/image-understanding)\n\nCode sample\n-----------\n\n### Go\n\n\nBefore trying this sample, follow the Go setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Go API\nreference documentation](/go/docs/reference/cloud.google.com/go/aiplatform/latest/apiv1).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import (\n \t\"context\"\n \t\"fmt\"\n \t\"io\"\n \t\"os\"\n\n \tgenai \"google.golang.org/genai\"\n )\n\n // generateWithMultiImg shows how to generate text using multiple image inputs.\n func generateWithMultiImg(w io.Writer) error {\n \tctx := context.Background()\n\n \tclient, err := genai.NewClient(ctx, &genai.ClientConfig{\n \t\tHTTPOptions: genai.HTTPOptions{APIVersion: \"v1\"},\n \t})\n \tif err != nil {\n \t\treturn fmt.Errorf(\"failed to create genai client: %w\", err)\n \t}\n\n \t// TODO(Developer): Update the path to file (image source:\n \t// https://storage.googleapis.com/cloud-samples-data/generative-ai/image/latte.jpg )\n \timageBytes, err := os.ReadFile(\"./latte.jpg\")\n \tif err != nil {\n \t\treturn fmt.Errorf(\"failed to read image: %w\", err)\n \t}\n\n \tcontents := []*genai.Content{\n \t\t{Parts: []*genai.Part{\n \t\t\t{Text: \"Write an advertising jingle based on the items in both images.\"},\n \t\t\t{FileData: &genai.FileData{\n \t\t\t\t// Image source: https://storage.googleapis.com/cloud-samples-data/generative-ai/image/scones.jpg\n \t\t\t\tFileURI: \"gs://cloud-samples-data/generative-ai/image/scones.jpg\",\n \t\t\t\tMIMEType: \"image/jpeg\",\n \t\t\t}},\n \t\t\t{InlineData: &genai.Blob{\n \t\t\t\tData: imageBytes,\n \t\t\t\tMIMEType: \"image/jpeg\",\n \t\t\t}},\n \t\t}},\n \t}\n \tmodelName := \"gemini-2.5-flash\"\n\n \tresp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)\n \tif err != nil {\n \t\treturn fmt.Errorf(\"failed to generate content: %w\", err)\n \t}\n\n \trespText := resp.Text()\n\n \tfmt.Fprintln(w, respText)\n\n \t// Example response:\n \t// Okay, here's an advertising jingle inspired by the blueberry scones, coffee, flowers, chocolate cake, and latte:\n \t//\n \t// (Upbeat, jazzy music)\n \t// ...\n\n \treturn nil\n }\n\n### Node.js\n\n\nBefore trying this sample, follow the Node.js setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Node.js API\nreference 
documentation](/nodejs/docs/reference/aiplatform/latest).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n const {GoogleGenAI} = require('@google/genai');\n\n const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;\n const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';\n\n async function generateContent(\n projectId = GOOGLE_CLOUD_PROJECT,\n location = GOOGLE_CLOUD_LOCATION\n ) {\n const ai = new GoogleGenAI({\n vertexai: true,\n project: projectId,\n location: location,\n });\n\n const image1 = {\n fileData: {\n fileUri: 'gs://cloud-samples-data/generative-ai/image/scones.jpg',\n mimeType: 'image/jpeg',\n },\n };\n\n const image2 = {\n fileData: {\n fileUri: 'gs://cloud-samples-data/generative-ai/image/fruit.png',\n mimeType: 'image/png',\n },\n };\n\n const response = await ai.models.generateContent({\n model: 'gemini-2.5-flash',\n contents: [\n image1,\n image2,\n 'Generate a list of all the objects contained in both images.',\n ],\n });\n\n console.log(response.text);\n\n return response.text;\n }\n\n### Python\n\n\nBefore trying this sample, follow the Python setup instructions in the\n[Vertex AI quickstart using\nclient libraries](/vertex-ai/docs/start/client-libraries).\n\n\nFor more information, see the\n[Vertex AI Python API\nreference documentation](/python/docs/reference/aiplatform/latest).\n\n\nTo authenticate to Vertex AI, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n from google import genai\n from google.genai.types import HttpOptions, Part\n\n client = genai.Client(http_options=HttpOptions(api_version=\"v1\"))\n\n # Read content from GCS\n gcs_file_img_path = \"gs://cloud-samples-data/generative-ai/image/scones.jpg\"\n\n # Read content from a local file\n with open(\"test_data/latte.jpg\", \"rb\") as f:\n local_file_img_bytes = f.read()\n\n response = client.models.generate_content(\n model=\"gemini-2.5-flash\",\n contents=[\n \"Generate a list of all the objects contained in both images.\",\n Part.from_uri(file_uri=gcs_file_img_path, mime_type=\"image/jpeg\"),\n Part.from_bytes(data=local_file_img_bytes, mime_type=\"image/jpeg\"),\n ],\n )\n print(response.text)\n # Example response:\n # Okay, here's the list of objects present in both images:\n # ...\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=googlegenaisdk)."]]
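The Go sample above defines `generateWithMultiImg` but does not show how to invoke it. The following is a minimal sketch of a wrapper, not part of the original sample: it assumes the function sits in a `main` package alongside the sample, and that the client picks up the project and location from the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` environment variables (as the Node.js sample suggests); the `GOOGLE_GENAI_USE_VERTEXAI` variable is likewise an assumption about how the SDK selects the Vertex AI backend.

```go
package main

import (
	"log"
	"os"
)

// main is a hypothetical entry point for the generateWithMultiImg sample above.
// The environment variable names below are assumptions, not part of the sample.
func main() {
	// The Go GenAI client is assumed to read these variables when no explicit
	// project, location, or backend is set in genai.ClientConfig.
	for _, v := range []string{"GOOGLE_CLOUD_PROJECT", "GOOGLE_CLOUD_LOCATION", "GOOGLE_GENAI_USE_VERTEXAI"} {
		if os.Getenv(v) == "" {
			log.Fatalf("environment variable %s must be set before running this sketch", v)
		}
	}

	// Writes the generated jingle to stdout; latte.jpg must exist in the working directory.
	if err := generateWithMultiImg(os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```

With those assumptions in place, downloading `latte.jpg` into the working directory and running `go run .` should print the model's response to stdout.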