Starting April 29, 2025, the Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see
Model versions and lifecycle.
# Generate text from an image
This sample demonstrates how to use the Gemini model to generate text from an image.
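Whichever client library you use, the request that reaches the `generateContent` endpoint has the same shape: a single user turn whose parts pair a Cloud Storage file reference with a text prompt. A minimal sketch of that JSON shape in plain JavaScript (field names follow the REST API; the helper function is for illustration only, not part of any SDK):

```javascript
// Build the generateContent request body that each sample on this page
// assembles: one user turn with an image part (by Cloud Storage URI)
// followed by a text part.
function buildRequest(fileUri, mimeType, prompt) {
  return {
    contents: [
      {
        role: 'user',
        parts: [
          {fileData: {fileUri: fileUri, mimeType: mimeType}},
          {text: prompt},
        ],
      },
    ],
  };
}

const request = buildRequest(
  'gs://generativeai-downloads/images/scones.jpg',
  'image/jpeg',
  'What is shown in this image?'
);
console.log(JSON.stringify(request, null, 2));
```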
Explore further
---------------

For detailed documentation that includes this code sample, see the following:

- [Quickstart: Generate text using the Vertex AI Gemini API](/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal)
- [Vertex AI client libraries](/vertex-ai/generative-ai/docs/reference/libraries)
Code sample
-----------
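Each sample below reads the model reply the same way: take a candidate from the response, then the text of its content parts. A sketch of that traversal in plain JavaScript, over a hand-made response-shaped object (the reply text here is fabricated for illustration, not real model output):

```javascript
// Walk a generateContent response: candidates -> content -> parts -> text.
function extractText(response) {
  return response.candidates
    .flatMap((candidate) => candidate.content.parts)
    .map((part) => part.text)
    .join('\n');
}

// A stand-in response object mirroring the API's shape.
const fakeResponse = {
  candidates: [
    {content: {role: 'model', parts: [{text: 'A plate of blueberry scones.'}]}},
  ],
};
console.log(extractText(fakeResponse)); // prints the candidate's text
```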
### C++

Before trying this sample, follow the C++ setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI C++ API reference documentation](/cpp/docs/reference/aiplatform/latest).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```cpp
namespace vertex_ai = ::google::cloud::aiplatform_v1;
namespace vertex_ai_proto = ::google::cloud::aiplatform::v1;
[](std::string const& project_id, std::string const& location_id,
   std::string const& model, std::string const& prompt,
   std::string const& mime_type, std::string const& file_uri) {
  google::cloud::Location location(project_id, location_id);
  auto client = vertex_ai::PredictionServiceClient(
      vertex_ai::MakePredictionServiceConnection(location.location_id()));

  vertex_ai_proto::GenerateContentRequest request;
  request.set_model(location.FullName() + "/publishers/google/models/" +
                    model);
  auto generation_config = request.mutable_generation_config();
  generation_config->set_temperature(0.4f);
  generation_config->set_top_k(32);
  generation_config->set_top_p(1);
  generation_config->set_max_output_tokens(2048);

  auto contents = request.add_contents();
  contents->set_role("user");
  contents->add_parts()->set_text(prompt);
  auto image_part = contents->add_parts();
  image_part->mutable_file_data()->set_file_uri(file_uri);
  image_part->mutable_file_data()->set_mime_type(mime_type);

  auto response = client.GenerateContent(request);
  if (!response) throw std::move(response).status();

  for (auto const& candidate : response->candidates()) {
    for (auto const& p : candidate.content().parts()) {
      std::cout << p.text() << "\n";
    }
  }
}
```

### Java

Before trying this sample, follow the Java setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI Java API reference documentation](/java/docs/reference/google-cloud-aiplatform/latest/com.google.cloud.aiplatform.v1).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```java
import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import java.io.IOException;

public class Quickstart {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-2.0-flash-001";

    String output = quickstart(projectId, location, modelName);
    System.out.println(output);
  }

  // Analyzes the provided multimodal input.
  public static String quickstart(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      String imageUri = "gs://generativeai-downloads/images/scones.jpg";

      GenerativeModel model = new GenerativeModel(modelName, vertexAI);
      GenerateContentResponse response = model.generateContent(
          ContentMaker.fromMultiModalData(
              PartMaker.fromMimeTypeAndData("image/jpeg", imageUri),
              "What's in this photo?"
          ));

      return response.toString();
    }
  }
}
```

### Node.js

Before trying this sample, follow the Node.js setup instructions in the
[Vertex AI quickstart using client libraries](/vertex-ai/docs/start/client-libraries).

For more information, see the
[Vertex AI Node.js API reference documentation](/nodejs/docs/reference/aiplatform/latest).

To authenticate to Vertex AI, set up Application Default Credentials.
For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).

```javascript
const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function createNonStreamingMultipartContent(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-2.0-flash-001',
  image = 'gs://generativeai-downloads/images/scones.jpg',
  mimeType = 'image/jpeg'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeVisionModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // For images, the SDK supports both Google Cloud Storage URI and base64 strings
  const filePart = {
    fileData: {
      fileUri: image,
      mimeType: mimeType,
    },
  };

  const textPart = {
    text: 'what is shown in this image?',
  };

  const request = {
    contents: [{role: 'user', parts: [filePart, textPart]}],
  };

  console.log('Prompt Text:');
  console.log(request.contents[0].parts[1].text);

  console.log('Non-Streaming Response Text:');

  // Generate a response
  const response = await generativeVisionModel.generateContent(request);

  // Select the text from the response
  const fullTextResponse =
    response.response.candidates[0].content.parts[0].text;

  console.log(fullTextResponse);
}
```

What's next
-----------

To search and filter code samples for other Google Cloud products, see the
[Google Cloud sample browser](/docs/samples?product=generativeaionvertexai).

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.