Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
Generate text responses using Gemini API with external function calls in a chat scenario
Generate text responses using Gemini API with external function calls. This example demonstrates a chat scenario with two functions and two sequential prompts.
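Between the two prompts, your application is responsible for running the function the model asked for and packaging the result as a `functionResponse` part to send back in the chat. A minimal local sketch of that middle step, assuming a hypothetical dispatcher and a stubbed weather lookup (neither is part of the Vertex AI SDK):

```javascript
// Hypothetical registry mapping declared function names to local
// implementations. In a real app, these would call external services.
const localFunctions = {
  get_current_weather: ({location, unit = 'celsius'}) => ({
    weather: 'super nice', // stand-in for a real weather API result
    location,
    unit,
  }),
};

// Given a functionCall part returned by the model, run the matching
// local function and build the functionResponse part to send back.
function buildFunctionResponsePart(functionCall) {
  const fn = localFunctions[functionCall.name];
  if (!fn) {
    throw new Error(`No local implementation for ${functionCall.name}`);
  }
  return {
    functionResponse: {
      name: functionCall.name,
      response: {name: functionCall.name, content: fn(functionCall.args)},
    },
  };
}

// Example: the model requested get_current_weather for Boston.
const part = buildFunctionResponsePart({
  name: 'get_current_weather',
  args: {location: 'Boston'},
});
console.log(part.functionResponse.response.content.weather); // 'super nice'
```

The shape of the returned object matches the `functionResponseParts` payload used in the code sample below, so it can be passed directly to `chat.sendMessageStream`.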
Code sample
Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

```javascript
const {
  VertexAI,
  FunctionDeclarationSchemaType,
} = require('@google-cloud/vertexai');

const functionDeclarations = [
  {
    function_declarations: [
      {
        name: 'get_current_weather',
        description: 'get weather in a given location',
        parameters: {
          type: FunctionDeclarationSchemaType.OBJECT,
          properties: {
            location: {type: FunctionDeclarationSchemaType.STRING},
            unit: {
              type: FunctionDeclarationSchemaType.STRING,
              enum: ['celsius', 'fahrenheit'],
            },
          },
          required: ['location'],
        },
      },
    ],
  },
];

const functionResponseParts = [
  {
    functionResponse: {
      name: 'get_current_weather',
      response: {name: 'get_current_weather', content: {weather: 'super nice'}},
    },
  },
];

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function functionCallingStreamChat(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-2.0-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // Create a chat session and pass your function declarations
  const chat = generativeModel.startChat({
    tools: functionDeclarations,
  });

  const chatInput1 = 'What is the weather in Boston?';

  // This should include a functionCall response from the model
  const result1 = await chat.sendMessageStream(chatInput1);
  for await (const item of result1.stream) {
    console.log(item.candidates[0]);
  }
  await result1.response;

  // Send a follow-up message with a FunctionResponse
  const result2 = await chat.sendMessageStream(functionResponseParts);
  for await (const item of result2.stream) {
    console.log(item.candidates[0]);
  }

  // This should include a text response from the model using the response
  // content provided above
  const response2 = await result2.response;
  console.log(response2.candidates[0].content.parts[0].text);
}
```

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.