Deploy the application

To deploy the application on Vertex AI, create a new instance of ReasoningEngine and pass in your application object as the first argument. To add package dependencies for your application, use the following parameters:

  • requirements: A list of external PyPI package dependencies. Each dependency is specified as a single string, using the same syntax as one line of a requirements file. To learn more, see Requirements File Format.
  • extra_packages: A list of internal package dependencies. These package dependencies are local files or directories that correspond to the local Python packages required by the application.

The following code demonstrates how you can deploy an application:

DISPLAY_NAME = "Demo Langchain Application"

remote_app = reasoning_engines.ReasoningEngine.create(
    reasoning_engines.LangchainAgent(
        model=model,
        tools=[get_exchange_rate],
        model_kwargs=model_kwargs,
    ),
    requirements=[
        "google-cloud-aiplatform[reasoningengine,langchain]",
    ],
    display_name=DISPLAY_NAME,
)
remote_app
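Before calling ReasoningEngine.create, you can sanity-check the shape of these two parameters locally. The helper below is a hypothetical pre-flight check, not part of the SDK: it only verifies that each requirement is a single-line string and that each extra package refers to an existing local file or directory.

```python
import os

def validate_dependency_specs(requirements, extra_packages):
    """Hypothetical pre-flight check for the deployment parameters:
    each requirement must be a single-line string (one requirements-file
    line), and each extra package must be a local file or directory."""
    for req in requirements:
        if not isinstance(req, str) or "\n" in req:
            raise ValueError(f"Each requirement must be a single-line string: {req!r}")
    for path in extra_packages:
        if not os.path.exists(path):
            raise FileNotFoundError(f"Local package not found: {path}")

# Example: the requirements list from the deployment above.
validate_dependency_specs(
    requirements=["google-cloud-aiplatform[reasoningengine,langchain]"],
    extra_packages=[],  # e.g. ["my_local_package/"] for a local directory
)
```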

When you deploy an application to Reasoning Engine, pass in a newly constructed object rather than one you have already used locally. An object whose .set_up method has already run can hold un-pickleable state, such as database connections and service clients, which causes serialization to fail during deployment.
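This pitfall is generic to pickling, and can be illustrated without the SDK. In the sketch below, DemoApp is a hypothetical application class, and a threading lock stands in for un-pickleable state such as an open database connection:

```python
import pickle
import threading

class DemoApp:
    """Hypothetical application: picklable until set_up creates state."""
    def set_up(self):
        # A lock stands in for un-pickleable state such as a database
        # connection or an authenticated service client.
        self._lock = threading.Lock()

fresh = DemoApp()       # a new object: safe to serialize and deploy
pickle.dumps(fresh)     # succeeds

used = DemoApp()
used.set_up()           # initializes un-pickleable state
try:
    pickle.dumps(used)  # fails: locks cannot be pickled
except TypeError as exc:
    print(f"Serialization failed: {exc}")
```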

Application deployment takes a few minutes to run. It builds containers and starts HTTP servers on the backend. Deployment latency depends on the total time it takes to install the required packages.

Once deployed, remote_app corresponds to an instance of reasoning_engines.LangchainAgent that is running on Vertex AI and can be queried or deleted. It is separate from local instances of reasoning_engines.LangchainAgent.

Each deployed application has a unique identifier. Run the following command to get the resource_name identifier for your application:

remote_app.resource_name

resource_name has the following format: "projects/PROJECT_ID/locations/LOCATION/reasoningEngines/RESOURCE_ID".
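Because resource_name follows this fixed format, its components can be split out with plain string handling. The helper below is a sketch (not an SDK function), and the identifier it parses uses placeholder values:

```python
def parse_resource_name(resource_name):
    """Split a Reasoning Engine resource name into its components,
    assuming the documented format:
    projects/PROJECT_ID/locations/LOCATION/reasoningEngines/RESOURCE_ID
    """
    parts = resource_name.split("/")
    if (len(parts) != 6 or parts[0] != "projects"
            or parts[2] != "locations" or parts[4] != "reasoningEngines"):
        raise ValueError(f"Unexpected resource name format: {resource_name}")
    return {"project": parts[1], "location": parts[3], "resource_id": parts[5]}

# Sample identifier with placeholder values.
info = parse_resource_name(
    "projects/my-project/locations/us-central1/reasoningEngines/1234567890"
)
print(info["location"])  # us-central1
```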

What's next