Best practices

The following best practices can help you build robust agent apps.

Agent name in natural language

Use clear, descriptive natural language for agent names. For example, "Customer Help Center Agent" is more descriptive than "company_specialist", which helps the LLM perform better at runtime.

Concise goals

Goals should be a concise description of the agent's purpose.

Provide quality instructions

Instructions should:

  • describe the step-by-step approach to solving an end-user problem
  • be concise, high-level natural language sentences
  • be straightforward and specify the scenarios in which each tool should be used

At least one example for each agent

You should have at least one example for each agent, but at least four are recommended. Examples should include happy path scenarios.

Without enough examples, an agent is likely to behave unpredictably. If your agent is not responding or behaving as you expect, missing or poorly defined examples are the likely cause. Try improving your examples or adding new ones.

Precision of instructions and examples

While it helps to write clear and descriptive instructions, the quality and quantity of your examples ultimately determine the accuracy of the agent's behavior. In other words, spend more time writing thorough examples than perfecting your instructions.

Reference tools in examples

If the agent is designed to provide responses by using tools, reference those tools in the examples that correspond to this type of request.

Tool schema operationId field

When defining schemas for your tools, the operationId value is important, because your agent instructions will reference this value. The following are naming recommendations for this field:

  • Use letters, numbers, and underscores only.
  • Make each operationId unique among all operationIds described in the schema.
  • Choose a meaningful name that reflects the capability provided.
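As an illustration, a schema fragment following these recommendations might look like the following (a minimal OpenAPI 3.0 sketch; the path, operationId, and parameter are hypothetical):

```yaml
paths:
  /tickets/search:
    get:
      # Meaningful name; unique; letters, numbers, and underscores only
      operationId: search_support_tickets
      summary: Search open support tickets by keyword
      parameters:
        - name: query
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Matching tickets
```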

Tool schema validation

You should validate your tool schema. You can use the Swagger Editor to check your OpenAPI 3.0 schema syntax.
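Beyond syntax checking, you can script basic checks of your own, such as verifying the operationId recommendations above. A minimal sketch in Python, assuming the schema has already been parsed into a dictionary (for example, with a YAML library):

```python
import re

# Allowed: letters, numbers, and underscores only
OPERATION_ID_PATTERN = re.compile(r"^[A-Za-z0-9_]+$")

def check_operation_ids(schema: dict) -> list[str]:
    """Return a list of operationId problems found in a parsed schema."""
    problems = []
    seen = set()
    for path, methods in schema.get("paths", {}).items():
        for method, operation in methods.items():
            op_id = operation.get("operationId")
            if not op_id:
                problems.append(f"{method.upper()} {path}: missing operationId")
                continue
            if not OPERATION_ID_PATTERN.match(op_id):
                problems.append(f"{op_id}: invalid characters")
            if op_id in seen:
                problems.append(f"{op_id}: duplicate operationId")
            seen.add(op_id)
    return problems

# Example: one valid operationId and one that uses a disallowed hyphen
schema = {
    "paths": {
        "/tickets/search": {"get": {"operationId": "search_support_tickets"}},
        "/tickets": {"post": {"operationId": "create-ticket"}},
    }
}
print(check_operation_ids(schema))  # → ['create-ticket: invalid characters']
```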

Handle empty tool results

When your agent relies on a tool to inform its response, an empty tool result can lead to unpredictable agent behavior. Sometimes, the agent LLM will hallucinate information in a response in lieu of a tool result. To prevent this, you can add specific instructions to ensure the agent LLM doesn't attempt to answer on its own.

Some use cases require agent responses to be well grounded in tool results or provided data, and must avoid responses based only on the agent LLM's own knowledge.

Examples of instructions to mitigate hallucinations:

  • "You must use the tool to answer all user questions"
  • "If you don't get any data back from the tool, respond that you don't know the answer to the user's query"
  • "Don't make up an answer if you don't get any data back from the tool"
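The same guardrail can also be enforced in application code around the tool call. A minimal sketch, assuming hypothetical `call_tool` and `ask_llm` callables (neither belongs to any specific agent SDK):

```python
FALLBACK = "I don't know the answer to your query."

def answer_with_tool(call_tool, ask_llm, question: str) -> str:
    """Only let the LLM answer when the tool returned data; otherwise
    return a fixed fallback instead of risking a hallucinated answer."""
    result = call_tool(question)
    if not result:  # empty string, empty list, or None
        return FALLBACK
    # Ground the LLM's answer in the tool result
    prompt = f"Answer using only this data: {result}\nQuestion: {question}"
    return ask_llm(prompt)

# Example with stub tool and LLM: the tool returns no data,
# so the agent declines rather than answering from the LLM alone.
tool = lambda q: []
llm = lambda prompt: "grounded answer"
print(answer_with_tool(tool, llm, "What are your hours?"))
# → I don't know the answer to your query.
```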

Generate a schema with Gemini

Gemini can generate a schema for you. For example, try the prompt "Can you create an example OpenAPI 3.0 schema for Google Calendar?".

Focused agents

Avoid creating very large and complex agents. Each agent should accomplish a specific and clear task. If you have a complex agent, consider breaking it down into smaller sub-agents.

Avoid loops and recursion

Don't create loops or recursion when linking agent apps in your instructions.

Provide routing information to examples

When an agent should route to another agent, include this routing information in your examples. You provide it in the End example with output information field of the Input & Output example section.

For instance, the final sentence of this field could be "Reroute back to the default agent for further queries."

Use Dialogflow CX Messenger JavaScript functions for personalization

When using Dialogflow CX Messenger, the following functions are useful to send user personalization information from the web interface to the agent: