The following best practices can help you build robust agents.
Playbook name in natural language
Use natural language with clear meanings for playbook names. For example, "Customer Help Center Playbook" is more descriptive than "company_specialist", which helps LLM performance at runtime.
Concise goals
Goals should be a concise description of the playbook's purpose.
Provide quality instructions
Instructions should:
- reflect the step-by-step approach to solving an end-user problem
- be concise, high-level instructions written in natural language
- be straightforward and specify the scenarios in which tools should be used
At least one example for each playbook
Each playbook must have at least one example, but four or more are recommended. Examples should include happy path scenarios.
Without enough examples, a playbook is likely to result in unpredictable behavior. If your playbook is not responding or behaving in the manner you expect, missing or poorly defined examples are likely the cause. Try improving your examples or adding new ones.
Precision of instructions and examples
While it helps to write clear and descriptive instructions, it's really the quality and quantity of your examples that determine the accuracy of the playbook's behavior. In other words, spend more time writing thorough examples than writing perfectly precise instructions.
Reference tools in examples
If the playbook is designed to provide responses by using tools, reference the tools in the examples corresponding to this type of request.
Tool schema operationId field
When defining schemas for your tools, the operationId value is important. Your playbook instructions will reference this value.
The following are naming recommendations for this field:
- Letters, numbers and underscores only.
- Must be unique among all operationIds described in the schema.
- Must be a meaningful name reflecting the capability provided.
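The naming recommendations above can be checked programmatically before you upload a schema. The following is a minimal sketch; the `check_operation_ids` helper and the sample schema dict are illustrative, not part of any official API:

```python
import re

# Recommendation: letters, numbers and underscores only.
OPERATION_ID_PATTERN = re.compile(r"^[A-Za-z0-9_]+$")

def check_operation_ids(schema: dict) -> list:
    """Return a list of problems found in the operationIds of an
    OpenAPI 3.0 schema loaded as a dict (hypothetical helper)."""
    problems = []
    seen = set()
    for path, methods in schema.get("paths", {}).items():
        for method, operation in methods.items():
            op_id = operation.get("operationId")
            if op_id is None:
                problems.append(f"{method.upper()} {path}: missing operationId")
            elif not OPERATION_ID_PATTERN.match(op_id):
                problems.append(f"{op_id}: use letters, numbers and underscores only")
            elif op_id in seen:
                problems.append(f"{op_id}: duplicate operationId")
            else:
                seen.add(op_id)
    return problems

# Example: one invalid name and one duplicate are both reported.
schema = {
    "paths": {
        "/orders": {
            "get": {"operationId": "listOrders"},
            "post": {"operationId": "create-order"},  # invalid: hyphen
        },
        "/orders/{id}": {
            "get": {"operationId": "listOrders"},  # duplicate
        },
    }
}
print(check_operation_ids(schema))
```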
Tool schema validation
You should validate your tool schema. You can use the Swagger Editor to check your OpenAPI 3.0 schema syntax.
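For reference, the following is a minimal OpenAPI 3.0 schema of the shape the Swagger Editor will accept. The service title, URL, and path are placeholders, not a real API:

```yaml
openapi: 3.0.0
info:
  title: Order lookup service   # placeholder title
  version: 1.0.0
servers:
  - url: https://example.com/api   # placeholder URL
paths:
  /orders/{orderId}:
    get:
      operationId: getOrderStatus   # referenced by playbook instructions
      summary: Look up the status of an order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Order status
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
```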
Handle empty tool results
When your playbook relies on a tool to inform its response, an empty tool result can lead to unpredictable playbook behavior. Sometimes, the playbook LLM will hallucinate information in a response in lieu of a tool result. To prevent this, you can add specific instructions to ensure the playbook LLM doesn't attempt to answer on its own.
Some use cases require playbook responses to be well grounded in tool results or provided data; for these, you need to mitigate responses based only on the playbook LLM's own knowledge.
Examples of instructions to mitigate hallucinations:
- "You must use the tool to answer all user questions"
- "If you don't get any data back from the tool, respond that you don't know the answer to the user's query"
- "Don't make up an answer if you don't get any data back from the tool"
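A complementary, code-level mitigation is to avoid returning an empty tool result in the first place: have the tool backend substitute an explicit "no data" payload that the playbook can relay. A sketch, where `lookup_order` and the fallback message are hypothetical:

```python
def lookup_order(order_id: str, database: dict) -> dict:
    """Hypothetical tool backend: return an explicit 'no data' payload
    instead of an empty result, so the playbook LLM has something
    concrete to relay rather than room to hallucinate."""
    record = database.get(order_id)
    if record is None:
        return {"found": False,
                "message": f"No data found for order {order_id}."}
    return {"found": True, "status": record["status"]}

db = {"A100": {"status": "shipped"}}
print(lookup_order("A100", db))  # populated result
print(lookup_order("B200", db))  # explicit 'no data' payload
```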
Generate a schema with Gemini
Gemini can generate a schema for you. For example, try "can you create an example OpenAPI 3.0 schema for Google Calendar".
Focused playbooks
Avoid creating very large and complex playbooks. Each playbook should accomplish a specific and clear task. If you have a complex playbook, consider breaking it down into smaller sub-playbooks.
Avoid loops and recursion
Don't create loops or recursion when linking agents in your instructions.
Provide routing information to examples
When a playbook should route to another playbook, include this routing information in the examples. It is provided to an example from the End example with output information field of the Input & Output example section.
For instance, the final sentence of this field could be "Reroute back to the default playbook for further queries.".
Use Conversational Agents (Dialogflow CX) Messenger JavaScript functions for personalization
When using Conversational Agents (Dialogflow CX) Messenger, the following functions are useful to send user personalization information from the web interface to the playbook: