The following best practices can help you build robust agents.
Playbook name in natural language
Use natural language with clear meaning for playbook names. For example, "Customer Help Center Playbook" is more descriptive than "company_specialist", which helps AI Generator performance at runtime.
Concise goals
Goals should be a concise description of the playbook's purpose.
Provide quality instructions
Instructions should:
- reflect the step-by-step approach to solving an end-user problem
- be concise, high-level instructions written in natural language sentences
- be straightforward and specify the scenarios in which tools should be used
At least one example for each playbook
You should have at least one example for each playbook, but it is recommended to have at least four. Examples should include happy path scenarios.
Without enough examples, a playbook is likely to result in unpredictable behavior. If your playbook is not responding or behaving in the manner you expect, missing or poorly defined examples are likely the cause. Try improving your examples or adding new ones.
Precision of instructions and examples
While it helps to write clear and descriptive instructions, it's really the quality and quantity of your examples that determine the accuracy of the playbook's behavior. In other words, spend more time writing thorough examples than writing perfectly precise instructions.
Reference tools in examples
If the playbook is designed to provide responses by using tools, reference the tools in the examples corresponding to this type of request.
Tool schema operationId field
When defining schemas for your tools, the operationId value is important. Your playbook instructions will reference this value. The following are naming recommendations for this field:
- Letters, numbers, and underscores only.
- Must be unique among all operationIds described in the schema.
- Must be a meaningful name reflecting the capability provided.
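A minimal sketch of an OpenAPI 3.0 tool schema following these recommendations (the path, parameters, and responses here are illustrative, not a required layout):

```yaml
openapi: 3.0.0
info:
  title: Order lookup tool
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      # Meaningful, unique, and restricted to letters, numbers, and underscores,
      # so playbook instructions can reference it reliably:
      operationId: get_order_status
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current status of the order
```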
Tool schema validation
You should validate your tool schema. You can use the Swagger Editor to check your OpenAPI 3.0 schema syntax.
Handle empty tool results
When your playbook relies on a tool to inform its response, an empty tool result can lead to unpredictable playbook behavior. Sometimes, the playbook AI Generator will hallucinate information in a response in lieu of a tool result. To prevent this, you can add specific instructions to ensure the playbook AI Generator doesn't attempt to answer on its own.
Some use cases require playbook responses to be well grounded in tool results or provided data; in those cases, you need to mitigate responses based only on the playbook AI Generator's own knowledge.
Examples of instructions to mitigate hallucinations:
- "You must use the tool to answer all user questions"
- "If you don't get any data back from the tool, respond that you don't know the answer to the user's query"
- "Don't make up an answer if you don't get any data back from the tool"
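The same guard can also be applied in the tool-calling layer itself. A minimal sketch in Python (the function names here are hypothetical, not part of the playbooks API): check the tool result before it reaches the generator, and substitute a fixed fallback when it is empty.

```python
def answer_with_tool(query, call_tool):
    """Return a grounded answer, or a fixed fallback when the tool has no data.

    `call_tool` is a hypothetical callable standing in for your real tool;
    it returns a list of result records (possibly empty).
    """
    results = call_tool(query)
    if not results:
        # Never let the generator improvise: return a fixed "don't know" answer.
        return "I don't know the answer to that based on the available data."
    # Only pass real tool data into the response.
    return f"Based on our records: {results[0]}"

# Usage with stub tools:
print(answer_with_tool("order status", lambda q: ["Order #123 has shipped."]))
print(answer_with_tool("warranty terms", lambda q: []))
```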
Generate a schema with Gemini
Gemini can generate a schema for you. For example, try the prompt "Can you create an example OpenAPI 3.0 schema for Google Calendar?".
Focused playbooks
Avoid creating very large and complex playbooks. Each playbook should accomplish a specific and clear task. If you have a complex playbook, consider breaking it down into smaller sub-playbooks.
Avoid loops and recursion
Don't create loops or recursion when linking agents in your instructions.
Provide routing information to examples
When a playbook should route to another playbook, include this routing information in your examples. You provide it through the End example with output information field of the Input & Output example section.
For instance, the final sentence of this field could be "Reroute back to the default playbook for further queries.".
Use Conversational Agents (Dialogflow CX) Messenger JavaScript functions for personalization
When using Conversational Agents (Dialogflow CX) Messenger, the following functions are useful to send user personalization information from the web interface to the playbook:
Planning for performance
Generative features typically require several seconds or even tens of seconds to generate a response. While playbooks enhance conversational naturalness, it's crucial to manage response times to maintain a positive end-user experience. Here are some strategies for optimizing performance:
Balance Generative Feature Usage
Carefully consider the trade-off between the time required to execute multiple generative features and the value they bring to the conversation. Avoid overusing these features if they don't significantly contribute to the user's goal.
Minimize Generative Feature Input
Aim to gather and process the minimum amount of information required for an AI Generator to generate a useful response. This can reduce processing time significantly.
Use Context Caching
If you're using Gemini through a tool and have a large initial context, explore caching information using Vertex AI Context Caching to avoid repetitive requests for the same data.
Implement Fixed Responses for Speed
If your application doesn't require unique, dynamic content, consider storing frequently used responses in a traditional database like Firebase. Because they are predefined and readily available, these fixed responses provide much faster response times than a generative feature that needs to compute an answer on the fly.
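The pattern can be sketched as follows (the in-memory dictionary and function names are illustrative stand-ins for your real Firebase client and generative call):

```python
import time

# Hypothetical in-memory stand-in for a Firebase collection of fixed responses.
FIXED_RESPONSES = {
    "opening_hours": "We're open 9am-5pm, Monday to Friday.",
    "return_policy": "Items can be returned within 30 days with a receipt.",
}

def slow_generative_answer(intent: str) -> str:
    """Placeholder for a generative call that takes seconds to complete."""
    time.sleep(0.01)  # real calls take far longer
    return f"(generated answer for {intent})"

def respond(intent: str) -> str:
    # Serve a predefined response instantly when one exists;
    # fall back to the generative feature only for novel intents.
    fixed = FIXED_RESPONSES.get(intent)
    return fixed if fixed is not None else slow_generative_answer(intent)
```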
Instruct the AI Generator to Produce Concise Playbook Responses
For text input and output, the AI Generator response time is highly dependent on the model being used and the output length. Short responses can significantly improve performance. While input length also factors in, output length has a larger impact.
Stream Responses
Stream playbook responses back to the Dialogflow client, which in turn streams the responses to the end user. This contrasts with waiting for the entire response to be generated before sending any portion of it. Streaming responses provides an improved user experience by delivering partial responses as they become available. This approach minimizes the perceived delay and reduces the user's waiting time.
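The difference can be sketched with plain Python generators (the word-by-word chunking is illustrative only; in practice the partial responses arrive from the Dialogflow streaming API, not from splitting a finished string):

```python
import time

def generate_tokens(answer: str):
    """Yield the response one word at a time, simulating incremental generation."""
    for word in answer.split():
        time.sleep(0.01)  # stand-in for per-token model latency
        yield word

def respond_blocking(answer: str) -> list[str]:
    # Wait for the whole response, then deliver it in one piece:
    # the user sees nothing until every token is ready.
    return [" ".join(generate_tokens(answer))]

def respond_streaming(answer: str) -> list[str]:
    # Forward each chunk as soon as it arrives: the user starts
    # reading after the first token instead of after the last.
    return [chunk for chunk in generate_tokens(answer)]
```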