Resource: Generator
Generators contain the prompt to be sent to the LLM model to generate text. The prompt can contain parameters that will be resolved before calling the model. It can optionally contain banned phrases to ensure that the model's responses are safe.
JSON representation

```
{
  "name": string,
  "displayName": string,
  "promptText": {
    object (Phrase)
  },
  "placeholders": [
    {
      object (Placeholder)
    }
  ],
  "modelParameter": {
    object (ModelParameter)
  }
}
```

| Field | Description |
|---|---|
| `name` | The unique identifier of the generator. Must be set for the update (patch) method. |
| `displayName` | Required. The human-readable name of the generator, unique within the agent. |
| `promptText` | Required. Prompt for the LLM model. The prompt contains pre-defined parameters such as $conversation, $last-user-utterance, etc., populated by Dialogflow. It can also contain custom placeholders which will be resolved during fulfillment. |
| `placeholders[]` | Optional. List of custom placeholders in the prompt text. |
| `modelParameter` | Parameters passed to the LLM to configure its behavior. |
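As an illustrative sketch of how these fields fit together, the following assembles a Generator payload as a plain dictionary. The display name, prompt text, and placeholder values are invented for illustration; `name` is omitted because the service assigns it on creation.

```python
import json

# Hypothetical Generator payload using the writable fields from the
# JSON representation above.
generator = {
    "displayName": "Summarization generator",  # invented display name
    "promptText": {
        "text": (
            "Summarize the following conversation:\n"
            "$conversation\n"
            "Focus on: $focus-topic"  # custom placeholder, resolved at fulfillment
        )
    },
    "placeholders": [
        # "id" maps to a fulfillment parameter; "name" is the token in the prompt.
        {"id": "focus-topic", "name": "$focus-topic"}
    ],
    "modelParameter": {
        "temperature": 0.2,
        "maxDecodeSteps": 256,
        "topP": 0.95,
        "topK": 40,
    },
}

print(json.dumps(generator, indent=2))
```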
Phrase

Text input which can be used for prompt or banned phrases.

JSON representation

```
{
  "text": string
}
```

| Field | Description |
|---|---|
| `text` | Required. Text input which can be used for prompt or banned phrases. |
Placeholder

Represents a custom placeholder in the prompt text.

JSON representation

```
{
  "id": string,
  "name": string
}
```

| Field | Description |
|---|---|
| `id` | Unique ID used to map the custom placeholder to parameters in fulfillment. |
| `name` | Custom placeholder value in the prompt text. |
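The id-to-parameter mapping can be sketched with plain string substitution. The placeholder and parameter values below are invented; the actual resolution is performed by Dialogflow during fulfillment.

```python
# Hypothetical placeholder definition and fulfillment parameter values.
placeholders = [{"id": "focus-topic", "name": "$focus-topic"}]
fulfillment_params = {"focus-topic": "billing questions"}

prompt = "Summarize the conversation, focusing on $focus-topic."

# Resolve each custom placeholder: "id" keys into the fulfillment
# parameters, and "name" is the token replaced in the prompt text.
for ph in placeholders:
    value = fulfillment_params.get(ph["id"], "")
    prompt = prompt.replace(ph["name"], value)

print(prompt)  # Summarize the conversation, focusing on billing questions.
```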
ModelParameter

Parameters to be passed to the LLM. If not set, default values will be used.

JSON representation

```
{
  "temperature": number,
  "maxDecodeSteps": integer,
  "topP": number,
  "topK": integer
}
```

| Field | Description |
|---|---|
| `temperature` | The temperature used for sampling. Temperature sampling occurs after both topP and topK have been applied. Valid range: [0.0, 1.0]. Low temperature = less random; high temperature = more random. |
| `maxDecodeSteps` | The maximum number of tokens to generate. |
| `topP` | If set, only the tokens comprising the top topP probability mass are considered. If both topP and topK are set, topP will be used to further refine the candidates selected with topK. Valid range: (0.0, 1.0]. Small topP = less random; large topP = more random. |
| `topK` | If set, the sampling process in each step is limited to the topK tokens with the highest probabilities. Valid range: [1, 40] or 1000+. Small topK = less random; large topK = more random. |
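The documented order of operations (topK first, then topP refining the topK candidates, then temperature sampling over what remains) can be sketched for a toy distribution. This illustrates the described semantics only; it is not the model's actual implementation, and the token probabilities are made up.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, top_k=None, rng=None):
    """Toy sampler: apply topK, then topP, then temperature sampling."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)

    # 1. topK: keep only the k highest-probability tokens.
    if top_k is not None:
        items = items[:top_k]

    # 2. topP: keep the smallest prefix whose probability mass >= top_p.
    probs = [math.exp(v) for _, v in items]
    total = sum(probs)
    kept, mass = [], 0.0
    for (tok, _), p in zip(items, probs):
        kept.append((tok, p))
        mass += p / total
        if mass >= top_p:
            break

    # 3. Temperature: rescale the surviving weights and sample. Lower
    #    temperature sharpens the distribution (less random); higher
    #    temperature flattens it (more random).
    weights = [p ** (1.0 / max(temperature, 1e-6)) for _, p in kept]
    return rng.choices([tok for tok, _ in kept], weights=weights, k=1)[0]

logits = {"the": 2.0, "a": 1.0, "cat": 0.5, "dog": 0.1}
print(sample_token(logits, temperature=0.1, top_p=0.9, top_k=2))
```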
Methods

| Method | Description |
|---|---|
| `create` | Creates a generator in the specified agent. |
| `delete` | Deletes the specified generator. |
| `get` | Retrieves the specified generator. |
| `list` | Returns the list of all generators in the specified agent. |
| `patch` | Updates the specified generator. |
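As a sketch of calling the create method over REST, the following builds (but does not send) the HTTP request. The project, location, and agent IDs are placeholders, and authentication via a bearer token is assumed and omitted.

```python
import json
import urllib.request

# Hypothetical resource path components.
parent = "projects/my-project/locations/global/agents/my-agent"
url = f"https://dialogflow.googleapis.com/v3/{parent}/generators"

body = {
    "displayName": "Summarization generator",
    "promptText": {"text": "Summarize the conversation:\n$conversation"},
}

req = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # "Authorization": "Bearer <access token>",  # required in practice
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; not executed here.
print(req.full_url)
```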