REST Resource: projects.locations.agents.generators

Resource: Generator

Generators contain a prompt to be sent to the LLM to generate text. The prompt can contain parameters, which are resolved before the model is called. It can optionally contain banned phrases to ensure the model's responses are safe.

JSON representation
{
  "name": string,
  "displayName": string,
  "promptText": {
    object (Phrase)
  },
  "placeholders": [
    {
      object (Placeholder)
    }
  ],
  "llmModelSettings": {
    object (LlmModelSettings)
  },
  "modelParameter": {
    object (ModelParameter)
  }
}
Fields
name

string

The unique identifier of the generator. Must be set for the Generators.UpdateGenerator method. Generators.CreateGenerator populates the name automatically. Format: projects/<ProjectID>/locations/<LocationID>/agents/<AgentID>/generators/<GeneratorID>.

displayName

string

Required. The human-readable name of the generator, unique within the agent.

promptText

object (Phrase)

Required. Prompt for the LLM. The prompt can contain pre-defined parameters such as $conversation and $last-user-utterance, which are populated by Dialogflow. It can also contain custom placeholders which will be resolved during fulfillment.

placeholders[]

object (Placeholder)

Optional. List of custom placeholders in the prompt text.

llmModelSettings

object (LlmModelSettings)

The LLM model settings.

modelParameter

object (ModelParameter)

Parameters passed to the LLM to configure its behavior.
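To make the field listing above concrete, the sketch below assembles a Generator resource as a plain Python dict mirroring the JSON representation. All concrete values (project, agent, display name, prompt, placeholder) are invented examples, not values from the API:

```python
# Sketch: assemble a Generator resource as a plain dict, mirroring the
# JSON representation above. All concrete values here are invented examples.
project_id, location_id = "my-project", "global"
agent_id, generator_id = "my-agent", "my-generator"

generator = {
    # Resource name follows the documented format.
    "name": (
        f"projects/{project_id}/locations/{location_id}"
        f"/agents/{agent_id}/generators/{generator_id}"
    ),
    "displayName": "Summarizer",
    # $conversation is a pre-defined parameter; $product is a custom placeholder.
    "promptText": {"text": "Summarize $conversation for the $product team."},
    "placeholders": [{"id": "product", "name": "$product"}],
    "modelParameter": {"temperature": 0.2, "maxDecodeSteps": 256},
}

print(generator["name"])
```

The `name` field is only required for update calls; create populates it automatically.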

Phrase

Text input which can be used for prompt or banned phrases.

JSON representation
{
  "text": string
}
Fields
text

string

Required. Text input which can be used for prompt or banned phrases.

Placeholder

Represents a custom placeholder in the prompt text.

JSON representation
{
  "id": string,
  "name": string
}
Fields
id

string

Unique ID used to map the custom placeholder to parameters in fulfillment.

name

string

Custom placeholder value in the prompt text.
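Dialogflow resolves custom placeholders server-side during fulfillment; the sketch below only illustrates the id-to-value mapping locally, with invented prompt and parameter values:

```python
# Sketch: substitute custom placeholder values into the prompt text.
# Dialogflow performs this resolution server-side during fulfillment;
# this local version only illustrates the id -> value mapping.
prompt_text = "Summarize $conversation for the $product team."
placeholders = [{"id": "product", "name": "$product"}]
# Values supplied for placeholder ids during fulfillment (invented example).
fulfillment_params = {"product": "Billing"}

resolved = prompt_text
for ph in placeholders:
    value = fulfillment_params.get(ph["id"], ph["name"])
    resolved = resolved.replace(ph["name"], value)

# Pre-defined parameters such as $conversation are left for Dialogflow.
print(resolved)
```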

ModelParameter

Parameters to be passed to the LLM. If not set, default values will be used.

JSON representation
{
  "temperature": number,
  "maxDecodeSteps": integer,
  "topP": number,
  "topK": integer
}
Fields
temperature

number

The temperature used for sampling. Temperature sampling occurs after both topP and topK have been applied. Valid range: [0.0, 1.0]. Low temperature = less random. High temperature = more random.

maxDecodeSteps

integer

The maximum number of tokens to generate.

topP

number

If set, only the tokens comprising the top topP probability mass are considered. If both topP and topK are set, topP will be used for further refining candidates selected with topK. Valid range: (0.0, 1.0]. Small topP = less random. Large topP = more random.

topK

integer

If set, the sampling process in each step is limited to the topK tokens with highest probabilities. Valid range: [1, 40] or 1000+. Small topK = less random. Large topK = more random.
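The documented ranges can be checked client-side before a request is sent. The helper below is an illustrative sketch only, not part of the Dialogflow API:

```python
# Sketch: validate ModelParameter values against the ranges documented above.
# This helper is illustrative only, not part of the Dialogflow API.
def validate_model_parameter(params: dict) -> list:
    errors = []
    t = params.get("temperature")
    if t is not None and not (0.0 <= t <= 1.0):
        errors.append("temperature must be in [0.0, 1.0]")
    p = params.get("topP")
    if p is not None and not (0.0 < p <= 1.0):
        errors.append("topP must be in (0.0, 1.0]")
    k = params.get("topK")
    if k is not None and not (1 <= k <= 40 or k >= 1000):
        errors.append("topK must be in [1, 40] or 1000+")
    return errors

print(validate_model_parameter({"temperature": 0.2, "topP": 0.95, "topK": 40}))
print(validate_model_parameter({"temperature": 1.5}))
```

Unset fields are skipped, matching the statement that defaults are used when a parameter is not set.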

Methods

create

Creates a generator in the specified agent.

delete

Deletes the specified generator.

get

Retrieves the specified generator.

list

Returns the list of all generators in the specified agent.

patch

Updates the specified generator.
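The methods above map onto standard REST verbs and URLs. A sketch of the URL shapes, assuming the dialogflow.googleapis.com v3 endpoint (authentication and the HTTP calls themselves are omitted; the project, agent, and generator names are invented):

```python
# Sketch: request URLs for the generator methods, assuming the
# dialogflow.googleapis.com v3 endpoint. Authentication and the actual
# HTTP requests are omitted; identifiers below are invented examples.
BASE = "https://dialogflow.googleapis.com/v3"

def agent_path(project, location, agent):
    return f"projects/{project}/locations/{location}/agents/{agent}"

parent = agent_path("my-project", "global", "my-agent")

create_url = f"{BASE}/{parent}/generators"         # POST   -> create
list_url   = f"{BASE}/{parent}/generators"         # GET    -> list
get_url    = f"{BASE}/{parent}/generators/my-gen"  # GET    -> get
patch_url  = f"{BASE}/{parent}/generators/my-gen"  # PATCH  -> patch
delete_url = f"{BASE}/{parent}/generators/my-gen"  # DELETE -> delete

print(create_url)
```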