REST Resource: projects.generators

Resource: Generator

LLM generator.

JSON representation
{
  "name": string,
  "description": string,
  "inferenceParameter": {
    object (InferenceParameter)
  },
  "triggerEvent": enum (TriggerEvent),
  "createTime": string,
  "updateTime": string,

  // Union field context can be only one of the following:
  "summarizationContext": {
    object (SummarizationContext)
  }
  // End of list of possible types for union field context.
}
Fields
name

string

Output only. Identifier. The resource name of the generator. Format: projects/<Project ID>/locations/<Location ID>/generators/<Generator ID>

description

string

Optional. Human readable description of the generator.

inferenceParameter

object (InferenceParameter)

Optional. Inference parameters for this generator.

triggerEvent

enum (TriggerEvent)

Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation.

createTime

string (Timestamp format)

Output only. Creation time of this generator.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

updateTime

string (Timestamp format)

Output only. Update time of this generator.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

Union field context. Required. Input context of the generator. context can be only one of the following:
summarizationContext

object (SummarizationContext)

Input of Summarization feature.
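To illustrate how these fields fit together, a Generator body with a summarization context can be assembled as a plain JSON-style dict. This is a minimal sketch; the helper name and the specific field values (temperature, token limit, section choices) are illustrative, not mandated by the API.

```python
# Sketch: assembling a Generator resource body mirroring the JSON
# representation above. Project/location IDs are not part of the body;
# the "name" field is output only.
def make_summarization_generator(description, sections):
    """Build a Generator body with a summarization context (union field)."""
    return {
        "description": description,
        "triggerEvent": "MANUAL_CALL",
        "inferenceParameter": {
            "maxOutputTokens": 256,  # illustrative value
            "temperature": 0.2,      # illustrative value
        },
        # Union field "context": exactly one member may be set.
        "summarizationContext": {
            "summarizationSections": [
                {"key": k, "type": t} for k, t in sections
            ],
            "version": "2.0",
        },
    }

body = make_summarization_generator(
    "Conversation summarizer",
    [("situation", "SITUATION"), ("action", "ACTION")],
)
```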

SummarizationContext

Summarization context that customers can configure.

JSON representation
{
  "summarizationSections": [
    {
      object (SummarizationSection)
    }
  ],
  "fewShotExamples": [
    {
      object (FewShotExample)
    }
  ],
  "version": string,
  "outputLanguageCode": string
}
Fields
summarizationSections[]

object (SummarizationSection)

Optional. List of sections. Note that it contains both predefined sections and customer-defined sections.

fewShotExamples[]

object (FewShotExample)

Optional. List of few shot examples.

version

string

Optional. Version of the feature. If not set, defaults to the latest version. Current candidates are ["1.0"].

outputLanguageCode

string

Optional. The target language of the generated summary. If this field is empty, the conversation's language code is used. Supported in version 2.0 and later.

SummarizationSection

Represents the section of summarization.

JSON representation
{
  "key": string,
  "definition": string,
  "type": enum (Type)
}
Fields
key

string

Optional. Name of the section, for example, "situation".

definition

string

Optional. Definition of the section, for example, "what the customer needs help with or has question about."

type

enum (Type)

Optional. Type of the summarization section.

Type

Type enum of the summarization sections.

Enums
TYPE_UNSPECIFIED Undefined section type, does not return anything.
SITUATION What the customer needs help with or has question about. Section name: "situation".
ACTION What the agent does to help the customer. Section name: "action".
RESOLUTION Result of the customer service. A single word describing the result of the conversation. Section name: "resolution".
REASON_FOR_CANCELLATION Reason for cancellation if the customer requests a cancellation. "N/A" otherwise. Section name: "reason_for_cancellation".
CUSTOMER_SATISFACTION "Unsatisfied" or "Satisfied" depending on the customer's feelings at the end of the conversation. Section name: "customer_satisfaction".
ENTITIES Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by "entities/".
CUSTOMER_DEFINED Customer-defined sections.
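The section names listed in the enum descriptions above can be collected into a lookup table, which is handy when matching a returned SummarySection back to the Type that produced it. This is a sketch derived directly from the descriptions; the constant names are illustrative.

```python
# Section names produced by each predefined Type, per the enum
# descriptions above. ENTITIES yields multiple sections whose names
# share the "entities/" prefix, so it is modeled as a prefix.
SECTION_NAME_BY_TYPE = {
    "SITUATION": "situation",
    "ACTION": "action",
    "RESOLUTION": "resolution",
    "REASON_FOR_CANCELLATION": "reason_for_cancellation",
    "CUSTOMER_SATISFACTION": "customer_satisfaction",
}
ENTITIES_PREFIX = "entities/"

def is_entities_section(section_name):
    """True if a returned section name came from the ENTITIES type."""
    return section_name.startswith(ENTITIES_PREFIX)
```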

FewShotExample

Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.

JSON representation
{
  "conversationContext": {
    object (ConversationContext)
  },
  "extraInfo": {
    string: string,
    ...
  },
  "output": {
    object (GeneratorSuggestion)
  },

  // Union field instruction_list can be only one of the following:
  "summarizationSectionList": {
    object (SummarizationSectionList)
  }
  // End of list of possible types for union field instruction_list.
}
Fields
conversationContext

object (ConversationContext)

Optional. Conversation transcripts.

extraInfo

map (key: string, value: string)

Optional. Key is the placeholder field name in the input; value is the value of the placeholder. For example, if the instruction contains "@price", the ingested data could contain <"price", "10">.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

output

object (GeneratorSuggestion)

Required. Example output of the model.

Union field instruction_list. Instruction list of this few_shot example. instruction_list can be only one of the following:
summarizationSectionList

object (SummarizationSectionList)

Summarization sections.
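Putting these fields together, a FewShotExample might look like the sketch below. The placeholder-resolution helper is purely illustrative of what extraInfo means (the service performs any substitution server-side), and the example content is invented for demonstration.

```python
# Sketch: a FewShotExample body, plus an illustrative helper showing how
# an extraInfo entry like <"price", "10"> relates to an "@price"
# placeholder in an instruction. Substitution here is for illustration
# only; it is not a client-side API.
def resolve_placeholders(instruction, extra_info):
    for key, value in extra_info.items():
        instruction = instruction.replace("@" + key, value)
    return instruction

example = {
    "conversationContext": {"messageEntries": []},
    "extraInfo": {"price": "10"},
    "output": {
        "summarySuggestion": {
            "summarySections": [
                {"section": "situation",
                 "summary": "Customer asks about a $10 charge."}
            ]
        }
    },
    # Union field "instruction_list": only one member may be set.
    "summarizationSectionList": {"summarizationSections": []},
}

resolved = resolve_placeholders(
    "Summarize, noting the @price charge.", example["extraInfo"]
)
```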

ConversationContext

Context of the conversation, including transcripts.

JSON representation
{
  "messageEntries": [
    {
      object (MessageEntry)
    }
  ]
}
Fields
messageEntries[]

object (MessageEntry)

Optional. List of message transcripts in the conversation.

MessageEntry

Represents a message entry of a conversation.

JSON representation
{
  "role": enum (Role),
  "text": string,
  "languageCode": string,
  "createTime": string
}
Fields
role

enum (Role)

Optional. Participant role of the message.

text

string

Optional. Transcript content of the message.

languageCode

string

Optional. The language of the text. See Language Support for a list of the currently supported language codes.

createTime

string (Timestamp format)

Optional. Create time of the message entry.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

Role

Enumeration of the roles a participant can play in a conversation.

Enums
ROLE_UNSPECIFIED Participant role not set.
HUMAN_AGENT Participant is a human agent.
AUTOMATED_AGENT Participant is an automated agent, such as a Dialogflow agent.
END_USER Participant is an end user that has called or chatted with Dialogflow services.
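A ConversationContext can be built from simple (role, text) pairs, validating roles against the enum above. A minimal sketch; the helper name and default language code are illustrative.

```python
# Sketch: turning (role, text) pairs into a ConversationContext.
# Roles must be Role enum values; languageCode and createTime are
# optional on each MessageEntry.
VALID_ROLES = {"ROLE_UNSPECIFIED", "HUMAN_AGENT", "AUTOMATED_AGENT", "END_USER"}

def conversation_context(turns, language_code="en-US"):
    entries = []
    for role, text in turns:
        if role not in VALID_ROLES:
            raise ValueError(f"unknown role: {role}")
        entries.append({
            "role": role,
            "text": text,
            "languageCode": language_code,
        })
    return {"messageEntries": entries}

ctx = conversation_context([
    ("END_USER", "I was double-charged."),
    ("HUMAN_AGENT", "Let me check that for you."),
])
```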

SummarizationSectionList

List of summarization sections.

JSON representation
{
  "summarizationSections": [
    {
      object (SummarizationSection)
    }
  ]
}
Fields
summarizationSections[]

object (SummarizationSection)

Optional. Summarization sections.

GeneratorSuggestion

Suggestion generated using a Generator.

JSON representation
{

  // Union field suggestion can be only one of the following:
  "summarySuggestion": {
    object (SummarySuggestion)
  }
  // End of list of possible types for union field suggestion.
}
Fields
Union field suggestion. The generated suggestion, which can be one of several types. suggestion can be only one of the following:
summarySuggestion

object (SummarySuggestion)

Optional. Suggested summary.

SummarySuggestion

Suggested summary of the conversation.

JSON representation
{
  "summarySections": [
    {
      object (SummarySection)
    }
  ]
}
Fields
summarySections[]

object (SummarySection)

Required. All the parts of generated summary.

SummarySection

A component of the generated summary.

JSON representation
{
  "section": string,
  "summary": string
}
Fields
section

string

Required. Name of the section.

summary

string

Required. Summary text for the section.
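A returned SummarySuggestion can be flattened into display text, one line per SummarySection. A sketch with invented example content; section names come from the generator's configured summarization sections.

```python
# Sketch: rendering a SummarySuggestion as "section: summary" lines.
def render_summary(summary_suggestion):
    return "\n".join(
        f"{s['section']}: {s['summary']}"
        for s in summary_suggestion["summarySections"]
    )

text = render_summary({
    "summarySections": [
        {"section": "situation", "summary": "Customer reports a double charge."},
        {"section": "resolution", "summary": "Refunded."},
    ]
})
```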

InferenceParameter

The parameters of inference.

JSON representation
{
  "maxOutputTokens": integer,
  "temperature": number,
  "topK": integer,
  "topP": number
}
Fields
maxOutputTokens

integer

Optional. Maximum number of the output tokens for the generator.

temperature

number

Optional. Controls the randomness of LLM predictions. Lower temperature means less random output; higher temperature means more random output. If unset, defaults to 0.

topK

integer

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top K tokens with the highest probabilities are sampled; tokens are then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in [1, 40]; the default is 40.

topP

number

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable of the top K (see the topK parameter) to the least probable, until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model selects either A or B as the next token (using temperature) and doesn't consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in [0.0, 1.0]; the default is 0.95.
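The two-stage filtering these fields describe can be sketched as follows: keep the top-k candidates, then keep the smallest prefix whose cumulative probability reaches top-p. This is an illustrative model of the decoding pipeline, not the service's actual implementation; the final token would then be sampled from the survivors using temperature.

```python
# Sketch of the candidate filtering described by topK and topP.
def filter_candidates(probs, top_k, top_p):
    """probs: dict of token -> probability. Returns surviving (token, p) pairs."""
    # Stage 1: keep the top-k most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Stage 2: keep the smallest prefix whose cumulative probability
    # reaches top-p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# With the example from the text: A=0.3, B=0.2, C=0.1 and top-p = 0.5,
# only A and B survive (0.3 + 0.2 reaches 0.5); C is never considered.
survivors = filter_candidates({"A": 0.3, "B": 0.2, "C": 0.1}, top_k=40, top_p=0.5)
```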

TriggerEvent

The event that triggers the generator and LLM execution.

Enums
TRIGGER_EVENT_UNSPECIFIED Default value for TriggerEvent.
END_OF_UTTERANCE Triggers when each chat message or voice utterance ends.
MANUAL_CALL Triggered manually on the conversation through API calls, such as Conversations.GenerateStatelessSuggestion and Conversations.GenerateSuggestions.

Methods

create

Creates a generator.

list

Lists generators.
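As a rough sketch of the create method over REST, the request below derives its path from the resource name format above. The hostname, API version segment ("v2beta1"), and auth handling are assumptions here, not confirmed by this page; in practice, prefer the official client libraries.

```python
import json
import urllib.request

# Sketch: building (not sending) a create-generator request. The
# "v2beta1" path segment and token handling are assumptions.
def build_create_request(project, location, body, access_token):
    url = (f"https://dialogflow.googleapis.com/v2beta1/"
           f"projects/{project}/locations/{location}/generators")
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_request("my-project", "global",
                           {"description": "demo"}, "ACCESS_TOKEN")
```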