GenerationConfig

Configuration options for model generation and outputs.

JSON representation
{
  "stopSequences": [
    string
  ],
  "responseMimeType": string,
  "temperature": number,
  "topP": number,
  "topK": number,
  "candidateCount": integer,
  "maxOutputTokens": integer,
  "presencePenalty": number,
  "frequencyPenalty": number
}
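
Example usage

Below is a minimal sketch of passing this configuration with a request, assuming the google-generativeai Python SDK, which accepts these fields as snake_case keyword arguments; the model name "gemini-1.5-flash" and all parameter values are illustrative only.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Generation config mirroring the JSON representation above (snake_case names).
config = genai.GenerationConfig(
    stop_sequences=["\n\n"],
    response_mime_type="text/plain",
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    candidate_count=1,
    max_output_tokens=256,
)

model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name
response = model.generate_content(
    "Write a two-sentence summary of nucleus sampling.",
    generation_config=config,
)
print(response.text)
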
Fields
stopSequences[]

string

Optional. Character sequences that stop output generation when they appear in the generated text.

responseMimeType

string

Optional. Output response MIME type of the generated candidate text. Supported MIME types:
- text/plain (default): text output.
- application/json: JSON response in the candidates.
The model must be prompted to produce the appropriate response type; otherwise the behavior is undefined. This is a preview feature.
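
For example, under the same SDK assumption as the sketch above, a request for JSON output pairs response_mime_type="application/json" with a prompt that explicitly asks for JSON (model name is illustrative):

import google.generativeai as genai

model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name
response = model.generate_content(
    "List three primary colors as a JSON array of strings.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
    ),
)
print(response.text)  # e.g. '["red", "yellow", "blue"]'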

temperature

number

Optional. Controls the randomness of predictions. Higher values produce more varied output; lower values make output more deterministic.

topP

number

Optional. If specified, nucleus (top-p) sampling is used: tokens are sampled from the smallest set whose cumulative probability is at least topP.

topK

number

Optional. If specified, top-k sampling is used: tokens are sampled from the topK most likely tokens at each step.

candidateCount

integer

Optional. Number of candidates to generate.

maxOutputTokens

integer

Optional. The maximum number of output tokens to generate per message.

presencePenalty

number

Optional. Presence penalty applied to tokens that have already appeared in the generated text. A positive value discourages reusing tokens that have appeared at least once, encouraging a broader vocabulary.

frequencyPenalty

number

Optional. Frequency penalty applied to tokens in proportion to how often they have already appeared in the generated text. A positive value discourages repeating the same tokens frequently.