Configuration options for model generation and outputs.
Package
@google-cloud/vertexai
Properties
candidateCount
candidateCount?: number;
Optional. Number of candidates to generate.
maxOutputTokens
maxOutputTokens?: number;
Optional. The maximum number of output tokens to generate per message.
responseMimeType
responseMimeType?: string;
Optional. Output response MIME type of the generated candidate text. Supported MIME types:
- text/plain: (default) Text output.
- application/json: JSON response in the candidates. The model needs to be prompted to output the appropriate response type; otherwise the behavior is undefined. This is a preview feature.
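For example, a minimal sketch of requesting JSON output with this option (the project ID, location, model name, and prompt below are placeholder assumptions):

import { VertexAI } from '@google-cloud/vertexai';

const vertexAI = new VertexAI({ project: 'my-project', location: 'us-central1' });

// Request application/json output. The prompt must still ask for JSON explicitly,
// since the behavior is otherwise undefined.
const model = vertexAI.getGenerativeModel({
  model: 'gemini-1.5-pro', // placeholder model name
  generationConfig: { responseMimeType: 'application/json' },
});

async function main() {
  const result = await model.generateContent(
    'Return the three primary colors as a JSON array of strings.'
  );
  console.log(result.response.candidates?.[0]?.content?.parts?.[0]?.text);
}
main();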
stopSequences
stopSequences?: string[];
Optional. Stop sequences. Sequences of text that, when generated, cause the model to stop producing further output.
temperature
temperature?: number;
Optional. Controls the randomness of predictions. Lower values yield more deterministic responses; higher values yield more varied ones.
topK
topK?: number;
Optional. If specified, top-k sampling is used: the model considers only the topK most probable tokens at each step.
topP
topP?: number;
Optional. If specified, nucleus sampling is used: the model considers the smallest set of tokens whose cumulative probability is at least topP.
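As an end-to-end sketch, the sampling and length options above can be combined into a single generationConfig when creating a model instance (project ID, location, model name, and prompt are placeholder assumptions, not values from this reference):

import { VertexAI } from '@google-cloud/vertexai';

const vertexAI = new VertexAI({ project: 'my-project', location: 'us-central1' });

const model = vertexAI.getGenerativeModel({
  model: 'gemini-1.5-pro', // placeholder model name
  generationConfig: {
    candidateCount: 1,      // return a single candidate
    maxOutputTokens: 256,   // cap the response length
    temperature: 0.2,       // low randomness
    topK: 40,               // sample from the 40 most probable tokens
    topP: 0.95,             // nucleus sampling threshold
    stopSequences: ['END'], // stop generation when "END" is emitted
  },
});

async function main() {
  const result = await model.generateContent('Write a short product description.');
  console.log(result.response.candidates?.[0]?.content?.parts?.[0]?.text);
}
main();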