Google Cloud Dialogflow V2 Client - Class InferenceParameter (1.16.0)

Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class InferenceParameter.

The parameters of inference.

Generated from protobuf message google.cloud.dialogflow.v2.InferenceParameter

Namespace

Google \ Cloud \ Dialogflow \ V2

Methods

__construct

Constructor.

Parameters
Name Description
data array

Optional. Data for populating the Message object.

↳ max_output_tokens int

Optional. The maximum number of output tokens for the generator.

↳ temperature float

Optional. Controls the randomness of LLM predictions. Low temperature = less random; high temperature = more random. If unset (or set to 0), a default value of 0 is used.

↳ top_k int

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled; those tokens are then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

↳ top_p float

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top K; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model will select either A or B as the next token (using temperature) and will not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.
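
For illustration, all four fields can be populated at construction time through the data array. The values below are arbitrary examples, not recommended settings:

use Google\Cloud\Dialogflow\V2\InferenceParameter;

// Illustrative sketch: populate every inference parameter at once.
// The field values are example settings, not recommended defaults.
$inferenceParameter = new InferenceParameter([
    'max_output_tokens' => 256,
    'temperature'       => 0.2,
    'top_k'             => 20,
    'top_p'             => 0.8,
]);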

getMaxOutputTokens

Optional. The maximum number of output tokens for the generator.

Returns
Type Description
int

hasMaxOutputTokens

clearMaxOutputTokens

setMaxOutputTokens

Optional. The maximum number of output tokens for the generator.

Parameter
Name Description
var int
Returns
Type Description
$this
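
Because the setter returns $this, calls can be chained. A minimal sketch (the token cap below is an arbitrary example):

use Google\Cloud\Dialogflow\V2\InferenceParameter;

// Minimal sketch: setters return $this, so calls can be chained.
$params = (new InferenceParameter())
    ->setMaxOutputTokens(512); // arbitrary example cap

printf("max_output_tokens: %d\n", $params->getMaxOutputTokens());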

getTemperature

Optional. Controls the randomness of LLM predictions.

Low temperature = less random; high temperature = more random. If unset (or set to 0), a default value of 0 is used.

Returns
Type Description
float

hasTemperature

clearTemperature

setTemperature

Optional. Controls the randomness of LLM predictions.

Low temperature = less random; high temperature = more random. If unset (or set to 0), a default value of 0 is used.

Parameter
Name Description
var float
Returns
Type Description
$this
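
Since an explicit 0 and an unset field both lead to the same default here, hasTemperature() is the way to tell the two states apart. A minimal sketch:

use Google\Cloud\Dialogflow\V2\InferenceParameter;

$params = new InferenceParameter();
var_dump($params->hasTemperature()); // bool(false): field is unset

$params->setTemperature(0.0);
var_dump($params->hasTemperature()); // bool(true): explicitly set to 0
var_dump($params->getTemperature()); // float(0)

$params->clearTemperature();         // return to the unset state
var_dump($params->hasTemperature()); // bool(false)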

getTopK

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled; those tokens are then further filtered based on topP, with the final token selected using temperature sampling.

Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

Returns
Type Description
int

hasTopK

clearTopK

setTopK

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled; those tokens are then further filtered based on topP, with the final token selected using temperature sampling.

Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.

Parameter
Name Description
var int
Returns
Type Description
$this
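
The selection pipeline described above (a top-k filter, then a top-p filter, then temperature sampling) can be sketched in plain PHP. The sketch below is only a conceptual illustration of the top-k step, not the service's actual implementation, and the vocabulary is made up:

// Conceptual sketch of top-k filtering: keep only the $k most probable
// tokens before any further (top-p, temperature) processing.
function topKFilter(array $probabilities, int $k): array
{
    arsort($probabilities); // highest probability first, keys preserved
    return array_slice($probabilities, 0, $k, true);
}

$vocab = ['A' => 0.3, 'B' => 0.2, 'C' => 0.1, 'D' => 0.4];

// k = 1 keeps only the single most probable token (greedy decoding).
print_r(topKFilter($vocab, 1)); // ['D' => 0.4]

// k = 3 keeps the 3 most probable tokens for temperature sampling.
print_r(topKFilter($vocab, 3)); // ['D' => 0.4, 'A' => 0.3, 'B' => 0.2]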

getTopP

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top K; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model will select either A or B as the next token (using temperature) and will not consider C. The default top-p value is 0.95.

Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0].

Returns
Type Description
float

hasTopP

clearTopP

setTopP

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (within the top K; see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model will select either A or B as the next token (using temperature) and will not consider C. The default top-p value is 0.95.

Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0].

Parameter
Name Description
var float
Returns
Type Description
$this
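
The A/B/C example above can be traced in code. The sketch below is a conceptual illustration of top-p (nucleus) filtering, not the service's actual implementation:

// Conceptual sketch of top-p filtering: accumulate tokens from most
// probable to least until the running sum reaches the top-p value.
function topPFilter(array $probabilities, float $topP): array
{
    arsort($probabilities); // highest probability first, keys preserved
    $kept = [];
    $cumulative = 0.0;
    foreach ($probabilities as $token => $p) {
        $kept[$token] = $p;
        $cumulative += $p;
        if ($cumulative >= $topP) {
            break; // the kept set now covers the top-p probability mass
        }
    }
    return $kept;
}

// A (0.3) + B (0.2) reaches the top-p value of 0.5, so C (0.1) is
// never considered; the final token is then sampled from A and B.
print_r(topPFilter(['A' => 0.3, 'B' => 0.2, 'C' => 0.1], 0.5));
// ['A' => 0.3, 'B' => 0.2]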