FinishReason(value)
The reason the model stopped generating tokens. If empty, the model has not stopped generating tokens.
Enum values:
FINISH_REASON_UNSPECIFIED (0):
The finish reason is unspecified.
STOP (1):
Natural stop point of the model or a provided
stop sequence.
MAX_TOKENS (2):
The maximum number of tokens as specified in
the request was reached.
SAFETY (3):
Token generation was stopped because the
response was flagged for safety reasons. NOTE:
When streaming, Candidate.content will be
empty if content filters blocked the output.
RECITATION (4):
Token generation was stopped because the
response was flagged for unauthorized citations.
OTHER (5):
All other reasons that stopped token
generation.
BLOCKLIST (6):
Token generation was stopped because the
response was flagged for terms included in
the terminology blocklist.
PROHIBITED_CONTENT (7):
Token generation was stopped because the
response was flagged as prohibited content.
SPII (8):
Token generation was stopped because the
response was flagged for containing Sensitive
Personally Identifiable Information (SPII).