# Content generation parameters

This page shows the optional sampling parameters you can set in a request to a
model. The parameters available for each model may differ. For more information,
see the [reference documentation](/vertex-ai/generative-ai/docs/model-reference/inference#generationconfig).

Token sampling parameters
-------------------------

### Top-P

Top-P changes how the model selects tokens for output. Tokens are selected
from the most probable to the least probable until the sum of their probabilities
equals the top-P value. For example, if tokens A, B, and C have probabilities of
0.3, 0.2, and 0.1 and the top-P value is `0.5`, then the model selects either
A or B as the next token by using temperature and excludes C as a candidate.

Specify a lower value for less random responses and a higher value for more
random responses.
For more information, see [`topP`](/vertex-ai/generative-ai/docs/model-reference/inference#topP).
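To make the selection rule concrete, here is a minimal, illustrative Python sketch of top-P (nucleus) filtering. It is not the service's implementation: the token names and probabilities are the made-up values from the example above, and the final draw simply reuses the filtered probabilities as sampling weights.

```python
import random

def top_p_filter(token_probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the most probable tokens until their cumulative probability reaches top_p."""
    kept, cumulative = {}, 0.0
    for token, prob in sorted(token_probs.items(), key=lambda item: item[1], reverse=True):
        kept[token] = prob
        cumulative += prob
        if cumulative >= top_p:
            break
    return kept

# The example from the text: A=0.3, B=0.2, C=0.1 with a top-P value of 0.5.
candidates = top_p_filter({"A": 0.3, "B": 0.2, "C": 0.1}, top_p=0.5)
print(candidates)  # {'A': 0.3, 'B': 0.2} -- C is excluded as a candidate.

# The next token is then drawn from the remaining candidates, weighted by their
# (temperature-adjusted) probabilities.
next_token = random.choices(list(candidates), weights=list(candidates.values()))[0]
print(next_token)
```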
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Content generation parameters\n\nThis page shows the optional sampling parameters you can set in a request to a\nmodel. The parameters available for each model may differ. For more information,\nsee the [reference documentation](/vertex-ai/generative-ai/docs/model-reference/inference#generationconfig).\n\nToken sampling parameters\n-------------------------\n\n### Top-P\n\n\nTop-P changes how the model selects tokens for output. Tokens are selected\nfrom the most probable to least probable until the sum of their probabilities\nequals the top-P value. For example, if tokens A, B, and C have a probability of\n0.3, 0.2, and 0.1 and the top-P value is `0.5`, then the model will\nselect either A or B as the next token by using temperature and excludes C as a\ncandidate.\n\nSpecify a lower value for less random responses and a higher value for more\nrandom responses.\nFor more information, see [`topP`](/vertex-ai/generative-ai/docs/model-reference/inference#topP).\n\n\u003cbr /\u003e\n\n### Temperature\n\n\nThe temperature is used for sampling during response generation, which occurs when `topP`\nand `topK` are applied. Temperature controls the degree of randomness in token selection.\nLower temperatures are good for prompts that require a less open-ended or creative response, while\nhigher temperatures can lead to more diverse or creative results. A temperature of `0`\nmeans that the highest probability tokens are always selected. In this case, responses for a given\nprompt are mostly deterministic, but a small amount of variation is still possible.\n\nIf the model returns a response that's too generic, too short, or the model gives a fallback\nresponse, try increasing the temperature.\n\n\u003cbr /\u003e\n\nLower temperatures lead to predictable (but not completely [deterministic](https://medium.com/google-cloud/is-a-zero-temperature-deterministic-c4a7faef4d20))\nresults. For more information, see [`temperature`](/vertex-ai/generative-ai/docs/model-reference/inference#temperature).\n\nStopping parameters\n-------------------\n\n### Maximum output tokens\n\nSet [`maxOutputTokens`](/vertex-ai/generative-ai/docs/model-reference/inference#maxOutputTokens) to limit the number of tokens\ngenerated in the response. A token is approximately four characters, so 100\ntokens correspond to roughly 60-80 words. Set a low value to limit the length\nof the response.\n\n### Stop sequences\n\nDefine strings in [`stopSequences`](/vertex-ai/generative-ai/docs/model-reference/inference#stopSequences) to tell the model to stop\ngenerating text if one of the strings is encountered in the response. If a\nstring appears multiple times in the response, then the response is truncated\nwhere the string is first encountered. The strings are case-sensitive.\n\nToken penalization parameters\n-----------------------------\n\n### Frequency penalty\n\n\nPositive values penalize tokens that repeatedly appear in the generated text, decreasing the\nprobability of repeating content. The minimum value is `-2.0`. 
Stopping parameters
-------------------

### Maximum output tokens

Set [`maxOutputTokens`](/vertex-ai/generative-ai/docs/model-reference/inference#maxOutputTokens) to limit the number of tokens
generated in the response. A token is approximately four characters, so 100
tokens correspond to roughly 60-80 words. Set a low value to limit the length
of the response.

### Stop sequences

Define strings in [`stopSequences`](/vertex-ai/generative-ai/docs/model-reference/inference#stopSequences) to tell the model to stop
generating text if one of the strings is encountered in the response. If a
string appears multiple times in the response, then the response is truncated
where the string is first encountered. The strings are case-sensitive.

Token penalization parameters
-----------------------------

### Frequency penalty

Positive values penalize tokens that repeatedly appear in the generated text, decreasing the
probability of repeating content. The minimum value is `-2.0`. The maximum value is up
to, but not including, `2.0`.
For more information, see [`frequencyPenalty`](/vertex-ai/generative-ai/docs/model-reference/inference#frequencyPenalty).

### Presence penalty

Positive values penalize tokens that already appear in the generated text, increasing the
probability of generating more diverse content. The minimum value is `-2.0`. The maximum
value is up to, but not including, `2.0`.
For more information, see [`presencePenalty`](/vertex-ai/generative-ai/docs/model-reference/inference#presencePenalty).

Advanced parameters
-------------------

Use these parameters to return more information about the tokens in the response
or to control the variability of the response.

**Preview:** This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
of the [Service Specific Terms](/terms/service-terms#1). Pre-GA products and features are available "as is" and might
have limited support. For more information, see the [launch stage descriptions](/products#product-launch-stages).

### Log probabilities of output tokens

Returns the log probabilities of the top candidate tokens at each generation step. The model's
chosen token might not be the same as the top candidate token at each step. Specify the number of
candidates to return by using an integer value in the range of `1`-`20`.
For more information, see [`logprobs`](/vertex-ai/generative-ai/docs/model-reference/inference#logprobs). You also need to
set the [`responseLogprobs`](/vertex-ai/generative-ai/docs/model-reference/inference#responseLogprobs) parameter to `true` to use this
feature.

The [`responseLogprobs`](/vertex-ai/generative-ai/docs/model-reference/inference#responseLogprobs) parameter returns the log
probabilities of the tokens that were chosen by the model at each step.

For more information, see the [Intro to Logprobs](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/logprobs/intro_logprobs.ipynb) notebook.

### Seed

When seed is fixed to a specific value, the model makes a best effort to provide
the same response for repeated requests. Deterministic output isn't guaranteed.
Also, changing the model or parameter settings, such as the temperature, can
cause variations in the response even when you use the same seed value. By
default, a random seed value is used.
For more information, see [`seed`](/vertex-ai/generative-ai/docs/model-reference/inference#seed).
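The combined example in the next section doesn't request log probabilities, so here is a hedged sketch of how the `seed`, `response_logprobs`, and `logprobs` fields of `GenerateContentConfig` could be used together with the Gen AI SDK for Python. The config field names follow the SDK reference linked above; the response-side attributes (`logprobs_result`, `chosen_candidates`, `log_probability`) are assumptions to verify against the SDK documentation.

```python
from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Why is the sky blue?",
    config=GenerateContentConfig(
        seed=42,                 # best-effort reproducibility across repeated requests
        response_logprobs=True,  # return log probabilities of the chosen tokens
        logprobs=5,              # also return the top 5 candidate tokens at each step
    ),
)

# Assumed response shape: each candidate carries a logprobs_result that lists the
# chosen token and its log probability for every generation step.
result = response.candidates[0].logprobs_result
for chosen in result.chosen_candidates[:10]:
    print(chosen.token, chosen.log_probability)
```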
### Example

Here is an example that uses parameters to tune a model's response.

### Python

#### Install

```
pip install --upgrade google-genai
```

To learn more, see the
[SDK reference documentation](https://googleapis.github.io/python-genai/).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```python
from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Why is the sky blue?",
    # See the SDK documentation at
    # https://googleapis.github.io/python-genai/genai.html#genai.types.GenerateContentConfig
    config=GenerateContentConfig(
        temperature=0,
        candidate_count=1,
        response_mime_type="application/json",
        top_p=0.95,
        top_k=20,
        seed=5,
        max_output_tokens=500,
        stop_sequences=["STOP!"],
        presence_penalty=0.0,
        frequency_penalty=0.0,
    ),
)
print(response.text)
# Example response:
# {
#   "explanation": "The sky appears blue due to a phenomenon called Rayleigh scattering. When ...
# }
```

### Go

Learn how to install or update the [Go SDK](/vertex-ai/generative-ai/docs/sdks/overview).

To learn more, see the
[SDK reference documentation](https://pkg.go.dev/google.golang.org/genai).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```go
import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithConfig shows how to generate text using a text prompt and custom configuration.
func generateWithConfig(w io.Writer) error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	contents := genai.Text("Why is the sky blue?")
	// See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.types.GenerateContentConfig
	config := &genai.GenerateContentConfig{
		Temperature:      genai.Ptr(float32(0.0)),
		CandidateCount:   int32(1),
		ResponseMIMEType: "application/json",
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()

	fmt.Fprintln(w, respText)
	// Example response:
	// {
	//   "explanation": "The sky is blue due to a phenomenon called Rayleigh scattering ...
	// }

	return nil
}
```
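Because both examples above set the response MIME type to `application/json`, the returned text is a JSON document rather than free-form prose. As an optional follow-up, here is a short sketch of parsing it in Python, assuming the same environment variables as above are set:

```python
import json

from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Why is the sky blue?",
    config=GenerateContentConfig(response_mime_type="application/json"),
)

# The model was asked for JSON, so response.text parses as a JSON value.
# The exact keys aren't guaranteed without a response schema; the example
# responses above happened to include an "explanation" field.
data = json.loads(response.text)
print(type(data), data)
```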
What's next
-----------

- Learn about [responsible AI best practices and Vertex AI's safety filters](/vertex-ai/generative-ai/docs/learn/responsible-ai).
- Learn about [system instructions for safety](/vertex-ai/generative-ai/docs/multimodal/safety-system-instructions).
- Learn about [abuse monitoring](/vertex-ai/generative-ai/docs/learn/abuse-monitoring).
- Learn about [responsible AI](/vertex-ai/generative-ai/docs/learn/responsible-ai).