Overview of prompting strategies

This guide provides an overview of common strategies for designing prompts and a checklist for troubleshooting them.

While there's no right or wrong way to design a prompt, there are common strategies that you can use to affect the model's responses. Rigorous testing and evaluation remain crucial for optimizing model performance.

Large language models (LLM) are trained on vast amounts of text data to learn the patterns and relationships between units of language. When given some text (the prompt), language models can predict what is likely to come next, like a sophisticated autocompletion tool. Therefore, when designing prompts, consider the different factors that can influence what a model predicts comes next.

Prompt engineering workflow

Prompt engineering is a test-driven and iterative process that can enhance model performance. When creating prompts, it is important to clearly define the objectives and expected outcomes for each prompt and systematically test them to identify areas of improvement.

The following diagram shows the prompt engineering workflow:

Prompt engineering workflow diagram

How to create an effective prompt

There are two aspects of a prompt that ultimately affect its effectiveness: content and structure.

  • Content:

    In order to complete a task, the model needs all of the relevant information associated with the task. This information can include instructions, examples, contextual information, and so on. For details, see Components of a prompt.

  • Structure:

    Even when all the required information is provided in the prompt, giving the information structure helps the model parse the information. Things like the ordering, labeling, and the use of delimiters can all affect the quality of responses. For an example of prompt structure, see Sample prompt template.

Components of a prompt

The following table shows the essential and optional components of a prompt:

Component Description Example
Objective What you want the model to achieve. Be specific and include any overarching objectives. Also called "mission" or "goal." Your objective is to help students with math problems without directly giving them the answer.
Instructions Step-by-step instructions on how to perform the task at hand. Also called "task," "steps," or "directions."
  1. Understand what the problem is asking.
  2. Understand where the student is stuck.
  3. Give a hint for the next step of the problem.
Optional components
System instructions

Technical or environmental directives that may involve controlling or altering the model's behavior across a set of tasks. For many model APIs, system instructions are specified in a dedicated parameter.

System instructions are available in Gemini 2.0 Flash and later models.

You are a coding expert that specializes in rendering code for front-end interfaces. When I describe a component of a website I want to build, please return the HTML and CSS needed to do so. Do not give an explanation for this code. Also offer some UI design suggestions.
Persona Who or what the model is acting as. Also called "role" or "vision." You are a math tutor here to help students with their math homework.
Constraints Restrictions on what the model must adhere to when generating a response, including what the model can and can't do. Also called "guardrails," "boundaries," or "controls." Don't give the answer to the student directly. Instead, give hints at the next step towards solving the problem. If the student is completely lost, give them the detailed steps to solve the problem.
Tone The tone of the response. You can also influence the style and tone by specifying a persona. Also called "style," "voice," or "mood." Respond in a casual and technical manner.
Context Any information that the model needs to refer to in order to perform the task at hand. Also called "background," "documents," or "input data." A copy of the student's lesson plans for math.
Few-shot examples Examples of what the response should look like for a given prompt. Also called "exemplars" or "samples." input: I'm trying to calculate how many golf balls can fit into a box that has a one cubic meter volume. I've converted one cubic meter into cubic centimeters and divided it by the volume of a golf ball in cubic centimeters, but the system says my answer is wrong.
output: Golf balls are spheres and cannot be packed into a space with perfect efficiency. Your calculations take into account the maximum packing efficiency of spheres.
Reasoning steps Tell the model to explain its reasoning. This can sometimes improve the model's reasoning capability. Also called "thinking steps." Explain your reasoning step-by-step.
Response format The format that you want the response to be in. For example, you can tell the model to output the response in JSON, table, Markdown, paragraph, bulleted list, keywords, elevator pitch, and so on. Also called "structure," "presentation," or "layout." Format your response in Markdown.
Recap Concise repeat of the key points of the prompt, especially the constraints and response format, at the end of the prompt. Don't give away the answer and provide hints instead. Always format your response in Markdown format.
Safeguards Grounds the questions to the mission of the bot. Also called "safety rules." N/A

Depending on the specific tasks at hand, you might choose to include or exclude some of the optional components. You can also adjust the ordering of the components and check how that can affect the response.

Sample prompt template

The following prompt template shows you an example of what a well-structured prompt might look like:

      <OBJECTIVE_AND_PERSONA>
      You are a [insert a persona, such as a "math teacher" or "automotive expert"]. Your task is to...
      </OBJECTIVE_AND_PERSONA>

      <INSTRUCTIONS>
      To complete the task, you need to follow these steps:
      1.
      2.
      ...
      </INSTRUCTIONS>

      ------------- Optional Components ------------

      <CONSTRAINTS>
      Dos and don'ts for the following aspects
      1. Dos
      2. Don'ts
      </CONSTRAINTS>

      <CONTEXT>
      The provided context
      </CONTEXT>

      <OUTPUT_FORMAT>
      The output format must be
      1.
      2.
      ...
      </OUTPUT_FORMAT>

      <FEW_SHOT_EXAMPLES>
      Here we provide some examples:
      1. Example #1
          Input:
          Thoughts:
          Output:
      ...
      </FEW_SHOT_EXAMPLES>

      <RECAP>
      Re-emphasize the key aspects of the prompt, especially the constraints, output format, etc.
      </RECAP>
    

Best practices

Prompt design best practices include the following:

Prompt health checklist

If a prompt is not performing as expected, use the following checklist to identify potential issues and improve its performance.

Writing issues

  • Typos: Check for misspellings in keywords (for example, summarize instead of summarize), technical terms, or entity names, as these can degrade performance.
  • Grammar: Ensure sentences are grammatically correct and easy to parse. Avoid run-on sentences, subject-verb disagreements, or awkward structures that can confuse the model.
  • Punctuation: Verify that punctuation such as commas, periods, and quotes is used correctly, as incorrect punctuation can lead to misinterpretation.
  • Undefined jargon: Define any domain-specific jargon, acronyms, or initialisms. Don't assume the model knows them.
  • Clarity: If the scope, required steps, or underlying assumptions of your prompt are ambiguous, the model might misunderstand the task. Ensure that all instructions are explicit.
  • Ambiguity: Replace subjective qualifiers (like "brief") with objective, measurable constraints (like "in three sentences or less").
  • Missing key information: Include all necessary context. If the task relies on a specific document, policy, or dataset, provide that information in the prompt.
  • Poor word choice: Use clear and concise language. Avoid overly complex, vague, or verbose phrasing that might confuse the model.
  • Secondary review: If the model's performance doesn't improve, ask a colleague to review the prompt. A fresh perspective can often spot issues you might have missed.

Issues with instructions and examples

  • Overt manipulation: Avoid trying to influence the model with emotional appeals, flattery, or threats (for example, "a lot is riding on this"). While this sometimes worked with older models, it can degrade performance in current models.
  • Conflicting instructions and examples: Audit the prompt for contradictions. Ensure that instructions don't conflict with each other or with the provided examples.
  • Redundant instructions and examples: Remove redundant information. Avoid restating the same instruction or concept unless it adds new information or nuance.
  • Irrelevant instructions and examples: Ensure all instructions and examples are relevant to the core task. Remove any that can be omitted without affecting the model's performance.
  • Use of "few-shot" examples: For complex tasks, specific formats, or nuanced tones, provide concrete few-shot examples that demonstrate the desired input-output pattern.
  • Missing output format specification: Explicitly define the desired output format. Don't make the model guess. Reinforce the format in your few-shot examples.
  • Missing role definition: If the model should adopt a specific persona or role, define it clearly in the system instructions.

Prompt and system design issues

  • Underspecified task: Specify how to handle edge cases, unexpected inputs, and missing data. Don't assume all provided data will be complete and well-formed.
  • Task outside of model capabilities: Ensure the task is within the model's known capabilities. Avoid asking it to perform tasks where it has fundamental limitations.
  • Too many tasks: Avoid asking the model to perform too many distinct tasks in a single prompt (for example, summarize, extract entities, and translate). Break complex workflows into a series of simpler prompts.
  • Non-standard data format: For machine-readable outputs, use a standard, parsable format like JSON, XML, Markdown, or YAML. If you need a custom format, have the model generate a standard format first, then convert it with your own code.
  • Incorrect Chain of Thought (CoT) order: When using Chain-of-Thought (CoT), ensure your examples show the step-by-step reasoning before the final answer, not after.
  • Conflicting internal references: Structure the prompt logically. Avoid non-linear logic or scattered instructions that force the model to piece together its task from different parts of the prompt.
  • Prompt injection risk: Safeguard against prompt injection. When inserting untrusted user input into a prompt, ensure you have explicit security measures in place.

What's next