The Vertex AI PaLM API for text lets you design prompts with flexibility in terms of their structure and format. This page gives you an overview of and general guidance for designing text prompts for the PaLM API for text.
Supported models
text-bison
text-bison-32k
text-unicorn
Prompt structure
The Vertex AI PaLM API for text lets you structure your prompts however you like. You can add contextual information, instructions, examples, questions, lists, and any other types of text content that you can think of.
You can label text content by adding prefixes to the text. A prefix can be a word or a phrase that ends with a colon (:). Examples of prefixes include:
- Text:
- Question:
- Answer:
- Categories:
- Options:
You can use whatever prefixes you want, but you might find that some prefixes work better than others for a given task. You should also ensure that you refer to prefixes consistently within the prompt.
- ⛔ Inconsistent reference: The instruction uses the terms *sentiment* and *tweet*, but the prefixes are `Text:` and `Answer:`.
- ✅ Consistent reference: The prefixes `Text:` and `Sentiment:` match the terms used in the instruction.
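The consistent-prefix pattern can be sketched in plain Python. This is only string assembly (no SDK call); the instruction wording and the example reviews are illustrative placeholders, not output or content from the API:

```python
# Few-shot sentiment prompt where the instruction and the prefixes both
# use the same term ("sentiment"), per the consistency guidance above.
# The example reviews are made-up placeholders.
examples = [
    ("I loved the new camera, the photos are stunning.", "positive"),
    ("The battery died after two hours. Very disappointing.", "negative"),
]

def build_sentiment_prompt(text: str) -> str:
    lines = [
        "Classify the sentiment of the following text as positive or negative.",
        "",
    ]
    for example_text, sentiment in examples:
        lines.append(f"Text: {example_text}")
        lines.append(f"Sentiment: {sentiment}")
        lines.append("")
    # The final, unanswered pair uses the same prefixes as the examples,
    # so the model completes the "Sentiment:" line.
    lines.append(f"Text: {text}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_sentiment_prompt("Shipping was fast and the packaging was great.")
```

Every labeled block uses the same `Text:` / `Sentiment:` prefixes, so the model sees one consistent pattern to continue.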
Common task types
You can create text prompts for handling any number of tasks. Some of the most common tasks are classification, summarization, and extraction. You can learn more about designing text prompts for these common tasks in the following pages:
Classification prompts
The Vertex AI PaLM API can perform classification tasks that assign a class or category to text. You can specify a list of categories to choose from or let the model choose from its own categories. This page shows you how to create prompts that classify text.
Classification use cases
The following are common use cases for text classification:
- Fraud detection: Classify whether transactions in financial data are fraudulent or not.
- Spam filtering: Identify whether an email is spam or not.
- Sentiment analysis: Classify the sentiment conveyed in text as positive or negative. For example, you can classify movie reviews or email as positive or negative.
- Content moderation: Identify and flag content that might be harmful, such as offensive language or phishing.
Best practices for classification prompts
Try setting the temperature to zero and top-K to one. Classification tasks are typically deterministic, so these settings often produce the best results.
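With the Vertex AI SDK for Python, these settings map to the `temperature` and `top_k` arguments of `TextGenerationModel.predict`. A minimal sketch, assuming the `text-bison` model listed above; the helper name is illustrative, and actually calling it requires `google-cloud-aiplatform` and authenticated Google Cloud credentials:

```python
# Deterministic settings for a classification task.
CLASSIFICATION_PARAMS = {
    "temperature": 0.0,   # no sampling randomness
    "top_k": 1,           # always take the single most likely token
    "max_output_tokens": 5,
}

def classify(prompt: str, params: dict = CLASSIFICATION_PARAMS) -> str:
    # Requires `pip install google-cloud-aiplatform` and authenticated
    # Google Cloud credentials. Model name per the supported models above.
    from vertexai.language_models import TextGenerationModel

    model = TextGenerationModel.from_pretrained("text-bison")
    return model.predict(prompt, **params).text
```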
Example classification prompts
Use the following examples to learn how to design classification prompts for various use cases.
Sentiment analysis prompt
Sentiment analysis evaluates text and classifies it as positive or negative. Including sentiment analysis in a prompt is useful when analyzing content such as reviews, feedback, and emails.
The following prompt classifies the sentiment of a review:
To make the model respond with only positive or negative, try adding those terms to the instruction.
You can also get the model to produce a more structured response that includes the sentiment and an explanation of the reason why it selected that sentiment.
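One way to sketch the structured-response pattern: ask for a fixed two-field format, then parse it. The review text is a placeholder, and the response shown is a made-up illustration of the requested format, not actual model output:

```python
# Ask for a structured, two-field answer instead of a bare label.
prompt = (
    "Classify the sentiment of the following review as positive or negative,\n"
    "then explain why, using this format:\n"
    "Sentiment: <positive or negative>\n"
    "Reason: <one sentence>\n\n"
    "Review: The plot was predictable, but the acting kept me engaged.\n"
)

# A hypothetical response in the requested format, parsed into a dict
# for downstream use.
sample_response = (
    "Sentiment: positive\n"
    "Reason: The reviewer says the acting kept them engaged."
)
parsed = dict(
    line.split(": ", 1) for line in sample_response.splitlines() if ": " in line
)
```

A fixed `Field: value` format keeps the explanation machine-readable alongside the label.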
Content classification prompt
The following prompt classifies customer emails based on what's requested in their content.
If the request in the email isn't clear, you might need to send it to customer service to get more information. To do this, add a "customer service" category and instruct the model to apply this category to outliers that require more information.
Another option for handling emails that require more information is to include examples that show the model what to do with outliers that don't fit any other category.
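The fallback-category pattern can be sketched as follows. The category names and the email text are illustrative placeholders, not from the original examples:

```python
# Category list with a "customer service" fallback for unclear requests.
categories = ["billing", "technical support", "returns", "customer service"]

def build_email_prompt(email: str) -> str:
    return (
        "Classify the following email into exactly one of these categories.\n"
        f"Categories: {', '.join(categories)}\n"
        "If the request is unclear or needs more information, choose\n"
        "customer service.\n\n"
        f"Email: {email}\n"
        "Category:"
    )

prompt = build_email_prompt("Hi, something is wrong but I'm not sure what.")
```

The explicit instruction routes ambiguous emails to the fallback category instead of forcing a guess.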
Movie classification prompt
The following prompt classifies movies by who you should watch them with.
You might need to define your own category. For example, suppose you own a pet hotel named the Remi Inn that shows movies for pets. The movie selection criteria could be:
- The main character must be an animal.
- The movie must be happy.
- The movie can't be a cartoon.
The following prompt classifies movies that match the three criteria as Remi-tastic and all others as Bark-fest.
To check whether the model applies the criteria or chooses a classification at random, the following prompt includes instructions to return a reason for its classification.
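The criteria-plus-reason prompt can be sketched like this. The movie title is a placeholder, and the exact instruction wording is an assumption built from the three criteria above:

```python
def build_movie_prompt(movie: str) -> str:
    # Classify against the three Remi Inn criteria and require a reason
    # that cites them, so guesses are easy to spot.
    return (
        "Classify the movie as Remi-tastic if it meets all three criteria,\n"
        "otherwise classify it as Bark-fest:\n"
        "- The main character must be an animal.\n"
        "- The movie must be happy.\n"
        "- The movie can't be a cartoon.\n"
        "Answer using this format:\n"
        "Classification: <Remi-tastic or Bark-fest>\n"
        "Reason: <which criteria were met or missed>\n\n"
        f"Movie: {movie}\n"
    )

prompt = build_movie_prompt("Homeward Bound")
```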
Summarization prompts
The Vertex AI PaLM API can extract a summary of the most important information from text. You can provide information in the prompt to help the model create a summary, or ask the model to create a summary on its own. This page shows you how to design prompts to create different kinds of summaries.
Summarization use cases
The following are common use cases for summarization:
- Summarize text: Summarize text content such as the following:
- News articles.
- Research papers.
- Legal documents.
- Financial documents.
- Technical documents.
- Customer feedback.
- Content generation: Generate content for an article, blog, or product description.
Best practices
Use the following guidelines to create optimal text summaries:
- Specify any characteristics that you want the summary to have.
- For more creative summaries, specify higher temperature, top-K, and top-P values. For more information, learn about the temperature, topK, and topP parameters in Text parameter definitions.
- When you write your prompt, focus on the purpose of the summary and what you want to get out of it.
Example summarization prompts
Use the following examples to learn how to design summarization prompts for various use cases.
Article summarization prompt
The following prompt summarizes the main points of an article:
You might want an abstract of an article that you wrote. The following prompt creates an abstract of an article:
A prompt used to create a title for an article is similar to a prompt that uses a short phrase to summarize an article. The following summarization prompt returns a title for an article.
Chat summarization prompt
The following prompt summarizes a customer support chat log:
Hashtag tokenization summary prompt
Hashtag tokenization is a form of summarization where the model extracts words and phrases from text that are representative of the text as a whole.
The following is an example of a prompt that uses hashtag tokenization:
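A hashtag-tokenization prompt can be sketched as follows. The instruction wording is an assumption, and the response shown is a made-up illustration of the expected shape, not actual model output:

```python
# Ask for a fixed number of hashtags with a simple, checkable format.
prompt = (
    "Generate five hashtags that summarize the following article.\n"
    "Each hashtag must start with '#' and contain no spaces.\n\n"
    "Article: <article text>\n"
    "Hashtags:"
)

# A hypothetical response, split into individual hashtags for downstream use.
sample_response = "#MachineLearning #PromptDesign #Summarization #VertexAI #PaLM"
hashtags = sample_response.split()
```

Constraining the count and format makes the output easy to validate programmatically.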
Extraction prompts
The Vertex AI PaLM API can extract information from text. This page shows you how to design prompts that extract information from text.
Use cases
The following are common use cases for extraction:
- Named entity recognition (NER): Extract named entities from text, including people, places, organizations, and dates.
- Relation extraction: Extract the relationships between entities in text, such as family relationships between people.
- Event extraction: Extract events from text, such as project milestones and product launches.
- Question answering: Extract information from text to answer a question.
Best practices
Try setting the temperature to zero and top-K to one. Extraction tasks are typically deterministic, so these settings often produce the best results. For more information, learn about the temperature and topK parameters in Text parameter definitions.
Examples of extraction tasks
Use the following examples to learn how to design extraction prompts for various use cases.
Use extraction to answer a question
The following prompt includes context and a question. The model searches the context for information that answers the question.
Format extracted text
You can extract information from a text source and organize it into a structured format. The following prompt formats extracted text as a JSON file:
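A sketch of the JSON-extraction pattern: request a specific JSON shape, then parse the reply with `json.loads`. The source text, keys, and the response shown are illustrative placeholders, not actual model output:

```python
import json

# Ask for a JSON object with named keys so the output is machine-readable.
prompt = (
    "Extract the product name and price from the text below and return them\n"
    'as a JSON object with the keys "name" and "price".\n\n'
    "Text: The WidgetPro 3000 is on sale this week for $49.99.\n"
)

# A hypothetical model response in the requested format; json.loads turns
# it into a Python dict for downstream processing.
sample_response = '{"name": "WidgetPro 3000", "price": "$49.99"}'
record = json.loads(sample_response)
```

In practice you would also handle responses that fail to parse, since the model may wrap the JSON in extra text.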
What's next
- Try a quickstart tutorial using Generative AI Studio or the Vertex AI API.
- Learn how to test text prompts.