Responsible AI

Large language models (LLMs) can translate language, summarize text, generate creative writing, generate code, power chatbots and virtual assistants, and complement search engines and recommendation systems. At the same time, because LLMs are an early-stage technology, their evolving capabilities and uses create potential for misapplication, misuse, and unintended or unforeseen consequences. Large language models can generate output that you don't expect, including text that's offensive, insensitive, or factually incorrect.

What's more, the incredible versatility of LLMs is also what makes it difficult to predict exactly what kinds of unintended or unforeseen outputs they might produce. Given these risks and complexities, Vertex AI generative AI APIs are designed with Google's AI Principles in mind. However, it is important for developers to understand and test their models so that they can deploy them safely and responsibly. To aid developers, Vertex AI Studio has built-in content filtering, and our generative AI APIs have safety attribute scoring to help customers test Google's safety filters and define confidence thresholds that are right for their use case and business. Refer to the Safety filters and attributes section to learn more.
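
For example, the following sketch shows one way to read safety attribute scores on a response and apply your own confidence threshold. It assumes the Vertex AI SDK for Python and the text-bison model; the project ID and the 0.5 threshold are placeholder values, not recommendations.

# A minimal sketch, assuming the Vertex AI SDK for Python and the text-bison
# model. PROJECT_ID and the 0.5 threshold are placeholders for your own values.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="PROJECT_ID", location="us-central1")
model = TextGenerationModel.from_pretrained("text-bison")

response = model.predict("Write a short poem about summer.")

if response.is_blocked:
    # Google's built-in safety filters blocked this response.
    print("Response blocked by safety filters.")
else:
    # safety_attributes maps each category name to a confidence score.
    # Apply a threshold of your own on top of the built-in filtering.
    threshold = 0.5
    flagged = {
        category: score
        for category, score in response.safety_attributes.items()
        if score >= threshold
    }
    if flagged:
        print(f"Flagged for review: {flagged}")
    else:
        print(response.text)

A lower threshold flags more responses for human review, and a higher threshold flags fewer; tune the value against test data that is representative of your use case.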

When you integrate our generative APIs into your unique use case and context, you might need to weigh additional responsible AI considerations and limitations. We encourage customers to follow recommended practices for fairness, interpretability, privacy, and security.

Safety filters and attributes

To learn how to use safety filters and attributes for an API, see the following pages:

Citation metadata

Our generative code features are intended to produce original content and not replicate existing content at length. We've designed our systems to limit the chances of this occurring, and continuously improve how these systems function. If these features do directly quote at length from a web page, they cite that page.

Sometimes the same content can be found on multiple web pages and we attempt to point you to a popular source. In the case of citations to code repositories, the citation might also reference an applicable open source license. Complying with any license requirements is your responsibility.

Sample citation metadata

{
  "predictions": [
    {
      "safetyAttributes": {
        "scores": [],
        "categories": [],
        "blocked": false
      },
      "content": "Shall I compare thee to a summer's day?\nThou art more lovely and more temperate.\nRough winds do shake the darling buds of May,\nAnd summer's lease hath all too short a date.\n\nSometime too hot the eye of heaven shines,\nAnd often is his gold complexion dimm'd;\nAnd every fair from fair sometime declines,\nBy chance or nature's changing course, untrimm'd.\n\nBut thy eternal summer shall not fade,\nNor lose possession of that fair thou ow'st,\nNor shall death brag thou wanderest in his shade,\nWhen in eternal lines to time thou grow'st.\n\nSo long as men can breathe or eyes can see,\nSo long lives this and this gives life to thee.",
      "citationMetadata": {
        "citations": [
          {
            "endIndex": 262,
            "publicationDate": "1800",
            "startIndex": 0,
            "title": ""The" Royal Shakespeare"
          },
          {
            "title": "Sabrinae corolla in hortulis regiae scholae Salopiensis contextuerunt tres viri floribus legendis ...",
            "publicationDate": "1801",
            "startIndex": 140,
            "endIndex": 417
          },
          {
            "startIndex": 302,
            "publicationDate": "1800",
            "title": ""The" Royal Shakespeere",
            "endIndex": 429
          },
          {
            "startIndex": 473,
            "publicationDate": "1847",
            "title": "The Poems of William Shakespeare",
            "endIndex": 618
          }
        ]
      }
    }
  ]
}

Metadata description

The following list describes each field of the citation metadata.

startIndex

Index in the response where the citation starts (inclusive). Must be greater than or equal to 0 and less than the value of endIndex.

endIndex

Index in the response where the citation ends (exclusive). Must be greater than startIndex and less than the length of the response.

url

URL associated with this citation. If present, this URL links to the source web page of this citation.

title

Title associated with this citation. If present, it refers to the title of the source of this citation.

license

License associated with this citation. If present, it refers to the automatically detected license of the source of this citation. Possible licenses include open source licenses.

publicationDate

Publication date associated with this citation. If present, it refers to the date at which the source of this citation was published. Possible formats are YYYY, YYYY-MM, and YYYY-MM-DD.
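
As an illustration, the following Python sketch uses the startIndex and endIndex fields to recover each cited span from the generated content. Here, response_json stands in for the parsed JSON body shown in the sample above (for example, the result of json.loads on the REST response).

# A minimal sketch for inspecting citation metadata. response_json stands in
# for the parsed JSON response body shown in the sample above.

def print_citations(prediction: dict) -> None:
    content = prediction["content"]
    citations = prediction.get("citationMetadata", {}).get("citations", [])
    for citation in citations:
        # startIndex is inclusive and endIndex is exclusive, so a plain slice
        # recovers the cited span of the generated text.
        start = citation.get("startIndex", 0)
        end = citation.get("endIndex", len(content))
        source = citation.get("title") or citation.get("url") or "unknown source"
        print(f"[{start}:{end}] cited from {source!r}")
        print(content[start:end])

for prediction in response_json["predictions"]:
    print_citations(prediction)

If a citation includes a license field, review and comply with that license before reusing the quoted content.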

Model limitations

Limitations you can encounter when using generative AI models include (but are not limited to):

  • Edge cases: Edge cases refer to unusual, rare, or exceptional situations that are not well-represented in the training data. These cases can lead to limitations in the performance of the model, such as model overconfidence, misinterpretation of context, or inappropriate outputs.

  • Model hallucinations, grounding, and factuality: Generative AI models can lack factuality in real-world knowledge, physical properties, or accurate understanding. This limitation can lead to model hallucinations: instances where a model generates outputs that are plausible-sounding but factually incorrect, irrelevant, inappropriate, or nonsensical. To reduce this chance, you can ground the models in your specific data. To learn more about grounding in Vertex AI, see Grounding overview; a short grounding sketch also appears after this list.

  • Data quality and tuning: The quality, accuracy, and bias of the prompt or data input into a model can have a significant impact on its performance. If users enter inaccurate or incorrect data or prompts, the model can have suboptimal performance or false model outputs.

  • Bias amplification: Generative AI models can inadvertently amplify existing biases in their training data, leading to outputs that can further reinforce societal prejudices and unequal treatment of certain groups.

  • Language quality: While the models yield impressive multilingual capabilities on the benchmarks that we evaluated against, the majority of our benchmarks (including all of the fairness evaluations) are in English. For more information, see the Google Research blog.

    • Generative AI models can provide inconsistent service quality to different users. For example, text generation might not be as effective for some dialects or language varieties because they are underrepresented in the training data. Performance can be worse for non-English languages or English language varieties with less representation.

  • Fairness benchmarks and subgroups: Google Research's fairness analyses of our generative AI models don't provide an exhaustive account of the various potential risks. For example, we focus on biases along the axes of gender, race, ethnicity, and religion, but perform the analysis only on English-language data and model outputs. For more information, see the Google Research blog.

  • Limited domain expertise: Generative AI models can lack the depth of knowledge required to provide accurate and detailed responses on highly specialized or technical topics, leading to superficial or incorrect information. For specialized, complex use cases, models should be tuned on domain-specific data, and there must be meaningful human supervision in contexts with the potential to materially impact individual rights.

  • Length and structure of inputs and outputs: Generative AI models have a maximum input and output token limit. If the input or output exceeds this limit, our safety classifiers are not applied, which could ultimately lead to poor model performance. While our models are designed to handle a wide range of text formats, their performance can be affected if the input data has an unusual or complex structure.
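
As noted in the hallucinations item above, grounding can reduce the chance of fabricated content. The following sketch is one possible way to request grounded responses with the Vertex AI SDK for Python; the project ID, data store ID, and prompt are placeholders, and grounding support in the SDK might be a Preview feature, so check the current SDK reference for the exact interface.

# A minimal sketch of grounding a text model in a Vertex AI Search data store.
# PROJECT_ID, DATA_STORE_ID, and the prompt are placeholders; verify the
# GroundingSource interface against the current SDK reference.
import vertexai
from vertexai.language_models import GroundingSource, TextGenerationModel

vertexai.init(project="PROJECT_ID", location="us-central1")
model = TextGenerationModel.from_pretrained("text-bison")

grounding_source = GroundingSource.VertexAISearch(
    data_store_id="DATA_STORE_ID", location="global"
)
response = model.predict(
    "Summarize our return policy for damaged items.",
    grounding_source=grounding_source,
)
print(response.text)
print(response.grounding_metadata)  # references into the grounding source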

To use this technology safely and responsibly, it is also important to consider other risks that are specific to your use case, users, and business context, in addition to the built-in technical safeguards.

We recommend taking the following steps:

  1. Assess your application's security risks.
  2. Consider adjustments to mitigate safety risks.
  3. Perform safety testing appropriate to your use case.
  4. Solicit user feedback and monitor content.

Report abuse

You can report suspected abuse of the Service or any generated output that contains inappropriate material or inaccurate information by using the following form: Report suspected abuse on Google Cloud.

Additional resources