What are AI hallucinations?

AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. AI hallucinations can be a problem for AI systems that are used to make important decisions, such as medical diagnoses or financial trading.

How do AI hallucinations occur?

AI models learn to make predictions by finding patterns in their training data. If that training data is incomplete or biased, the model may learn incorrect patterns, which can lead it to make incorrect predictions, or hallucinate.

For example, an AI model that is trained on a dataset of medical images may learn to identify cancer cells. However, if the dataset does not include any images of healthy tissue, the AI model may incorrectly predict that healthy tissue is cancerous. This is an example of an AI hallucination.
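
A minimal sketch of that failure mode, assuming scikit-learn and NumPy are available; the synthetic numeric features stand in for image data, and the near-total absence of "healthy" examples is what drives the wrong predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an imbalanced medical-imaging dataset: almost every
# training example is labeled "cancer" (1), and almost none are "healthy" (0).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))
y_train = np.ones(1000, dtype=int)
y_train[:5] = 0  # only 5 healthy examples in the whole training set

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# New, genuinely healthy samples: the model has effectively learned that
# "almost everything is cancer" and labels them accordingly.
X_healthy = rng.normal(size=(10, 5))
print(model.predict(X_healthy))  # mostly 1s, i.e. healthy tissue flagged as cancerous
```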

Examples of AI hallucinations

AI hallucinations can take many different forms. Some common examples include:

  • Incorrect predictions: An AI model may predict that an event will occur when it is unlikely to happen. For example, a weather model may predict rain for tomorrow when no rain is actually likely.
  • False positives: An AI model may flag something as a threat or problem when it is not. For example, a fraud detection model may flag a legitimate transaction as fraudulent.
  • False negatives: An AI model may fail to identify something that is a real threat or problem. For example, a cancer detection model may fail to identify a cancerous tumor.

How to prevent AI hallucinations

There are a number of things that can be done to prevent AI hallucinations, including:

Limit possible outcomes

When training an AI model, it is important to limit the range of outcomes the model can predict. One way to do this is a technique called "regularization." Regularization penalizes the model for overly complex or extreme fits, for example by penalizing large weights. This helps prevent the model from overfitting the training data and making incorrect predictions.
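
A minimal sketch of L2 regularization, assuming scikit-learn is available; the dataset is synthetic and the regularization strength (the C parameter, where smaller values mean a stronger penalty) is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# L2 regularization: a smaller C applies a stronger penalty to large weights,
# which discourages the model from overfitting the training data.
model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```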

Train your AI with only relevant and specific sources

When training an AI model, it is important to use data that is relevant to the task that the model will be performing. For example, if you are training an AI model to identify cancer, you should use a dataset of medical images. Using data that is not relevant to the task can lead to the AI model making incorrect predictions.
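
A minimal sketch of this kind of data curation in plain Python; the record fields, source names, and labels below are hypothetical, not a real dataset schema:

```python
# Hypothetical records pulled from mixed sources.
records = [
    {"text": "Radiology report ...", "source": "hospital_archive", "label": "malignant"},
    {"text": "Forum post about symptoms", "source": "web_forum", "label": "unknown"},
    {"text": "Pathology report ...", "source": "hospital_archive", "label": "benign"},
]

# Keep only records from vetted, task-relevant sources, and drop unlabeled ones,
# before they ever reach the training pipeline.
TRUSTED_SOURCES = {"hospital_archive"}
training_set = [
    r for r in records
    if r["source"] in TRUSTED_SOURCES and r["label"] != "unknown"
]
print(len(training_set), "records kept for training")
```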

Create a template for your AI to follow

When training an AI model, it is helpful to create a template for the model to follow. This template can help to guide the model in making predictions. For example, if you are training an AI model to write text, you could create a template that includes the following elements:

  • A title
  • An introduction
  • A body
  • A conclusion
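
A minimal sketch of such a template for a text-generation model; the prompt wording and the build_prompt helper are illustrative, not any specific product's API:

```python
# An output template the model is asked to follow. The section names mirror
# the list above; the exact wording is an illustrative assumption.
TEMPLATE = """Write an article with exactly these sections:

Title: <one line>
Introduction: <2-3 sentences>
Body: <3-5 paragraphs>
Conclusion: <2-3 sentences>

Topic: {topic}
"""

def build_prompt(topic: str) -> str:
    """Fill in the template so the model's output is constrained to a known shape."""
    return TEMPLATE.format(topic=topic)

print(build_prompt("how regularization reduces overfitting"))
```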

Tell your AI what you want and don't want

When using an AI model, it is important to tell the model what you want and what you don't want. You can do this by providing feedback. For example, if you are using an AI model to generate text, you can tell it which outputs you like and which you don't, which helps the model learn what you are looking for.
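
A minimal sketch of encoding "what you want" and "what you don't want" as explicit instructions in a prompt; the rules and the build_instruction helper are illustrative assumptions, not a specific model's interface:

```python
# Explicit positive and negative constraints, written out as plain instructions.
wanted = [
    "Use only facts that appear in the provided source text.",
    "Answer in three sentences or fewer.",
]
unwanted = [
    "Do not invent statistics, dates, or quotations.",
    "Do not speculate when the source text is silent.",
]

def build_instruction(question: str, source_text: str) -> str:
    """Assemble a prompt that tells the model what to do and what to avoid."""
    rules = "\n".join(f"- {rule}" for rule in wanted + unwanted)
    return (
        f"Follow these rules:\n{rules}\n\n"
        f"Source:\n{source_text}\n\n"
        f"Question: {question}"
    )

print(build_instruction("What did the report conclude?", "The report concluded that ..."))
```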

How Google Cloud can help prevent hallucinations