What To Know About the AI Hallucination Issue

March 14, 2024 • Devin Partida


Anyone who’s used a generative artificial intelligence (AI) chatbot lately has probably felt amazed by how quickly the tool can generate responses to nearly any question or prompt imaginable. Unfortunately, these tools sometimes hallucinate, producing inaccurate information that sounds correct. Generative AI can produce content based on its training data, but you shouldn’t trust it without question.

Why Do AI Hallucinations Happen?

Generative AI chatbots such as ChatGPT and Google’s Bard are large language models (LLMs). LLMs are undoubtedly impressive, but they don’t understand the content they generate or its context. Instead, large language models use statistical models to predict human communication patterns, relying on that information to produce the words most likely to appear next in a string of text.
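
To make the prediction idea concrete, here is a toy sketch in Python that picks each next word purely by how often it followed the previous word in a tiny made-up training text. This is only an illustration of statistical next-word prediction, not how production LLMs work internally, and the training sentences and names below are invented.

```python
# Toy next-word predictor: choose whichever word most often followed the
# previous word in a tiny "training" text. Real LLMs use vastly larger
# neural networks and longer context, but the core idea is the same:
# they predict likely text, not verified facts.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most statistically likely next word, with no notion of
    whether the result is true or sensible in context."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

prompt = "the"
for _ in range(5):
    prompt += " " + predict_next(prompt.split()[-1])

print(prompt)  # "the cat sat on the cat": fluent-sounding, not fact-checked
```

The output reads fluently, yet nothing in the process checks whether it is true, which is the same gap that lets far more capable models hallucinate.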

Additionally, developers train LLMs on human-generated content. When they built the first version of ChatGPT, most of the training data came from Wikipedia, with another 22% originating from Reddit links. Because humans show bias, the AI tools they create do too.

An AI hallucination also becomes more likely when people try to use chatbots outside the scope of those tools’ capabilities. Developers create safeguards to prevent individuals from getting responses to malicious prompts. However, “jailbreaking” a chatbot is relatively easy, and you don’t have to look far to get instructions for circumventing the guardrails. 

Experts also say hallucination prevention becomes easier when users receive clear information about an AI chatbot’s limitations and data sources. For example, people who ask ChatGPT about recent events will receive responses noting the tool’s training data cutoff. It cannot reliably answer questions about events after that time.

AI Hallucination and Other Concerns

Many company leaders are eager to see how generative AI chatbots could fit into their work and help them serve customers. However, lingering issues hinder that exploration.

Google debuted its Bard chatbot in February 2023, but the tool captured headlines for an unintended reason. During a company demo, the chatbot gave wrong information about discoveries recently made by a space telescope. Bard listed three discoveries, but it didn’t take astronomers long to point out that one was incorrect.

Consider what might happen if a company offers an AI chatbot that occasionally gives wrong answers to customers or employees. Such instances could make people lose trust in the business. Workers may also find these internal chatbots reduce their productivity or mislead them, particularly if they use such corporate tools to make important decisions. 

In December 2023, The New York Times sued Microsoft and ChatGPT creator OpenAI. The suit alleged the companies infringed copyright and perpetuated intellectual property abuses while training generative AI tools. 

Additionally, these products get ongoing training based on users’ inputs, and some interfaces allow people to indicate whether the responses were helpful and accurate. Because prompts can feed back into that training, corporate representatives fear employees could accidentally leak secret or sensitive information while interacting with generative AI chatbots during the workday. Spotify, Samsung and Apple are among the tech companies that have limited or banned workers’ use of these tools.

A worrisome Stanford University study also showed generative AI chatbots provided different answers to medical questions based on patients’ gender, race or socioeconomic status. Such responses could perpetuate life-altering misinformation if not addressed.

How Can People Reduce AI Hallucination Problems?

AI hallucination issues are not impossible to solve. Here are some practical ways to minimize them.

Emphasize Fact-Checking

One of the most effective ways to deal with AI hallucinations is to carefully fact-check the generated content. Search engines such as Google return direct links to sources containing relevant information. Although some AI tools cite sources, they may fabricate those references entirely.

Such was the case when a journalist for The Guardian received an email from someone who used ChatGPT for research. The chatbot cited an article from The Guardian the user could not find on the publication’s website or through search engines. An internal investigation revealed the chatbot completely made up the source. 
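
As a practical aid, here is a minimal Python sketch, using the third-party requests library, that pulls URLs out of a chatbot’s answer and checks whether each one at least resolves. The sample answer and URL below are made up for illustration; a live link still needs a human reader to confirm it supports the claim, but a dead link is an immediate red flag, as with the fabricated Guardian citation.

```python
# Check whether URLs cited in a chatbot's answer actually resolve.
# A reachable page still needs human review; an unreachable one is an
# immediate sign the citation may be fabricated.
import re
import requests  # third-party: pip install requests

def check_cited_urls(chatbot_answer: str) -> dict:
    """Map each URL found in the answer to True (reachable) or False."""
    urls = re.findall(r"https?://\S+", chatbot_answer)
    results = {}
    for raw_url in urls:
        url = raw_url.rstrip(".,)\"'")  # trim trailing punctuation
        try:
            # Some sites reject HEAD requests; a GET fallback could be added.
            response = requests.head(url, allow_redirects=True, timeout=10)
            results[url] = response.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

# Hypothetical chatbot output, for illustration only.
answer = "That claim appears in https://www.theguardian.com/example-article-2023."
for url, reachable in check_cited_urls(answer).items():
    print(("reachable: " if reachable else "NOT FOUND: ") + url)
```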

Prioritize Learning and Use Limitations

Decision-makers at The Guardian used the fabricated source as a learning experience. They created a working group and engineering team tasked with learning more about this type of artificial intelligence. 

Within a couple of months of the incident, the publication also released its principles for generative AI, including a commitment to pursue editorial uses only where they promote the creation and distribution of original work.

One University of Pennsylvania professor requires students to use AI tools such as ChatGPT in the classes he teaches. He believes artificial intelligence is and will remain a part of society, so people must learn to use it well. His students must specify how and why they used AI on each assignment and assume responsibility for any hallucinations in their work.

Minimize Negative Consequences

Another option is to primarily ask these tools questions whose wrong answers are easy to spot or carry no serious consequences.

You might notice your local supermarket has a sale on carrots and want ideas for new ways to use them. A generative AI chatbot could suggest several recipes, and you could probably pick out the ones that won’t taste good or are impossible to make.

Use for Brainstorming

Another comparatively safe way to use AI while reducing hallucination risks is to let chatbots produce suggestions that are easy to confirm or reject, then combine those suggestions with your own critical thinking.

For example, you might ask, “How can I live more frugally this year?” The best approach is to do that only after you’ve already come up with your own list of ideas. The chatbot’s answer can then fill in gaps and encourage you to consider additional possibilities.

Encourage Responsible AI Use

Even though AI researchers frequently discuss the shortcomings mentioned here, many people remain unaware of them, particularly individuals who are older or less familiar with emerging technologies.

Look for opportunities to correct wrong assumptions among people you know while discussing what AI can and cannot do well. Talk about how artificial intelligence is rapidly changing and how researchers are working hard to address the known issues.

Don’t Let an AI Hallucination Intimidate You

AI hallucinations happen frequently enough that you may have already noticed a few inaccuracies while using these newer chatbots. The main thing to remember is not to treat these issues as reasons to avoid generative AI tools. Instead, realize hallucinations are always possible and adopt proactive strategies to recognize them.
