Back to Reality: OpenAI Leads the Charge Against AI Hallucinations
AI “hallucinations” are among the key concerns about AI-generated content. OpenAI is developing a new process to improve the answers provided by its platform, ChatGPT.

Platforms like ChatGPT can be helpful tools for content writers, but only when you understand AI’s limitations. One key concern is when the platform “hallucinates” false answers to a prompt. OpenAI is taking steps to minimize these “AI hallucinations” and make ChatGPT more reliable.
Examples of AI Hallucinations
AI hallucinations occur when AI generates unexpected, fabricated results that are unsupported by real-world data. Sometimes the information provided is obviously false and easy to detect. For example, if you input the prompt, “Compose a birthday greeting for my sister,” ChatGPT might respond, “Merry Christmas, Aunt Kate.”
Another example is when ChatGPT responds with additional information unrelated to the topic. For instance, you might use the prompt, “Tell me about New York,” and ChatGPT might say something like, “New York is a city on the eastern coast of the United States. The average elephant eats 350 pounds of food per day.”
Needless to say, these hallucinations can result in serious content problems. In an extreme example, several attorneys were sanctioned for filing a case in New York Federal Court using case information generated by ChatGPT. Difficulties arose when it was discovered that ChatGPT fabricated the judicial decisions used to support the case, including fictitious quotes and internal citations.
What Causes AI Hallucinations?
ChatGPT uses information and data that already exist, so how could it fabricate information? That seems like something only a human could do, but OpenAI has an answer. ChatGPT is built on a large language model (LLM), which uses machine learning, statistics, and massive language data sets to generate text that mimics human intelligence and speech.
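To make that concrete, here is a minimal, purely illustrative Python sketch of the core idea: the model assigns probabilities to possible next words and samples from them. The vocabulary and probabilities below are invented for this example and are not real model outputs.

```python
import random

# Toy illustration: a language model repeatedly predicts the next word from a
# probability distribution learned during training. The table below is
# hand-written for this sketch -- a real model has billions of learned
# parameters, not a lookup of verified facts.
next_word_probs = {
    "Compose a birthday greeting for my": {"sister": 0.90, "aunt": 0.08, "cousin": 0.02},
}

def predict_next_word(prompt: str) -> str:
    """Sample the next word; low-probability (and possibly wrong) words can still be chosen."""
    probs = next_word_probs[prompt]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("Compose a birthday greeting for my"))
```

Because every word is a probabilistic guess rather than a retrieval of verified facts, a fluent but false continuation is always possible.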
The model analyzes the input text and then predicts the most likely output, one word at a time, using many layers of interconnected neural networks. Researchers have identified several causes of AI hallucinations:
- Poor quality training data
- Bias in previous data generation
- Unclear prompts by the user
Poor data quality occurs when low-quality information is used to train the AI model. It goes back to the old concept of garbage in/garbage out. Even when the data is reliable, some models have been found to have a bias toward certain generic word choices, which influences the information they generate.
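As a simple illustration of garbage in/garbage out, the toy sketch below “trains” on a handful of made-up statements, one of which is wrong, and then confidently repeats whatever it saw most often. The training data and counting logic are fabricated purely for illustration.

```python
from collections import Counter

# Fabricated toy "training data": the model has no way of knowing that the
# repeated statement is a common misconception rather than the truth.
training_data = [
    "The capital of Australia is Sydney",    # wrong, but repeated often online
    "The capital of Australia is Sydney",
    "The capital of Australia is Canberra",  # correct, but outnumbered
]

# "Training" here is just counting which statement appears most often.
statement_counts = Counter(training_data)

def answer(question: str) -> str:
    # The model repeats the most frequent pattern it has seen,
    # regardless of whether that pattern is true.
    return statement_counts.most_common(1)[0][0]

print(answer("What is the capital of Australia?"))
# Prints the more common (and incorrect) statement: garbage in, garbage out.
```

A real LLM is vastly more sophisticated, but the principle holds: it reproduces the patterns in its training data, whether or not those patterns are true.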
What OpenAI Is Doing About It
OpenAI and its competitors consider hallucinations to be a serious concern. The biggest issue is that they could lead to the spread of false information.
The first step in resolving hallucinations is to be able to detect them. This requires fact-checking the output.
Users can ask ChatGPT to self-evaluate and estimate the probability that an answer is right or wrong. Google’s new AI-powered Search takes a different approach, using its Trusted Tester program to gather feedback on the quality of answers; it combines external and internal testing to improve the results it provides. Both of these approaches rely on “output supervision,” which evaluates the accuracy of answers but does not address the process used to generate them.
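As a rough sketch of that self-evaluation idea, you can follow up a ChatGPT answer with a prompt asking the model to rate its own confidence. The example below uses the OpenAI Python SDK; the model name and prompt wording are placeholders rather than an official technique, and the confidence score the model reports is itself generated text that still needs human fact-checking.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Summarize the key rulings in this area of law."
model = "gpt-4o-mini"  # placeholder model name -- substitute whichever model you use

# First, get a draft answer to the question.
draft = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# Then ask the model to evaluate its own answer. The score it reports is
# generated text, not a guarantee, so human verification is still required.
self_check = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": (
            "On a scale of 0-100, how confident are you that your answer above "
            "is factually correct? List any claims that should be verified "
            "against authoritative sources."
        )},
    ],
).choices[0].message.content

print(self_check)
```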
Another technique OpenAI is testing is to reward AI models for each correct step in a chain of reasoning, rather than only for the final answer. This method, known as process supervision, helps prevent the logical mistakes that lead to hallucinations. OpenAI is currently developing this solution using a math test set.
The researchers generate several solutions for each math problem and then choose the solution ranked highest by the reward model. OpenAI is only beginning to use these process supervision techniques, and it does not yet know how well they will work beyond math test sets. It is only a first step toward solving the AI hallucination problem, and it will take time before the process can be applied more broadly.
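The sketch below illustrates the general shape of that best-of-several-solutions idea under process supervision. The `generate_solutions` and `score_step` functions are stand-ins invented for this example; OpenAI has not published its implementation in this form.

```python
from typing import List

# Hypothetical stand-ins for this sketch: a generator that proposes
# step-by-step solutions, and a process reward model that scores each step.
def generate_solutions(problem: str, n: int) -> List[List[str]]:
    """Return n candidate solutions, each as a list of reasoning steps (stubbed)."""
    return [
        [f"candidate {k + 1}, step {i + 1} for {problem!r}" for i in range(3)]
        for k in range(n)
    ]

def score_step(problem: str, step: str) -> float:
    """Process reward model: how plausible is this single reasoning step? (stubbed)"""
    return 0.5  # a real reward model would return a learned score

def best_of_n(problem: str, n: int = 8) -> List[str]:
    """Generate n solutions and keep the one whose reasoning scores highest overall."""
    def solution_score(steps: List[str]) -> float:
        # Every intermediate step is rewarded, not just the final answer.
        return sum(score_step(problem, step) for step in steps) / len(steps)

    candidates = generate_solutions(problem, n)
    return max(candidates, key=solution_score)

print(best_of_n("12 * (7 + 5)"))
```

The key difference from output supervision is that every intermediate step is scored, so a solution with a plausible final answer but flawed reasoning ranks poorly.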
Advice to Content Creators
OpenAI warns that ChatGPT might not always produce accurate answers and encourages users to check the facts for themselves using authoritative sources. AI hallucinations can sound entirely plausible, so even if an answer seems correct, it’s still a good idea to verify every fact yourself.
Remember that ChatGPT was trained on a data set that cuts off in September of 2021. Therefore, it’s vital to fact-check any AI-generated content that references or could be affected by information released in late 2021 or beyond.
ChatGPT can be an excellent tool to cut research time and generate topic ideas, but it cannot replace human writers and editors.
Additional reading:
Take a moment to explore our previous blog post discussing the impact of ChatGPT on the content industry.