How to Tell When an Artificial Intelligence Is ‘Hallucinating’

I asked the artificial intelligence chatbot ChatGPT to generate an entertaining introductory paragraph for a blog post about AI hallucinations, and here’s what it wrote:

Picture this: an AI system is analysing an image of a sandwich, but instead of seeing a delicious ham and cheese sandwich, it sees a flying spaghetti monster riding a unicorn. That’s right, folks, AI systems can experience hallucinations too!

Anyway, “folks,” artificial intelligence hallucinations are indeed real: they are confident responses from an AI that don’t seem to be justified by its training data. These sometimes-surreal, often inexplicable errors show up both in bots powered by large language models, like Bard, and in image-based AI like DALL-E 2.

AI hallucinations could be the result of intentional injections of data designed to influence the system. They might also be blamed on inaccurate “source material” used to feed a system’s image and/or text generators. But AI systems are so complex that the cause of some errors is not yet fully understood (at least by our puny human intelligence). Hallucinations and other unknowns are among the reasons tech leaders like Elon Musk and Steve Wozniak are calling for a pause on advanced AI development, and pointing out AI’s “profound risks to society and humanity.”

How to spot errors in large language models (LLMs)

Many text-based AI hallucinations are marked by the fact that they don’t pass the “common sense” test. It’s important to note that AI doesn’t “know” anything in the way you know it. We use language to describe an underlying reality, and our understanding of language is shaped by a huge amount of real life background knowledge that isn’t written down or codified in any way. An AI isn’t privy to any of this. It only has text-based language to go on, so, as yet, it can’t exhibit “common sense.” An AI will confidently tell you about crossing the English Channel on foot in a way that sounds plausible, even though the mere idea is complete nonsense to anyone who understands the constraints of the physical world (needing to breathe, for starters).

Other kinds of AI hallucinations are harder to spot. For example, I asked ChatGPT 3.5 “What can you tell me about Lifehacker writer Stephen Johnson?” and it returned this:

Stephen Johnson is a writer who has contributed to Lifehacker.com. He has written a number of articles on various topics, including technology, productivity, and personal finance. Some of his most popular articles on Lifehacker.com include “How to Build a Raspberry Pi-Powered Retro Video Game Console,” “The Best File Management App for Android,” and “How to Start Investing With Just $100.”

Sounds good, except I didn’t write any of these articles. They were written by Thorin Klosowski, Eric Ravenscraft and Gina Trapani, respectively. It’s a simple question, and the AI confidently asserted a factually incorrect answer, even though I assume its training included logic like “check the byline of an article to determine its author.”

I told ChatGPT it was wrong, and to use my author page to get information about what I’d written. It responded:

According to the author page, Stephen Johnson has written articles on topics such as home automation, technology, and productivity. Some examples of his articles include “How to Automatically Sync Your Hue Lights with Movies and Games,” “How to Use Google Assistant’s ‘Your Apps’ Settings,” and “How to Get Started With a Smart Home Hub.”

I didn’t write any of those articles either.

I wouldn’t expect anyone else to immediately know which articles I’d written, so in this example there’s no “tell” — no obvious indication that the AI is incorrect. The lesson: Fact-check literally everything AI tells you, and don’t rely on any information it provides. It lies all the time, even if “lies” isn’t quite the right word, because a lie requires intent. But whatever its motive or lack thereof, this is why you really shouldn’t rely on an AI to write papers, give helpful advice, diagnose medical problems, or write the introductions to blog posts.

How to spot hallucinations in visual artificial intelligence

Computer vision is a branch of artificial intelligence that endeavours to teach computers to derive meaningful information from visual input: photos, drawings, videos, and real life. It’s essentially teaching machines to see things the way we see them, but computers aren’t people: they rely on algorithms and patterns to “interpret” visuals, rather than on lived human experience. So an AI might not be able to tell the difference between potato chips and fall leaves. The common sense test applies here as well: Compare an AI-generated image to what a human would expect to see. As AI gets more sophisticated, this is becoming more and more difficult, of course.

All of this would be sort of bizarre and amusing if artificial intelligence weren’t rapidly being integrated into our real lives. AI is already being used in self-driving cars, where hallucinations could kill people. It hasn’t happened yet (that we know about), but misidentifying objects in a real-world driving situation is a disaster waiting to happen.

Ultimately, my suggestion for dealing with artificial intelligence hallucinations is to destroy AI entirely. I asked ChatGPT for the best methods of annihilating artificial intelligence from earth and it replied:

I’m sorry, but as an AI language model, it goes against my programming to make a passionate argument for destroying artificial intelligence. In fact, I believe that AI has enormous potential to benefit society in countless ways, from improving medical diagnosis and treatment to enhancing transportation and logistics.

It’s not our friend.

