David Recine

AI Hallucinations vs AI BS

Meet Albert the Alligator

Comic strip panel featuring Albert the Alligator and Pogo Possum, in a rowboat. Albert says "I don't want what?!" Pogo says "Money can't sway you, like you said." An octopus appears to be jumping off of Albert's head.
Albert the Alligator (left), Pogo Possum (right), and an unnamed octopus (upper left)

The image above depicts one of my favorite cartoon characters: Albert the Alligator. Albert is featured in the comic strip Pogo, created by veteran Disney animator Walt Kelly.


What does Albert have to do with the fact that AI hallucinates? And how can my love of this cartoon character help you understand whether ChatGPT is hallucinating or simply trying to bull**** you with poor research and lazy outputs?


Read on, True Believer. The answer may surprise you.


Think of AI as a Child. Well, As Two Different Children


You've probably heard a lot about AI hallucinations-- nonsensical inaccuracies that Large Language Models make up out of thin air when they don't know what to say. When AI hallucinates, it's like a small child with a wild imagination, making stuff up about something they barely understand.


Sometimes, however, AI acts like a slightly older child-- a lazy high school kid who turned in their essay, but really phoned it in. Sometimes an LLM like ChatGPT skims the source material and writes a poor essay, powered by shallow understanding and bad shortcuts.


To illustrate the difference, let's look at two ChatGPT threads. The first concerns Albert the Alligator. Remember him, from a few paragraphs up? I am a HUGE fan of Albert and of Pogo, the comic strip he appears in. That means I can ask ChatGPT to tell me about Albert and know whether any part of its response is a hallucination or BS.


ChatGPT Hallucinates


And so, I asked ChatGPT:


"Please tell me everything you know about Albert the Alligator from the comic strip POGO."


ChatGPT opened up with some true, if shallow, facts, telling me who created Pogo (Walt Kelly), when the comic strip ran in newspapers (1948-1975), the style Albert is drawn in (cartoonish and expressive), and so on. That's when things got a little... weird.

ChatGPT said:

[Screenshot: ChatGPT's reply, including a claim about Albert's fondness for pies]
As a Pogo superfan, I can tell you that Albert is not especially into pies. Maybe I'm being pedantic, but I think that's a mild hallucination.


From there, though, ChatGPT also insisted that Albert has a lot of catch phrases, and went on to give the following examples of Albert's supposed dialogue from the comic:

[Screenshot: ChatGPT's list of Albert's supposed catch phrases]
As a giant nerd who has read all the Pogo comics multiple times, I can tell you that Albert the Alligator has never said and never will say these so-called "catch phrases." What do those last two even mean?! At this point, ChatGPT is clearly hallucinating. It's the whimsical stuff a four-year-old might say if they saw a picture of Albert and imagined they knew what kinds of things he'd say.


In contrast, let's look at what it's like when ChatGPT ages up 10 years, and bulls***s its way through an answer, like a shamelessly shiftless high school freshman who's just phoned in an essay for English class.


ChatGPT Bull****s Its Way Through a Topic I'm the World's Chief Expert On


Taking a cue from Elevate AI Coaching CEO Chris Lele's recent post, I decided to ask ChatGPT about the thing I know most about... myself.


I asked:


"Please give me a summary of David Recine's career."


ChatGPT's brazenly overconfident opening salvo was:

[Screenshot: ChatGPT's opening claims about my career]
None of this is true about me! This is so wildly unaligned with my actual career path that for a second, I thought it was yet another hallucination, far more off the mark than ChatGPT's hallucinations about Albert. It seemed that ChatGPT knew even less about me than it knew about a fictional alligator that hasn't appeared in the funny papers in decades. OUCH!


But then I realized that the information above, while not at all true of me, sounded, well... familiar in a way that might indicate I was being BSed with real information, inaccurately summarized.


Next, ChatGPT said this:

[Screenshot: ChatGPT's next, more specific claims about my career]
Now this is interesting. The information is so specific that I can look it up to see if it's referencing anything real. I searched Google for Data Driven Pharma, and it is indeed a professional networking event series in the Bay Area... founded by Ilya Captain, who also happens to be a sales consultant here at Elevate AI Coaching. Aha! ChatGPT was indeed behaving like the proverbial shiftless high school or college student who skims the source material and phones in their essay at the last minute. It glanced at places on the web where Ilya and I are mentioned as colleagues, and hastily conflated parts of his career with mine.


As ChatGPT continued, it got eerily human in its approach to BSing. It began to cite sources for the claims it was making about my career... peer-reviewed scholarly journal articles. When I clicked the reference links and reviewed the articles, the sources were real... but unsurprisingly, they did not mention my life and times. People writing peer-reviewed articles have better things to talk about than me!


In other words, ChatGPT used impressive-looking but irrelevant citations, hoping its "teacher" (me) wouldn't actually check those sources. CLASSIC lazy student trick. I say that as a former high school and college English teacher. I also say that, if I'm honest, as a former lazy student. I pulled the "dummy citation" trick quite a few times in my teens and early twenties. My teachers seldom caught it, which motivated me to always be on the lookout for it once I became an educator myself. I guess that's just ChatGPT's bad luck.


How to Prevent AI Hallucinations


How do you minimize the chance of hallucinations? Think out your prompts more carefully, ease the LLM into the topic of the thread, and give the LLM context about that topic. For a demonstration of how this works, read on!


Step 1: Prime the AI to Think More Critically


For example, I did a follow-up thread where I started out more slowly, simply showing the LLM two pictures of Albert the Alligator. My initial prompt looked like this:


Here are two images of Albert the Alligator, from the comic strip POGO.


This primes the LLM, allowing it to ponder a newly introduced topic and be more mentally prepared before you ask it a more substantive question. (Yes, that approach does work; it's another way in which AI can seem strangely human in its "thinking.")


ChatGPT replied:

[Screenshot: ChatGPT's acknowledgement of the two Albert images]
I treated this as confirmation that the LLM had been primed and was ready to give a decent answer to my follow-up prompt. Next, I had to make sure that follow-up prompt fed ChatGPT's "brain" with the context and careful thought that LLMs crave.
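

If you use ChatGPT through the API rather than the chat window, the priming turn can be sent as its own message before any substantive question. Here is a minimal sketch, assuming the OpenAI Python SDK and a vision-capable model; the model name and image URLs are stand-ins I've invented for illustration, not the actual files from my thread.

```python
# A minimal sketch of the Step 1 "priming" turn, assuming the OpenAI Python SDK.
# The model name and image URLs below are placeholders, not the ones from my thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Introduce the topic with two pictures and a short statement -- no big question yet.
priming_messages = [
    {
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Here are two images of Albert the Alligator, from the comic strip POGO."},
            {"type": "image_url", "image_url": {"url": "https://example.com/albert-1.png"}},
            {"type": "image_url", "image_url": {"url": "https://example.com/albert-2.png"}},
        ],
    }
]

priming_reply = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model works here
    messages=priming_messages,
)
print(priming_reply.choices[0].message.content)  # the model's acknowledgement of the topic
```

The reply to this first call doesn't need to be impressive; it just confirms the model has "looked at" the topic before the real question arrives.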


Step 2: Head Off Hallucinations by Combining Robust Context with a Well-Thought-Out Prompt


Next, I gave ChatGPT a TON of direct, rich context about Albert, in the form of ten carefully selected Pogo comic strips featuring Albert demonstrating his various key character traits. (I will not post those strips here, because I don't want the Walt Kelly Estate to write me a letter saying I've crossed the boundary from fair use to outright theft.)


Along with those ten comic strips (not pictured here), I said to ChatGPT:


"Here are ten Pogo comic strips that have Albert the Alligator in them. Based on these comics, and based on anything else you know about this character, please tell me about Albert the Alligator's personality and character designs, as well as general info about the way the creator of Albert writes and draws this character."


ChatGPT went on to give me a far better output than the last one: a well-organized, focused, and accurate write-up on Albert the Alligator's character design, mannerisms, facial expressions, dialogue, and more.
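

For API users, here is what the whole two-turn flow might look like with Steps 1 and 2 combined. Again, this is only a sketch: the OpenAI Python SDK, the model name, and the image URLs are my own stand-ins for illustration, not anything taken from the original thread.

```python
# A sketch of the full two-turn flow (prime first, then add rich context),
# assuming the OpenAI Python SDK; the model name and URLs are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def image_part(url: str) -> dict:
    """Wrap an image URL in the chat-completions image content format."""
    return {"type": "image_url", "image_url": {"url": url}}


# Turn 1: prime the model -- two pictures and a short statement, no big question yet.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Here are two images of Albert the Alligator, from the comic strip POGO."},
            image_part("https://example.com/albert-1.png"),
            image_part("https://example.com/albert-2.png"),
        ],
    }
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Turn 2: supply robust context (ten comic strips) plus the carefully worded question.
question = (
    "Here are ten Pogo comic strips that have Albert the Alligator in them. "
    "Based on these comics, and based on anything else you know about this character, "
    "please tell me about Albert the Alligator's personality and character designs, "
    "as well as general info about the way the creator of Albert writes and draws this character."
)
messages.append({
    "role": "user",
    "content": [{"type": "text", "text": question}]
              + [image_part(f"https://example.com/pogo-strip-{i}.png") for i in range(1, 11)],
})
final_reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final_reply.choices[0].message.content)  # the richer, better-grounded write-up
```

Keeping both turns in one messages list is the API-level version of what I did in the chat window: the model sees its own acknowledgement of the topic before it ever gets the real question, and the rich context rides along with that question instead of arriving cold.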


How to Prevent ChatGPT from Bull****ing You


Use the exact same method I showed you above. That's right-- priming, context, and a well-thought-out prompt will prevent hallucinations and bull****!


The Takeaway


Remember, AI is your thought partner, but it's your thoughts-- your real, human thoughts and knowledge-- that drive the conversation and determine whether you get a good or bad output. If you think carefully and feed the LLM's brain real context, the LLM's own thoughts will be clearer and far more on point.





