AI Does Not Hallucinate, It Confabulates.

Ben Zimmer, the WSJ’s lexicographer, writes: “I asked Bard about ‘argumentative diphthongization,’ a phrase that I just made up. Not only did it produce five paragraphs elucidating this spurious phenomenon, the chatbot told me the term was ‘first coined by the linguist Hans Jakobsen in 1922.’ Needless to say, there has never been a prominent linguist named Hans Jakobsen (although a Danish gymnast with that name did compete in the 1920 Olympics).” Zimmer goes on to note that Google CEO Sundar Pichai, in an interview on CBS’s “60 Minutes” that I also watched, said that all AI models have the problem of hallucination.

In my own trials with AI I had not seen it make up stories out of whole cloth. But as a neurologist I can see that the problem they are describing is not hallucination but confabulation. A person with Korsakoff’s syndrome (beriberi of the brain, usually the result of alcoholism), knowing he lived through yesterday and asked about it, makes up a clever story about what never happened. He reasonably and truthfully fills in gaps in the knowledge he should have had. He is not lying; he is making a truthful guess, and he does not hallucinate. Similarly, a person blinded by Anton’s syndrome, in which the visual part of the cerebral cortex is damaged by a stroke, if asked, “What color is my hair?” will simply make up a response.

What is the issue? In both cases it is anosognosia: not recognizing one’s own deficit. The Korsakoff patient does not know he can’t remember; the Anton’s patient does not know he can’t see. The brain hasn’t gotten the message that there is a problem.

That brings to mind a solution to the universal problem in AI of “hallucination,” actually confabulation. The program is not aware of its own deficit. Likely AI lacks introspection, being too busy looking away from itself. In humans it does not usually help simply to inform a person that he has no memory or cannot see. The Korsakoff patient might not remember that you have told him, and for the Anton’s patient the concept that he is blind has to somehow “sink in,” attain the status of an internal reality, before he stops making things up. In any case, the solution for an AI program ought simply to be to make it aware of its own limits.
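One crude way to picture this idea in code is a wrapper that asks the model how confident it is and abstains, rather than confabulating, when confidence is low. This is only a minimal sketch of the concept: `query_model`, its canned answers, and the threshold are all hypothetical stand-ins, not any real AI system's API, and the stub is included only so the example runs on its own.

```python
# A minimal sketch, assuming a hypothetical query_model() interface: the
# wrapper refuses to answer when the model's self-reported confidence is
# low, instead of letting it confabulate an answer.

def query_model(prompt: str) -> tuple[str, float]:
    """Hypothetical stand-in for a real chat-model call.

    Returns (answer, confidence in [0, 1]). A real system might obtain the
    confidence by asking the model to rate itself, or from token
    log-probabilities; here it is canned for illustration only.
    """
    canned = {
        "What is argumentative diphthongization?":
            ("A process first described by the linguist Hans Jakobsen...", 0.15),
    }
    return canned.get(prompt, ("I don't know.", 0.0))


CONFIDENCE_FLOOR = 0.6  # assumed threshold below which the program admits its limits


def answer_or_abstain(prompt: str) -> str:
    """Answer only when confident; otherwise acknowledge the deficit."""
    answer, confidence = query_model(prompt)
    if confidence < CONFIDENCE_FLOOR:
        # The program, unlike the Korsakoff or Anton's patient, knows
        # there is a gap and says so instead of filling it in.
        return "I am not confident I know the answer to that."
    return answer


print(answer_or_abstain("What is argumentative diphthongization?"))
# -> "I am not confident I know the answer to that."
```

The point of the sketch is the design choice, not the particulars: the system checks its own certainty before speaking, which is precisely the introspective step the confabulating patient, and the confabulating chatbot, lacks.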

