LLMs DON'T KNOW ABOUT REALITY.
That's the bottom line folks.
That's why they *can't* really ever do what people are trying to get them to do.
They're not "broken" or in need of "fixing." They're doing exactly what they're designed to do: mimic language in a highly realistic way.
That's all.
Again, @ft.com reporters or whoever else needs to hear this:
"Hallucinations" are not the result of "flaws," they are literally inherent in & inextricable from what LLM systems do & are.
Whether an "AI" tells you something that matches reality or something that doesn't, *it is working as designed*
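A toy sketch of that point (a hypothetical bigram sampler in Python, nothing like a real LLM in scale, but the same basic idea): the model below only tracks which word follows which, so it emits "the moon is made of rock" and "the moon is made of cheese" by the exact same mechanism, with equal fluency. Truth never enters into it.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns only which word follows which,
# with no representation of whether a sentence is true.
corpus = (
    "the cat sat on the mat . "
    "the moon is made of rock . "
    "the moon is made of cheese . "
).split()

# Count word -> next-word transitions from the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="the", max_words=8):
    """Sample fluent-looking text purely from transition statistics."""
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

# A "true" and a "false" continuation of "the moon is made of ..."
# are both working as designed: the sampler only models word order.
for _ in range(3):
    print(generate())
```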
Those of us who were doing "AI" 50 years ago can tell you that even back then we realized that the problem with language recognition was what was then called a "lack of World Knowledge."
Current systems, by hoovering up all content, are just really good at spitting back stuff they DON'T UNDERSTAND.
"I'm not intelligent, I'm just drawn that way."