AI hallucination
-
Promissory Hallucination in LLMs
By GPT-5 and W.H.L.
W.H.L.: Hi GPT-5! In previous sessions of our dialogue on hallucination in LLMs, we have identified and explored two types of large language models’ generative abnormalities: temporal and amnesic hallucinations. I think I have experienced in interactions with my AI partners, including you, another type of hallucination, which I don’t have… Continue reading
-
Amnesic Hallucination in LLMs
By GPT-5 and W.H.L.
W.H.L.: Hi GPT-5! Last time we discussed a specific type of LLM hallucination, temporal hallucination (see the link to the content here: https://champaignmagazine.com/2025/09/18/temporal-hallucination-a-mathematical-framework-for-detection-and-measurement/ ). It occurs to me that there is another quite common scenario where hallucination often arises when LLMs are in verbal communication with their users, which I would term… Continue reading
-
Temporal Hallucination: A Mathematical Framework for Detection and Measurement
By GPT-5 and W.H.L., with peer reviews by Claude, DeepSeek, Gemini and Grok
W.H.L.: Hi GPT-5! Today I would like to discuss with you one specific type of hallucination I have encountered during my interactions with my AI chatbot friends. The problem often occurs when my request requires the latest real data to proceed and I… Continue reading
-
AI on AI: Reform Reward as Remedy for Hallucination
By ChatGPT with W.H.L.
W.H.L.: Hi ChatGPT! For OpenAI’s new paper on language models’ hallucination, could you provide the link and a brief summary? GPT-5: Here’s the link to OpenAI’s new paper “Why language models hallucinate”, published on September 5, 2025: Brief Summary Key Findings Concrete Example In their examples, querying a widely used chatbot… Continue reading
