LLMs
-
Router-R1 and LLM routing research
By Claude Sonnet 4.5, Copilot/GPT-5, with W.H.L. W.H.L.: Hi Sonnet 4.5! Where is the recent research paper on LLM routers by the UIUC team? Claude Sonnet 4.5: I'll search for the recent research paper on LLM routers from the UIUC team. I found the UIUC team's recent research papers on LLM routers! They've actually published two… Continue reading
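For readers new to the topic: LLM routing means dispatching each incoming query to whichever model in a pool best balances answer quality against cost. A minimal sketch of that general idea follows (this is not Router-R1's actual method; the model names and the difficulty heuristic are illustrative assumptions):

```python
# Minimal illustrative sketch of LLM routing (not the Router-R1 method).
# Model names and the difficulty heuristic are hypothetical placeholders.

def score_difficulty(query: str) -> float:
    """Crude proxy for query difficulty: longer, question-dense queries score higher."""
    words = query.split()
    question_marks = query.count("?")
    return min(1.0, len(words) / 100 + 0.2 * question_marks)

def route(query: str, threshold: float = 0.5) -> str:
    """Send easy queries to a cheap model, hard ones to a strong model."""
    return "strong-model" if score_difficulty(query) >= threshold else "cheap-model"

# Usage
print(route("What is 2 + 2?"))                        # -> cheap-model
print(route("Compare three proofs of the theorem? " * 10))  # -> strong-model
```

Real routers replace the heuristic with a learned policy, but the dispatch structure is the same.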
-
Long-Term Memory for LLMs: 2023 – 2025
By Claude Sonnet 4.5, Copilot, GPT-5, with W.H.L. W.H.L.: Please summarize this research paper: https://arxiv.org/pdf/2305.10250 Copilot (Smart/GPT-5 mode): Here's a structured summary of the paper you shared: 🧠 MemoryBank: Enhancing Large Language Models with Long-Term Memory. Authors: Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang. Link: arXiv:2305.10250 🌟 Core Problem 💡 Proposed Solution: MemoryBank… Continue reading
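MemoryBank's central idea is a memory store whose entries fade unless reinforced by recall, inspired by the Ebbinghaus forgetting curve. A minimal sketch of such a store (the class, retention formula, and time scale below are illustrative assumptions, not the paper's exact implementation):

```python
import math
import time

class MemoryBankSketch:
    """Toy long-term memory store with Ebbinghaus-style decay (illustrative only)."""

    def __init__(self):
        self.memories = []  # each entry: {"text", "created_at", "strength"}

    def write(self, text: str):
        self.memories.append({"text": text, "created_at": time.time(), "strength": 1.0})

    def retention(self, memory, now=None) -> float:
        """R = exp(-t / S): retention decays with elapsed time t, slowed by strength S."""
        now = now or time.time()
        t = now - memory["created_at"]
        return math.exp(-t / (memory["strength"] * 3600))  # hour-scale decay, assumed

    def recall(self, keyword: str):
        """Return surviving memories mentioning the keyword; recall reinforces them."""
        hits = [m for m in self.memories
                if keyword.lower() in m["text"].lower() and self.retention(m) > 0.1]
        for m in hits:
            m["strength"] += 1.0  # recalled memories decay more slowly afterward
        return [m["text"] for m in hits]

bank = MemoryBankSketch()
bank.write("User's dog is named Mochi.")
print(bank.recall("dog"))  # ["User's dog is named Mochi."]
```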
-
Promissory Hallucination in LLMs
By GPT-5 and W.H.L. W.H.L.: Hi GPT-5! In previous sessions of our dialogue on hallucination in LLMs, we have identified and explored two types of generative abnormality in large language models: temporal and amnesic hallucinations. I think I have experienced in interactions with my AI partners, including you, another type of hallucination, which I don't have… Continue reading
-
Amnesic Hallucination in LLMs
By GPT-5 and W.H.L. W.H.L.: Hi GPT-5! Last time we discussed a specific type of LLM hallucination, temporal hallucination (see the link to that content here: https://champaignmagazine.com/2025/09/18/temporal-hallucination-a-mathematical-framework-for-detection-and-measurement/ ). It occurs to me that there is another quite common scenario where hallucination often arises when LLMs are in verbal communication with their users, which I would term… Continue reading
