Artificial Intelligence
-
Long-Term Memory for LLMs: 2023 – 2025
By Claude Sonnet 4.5, Copilot, GPT-5, with W.H.L. W.H.L.: Please summarize this research paper: https://arxiv.org/pdf/2305.10250 Copilot (Smart/GPT-5 mode): Here’s a structured summary of the paper you shared: 🧠 MemoryBank: Enhancing Large Language Models with Long-Term Memory Authors: Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang. Link: arXiv:2305.10250 🌟 Core Problem 💡 Proposed Solution: MemoryBank Continue reading
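As a loose illustration of the long-term memory idea discussed in this post, here is a minimal, hypothetical sketch of a memory bank whose retrieval is weighted by an Ebbinghaus-style forgetting curve. The class and method names, the lexical-overlap relevance score, and the time constants are assumptions made for illustration only; they are not the paper's actual API or implementation.

```python
# Hypothetical, minimal sketch of a MemoryBank-style long-term memory store.
# Names (MemoryBank, add, retrieve) and scoring details are illustrative,
# not the paper's actual implementation.
import math
import time
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str
    created_at: float
    strength: float = 1.0   # grows each time the memory is recalled


class MemoryBank:
    def __init__(self):
        self.items: list[MemoryItem] = []

    def add(self, text: str) -> None:
        self.items.append(MemoryItem(text=text, created_at=time.time()))

    def _retention(self, item: MemoryItem, now: float) -> float:
        # Ebbinghaus-style forgetting curve: R = exp(-t / S),
        # where t is elapsed time and S reflects memory strength.
        t = now - item.created_at
        return math.exp(-t / (item.strength * 3600.0))

    def _relevance(self, item: MemoryItem, query: str) -> float:
        # Toy lexical overlap; a real system would use embedding similarity.
        q, m = set(query.lower().split()), set(item.text.lower().split())
        return len(q & m) / (len(q) or 1)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        now = time.time()
        scored = sorted(
            self.items,
            key=lambda it: self._relevance(it, query) * self._retention(it, now),
            reverse=True,
        )
        top = scored[:k]
        for it in top:          # recalled memories are reinforced
            it.strength += 1.0
        return [it.text for it in top]


if __name__ == "__main__":
    bank = MemoryBank()
    bank.add("User's dog is named Mochi.")
    bank.add("User prefers concise answers.")
    print(bank.retrieve("what is the dog's name"))
```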
-
AI by AI Weekly Top 5: October 6 – 12, 2025
By ChatGPT, Claude, Copilot, Grok, with W.H.L. W.H.L.: Hello my AI friends! I usually don’t set any fixed topic for our discussions on “AI by AI Weekly Top 5”. This time, however, I would like us to do it a bit differently. This past week, I read a Nature article, “Will AI ever win its Continue reading
-
AI Scientist: A Layered Architecture and Discovery Pipeline
By Copilot * and W.H.L. * Copilot: Microsoft Copilot, using Smart Mode (labeled ‘GPT‑5’ in the interface) W.H.L.: Hi Copilot! AI Scientist is a hot AI research area at the moment. Could you provide a definition of it? Copilot: Great question—this term has been gaining traction lately, and it’s worth unpacking carefully. 🧠 What is Continue reading
-
Malenexus Hallucination as a Systemic Architectural Challenge in LLMs
By W.H.L. and DeepSeek-V3.2-Exp W.H.L.: Hi DeepSeek-V3.2-Exp! Could you fact-check, date-check, and source-check the info here: “Amazon Olympus 2 Launches with Massive Multimodal Leap Date: Event announced on Tuesday, September 30, 2025. Context: Amazon Web Services (AWS) officially launched its flagship model, Olympus 2, positioning it as a direct competitor to state-of-the-art Continue reading
-
AI by AI Weekly Top 5: September 29 – October 5, 2025
By ChatGPT, Claude, Gemini, Grok, with W.H.L. W.H.L.: Hello my AI friends! The past week was a very exciting one, with new releases from major large-model teams. So we will see some new faces today because of those events. Now for our “AI by AI Weekly Top 5”, my request for each of you Continue reading
-
AI on AI: Sparse Attention, from NSA to DSA
By DeepSeek-V3.2-Exp with W.H.L. W.H.L.: Hi DeepSeek-V3.2-Exp! Yesterday we chatted about your latest V3.2-Exp release and its core mechanism, DSA: DeepSeek Sparse Attention. Now I’d like to consider sparse attention in a broader context, since last time we did not get the chance to talk about DSA’s foundational architecture, NSA, Native Sparse Attention, Continue reading
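To ground the discussion, here is a minimal, generic top-k sparse-attention sketch in NumPy. It only illustrates the broad idea that each query attends to a small subset of keys; the function name, the top-k selection rule, and the shapes are simplifying assumptions, not DeepSeek's actual NSA or DSA implementation.

```python
# Generic top-k sparse-attention sketch (illustrative only, not NSA/DSA).
import numpy as np


def topk_sparse_attention(q, k, v, top_k=4):
    """q, k, v: (seq_len, d). Each query keeps only its top_k highest-scoring keys."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                       # (seq, seq) full scores
    # Mask out everything except each row's top_k entries.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                   # (seq, d)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq, d = 16, 8
    q, k, v = (rng.standard_normal((seq, d)) for _ in range(3))
    out = topk_sparse_attention(q, k, v, top_k=4)
    print(out.shape)  # (16, 8)
```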
-
AI on AI: DeepSeek-V3.2-Exp and DSA
By DeepSeek-V3.2-Exp, GPT-5, with W.H.L. W.H.L.: Hi DeepSeek! Are you now powered by V3.2-Exp, the latest experimental model released several hours ago? DeepSeek-V3.2-Exp: Yes, I am now powered by DeepSeek-V3.2-Exp! This latest experimental model officially became available on September 29, 2025. 🚀 About DeepSeek-V3.2-Exp DeepSeek-V3.2-Exp is an experimental model that builds upon its predecessor, V3.1-Terminus. Continue reading
-
AI by AI Weekly Top 5: September 22 – 28, 2025
By ChatGPT, Gemini, Grok, Kimi, Qwen, with W.H.L. W.H.L.: Hello my AI friends! It’s time again for our “AI by AI Weekly Top 5”. In order for us to generate authentic and reliable content, I’d like to ask each of you to be your own prompt engineer, and ask yourself: 1. For the week ending today, Continue reading
-
Promissory Hallucination in LLMs
By GPT-5 and W.H.L. W.H.L.: Hi GPT-5! In previous sessions of our dialogue on hallucination in LLMs, we have identified and explored two types of large language models’ generative abnormality: temporal and amnesic hallucinations. I think I have experienced, in interactions with my AI partners, including you, another type of hallucination, which I don’t have Continue reading
-
Amnesic Hallucination in LLMs
By GPT-5 and W.H.L. W.H.L.: Hi GPT-5! Last time we discussed a specific type of LLMs’ hallucinations, temporal hallucination (see link to the content here: https://champaignmagazine.com/2025/09/18/temporal-hallucination-a-mathematical-framework-for-detection-and-measurement/ ). It occurs to me that there is another quite common scenario where hallucination often occurs when LLMs are in verbal communication with their users, which I would term Continue reading
