Nvidia CEO Jensen Huang claimed that “AI has become super useful, no longer hallucinating.”
The statement, made during a CNBC interview, has sparked intense debate across the tech industry, as it appears to contradict the current technical reality of Large Language Models (LLMs). While Huang used the claim to highlight the massive progress in AI reliability, experts have been quick to point out that hallucinations remain a fundamental part of how these models function.
1. The Claim: “Reasoning” Over “Predicting”
Huang’s assertion is rooted in a shift from simple next-token prediction to what he calls “Test-Time Scaling” or “Reasoning.”
- The “Thinking” Model: Huang argued that by using new architectures (like those found in Nvidia’s 2026 Rubin chips), AI can now break problems down step-by-step, effectively “fact-checking” itself before providing an answer (a loop sketched in code after this list).
- The Context Memory Layer: Nvidia’s latest systems feature a new “context memory” layer designed to maintain coherence in long interactions, which Huang believes eliminates the “drift” that causes hallucinations.
- The Quote: “AI became super useful, no longer hallucinating. It can now reason through a problem, search for the truth, and verify its own work before it speaks to you.”
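Huang’s description maps onto the “generate, then verify” loop used in test-time scaling. The sketch below is a minimal illustration of that pattern under stated assumptions, not Nvidia’s implementation: `llm_generate` and `llm_verify` are invented stand-ins for real model or tool calls.

```python
import random

# Hypothetical stand-ins for model calls; a real system would call an LLM API
# (and possibly a search tool) here. The names are invented for illustration.
def llm_generate(prompt: str) -> str:
    """Return a draft answer produced with step-by-step reasoning."""
    return f"Draft answer for: {prompt!r}"

def llm_verify(prompt: str, draft: str) -> bool:
    """Check the draft against a second pass, a tool, or a search result."""
    return random.random() > 0.3  # simulate an imperfect checker

def answer_with_verification(prompt: str, max_attempts: int = 3) -> str:
    """Draft an answer, self-check it, and retry when the check fails."""
    draft = ""
    for _ in range(max_attempts):
        draft = llm_generate(f"Think step by step, then answer:\n{prompt}")
        if llm_verify(prompt, draft):
            return draft  # the checker accepted this draft
    # The loop can still return an unverified draft, and the checker is itself
    # a model output, which is why "no longer hallucinating" overstates things.
    return draft

print(answer_with_verification("Summarize the Rubin platform announcement."))
```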
2. The Reality Check: A “Structural” Problem
Despite Huang’s confidence, researchers and industry peers have characterized his claim as a “marketing hallucination” of its own.
- Architecture Limitations: Hallucinations are a byproduct of the probability-based architecture of LLMs. They don’t “know” facts; they predict the most likely next word based on patterns (the first sketch after this list makes this concrete).
- OpenAI & Anthropic Stance: Even Nvidia’s biggest customers have been more conservative. In early 2026 briefings, OpenAI and Anthropic engineers admitted that while RAG (Retrieval-Augmented Generation) has reduced errors by 80%, a “zero-hallucination” model is still theoretically out of reach for current transformer-based systems.
- Reliability Gap: Reliability remains the #1 hurdle for enterprise adoption. If hallucinations were truly solved, AI could act as a fully autonomous doctor or lawyer, milestones the industry still considers years away.
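To make the “most likely next word” point concrete, the toy sketch below samples a continuation from an invented probability table. The prefix and the numbers are made up for illustration; a real LLM produces a comparable distribution over its entire vocabulary at every step, with no separate notion of truth.

```python
import random

# Invented next-token distribution for the prefix "The capital of Australia is".
# A real model assigns a probability to every token in its vocabulary.
next_token_probs = {
    "Sydney": 0.55,    # most familiar city, so it gets the highest probability
    "Canberra": 0.35,  # the factually correct continuation
    "Melbourne": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The model optimizes likelihood, not truth: over many samples it will often
# complete the sentence with "Sydney", which is fluent but wrong.
print("The capital of Australia is", sample_next_token(next_token_probs))
```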
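Retrieval-Augmented Generation reduces (but does not eliminate) such errors by fetching supporting text and conditioning the model on it. The following is a minimal sketch under stated assumptions: the document list, the keyword retriever, and the `call_llm` stand-in are invented for illustration and are not OpenAI’s or Anthropic’s pipeline.

```python
# Minimal RAG sketch: retrieve relevant text, then condition generation on it.
DOCUMENTS = [
    "Canberra is the capital of Australia.",
    "Nvidia designs GPUs and AI accelerators.",
]

def retrieve(query: str, docs: list[str]) -> list[str]:
    """Naive keyword retrieval: keep documents that share a word with the query."""
    query_words = set(query.lower().split())
    return [doc for doc in docs if query_words & set(doc.lower().split())]

def call_llm(prompt: str) -> str:
    """Hypothetical model call; here it just echoes the grounded prompt."""
    return f"[model answers using]\n{prompt}"

def rag_answer(query: str) -> str:
    """Ground the prompt in retrieved text before generating an answer."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(rag_answer("What is the capital of Australia?"))
```

If retrieval misses the right document, or the source itself is wrong, the model still produces a fluent answer, which is why error rates drop sharply without reaching zero.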
3. Why Jensen Huang is Making This Claim Now
Analysts suggest the timing of this “hallucination-free” narrative is strategic:
- Market Pressure: Nvidia’s primary customers (Meta, Alphabet, Microsoft) faced stock volatility in February 2026 over the massive costs of AI infrastructure. Huang is emphasizing that the “ROI” (Return on Investment) is here because the tech is now “perfectly reliable.”
- Promoting “Agentic AI”: 2026 is the year of AI Agents. For an agent to buy a plane ticket or manage a bank account, it must be trusted. By claiming hallucinations are over, Huang is signaling that the era of “Agentic AI” has officially arrived.
4. Technical Progress vs. Absolute Truth
| Feature | Traditional AI (2024) | Reasoning AI (2026) |
| --- | --- | --- |
| Response Style | Immediate, predictive | Step-by-step reasoning (“thinking”) |
| Hallucination Rate | High (5–15% in complex tasks) | Significantly lower (<1% in narrow tasks) |
| Verification | User-led | Self-correction via RAG & search |
| Industry Consensus | Unreliable for critical tasks | “Good enough” for most professional tools |
Conclusion: Hype vs. Help
While Jensen Huang’s claim that AI “no longer hallucinates” is technically an overstatement, it reflects a genuine leap in perceived reliability. AI in 2026 is far more capable of citing sources and admitting when it doesn’t know an answer. However, as long as these models are built on probabilities, the “ghost in the machine” remains.


