
Hallucination rate not next to nil for frontier reasoning models

April 6, 2026 · Gary Marcus

Gary Marcus disputes a claim that hallucinations are nearly gone, citing data showing a best-case 4.6 percent hallucination rate and arguing that this rate is still too high for high-stakes use.

This ML Prof told me that the hallucination rate for frontier reasoning LLMs is “next to nil”. The data show a best-case rate of 4.6% (which is, of course, benchmark specific). 4.6% is not “next to nil”.
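To see why 4.6% is far from negligible, here is a back-of-envelope sketch. It assumes, purely for illustration, that each response hallucinates independently with probability 0.046 (an idealization not made in the post itself):

```python
# Back-of-envelope arithmetic on a 4.6% per-response hallucination rate.
# Assumption (not from the post): responses hallucinate independently.

rate = 0.046  # best-case hallucination rate from the cited benchmark

# Expected number of hallucinated responses in 1,000 queries.
expected = rate * 1000

# Probability of at least one hallucination across 20 queries.
p_at_least_one = 1 - (1 - rate) ** 20

print(f"expected per 1,000 queries: {expected:.0f}")          # ~46
print(f"P(>=1 hallucination in 20 queries): {p_at_least_one:.2f}")  # ~0.61
```

Under these assumptions, a user asking just twenty questions has better-than-even odds of receiving at least one hallucinated answer, which is the practical sense in which the rate matters for high-stakes use.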
Gary Marcus
Tags: evaluation, reliability, llm, llms

