Category · AI
Hallucination
The LLM produces content that sounds plausible but is factually wrong.
A hallucination can be partial (a wrong number, a wrong date) or total (an invented API, fictional case law). Mitigations: grounded RAG, strict instructions, targeted evals, confidence thresholding. It can never be 100% eliminated from a probabilistic LLM.
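
To make two of these mitigations concrete, here is a minimal sketch combining grounded RAG prompting with confidence thresholding. The `generate_with_logprobs` callable, the retrieved passages, and the 0.85 threshold are assumptions standing in for whatever LLM client, retriever, and tuning a real project would use.

```python
import math
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GenerationResult:
    text: str
    token_logprobs: List[float]  # per-token log-probabilities reported by the model


def mean_confidence(result: GenerationResult) -> float:
    """Average per-token probability: a crude but common confidence proxy."""
    if not result.token_logprobs:
        return 0.0
    return sum(math.exp(lp) for lp in result.token_logprobs) / len(result.token_logprobs)


def answer_with_guardrails(
    question: str,
    retrieved_passages: List[str],
    generate_with_logprobs: Callable[[str], GenerationResult],  # assumed LLM client
    confidence_threshold: float = 0.85,  # assumed value; tune against your evals
) -> str:
    # Grounded RAG: instruct the model to answer only from the retrieved context.
    context = "\n\n".join(retrieved_passages)
    prompt = (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, reply exactly: I don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    result = generate_with_logprobs(prompt)

    # Confidence thresholding: refuse rather than risk a hallucinated answer.
    if mean_confidence(result) < confidence_threshold:
        return "I don't know."
    return result.text
```

The refusal path is deliberate: returning "I don't know" below the threshold trades some coverage for fewer confident-sounding errors, which is usually the right trade for factual queries.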
