Category · AI

Hallucination

The LLM produces content that sounds plausible but is factually wrong.

A hallucination can be partial (a wrong number, a wrong date) or total (an invented API, fictional case law). Mitigations: grounded RAG, strict instructions, targeted evals, confidence thresholding. It can never be 100% eliminated from a probabilistic LLM.

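A minimal Python sketch of two of these mitigations: a grounded RAG prompt that restricts the model to retrieved passages, and a confidence threshold that refuses low-confidence answers. The `call`-free structure, the `LLMAnswer` type, the prompt wording, and the 0.7 threshold are illustrative assumptions, not a specific provider's API.

```python
from dataclasses import dataclass


@dataclass
class LLMAnswer:
    text: str
    confidence: float  # assumed score in [0, 1], e.g. derived from token logprobs


def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Constrain the model to answer only from the retrieved passages (grounded RAG)."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the context below. "
        "Cite the passage numbers you used. "
        "If the context does not contain the answer, reply exactly: I don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def apply_confidence_threshold(answer: LLMAnswer, threshold: float = 0.7) -> str:
    """Refuse to surface low-confidence answers instead of risking a hallucination."""
    if answer.confidence < threshold or answer.text.strip() == "I don't know":
        return "No reliable answer found; escalate to a human or retrieve more context."
    return answer.text


# Usage with a stubbed model response (in production, `answer` would come from the LLM):
prompt = build_grounded_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days of purchase.", "Shipping takes 3 to 5 days."],
)
answer = LLMAnswer(text="Refunds are accepted within 30 days [1].", confidence=0.92)
print(apply_confidence_threshold(answer))
```

The point of the threshold is to trade coverage for reliability: answers the model is unsure about are routed to a fallback (human review, more retrieval) rather than returned as fact.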