What it is
An AI hallucination occurs when a model fills gaps in its knowledge or context with plausible-sounding but invented information instead of saying it does not know.
Why it happens
- Missing or weak context
- Decoding settings that favor confident-sounding guesses, such as a high sampling temperature (a small illustration follows this list)
- Training data that mixes truth with noise
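The decoding point is easiest to see with temperature-scaled sampling. The sketch below converts made-up logits for three candidate tokens into probabilities at several temperatures; the numbers are purely illustrative, but they show how a higher temperature flattens the distribution and makes weakly supported tokens more likely to be sampled.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities; a higher temperature flattens the
    distribution so unlikely (often wrong) tokens are sampled more frequently."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for three candidate tokens: one well supported, two weak guesses.
logits = [4.0, 1.0, 0.5]
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"temperature={t}: {[round(p, 3) for p in probs]}")
```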
How to reduce it
- Ground answers in retrieved sources and require citations
- Lower the sampling temperature and instruct the model to refuse when the sources do not support an answer
- Monitor outputs and add human review where the stakes are high; a combined sketch of these steps follows below
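The three mitigations can be combined in a single request path. The sketch below is a minimal illustration, not a specific vendor API: the generate() stub, the source snippets, the 0.2 temperature, and the needs_review() check are all assumptions you would replace with your own retrieval system, model SDK, and review policy.

```python
# Minimal sketch: grounded prompting with citations, a low temperature,
# an explicit refusal instruction, and a flag for human review.
# generate() is a stand-in for a real model call; the canned reply and
# source snippets are illustrative assumptions.

SOURCES = {
    "S1": "The Eiffel Tower was completed in 1889.",
    "S2": "It stands about 330 metres tall including antennas.",
}

def build_prompt(question: str, sources: dict) -> str:
    """Embed retrieved snippets and require either citations or a refusal."""
    context = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    return (
        "Answer using ONLY the sources below and cite source IDs in brackets.\n"
        "If the sources do not contain the answer, reply exactly: I don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def generate(prompt: str, temperature: float = 0.2) -> str:
    """Stand-in for a real model call; replace with your provider's SDK.
    A low temperature is passed through because it reduces sampling noise."""
    return "The Eiffel Tower was completed in 1889 [S1]."  # canned reply for the sketch

def needs_review(answer: str) -> bool:
    """Flag answers that neither cite a source nor refuse."""
    cited = any(f"[{sid}]" in answer for sid in SOURCES)
    refused = "i don't know" in answer.lower()
    return not (cited or refused)

if __name__ == "__main__":
    prompt = build_prompt("When was the Eiffel Tower completed?", SOURCES)
    answer = generate(prompt, temperature=0.2)
    if needs_review(answer):
        print("Escalate to a human reviewer:", answer)
    else:
        print(answer)
```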
