LLMs and Uncertainty
Ekimetrics, 36 rue Lafayette, Paris
This event is jointly organized by PEPR-IA and Ekimetrics. Registration is free but mandatory before May 8, 2026:

Plenary Speakers
Sébastien Destercke, CNRS Senior Researcher, CID team leader
Title: Uncertainty quantification in machine learning - a gentle introduction
Abstract: In this talk, I will review some of the main challenges associated with uncertainty quantification in standard statistical machine learning, where many questions remain open. I will, in particular, consider the core problems of calibration and uncertainty quantification. I will then conclude with some personal reflections and extrapolations on additional challenges raised by large language models (LLMs), whose usage and underlying framework depart from standard statistical assumptions. I hope this will help open up discussion for the subsequent talks, as LLMs are not within my areas of expertise.
Yedidia Agnimo, Doctoral Researcher at Ekimetrics and INRIA
Title: Measures of uncertainty for hallucination in Large Language Models
Abstract: In this talk, we explore how uncertainty quantification can be considered in the context of large language models, with a particular focus on hallucination detection. We first discuss why hallucination is difficult to define and evaluate, and how different definitions lead to distinct practical objectives. We then review several families of uncertainty estimation methods for language models, highlighting the assumptions they rely on and the signals they use. Finally, we discuss how such methods can be evaluated in benchmark settings, and what this suggests for their practical use in reliability-oriented applications.
Gianni Franchi, Research Director at AMIAD, leader of the Trustworthy AI group
Title: From token-level uncertainty to semantic uncertainty: how can we better assess the reliability of large vision language models?
Abstract: To be updated soon.
Schedule
This schedule will be updated with more details soon.
| Time | Session |
|---|---|
| 14:00 – 15:00 | Sébastien Destercke — Uncertainty quantification in machine learning - a gentle introduction |
| 15:00 – 15:30 | Yedidia Agnimo — Measures of uncertainty for hallucination in Large Language Models |
| 15:30 – 16:00 | Coffee break |
| 16:00 – 17:00 | Gianni Franchi — From token-level uncertainty to semantic uncertainty: assessing the reliability of large vision language models |
| 17:00 | Cocktail |