Interpretability in Artificial Intelligence

12th May, 1:00 PM to 2:30 PM

Cycle webinar

The opacity of some Machine Learning algorithms is a fundamental issue not only for the future development of AI but also for society. Current Machine Learning algorithms process increasingly large amounts of data, which makes their internal functioning difficult to interpret. Yet the ability to account for algorithmic decisions is a fundamental component of building shared responsibility in AI. Many research initiatives, often grouped under the label of explainable AI (XAI), seek to explain and interpret algorithmic decisions using quantitative methods. This transdisciplinary research seminar focuses on these AI interpretation methods and the difficulty of implementing them. The speakers are Timothy Miller (Professor at the School of Computing and Information Systems at the University of Melbourne and co-director of the Centre for AI and Digital Ethics), Doaa Abu Elyounes (researcher at Harvard Law School / Sciences Po Law School), and Jean-Marie John-Mathews (researcher at University Paris-Saclay).

The Good In Tech research webinar cycle is a series of research seminars in which researchers present their recent work on Good In Tech's research themes, namely responsible digital innovation, the development of responsible technologies, and responsible digital governance.