Interpretability in Artificial Intelligence

12 May, from 1:00 PM to 2:30 PM

Online seminar

The opacity of some Machine Learning algorithms is a fundamental issue not only for the future development of AI but also for society. Current Machine Learning algorithms process ever larger amounts of data, which makes their internal functioning difficult to interpret. Yet the ability to account for algorithmic decisions is a fundamental component of building shared responsibility in AI. Many research initiatives, often grouped under the label XAI (explainable AI), seek to explain and interpret algorithmic decisions using quantitative methods. This transdisciplinary research seminar focuses on these AI interpretation methods and the difficulty of implementing them. The speakers are Timothy Miller (Professor at the School of Computing and Information Systems at the University of Melbourne and co-director of the Centre for AI and Digital Ethics), Doaa Abu Elyounes (researcher at Harvard Law School / Sciences Po Law School), and Jean-Marie John-Mathews (PhD candidate at Université Paris-Saclay).

Registration
The Good In Tech research webinar series is a set of research seminars in which researchers present their recent work on Good In Tech's research axes: responsible digital innovation, the development of responsible technologies, and responsible digital governance.

Talks
  • Timothy Miller, Professor at the School of Computing and Information Systems at the University of Melbourne and co-director of the Centre for AI and Digital Ethics: Explainable artificial intelligence: beware the inmates running the asylum
  • Doaa Abu Elyounes, Harvard Law School / Sciences Po Law School: Between Algorithmic Fairness and Algorithmic Explainability
  • Jean-Marie John-Mathews, PhD candidate at Université Paris-Saclay: Is Explainability a solution to address discrimination
