Episode 8: Black… boxes?

22 June, 2021

#podcast #T1E8

In this episode we will talk about the obscure science of “interpretability” and “explainability”, particularly in artificial neural networks, and we will try to unravel several of its mysteries together with an all-star cast. We will try to explain why a neural network, usually regarded as a black box, decides to classify an image as a cat, or as a lung with COVID, and which characteristics most influence that decision. We also interview Dr. Natalia Díaz Rodríguez, an interpretability expert and a newcomer to the DaSCI Institute.

Natalia Díaz Rodríguez graduated from the University of Granada in 2010 and obtained a double PhD from Åbo Akademi University (Finland) and the University of Granada in 2015. She has worked in research at CERN (Switzerland), at Philips Research (The Netherlands), at the University of California, Santa Cruz, and at the robotics laboratory of the Polytechnic Institute of Paris. She has also worked in the Silicon Valley industry, at Stitch Fix (San Francisco, CA). She has extensive experience in the field of artificial intelligence and currently focuses on deep, reinforcement, and unsupervised learning, with an emphasis on explainable AI and AI for social good.

This is SintonIA

Listen to “SintonIA 08 – Cajas… ¿Negras?” on Spreaker.