UGR coordinates a manual on explainable artificial intelligence to improve transparency and trust in AI
5 November, 2024
This resource is essential for sectors such as health, finance and the legal field.
The UGR has coordinated the development of a handbook that seeks to strengthen trust, accountability and fairness in AI applications in sectors such as health, finance and law. This resource provides a way to verify and certify the results of complex models and contributes to the development of AI systems that are not only effective, but also understandable and fair.
In recent years, the use of automated decision support systems, such as Deep Neural Networks (DNNs), has grown significantly. These models stand out for their predictive capabilities, but their opaque nature makes it difficult to interpret their behaviour in detail, which poses ethical and legitimacy risks in high-impact decisions. To address this issue, a recent article published in ACM Computing Surveys, entitled ‘A Practical tutorial on Explainable AI Techniques’, presents a comprehensive manual on explainable artificial intelligence (XAI) techniques.
This resource aims to become an essential guide for computer science professionals seeking to understand and explain the results of Machine Learning models. Each chapter describes XAI techniques applicable in everyday situations, with examples and Python notebooks that can be easily adapted to various specific applications. In addition to providing practical methods, the handbook helps users understand the requirements of each technique and the benefits it can offer, thus promoting the ethical and responsible use of AI.
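To give a flavour of the kind of technique such a tutorial covers, the sketch below shows permutation feature importance, a widely used model-agnostic explainability method: it measures how much a model's accuracy drops when a single feature's values are shuffled. This is a minimal illustrative example using scikit-learn on synthetic data, not code from the tutorial itself.

```python
# Minimal sketch of one model-agnostic XAI technique:
# permutation feature importance (illustrative, not from the tutorial).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 3 informative features and 2 pure-noise features.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, n_redundant=0,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the mean drop in test accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {imp:.3f}")
```

Because the technique only needs predictions from the trained model, it works with any classifier or regressor, which is what makes it a common starting point for explaining otherwise opaque models.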
The coordinator of this guide is Natalia Díaz Rodríguez, professor in the Department of Computer Science and Artificial Intelligence at the UGR and a member of the Andalusian Inter-University Institute in Data Science and Computational Intelligence (DaSCI Institute). Natalia Díaz is also a beneficiary of one of the BBVA Foundation’s Leonardo grants. ‘It is important to be aware of the capabilities and limitations of both advanced AI models and the explanatory techniques that attempt to justify and validate them. Sometimes explanations are not satisfactory or easily validated: for example, they do not match the language spoken by the domain expert in each case, making it difficult to bridge understanding between technical and lay audiences. Wouldn’t it be great to ask ChatGPT to translate for us what a model has actually learned and put it into words? While this is something we aspire to, and much research remains to be done, this tutorial attempts to take a step in this direction by cataloguing the basic techniques for the different types of data that models most frequently ingest.’
The work has been carried out during Professor Díaz’s stay at the Institut Polytechnique de Paris and is an international collaboration with experts from the UK, France and Austria, among other countries.
The Andalusian Inter-University Institute in Data Science and Computational Intelligence is a collaborative entity of the universities of Granada, Jaén and Córdoba. It is dedicated to advanced research and training in artificial intelligence, with a particular focus on data science and computational intelligence. It brings together an outstanding group of researchers working on joint projects and promotes the development and application of innovative technologies across various sectors, with the aim of becoming a leading reference in its field. DaSCI also promotes the transfer of scientific knowledge to the socio-economic environment, thus contributing to technological progress and the digitisation of industry.
Bibliographic reference:
Adrien Bennetot, Ivan Donadello, Ayoub El Qadi El Haouari, Mauro Dragoni, Thomas Frossard, Benedikt Wagner, Anna Sarranti, Silvia Tulli, Maria Trocan, Raja Chatila, Andreas Holzinger, Artur d’Avila Garcez, and Natalia Díaz-Rodríguez. 2024. A Practical tutorial on Explainable AI Techniques. ACM Comput. Surv. Just Accepted (June 2024). https://doi.org/10.1145/3670685
Part of this work was funded by the Leonardo grant for researchers and cultural creators from the BBVA Foundation 2022. https://www.redleonardo.es/beneficiario/natalia-diaz-rodriguez/
Contact:
Natalia Díaz Rodríguez
DaSCI Institute
University of Granada
E-mail: nataliadiaz@ugr.es