DaSCI Webinars
Doctoral Training – Online
DaSCI Seminars
Talks by an outstanding invited researcher presenting recent disruptive advances in AI. Each seminar lasts about an hour and a half: a 45-minute talk followed by around 30 minutes of questions and discussion.
How to do Data Science without writing code
Lecturer: Victoriano Izquierdo (1990) is a software engineer from Granada, co-founder and CEO of Graphext. Graphext develops advanced data analysis software built upon the latest advances in data science and artificial intelligence, helping companies of all sizes address hard problems using data.
Date: 15/02/2021
Abstract: How to do Data Science without writing code
Variational Autoencoders for Audio, Visual and Audio-Visual Learning
Lecturer: Xavier Alameda-Pineda is a (tenured) Research Scientist at Inria, in the Perception Group. He obtained an M.Sc. (equivalent) in Mathematics in 2008 and in Telecommunications in 2009 from BarcelonaTech, and in Computer Science in 2010 from Université Grenoble-Alpes (UGA). He then worked towards his Ph.D. in Mathematics and Computer Science, obtaining it from UGA in 2013. After a two-year post-doc in the Multimodal Human Understanding Group at the University of Trento, he was appointed to his current position. Xavier is an active member of SIGMM, a senior member of IEEE, and a member of ELLIS. He co-chairs the “Audio-visual machine perception and interaction for companion robots” chair of the Multidisciplinary Institute of Artificial Intelligence. Xavier is the coordinator of the H2020 project SPRING: Socially Pertinent Robots in Gerontological Healthcare. His research interests lie in combining machine learning, computer vision and audio processing for scene and behavior analysis and human-robot interaction. More info at xavirema.eu
Date: 01/02/2021
Abstract: Since their introduction, Variational Autoencoders (VAEs) have demonstrated great performance in key unsupervised feature representation applications, particularly for visual and auditory representations. In this seminar, the general methodology of variational autoencoders will be presented, along with applications to learning with audio and visual data. Special emphasis will be placed on the use of VAEs for audio-visual learning, showcasing their value on the task of audio-visual speech enhancement.
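As background for attendees, the two ingredients at the heart of the VAE objective can be sketched in a few lines of plain Python (an illustrative sketch of the standard formulation, not the speaker's code; the function names are ours):

```python
import math

def gaussian_kl(mu, logvar):
    """KL divergence between N(mu, exp(logvar)) and the standard normal,
    summed over latent dimensions: the regulariser in the VAE objective."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv) for m, lv in zip(mu, logvar))

def reparameterize(mu, logvar, eps):
    """Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, 1),
    so gradients can flow through the sampling step."""
    return [m + math.exp(0.5 * lv) * e for m, lv, e in zip(mu, logvar, eps)]

# Training minimises the negative ELBO:
#   loss = reconstruction_error(x, decode(z)) + gaussian_kl(mu, logvar)
```

With a matching audio or visual decoder, this same objective underlies the audio-visual models discussed in the talk.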
Variational Autoencoders for Audio, Visual and Audio-Visual Learning – Recording
Five Sources of Biases and Ethical Issues in NLP, and What to Do about Them
Lecturer: Dirk Hovy is an associate professor of computer science at Bocconi University in Milan, Italy. Before that, he was faculty and a postdoc in Copenhagen, earned a PhD from USC, and holds a master's in linguistics from Germany. He is interested in the interaction between language, society, and machine learning: what language can tell us about society, and what computers can tell us about language. He has authored over 60 articles on these topics, including three best-paper awards. He has organized one conference and several workshops (on abusive language, ethics in NLP, and computational social science). Outside of work, Dirk enjoys cooking, running, and leather-crafting. For updated information, see http://www.dirkhovy.com
Date: 11/01/2021
Abstract: Never before has it been so easy to write a powerful NLP system, and never before has one had such potential impact. However, these systems are now increasingly used in applications they were not intended for, by people who treat them as interchangeable black boxes. The results can range from simple performance drops to systematic biases against various user groups. In this talk, I will discuss several types of biases that affect NLP models (based on Shah et al., 2020 and Hovy & Spruit, 2016), what their sources are, and potential countermeasures.
Five Sources of Biases and Ethical Issues in NLP, and What to Do about Them – Recording
Image and Video Generation using Deep Learning
Lecturer: Stéphane Lathuilière is an associate professor (maître de conférences) at Telecom Paris, France, in the multimedia team. Until October 2019, he was a post-doctoral fellow at the University of Trento (Italy) in the Multimedia and Human Understanding Group, led by Prof. Nicu Sebe and Prof. Elisa Ricci. He received the M.Sc. degree in applied mathematics and computer science from ENSIMAG, Grenoble Institute of Technology (Grenoble INP), France, in 2014. He completed his master's thesis at the International Research Institute MICA (Hanoi, Vietnam). He worked towards his Ph.D. in mathematics and computer science in the Perception Team at Inria under the supervision of Dr. Radu Horaud, and obtained it from Université Grenoble Alpes (France) in 2018. His research interests cover machine learning for computer vision problems (e.g., domain adaptation, continual learning) and deep models for image and video generation. He regularly publishes papers in the most prestigious computer vision conferences (CVPR, ICCV, ECCV, NeurIPS) and top journals (IEEE TPAMI).
Date: 14/12/2020
Abstract: Generating realistic images and videos has countless applications in different areas, ranging from photography technologies to e-commerce business. Recently, deep generative approaches have emerged as effective techniques for generation tasks. In this talk, we will first present the problem of pose-guided person image generation. Specifically, given an image of a person and a target pose, a new image of that person in the target pose is synthesized. We will show that important body-pose changes affect generation quality and that specific feature map deformations lead to better images. Then, we will present our recent framework for video generation. More precisely, our approach generates videos where an object in a source image is animated according to the motion of a driving video. In this task, we employ a motion representation based on keypoints that are learned in a self-supervised fashion. Therefore, our approach can animate any arbitrary object without using annotation or prior information about the specific object to animate.
Aggregating Weak Annotations from Crowds
Lecturer: Edwin Simpson is a lecturer (equivalent to assistant professor) at the University of Bristol, working on interactive natural language processing. His research focuses on learning from small and unreliable data, including user feedback, and adapts Bayesian approaches to topics such as argumentation, summarisation and sequence labelling. Previously, he was a post-doc at TU Darmstadt, Germany, and completed his PhD at the University of Oxford on Bayesian methods for aggregating crowdsourced data.
Date: 09/11/2020
Abstract: Current machine learning methods are data hungry. Crowdsourcing is a common solution for acquiring annotated data at large scale for a modest price. However, the quality of the annotations is highly variable and annotators do not always agree on the correct label for each data point. This talk presents techniques for aggregating crowdsourced annotations using preference learning and classifier combination to estimate gold-standard rankings and labels, which can be used as training data for ML models. We apply approximate Bayesian approaches to handle noise and small amounts of data per annotator, and to provide a basis for active learning. While these techniques are applicable to any kind of data, we demonstrate their effectiveness on natural language processing tasks.
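To ground the aggregation problem, the simplest baseline is majority voting; the Bayesian models in the talk refine it with per-annotator noise models. A minimal pure-Python sketch of the baseline plus one EM-style reliability step, in the spirit of Dawid & Skene (our own illustration, not the speaker's code):

```python
from collections import Counter, defaultdict

def majority_vote(labels):
    """labels: list of (annotator, item, label) triples.
    Returns a consensus label per item by simple majority."""
    votes = defaultdict(Counter)
    for annotator, item, label in labels:
        votes[item][label] += 1
    return {item: counter.most_common(1)[0][0] for item, counter in votes.items()}

def annotator_accuracy(labels, consensus):
    """Score each annotator against the consensus; unreliable annotators can
    then be down-weighted in a second, weighted voting round."""
    hits, totals = Counter(), Counter()
    for annotator, item, label in labels:
        totals[annotator] += 1
        hits[annotator] += (label == consensus[item])
    return {a: hits[a] / totals[a] for a in totals}
```

Iterating these two steps (re-estimating consensus with reliability weights, then re-scoring annotators) is the intuition behind the probabilistic aggregation models covered in the talk.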
Reinforcement Learning
Lecturer: Sergio Guadarrama is a senior software engineer at Google Brain, working on reinforcement learning and neural networks. His research focuses on robust, scalable and efficient reinforcement learning. He is currently the leader of the TF-Agents project and a lead developer of TensorFlow (co-creator of TF-Slim). Before joining Google, he was a researcher at the University of California, Berkeley, where he worked with Professor Lotfi Zadeh and Professor Trevor Darrell. He received his B.A. and Ph.D. from the Universidad Politécnica de Madrid.
Date: 26/10/2020
Abstract: Reinforcement learning (RL) is a type of machine learning where the objective is to learn to solve a task through interaction with the environment, maximizing the expected return. Unlike supervised learning, the solution requires making multiple decisions sequentially, and the reinforcement arrives through rewards. The two main components are the environment, which represents the problem to be solved, and the agent, which represents the learning algorithm.
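The agent-environment loop described above can be written down generically; this toy sketch (our own illustration, unrelated to the TF-Agents API) makes the discounted return the agent maximises explicit:

```python
def run_episode(env_step, policy, init_state, gamma=0.99, max_steps=100):
    """Generic agent-environment interaction loop.

    env_step(state, action) -> (next_state, reward, done) plays the role of
    the environment; policy(state) -> action plays the role of the agent.
    Returns the discounted return sum_t gamma^t * r_t for one episode."""
    state, ret, discount = init_state, 0.0, 1.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env_step(state, action)
        ret += discount * reward
        discount *= gamma
        if done:
            break
    return ret
```

Any concrete task, e.g. a hypothetical corridor where the agent earns a reward for reaching the goal cell, just supplies its own `env_step` and `policy`; an RL algorithm then improves the policy to maximise the expected value of this return.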
DaSCI Lectures
Talks by a DaSCI senior researcher presenting the latest advances in DaSCI's consolidated research lines. Each lecture lasts about an hour and a half: a 45-minute talk followed by around 30 minutes of questions and discussion.
Artificial intelligence, refugees and border security. Ethical implications of technological and political worlds
Lecturer: Ana Valdivia
Date: 08/02/2021
Abstract: Over the last decade, large numbers of people have been on the move due to conflict, instability, the consequences of the climate emergency, and other economic reasons. In Europe, the so-called refugee crisis has become a testing ground for the use of artificial intelligence in law enforcement and border security. Interoperable databases, facial recognition and fingerprint registration, iris data collection, lie detectors, and other forms of data-driven risk assessment now all form part of European border policies for refugees.
In this webinar, we will explore which socio-technical systems are applied today at European borders by analysing their technical specifications. After that, we will discuss the ethical impact and the human rights violations that this situation is causing. It is now necessary that computer scientists and data engineers recognise how technology might perpetuate harms, and collaborate with academics from other disciplines to mitigate discrimination.
Artificial Intelligence, Migrants, and Border Security – Recording
EXplainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Lecturer: Natalia Díaz Rodríguez
Date: 25/01/2021
Abstract: The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. Starting from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
Autoencoders: an Overview and Applications
Lecturer: David Charte
Date: 21/12/2020
Abstract: In this talk, we motivate the need for representation learning techniques, especially those based on artificial neural networks. We arrive at a definition of autoencoders, which is then developed further in a step-by-step example. Next, several applications of autoencoders are described and illustrated with case studies as well as uses in the literature. Lastly, some comments on the current situation and possible future trends are provided.
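The core idea, decoder(encoder(x)) trained to minimise reconstruction error, fits in a few lines; a pure-Python sketch with a hypothetical lossy encoder (our own illustration, not material from the talk):

```python
def reconstruction_error(x, x_hat):
    """Mean squared error between input and reconstruction: the quantity an
    autoencoder is trained to minimise."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def autoencode(x, encoder, decoder):
    """An autoencoder is just decoder(encoder(x)); the bottleneck code
    returned by the encoder is the learned representation."""
    code = encoder(x)
    return code, decoder(code)

# Hypothetical toy pair: project onto the first two components, pad back.
# Real autoencoders learn both maps as neural networks from data.
keep_two = lambda x: x[:2]
pad_zero = lambda c: c + [0.0]
```

When the data actually lies near a low-dimensional structure (as in this toy case, where the third component is always zero), the bottleneck loses nothing and the reconstruction error is zero; applications such as denoising or anomaly detection exploit exactly this gap between reconstructable and non-reconstructable inputs.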
AI Ethics: encompassing the facets of FATE. Fairness, transparency and auditability
Lecturer: José Daniel Pascual Triana
Date: 23/11/2020
Abstract: Artificial Intelligence Ethics is the field that strives to apply the ethical and moral principles of humans to the development and operation of machine learning. This includes, amongst other topics, bias reduction to enforce parity, transparency and model auditing.
Due to the sheer amount of data that is currently generated and used, as well as the population's increased awareness and legislation evolving to keep up with the times, the relevance of AI Ethics keeps rising as a means to maintain trust in the analysis and treatment of data. In this seminar, a taxonomy of AI Ethics will be presented, current techniques to promote it will be shown, and the usefulness of several tools for data and model treatment will be discussed.
AI Ethics: encompassing the facets of FATE-Recording (in Spanish)
DaSCI Readings
Short talks by DaSCI PhD students presenting recent results on the different DaSCI research lines. Two presentations per day; each lasts approximately 30 minutes, followed by 15 minutes for questions.
StyleGAN: Background and evolution
Lecturer: Guillermo Gómez Trenado
Date: 22/02/2021
Abstract: The work developed by Tero Karras and his team at Nvidia has defined the state of the art in GANs for image generation since 2017. In this DaSCI reading we’ll use these results to discuss different aspects of GANs: the iterative process by which the authors detected and corrected the limitations of their work, the technological solutions that enabled such results, and the difficulties we may face when tackling related tasks.
Action Recognition for Anomaly Detection using Transfer Learning and Weak Supervision
Lecturer: Francisco Luque
Date: 18/01/2021
Abstract: Automatic video surveillance is an emerging research area, with a huge number of publications appearing every day. In particular, action anomaly detection is a highly relevant task nowadays. The mainstream deep learning approach to the problem consists in transfer learning from action recognition and weakly supervised fine-tuning for anomaly detection. The objective of the current study is to identify the key aspects of these approaches and assess the importance of each decision on the training process. To this end, we propose a specific pipeline, where a model is defined by three key aspects: the action recognition model, the pretraining dataset and the weakly supervised fine-tuning policy. Furthermore, we perform extensive experiments to validate the impact of each of these aspects on the final solution.
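Weakly supervised fine-tuning of this kind is commonly formulated as multiple-instance ranking (in the spirit of Sultani et al., 2018): with only video-level labels, the highest-scoring segment of an anomalous video is pushed above the highest-scoring segment of a normal one. A sketch under that assumption (our illustration; the study's exact fine-tuning policy may differ):

```python
def mil_ranking_loss(anomalous_scores, normal_scores, margin=1.0):
    """Hinge ranking loss over per-segment anomaly scores of one anomalous
    and one normal video. Only video-level labels are used (weak
    supervision); the max picks the most suspicious segment in each bag."""
    return max(0.0, margin - max(anomalous_scores) + max(normal_scores))
```

During fine-tuning, the pretrained action recognition features feed a small scoring head, and this loss is minimised over pairs of anomalous and normal videos.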
Fuzzy Monitoring of In-bed Postural Changes for the Prevention of Pressure Ulcers using Inertial Sensors Attached to Clothing
Lecturer: Edna Rocío Bernal Monroy
Date: 18/01/2021
Abstract: Postural changes while maintaining a correct body position are the most efficient method of preventing pressure ulcers. However, executing a protocol of postural changes over a long period of time is an arduous task for caregivers. To address this problem, we propose a fuzzy monitoring system for postural changes which recognizes in-bed postures by means of micro inertial sensors attached to patients’ clothes. First, we integrate a data-driven model to classify in-bed postures from the micro inertial sensors which are located in the socks and t-shirt of the patient. Second, a knowledge-based fuzzy model computes the priority of postural changes for body zones based on expert-defined protocols. Results show encouraging performance in the classification of in-bed postures and high adaptability of the knowledge-based fuzzy approach.
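The knowledge-based half of such a system is typically built from fuzzy membership functions over quantities like the time a body zone has been under load; a minimal sketch with made-up thresholds (illustrative only, not the authors' expert-defined protocol):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def change_priority(hours_in_posture):
    """Fuzzy priority of repositioning a body zone, rising with the time the
    zone has been loaded. The 2 h - 4 h ramp here is a hypothetical
    threshold, not the protocol from the paper."""
    return min(1.0, max(0.0, (hours_in_posture - 2.0) / 2.0))
```

The classifier's posture estimate selects which body zones are loaded; the fuzzy rules then rank those zones so caregivers attend the most urgent change first.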
COVID-19 study based on chest X-rays of patients
Lecturer: Anabel Gómez
Date: 06/11/2020
Abstract: COVID-19 has become one of the most widespread infectious diseases of the 21st century. Given the importance of its early detection, new ways to detect it are emerging. In this study, we focus on its detection from chest X-rays, pointing out the main problems of the data sets most used for this purpose. We propose a new data set and a new methodology that allows us to detect cases of COVID-19 with an accuracy of 76.18%, which is higher than the accuracies obtained by experts.
Image inpainting using non-adversarial networks. Towards a deeper semantic understanding of images
Lecturer: Guillermo Gómez
Date: 06/11/2020
Abstract: In this study we explore the problem of image inpainting from a non-adversarial perspective. Can we use general generative models to solve problems other than those for which they were trained? Do models acquire a deeper, transferable knowledge about the nature of the images they generate? We propose a novel methodology for the image inpainting problem that uses the implicit knowledge acquired by non-adversarial generative models.
Sentiment Analysis based Multi-person Multi-criteria Decision Making (SA-MpMcDM) Methodology
Lecturer: Cristina Zuheros
Date: 30/11/2020
Abstract: Traditional decision making models are limited by pre-defined numerical and linguistic terms. We present the SA-MpMcDM methodology, which allows experts to evaluate through unrestricted natural language and even through numerical ratings. We propose a deep learning model to extract the expert knowledge from the evaluations. We evaluate the methodology in a real case study, which we collect into the TripR-2020 dataset.
MonuMAI: Architectural information extraction of monuments through Deep Learning techniques
Lecturer: Alberto Castillo
Date: 30/11/2020
Abstract: An important part of art history can be discovered through the visual information in monument facades. However, analysing this visual information, i.e., morphology and architectural elements, requires deep expert knowledge. An automatic system for identifying the architectural style or detecting the architectural elements of a monument from a single image will certainly help improve our knowledge of art and history.
The aim of this seminar is to introduce the MonuMAI (Monument with Mathematics and Artificial Intelligence) framework published in the related work [1]. In particular, we designed the MonuMAI dataset following the proposed architectural styles taxonomy, developed the MonuMAI deep learning pipeline, and built the citizen-science-based MonuMAI mobile app, which uses the proposed deep learning pipeline and dataset to perform under real-life conditions.
[1] Lamas, A., Tabik, S., Cruz, P., Montes, R., Martínez-Sevilla, Á., Cruz, T., & Herrera, F. (2020). MonuMAI: Dataset, deep learning pipeline and citizen science based app for monumental heritage taxonomy and classification. Neurocomputing. doi.org/10.1016/j.neucom.2020.09.041