These seminars are talks by an outstanding invited researcher who presents recent disruptive advances in AI. Each seminar lasts about 1 hour and 15 minutes (45 min. talk + 30 min. for questions).
DaSCI Seminars 2022
DaSCI Seminars 2021
If all you have is a hammer, everything looks like a nail
Abstract: In this talk, I’ll focus on some recent advances in privacy-preserving NLP. In particular, we will look at the differential privacy paradigm and its applications in NLP, namely by using differentially-private training of neural networks. Although the training framework is very general, does it really fit everything we typically do in NLP?
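The differentially-private training mentioned above usually follows the DP-SGD recipe: clip each per-example gradient to a fixed L2 norm, then add calibrated Gaussian noise before the parameter update. As a rough illustration (not code from the talk; the toy linear model, clipping bound, and noise multiplier are all illustrative choices), one such step might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step on a least-squares loss: per-example gradients
    are clipped to L2 norm `clip`, summed, and Gaussian noise scaled
    by `noise_mult * clip` is added before averaging."""
    grads = []
    for xi, yi in zip(X, y):
        g = 2 * (w @ xi - yi) * xi           # per-example gradient
        norm = np.linalg.norm(g)
        g = g / max(1.0, norm / clip)        # clip to L2 norm <= clip
        grads.append(g)
    total = np.sum(grads, axis=0)
    total += rng.normal(0.0, noise_mult * clip, size=w.shape)  # add noise
    return w - lr * total / len(X)

# Toy regression problem: recover w_true from noiseless data
X = rng.normal(size=(32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
```

The clipping bounds each example's influence on the update, which is what makes the formal privacy accounting possible; the noise then masks any single example's contribution.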
Speaker: Dr. Habernal leads the independent research group “Trustworthy Human Language Technologies” at the Department of Computer Science, Technical University of Darmstadt, Germany. His current research areas include privacy-preserving NLP, legal argument mining, and explainable and trustworthy models. His research track spans argument mining and computational argumentation, crowdsourcing, and serious games, among other topics. More info at www.trusthlt.org.
Graph Mining with Graph Neural Networks
Lecturer: Bryan Perozzi is a Research Scientist in Google Research’s Algorithms and Optimization group, where he routinely analyzes some of the world’s largest (and perhaps most interesting) graphs. Bryan’s research focuses on developing techniques for learning expressive representations of relational data with neural networks. These scalable algorithms are useful for prediction tasks (classification/regression), pattern discovery, and anomaly detection in large networked data sets. Bryan is an author of 30+ peer-reviewed papers at leading conferences in machine learning and data mining (such as NeurIPS, ICML, KDD, and WWW). His doctoral work on learning network representations was awarded the prestigious SIGKDD Dissertation Award. Bryan received his Ph.D. in Computer Science from Stony Brook University in 2016, and his M.S. from the Johns Hopkins University in 2011.
Abstract: How can neural networks best model data which doesn’t have a fixed structure? In this talk, I will discuss graph neural networks (GNNs), a very active area of current research in machine learning aimed at answering this interesting (and practical) question. After reviewing the basics of GNNs, I’ll discuss some challenges in applying these methods in industry, and some of the methods we’ve developed for addressing these challenges.
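The basic GNN idea the abstract refers to is message passing: each node aggregates its neighbours' features and transforms the result. A minimal mean-aggregation layer in numpy (a generic sketch, not the speaker's method; the toy graph and weights are made up):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing layer: each node averages its neighbours'
    features (plus its own, via a self-loop), then applies a linear
    map W and a ReLU non-linearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    msgs = (A_hat @ H) / deg                  # mean aggregation
    return np.maximum(msgs @ W, 0.0)          # ReLU

# Toy graph: 4 nodes in a path 0-1-2-3, one-hot node features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))
H1 = gnn_layer(A, H, W)                       # new (4, 2) node embeddings
```

Stacking such layers lets information propagate k hops in k layers, which is what makes the learned representations useful for classification, pattern discovery, and anomaly detection on graphs.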
Detecting the “Fake News” Before It Was Even Written, Media Literacy, and Flattening the Curve of the COVID-19 Infodemic
Lecturer: Dr. Preslav Nakov is a Principal Scientist at the Qatar Computing Research Institute (QCRI), HBKU, where he leads the Tanbih mega-project (developed in collaboration with MIT), which aims to limit the effect of “fake news”, propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. Dr. Nakov is President of ACL SIGLEX, Secretary of ACL SIGSLAV, and a member of the EACL advisory board. He is also a member of the editorial board of a number of journals, including Computational Linguistics, TACL, CS&L, NLE, AI Communications, and Frontiers in AI. He authored a Morgan & Claypool book on Semantic Relations between Nominals and two books on computer algorithms. He has published 250+ research papers and was named among the top 2% of the world’s most-cited researchers in the career achievement category, part of a global list compiled by Stanford University. He received a Best Long Paper Award at CIKM’2020, a Best Demo Paper Award (Honorable Mention) at ACL’2020, a Best Task Paper Award (Honorable Mention) at SemEval’2020, a Best Poster Award at SocInfo’2019, and the Young Researcher Award at RANLP’2011. He was also the first to receive the Bulgarian President’s John Atanasoff award, named after the inventor of the first automatic electronic digital computer. Dr. Nakov’s research has been featured by over 100 news outlets, including Forbes, Boston Globe, Aljazeera, DefenseOne, Business Insider, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget, among others.
Abstract: Given the recent proliferation of disinformation online, there has been growing research interest in automatically debunking rumors, false claims, and “fake news”. A number of fact-checking initiatives have been launched so far, both manual and automatic, but the whole enterprise remains in a state of crisis: by the time a claim is finally fact-checked, it may already have reached millions of users, and the harm caused can hardly be undone.
An arguably more promising direction is to focus on analyzing entire news outlets, which can be done in advance; then, we can fact-check the news before it is even written, by checking how trustworthy the outlet that publishes it is (which is what journalists actually do). We will show how we do this in the Tanbih news aggregator (http://www.tanbih.org/), which aims to limit the impact of “fake news”, propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking, arguably the best way to address disinformation in the long run. In particular, we develop media profiles that show the general factuality of reporting, the degree of propagandistic content, hyper-partisanship, leading political ideology, general frame of reporting, stance with respect to various claims and topics, as well as audience reach and audience bias in social media.
Another important observation is that the term “fake news” misleads people into focusing exclusively on factuality and ignoring the other half of the problem: the potential malicious intent. Thus, we detect the use of specific propaganda techniques in text, e.g., appeal to emotions, fear, prejudices, logical fallacies, etc. We will show how we do this in the Prta system (https://www.tanbih.org/prta), another media literacy tool, which received the Best Demo Paper Award (Honorable Mention) at ACL-2020; an associated shared task received the Best Task Paper Award (Honorable Mention) at SemEval-2020.
Finally, with COVID-19, the problem of online disinformation was elevated to a whole new level as the first global infodemic. While fighting this infodemic is typically thought of in terms of factuality, the problem is much broader, as malicious content includes not only “fake news”, rumors, and conspiracy theories, but also the promotion of fake cures, panic, racism, xenophobia, and mistrust in the authorities, among others. Thus, we argue for the need for a holistic approach combining the perspectives of journalists, fact-checkers, policymakers, social media platforms, and society as a whole, and we present our recent research in that direction (https://mt.qcri.org/covid19disinformationdetector/).
Efficient Deep Learning
Lecturer: Marco Pedersoli is Assistant Professor at ETS Montreal. He obtained his PhD in computer science in 2012 at the Autonomous University of Barcelona and the Computer Vision Center of Barcelona. He was then a postdoctoral fellow in computer vision and machine learning at KU Leuven with Prof. Tuytelaars, and later at INRIA Grenoble with Drs. Verbeek and Schmid. At ETS Montreal he is a member of LIVIA and co-chairs an industrial Chair on Embedded Neural Networks for Connected Building Control. His research is mostly applied to visual recognition: the automatic interpretation and understanding of images and videos. His specific focus is on reducing the complexity and the amount of annotation required for deep learning algorithms such as convolutional and recurrent neural networks. Prof. Pedersoli has authored more than 40 publications in top-tier international conferences and journals in computer vision and machine learning.
Abstract: In the last 10 years, deep learning (DL) models have shown great progress in many different fields, from Computer Vision to Natural Language Processing. However, DL methods require great computational resources (e.g., GPUs or TPUs) and very large datasets, which also makes the training phase very long and painful. Thus, there is a strong need for reducing the computational cost of DL methods both in training and in deployment. In this talk, I will present the most common families of approaches used to reduce the requirements of DL methods in terms of memory and computation, for both training and deployment, and show that a reduction of the model footprint does not always produce a corresponding speed-up. Finally, I will present some recent results suggesting that large DL models are important mostly for facilitating training, and that once training is finished, we can deploy a much smaller and faster model with almost no loss in accuracy.
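One standard way to obtain the "train large, deploy small" behaviour described above is knowledge distillation: train the small model to match the large model's temperature-softened output distribution rather than only the hard labels. A hedged numpy sketch of the loss (illustrative only; the logits and temperature are toy values, and this is not claimed to be the speaker's specific method):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's and student's softened
    distributions: the student learns to mimic the big model's soft
    predictions. The T*T factor keeps gradient scale comparable."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -np.sum(p_t * np.log(p_s + 1e-12), axis=-1).mean() * T * T

teacher = np.array([[4.0, 1.0, 0.1]])   # large model's logits
student = np.array([[3.5, 1.2, 0.2]])   # small model's logits
loss = distillation_loss(student, teacher)
```

The loss is minimized when the student reproduces the teacher's softened distribution, which transfers the teacher's "dark knowledge" about relative class similarities to the compact deployed model.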
Variational Autoencoders for Audio, Visual and Audio-Visual Learning
Lecturer: Xavier Alameda-Pineda is a (tenured) Research Scientist at Inria, in the Perception Group. He obtained M.Sc. degrees (or equivalent) in Mathematics (2008) and Telecommunications (2009) from BarcelonaTech, and in Computer Science (2010) from Université Grenoble-Alpes (UGA). He then worked towards his Ph.D. in Mathematics and Computer Science, obtaining it in 2013 from UGA. After a two-year postdoc at the Multimodal Human Understanding Group at the University of Trento, he was appointed to his current position. Xavier is an active member of SIGMM, a senior member of IEEE, and a member of ELLIS. He co-chairs the “Audio-visual machine perception and interaction for companion robots” chair of the Multidisciplinary Institute of Artificial Intelligence, and is the Coordinator of the H2020 Project SPRING: Socially Pertinent Robots in Gerontological Healthcare. Xavier’s research interests are in combining machine learning, computer vision and audio processing for scene and behavior analysis and human-robot interaction. More info at xavirema.eu
Abstract: Since their introduction, Variational Autoencoders (VAEs) have demonstrated great performance in key unsupervised feature representation applications, specifically in visual and auditory representation. In this seminar, the general methodology of variational autoencoders will be presented, along with applications in learning with audio and visual data. Special emphasis will be put on the use of VAEs for audio-visual learning, showcasing their value on the task of audio-visual speech enhancement.
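For reference, the two ingredients that make a VAE trainable are the reparameterization trick and the negative-ELBO objective (reconstruction error plus a KL term pulling the approximate posterior toward the prior). A minimal numpy sketch, purely illustrative and not code from the talk:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO for a Gaussian VAE: squared-error reconstruction
    plus the closed-form KL divergence between the approximate
    posterior N(mu, diag(exp(log_var))) and the N(0, I) prior."""
    recon = np.sum((x - x_recon) ** 2)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + kl

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps keeps sampling differentiable in mu, sigma."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu, log_var = np.zeros(2), np.zeros(2)   # encoder output for one input
z = reparameterize(mu, log_var, rng)     # latent sample fed to the decoder
x = np.array([1.0, 2.0])
loss = vae_loss(x, x, mu, log_var)       # perfect reconstruction, KL = 0 here
```

In an actual VAE, `mu` and `log_var` come from an encoder network and `x_recon` from a decoder applied to `z`; for audio-visual variants, the encoder and decoder simply consume and produce both modalities.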
Five Sources of Biases and Ethical Issues in NLP, and What to Do about Them
Lecturer: Dirk Hovy is associate professor of computer science at Bocconi University in Milan, Italy. Before that, he was faculty and a postdoc in Copenhagen, got a PhD from USC, and a master’s degree in linguistics in Germany. He is interested in the interaction between language, society, and machine learning, or what language can tell us about society, and what computers can tell us about language. He has authored over 60 articles on these topics, including three best-paper awards. He has organized one conference and several workshops (on abusive language, ethics in NLP, and computational social science). Outside of work, Dirk enjoys cooking, running, and leather-crafting. For updated information, see http://www.dirkhovy.com
Abstract: Never before has it been so easy to write a powerful NLP system, and never before has it had such potential impact. However, these systems are now increasingly used in applications they were not intended for, by people who treat them as interchangeable black boxes. The results can range from simple performance drops to systematic biases against various user groups. In this talk, I will discuss several types of biases that affect NLP models (based on Shah et al., 2020 and Hovy & Spruit, 2016), what their sources are, and potential countermeasures.
Image and Video Generation using Deep Learning
Lecturer: Stéphane Lathuilière is an associate professor (maître de conférence) at Telecom Paris, France, in the multimedia team. Until October 2019, he was a post-doctoral fellow at the University of Trento (Italy) in the Multimedia and Human Understanding Group, led by Prof. Nicu Sebe and Prof. Elisa Ricci. He received the M.Sc. degree in applied mathematics and computer science from ENSIMAG, Grenoble Institute of Technology (Grenoble INP), France, in 2014. He completed his master thesis at the International Research Institute MICA (Hanoi, Vietnam). He worked towards his Ph.D. in mathematics and computer science in the Perception Team at Inria under the supervision of Dr. Radu Horaud, and obtained it from Université Grenoble Alpes (France) in 2018. His research interests cover machine learning for computer vision problems (e.g., domain adaptation, continual learning) and deep models for image and video generation. He regularly publishes papers in the most prestigious computer vision conferences (CVPR, ICCV, ECCV, NeurIPS) and top journals (IEEE TPAMI).
Abstract: Generating realistic images and videos has countless applications in different areas, ranging from photography technologies to e-commerce business. Recently, deep generative approaches have emerged as effective techniques for generation tasks. In this talk, we will first present the problem of pose-guided person image generation. Specifically, given an image of a person and a target pose, a new image of that person in the target pose is synthesized. We will show that important body-pose changes affect generation quality and that specific feature map deformations lead to better images. Then, we will present our recent framework for video generation. More precisely, our approach generates videos where an object in a source image is animated according to the motion of a driving video. In this task, we employ a motion representation based on keypoints that are learned in a self-supervised fashion. Therefore, our approach can animate any arbitrary object without using annotation or prior information about the specific object to animate.
Aggregating Weak Annotations from Crowds
Lecturer: Edwin Simpson is a lecturer (equivalent to assistant professor) at the University of Bristol, working on interactive natural language processing. His research focuses on learning from small and unreliable data, including user feedback, and adapts Bayesian approaches to topics such as argumentation, summarisation and sequence labelling. Previously, he was a post-doc at TU Darmstadt, Germany, and completed his PhD at the University of Oxford on Bayesian methods for aggregating crowdsourced data.
Abstract: Current machine learning methods are data-hungry. Crowdsourcing is a common solution for acquiring annotated data at large scale for a modest price. However, the quality of the annotations is highly variable, and annotators do not always agree on the correct label for each data point. This talk presents techniques for aggregating crowdsourced annotations using preference learning and classifier combination to estimate gold-standard rankings and labels, which can be used as training data for ML models. We apply approximate Bayesian approaches to handle noise and small amounts of data per annotator, and to provide a basis for active learning. While these techniques are applicable to any kind of data, we demonstrate their effectiveness for natural language processing tasks.
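The flavour of aggregation described above can be illustrated with a much simpler, non-Bayesian relative: an iteratively re-weighted majority vote in the spirit of Dawid and Skene, where consensus labels and annotator reliabilities are estimated jointly. A toy sketch (the data and smoothing are invented for illustration, and the talk's actual methods are Bayesian preference-learning models, not this):

```python
from collections import Counter

def aggregate(labels, n_iters=10):
    """Alternate between (1) a reliability-weighted vote per item and
    (2) re-scoring each annotator by agreement with the consensus."""
    annotators = {a for item in labels for a in item}
    weight = {a: 1.0 for a in annotators}
    consensus = {}
    for _ in range(n_iters):
        for i, item in enumerate(labels):         # weighted vote per item
            scores = Counter()
            for a, lab in item.items():
                scores[lab] += weight[a]
            consensus[i] = scores.most_common(1)[0][0]
        for a in annotators:                      # smoothed accuracy per annotator
            votes = [(i, item[a]) for i, item in enumerate(labels) if a in item]
            agree = sum(lab == consensus[i] for i, lab in votes)
            weight[a] = (agree + 1) / (len(votes) + 2)
    return consensus, weight

# Three annotators labelling four items; "c" labels adversarially
labels = [
    {"a": 1, "b": 1, "c": 0},
    {"a": 0, "b": 0, "c": 1},
    {"a": 1, "b": 1, "c": 0},
    {"a": 1, "c": 0},
]
consensus, weight = aggregate(labels)
```

On the last item, only "a" and "c" voted; because "c" disagrees with the consensus everywhere, its down-weighted vote loses, which a plain majority vote could not decide. Full Bayesian treatments add confusion matrices per annotator and uncertainty estimates usable for active learning.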
How to do Data Science without writing code
Lecturer: Victoriano Izquierdo (1990) is a software engineer from Granada, and the co-founder and CEO of Graphext. Graphext develops advanced data-analysis software built on the latest advances in data science and artificial intelligence, helping companies of all sizes address hard problems using data.
Abstract: How to do Data Science without writing code
DaSCI Seminars 2020
Lecturer: Sergio Guadarrama is a senior software engineer at Google Brain, where he works on reinforcement learning and neural networks. His research focuses on robust, scalable and efficient reinforcement learning. He is currently the leader of the TF-Agents project and a lead developer of TensorFlow (co-creator of TF-Slim). Before joining Google, he was a researcher at the University of California, Berkeley, where he worked with Professor Lotfi Zadeh and Professor Trevor Darrell. He received his B.A. and Ph.D. from the Universidad Politécnica de Madrid.
Abstract: Reinforcement learning (RL) is a type of machine learning in which the objective is to learn to solve a task through interaction with the environment, maximizing the expected return. Unlike supervised learning, the solution requires making multiple decisions sequentially, and the reinforcement occurs through rewards. The two main components are the environment, which represents the problem to be solved, and the agent, which represents the learning algorithm.
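The agent-environment loop described above can be made concrete with tabular Q-learning on a toy chain environment. This sketch is purely illustrative (it does not use or mimic the TF-Agents API; the environment, hyperparameters and reward are invented):

```python
import random

class ChainEnv:
    """The environment (the 'problem'): states 0..4 on a line.
    Action 1 moves right, action 0 moves left; reaching state 4
    yields reward 1 and ends the episode."""
    def reset(self):
        self.s = 0
        return self.s

    def step(self, action):
        self.s = max(0, min(4, self.s + (1 if action == 1 else -1)))
        done = self.s == 4
        return self.s, (1.0 if done else 0.0), done

def q_learning(env, episodes=300, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """The agent (the 'learning algorithm'): epsilon-greedy tabular
    Q-learning, nudging Q(s, a) toward reward + discounted future value."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(5)]

    def greedy(s):
        if Q[s][0] == Q[s][1]:
            return rng.randrange(2)          # break ties at random
        return 0 if Q[s][0] > Q[s][1] else 1

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:                      # the interaction loop
            a = rng.randrange(2) if rng.random() < eps else greedy(s)
            s2, r, done = env.step(a)
            target = r + gamma * max(Q[s2]) * (not done)
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning(ChainEnv())
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(5)]  # greedy policy
```

After training, the greedy policy moves right from every non-terminal state, i.e. the agent has learned to maximize the expected (discounted) return purely from reward signals, with no labelled examples of correct actions.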