DaSCI Seminars

DaSCI Seminars are talks by outstanding invited researchers who present recent disruptive advances in AI. Each seminar lasts about an hour and a half (a 45-minute talk followed by 30 minutes for questions).
DaSCI Seminars 2023
Machine learning in medicine: Sepsis prediction and antibiotic resistance prediction
Date: 13/06/2023
Abstract: Sepsis is a major cause of mortality in intensive care units around the world. If recognized early, it can often be treated successfully, but early prediction of sepsis is an extremely difficult task in clinical practice. The wealth of intensive care data that is increasingly becoming available for research now makes it possible to study the problem of predicting sepsis with machine learning and data mining approaches. In this talk, I will describe our efforts towards data-driven early recognition of sepsis and the related problem of antibiotic resistance prediction.
Speaker: Karsten Borgwardt has been Director of the Department of Machine Learning and Systems Biology at the Max Planck Institute of Biochemistry in Martinsried, Germany, since February 2023. His work has won several awards, including the 1 million Euro Krupp Award for Young Professors in 2013 and a 2014 Starting Grant from the ERC backup scheme of the Swiss National Science Foundation. Prof. Borgwardt has led large national and international research consortia, including the “Personalized Swiss Sepsis Study” (2018-2023), the subsequent National Data Stream on infection-related outcomes in Swiss ICUs (2022-2023), and two Marie Curie Innovative Training Networks on Machine Learning in Medicine (2013-2016 and 2019-2022).
Multiscale Random Models of Deep Neural Networks
Date: 16/05/2023
Abstract: Deep neural networks have spectacular applications but remain mostly a mathematical mystery. An outstanding issue is to understand how they circumvent the curse of dimensionality to generate or classify data. Inspired by the renormalization group in physics, we explain how deep networks can separate phenomena that appear at different scales and capture scale interactions. This provides high-dimensional models that approximate the probability distribution of complex physical fields such as turbulence or structured images. For classification, learning becomes similar to a compressed sensing problem, where low-dimensional discriminative structures are identified with random projections. We introduce a multiscale random feature model of deep networks for classification, which is validated numerically.
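The idea of classifying with random projections mentioned in the abstract can be illustrated with a generic random-features classifier: project the data onto random directions, apply a simple nonlinearity, and fit a linear model on top. The sketch below (random Fourier features plus ridge regression on a toy two-circle dataset) is only an illustration of that general recipe, not Mallat's multiscale model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: points on two noisy circles (not linearly separable).
n = 500
theta = rng.uniform(0, 2 * np.pi, n)
radius = np.where(rng.random(n) < 0.5, 1.0, 2.0)
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)] + 0.1 * rng.normal(size=(n, 2))
y = (radius > 1.5).astype(float)

# Random Fourier features: project onto random directions, apply a cosine nonlinearity.
D = 300                                   # number of random features
W = rng.normal(scale=2.0, size=(2, D))    # random projection directions
b = rng.uniform(0, 2 * np.pi, D)
Phi = np.cos(X @ W + b)                   # (n, D) feature map

# Ridge regression on the random features (closed form).
lam = 1e-2
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ (2 * y - 1))

pred = (Phi @ w > 0).astype(float)
print("training accuracy:", (pred == y).mean())
```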
Speaker: Stéphane Mallat was a Professor of computer science at NYU until 1994, and then at École Polytechnique in Paris, where he also served as department chair. From 2001 to 2007 he was co-founder and CEO of a semiconductor start-up company. Since 2017, he has held the “Data Sciences” chair at the Collège de France. He is a member of the French Academy of Sciences and the Academy of Technologies, and a foreign member of the US National Academy of Engineering. Stéphane Mallat’s research interests include machine learning, signal processing and harmonic analysis. He developed the multiresolution wavelet theory and algorithms at the origin of the JPEG-2000 compression standard, as well as sparse signal representations in dictionaries through matching pursuits. He currently works on mathematical models of deep neural networks for data analysis and physics.
Geometric Deep Learning: Grids, Graphs, Groups, Geodesics and Gauges
Date: 28/03/2023
Abstract:
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach – such as computer vision, playing Go, or protein folding – are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task; and second, learning by local gradient-descent type methods, typically implemented as backpropagation.
While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This talk is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications.
Such a ‘geometric unification’ endeavour in the spirit of Felix Klein’s Erlangen Program serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers; on the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.
Speaker: Petar Veličković (see https://petar-v.com/ for a short bio)
Model-free, Model-based, and General Intelligence: Learning Representations for Acting and Planning
Date: 07/03/2023
Abstract: During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out of this work, but programs written by hand were not robust or general. After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. The two have close parallels with Daniel Kahneman’s Systems 1 and 2: the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. In this talk, I review learners and solvers, and the challenge of integrating their System 1 and System 2 capabilities, focusing then on our recent work aimed at bridging this gap in the context of acting and planning, where combinatorial and deep learning approaches are used to learn general action models, general policies, and general subgoal structures.
Speaker: Hector Geffner is an Alexander von Humboldt Professor at RWTH Aachen University, Germany, and a Wallenberg Guest Professor at Linköping University, Sweden. Hector grew up in Buenos Aires and obtained a PhD in Computer Science at UCLA in 1989. He then worked at the IBM T.J. Watson Research Center in New York, at the Universidad Simón Bolívar in Caracas, and at the Catalan Institute of Advanced Research (ICREA) and the Universitat Pompeu Fabra in Barcelona. Hector teaches courses on logic, AI, and social and technological change, and is currently doing research on representation learning for acting and planning as part of the ERC project RLeap (2020-2025).
DaSCI Seminars 2022
Neurosymbolic Computing for Accountability in AI
Date: 11/05/2022
Abstract: Despite achieving much success, the deep learning approach to AI has been criticised for being a “black box”: the decisions made by such large and complex learning systems are difficult to explain or analyse. If the system makes a mistake in a critical situation, the consequences can be serious. The use of black-box systems has obvious implications for transparency, but also for fairness and ultimately for trust in current AI. System developers might also like to learn from system errors so that those errors can be fixed. The area of explainable AI (XAI) has sought to open the black box by providing explanations for large AI systems, mostly through the use of visualization techniques and user studies that associate the decisions made by the system with known features of the deep learning model. In this talk, I will argue that XAI needs knowledge extraction and an objective measure of fidelity as a prerequisite for visualization and user studies. As part of a neurosymbolic approach, knowledge extraction creates a bridge between sub-symbolic deep learning and logic-based symbolic AI with a precise semantics. I will exemplify how knowledge extraction can be used in the analysis of chest X-ray images as part of a collaborative project with Fujitsu Research to find and fix mistakes in image classification. I will conclude by arguing that knowledge extraction is an important tool, but only one of many elements needed to address fairness and accountability in AI.
Speaker: Artur Garcez is Professor of Computer Science and Director of the Data Science Institute at City, University of London. He holds a PhD in Computing (2000) from Imperial College London. He is a Fellow of the British Computer Society (FBCS) and president of the steering committee of the Neural-Symbolic Learning and Reasoning Association. He has co-authored two books: Neural-Symbolic Cognitive Reasoning, 2009, and Neural-Symbolic Learning Systems, 2002. His research has led to publications in the journals Behavioral & Brain Sciences, Theoretical Computer Science, Neural Computation, Machine Learning, Journal of Logic and Computation, IEEE Transactions on Neural Networks, Journal of Applied Logic, Artificial Intelligence, and Studia Logica, and the flagship AI and Neural Computation conferences AAAI, NeurIPS, IJCAI, IJCNN, AAMAS and ECAI. Professor Garcez holds editorial positions with several scientific journals in the fields of Computational Logic and Artificial Intelligence, and has been Programme Committee member for several conferences, including IJCAI, IJCNN, NeurIPS and AAAI.
The Modern Mathematics of Deep Learning
Date: 05/04/2022
Abstract: Despite the outstanding success of deep neural networks in real-world applications, ranging from science to public life, most of the related research is empirically driven and a comprehensive mathematical foundation is still missing. At the same time, these methods have already shown their impressive potential in mathematical research areas such as imaging sciences, inverse problems, or the numerical analysis of partial differential equations, sometimes by far outperforming classical mathematical approaches for particular problem classes. The goal of this lecture is to first provide an introduction to this vibrant new research area. We will then survey recent advances in two directions, namely the development of a mathematical foundation of deep learning and the introduction of novel deep learning-based approaches to mathematical problem settings.
Speaker: Gitta Kutyniok (https://www.ai.math.lmu.de/kutyniok) currently holds a Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians-Universität München. She received her Diploma in Mathematics and Computer Science as well as her Ph.D. degree from the Universität Paderborn in Germany, and her Habilitation in Mathematics in 2006 from the Justus-Liebig-Universität Gießen. From 2001 to 2008 she held visiting positions at several US institutions, including Princeton University, Stanford University, Yale University, Georgia Institute of Technology, and Washington University in St. Louis, and was a Nachdiplom lecturer at ETH Zurich in 2014. In 2008, she became a full professor of mathematics at the Universität Osnabrück, and moved to Berlin three years later, where she held an Einstein Chair in the Institute of Mathematics at the Technische Universität Berlin and a courtesy appointment in the Department of Computer Science and Engineering until 2020. In addition, Gitta Kutyniok has held an Adjunct Professorship in Machine Learning at the University of Tromsø since 2019.
Neuroevolution: A Synergy of Evolution and Learning
Date: 22/03/2022
Abstract: Neural network weights and topologies were originally evolved in order to solve tasks where gradients are not available. Recently, neuroevolution has also become a useful technique for metalearning the architectures of deep learning networks. However, neuroevolution is most powerful when it exploits synergies of evolution and learning. In this talk, I review several examples of such synergies: evolving loss functions, activation functions, surrogate optimization, and human-designed solutions. I will demonstrate these synergies in image recognition, game playing, and pandemic policy optimization, and point out opportunities for future work.
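As a minimal illustration of the core idea of evolving network weights when gradients are not used, the sketch below runs a simple (mu + lambda) evolution strategy on a tiny network for the XOR task. It is a generic toy example with assumed population and mutation settings, not any of the specific synergies discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task: a tiny problem where we evolve network weights instead of using gradients.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def unpack(genome):
    # genome -> weights of a 2-4-1 network
    W1 = genome[:8].reshape(2, 4); b1 = genome[8:12]
    W2 = genome[12:16].reshape(4, 1); b2 = genome[16:17]
    return W1, b1, W2, b2

def fitness(genome):
    W1, b1, W2, b2 = unpack(genome)
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2))).ravel()
    return -np.mean((out - y) ** 2)        # higher is better

# Simple (mu + lambda) evolution strategy on the flattened weight vector.
pop_size, n_parents, genome_len = 60, 10, 17
pop = rng.normal(size=(pop_size, genome_len))
for gen in range(300):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-n_parents:]]          # keep the best (elitism)
    children = parents[rng.integers(0, n_parents, pop_size - n_parents)]
    children = children + 0.3 * rng.normal(size=children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
W1, b1, W2, b2 = unpack(best)
h = np.tanh(X @ W1 + b1)
print(np.round(1 / (1 + np.exp(-(h @ W2 + b2))).ravel(), 2))   # should approach [0, 1, 1, 0]
```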
Speaker: Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and Associate VP of Evolutionary AI at Cognizant. He received an M.S. in Engineering from Helsinki University of Technology (now Aalto University) in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His current research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and vision; he is an author of over 450 articles in these research areas. At Cognizant, he is scaling up these approaches to real-world problems. Risto is an IEEE Fellow; his work on neuroevolution has recently been recognized with the IEEE CIS Evolutionary Computation Pioneer Award, the Gabor Award of the International Neural Network Society, and the Outstanding Paper of the Decade Award of the International Society for Artificial Life.
Trustable autonomy: creating interfaces between human and robot societies
Date: 26/01/2022
Abstract: Robotic systems are starting to revolutionize many applications, from transportation to health care, assisted by technological advancements such as cloud computing, novel hardware design, and novel manufacturing techniques. However, several of the characteristics that make robots ideal for certain future applications, such as autonomy, self-learning, and knowledge sharing, can also raise concerns as the technology moves from academic institutions to the public sphere. Blockchain, an emerging technology that originated in the digital currency field, is starting to show great potential to make robotic operations more secure, autonomous, flexible, and even profitable, thereby bridging the gap between purely scientific domains and real-world applications. This talk seeks to move beyond the classical view of robotic systems to advance our understanding of the possibilities and limitations of combining state-of-the-art robotic systems with blockchain technology.
Speaker: Eduardo Castello’s experience and interests span robotics, blockchain technology, and complex systems. Eduardo was a Marie Curie Fellow at the MIT Media Lab, where he explored the combination of distributed robotic systems and blockchain technology. His work focuses on implementing new security, behavior, and business models for distributed robotics by using novel cryptographic methods. Eduardo received his B.Sc. (Hons) in Intelligent Systems from the University of Portsmouth (UK) and his M.Eng. and Ph.D. degrees in robotics engineering from Osaka University (Japan). During his graduate studies, Eduardo’s research focused on swarm robotics and how to achieve cooperative and self-sustaining groups of robots.
DaSCI Seminars 2021
If all you have is a hammer, everything looks like a nail
Date: 01/12/2021
Abstract: In this talk, I’ll focus on some recent advances in privacy-preserving NLP. In particular, we will look at the differential privacy paradigm and its applications in NLP, namely by using differentially-private training of neural networks. Although the training framework is very general, does it really fit everything we typically do in NLP?
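For readers unfamiliar with differentially-private training, the sketch below shows the standard DP-SGD recipe (per-example gradient clipping plus calibrated Gaussian noise) on a toy logistic-regression problem. The hyperparameters are illustrative assumptions, and a real deployment would also need a privacy accountant to track the resulting (epsilon, delta) guarantee; this is not code from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data.
n, d = 1000, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

# Differentially private SGD for logistic regression:
# per-example gradient clipping + Gaussian noise on the summed gradient.
clip_norm = 1.0       # L2 bound on each per-example gradient
noise_mult = 1.1      # noise standard deviation = noise_mult * clip_norm
batch_size, lr, steps = 100, 0.1, 300

w = np.zeros(d)
for _ in range(steps):
    idx = rng.choice(n, batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    probs = 1 / (1 + np.exp(-(Xb @ w)))
    per_example_grads = (probs - yb)[:, None] * Xb          # (batch, d)

    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    per_example_grads *= np.minimum(1.0, clip_norm / (norms + 1e-12))

    # Sum, add calibrated Gaussian noise, and average.
    noisy_grad = per_example_grads.sum(axis=0)
    noisy_grad += noise_mult * clip_norm * rng.normal(size=d)
    w -= lr * noisy_grad / batch_size

acc = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print("training accuracy with DP-SGD:", round(acc, 3))
```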
Speaker: Dr. Habernal leads the independent research group “Trustworthy Human Language Technologies” at the Department of Computer Science, Technical University of Darmstadt, Germany. His current research areas include privacy-preserving NLP, legal argument mining, and explainable and trustworthy models. His research track spans argument mining and computational argumentation, crowdsourcing, and serious games, among other topics. More info at www.trusthlt.org.
Graph Mining with Graph Neural Networks
Lecturer: Bryan Perozzi is a Research Scientist in Google Research’s Algorithms and Optimization group, where he routinely analyzes some of the world’s largest (and perhaps most interesting) graphs. Bryan’s research focuses on developing techniques for learning expressive representations of relational data with neural networks. These scalable algorithms are useful for prediction tasks (classification/regression), pattern discovery, and anomaly detection in large networked data sets. Bryan is an author of 30+ peer-reviewed papers at leading conferences in machine learning and data mining (such as NeurIPS, ICML, KDD, and WWW). His doctoral work on learning network representations was awarded the prestigious SIGKDD Dissertation Award. Bryan received his Ph.D. in Computer Science from Stony Brook University in 2016, and his M.S. from the Johns Hopkins University in 2011.
Date: 17/05/2021
Abstract: How can neural networks best model data which doesn’t have a fixed structure? In this talk, I will discuss graph neural networks (GNNs), a very active area of current research in machine learning aimed at answering this interesting (and practical) question. After reviewing the basics of GNNs, I’ll discuss some challenges applying these methods in industry, and some of the methods we’ve developed for addressing these challenges.
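As background for the talk, a graph neural network layer can be summarized in a few lines: aggregate each node's neighborhood and apply a learned transformation. The sketch below implements one GCN-style propagation step on a toy graph with random weights; it is a minimal, generic illustration, not the methods developed by the speaker's group.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny undirected graph: 5 nodes, an edge list, and 4-dimensional node features.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n_nodes, in_dim, out_dim = 5, 4, 3
H = rng.normal(size=(n_nodes, in_dim))            # node feature matrix

# Adjacency with self-loops, symmetrically normalized (GCN-style propagation).
A = np.eye(n_nodes)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]

# One graph-convolution layer: aggregate neighbor features, then transform.
W = rng.normal(scale=0.5, size=(in_dim, out_dim))
H_next = np.maximum(0.0, A_hat @ H @ W)           # ReLU(A_hat H W)

print(H_next.shape)   # (5, 3): a new embedding for every node
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is what makes GNNs useful for data without a fixed structure.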
Detecting the “Fake News” Before It Was Even Written, Media Literacy, and Flattening the Curve of the COVID-19 Infodemic
Lecturer: Dr. Preslav Nakov is a Principal Scientist at the Qatar Computing Research Institute (QCRI), HBKU, where he leads the Tanbih mega-project (developed in collaboration with MIT), which aims to limit the effect of “fake news”, propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. Dr. Preslav Nakov is President of ACL SIGLEX, Secretary of ACL SIGSLAV, and a member of the EACL advisory board. He is also a member of the editorial boards of a number of journals, including Computational Linguistics, TACL, CS&L, NLE, AI Communications, and Frontiers in AI. He authored a Morgan & Claypool book on Semantic Relations between Nominals and two books on computer algorithms. He has published 250+ research papers, and he was named among the top 2% of the world’s most-cited researchers in the career achievement category, part of a global list compiled by Stanford University. He received a Best Long Paper Award at CIKM 2020, a Best Demo Paper Award (Honorable Mention) at ACL 2020, a Best Task Paper Award (Honorable Mention) at SemEval 2020, a Best Poster Award at SocInfo 2019, and the Young Researcher Award at RANLP 2011. He was also the first to receive the Bulgarian President’s John Atanasoff Award, named after the inventor of the first automatic electronic digital computer. Dr. Nakov’s research has been featured by over 100 news outlets, including Forbes, Boston Globe, Aljazeera, DefenseOne, Business Insider, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget, among others.
Date: 19/04/2021
Abstract: Given the recent proliferation of disinformation online, there has been growing research interest in automatically debunking rumors, false claims, and “fake news”. A number of fact-checking initiatives have been launched so far, both manual and automatic, but the whole enterprise remains in a state of crisis: by the time a claim is finally fact-checked, it could have reached millions of users, and the harm caused could hardly be undone.
An arguably more promising direction is to focus on analyzing entire news outlets, which can be done in advance; then, we could fact-check the news before it was even written: by checking how trustworthy the outlet that has published it is (which is what journalists actually do). We will show how we do this in the Tanbih news aggregator (http://www.tanbih.org/), which aims to limit the impact of “fake news”, propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking, which are arguably the best way to address disinformation in the long run. In particular, we develop media profiles that show the general factuality of reporting, the degree of propagandistic content, hyper-partisanship, leading political ideology, general frame of reporting, stance with respect to various claims and topics, as well as audience reach and audience bias in social media.
Another important observation is that the term “fake news” misleads people to focus exclusively on factuality and to ignore the other half of the problem: the potential malicious intent. Thus, we detect the use of specific propaganda techniques in text, e.g., appeal to emotions, fear, prejudices, logical fallacies, etc. We will show how we do this in the Prta system (https://www.tanbih.org/prta), another media literacy tool, which received the Best Demo Paper Award (Honorable Mention) at ACL 2020; an associated shared task received the Best Task Paper Award (Honorable Mention) at SemEval 2020.
Finally, at the time of COVID-19, the problem of disinformation online got elevated to a whole new level as the first global infodemic. While fighting this infodemic is typically thought of in terms of factuality, the problem is much broader, as malicious content includes not only “fake news”, rumors, and conspiracy theories, but also promotion of fake cures, panic, racism, xenophobia, and mistrust in the authorities, among others. Thus, we argue for the need for a holistic approach combining the perspectives of journalists, fact-checkers, policymakers, social media platforms, and society as a whole, and we present our recent research in that direction (https://mt.qcri.org/covid19disinformationdetector/).
Efficient Deep Learning
Lecturer: Marco Pedersoli is an Assistant Professor at ETS Montreal. He obtained his PhD in computer science in 2012 at the Autonomous University of Barcelona and the Computer Vision Center of Barcelona. He was then a postdoctoral fellow in computer vision and machine learning at KU Leuven with Prof. Tuytelaars and later at INRIA Grenoble with Drs. Verbeek and Schmid. At ETS Montreal he is a member of LIVIA and co-chairs an industrial Chair on Embedded Neural Networks for Connected Building Control. His research is mostly applied to visual recognition, the automatic interpretation and understanding of images and videos. His specific focus is on reducing the complexity and the amount of annotation required by deep learning algorithms such as convolutional and recurrent neural networks. Prof. Pedersoli has authored more than 40 publications in top-tier international conferences and journals in computer vision and machine learning.
Date: 12/04/2021
Abstract: In the last 10 years, deep learning (DL) models have shown great progress in many fields, from Computer Vision to Natural Language Processing. However, DL methods require large computational resources (i.e., GPUs or TPUs) and very large datasets, which also makes the training phase very long and painful. Thus, there is a strong need to reduce the computational cost of DL methods both in training and in deployment. In this talk, I present the most common families of approaches used to reduce the memory and computation requirements of DL methods for both training and deployment, and show that a reduction in model footprint does not always produce a corresponding speed-up. Finally, I will present some recent results suggesting that large DL models are important mostly for facilitating training; once training is finished, we can deploy a much smaller and faster model with almost no loss in accuracy.
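One common way to realize the "train large, deploy small" observation at the end of the abstract is knowledge distillation, where a small student is trained to match the softened outputs of a large teacher. The sketch below shows a standard distillation loss on random logits standing in for real model outputs; the temperature and weighting are assumptions for illustration, and this is a generic example rather than the speaker's specific method.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax at temperature T (higher T gives softer probabilities)."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend of cross-entropy with the teacher's softened outputs and with the true labels."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -(p_teacher * np.log(p_student + 1e-12)).sum(axis=1).mean() * T * T
    hard_probs = softmax(student_logits, 1.0)
    hard = -np.log(hard_probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard

# Example: 8 samples, 5 classes, random logits standing in for real model outputs.
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(8, 5)) * 3
student_logits = rng.normal(size=(8, 5))
labels = teacher_logits.argmax(axis=1)
print(round(distillation_loss(student_logits, teacher_logits, labels), 3))
```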
Variational Autoencoders for Audio, Visual and Audio-Visual Learning
Lecturer: Xavier Alameda-Pineda is a (tenured) Research Scientist at Inria, in the Perception Group. He obtained an M.Sc. (equivalent) in Mathematics in 2008 and in Telecommunications in 2009 from BarcelonaTech, and in Computer Science in 2010 from Université Grenoble-Alpes (UGA). He then worked towards his Ph.D. in Mathematics and Computer Science, which he obtained in 2013 from UGA. After a two-year post-doc at the Multimodal Human Understanding Group at the University of Trento, he was appointed to his current position. Xavier is an active member of SIGMM, a senior member of IEEE, and a member of ELLIS. He co-chairs the “Audio-visual machine perception and interaction for companion robots” chair of the Multidisciplinary Institute of Artificial Intelligence, and is the Coordinator of the H2020 project SPRING: Socially Pertinent Robots in Gerontological Healthcare. Xavier’s research interests lie in combining machine learning, computer vision and audio processing for scene and behavior analysis and human-robot interaction. More info at xavirema.eu
Date: 01/02/2021
Abstract: Since their introduction, Variational Autoencoders (VAEs) have demonstrated great performance in key unsupervised feature representation applications, in particular for visual and auditory representations. In this seminar, the overall methodology of variational autoencoders will be presented, along with applications to learning with audio and visual data. Special emphasis will be put on the use of VAEs for audio-visual learning, showcased on the task of audio-visual speech enhancement.
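As a primer on the methodology the seminar covers, the sketch below walks through a single VAE forward pass with the reparameterization trick and the two terms of the (negative) ELBO. The linear, untrained encoder and decoder weights are assumptions purely for illustration; a real VAE would use learned neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal VAE forward pass on toy data: linear encoder/decoder, Gaussian latent.
x_dim, z_dim, batch = 16, 2, 32
x = rng.normal(size=(batch, x_dim))

# Encoder parameters (would normally be learned): map x -> mean and log-variance of q(z|x).
W_mu, W_logvar = rng.normal(scale=0.1, size=(2, x_dim, z_dim))
mu = x @ W_mu
logvar = x @ W_logvar

# Reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, I).
eps = rng.normal(size=(batch, z_dim))
z = mu + np.exp(0.5 * logvar) * eps

# Decoder (would normally be learned): map z back to a reconstruction of x.
W_dec = rng.normal(scale=0.1, size=(z_dim, x_dim))
x_hat = z @ W_dec

# Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
recon = ((x - x_hat) ** 2).sum(axis=1).mean()
kl = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(axis=1).mean()
print("reconstruction:", round(recon, 3), " KL:", round(kl, 3), " loss:", round(recon + kl, 3))
```

Training minimizes this loss with gradients flowing through the reparameterized sample, which is what makes the approach practical for audio and visual data.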
Variational Autoencoders for Audio, Visual and Audio-Visual Learning – Recording
Five Sources of Biases and Ethical Issues in NLP, and What to Do about Them
Lecturer: Dirk Hovy is an associate professor of computer science at Bocconi University in Milan, Italy. Before that, he was faculty and a postdoc in Copenhagen, got a PhD from USC, and a master’s in linguistics in Germany. He is interested in the interaction between language, society, and machine learning, or what language can tell us about society, and what computers can tell us about language. He has authored over 60 articles on these topics, including 3 best paper awards. He has organized one conference and several workshops (on abusive language, ethics in NLP, and computational social science). Outside of work, Dirk enjoys cooking, running, and leather-crafting. For updated information, see http://www.dirkhovy.com
Date: 11/01/2021
Abstract: Never before was it so easy to write a powerful NLP system, and never before did it have such a potential impact. However, these systems are now increasingly used in applications they were not intended for, by people who treat them as interchangeable black boxes. The results can range from simple performance drops to systematic biases against various user groups. In this talk, I will discuss several types of biases that affect NLP models (based on Shah et al., 2020 and Hovy & Spruit, 2016), what their sources are, and potential countermeasures.
Five Sources of Biases and Ethical Issues in NLP, and What to Do about Them – Recording
Image and Video Generation using Deep Learning
Lecturer: Stéphane Lathuilière is an associate professor (maître de conférences) at Telecom Paris, France, in the multimedia team. Until October 2019, he was a post-doctoral fellow at the University of Trento (Italy) in the Multimedia and Human Understanding Group, led by Prof. Nicu Sebe and Prof. Elisa Ricci. He received the M.Sc. degree in applied mathematics and computer science from ENSIMAG, Grenoble Institute of Technology (Grenoble INP), France, in 2014. He completed his master thesis at the International Research Institute MICA (Hanoi, Vietnam). He worked towards his Ph.D. in mathematics and computer science in the Perception Team at Inria under the supervision of Dr. Radu Horaud, and obtained it from Université Grenoble Alpes (France) in 2018. His research interests cover machine learning for computer vision problems (e.g., domain adaptation, continual learning) and deep models for image and video generation. He regularly publishes papers in the most prestigious computer vision conferences (CVPR, ICCV, ECCV, NeurIPS) and top journals (IEEE TPAMI).
Date: 14/12/2020
Abstract: Generating realistic images and videos has countless applications in different areas, ranging from photography technologies to e-commerce business. Recently, deep generative approaches have emerged as effective techniques for generation tasks. In this talk, we will first present the problem of pose-guided person image generation. Specifically, given an image of a person and a target pose, a new image of that person in the target pose is synthesized. We will show that important body-pose changes affect generation quality and that specific feature map deformations lead to better images. Then, we will present our recent framework for video generation. More precisely, our approach generates videos where an object in a source image is animated according to the motion of a driving video. In this task, we employ a motion representation based on keypoints that are learned in a self-supervised fashion. Therefore, our approach can animate any arbitrary object without using annotation or prior information about the specific object to animate.
Aggregating Weak Annotations from Crowds
Lecturer: Edwin Simpson is a lecturer (equivalent to assistant professor) at the University of Bristol, working on interactive natural language processing. His research focusses on learning from small and unreliable data, including user feedback, and adapts Bayesian approaches to topics such as argumentation, summarisation and sequence labelling. Previously, he was a post-doc at TU Darmstadt, Germany, and completed his PhD at the University of Oxford on Bayesian methods for aggregating crowdsourced data.
Date: 09/11/2020
Abstract: Current machine learning methods are data hungry. Crowdsourcing is a common solution for acquiring annotated data at large scale for a modest price. However, the quality of the annotations is highly variable, and annotators do not always agree on the correct label for each data point. This talk presents techniques for aggregating crowdsourced annotations using preference learning and classifier combination to estimate gold-standard rankings and labels, which can be used as training data for ML models. We apply approximate Bayesian approaches to handle noise and small amounts of data per annotator, and to provide a basis for active learning. While these techniques are applicable to any kind of data, we demonstrate their effectiveness on natural language processing tasks.
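A classical baseline for the aggregation problem described here is a Dawid-Skene-style EM algorithm, which jointly estimates item labels and per-annotator confusion matrices. The sketch below runs it on simulated annotations with assumed annotator reliabilities; it is a simple point-estimate baseline for intuition, not the Bayesian preference-learning models presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated crowdsourcing: 200 items, 2 classes, 5 annotators of varying reliability.
n_items, n_classes, n_annot = 200, 2, 5
true = rng.integers(0, n_classes, n_items)
reliability = np.array([0.9, 0.8, 0.7, 0.6, 0.55])      # prob. of labeling correctly
labels = np.where(rng.random((n_items, n_annot)) < reliability,
                  true[:, None],
                  rng.integers(0, n_classes, (n_items, n_annot)))

# Dawid-Skene-style EM: alternate between estimating item labels and annotator behavior.
post = np.zeros((n_items, n_classes))                    # initialize with majority vote
for c in range(n_classes):
    post[:, c] = (labels == c).sum(axis=1)
post /= post.sum(axis=1, keepdims=True)

for _ in range(20):
    # M-step: class priors and per-annotator confusion matrices from current posteriors.
    prior = post.mean(axis=0)
    conf = np.zeros((n_annot, n_classes, n_classes)) + 1e-6
    for a in range(n_annot):
        for c in range(n_classes):
            conf[a, :, c] += post.T @ (labels[:, a] == c)
    conf /= conf.sum(axis=2, keepdims=True)
    # E-step: posterior over the true label of each item.
    log_post = np.log(prior)[None, :].repeat(n_items, axis=0)
    for a in range(n_annot):
        log_post += np.log(conf[a][:, labels[:, a]]).T
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)

print("accuracy of aggregated labels:", (post.argmax(axis=1) == true).mean())
```

Weighting annotators by their estimated confusion matrices typically beats plain majority voting, which is the basic intuition the Bayesian models in the talk build on.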
How to do Data Science without writing code
Lecturer: Victoriano Izquierdo (1990) is a software engineer from Granada and the co-founder and CEO of Graphext. Graphext develops advanced data analysis software built on the latest advances in data science and artificial intelligence, helping small and large companies address hard problems with data.
Date: 15/02/2021
Abstract: How to do Data Science without writing code
DaSCI Seminars 2020
Reinforcement Learning
Lecturer: Sergio Guadarrama is a senior software engineer at Google Brain, where his research focuses on robust, scalable and efficient reinforcement learning and neural networks. He is currently the leader of the TF-Agents project and a lead developer of TensorFlow (co-creator of TF-Slim). Before joining Google, he was a researcher at the University of California, Berkeley, where he worked with Professor Lotfi Zadeh and Professor Trevor Darrell. He received his B.A. and Ph.D. from the Universidad Politécnica de Madrid.
Date: 26/10/2020
Abstract: Reinforcement learning (RL) is a type of machine learning in which the objective is to learn to solve a task through interaction with the environment, maximizing the expected return. Unlike supervised learning, the solution requires making multiple decisions sequentially, and the reinforcement comes through rewards. The two main components are the environment, which represents the problem to be solved, and the agent, which represents the learning algorithm.
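To make the agent/environment/reward loop concrete, the sketch below implements tabular Q-learning on a toy five-cell corridor: the environment returns the next state and reward, and the agent improves its action-value table from interaction. This is a generic textbook example under assumed hyperparameters, not TF-Agents code or material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny grid-world: a 1x5 corridor, start at cell 0, reward +1 for reaching cell 4.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
GOAL = 4

def step(state, action):
    """Environment: returns (next_state, reward, done)."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

# Tabular Q-learning: the agent improves its action-value estimates from interaction.
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.95, 0.1

for episode in range(200):
    s = 0
    for _ in range(50):
        # Epsilon-greedy action selection balances exploration and exploitation.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the bootstrapped return.
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s_next].max()) - Q[s, a])
        s = s_next
        if done:
            break

# The greedy policy in the non-terminal states should prefer 'right' (action 1).
print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))
```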