As Machine Learning (ML) is applied to increasingly sensitive tasks and to increasingly noisy data, it has become important that the algorithms we develop are robust to such conditions. In robust Machine Learning we address recent advances in a number of related topics, both theoretical and applied, including:
- Learning in the presence of outliers and noise: building robust, generalizable models when the training data set is corrupted by noise. This includes robust (non-parametric) statistics, list-decodable learning, and data- and watermark-poisoning attacks.
- Learning with adversaries: machine vision systems based on Deep Learning are known to be fooled by perturbing a test image by an amount imperceptible to the human eye. We look at how these attacks work, as well as at empirical defenses against them (e.g. adversarial training with PGD).
- Private Machine Learning, where we try to answer the question: how can we develop ML algorithms that respect the privacy of the users providing the data?
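As a toy illustration of the first bullet (robust estimation when training data are corrupted by outliers), the sketch below compares the sample mean with the median and a trimmed mean on data where a few points have been replaced by gross outliers. The estimators, the 10% trimming level, and the corruption rate are illustrative choices for this sketch, not methods prescribed by the group.

```python
import random
import statistics

def trimmed_mean(xs, trim=0.1):
    """Discard the lowest and highest `trim` fraction of the sorted
    sample before averaging (a classic robust location estimator)."""
    xs = sorted(xs)
    k = int(len(xs) * trim)
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return sum(kept) / len(kept)

random.seed(0)
# Clean data centered at 5.0, then corrupt 5% of the points.
data = [random.gauss(5.0, 1.0) for _ in range(200)]
data[:10] = [1000.0] * 10

print(statistics.mean(data))    # dragged far away from 5 by the outliers
print(statistics.median(data))  # stays close to 5
print(trimmed_mean(data))       # stays close to 5
```

The point of the comparison: the mean has a breakdown point of 0 (a single corrupted point can move it arbitrarily), while the median and the trimmed mean tolerate a bounded fraction of adversarial corruption, which is exactly the regime the first research line above studies.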
Contact: Julián Luengo Martín
| Name | Email | Area | Degree |
| --- | --- | --- | --- |
| Benítez Sánchez, José Manuel | J.M.Benitez@decsai.ugr.es | Data Science and Big Data Area, Computational Intelligence Area | PhD |
| Herrera Triguero, Francisco | herrera@decsai.ugr.es | DaSCI Technology Applications Area, Data Science and Big Data Area, Computational Intelligence Area | PhD |
| Luengo Martín, Julián | julianlm@decsai.ugr.es | Data Science and Big Data Area | PhD |
| Romero Zaliz, Rocío | rocio@ugr.es | DaSCI Technology Applications Area | PhD |