Principal Investigator: Ankur Moitra
Project Website: http://www.nsf.gov/awardsearch/showAward?AWD_ID=1453261&HistoricalAwards=false
Project Start Date: July 2015
Project End Date: June 2020
Algorithms and complexity are the theoretical foundation and backbone of machine learning. Yet over the past few decades an uncomfortable truth has set in: worst-case analysis is not the right framework for studying machine learning, because every model that is interesting enough to use in practice leads to computationally hard problems. The goal of the PI's research agenda is to move beyond worst-case analysis. This involves both formalizing when and why heuristics such as alternating minimization and Gibbs sampling work, and designing fundamentally new algorithms for some of the basic tasks in machine learning. This project has already had a number of successes, such as provable algorithms for nonnegative matrix factorization, topic modeling, and learning mixture models.
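As a concrete point of reference, the sketch below is a minimal illustration (not the project's own algorithm) of the alternating-minimization heuristic applied to nonnegative matrix factorization: each factor is refit by nonnegative least squares while the other is held fixed. The rank, iteration count, and random initialization are choices made only for this example.

```python
# Minimal illustrative sketch of alternating minimization for NMF:
# given a nonnegative matrix M, alternately fit W and H so that M ~ W @ H,
# solving a nonnegative least-squares problem for one factor while the
# other is held fixed.
import numpy as np
from scipy.optimize import nnls

def alternating_nmf(M, rank, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, m = M.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        # Update each column of H with W held fixed (nonnegative least squares).
        H = np.column_stack([nnls(W, M[:, j])[0] for j in range(m)])
        # Update each row of W with H held fixed, by symmetry on the transpose.
        W = np.column_stack([nnls(H.T, M[i, :])[0] for i in range(n)]).T
    return W, H

# Example: recover a planted rank-3 nonnegative factorization.
rng = np.random.default_rng(1)
M = rng.random((30, 3)) @ rng.random((3, 20))
W, H = alternating_nmf(M, rank=3)
print("relative error:", np.linalg.norm(M - W @ H) / np.linalg.norm(M))
```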
Professor Moitra will investigate several new directions in topics such as sparse coding, inference in graphical models, inverse problems for tensors, and semi-random models. These projects will leverage a wide range of modern tools to give new provable algorithms in each of these settings, and will involve making new connections between alternating minimization and approximate gradient descent, analyzing Gibbs sampling through correlation decay and coupling, connecting tensor completion to quantum complexity, and rethinking the standard distributional models used in machine learning. These projects cut across several areas of computer science and applied mathematics and will build new bridges between them, as well as expand the reach of theory into a number of domains where there is a serious gap in our current understanding. Bridging theory and practice has significant broader impact. The PI will continue to mentor undergraduate and graduate students.
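To make one of these directions concrete, the sketch below illustrates the Gibbs sampling heuristic on the ferromagnetic Ising model, a standard graphical model; how quickly such a chain mixes is the kind of question that correlation decay and coupling arguments are used to answer. The grid size, inverse temperature, and number of sweeps are assumptions made only for the illustration, not parameters from the project.

```python
# Minimal sketch of Gibbs sampling for the ferromagnetic Ising model on an
# n x n grid: each step resamples one spin from its conditional distribution
# given its neighbors. beta is an assumed inverse temperature.
import numpy as np

def gibbs_ising(n=20, beta=0.4, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                # Sum of the neighboring spins (free boundary conditions).
                s = sum(spins[x, y]
                        for x, y in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                        if 0 <= x < n and 0 <= y < n)
                # Conditional probability that the spin is +1 given its neighbors.
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                spins[i, j] = 1 if rng.random() < p_plus else -1
    return spins

sample = gibbs_ising()
print("average magnetization:", sample.mean())
```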