Title: Transforming Machine Learning Heuristics into Provable Algorithms: Classical, Stochastic, and Neural
Speaker: Cheng Tang, GWU, Computer Science
Date and Time: Friday, February 23, 11:00 am - 12:00 noon
Place: Phillips Hall (801 22nd Street), Room 736
Abstract: A recurring pattern across many areas of machine learning is the empirical success of a handful of “heuristics”: simple learning procedures favored by practitioners. Many of these heuristic techniques lack formal theoretical justification. In unsupervised learning, Lloyd's k-means algorithm, though provably exponentially slow in the worst case, remains popular for clustering problems arising in many applications. In supervised learning, random forests are another winning heuristic with many variants and applications. The most prominent example, however, is perhaps the blossoming field of deep learning, which is built almost entirely on heuristics; the practical success of a deep learning algorithm usually relies on an experienced user skillfully and creatively combining them. In this talk, I will discuss some of my thesis work on advancing the theoretical understanding of several of the most widely used machine learning heuristics.
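For readers unfamiliar with the Lloyd's heuristic mentioned in the abstract, it alternates two simple steps: assign each point to its nearest center, then move each center to the mean of its assigned points. The sketch below is illustrative only (the function name, parameters, and initialization scheme are assumptions, not material from the talk), assuming NumPy:

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Minimal sketch of Lloyd's k-means heuristic (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Initialize centers by sampling k distinct data points (one common choice).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster
        # (empty clusters keep their previous center).
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # reached a local optimum
        centers = new_centers
    return centers, labels
```

Each iteration can only decrease the k-means cost, so the procedure always terminates at a local optimum; the worst-case result referenced in the abstract concerns how many such iterations may be needed.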