Technion, IEM faculty - Operations Research seminar
 
Speaker:  Nathan Srebro, Toyota Technological Institute at Chicago, USA
 
Title:    Optimization, Learning and the Universality of Mirror Descent
 
Date:     14/11/2011
 
Time:     12:30
 
Place:    Bloomfield-527
 
____________________________________________________________________________
 
Abstract: I will discuss deep connections between statistical
learning, online learning and optimization. I will show that there is
a tight correspondence between the sample size required for learning
and the number of local oracle accesses required for optimization,
and that the same measures of "complexity" (e.g. the fat-shattering
dimension or Rademacher complexity) control both.
Furthermore, I will show how the Mirror Descent method, and in
particular its stochastic/online variant, is in a strong sense
"universal" for online learning, statistical learning and
optimization. That is, for a general class of convex
learning/optimization problems, Mirror Descent can always achieve a
(nearly) optimal guarantee. In the context of statistical learning,
this also implies that for a broad generic class of convex problems,
learning can be done optimally (in the worst-case agnostic-PAC sense)
with a single pass over the data. Joint work with Karthik Sridharan
and Ambuj Tewari, and mostly based on Sridharan's PhD Thesis.
____________________________________________________________________________
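For readers unfamiliar with the method named in the abstract, the following is a minimal sketch of stochastic mirror descent with the entropic mirror map (the exponentiated-gradient update) over the probability simplex, with iterate averaging as used in the single-pass learning guarantee. The function names, step size, and toy objective are illustrative assumptions, not part of the talk:

```python
import numpy as np

def stochastic_mirror_descent(grad_oracle, d, T, eta):
    """Entropic mirror descent (exponentiated gradient) on the d-simplex.

    grad_oracle(w) returns a stochastic (sub)gradient at w; we return
    the average of the iterates, as in standard online-to-batch conversion.
    """
    w = np.full(d, 1.0 / d)           # start at the uniform distribution
    avg = np.zeros(d)
    for _ in range(T):
        g = grad_oracle(w)            # one stochastic gradient access
        w = w * np.exp(-eta * g)      # multiplicative (mirror) step
        w /= w.sum()                  # renormalize onto the simplex
        avg += w
    return avg / T                    # averaged iterate

# Illustrative use: minimize E[<w, c + noise>] over the simplex;
# the optimum puts all mass on the smallest coordinate of c.
rng = np.random.default_rng(0)
c = np.array([0.9, 0.5, 0.1, 0.7])
oracle = lambda w: c + 0.1 * rng.standard_normal(c.size)
w_bar = stochastic_mirror_descent(oracle, d=4, T=2000, eta=0.1)
```

Each loop iteration uses exactly one stochastic gradient, so processing each data point once corresponds to the single pass over the data mentioned in the abstract.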
 
You can view the rest of the seminar schedule on our web site.
_______________________________________________
Or-seminar mailing list
 <Or-seminar@technion.ac.il>
 <http://iemm1.technion.ac.il/mailman/listinfo/or-seminar>
 
---------------------------------------------------------
Technion Math. Net (TECHMATH)
Editor: Michael Cwikel   <techm@math.technion.ac.il> 
Announcement from:  <levinas@tx.technion.ac.il>