Jeffrey Negrea, University of Chicago
Assumptions on data are used to develop prediction methods with optimistic performance guarantees. Even when these assumptions fail to hold, we often believe that if our models are “nearly correct”, then our methods will perform nearly as well as those optimistic guarantees predict. How can we use models that we know to be wrong, but expect to be nearly correct, in a way that is robust and reliable? To provide robustness to the failure of our models, we must quantify the degree to which our simplifying models fail to explain observed data, and develop prediction methods that adapt to the degree of this failure.
In this seminar, I will discuss my work on the canonical problem of sequential prediction with expert advice, i.e., combining predictions from a large number of models or experts. We define a continuous spectrum of relaxations of the IID assumption, with IID data at one extreme and adversarial data at the other. We develop an online learning method that adapts to the level of failure of the IID assumption. We quantify the difficulty of prediction with expert advice in all scenarios along the spectrum we introduce, demonstrate that the prevailing methods do not adapt to this spectrum, and present new methods that are adaptively minimax optimal. More broadly, this work shows that it is possible to develop methods that are both adaptive and robust: they realize the benefits of the IID assumption when it holds, without ever compromising performance when the IID assumption fails, and without having to know the degree to which the IID assumption fails in advance.
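As background for the setting described above, the sketch below implements the canonical exponential-weights (Hedge) baseline for prediction with expert advice. This is not the adaptive method developed in the seminar's papers; it is the standard worst-case (adversarial) algorithm that such adaptive methods are measured against, and the function name, learning-rate choice, and loss scaling in [0, 1] are illustrative assumptions.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential-weights (Hedge) forecaster -- a standard baseline,
    not the adaptive method from the seminar.

    loss_matrix: (T, K) array of per-round losses in [0, 1] for K experts.
    eta: learning rate.
    Returns (cumulative learner loss, cumulative loss of the best expert).
    """
    T, K = loss_matrix.shape
    log_w = np.zeros(K)                 # log-weights, for numerical stability
    learner_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                    # current distribution over experts
        learner_loss += p @ loss_matrix[t]   # expected loss this round
        log_w -= eta * loss_matrix[t]        # multiplicative-weights update
    best_expert_loss = loss_matrix.sum(axis=0).min()
    return learner_loss, best_expert_loss
```

With losses in [0, 1], this forecaster's regret (learner loss minus best-expert loss) is at most ln(K)/eta + eta*T/8 against any sequence, including adversarial ones; the seminar's contribution concerns methods that improve on such worst-case guarantees when the data are closer to IID, without giving them up otherwise.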
This seminar is based on the following two research papers:
Join via Zoom