IN THE WILD (NED):

A new learning and evaluation framework to further research towards versatile and robust ML systems

Robust intelligence in the wild requires learning systems that offer uninterrupted inference while continually and rapidly adapting to new data. Such ML systems must be flexible and general enough to handle the uncertain and imperfect conditions inherent to the real world. The machine learning community has organically broken this challenging task down into manageable subtasks such as supervised, few-shot, continual, and self-supervised learning, each posing distinctive challenges and leading to its own set of dedicated methods. Despite this progress, the restrictive and isolated nature of these settings has produced methods that excel in one setting but struggle to extend beyond it.

To foster research towards general ML systems capable of learning in a diverse set of scenarios, we introduce a unified learning and evaluation framework, In the Wild (NED). NED loosens the restrictive design decisions of past frameworks (e.g., the closed-world assumption) and imposes fewer restrictions on learning algorithms (e.g., predefined train and test phases). Learners can instead infer the experimental parameters themselves by optimizing for the best tradeoff between accuracy and compute. In NED, a learner faces a stream of data and must make sequential predictions while choosing how to update itself, adapting to data from novel categories, and dealing with changing data distributions, all while managing its total compute. The NED framework reveals several surprising results about current prominent methods. For example, prominent few-shot methods are drastically outperformed (by 30-40% accuracy) by simple baselines, and the self-supervised method MoCo struggles to generalize to novel classes, which motivates the need for a more general framework that carefully integrates the objectives of past frameworks.
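To make the evaluation protocol concrete, below is a minimal sketch of a NED-style sequential evaluation loop. The `Learner` interface, the `predict`/`update` method names, and the wall-clock compute accounting are illustrative assumptions for this sketch, not the framework's official API.

```python
# Minimal sketch of a NED-style streaming evaluation (hypothetical interface;
# `learner.predict` and `learner.update` are illustrative names, not the
# official NED API).
from dataclasses import dataclass
import time


@dataclass
class EvalLog:
    correct: int = 0
    total: int = 0
    compute_seconds: float = 0.0


def evaluate_in_the_wild(learner, stream, log=None):
    """Run a learner over a (possibly non-stationary) stream of (x, y) pairs.

    The learner must predict each example before seeing its label, the stream
    may contain classes it has never observed, and it is free to decide when
    and how to update itself. Time spent in predict/update is charged to its
    compute budget.
    """
    log = log or EvalLog()
    for x, y in stream:
        start = time.perf_counter()
        y_hat = learner.predict(x)      # inference is never interrupted
        learner.update(x, y)            # learner chooses whether/how to adapt
        log.compute_seconds += time.perf_counter() - start

        log.correct += int(y_hat == y)  # online accuracy over the whole stream
        log.total += 1
    return log
```

In this setup, online accuracy and compute are measured over the same stream, so a learner that updates aggressively pays for it in its compute budget, mirroring the accuracy-compute tradeoff described above.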

In the Wild: From ML Models to Pragmatic ML Systems

Aaron Walsman

CONTACT

GitHub
Email: mcw244 at cs.washington.edu