Multiscale Modeling for Control

Abstract

A long-standing goal of AI research is to build or learn representations that enable generalization in real-world sequential decision-making settings. Object-level representations that enable abstract reasoning in visual environments are a good candidate for this goal. Indeed, RL agents equipped with this inductive bias are able to generalize to tasks that involve a different arrangement of objects than seen at training time. Current object-level representation methods are generally based on a single level of granularity. Yet real-world environments often involve multiple levels of granularity, and the right level for the downstream task might not be known when the representation is built. For example, a robotic system interacting with the world 1) has a hierarchy of constituent parts (e.g. actuators and joints that come together to form parts of the system) and 2) interacts with hierarchical structures present in the real world (e.g. it needs to understand whole objects while also being able to deconstruct them for a task, such as opening a bottle or constructing an object from many smaller parts). In this project we propose to investigate whether an object-level representation method that explicitly models whole-part relations at different scales can facilitate generalization in real-world dynamical control settings.

Researchers

Arnaud Fickinger, UC Berkeley

Brandon Amos, Meta AI

Stuart Russell, UC Berkeley