Stuart Russell

Multiscale Modeling for Control

Abstract

A long-standing goal of AI research is to build or learn representations that lead to generalization in real-world sequential decision settings. An object-level representation that enables abstract reasoning in visual environments is a good candidate for meeting this goal. Indeed, RL agents equipped with this inductive bias are able to generalize to tasks that involve a different...
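
As a rough illustration of the object-level inductive bias the abstract refers to (a sketch under assumptions, not the authors' architecture), one common pattern is to encode an observation as a set of per-object feature vectors and aggregate them with a permutation-invariant pooling step, so the same policy applies when the number or identity of objects changes. All names here (ObjectCentricPolicy, obj_dim, n_actions) are illustrative.

    # Hypothetical object-centric policy: per-object encoding + permutation-invariant pooling.
    import torch
    import torch.nn as nn

    class ObjectCentricPolicy(nn.Module):
        def __init__(self, obj_dim: int, hidden: int, n_actions: int):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obj_dim, hidden), nn.ReLU())
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, objects: torch.Tensor) -> torch.Tensor:
            # objects: (batch, n_objects, obj_dim); n_objects may differ across tasks.
            per_object = self.encoder(objects)   # embed each object independently
            pooled = per_object.mean(dim=1)      # permutation-invariant aggregation
            return self.head(pooled)             # action logits

    policy = ObjectCentricPolicy(obj_dim=8, hidden=64, n_actions=4)
    logits = policy(torch.randn(2, 5, 8))  # 5 objects here; the same network handles 9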

Automatic Curriculum Generation and Emergent Complexity via Inter-agent Competition

Reinforcement Learning (RL) has been most successful when agents can collect extensive training experience in a simulated environment [1-4]. However, building simulated environments requires a great deal of manual effort, is error-prone, and the resulting environments are unlikely to cover the space of all real-world tasks. Inter-agent competition has...
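
To make the curriculum idea concrete, here is a minimal sketch (an assumption, not the authors' training procedure) of how inter-agent competition can generate increasing difficulty automatically: the opponent is sampled from snapshots of the learner's own past policies, so the task gets harder as the learner improves rather than being hand-designed. Env, policy, and update are hypothetical placeholders supplied by the caller.

    # Hypothetical self-play loop where past policy snapshots form the curriculum.
    import copy
    import random

    def self_play_curriculum(env, policy, update, iterations=1000, snapshot_every=50):
        opponent_pool = [copy.deepcopy(policy)]          # start against a copy of self
        for it in range(iterations):
            opponent = random.choice(opponent_pool)      # past selves define the curriculum
            trajectory = env.rollout(policy, opponent)   # one competitive episode
            update(policy, trajectory)                   # any RL update (e.g., a policy-gradient step)
            if (it + 1) % snapshot_every == 0:
                opponent_pool.append(copy.deepcopy(policy))  # harder opponents over time
        return policy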

Using Deep Reinforcement Learning to Generalize Search in Games

Search methods have been instrumental in computing superhuman strategies for large-scale games [1,2,3]. However, existing search techniques are tabular and can therefore have trouble searching far into the future. This is a particular problem in games with high stochasticity and/or imperfect information. For example, in Hanabi, which the AI community considers an interesting research problem [4], existing search techniques are only able to search one move ahead; even searching two moves ahead is considered intractable for such techniques. Since real-world situations are...
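
To illustrate why depth is so costly here (a simplified sketch under assumptions, not the search algorithm referenced above), one-ply search under imperfect information typically samples hidden states consistent with the public observation, simulates a single move for each legal action, and scores it with a value estimate; the work already scales with samples times actions, and each additional ply multiplies it again. sample_hidden_state, simulate, and value are hypothetical placeholders.

    # Hypothetical one-move-ahead search over sampled hidden states.
    def one_ply_search(public_obs, legal_actions, sample_hidden_state, simulate, value,
                       n_samples=100):
        scores = {a: 0.0 for a in legal_actions}
        for _ in range(n_samples):
            hidden = sample_hidden_state(public_obs)   # a world consistent with what we observe
            for a in legal_actions:
                next_state = simulate(hidden, a)       # exactly one move of lookahead
                scores[a] += value(next_state)         # e.g., a blueprint policy's value estimate
        # Searching a second move ahead multiplies this cost by |actions| x samples again,
        # which is why deeper search is described as intractable in this setting.
        return max(scores, key=lambda a: scores[a] / n_samples)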