Project Goals: The goal of this project is to develop an algorithm for iteratively constructing recursive hierarchies of options. The hypothesis is that such a method could achieve an exponential improvement in learning efficiency over flat reinforcement learning policies by exploring with high-level option primitives in addition to low-level actions. Our proposed algorithm works similarly to tabulation methods in dynamic programming, which iteratively fill a repertoire of solutions to subproblems that are then reused for more complex problems.
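The tabulation analogy can be illustrated with a standard bottom-up dynamic programming example. This is a minimal sketch of the general pattern the paragraph refers to (here, the classic rod-cutting problem), not the project's algorithm itself: a table of subsolutions is filled in order of problem size, and each larger problem reuses the stored subsolutions, much as a repertoire of learned options would be reused for more complex tasks.

```python
def rod_cutting(prices, n):
    """Max revenue from cutting a rod of length n, where prices[i] is the
    price of a piece of length i + 1. Illustrates tabulation: the table
    best[] is filled bottom-up and its entries are reused."""
    best = [0] * (n + 1)  # best[k]: max revenue achievable for length k
    for length in range(1, n + 1):
        for cut in range(1, length + 1):
            # Reuse the already-computed subsolution best[length - cut].
            best[length] = max(best[length],
                               prices[cut - 1] + best[length - cut])
    return best[n]

# Classic instance: optimal revenue for a rod of length 8 is 22 (cut 2 + 6).
print(rod_cutting([1, 5, 8, 9, 10, 17, 17, 20], 8))  # -> 22
```

The point of the analogy is the bottom-up ordering: no subproblem is solved more than once, and every solution at one level becomes a primitive for the level above it.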
Researchers
- Michael Chang, UC Berkeley, http://mbchang.github.io/
- Kelvin Xu, UC Berkeley, https://kelvinxu.github.io/
- Igor Mordatch, Google, https://scholar.google.com/citations?user=Vzr1RukAAAAJ&hl=en
- Glen Berseth, UC Berkeley, https://people.eecs.berkeley.edu/~gberseth/
- Natasha Jaques, Google, https://www.media.mit.edu/people/jaquesn/overview/
- Sergey Levine, UC Berkeley/Google, https://people.eecs.berkeley.edu/~svlevine/