Recovery RL: Offline Reinforcement Learning with Safe Online Adaptation

Safety remains a central obstacle preventing widespread use of RL in the real world: learning new tasks in uncertain environments requires extensive exploration, but safety requires limiting exploration. We propose Recovery RL, an algorithm which navigates this tradeoff by (1) leveraging offline data to learn about constraint-violating zones before policy learning and (2) separating the goals of improving task performance and constraint satisfaction across two policies: a task policy that only optimizes the task reward and a recovery policy that guides the agent to safety when constraint violation is likely. We evaluate Recovery RL on 6 simulation domains, including two contact-rich manipulation tasks and an image-based navigation task, and on an image-based obstacle avoidance task on a physical robot. We compare Recovery RL to 5 prior safe RL methods, which jointly optimize for task performance and safety via constrained optimization or reward shaping, and find that Recovery RL outperforms the next best prior method across all domains. Results suggest that Recovery RL trades off constraint violations and task successes 2 to 80 times more efficiently in simulation domains and 3 times more efficiently in physical experiments. See https://sites.google.com/berkeley.edu/recovery-rl/ for videos and supplementary material.
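
The decision rule at the heart of this two-policy setup can be summarized as: propose an action from the task policy, and hand control to the recovery policy whenever a learned safety critic judges that action too risky. The sketch below illustrates this idea in Python; it is a minimal, hypothetical rendering rather than the authors' implementation, and it assumes a pre-trained safety critic q_risk(state, action) estimating the probability of future constraint violation, a task policy, a recovery policy, and a risk threshold eps_risk (all names are illustrative).

```python
import numpy as np


def select_action(state, task_policy, recovery_policy, q_risk, eps_risk=0.3):
    """Recovery-RL-style action filter (illustrative sketch).

    Proposes an action from the task policy; if the safety critic predicts
    too high a probability of future constraint violation, defers to the
    recovery policy instead.
    """
    proposed = task_policy(state)       # action optimizing task reward only
    risk = q_risk(state, proposed)      # estimated prob. of future constraint violation
    if risk <= eps_risk:
        return proposed                 # safe enough: execute the task action
    return recovery_policy(state)       # otherwise steer the agent back toward safety


# Toy usage with stand-in components (all hypothetical):
if __name__ == "__main__":
    task_policy = lambda s: np.array([1.0, 0.0])       # e.g., drive forward
    recovery_policy = lambda s: np.array([-1.0, 0.0])  # e.g., back away
    q_risk = lambda s, a: float(np.clip(s[0] + 0.1 * a[0], 0.0, 1.0))
    state = np.array([0.5, 0.2])
    print(select_action(state, task_policy, recovery_policy, q_risk))
```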

Updates

August 27, 2021 

Researchers

Overview

See the project website, https://sites.google.com/berkeley.edu/recovery-rl/, for an overview, a video, and links to the paper. We will also open-source the code soon. This work is currently under review at ICRA/RA-L 2021.

In future work we are pursuing (1) methods for task-agnostic safe exploration that collect broadly useful data indicating the structure of constraints in the environment and (2) algorithms which can modulate their risk online based on the changing availability of human supervisors to rectify the consequences of the agent's actions.

Links