Adding Safety and Robustness to Learning for Robots by Learning on Robots

Safety and robustness of robotic systems are crucial for deploying robots in the real world. Machine learning has emerged as a promising tool for enabling robots to perform complex tasks and operate under uncertainties in dynamics and the environment. However, the learning techniques used often do not take safety into account, which can damage the robot or its environment and hinders the deployment of learning-based methods. In contrast, control theory offers several methods for reasoning about the safety of dynamical systems, such as model-predictive control, Hamilton-Jacobi (HJ) reachability, control barrier functions, and Lyapunov-based safety methods. We aim to develop new frameworks that ensure or encourage the safety of robot learning by combining machine learning algorithms with such control-theoretic tools.
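To make one of these tools concrete: in its standard infinite-horizon formulation (the notation below is conventional, not taken from this project), HJ reachability encodes safety in a value function whose superlevel set is the safe set,

    V(x) \;=\; \sup_{u(\cdot)} \, \inf_{t \ge 0} \, g\big(x(t)\big), \qquad \mathcal{S} \;=\; \{\, x : V(x) \ge 0 \,\},

where g(x) \ge 0 encodes the state constraints (e.g., signed distance to obstacles) and x(t) evolves under the system dynamics driven by the control u. Inside \mathcal{S} the robot may act freely; near the boundary, the maximizing control can always keep it safe.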

Researchers

  • Chia-Yin Shih, UC Berkeley
  • Laurent El Ghaoui, UC Berkeley
  • Akshara Rai, Facebook AI Research
  • Franziska Meier, Facebook AI Research

Overview

Our recently published work, A Framework for Online Updates to Safe Sets, builds on HJ reachability and proposes a safety framework that lets a robot learn about its dynamics and update its safe set online. The framework can be combined with learning techniques such as model-based RL to enable safe learning under dynamics uncertainty.

While this work addresses uncertainty in the dynamics, we are also interested in ensuring robot safety under uncertainty from the environment, such as obstacle location or size. We are exploring how to generate safe sets for high-dimensional systems quickly, given new observations or a new environment; this may involve robustly transferring knowledge about safety using machine learning together with tools from control theory. We are developing methods that can quickly produce a safe set for a robot in a new setting from prior experience in other environments. In addition, we plan to explore theoretical guarantees on the safety of our proposed approach under specific conditions.
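To illustrate the general flavor of such a framework, the sketch below shows a least-restrictive safety filter that defers to a learned policy inside a (buffered) safe set, falls back to a backup controller near its boundary, and grows its safety margin online when the observed dynamics deviate from the model. This is a minimal toy, not the method from the paper; the names (SafetyFilter, value_fn, safe_ctrl) and the scalar-margin update rule are our own.

    import numpy as np

    class SafetyFilter:
        """Least-restrictive safety filter with a crude online margin update."""

        def __init__(self, model, value_fn, safe_ctrl, margin=0.02):
            self.model = model          # one-step dynamics model: x_next = model(x, u)
            self.value_fn = value_fn    # V(x) >= 0 inside the nominal safe set
            self.safe_ctrl = safe_ctrl  # backup law that steers back toward safety
            self.margin = margin        # buffer on V, grown from observed model error

        def act(self, x, policy):
            # Defer to the learned policy only if the model predicts the next
            # state stays inside the buffered safe set; otherwise override.
            u = policy(x)
            if self.value_fn(self.model(x, u)) > self.margin:
                return u
            return self.safe_ctrl(x)

        def update(self, x_pred, x_true):
            # Grow the buffer when the model under-predicts how far V can drop
            # in one step (a stand-in for recomputing the safe set with
            # updated dynamics bounds).
            self.margin = max(self.margin,
                              self.value_fn(x_pred) - self.value_fn(x_true))

    # Toy 1-D demo: keep |x| <= 1 under single-integrator dynamics x_next = x + u*dt.
    dt = 0.1
    model = lambda x, u: x + u * dt
    value_fn = lambda x: 1.0 - abs(x)              # signed distance to the boundary
    safe_ctrl = lambda x: -np.sign(x)              # steer back toward the origin
    policy = lambda x: np.random.uniform(-1, 1)    # exploratory, safety-agnostic policy

    filt = SafetyFilter(model, value_fn, safe_ctrl)
    x = 0.0
    for _ in range(200):
        u = filt.act(x, policy)
        x_pred = model(x, u)                         # model's prediction
        x = x_pred + np.random.uniform(-0.02, 0.02)  # true step with unmodeled noise
        filt.update(x_pred, x)
    assert abs(x) <= 1.0                             # the filter kept the state safe

In the actual framework, the online update modifies the safe set itself (e.g., by re-solving the reachability problem with updated dynamics bounds) rather than a scalar margin; the scalar buffer here only keeps the example short.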