Legged systems have the potential to navigate a wide variety of difficult terrain, such as cluttered indoor environments, as illustrated in the figure above. Enabling legged robots to autonomously step on, over, and around obstacles makes them better suited for real-world mobility across a range of applications. This is challenging: it requires going from non-trivial observation spaces, such as RGB-D images or LiDAR point clouds, to high-performance low-level feedback policies that achieve precise foot placements while maintaining balance and guaranteeing safety.
- Ayush Agrawal, UC Berkeley, https://sites.google.com/view/ayushagrawal
- Akshara Rai, Facebook AI Research, email@example.com
- Koushil Sreenath, UC Berkeley, https://hybrid-robotics.berkeley.edu/koushil/
We propose a framework that combines model-based optimal control for low-level feedback with a high-level, learning-based policy that takes visual sensory data as input and outputs an optimal footstep sequence.
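To make the two-level structure concrete, here is a minimal, purely illustrative Python sketch of such a hierarchy: a high-level planner maps a terrain observation to a footstep sequence, and a low-level controller tracks each foothold target. All function names, the 1-D height map, and the proportional tracker standing in for model-based optimal control are our own assumptions, not the authors' implementation.

```python
# Hypothetical two-level pipeline: high-level footstep planning from a
# terrain observation, low-level tracking of each foot-placement target.
# Numbers and names are illustrative only.

def plan_footsteps(height_map, start_x, goal_x, nominal_step=0.25):
    """High-level policy stand-in: greedily pick footholds toward the
    goal, shortening a step when the terrain ahead is too rough."""
    steps, x = [], start_x
    while x < goal_x:
        step = nominal_step
        cell = min(int((x + step) * len(height_map)), len(height_map) - 1)
        if abs(height_map[cell]) > 0.1:   # obstacle ahead: shorter step
            step *= 0.5
        x = min(x + step, goal_x)
        steps.append(x)
    return steps

def track_footstep(current_x, target_x, gain=0.5, iters=20):
    """Low-level controller stand-in: simple proportional tracking of a
    foothold target (a placeholder for a model-based optimal controller)."""
    for _ in range(iters):
        current_x += gain * (target_x - current_x)
    return current_x

# Usage: a 1-D "terrain" with a bump part-way along the path.
height_map = [0.0] * 10
height_map[5] = 0.2
foot_x = 0.0
for target in plan_footsteps(height_map, 0.0, 1.0):
    foot_x = track_footstep(foot_x, target)
print(round(foot_x, 3))
```

In the paper's actual setting, the planner consumes RGB-D or LiDAR observations rather than a toy height map, and the low-level layer solves an optimal control problem that also enforces balance and safety constraints.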