
Commons Project Description

Uncertainty Aware Machine Learning For Model Based Planning and Control

Safety is a fundamental aspect of many robotics problems. In the autonomous navigation setting, recent fatalities involving self-driving cars have brought significant public attention to the need for more robust systems that can reason intelligently about what they know and what they do not know. We propose to design robust learning systems in the context of indoor ground robot navigation.

Specifically, we will build on Learning-Based Waypoint Navigation (LB-WayPtNav), a framework that combines learning-based perception with model-based control for navigation in a priori unknown indoor environments. LB-WayPtNav uses a learned Convolutional Neural Network (CNN) that takes as input the robot’s desired goal coordinates (e.g., 10 meters forward, 2 meters to the left) and a monocular RGB image from the robot’s perspective, and predicts a waypoint, or desired next state, for the robot. A model-based planning and control pipeline then plans and executes a trajectory to this waypoint. We propose to:

1. Model the epistemic and aleatoric uncertainty of the LB-WayPtNav perception module (see the first sketch after this list)

2. Utilize tools from robust and stochastic control to incorporate waypoint-prediction uncertainty into the downstream planning and control strategies (see the second sketch after this list)

3. Evaluate how reasoning about uncertainty affects navigation performance, and improve the learning system over time by actively collecting uncertainty-reducing samples
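
To make item 1 concrete, the sketch below shows one standard way to capture both kinds of uncertainty in a waypoint predictor: a heteroscedastic output head that predicts a per-dimension aleatoric variance alongside the mean waypoint, and Monte Carlo dropout at test time to approximate epistemic uncertainty. This is a minimal illustration written in PyTorch; the architecture, dimensions, and all names are placeholders, not the LB-WayPtNav implementation.

```python
# Minimal sketch of uncertainty-aware waypoint prediction (illustrative only).
import torch
import torch.nn as nn

class WaypointPredictor(nn.Module):
    def __init__(self, goal_dim=2, waypoint_dim=3, p_drop=0.2):
        super().__init__()
        # Small stand-in CNN encoder for the monocular RGB image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # The head consumes image features plus the relative goal coordinates
        # and outputs a mean and a log-variance for each waypoint dimension.
        self.head = nn.Sequential(
            nn.Linear(64 + goal_dim, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, 2 * waypoint_dim),
        )

    def forward(self, image, goal):
        z = torch.cat([self.encoder(image), goal], dim=-1)
        mean, log_var = self.head(z).chunk(2, dim=-1)
        return mean, log_var  # log_var captures aleatoric (data) uncertainty

def gaussian_nll(mean, log_var, target):
    # Heteroscedastic regression loss: large predicted variance down-weights
    # the squared error but is itself penalized by the log_var term.
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

@torch.no_grad()
def mc_dropout_predict(model, image, goal, n_samples=20):
    # Keep dropout active at test time; the spread of the sampled means
    # approximates epistemic (model) uncertainty.
    model.train()
    means, ale_vars = [], []
    for _ in range(n_samples):
        mean, log_var = model(image, goal)
        means.append(mean)
        ale_vars.append(log_var.exp())
    means = torch.stack(means)
    return means.mean(dim=0), means.var(dim=0), torch.stack(ale_vars).mean(dim=0)
```

Deep ensembles or a Bayesian last layer could be substituted for MC dropout; the interface exposed to the planner (a predicted waypoint plus epistemic and aleatoric variances) stays the same.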
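
For item 2, the predicted variances must then be consumed by the downstream planner. One simple, commonly used heuristic, shown below purely as an illustration and not necessarily the approach this project will take, is to inflate the clearance the planner requires around obstacles in proportion to the total predictive standard deviation of the waypoint.

```python
import numpy as np

def risk_aware_margin(epistemic_var, aleatoric_var, kappa=2.0, base_margin=0.2):
    # Illustrative heuristic: the clearance (in meters) demanded by the
    # planner's collision checker grows with the predictive uncertainty
    # over the (x, y) components of the predicted waypoint.
    total_std = np.sqrt(np.asarray(epistemic_var)[:2] + np.asarray(aleatoric_var)[:2]).max()
    return base_margin + kappa * float(total_std)
```

Chance-constrained and minimax formulations from stochastic and robust control offer more principled alternatives to such a fixed-multiplier rule; comparing these options is part of the proposed work.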

Researchers

Overview

In our current line of work we are experimenting with:

This is all novel work, and as such the results are not yet ready to be discussed or published publicly.

In previous work we explored “Visual Navigation Among Humans with Optimal Control as a Supervisor” (see the Links section below). The abstract of this work follows:

Real world navigation requires robots to operate in unfamiliar, dynamic environments, sharing spaces with humans. Navigating around humans is especially difficult because it requires predicting their future motion, which can be quite challenging. We propose a novel framework for navigation around humans which combines learning-based perception with model-based optimal control. Specifically, we train a Convolutional Neural Network (CNN)-based perception module which maps the robot's visual inputs to a waypoint, or next desired state. This waypoint is then input into planning and control modules which convey the robot safely and efficiently to the goal. To train the CNN we contribute a photo-realistic benchmarking dataset for autonomous robot navigation in the presence of humans. The CNN is trained using supervised learning on images rendered from our photo-realistic dataset. The proposed framework learns to anticipate and react to people's motion based only on a monocular RGB image, without explicitly predicting future human motion. Our method generalizes well to unseen buildings and humans in both simulation and real world environments. Furthermore, our experiments demonstrate that combining model-based control and learning leads to better and more data-efficient navigational behaviors as compared to a purely learning based approach. Videos describing our approach and experiments are available on the project website.

Links