Abstract
When navigating complex multi-agent scenarios, humans reason not only about the uncertainty of their own perception, but also about the effect of their actions on that perception. For example, a human driver at a blind intersection may inch forward to improve their visibility of oncoming traffic before deciding whether to proceed. This sort of reasoning is a natural human behavior critical to safe and efficient decision making; however, it is absent from most real-world autonomous systems. While it may be possible to engineer human-like behavior for individual scenarios involving perception uncertainty, it is highly desirable to develop a principled solution for planning under perception uncertainty that can generalize to novel settings. Our goal is to develop a novel active visual planner for multi-agent planning in environments with partial observability.
Researchers
- Charles Packer, UC Berkeley
- Xin Wang, Microsoft
- Joseph Gonzalez, UC Berkeley
Acknowledgements
This project is based in part upon work sponsored by Microsoft.