Rearranging and manipulating deformable objects such as cables, fabrics, and bags is a long-standing challenge in robotic manipulation. The complex dynamics and high-dimensional configuration spaces of deformables, compared to rigid objects, make manipulation difficult not only for multi-step planning, but even for goal specification. Goals cannot be as easily specified as rigid object poses, and may involve complex relative spatial relations such as "place the item inside the bag". In this work, we develop a suite of simulated benchmarks with 1D, 2D, and 3D deformable structures, including tasks that involve image-based goal-conditioning and multi-step deformable manipulation. We propose embedding goal-conditioning into Transporter Networks, a recently proposed model architecture for robotic manipulation that uses learned template matching to infer displacements that can represent pick and place actions. We demonstrate that goal-conditioned Transporter Networks enable agents to manipulate deformable structures into flexibly specified configurations without test-time visual anchors for target locations. We also significantly extend prior results using Transporter Networks for manipulating deformable objects by testing on tasks with 2D and 3D deformables.
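The core inference step described above — using template matching to turn a goal image into a pick point and a place point — can be illustrated with a toy sketch. This is a hypothetical, heavily simplified stand-in (raw 2D arrays in place of the learned FCN feature maps a real Transporter Network produces, and hand-coded heuristics in place of trained attention), meant only to convey the cross-correlation idea; the function names are our own:

```python
import numpy as np

def cross_correlate(image, template):
    """Brute-force valid-mode 2D cross-correlation."""
    H, W = image.shape
    h, w = template.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * template)
    return out

def goal_conditioned_pick_place(obs, goal, crop=3):
    """Toy goal-conditioned pick-and-place (NOT the trained model).

    obs, goal: 2D arrays standing in for feature maps of the current
    and goal images. Pick attends to the largest current/goal mismatch;
    place is found by cross-correlating a crop around the pick point
    against the goal features -- the template-matching step that, in
    Transporter Networks, is done with learned deep features.
    """
    # Pick: pixel where the observation disagrees most with the goal.
    pick = np.unravel_index(np.argmax(np.abs(obs - goal)), obs.shape)

    # Template: a small patch of the observation centered on the pick.
    h = crop // 2
    r, c = pick
    template = np.pad(obs, h)[r:r + crop, c:c + crop]

    # Place: goal-image location that best matches the picked patch.
    scores = cross_correlate(np.pad(goal, h), template)
    place = np.unravel_index(np.argmax(scores), scores.shape)
    return pick, place

# Toy example: an "object" at (2, 2) should move to (5, 6) in the goal.
obs = np.zeros((8, 8)); obs[2, 2] = 1.0
goal = np.zeros((8, 8)); goal[5, 6] = 1.0
pick, place = goal_conditioned_pick_place(obs, goal)
```

Here the argmax over the cross-correlation map recovers the displacement: the patch around the object in `obs` correlates most strongly at the object's location in `goal`, yielding pick (2, 2) and place (5, 6).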
Overview
See the project website at https://berkeleyautomation.github.io/bags/ for an overview video, links to the data, and additional videos.
Researchers
- Daniel Seita, UC Berkeley, https://people.eecs.berkeley.edu/~seita/
- Pete Florence, Google, http://www.peteflorence.com/
- Jonathan Tompson, Google, https://jonathantompson.github.io/
- Erwin Coumans, Google, https://twitter.com/erwincoumans
- Vikas Sindhwani, Google, https://vikas.sindhwani.org/
- Ken Goldberg, UC Berkeley, https://goldberg.berkeley.edu/
- Andy Zeng, Google, https://andyzeng.github.io/
Links
- Link to Project Page: https://berkeleyautomation.github.io/bags/
- Contact email: seita@berkeley.edu
- arXiv: https://arxiv.org/abs/2012.03385