Pieter Abbeel

Smart Practice: Learning to Practice Skills from Demonstrations

To solve complex tasks, intelligent agents must learn to perform a variety of skills. These skills abstract the key information agents need in order to act in real-world environments with high-dimensional state and action spaces. The goal of the Smart Practice project is to develop learning algorithms that...

Autonomous Skill Discovery Through Self-Supervised Exploration

Unsupervised Skill Discovery

Abstract

We introduce Contrastive Intrinsic Control (CIC), an algorithm for unsupervised skill discovery that maximizes the mutual information between skills and state transitions. In contrast to most prior approaches, CIC uses a decomposition of the mutual information that explicitly incentivizes diverse behaviors by...
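To make the objective concrete, here is a minimal sketch of the kind of contrastive (InfoNCE-style) lower bound on mutual information that such a method could optimize between skill embeddings and state-transition embeddings. The function name, the cosine-similarity scoring, and the temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce_lower_bound(skill_emb, transition_emb, temperature=0.5):
    """InfoNCE-style lower bound on the mutual information between
    skills and transitions. Row i of each array is a matched pair;
    all other rows serve as negatives."""
    # Cosine-similarity logits between every skill and every transition.
    s = skill_emb / np.linalg.norm(skill_emb, axis=1, keepdims=True)
    t = transition_emb / np.linalg.norm(transition_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature
    # Log-softmax over transitions; the diagonal holds the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # InfoNCE bound: mean positive log-probability plus log(batch size).
    return np.diag(log_probs).mean() + np.log(len(skill_emb))
```

Maximizing this bound pushes each skill's transitions to be distinguishable from those of other skills, which is one way an algorithm can incentivize behavioral diversity. The bound is capped at log(batch size), so matched pairs score strictly higher than mismatched ones.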

Knowledge Transferable Bayesian Optimization

Abstract

The need for automated optimization has become important in many domains, including hyper-parameter tuning in machine learning and process optimization in the manufacturing industry. In practice, one frequently has to solve similar optimization problems for a specific customized setting, e.g., manufacturing robots optimized for a new customer environment or hyper-parameter optimization for a new classification task....

Masked Trajectory Modeling for Embodied Intelligence

Abstract

We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct it conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns versatile networks that can take on different roles or capabilities simply by choosing appropriate masks at inference time. For example, the same MTM network can be used as a forward dynamics model, inverse dynamics model, or even an...
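The mask-selects-the-role idea above can be sketched with a few mask constructors over a (state, action) token grid. These helper names and the two-tokens-per-timestep layout are illustrative assumptions; the point is only that the same trained network sees different prediction problems depending on which entries are hidden.

```python
import numpy as np

def random_mask(traj_len, keep_prob=0.3, rng=None):
    """Randomized training mask over a (timestep, token) grid with two
    tokens per step (column 0: state, column 1: action).
    True = visible to the model, False = to be reconstructed."""
    rng = rng or np.random.default_rng()
    return rng.random((traj_len, 2)) < keep_prob

def forward_dynamics_mask(traj_len):
    """Show all states and actions, hide the final state: the model
    must predict s_T from the preceding states and actions."""
    mask = np.ones((traj_len, 2), dtype=bool)
    mask[-1, 0] = False  # hide the last state
    return mask

def inverse_dynamics_mask(traj_len):
    """Show all states, hide all actions: the model must infer which
    actions connect consecutive states."""
    mask = np.ones((traj_len, 2), dtype=bool)
    mask[:, 1] = False  # hide every action
    return mask
```

At inference time one would pass the chosen mask alongside the trajectory tokens, so switching between forward and inverse dynamics requires no retraining, only a different mask.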

Unsupervised Environment Design for Multi-task Reinforcement Learning

We are interested in designing a method to improve learning efficiency and generalization in a single-agent multi-task reinforcement learning (RL) setting by leveraging unsupervised environment design techniques.

Researchers Yuqing Du, UC Berkeley,...

Task-Specific World Models for Robotic Manipulation

In this project, we aim to develop methods that learn world models enabling agents to solve difficult real-world robotics tasks. Specifically, we focus on real-world tasks, such as cable manipulation, that require modeling very fine-grained scene details to make accurate future predictions. To do this, we will explore models that localize spatial regions of interest in images and construct patch- or region-based selection methods to acquire fine-grained detailed information.
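As a minimal sketch of patch-based region selection, the snippet below splits an image into non-overlapping patches and keeps the ones scoring highest under a simple detail heuristic. The function name and the use of intensity variance as the "interest" score are assumptions for illustration; a learned model would replace the heuristic.

```python
import numpy as np

def topk_patches(image, patch=8, k=4):
    """Split a (H, W) grayscale image into non-overlapping patches and
    return (row, col) grid coordinates of the k patches with the highest
    intensity variance, a crude proxy for fine-grained detail such as a
    cable's contour against a uniform background."""
    H, W = image.shape
    gh, gw = H // patch, W // patch
    # Reshape into a (gh, gw, patch*patch) grid of flattened patches.
    grid = image[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
    patches = grid.transpose(0, 2, 1, 3).reshape(gh, gw, -1)
    scores = patches.var(axis=-1)
    # Indices of the k highest-variance patches, best first.
    flat = scores.ravel().argsort()[::-1][:k]
    return [(int(i) // gw, int(i) % gw) for i in flat]
```

Feeding only the selected patches to a world model concentrates capacity on the regions that matter for prediction, rather than spending it uniformly across the frame.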

Researchers Wilson Yan, UC Berkeley...