Masayoshi Tomizuka

Differentiable Optimization for Game Theoretic Formulations

Game theory is an effective tool for formulating interactions among agents and is used in many real-world challenges, including human-robot interaction scenarios such as self-driving. Recently, such tools have also found applications in machine learning [1,2] and reinforcement learning [3]. For example, virtual agents arise as the optimizer and the uncertainty in robust optimization, as the E-step and M-step of the EM algorithm [4,5], and as the model adaptation and policy update in model-based RL [6]. Typically, real-world problems like self-driving are represented as general-...
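As a minimal sketch of the underlying idea (not the project's actual formulation), the snippet below solves a toy two-player min-max game by unrolled gradient descent-ascent; because every solver step is differentiable, the equilibrium can be back-propagated through, so the game parameter theta can be learned end-to-end. All names and the payoff function are illustrative assumptions.

import torch

# Toy game:  min_x max_y  f(x, y; theta) = 0.5*x^2 - x*y + theta*y - 0.5*y^2
# Its equilibrium is x* = theta / 2, so gradients w.r.t. theta should flow
# through the unrolled solver.

def payoff(x, y, theta):
    return 0.5 * x**2 - x * y + theta * y - 0.5 * y**2

def solve_game(theta, steps=300, lr=0.05):
    x = torch.zeros(1, requires_grad=True)
    y = torch.zeros(1, requires_grad=True)
    for _ in range(steps):
        f = payoff(x, y, theta)
        gx, gy = torch.autograd.grad(f, (x, y), create_graph=True)
        x = x - lr * gx   # minimizing player descends on f
        y = y + lr * gy   # maximizing player ascends on f
    return x, y

theta = torch.tensor(1.0, requires_grad=True)
x_star, y_star = solve_game(theta)          # x_star is ~theta/2 = 0.5
loss = (x_star - 1.0).pow(2).sum()
loss.backward()                             # gradient flows through the game solver
print(x_star.item(), theta.grad.item())     # ~0.5 and ~-0.5

The same pattern extends to the settings mentioned above (robust optimization, EM, model-based RL), where one "player" in the optimization can be treated as a differentiable layer inside a larger learning pipeline.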

Multi-task Learning with Safe and Differentiable Policies

The capability to generalize to new tasks or environments is crucial for deploying autonomous agents like robots and self-driving vehicles at scale in the real world. This is extremely challenging and often requires the agent to perform state-specific reasoning, such as in model-based planning and control. Optimization-based meta-learning methods like MAML [1] have been shown to tackle multi-task adaptation problems, but the inner-loop optimization they contain makes them hard to train in an end-to-end fashion. Differentiable and end-to-end learning for planning [2] and...
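For reference, here is a minimal sketch of the MAML-style inner/outer loop referred to above, on a toy 1-D regression family; the model, task construction, and step sizes are illustrative assumptions, and create_graph=True is what makes the outer update differentiate through the inner adaptation step.

import torch

def model(x, w):
    return x * w                       # trivially simple "network": y = w * x

def task_loss(w, x, y):
    return ((model(x, w) - y) ** 2).mean()

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.01):
    meta_loss = 0.0
    for (x_tr, y_tr, x_val, y_val) in tasks:
        # inner loop: one adaptation step on the task's training split
        g = torch.autograd.grad(task_loss(w, x_tr, y_tr), w, create_graph=True)[0]
        w_adapted = w - inner_lr * g
        # outer objective: post-adaptation loss on the task's validation split
        meta_loss = meta_loss + task_loss(w_adapted, x_val, y_val)
    meta_grad = torch.autograd.grad(meta_loss, w)[0]
    # meta-update of the shared initialization (detach to start a fresh graph)
    return (w - outer_lr * meta_grad).detach().requires_grad_(True)

# toy task family: y = a * x with different slopes a
w = torch.tensor(0.0, requires_grad=True)
tasks = []
for a in (1.0, 2.0, 3.0):
    x = torch.randn(8)
    tasks.append((x, a * x, x, a * x))
for _ in range(100):
    w = maml_step(w, tasks)
print(w.item())

The nested torch.autograd.grad calls are exactly the inner-loop optimization that complicates end-to-end training in practice, which is what motivates the differentiable-planning alternatives discussed in this project.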