Modeling Interpersonal Multimodal Signals in Social Conversation

With the integration of VR/AR and robotics into society, the need for socially intelligent AI systems has become more pressing: people want systems that respond naturally to human interaction, and they strive to build more embodied telepresence. At the same time, 3D human pose estimation has reached a level of accuracy that lets us extract poses from in-the-wild datasets and study human behavior at scale, an analysis that was previously limited to constrained mocap datasets. Coupled with the demand for social AI, the time is ripe for investigating social signals in a data-driven manner.


Related Projects

Body2Hands: Learning to Infer 3D Hands from Conversational Gesture Body Dynamics
Evonne Ng, Hanbyul Joo, Shiry Ginosar, Trevor Darrell

We propose a novel learned deep prior of body motion for 3D hand shape synthesis and estimation in the domain of conversational gestures. Our model builds on the insight that body motion and hand gestures are strongly correlated in non-verbal communication. We formulate the learning of this prior as a prediction task: estimating 3D hand shape over time given body motion input alone. Trained on 3D pose estimates obtained from a large-scale dataset of internet videos, our hand prediction model produces convincing 3D hand gestures given only the 3D motion of the speaker's arms as input. We demonstrate the efficacy of our method on hand gesture synthesis from body motion input, and as a strong body prior for single-view image-based 3D hand pose estimation. Our method outperforms previous state-of-the-art approaches and generalizes beyond the monologue-based training data to multi-person conversations. [arxiv][project page]
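To make the formulation concrete, the interface of such a body-to-hands prior can be sketched as a function mapping a sequence of arm poses to a sequence of hand poses. The sketch below is purely illustrative: the dimensions, the `predict_hands` name, and the single linear layer standing in for the learned model are all assumptions, not the paper's architecture (which is a temporal network trained on paired body/hand data).

```python
import numpy as np

# Illustrative interface sketch; dimensions and weights are hypothetical.
ARM_DIM = 6 * 3        # e.g. 6 arm joints (shoulders, elbows, wrists) x 3D
HAND_DIM = 2 * 21 * 3  # e.g. two hands x 21 joints each x 3D

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(ARM_DIM, HAND_DIM))  # placeholder "learned" weights

def predict_hands(arm_motion: np.ndarray) -> np.ndarray:
    """Map a (T, ARM_DIM) arm-motion sequence to (T, HAND_DIM) hand poses.

    A real model would be a temporal network trained on paired data; here a
    single linear map stands in just to show the input/output shapes.
    """
    assert arm_motion.ndim == 2 and arm_motion.shape[1] == ARM_DIM
    return arm_motion @ W

# Example: 120 frames (~4 s at 30 fps) of arm motion in, hand poses out.
arms = rng.normal(size=(120, ARM_DIM))
hands = predict_hands(arms)
print(hands.shape)  # (120, 126)
```

The key point the sketch conveys is that the model never sees the hands at inference time: hand pose is predicted from arm motion alone, which is what lets it act as a prior for downstream hand estimation.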