Anna Deichler, Jim O'Regan, Teo Guichoux, David Johansson, Jonas Beskow
KTH Royal Institute of Technology
CVPR Humanoid Agents Workshop
Human motion generation has advanced rapidly in recent years, yet the critical problem of creating spatially grounded, context-aware gestures has been largely overlooked. Existing models typically specialize either in descriptive motion generation, such as locomotion and object interaction, or in isolated co-speech gesture synthesis aligned with utterance semantics. However, both lines of work often treat motion and environmental grounding separately, limiting advances toward embodied, communicative agents. To address this gap, our work introduces a multimodal dataset and framework for grounded gesture generation, combining two key resources: (1) a synthetic dataset of spatially grounded referential gestures, and (2) MM-Conv, a VR-based dataset capturing two-party dialogues. Together, they provide over 7.7 hours of synchronized motion, speech, and 3D scene information, standardized in the HumanML3D format. Our framework further connects to a physics-based simulator, enabling synthetic data generation and situated evaluation. By bridging gesture modeling and spatial grounding, our contribution establishes a foundation for advancing research in situated gesture generation and grounded multimodal interaction.
Situated referential communication — where speech, gesture, and gaze are coordinated to ground meaning in the surrounding environment — is a core aspect of human interaction. It is essential not only for resolving ambiguities and expressing communicative intent clearly in everyday conversation, but also for pedagogical instruction, the development of language skills, and overcoming language barriers. In recent years, there has been a surge of technologies for generating humanoid agents with believable, communicative behavior. However, the generation of spatially grounded, context-aware gestures for these agents remains underexplored.
To date, research on motion generation with deep generative models has largely focused on two separate problem domains. One is co-speech gesture generation, which aims to produce believable, semantically plausible gestures that match the verbal message conveyed in speech and/or text. These models are trained on large corpora of motion capture or video and conditioned on speech and/or text, typically with no spatial information beyond the position of the interlocutor (if present). The other is the class of so-called “text-to-motion” models, in which natural language prompts drive human motion. These models are trained on large and diverse datasets of human motion and can, in some cases, also incorporate semantic and geometric information about the scene, allowing instructions such as “walk over to the sofa and sit down.” However, these models have no notion of speech or gesture.
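As a rough illustration of this split, the sketch below contrasts the conditioning signals of the two model families. The interfaces, tensor shapes, and placeholder outputs are illustrative assumptions only and do not describe any specific published model.

```python
# Schematic contrast of the two model families' conditioning signals
# (hypothetical interfaces; outputs are placeholders, not real predictions).
import numpy as np

def cospeech_gesture_model(speech: np.ndarray, transcript: str) -> np.ndarray:
    """Co-speech gesture synthesis: conditioned on speech audio and/or text,
    with no scene information beyond (at most) the interlocutor's position."""
    num_frames, num_joints = 120, 22
    return np.zeros((num_frames, num_joints, 3))  # placeholder pose sequence

def text_to_motion_model(prompt: str, scene: dict) -> np.ndarray:
    """Text-to-motion: conditioned on a language prompt and, in some models,
    scene semantics/geometry ("walk over to the sofa and sit down"), but with
    no representation of speech or gesture."""
    num_frames, num_joints = 120, 22
    return np.zeros((num_frames, num_joints, 3))  # placeholder pose sequence
```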
To build humanoid agents that can function in real-life scenarios, we need models that integrate spatial information with communicative behavior, including speech and gesture. Progress is constrained by the lack of standardized datasets that combine motion, language, gaze, and 3D scene information in a way that supports both training and evaluation of situated behaviors. Existing datasets typically address gesture synthesis and environmental grounding in isolation, limiting advances in embodied, interactive AI systems.
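To make concrete what such a standardized, situated sample could look like, the sketch below shows one possible schema combining HumanML3D-style pose features with synchronized speech, gaze, and scene data. All field names, shapes, and the loader are hypothetical illustrations, not the released dataset's actual layout or API.

```python
"""Hypothetical schema for a situated multimodal sample; field names, shapes,
and the loading code are illustrative assumptions, not the dataset's API."""
from dataclasses import dataclass
import numpy as np

@dataclass
class SituatedSample:
    motion: np.ndarray        # (T, 263) HumanML3D-style pose features, ~20 fps
    audio: np.ndarray         # (S,) mono speech waveform, synchronized with motion
    transcript: str           # time-aligned utterance text
    gaze: np.ndarray          # (T, 3) per-frame gaze target in scene coordinates
    scene: dict               # 3D scene description, e.g. object ids -> bounding boxes
    referent_id: str | None   # object referred to by a pointing gesture, if any

def load_sample(path: str) -> SituatedSample:
    """Load one synchronized sample from an .npz archive (assumed layout)."""
    data = np.load(path, allow_pickle=True)
    return SituatedSample(
        motion=data["motion"],
        audio=data["audio"],
        transcript=str(data["transcript"]),
        gaze=data["gaze"],
        scene=data["scene"].item(),
        referent_id=data["referent_id"].item() if "referent_id" in data else None,
    )
```

A schema of this kind is one way to keep co-speech gesture cues (audio, transcript) and grounding cues (gaze, scene, referent) aligned frame by frame, so that a single model can be trained and evaluated on both.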