SoPo: Text-to-Motion Generation Using Semi-Online Preference Optimization

1 Department of Computer Science and Engineering, Southeast University, Nanjing, China
2 Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications
3 Singapore Management University

TL;DR: We propose SoPo, a semi-online preference optimization method that combines the strengths of online and offline direct preference optimization (DPO) to overcome their individual shortcomings, delivering higher-quality motion generation and better preference alignment.

Toy Example


Comparison of offline DPO, online DPO, and our SoPo on synthetic data. Offline DPO struggles to mine unpreferred motions with high generation probability, while online DPO is limited by biased sampling. Our SoPo combines dynamically mined unpreferred motions with preferred motions from the unbiased offline dataset, overcoming both limitations. Here, the blue region denotes the distribution of the generative model.


Abstract

Text-to-motion generation is essential for advancing the creative industry but often presents challenges in producing consistent, realistic motions. To address this, we focus on fine-tuning text-to-motion models to consistently favor high-quality, human-preferred motions, a critical yet largely unexplored problem. In this work, we theoretically investigate DPO under both online and offline settings and reveal their respective limitations: overfitting in offline DPO and biased sampling in online DPO. Building on these theoretical insights, we introduce Semi-online Preference Optimization (SoPo), a DPO-based method for training text-to-motion models on "semi-online" data pairs, each consisting of an unpreferred motion drawn from the online distribution and a preferred motion from the offline dataset. This method leverages both online and offline DPO, allowing each to compensate for the other's limitations. Extensive experiments demonstrate that SoPo outperforms other preference alignment methods, with MM-Dist improvements of 3.25% (vs. 0.76% for MoDiPO) on the MLD model and 2.91% (vs. 0.66% for MoDiPO) on the MDM model. Moreover, the MLD model fine-tuned with our SoPo surpasses the state-of-the-art model in terms of R-Precision and MM-Dist. Visualization results further demonstrate the efficacy of SoPo in preference alignment.
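To make the idea concrete, below is a minimal, hypothetical PyTorch-style sketch of one semi-online DPO update. The helpers `model.sample`, `model.log_prob`, and `matching_score` are placeholder assumptions, not part of any released code; real diffusion-based text-to-motion models would optimize a step-wise DPO surrogate rather than exact sequence log-likelihoods. The sketch is only meant to show how the unpreferred motion is mined online from the current model while the preferred motion comes from the offline dataset.

```python
import torch
import torch.nn.functional as F

def sopo_step(model, ref_model, text, preferred_motion, beta=0.1, num_samples=4):
    """One illustrative semi-online DPO update (sketch, not the official SoPo code).

    preferred_motion comes from the offline dataset (unbiased, high quality);
    the unpreferred motion is sampled online from the current model, so it
    reflects what the model actually generates with high probability.
    """
    # Online half: draw candidate motions from the current model and treat the
    # lowest-scoring one (hypothetical text-motion matching_score, returning a
    # scalar tensor) as the unpreferred motion.
    with torch.no_grad():
        candidates = [model.sample(text) for _ in range(num_samples)]
        scores = torch.stack([matching_score(text, m) for m in candidates])
        unpreferred_motion = candidates[scores.argmin().item()]

    # Offline half: the preferred motion is taken directly from the dataset.
    # Standard DPO objective on the resulting semi-online preference pair.
    logp_w = model.log_prob(preferred_motion, text)
    logp_l = model.log_prob(unpreferred_motion, text)
    ref_logp_w = ref_model.log_prob(preferred_motion, text)
    ref_logp_l = ref_model.log_prob(unpreferred_motion, text)

    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()
```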

Experiment Results


Quantitative results of preference alignment methods for text-to-motion generation on the HumanML3D test set.


Quantitative comparison of state-of-the-art text-to-motion generation on the HumanML3D test set.


Quantitative comparison of state-of-the-art text-to-motion generation on the KIT-ML test set.


Ablation studies on the threshold $\tau$ and the number of generated motions. A hedged sketch of how these two hyperparameters might enter the online mining step follows below.
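The sketch below reflects our assumption about these hyperparameters, not the paper's exact procedure: the threshold $\tau$ acts as a cutoff on the (hypothetical) text-motion matching score deciding which online candidates count as unpreferred, and the number of generated motions fixes the size of the candidate pool.

```python
def select_unpreferred(candidates, scores, tau):
    """Hypothetical selection rule (assumption): an online candidate counts as
    unpreferred only if its matching score falls below the threshold tau;
    len(candidates) is the number of generated motions."""
    return [m for m, s in zip(candidates, scores) if s < tau]

# Example: with 4 generated motions and tau = 0.5, two candidates qualify.
pool = select_unpreferred(["m1", "m2", "m3", "m4"], [0.9, 0.3, 0.7, 0.4], tau=0.5)
assert pool == ["m2", "m4"]
```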


Visual results on the HumanML3D dataset


Visual results on the HumanML3D dataset. We integrate our SoPo into MLD. Here, red text denotes descriptions inconsistent with the generated motion.




Visual results on the HumanML3D dataset. We integrate our SoPo into MDM and MLD, respectively. Here, red text denotes descriptions inconsistent with the generated motion.

Visualizations