Skeleton Motion Words for Unsupervised Skeleton-Based Temporal Action Segmentation

1 University of Bonn
2 University College London
3 Lamarr Institute for Machine Learning and Artificial Intelligence
Skeleton Motion Quantization (SMQ) takes a set of untrimmed skeleton sequences as input (top) and discovers actions consistent across all sequences (bottom). This is achieved by jointly segmenting and clustering all sequences. The letters A, B, C, and D correspond to the identified actions.

Abstract

Current state-of-the-art methods for skeleton-based temporal action segmentation are predominantly supervised and require annotated data, which is expensive to collect. In contrast, existing unsupervised temporal action segmentation methods have focused primarily on video data, while skeleton sequences remain underexplored, despite their robustness, privacy-preserving nature, and relevance to real-world applications. In this paper, we propose a novel approach for unsupervised skeleton-based temporal action segmentation. Our method uses a sequence-to-sequence temporal autoencoder that keeps the information of the different joints disentangled in the embedding space. Latent skeleton sequences are then divided into non-overlapping patches and quantized to obtain distinctive skeleton motion words, driving the discovery of semantically meaningful action clusters. We thoroughly evaluate the proposed approach on three widely used skeleton-based datasets, namely HuGaDB, LARa, and BABEL. The results demonstrate that our model outperforms the current state-of-the-art unsupervised temporal action segmentation methods.

Skeleton Motion Quantization (SMQ)

Given a set of skeleton sequences, our approach discovers actions that are semantically consistent across all sequences. The discovered actions correspond to clusters of learned motion words in the codebook. The model first encodes the skeleton sequences into a joint-based disentangled embedding space. Each embedded sequence is then divided into short non-overlapping temporal patches and a codebook with K motion words is learned for the entire dataset via a patch-based quantization process, which assigns each patch to its nearest motion word using Euclidean distance. This assignment directly provides the segmentation of each sequence based on the motion word indices. In order to learn meaningful representations, the decoder reconstructs the input sequences from the quantized patches, where each patch is replaced by its corresponding motion word.
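The core quantization step described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function name `quantize_patches`, the patch length, and the codebook size are hypothetical, and the learned encoder and codebook are replaced here by plain arrays. It shows how an embedded sequence is split into non-overlapping patches, each patch is assigned to its nearest motion word under Euclidean distance, and the per-patch word indices directly yield frame-level segmentation labels.

```python
import numpy as np

def quantize_patches(embedded, codebook, patch_len):
    """Assign non-overlapping patches of an embedded sequence to motion words.

    embedded : (T, D) latent sequence from the encoder (illustrative stand-in).
    codebook : (K, patch_len * D) motion words (illustrative stand-in).
    Returns per-patch word indices, frame-level labels, and the quantized
    sequence in which each patch is replaced by its assigned motion word.
    """
    T, D = embedded.shape
    n = T // patch_len                               # number of whole patches
    patches = embedded[: n * patch_len].reshape(n, patch_len * D)
    # Euclidean distance from every patch to every motion word
    dists = np.linalg.norm(patches[:, None, :] - codebook[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)                       # nearest motion word per patch
    labels = np.repeat(idx, patch_len)               # frame-level segmentation
    quantized = codebook[idx].reshape(n * patch_len, D)
    return idx, labels, quantized

# Toy usage: patches that exactly match motion words are recovered.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(3, 8))                   # K=3 words, patch dim 4*2
embedded = codebook[[1, 2]].reshape(8, 2)            # two patches of length 4
idx, labels, quantized = quantize_patches(embedded, codebook, patch_len=4)
print(idx)                                           # → [1 2]
```

In the full model, this assignment is what makes the segmentation emerge for free: consecutive patches mapped to the same word index form one segment, and the decoder's reconstruction loss on the quantized sequence is what forces the codebook entries to become meaningful motion words.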

Qualitative Results - Segmentation

Qualitative results for unsupervised action segmentation algorithms.

Qualitative Results - Latent Patches

Qualitative results for latent patches. Left: Ground-truth skeletons from human-annotated actions. Right: Top-9 input skeleton patches closest to their assigned cluster centers in the latent space, illustrating alignment between discovered latent patterns and annotated actions.

Quantitative Results

Comparison to supervised and unsupervised temporal action segmentation methods on the HuGaDB and LARa datasets.


Comparison to unsupervised temporal action segmentation methods on the BABEL dataset.

BibTeX

@misc{gökay2025skeletonmotionwordsunsupervised,
  title={Skeleton Motion Words for Unsupervised Skeleton-Based Temporal Action Segmentation},
  author={Uzay Gökay and Federico Spurio and Dominik R. Bach and Juergen Gall},
  year={2025},
  eprint={2508.04513},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.04513},
}