Yuan-Kai Wang, T. M. Pan, and C. P. Sun, "A CNN-RNN Siamese framework with multi-level aggregation for video-based person re-identification," Scientific Reports 16, 8224 (2026).
Person re-identification (re-ID) in video sequences is a central task in surveillance and computer vision, yet it remains challenging due to occlusion, viewpoint variation, and noisy frames. This study proposes a compact deep learning framework that integrates convolutional features, recurrent temporal modeling, and multi-level similarity aggregation to capture both fine-grained spatial cues and long-range temporal patterns. The framework is deliberately built as a lightweight CNN–GRU architecture, avoiding the depth and computational demands of transformer-based backbones while preserving robust recognition capability. Experimental evaluations show clear advantages over conventional and Siamese-based approaches, confirming that spatial and temporal features are complementary and that efficient pooling strategies are effective. These findings indicate that accurate and resource-efficient person re-ID is achievable with compact architectures, making them practical for deployment in real-world, resource-constrained environments.
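The abstract describes the pipeline only at a high level (CNN frame features, a GRU over time, pooling, and a Siamese comparison). The following is a minimal illustrative sketch of that idea, not the authors' implementation: it uses a hand-rolled single GRU cell over precomputed frame features, simple temporal average pooling, and a Euclidean Siamese distance. All names (`encode_sequence`, `siamese_distance`), dimensions, and the choice of average pooling are assumptions for illustration.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)
    r = sigmoid(x @ Wr + h @ Ur)
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

def encode_sequence(frames, params, hidden_dim):
    """Run per-frame CNN features through the GRU, then average-pool over time."""
    h = np.zeros(hidden_dim)
    states = []
    for x in frames:          # frames: (T, feat_dim) array of frame features
        h = gru_cell(x, h, params)
        states.append(h)
    return np.mean(states, axis=0)   # temporal average pooling -> (hidden_dim,)

def siamese_distance(seq_a, seq_b, params, hidden_dim):
    """Siamese comparison: both sequences share the same GRU weights."""
    ea = encode_sequence(seq_a, params, hidden_dim)
    eb = encode_sequence(seq_b, params, hidden_dim)
    return float(np.linalg.norm(ea - eb))

# Demo with random stand-ins for CNN frame features (illustrative sizes).
rng = np.random.default_rng(0)
feat_dim, hidden_dim = 8, 4
params = (
    rng.normal(size=(feat_dim, hidden_dim)), rng.normal(size=(hidden_dim, hidden_dim)),
    rng.normal(size=(feat_dim, hidden_dim)), rng.normal(size=(hidden_dim, hidden_dim)),
    rng.normal(size=(feat_dim, hidden_dim)), rng.normal(size=(hidden_dim, hidden_dim)),
)
seq_a = rng.normal(size=(5, feat_dim))   # 5-frame tracklet
seq_b = rng.normal(size=(5, feat_dim))
print(siamese_distance(seq_a, seq_b, params, hidden_dim))
```

In a Siamese setup the key property is weight sharing: both tracklets pass through identical parameters, so the learned distance is symmetric and an identical pair maps to distance zero. The paper's multi-level aggregation would replace the single average-pooling step with similarity scores computed at several feature levels.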
Keywords: person re-identification, video-based recognition, Siamese network, temporal modeling, multi-level feature aggregation, resource-efficient architectures
Figure 1. Architecture of the proposed MSP-SRC. (a) Training phase; (b) testing phase.