Learning Bi-directional Feature Propagation with Latent Layout Modeling for Group Re-identification

Abstract

Group re-identification (G-ReID) aims to identify the same group of persons across disjoint cameras. The key challenge of G-ReID is extracting features that are robust to potential variations in group layout and membership. However, previous works focus more on appearance modeling and pay less attention to group layout. In this paper, we propose a bi-directional feature propagation framework that propagates information between group layout and member appearance. In addition, we propose a spatial generation framework that analyzes a group image and generates new images with different group layouts, simulating the diverse layouts encountered in the real world. Moreover, we propose a network that learns latent layout representations and propagates them jointly with member appearance representations. The proposed network achieves state-of-the-art performance on two widely used G-ReID datasets, i.e., 87.9% mAP and 89.2% Rank-1 on CSG, and 92.7% mAP and 90.1% Rank-1 on RoadGroup.
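The bi-directional exchange between member appearance features and latent layout features can be illustrated with a minimal sketch. The module names, dimensions, use of cross-attention, and box-coordinate layout embedding below are illustrative assumptions rather than the paper's actual implementation.

```python
# Toy sketch of bi-directional propagation between member appearance
# features and latent layout features. All names are hypothetical.
import torch
import torch.nn as nn


class BiDirectionalPropagation(nn.Module):
    """Exchange information between appearance and layout tokens."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Layout tokens attend to appearance tokens and vice versa.
        self.app_to_layout = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.layout_to_app = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_app = nn.LayerNorm(dim)
        self.norm_layout = nn.LayerNorm(dim)

    def forward(self, app_feats, layout_feats):
        # app_feats, layout_feats: (batch, num_members, dim)
        layout_upd, _ = self.app_to_layout(layout_feats, app_feats, app_feats)
        app_upd, _ = self.layout_to_app(app_feats, layout_feats, layout_feats)
        return (self.norm_app(app_feats + app_upd),
                self.norm_layout(layout_feats + layout_upd))


class GroupEncoder(nn.Module):
    """Build a group descriptor from per-member appearance and box geometry."""

    def __init__(self, dim=256):
        super().__init__()
        # Latent layout tokens are produced from normalized box coordinates.
        self.layout_embed = nn.Linear(4, dim)
        self.propagate = BiDirectionalPropagation(dim)

    def forward(self, app_feats, boxes):
        # app_feats: (batch, num_members, dim) member appearance features
        # boxes:     (batch, num_members, 4) normalized (x, y, w, h)
        layout_feats = self.layout_embed(boxes)
        app_feats, layout_feats = self.propagate(app_feats, layout_feats)
        # Order-invariant pooling gives a membership-robust group descriptor.
        return torch.cat([app_feats.mean(1), layout_feats.mean(1)], dim=-1)


if __name__ == "__main__":
    enc = GroupEncoder(dim=256)
    app = torch.randn(2, 5, 256)   # 2 groups, 5 members each
    boxes = torch.rand(2, 5, 4)    # normalized member bounding boxes
    print(enc(app, boxes).shape)   # torch.Size([2, 512])
```

In this sketch the pooled group descriptor is invariant to member ordering, which loosely mirrors the robustness to layout and membership variation that the abstract describes.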

Publication
In International Conference on Pattern Recognition
Quan Zhang
PhD Student