Uncertainty Modeling for Group Re-Identification


Group re-identification (GReID) aims to correctly associate images containing the same group members captured by non-overlapping camera networks, which has important applications in video surveillance. Unlike person re-identification, the unique challenge of GReID lies in variations of group structure, including the number and layout of members. Current methods use certainty modeling, in which only the specific group structure presented in each image is considered. However, certainty modeling can only describe finitely many group structures and generalizes poorly to unseen ones, i.e., group variations that do not exist in the training set. In this paper, we propose a methodology called uncertainty modeling, which derives near-infinite group structures from finite samples by simulating variations in both number and layout. Specifically, member uncertainty treats the number of intra-group members as a truncated Gaussian distribution instead of a fixed value and simulates member variations by dynamic sampling. Layout uncertainty applies random affine transformations to the positions of members to enlarge the fixed layout schemes in the training set. To implement the proposed methodology, we propose an Uncertainty-Modeling Second-Order Transformer (UMSOT) that extracts a first-order token for each member and further uses these tokens to learn a second-order token as the group feature. UMSOT exploits the structural advantages of the transformer to explicitly extract layout features and efficiently integrate appearance and layout features, which is hardly achievable with current CNN- and GNN-based methods. Comprehensive experiments on four datasets (CSG, SYSUGroup, RoadGroup, and iLIDS-MCTS) fully demonstrate the superiority of the proposed method, which surprisingly outperforms the state-of-the-art method by 30.4% in Rank1 on the CSG dataset.
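The two uncertainty mechanisms described above can be sketched as a simple data-augmentation routine. This is an illustrative sketch, not the paper's implementation: the member count is drawn from a Gaussian truncated to an assumed range via rejection sampling, and member positions undergo a random affine transform (rotation, scale, translation) with hypothetical parameter bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_member_count(mean, std, lo=2, hi=6):
    """Member uncertainty: draw a group size from a Gaussian
    truncated to [lo, hi] by rejection sampling (bounds are assumed)."""
    while True:
        n = rng.normal(mean, std)
        if lo <= n <= hi:
            return int(round(n))

def random_affine(positions, max_rot=0.3, max_scale=0.2, max_shift=0.1):
    """Layout uncertainty: apply a random affine transform to the
    (N, 2) array of normalized member (x, y) positions."""
    theta = rng.uniform(-max_rot, max_rot)
    s = 1.0 + rng.uniform(-max_scale, max_scale)
    t = rng.uniform(-max_shift, max_shift, size=2)
    A = s * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    return positions @ A.T + t

# Simulate one augmented group structure from a fixed training layout.
layout = np.array([[0.2, 0.5], [0.5, 0.5], [0.8, 0.5]])  # 3 members
n = sample_member_count(mean=len(layout), std=1.0)
# Keep a random subset of existing members (capped at the available count).
idx = rng.choice(len(layout), size=min(n, len(layout)), replace=False)
augmented = random_affine(layout[idx])
print(augmented.shape)  # (k, 2) with 2 <= k <= 3 here
```

Repeated across training iterations, this dynamic sampling exposes the model to many group structures that never appear verbatim in the training set.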

In International Journal of Computer Vision
Quan Zhang
PhD Student