Person re-identification across visible and near-infrared cameras (VIS-NIR Re-ID) has widespread applications. The core challenge of this task is heterogeneous image matching. Existing methods attempt to learn discriminative features via complex feature extraction strategies. Nevertheless, the modality gap makes the distributions of visible and near-infrared features disparate, which significantly degrades feature metrics and limits the performance of existing models. To address this problem, we propose a novel approach from the perspective of metric learning: we conduct metric learning in a well-designed angular space. Geometrically, features are mapped from the original space onto the hypersphere manifold, which eliminates variations in feature norm and concentrates on the angle between a feature and its target category. Specifically, we propose a cyclic projection network (CPN) that transforms features into an angle-related space while preserving identity information. Furthermore, we propose three loss functions, AICAL, LAL, and DAL, for metric learning in angular space. Extensive experiments on two public datasets, SYSU-MM01 and RegDB, show that our method substantially outperforms the state of the art.
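The hypersphere mapping described above can be illustrated with a minimal sketch (this is not the paper's CPN; the function names and the use of NumPy are assumptions for illustration): L2-normalizing features projects them onto the unit hypersphere, so any metric computed afterwards depends only on angles, not norms.

```python
import numpy as np

def to_hypersphere(x, eps=1e-12):
    """L2-normalize feature vectors (rows) onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def angular_distance(f, w):
    """Angle (radians) between a feature and a class prototype,
    measured after both are projected onto the hypersphere."""
    cos = np.clip(to_hypersphere(f) @ to_hypersphere(w), -1.0, 1.0)
    return np.arccos(cos)

# A feature and a scaled copy of it yield the same angle to any
# prototype, illustrating that norm variation is eliminated.
f = np.array([3.0, 4.0])
w = np.array([1.0, 0.0])
print(np.isclose(angular_distance(f, w), angular_distance(10 * f, w)))  # True
```

Because cross-modal norm differences are one visible symptom of the modality gap, measuring distances purely by angle in this way removes that source of discrepancy before the metric is applied.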