Camera-Tracklet-Aware Contrastive Learning for Unsupervised Vehicle Re-Identification

Recently, vehicle re-identification methods based on deep learning have achieved remarkable performance. However, this achievement relies on large-scale, well-annotated datasets. When constructing such a dataset, assigning globally consistent identities (IDs) to vehicles captured by a large number of cameras is labour-intensive, because their subtle appearance differences and viewpoint variations must be considered. In this paper, we propose camera-tracklet-aware contrastive learning (CTACL), which uses multi-camera tracklet information without vehicle identity labels. The proposed CTACL divides an unlabelled domain, i.e., the entire set of vehicle images, into multiple camera-level subdomains and conducts contrastive learning within and beyond these subdomains. The positive and negative samples for contrastive learning are defined using the tracklet IDs of each camera. Additionally, domain adaptation across camera networks is introduced to improve the generalisation of the learnt representations and to alleviate the performance degradation resulting from the domain gap between the subdomains. We demonstrate the effectiveness of our approach on video-based and image-based vehicle Re-ID datasets. Experimental results show that the proposed method outperforms recent state-of-the-art unsupervised vehicle Re-ID methods. The source code for this paper is publicly available at https://github.com/andreYoo/CTAM-CTACL-VVReID.git.
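The abstract's core idea of treating tracklet IDs within each camera as pseudo-labels for contrastive learning can be illustrated with a minimal sketch. This is a hypothetical NumPy implementation of an InfoNCE-style loss (not the authors' actual code from the linked repository): positives are samples sharing both the camera and tracklet ID with the anchor, negatives are other tracklets seen by the same camera, so the loss operates within each camera-level subdomain. The function name, arguments, and temperature value are assumptions for illustration.

```python
import numpy as np

def tracklet_contrastive_loss(embeddings, camera_ids, tracklet_ids, temperature=0.1):
    """Hypothetical sketch of camera-level, tracklet-aware contrastive loss.

    For each anchor, positives are other samples from the SAME camera and
    SAME tracklet; negatives are samples from the same camera but different
    tracklets, i.e. contrastive learning within a camera-level subdomain.
    """
    # L2-normalise so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    losses = []
    for i in range(len(z)):
        same_cam = camera_ids == camera_ids[i]
        same_cam[i] = False  # exclude the anchor itself
        positives = same_cam & (tracklet_ids == tracklet_ids[i])
        if not positives.any():
            continue  # anchor has no positive within its camera subdomain
        exp_sim = np.exp(sim[i])
        denom = exp_sim[same_cam].sum()  # positives + same-camera negatives
        # Average InfoNCE term over all positives of this anchor.
        losses.append(np.mean(-np.log(exp_sim[positives] / denom)))
    return float(np.mean(losses))
```

Extending this beyond subdomains (the "within and beyond" in the abstract) would add cross-camera terms, which the paper handles together with a domain-adaptation objective.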
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2022-05
Language
English
Citation

39th IEEE International Conference on Robotics and Automation, ICRA 2022, pp.905 - 911

ISSN
1050-4729
DOI
10.1109/ICRA46639.2022.9812007
URI
http://hdl.handle.net/10203/299147
Appears in Collection
Files in This Item
There are no files associated with this item.
