Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/139587
Type: | Journal article |
Title: | Learning Resolution-Adaptive Representations for Cross-Resolution Person Re-Identification |
Author: | Wu, L.Y.; Liu, L.; Wang, Y.; Zhang, Z.; Boussaid, F.; Bennamoun, M.; Xie, X. |
Citation: | IEEE Transactions on Image Processing, 2023; 32:4800-4811 |
Publisher: | Institute of Electrical and Electronics Engineers (IEEE) |
Issue Date: | 2023 |
ISSN: | 1057-7149; 1941-0042 |
Statement of Responsibility: | Lin Yuanbo Wu, Lingqiao Liu, Yang Wang, Zheng Zhang, Farid Boussaid, Mohammed Bennamoun, Xianghua Xie |
Abstract: | Cross-resolution person re-identification (CRReID) is a challenging and practical problem that involves matching low-resolution (LR) query identity images against high-resolution (HR) gallery images. Query images often suffer from resolution degradation due to the different capturing conditions from real-world cameras. State-of-the-art solutions for CRReID either learn a resolution-invariant representation or adopt a super-resolution (SR) module to recover the missing information from the LR query. In this paper, we propose an alternative SR-free paradigm to directly compare HR and LR images via a dynamic metric that is adaptive to the resolution of a query image. We realize this idea by learning resolution-adaptive representations for cross-resolution comparison. We propose two resolution-adaptive mechanisms to achieve this. The first mechanism encodes the resolution specifics into different subvectors in the penultimate layer of the deep neural network, creating a varying-length representation. To better extract resolution-dependent information, we further propose to learn resolution-adaptive masks for intermediate residual feature blocks. A novel progressive learning strategy is proposed to train those masks properly. These two mechanisms are combined to boost the performance of CRReID. Experimental results show that the proposed method outperforms existing approaches and achieves state-of-the-art performance on multiple CRReID benchmarks. |
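The varying-length representation described in the abstract can be illustrated with a minimal sketch: sub-vectors of the embedding are associated with resolution levels, so a more degraded query keeps fewer sub-vectors, and an HR gallery feature is truncated to the query's length before comparison. All names, dimensions, and the Euclidean distance below are illustrative assumptions, not the paper's implementation:

```python
import math

def resolution_adaptive_embed(features, resolution_level, sub_dim=4):
    """Keep only the sub-vectors valid at this resolution level.

    Lower levels (more degraded queries) retain fewer sub-vectors,
    yielding a varying-length representation. (Illustrative sketch.)
    """
    keep = (resolution_level + 1) * sub_dim
    return features[:keep]

def cross_resolution_distance(hr_feat, lr_feat, lr_level, sub_dim=4):
    """Compare an HR gallery feature with an LR query feature using
    only the sub-vectors both resolutions share."""
    q = resolution_adaptive_embed(lr_feat, lr_level, sub_dim)
    g = hr_feat[:len(q)]  # truncate HR feature to the query's length
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(g, q)))
```

For example, with `sub_dim=4` a level-1 query keeps an 8-dimensional prefix, and only those 8 dimensions of the HR gallery feature enter the distance computation.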
Rights: | © 2023, IEEE |
DOI: | 10.1109/TIP.2023.3305817 |
Grant ID: | http://purl.org/au-research/grants/arc/DP210101682; http://purl.org/au-research/grants/arc/DP210102674 |
Published version: | http://dx.doi.org/10.1109/tip.2023.3305817 |
Appears in Collections: | Computer Science publications |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.