Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/123378
Type: Conference paper
Title: A theoretically sound upper bound on the triplet loss for improving the efficiency of deep distance metric learning
Author: Do, T.-T.
Tran, T.
Reid, I.
Kumar, V.
Hoang, T.
Carneiro, G.
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2019, vol. 2019-June, pp. 10396-10405
Publisher: IEEE
Issue Date: 2019
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 9781728132945
ISSN: 1063-6919; 2575-7075
Conference Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (16 Jun 2019 - 20 Jun 2019 : Long Beach, CA, USA)
Statement of Responsibility: Thanh-Toan Do, Toan Tran, Ian Reid, Vijay Kumar, Tuan Hoang, Gustavo Carneiro
Abstract: We propose a method that substantially improves the efficiency of deep distance metric learning based on the optimization of the triplet loss function. One epoch of such a training process based on a naive optimization of the triplet loss function has a run-time complexity of O(N³), where N is the number of training samples. Such optimization scales poorly, and the most common approach proposed to address this high complexity is based on sub-sampling the set of triplets needed for the training process. Another approach explored in the field relies on an ad-hoc linearization (in terms of N) of the triplet loss that introduces class centroids, which must be optimized using the whole training set for each mini-batch; this means that a naïve implementation of this approach has run-time complexity O(N²). This complexity issue is usually mitigated with poor, but computationally cheap, approximate centroid optimization methods. In this paper, we first propose a solid theory on the linearization of the triplet loss with the use of class centroids, where the main conclusion is that our new linear loss represents a tight upper bound on the triplet loss. Furthermore, based on the theory above, we propose a training algorithm that no longer requires the centroid optimization step, which means that our approach is the first in the field with a guaranteed linear run-time complexity. We show that the training of deep distance metric learning methods using the proposed upper bound is substantially faster than triplet-based methods, while producing competitive retrieval accuracy results on benchmark datasets (CUB-200-2011 and CARS196).
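
The abstract contrasts the cubic cost of naive triplet enumeration with a centroid-based surrogate that is linear in N. The following minimal NumPy sketch illustrates that contrast only; the function names, the margin value, and the use of plain class means as centroids are illustrative assumptions, not the paper's exact formulation, whose contribution is a proof that a loss of this family tightly upper-bounds the triplet loss without the per-mini-batch centroid optimization step.

import numpy as np

def naive_triplet_loss(embeddings, labels, margin=0.2):
    """Average hinge loss over all (anchor, positive, negative) triplets.

    Enumerating every triplet costs O(N^3) per epoch, which is the
    scalability problem the paper addresses. Margin 0.2 is an assumed
    value for illustration.
    """
    losses = []
    n = len(labels)
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue  # p must be a distinct sample of the same class
            for q in range(n):
                if labels[q] == labels[a]:
                    continue  # q must belong to a different class
                d_ap = np.sum((embeddings[a] - embeddings[p]) ** 2)
                d_an = np.sum((embeddings[a] - embeddings[q]) ** 2)
                losses.append(max(0.0, d_ap - d_an + margin))
    return float(np.mean(losses)) if losses else 0.0

def centroid_surrogate_loss(embeddings, labels, margin=0.2):
    """Linear-time surrogate: pull each sample toward its class centroid
    and push it away from the other class centroids, so the cost is
    O(N * C) for C classes rather than O(N^3). Using plain class means
    as centroids is an assumption of this sketch, not the paper's
    derivation of the tight upper bound.
    """
    classes = np.unique(labels)
    centroids = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    losses = []
    for i, x in enumerate(embeddings):
        d_own = np.sum((x - centroids[labels[i]]) ** 2)
        for c in classes:
            if c == labels[i]:
                continue
            d_other = np.sum((x - centroids[c]) ** 2)
            losses.append(max(0.0, d_own - d_other + margin))
    return float(np.mean(losses)) if losses else 0.0

# Toy comparison on random two-class embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))
y = np.array([0] * 10 + [1] * 10)
print(naive_triplet_loss(X, y), centroid_surrogate_loss(X, y))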
Rights: ©2019 IEEE
DOI: 10.1109/CVPR.2019.01065
Grant ID: http://purl.org/au-research/grants/arc/DP180103232
Published version: https://ieeexplore.ieee.org/xpl/conhome/8938205/proceeding
Appears in Collections: Aurora harvest 8
Computer Science publications

Files in This Item:
There are no files associated with this item.

