Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/55204
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chin, T. | en
dc.contributor.author | Suter, D. | en
dc.date.issued | 2006 | en
dc.identifier.citation | Proceedings of the 17th British Machine Vision Conference, Edinburgh, U.K., 2006: pp.939-948 | en
dc.identifier.isbn | 1904410146 | en
dc.identifier.uri | http://hdl.handle.net/2440/55204 | -
dc.description.abstract | The Kernel Principal Component Analysis (KPCA) has been effectively applied as an unsupervised non-linear feature extractor in many machine learning applications. However, with a time complexity of O(n³), the practicality of KPCA on large datasets is minimal. In this paper, we propose an approximate incremental KPCA algorithm which allows efficient processing of large datasets. We extend a linear PCA updating algorithm to the non-linear case by utilizing the kernel trick, and apply a reduced set construction method to compress expressions for the derived KPCA basis at each update. In addition, we show how multiple feature space vectors can be compressed efficiently, and how approximated KPCA bases can be re-orthogonalized using the kernel trick. The proposed method is justified through experimental validations. | en
dc.description.statementofresponsibility | Tat-Jun Chin and David Suter | en
dc.description.uri | http://www.macs.hw.ac.uk/bmvc2006/proceedings.html | en
dc.language.iso | en | en
dc.publisher | The British Machine Vision Association | en
dc.title | Incremental Kernel PCA for Efficient Non-linear Feature Extraction | en
dc.type | Conference paper | en
dc.contributor.conference | British Machine Vision Conference (17th : 2006 : Edinburgh) | en
dc.publisher.place | Online | en
pubs.publication-status | Published | en
dc.identifier.orcid | Suter, D. [0000-0001-6306-3023] | en
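
For readers unfamiliar with the kernel trick referred to in the abstract above, the following is a minimal batch KPCA sketch in NumPy. It is illustrative only: the RBF kernel, the width parameter gamma, and the function names are assumptions, and it does not implement the incremental, reduced-set algorithm proposed in the paper. The O(n³) eigendecomposition it performs is precisely the cost that the proposed method is designed to avoid.

```python
# Minimal batch KPCA sketch (NumPy only). Assumed RBF kernel and parameters;
# NOT the incremental / reduced-set algorithm described in the paper.
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kpca_fit(X, n_components=2, gamma=0.1):
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Centre the kernel (Gram) matrix in feature space.
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Full eigendecomposition costs O(n^3) -- the batch-KPCA bottleneck.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    # Keep the leading components (eigh returns eigenvalues in ascending order).
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalise so the corresponding feature-space basis vectors have unit norm;
    # clamp tiny eigenvalues to avoid dividing by (numerically) zero.
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return alphas, Kc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))          # toy data, purely illustrative
    alphas, Kc = kpca_fit(X, n_components=2, gamma=0.5)
    features = Kc @ alphas                  # non-linear features of the training data
    print(features.shape)                   # (200, 2)
```

Because every training point contributes a term to the kernel expansion, the extracted basis grows with the dataset; the reduced set construction mentioned in the abstract compresses these expansions so that each incremental update stays tractable.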
Appears in Collections:Aurora harvest 5
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.