Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/126651
Full metadata record
DC Field | Value | Language
dc.contributor.author | Rowntree, T. | -
dc.contributor.author | Pontecorvo, C. | -
dc.contributor.author | Reid, I. | -
dc.date.issued | 2019 | -
dc.identifier.citation | Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA 2019), 2019, pp.1-7 | -
dc.identifier.isbn | 9781728138572 | -
dc.identifier.uri | http://hdl.handle.net/2440/126651 | -
dc.description.abstract | This paper describes a system for estimating the coarse gaze, or 1D head pose, of multiple people in a video stream from a moving camera in an indoor scene. The system runs at 30 Hz and can detect human heads with an F-Score of 87.2% and predict their gaze with an average error of 20.9°, including when they are facing directly away from the camera. The system uses two Convolutional Neural Networks (CNNs), for head detection and gaze estimation respectively, and uses common tracking and filtering techniques for smoothing predictions over time. This paper is application-focused and so describes the individual components of the system as well as the techniques used for collecting data and training the CNNs. | -
dc.description.statementofresponsibility | Thomas Rowntree, Carmine Pontecorvo, Ian Reid | -
dc.language.iso | en | -
dc.publisher | IEEE | -
dc.rights | © 2019 IEEE | -
dc.source.uri | https://ieeexplore.ieee.org/xpl/conhome/8943071/proceeding | -
dc.title | Real-time human gaze estimation | -
dc.type | Conference paper | -
dc.contributor.conference | International Conference on Digital Image Computing: Techniques and Applications (DICTA) (2 Dec 2019 - 4 Dec 2019 : Perth, Australia) | -
dc.identifier.doi | 10.1109/DICTA47822.2019.8945919 | -
dc.publisher.place | online | -
pubs.publication-status | Published | -
dc.identifier.orcid | Reid, I. [0000-0001-7790-6423] | -
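
The abstract above describes a two-stage pipeline: a head-detection CNN, a second CNN that predicts a 1D gaze (yaw) angle for each detected head, and common tracking and filtering techniques to smooth the predictions over time. The sketch below is a minimal illustration of how such a pipeline could be wired together; it is not the authors' implementation. The detect_heads and estimate_gaze functions are hypothetical stubs for the two CNNs, and the exponential smoothing on the unit circle (with an assumed smoothing factor) stands in for whatever tracking and filtering the paper actually uses.

import math
from dataclasses import dataclass


@dataclass
class HeadTrack:
    """Per-head state: a temporally smoothed 1D gaze (yaw) angle in degrees."""
    smoothed_yaw: float
    alpha: float = 0.3  # smoothing factor; an assumed value, not from the paper

    def update(self, measured_yaw: float) -> float:
        # Smooth on the unit circle so the estimate does not jump at the
        # +/-180 degree wrap-around, e.g. when a person faces directly away
        # from the camera.
        s = math.radians(self.smoothed_yaw)
        m = math.radians(measured_yaw)
        x = (1 - self.alpha) * math.cos(s) + self.alpha * math.cos(m)
        y = (1 - self.alpha) * math.sin(s) + self.alpha * math.sin(m)
        self.smoothed_yaw = math.degrees(math.atan2(y, x))
        return self.smoothed_yaw


def detect_heads(frame):
    """Hypothetical stub for the head-detection CNN.

    Expected to return a list of (track_id, bounding_box) pairs, where
    track_id is assigned by whatever tracker associates detections across frames.
    """
    raise NotImplementedError


def estimate_gaze(frame, bounding_box) -> float:
    """Hypothetical stub for the gaze CNN: returns a 1D yaw angle in degrees."""
    raise NotImplementedError


def process_stream(frames):
    """Detect heads, estimate gaze per head, and smooth each track over time."""
    tracks = {}  # track_id -> HeadTrack
    for frame in frames:
        for track_id, box in detect_heads(frame):
            yaw = estimate_gaze(frame, box)
            track = tracks.setdefault(track_id, HeadTrack(smoothed_yaw=yaw))
            yield frame, track_id, box, track.update(yaw)

Smoothing the cosine and sine of the angle rather than the raw value is one simple way to handle the wrap-around that arises when a subject faces directly away from the camera, a case the abstract explicitly notes the system handles.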
Appears in Collections: Aurora harvest 4
Computer Science publications

Files in This Item:
There are no files associated with this item.

