Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/127216
Type: Conference paper
Title: Attention-guided network for ghost-free high dynamic range imaging
Author: Yan, Q.
Gong, D.
Shi, Q.
Van Den Hengel, A.
Shen, C.
Reid, I.
Zhang, Y.
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2019, vol. 2019-June, pp. 1751-1760
Publisher: IEEE
Publisher Place: online
Issue Date: 2019
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 9781728132938
ISSN: 1063-6919
2575-7075
Conference Name: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (15 Jun 2019 - 20 Jun 2019 : Long Beach, CA)
Statement of Responsibility: Qingsen Yan, Dong Gong, Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Ian Reid, Yanning Zhang
Abstract: Ghosting artifacts caused by moving objects or misalignments are a key challenge in high dynamic range (HDR) imaging for dynamic scenes. Previous methods first register the input low dynamic range (LDR) images using optical flow before merging them, a step that is error-prone and produces ghosts in the results. A more recent method bypasses optical flow via a deep network with skip connections, but it still suffers from ghosting artifacts under severe motion. To avoid ghosting at its source, we propose a novel attention-guided end-to-end deep neural network (AHDRNet) that produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use attention modules to guide the merging according to the reference image. The attention modules automatically suppress undesired components caused by misalignment and saturation, and enhance desirable fine details in the non-reference images. In addition to the attention model, we use dilated residual dense blocks (DRDBs) to make full use of hierarchical features and to increase the receptive field for hallucinating missing details. Because the proposed AHDRNet is not flow-based, it also avoids the artifacts generated by optical-flow estimation errors. Experiments on different datasets show that AHDRNet achieves state-of-the-art quantitative and qualitative results.
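
Illustrative code (not part of this record): a minimal PyTorch sketch of the two components the abstract names, an attention module that weights non-reference LDR features against the reference image's features before merging, and a dilated residual dense block (DRDB) that grows the receptive field. The channel counts, layer choices, and the 6-channel per-exposure input are assumptions made for illustration, not the authors' released implementation.

# Sketch of attention-guided merging and a DRDB, under the assumptions above.
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Predicts soft attention from a (non-reference, reference) feature pair
    and uses it to suppress misaligned or saturated content (assumed design)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel weights in [0, 1]
        )

    def forward(self, non_ref: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        attn = self.net(torch.cat([non_ref, ref], dim=1))
        return non_ref * attn  # attended non-reference features

class DRDB(nn.Module):
    """Dilated residual dense block: densely connected dilated 3x3 convs,
    a 1x1 fusion layer, and a local residual connection."""
    def __init__(self, channels: int = 64, growth: int = 32, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(layers):
            # dilation=2 with padding=2 preserves spatial size for a 3x3 kernel
            self.convs.append(nn.Conv2d(c, growth, 3, padding=2, dilation=2))
            c += growth  # dense connectivity: each layer sees all earlier outputs
        self.fuse = nn.Conv2d(c, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual

# Usage: attend two non-reference exposures to the reference, then merge.
if __name__ == "__main__":
    # 6-channel input (LDR image concatenated with its HDR-domain mapping) is
    # an assumption borrowed from common multi-exposure HDR pipelines.
    enc = nn.Conv2d(6, 64, 3, padding=1)
    f_ref = enc(torch.randn(1, 6, 128, 128))  # reference features
    f1 = enc(torch.randn(1, 6, 128, 128))     # under-exposed, non-reference
    f2 = enc(torch.randn(1, 6, 128, 128))     # over-exposed, non-reference
    attn = AttentionModule(64)
    merged = torch.cat([attn(f1, f_ref), f_ref, attn(f2, f_ref)], dim=1)
    out = DRDB(192)(merged)                   # stand-in for the merging network
    print(out.shape)  # torch.Size([1, 192, 128, 128])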
Rights: ©2019 IEEE
DOI: 10.1109/CVPR.2019.00185
Grant ID: http://purl.org/au-research/grants/arc/DP140102270
http://purl.org/au-research/grants/arc/DP160100703
Published version: http://dx.doi.org/10.1109/cvpr.2019.00185
Appears in Collections: Aurora harvest 4
Computer Science publications

Files in This Item:
There are no files associated with this item.
