Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/133427
Type: Journal article
Title: Dual-attention-guided network for ghost-free high dynamic range imaging
Author: Yan, Q.
Gong, D.
Shi, J.Q.
van den Hengel, A.
Shen, C.
Reid, I.
Zhang, Y.
Citation: International Journal of Computer Vision, 2021; 130(1)
Publisher: Springer Science and Business Media LLC
Issue Date: 2021
ISSN: 0920-5691; 1573-1405
Statement of Responsibility: Qingsen Yan, Dong Gong, Javen Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Ian Reid and Yanning Zhang
Abstract: Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone, and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance the fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of the hierarchical features and increase the receptive field when hallucinating missing details. We employ a hybrid loss function, which consists of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results.
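The dual-attention idea described in the abstract — channel attention that rescales feature channels by their inter-dependencies, and spatial attention that masks misaligned or saturated regions of a non-reference image using the reference as a guide — can be illustrated with a minimal numpy sketch. All weights below are random stand-ins for learned convolutions, and the shapes and reduction ratio are illustrative assumptions, not the trained DAHDRNet layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, reduction=4):
    """Channel-wise rescaling: global-average-pool, a small bottleneck
    MLP (random stand-in weights), then a sigmoid gate per channel."""
    c = feat.shape[0]
    pooled = feat.mean(axis=(1, 2))                      # (C,) global average pool
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # stand-in for learned layer
    w2 = rng.standard_normal((c, c // reduction)) * 0.1  # stand-in for learned layer
    scale = sigmoid(w2 @ np.maximum(w1 @ pooled, 0))     # (C,) gates in (0, 1)
    return feat * scale[:, None, None]

def spatial_attention(ref_feat, nonref_feat):
    """Per-pixel mask computed from reference + non-reference features;
    the mask suppresses regions of the non-reference image that would
    cause ghosting (misalignment, saturation)."""
    stacked = np.concatenate([ref_feat, nonref_feat], axis=0)  # (2C, H, W)
    w = rng.standard_normal((1, stacked.shape[0])) * 0.1       # 1x1-conv stand-in
    mask = sigmoid(np.tensordot(w, stacked, axes=([1], [0])))  # (1, H, W)
    return nonref_feat * mask                                  # broadcast over C

# Toy features for a reference and one non-reference LDR image.
ref = rng.standard_normal((8, 16, 16))
nonref = rng.standard_normal((8, 16, 16))
guided = spatial_attention(ref, channel_attention(nonref))
print(guided.shape)  # (8, 16, 16)
```

In the paper this guidance is applied recurrently and the gated non-reference features are then merged with the reference features; the sketch only shows a single pass of the two attention gates.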
Keywords: High dynamic range imaging; de-ghosting; attention mechanism; deep learning
Rights: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
DOI: 10.1007/s11263-021-01535-y
Grant ID: http://purl.org/au-research/grants/arc/DP140102270
http://purl.org/au-research/grants/arc/DP160100703
Published version: http://dx.doi.org/10.1007/s11263-021-01535-y
Appears in Collections:Computer Science publications

Files in This Item:
There are no files associated with this item.

