Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/137495
Type: Thesis
Title: Deep Learning for Image Deblurring and Reflection Removal
Author: Yang, Jie
Issue Date: 2021
School/Discipline: School of Computer Science
Abstract: This thesis focuses on two highly ill-posed inverse problems in low-level computer vision: image deblurring and reflection removal. Digital photos taken in the real world often suffer from degradation; for example, the motion of the camera or of objects causes image blur, and light from objects in front of a glass pane produces reflections that obstruct the scene behind the glass. While image blur and reflections may occasionally be appealing to photographers, they are more often undesirable, and both can reduce the performance of other computer vision systems. In such situations it is important to recover clear, sharp images from corrupted ones through image deblurring and reflection removal. Image deblurring aims to recover the sharp image, either alone or together with the blur kernel, and reflection removal aims to recover the clear background image, either alone or together with the reflection image. In this thesis we focus on deep-learning-based approaches to both problems. Conventional methods usually rely on manually defined priors and image features, which may not reflect the nature of real data, and the types and ranges of blur and reflections they can handle are limited. By learning from data, we are able to model more general image blur and reflections.
For image deblurring, we focus on removing pixel-wise heterogeneous motion blur. We propose a fully convolutional network that estimates a dense motion flow from a blurry image and then recovers the clear image from the estimated motion flow. Learning a prior over the latent image would require modelling all possible image content; learning the motion flow instead is an easier task that lets the model focus on the cause of the blur, irrespective of the image content. Our network is the first universal end-to-end mapping from a blurred image to a dense motion flow. To train the network, we simulate motion flows to generate synthetic blurred-image/motion-flow pairs. The proposed method outperforms the state of the art on both synthetic and challenging real-world blurred images.
We address the reflection removal problem with two different approaches. The first is supervised learning, which requires mixed/background/reflection image triplets as training data. To obtain sufficient training data, we simulate reflections from two clear images, representing the background and reflection layers respectively, using a general reflection model. To remove reflections well, we argue that it is essential to estimate the reflection and use it to estimate the background image. We propose a cascade neural network that estimates both the background image and the reflection: it uses the estimated background image to estimate the reflection, and then uses the estimated reflection to estimate the background image, which significantly improves reflection removal.
The second approach is self-supervised learning, which removes the need for ground-truth training data. We propose a reflection removal framework that learns from real-world image pairs with reflections captured from multiple views. Our method relies only on supervision from the geometric correspondence and consistency between the multiple views. A series of novel consistency losses is introduced to effectively and robustly exploit the imperfect cues derived from multi-view consistency. By training on easily obtained real data without ground truth, the model generalizes better to real-world images.
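
As a rough illustration of the motion-flow idea described in the abstract, the following minimal PyTorch-style sketch maps a blurry RGB image to a two-channel, per-pixel motion flow with a fully convolutional encoder-decoder. The layer widths, depth, and class name are assumptions chosen for brevity, not the architecture used in the thesis.

import torch
import torch.nn as nn

class MotionFlowFCN(nn.Module):
    """Illustrative fully convolutional network: blurry RGB image -> dense 2-channel motion flow.
    Layer widths and depth are placeholder assumptions, not the thesis architecture."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1),  # per-pixel (u, v) motion vector
        )
    def forward(self, blurry):
        return self.decoder(self.encoder(blurry))

# Usage: the predicted flow has the same spatial size as the input image.
flow = MotionFlowFCN()(torch.randn(1, 3, 256, 256))  # -> shape (1, 2, 256, 256)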
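
The supervised reflection removal approach synthesizes training triplets from pairs of clear images. A commonly used linear blending model, sketched below with assumed parameter values rather than the exact model from the thesis, adds an attenuated, blurred reflection layer to a background layer.

import torch
import torch.nn.functional as F

def synthesize_reflection(background, reflection, alpha=0.35, blur_sigma=2.0):
    """Hypothetical blending model: mixed = background + alpha * blur(reflection).
    alpha and blur_sigma are illustrative values, not parameters from the thesis."""
    channels = reflection.shape[1]
    # Build a Gaussian kernel and blur the reflection layer to mimic out-of-focus glass reflections.
    k = int(4 * blur_sigma) | 1                      # odd kernel size
    x = torch.arange(k, dtype=torch.float32) - k // 2
    g = torch.exp(-x ** 2 / (2 * blur_sigma ** 2))
    g = g / g.sum()
    kernel = (g[:, None] * g[None, :]).repeat(channels, 1, 1, 1)  # (C, 1, k, k)
    blurred = F.conv2d(reflection, kernel, padding=k // 2, groups=channels)
    return (background + alpha * blurred).clamp(0.0, 1.0)

# Usage: background and reflection are (N, 3, H, W) tensors with values in [0, 1].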
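
The cascade idea, in which an estimated background helps estimate the reflection and the estimated reflection in turn refines the background, can be sketched as follows; the sub-network sizes and names are placeholders, not the thesis architecture.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, depth=3):
    """Small convolutional sub-network; purely a placeholder for each cascade stage."""
    layers = [nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(64, out_ch, 3, padding=1)]
    return nn.Sequential(*layers)

class CascadeReflectionRemoval(nn.Module):
    """Illustrative cascade: initial background -> reflection -> refined background."""
    def __init__(self):
        super().__init__()
        self.bg_net0 = conv_block(3, 3)    # mixed image -> initial background estimate
        self.refl_net = conv_block(6, 3)   # mixed + background -> reflection estimate
        self.bg_net1 = conv_block(6, 3)    # mixed + reflection -> refined background
    def forward(self, mixed):
        bg0 = self.bg_net0(mixed)
        refl = self.refl_net(torch.cat([mixed, bg0], dim=1))
        bg1 = self.bg_net1(torch.cat([mixed, refl], dim=1))
        return bg1, refl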
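
For the self-supervised approach, one possible form of a multi-view consistency loss (a hypothetical sketch, not the exact losses proposed in the thesis) warps the background predicted in one view into another view using a precomputed correspondence grid and penalizes the masked difference between the two predictions.

import torch
import torch.nn.functional as F

def multiview_background_consistency(bg_a, bg_b, grid_b_to_a, valid_mask):
    """Illustrative consistency loss: the background predicted for view B, warped into view A
    via a precomputed sampling grid (values in [-1, 1] as expected by F.grid_sample),
    should agree with view A's predicted background wherever the correspondence is valid.
    All argument names are hypothetical."""
    warped_b = F.grid_sample(bg_b, grid_b_to_a, align_corners=True)
    diff = (bg_a - warped_b).abs()
    return (valid_mask * diff).sum() / valid_mask.sum().clamp(min=1.0)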
Advisor: Shi, Javen Qinfeng
Liu, Lingqiao
Dissertation Note: Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2021
Keywords: Image deblurring
reflection removal
deep learning
Provenance: This electronic version is made publicly available by the University of Adelaide in accordance with its open access policy for student theses. Copyright in this thesis remains with the author. This thesis may incorporate third party material which has been used by the author pursuant to Fair Dealing exceptions. If you are the owner of any included third party copyright material you wish to be removed from this electronic version, please complete the take down form located at: http://www.adelaide.edu.au/legals
Appears in Collections: Research Theses

Files in This Item:
File: Yang2021_PhD.pdf
Size: 10.86 MB
Format: Adobe PDF

