Restoring Signals from Spatially-Heterogeneous Image Distortions

Abstract

Image restoration is a classic and fundamental task in computer vision: the problem of recovering a distortion-free image from its corrupted observation. Recently, the performance of image restoration methods has improved drastically with the adoption of deep learning-based approaches. However, most previous deep learning-based methods assume that an image is corrupted by only a single type of distortion. In practice, many different types of distortion degrade image quality, and various scenarios can contaminate an image. In particular, we need to handle the more complex scenario in which multiple distortions occur simultaneously within a single image. In high-level applications such as medical image processing or 3D object reconstruction, an image is often reconstructed by merging several images captured with different devices. Because the devices differ in performance, the reconstructed image may contain spatially-varying distortions. To expand the coverage of image restoration, we propose a new image restoration task, together with a dataset, that deals with images degraded by spatially-heterogeneous distortions. We also propose a novel deep learning-based restoration method for this task. It is designed by complementarily merging two Multi-Task Learning (MTL) approaches, Mixture of Experts and Parameter Sharing. In our method, each parameter-shared expert learns meaningful features by dividing the complex restoration problem into simpler cases. Experimental results show that our proposed method outperforms existing image restoration methods.
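To make the combined idea of the two MTL approaches concrete, the following is a minimal PyTorch-style sketch of a mixture of experts in which every expert reuses the same shared convolutional body (parameter sharing) and a spatial gate fuses the expert outputs. The module names (SharedBody, MixtureOfSharedExperts) and all design details are illustrative assumptions, not the author's exact MEPSNet architecture.

```python
# Illustrative sketch only: mixture of experts with a parameter-shared body
# and per-pixel gating. Not the author's exact MEPSNet implementation.
import torch
import torch.nn as nn


class SharedBody(nn.Module):
    """Convolutional feature extractor whose weights are shared by all experts."""
    def __init__(self, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        layers = []
        for _ in range(num_blocks):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x) + x  # residual connection


class MixtureOfSharedExperts(nn.Module):
    """Each expert = shared body + its own small head; a gate weights the experts."""
    def __init__(self, in_channels: int = 3, channels: int = 64, num_experts: int = 3):
        super().__init__()
        self.head = nn.Conv2d(in_channels, channels, 3, padding=1)
        self.shared_body = SharedBody(channels)               # parameters shared across experts
        self.expert_heads = nn.ModuleList(                    # expert-specific parameters
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_experts)]
        )
        self.gate = nn.Conv2d(channels, num_experts, 1)       # spatial gating map over experts
        self.tail = nn.Conv2d(channels, in_channels, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        expert_outs = [h(self.shared_body(feat)) for h in self.expert_heads]
        stacked = torch.stack(expert_outs, dim=1)             # (B, E, C, H, W)
        weights = torch.softmax(self.gate(feat), dim=1)       # (B, E, H, W)
        fused = (stacked * weights.unsqueeze(2)).sum(dim=1)   # weighted sum over experts
        return self.tail(fused) + x                           # predict the restored image


if __name__ == "__main__":
    model = MixtureOfSharedExperts()
    restored = model(torch.randn(1, 3, 64, 64))
    print(restored.shape)  # torch.Size([1, 3, 64, 64])
```

In this sketch, the per-pixel gate allows different spatial regions of the input to rely on different experts, which mirrors the idea of handling spatially-heterogeneous distortions while keeping the bulk of the parameters shared.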

Table of Contents

I. Introduction
II. Related Works
A. Image Distortion Restoration
III. Preliminaries
A. Mixture of Experts
B. Parameter Sharing
IV. Dataset
A. Spatially-Heterogeneous Distortion Dataset
V. Methodology
A. Overview
B. Architecture of the MEPSNet
C. Mixture of Parameter Shared Experts
D. Attentive Feature Fusion
VI. Experiment
A. Implementation and Training Details
B. Model Analysis
C. Comparison Models
D. Comparison Results
VII. Conclusion
VIII. References