Description
A variety of neural network architectures are being studied to tackle blur in images and videos caused by unsteady cameras and moving objects. In this paper, we present an overview of these existing networks and perform experiments to remove the blur caused by atmospheric turbulence. Through these experiments, we aim to examine the reusability of existing networks and identify architectural aspects that would be desirable in a system geared specifically towards atmospheric turbulence mitigation. To determine what kind of architecture would benefit turbulence mitigation, we compare five networks that have shown the most success, each with unique specifications. Well-received techniques include increasing the receptive field of the network, deblurring across multiple scales, patch-based error measures, and adversarial training. As turbulence mitigation involves the additional step of image stabilization, a stabilization algorithm is used to create a single image from a sequence, which then serves as the input to the network. However, by utilizing a video deblurring network, we also aim to determine whether the stabilization algorithm can be forgone, making it possible to train in an end-to-end fashion by treating each frame as a representation of a static scene. We train on a synthetic dataset that simulates realistic blur using the physics of atmospheric turbulence and provide a comparison with optimization-based deblurring algorithms.
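The stabilization step described above can be illustrated with a minimal sketch. This is not the algorithm used in the paper: it simply registers each frame of a turbulence-distorted sequence to a reference frame via dense optical flow and averages the registered frames into a single image that could serve as the network input. The function `stabilize_sequence` and all parameter choices below are illustrative assumptions.

```python
# Illustrative sketch of frame registration + averaging as a stabilization
# front-end for a deblurring network. Not the paper's method; parameters and
# the synthetic example are assumptions for demonstration only.
import cv2
import numpy as np

def stabilize_sequence(frames):
    """Register each frame to the first frame and average the results.

    frames: list of grayscale uint8 arrays of identical shape.
    Returns a single stabilized image (float32).
    """
    ref = frames[0]
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    accum = ref.astype(np.float32)
    for frame in frames[1:]:
        # Dense optical flow from the reference frame to the current frame.
        flow = cv2.calcOpticalFlowFarneback(
            ref, frame, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        # Sample the current frame at (grid + flow), which warps it back
        # onto the reference frame's pixel grid.
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        warped = cv2.remap(frame, map_x, map_y,
                           interpolation=cv2.INTER_LINEAR)
        accum += warped.astype(np.float32)
    return accum / len(frames)

if __name__ == "__main__":
    # Synthetic example: jittered copies of a random scene stand in for a
    # turbulence-distorted sequence of a static scene.
    rng = np.random.default_rng(0)
    base = (rng.random((128, 128)) * 255).astype(np.uint8)
    seq = [np.roll(base,
                   shift=(int(rng.integers(-2, 3)), int(rng.integers(-2, 3))),
                   axis=(0, 1))
           for _ in range(8)]
    stabilized = stabilize_sequence(seq)
    print(stabilized.shape, stabilized.dtype)
```

The end-to-end alternative discussed above would skip this step entirely and feed the raw frame sequence to a video deblurring network, letting the network learn to compensate for the geometric distortion itself.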