Tsinghua Science and Technology  2019, Vol. 24 Issue (2): 238-248    doi: 10.26599/TST.2018.9010123
    
Image Blind Deblurring Using an Adaptive Patch Prior
Yongde Guo, Hongbing Ma*
∙ Yongde Guo is with the Department of Electronic Engineering, Tsinghua University, Beijing 100084, China. E-mail: gyd14@mails.tsinghua.edu.cn.
∙ Hongbing Ma is with the Department of Electronic Engineering, Tsinghua University, Beijing 100084, and the College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China.

Abstract  

Image blind deblurring estimates a blur kernel from a single degraded image corrupted by blur and noise and uses it to restore the original image with sharp features. The quality of the restoration, however, hinges on the accuracy of the estimated kernel. In this work, we propose an adaptive patch prior for improving the accuracy of kernel estimation. The proposed prior is based on local patch statistics and rebuilds low-level features, such as edges, corners, and junctions, to guide edge and texture sharpening during blur estimation. The prior is a nonparametric model whose adaptive computation relies only on internal patch information; neither heuristic filters nor external image knowledge is required. In addition, reconstructing salient step edges in blurry patches reduces noise and over-sharpening artifacts. Experiments on two popular datasets and on natural images demonstrate that the kernel estimation performance of our method is superior to that of other state-of-the-art methods.



Key words: blind deblurring; adaptive patch prior; kernel estimation; low-level features; internal patch information
Received: 05 June 2018      Published: 29 April 2019
Corresponding Authors: Hongbing Ma   
About author:

Hongbing Ma received the PhD degree from Peking University, China, in 1999. He is currently an associate professor with the Department of Electronic Engineering, Tsinghua University, China. His main research interests include image processing, pattern recognition, and spatial information processing and applications.

Cite this article:

Yongde Guo, Hongbing Ma. Image Blind Deblurring Using an Adaptive Patch Prior. Tsinghua Science and Technology, 2019, 24(2): 238-248.

URL:

http://tst.tsinghuajournals.com/10.26599/TST.2018.9010123     OR     http://tst.tsinghuajournals.com/Y2019/V24/I2/238

Fig. 1 Sample of the processed patch value distribution and regionalization in a blurry image patch. The black solid line in the right figure represents the original blurred pixel values. The blue dashed line shows the step edge derived with our method from the internal properties of the blurry patch, in contrast to the orange dashed line generated by the ordinary 2D-Otsu method. The blue step edge lies close to the extreme values of the black line. The blue region represents our segmentation result and is superimposed on the primitive blurry patch on the left.
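To make the step-edge reconstruction in Fig. 1 concrete, the following is a minimal sketch, not the authors' implementation: it uses plain 1-D Otsu thresholding in place of the 2-D variant [27], and the 5th/95th percentiles used as the two step levels are an illustrative choice.

```python
import numpy as np

def otsu_threshold(patch, bins=64):
    """Plain 1-D Otsu threshold on patch intensities.

    The paper relies on the 2-D Otsu method [27], which adds a
    neighborhood-mean axis; this sketch keeps the simpler 1-D form.
    """
    hist, edges = np.histogram(patch, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # probability of the "dark" class
    m = np.cumsum(p * centers)         # cumulative intensity mean
    mG = m[-1]                         # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)        # between-class variance per candidate threshold
    sigma_b[valid] = (mG * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def step_edge_patch(patch):
    """Replace a blurry patch with a two-level step edge.

    Each side of the segmentation is pushed toward an extreme value of
    the patch (here the 5th/95th percentiles), mimicking the sharp step
    edge shown by the blue dashed line in Fig. 1.
    """
    t = otsu_threshold(patch)
    lo, hi = np.percentile(patch, [5, 95])
    return np.where(patch > t, hi, lo)
```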
Fig. 2 Image patch decomposition for the pixel transform model. The processed patch g(u) is obtained by multiplying the base patch f(u) by the standard deviation α of the latent image and adding its average β. In blind deconvolution, we regard g(u) as the image patch prior.
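Under the model in Fig. 2, the processed patch is simply g(u) = α·f(u) + β. A minimal sketch follows; taking α and β from the corresponding latent-image patch (rather than from the whole latent image) is an assumption made only for illustration.

```python
import numpy as np

def pixel_transform(base_patch, latent_patch):
    """Form the processed patch g(u) = alpha * f(u) + beta (Fig. 2).

    base_patch   -- the base patch f(u), e.g., a reconstructed step edge
    latent_patch -- the corresponding region of the (intermediate) latent
                    image from which alpha and beta are taken (assumed here)
    """
    alpha = latent_patch.std()    # standard deviation, acts as the contrast scale
    beta = latent_patch.mean()    # average, acts as the brightness offset
    return alpha * base_patch + beta
```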
Fig. 3 Our blind deconvolution framework illustrated on a single image. The MAP scheme and an image pyramid scaling strategy are used for blind deblurring. A blurry image is used as input. Refined edges are extracted at each image scale level through edge selection. The sharp step edge for patch reconstruction is generated using 2D-Otsu. Our adaptive patch prior is then generated by the pixel transform model. The latent-image and kernel objective functions are solved alternately to update the intermediate image and the blur kernel until they converge. Finally, the sharp image is restored through nonblind deconvolution with the estimated blur kernel.
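The sketch below shows only the skeleton of such a coarse-to-fine alternating scheme and is not the authors' algorithm: both inner updates are plain gradient steps on the L2 data term ||k*x − y||², while edge selection, the adaptive patch prior, kernel rescaling across pyramid levels, and the final nonblind deconvolution are omitted; step sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def estimate_kernel(blurry, ksize=15, levels=4, iters=10, scale=0.75):
    """Skeleton of a coarse-to-fine alternating scheme in the spirit of Fig. 3."""
    blurry = np.asarray(blurry, dtype=float)
    blurry = blurry / (blurry.max() + 1e-12)          # work in [0, 1]
    h = ksize // 2

    k = np.zeros((ksize, ksize))
    k[h, h] = 1.0                                     # delta-kernel initialization

    for level in reversed(range(levels)):             # coarse -> fine pyramid
        y = zoom(blurry, scale ** level, order=1)     # blurry image at this scale
        x = y.copy()                                  # intermediate latent image

        for _ in range(iters):
            # Latent-image update: gradient step on 0.5*||k*x - y||^2.
            r = fftconvolve(x, k, mode='same') - y
            x = x - 0.5 * fftconvolve(r, k[::-1, ::-1], mode='same')

            # Kernel update: gradient step, then non-negativity and sum-to-one.
            r = fftconvolve(x, k, mode='same') - y
            corr = fftconvolve(r, x[::-1, ::-1], mode='full') / y.size
            cy, cx = (np.asarray(corr.shape) - 1) // 2      # center of the full correlation
            gk = corr[cy - h:cy - h + ksize, cx - h:cx - h + ksize]
            k = np.clip(k - 0.01 * gk, 0.0, None)
            if k.sum() == 0.0:                        # guard against a degenerate step
                k[h, h] = 1.0
            k = k / k.sum()

    return k  # pass to a nonblind deconvolution step to recover the sharp image
```

Carrying the kernel from coarse to fine levels is what makes the pyramid useful: a large blur is easy to localize at a coarse scale, where it spans only a few pixels, and is then refined at finer scales.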
Method                    | Error ratio | Success rate (%) | PSNR (dB) | SSIM
Ours                      | 1.1735      | 100.00           | 32.0684   | 0.9202
Yu et al.[10]             | 1.7166      | 96.88            | 30.6060   | 0.9006
Sun et al.[16]            | 2.2341      | 90.63            | 30.8825   | 0.9030
Pan et al.[22]            | 1.2823      | 100.00           | 31.7076   | 0.9147
Xu and Jia[13]            | 2.1365      | 93.75            | 30.7093   | 0.8974
Cho and Lee[12]           | 2.6688      | 68.75            | 29.7056   | 0.8837
Perrone and Favaro[25]    | 1.2024      | 93.75            | 32.4780   | 0.9375
Levin et al.[3]           | 2.0583      | 87.35            | 30.0500   | 0.8960
Fergus et al.[1]          | 13.5268     | 75.00            | 28.3758   | 0.8451
Table 1 Quantitative measurement obtained by various methods for the Levin et al.[4] dataset.
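As a note on the metrics in Tables 1 and 2: the error ratio follows Levin et al.[4] and is assumed here to be the SSD of the restoration obtained with the estimated kernel divided by the SSD of the restoration obtained with the ground-truth kernel, while the success rate is the percentage of images whose error ratio stays below a threshold (3, following the remark accompanying Fig. 5). A hedged sketch:

```python
import numpy as np

def error_ratio(deblurred_est_k, deblurred_true_k, ground_truth):
    """Error ratio in the sense of Levin et al. [4] (assumed definition):
    SSD of the result restored with the estimated kernel divided by the
    SSD of the result restored with the ground-truth kernel."""
    ssd_est = np.sum((deblurred_est_k - ground_truth) ** 2)
    ssd_true = np.sum((deblurred_true_k - ground_truth) ** 2)
    return ssd_est / ssd_true

def success_rate(error_ratios, threshold=3.0):
    """Percentage of test images whose error ratio stays below `threshold`."""
    r = np.asarray(error_ratios)
    return 100.0 * np.mean(r <= threshold)

def psnr(restored, ground_truth, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((restored - ground_truth) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM values such as those reported in the tables are typically computed with an existing implementation, e.g., skimage.metrics.structural_similarity.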
Fig. 4 Visualization of the deblurring results of the tested methods for the Levin et al.[4] dataset. These results show that our estimated kernel outperforms those of the other approaches and eliminates noise interference and ringing artifacts.
Fig. 5 Cumulative error ratio distribution for the Levin et al.[4] dataset. An approach whose error ratio exceeds 3 performs poorly. Note that for a certain percentage of images, the results of Cho and Lee[12] and Perrone and Favaro[25] achieve an error ratio of 1, i.e., the performance of the ground-truth kernel. Overall, however, our approach performs more consistently than the others because its success rate reaches 100% when the error ratio is under 1.5.
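For completeness, a short sketch of how a cumulative error-ratio curve like those in Figs. 5 and 6 can be drawn from per-image error ratios (matplotlib assumed; the dictionary of per-method ratios is hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_cumulative_error_ratio(method_to_ratios, max_ratio=4.0):
    """For each method, plot the fraction of test images whose error
    ratio is below a given bound, as in Figs. 5 and 6."""
    bounds = np.linspace(1.0, max_ratio, 200)
    for name, ratios in method_to_ratios.items():
        ratios = np.asarray(ratios)
        frac = [np.mean(ratios <= b) for b in bounds]
        plt.plot(bounds, frac, label=name)
    plt.xlabel('Error ratio')
    plt.ylabel('Success rate')
    plt.legend()
    plt.show()
```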
Method                    | Error ratio | Success rate (%) | PSNR (dB) | SSIM
Ours                      | 1.8980      | 97.81            | 30.1538   | 0.8610
Yu et al.[10]             | 2.2182      | 96.88            | 29.4183   | 0.8518
Sun et al.[16]            | 2.3764      | 93.44            | 29.5279   | 0.8533
Lai et al.[18]            | 2.1248      | 97.34            | 29.6081   | 0.8421
Xu and Jia[13]            | 3.6293      | 85.63            | 28.3135   | 0.8492
Cho and Lee[12]           | 8.6901      | 65.47            | 26.2353   | 0.8138
Perrone and Favaro[25]    | 9.3687      | 42.81            | 24.4213   | 0.6581
Levin et al.[3]           | 6.5577      | 46.72            | 24.9410   | 0.7952
Michaeli and Irani[17]    | 2.5662      | 95.94            | 28.6210   | 0.8279
Krishnan et al.[5]        | 12.0234     | 24.22            | 23.1708   | 0.7540
Table 2 Quantitative comparison of the performance of the tested methods on the Sun et al.[16] dataset. Our method outperforms the other tested approaches.
Fig. 6 Cumulative error ratios of the compared methods on the Sun et al.[16] dataset. The success rate and error ratio of our method are superior to those of other methods.
Fig. 7 Visual examples of the results obtained with the tested methods for the Sun et al.[16] dataset. In contrast to other methods, our method accurately estimates the blur kernel in the presence of noise artifacts and obtains a sharp deblurred image.
Fig. 8 Visual deblurring results of the state-of-the-art algorithms on a real-world image. Our estimated kernel compares favorably with those of the other tested approaches.
Fig. 9 Deblurring results of each compared method on another real-world image. Our method restores more details and reduces noise in the sharp image.
[1]   Fergus R., Singh B., Hertzmann A., Roweis S. T., and Freeman W. T., Removing camera shake from a single photograph, ACM Transactions on Graphics, vol. 25, no. 3, pp. 787-794, 2006.
[2]   Shan Q., Jia J., and Agarwala A., High-quality motion deblurring from a single image, ACM Transactions on Graphics, vol. 27, no. 3, pp. 15-19, 2008.
[3]   Levin A., Weiss Y., Durand F., and Freeman W. T., Understanding blind deconvolution algorithms, IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2354-2367, 2011.
[4]   Levin A., Weiss Y., Durand F., and Freeman W. T., Efficient marginal likelihood optimization in blind deconvolution, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011, pp. 2657-2664.
[5]   Krishnan D., Tay T., and Fergus R., Blind deconvolution using a normalized sparsity measure, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011, pp. 233-240.
[6]   Pan J. and Su Z., Fast ℓ0-regularized kernel estimation for robust motion deblurring, IEEE Signal Processing Letters, vol. 20, no. 9, pp. 841-844, 2013.
[7]   Perrone D. and Favaro P., Total variation blind deconvolution: The devil is in the details, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Columbus, OH, USA, 2014, pp. 2909-2916.
[8]   Bronstein M. M., Bronstein A. M., Zibulevsky M., and Zeevi Y. Y., Blind deconvolution of images using optimal sparse representations, IEEE Trans. Image Processing, vol. 14, no. 6, pp. 726-736, 2005.
[9]   Zhang H., Yang J., and Zhang Y., Sparse representation based blind image deblurring, in Proc. IEEE Conf. Multimedia and Expo, Barcelona, Spain, 2011, pp. 1-6.
[10]   Yu J., Chang Z., Xiao C., and Sun W., Blind image deblurring based on sparse representation and structural self-similarity, in Proc. IEEE Conf. Acoustics, Speech and Signal Processing, New Orleans, LA, USA, 2017, pp. 1328-1332.
[11]   Joshi N., Szeliski R., and Kriegman D. J., PSF estimation using sharp edge prediction, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008, pp. 1-8.
[12]   Cho S. and Lee S., Fast motion deblurring, ACM Transactions on Graphics, vol. 28, no. 5, pp. 89-97, 2009.
[13]   Xu L. and Jia J., Two-phase kernel estimation for robust motion deblurring, in Proc. European Conf. Computer Vision, 2010, pp. 157-170.
[14]   Cho T. S., Paris S., Horn B. K., and Freeman W. T., Blur kernel estimation using the radon transform, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011, pp. 241-248.
[15]   Zhou Y. and Komodakis N., A MAP-estimation framework for blind deblurring using high-level edge priors, in Proc. European Conf. Computer Vision, Cham, Switzerland, 2014, pp. 142-157.
[16]   Sun L., Cho S., and Wang J., Edge-based blur kernel estimation using patch priors, in Proc. IEEE Conf. Computational Photography, Cambridge, MA, USA, 2013, pp. 1-8.
[17]   Michaeli T. and Irani M., Blind deblurring using internal patch recurrence, in Proc. European Conf. Computer Vision, Cham, Switzerland, 2014, pp. 783-798.
[18]   Lai W. S., Ding J. J., Lin Y. Y., and Chuang Y. Y., Blur kernel estimation using normalized color-line prior, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Boston, MA, USA, 2015, pp. 64-72.
[19]   Ren W., Cao X., Pan J., Guo X., Zuo W., and Yang M., Image deblurring via enhanced low-rank prior, IEEE Trans. Image Processing, vol. 25, no. 7, pp. 3426-3437, 2016.
[20]   Hacohen Y., Shechtman E., and Lischinski D., Deblurring by example using dense correspondence, in Proc. IEEE Conf. Computer Vision, Sydney, Australia, 2013, pp. 2384-2391.
[21]   Kenig T., Kam Z., and Feuer A., Blind image deconvolution using machine learning for three-dimensional microscopy, IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2191-2204, 2010.
[22]   Pan J., Sun D., Pfister H., and Yang M. H., Blind image deblurring using dark channel prior, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 1628-1636.
[23]   Yan Y., Ren W., Guo Y., Wang R., and Cao X., Image deblurring via extreme channels prior, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017, pp. 6978-6986.
[24]   Ren W., Pan J., Cao X., and Yang M., Video deblurring via semantic segmentation and pixel-wise non-linear kernel, in Proc. IEEE Conf. Computer Vision, Venice, Italy, 2017, pp. 1086-1094.
[25]   Perrone D. and Favaro P., A logarithmic image prior for blind deconvolution, International Journal of Computer Vision, vol. 117, no. 2, pp. 159-172, 2016.
[26]   Szeliski R., Computer Vision: Algorithms and Applications. Springer Science+Business Media, 2010.
[27]   Liu J., Li W., and Tian Y., Automatic thresholding of gray-level pictures using two-dimension Otsu method, in Proc. IEEE Conf. Circuits and Systems, Shenzhen, China, 1991, pp. 325-327.
[28]   Zontak M. and Irani M., Internal statistics of a single natural image, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011, pp. 977-984.
[29]   Joshi N., Zitnick C. L., Szeliski R., and Kriegman D. J., Image deblurring and denoising using color priors, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Miami, FL, USA, 2009, pp. 1550-1557.
[30]   Zoran D. and Weiss Y., From learning models of natural image patches to whole image restoration, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Barcelona, Spain, 2011, pp. 479-486.