Monday, June 3, 2019
Image Super-Resolver using Cascaded Linear Regression
Abstract — A number of existing super-resolution algorithms fail in modeling the relationship between high- and low-resolution image patches, and incur high time complexity in training the model. To overcome these problems, a simple, effective, robust and fast image super-resolver (SERF) based on cascaded linear regression is used to learn the model parameters. The image is divided into patches, which are grouped into clusters using the k-means clustering algorithm; the model parameters are then learned through a series of linear least-squares functions, named cascaded linear regression, to recover the missing detail information. This approach has been simulated using MATLAB for various images. The simulation results show that SERF gives better PSNR and lower computation cost compared to existing methods.

Keywords — Cascaded linear regression, example learning based image super-resolution, K-means.

Super-Resolution (SR) is the process of producing a high-resolution (HR) image or video from low-resolution images or frames. In this technology, multiple low-resolution (LR) images are used to obtain a single high-resolution image. Image super-resolution is applied in a wide range of areas, including military, medicine, public safety and computer vision, all of which are in great need of this technology. The SR process is an ill-posed inverse problem, since the estimation of an HR image from an LR input image has many possible solutions. There are many SR algorithms available to address this ill-posed problem. The interpolation-based method is the most intuitive one for image super-resolution: the low-resolution image is registered on the grid of the high-resolution image to be calculated. The reconstruction-based method is mainly based on the iterative back-projection method.
This algorithm is convergent, simple and direct, but the resolution is not steady and unique. Because of the limitations of reconstruction algorithms, learning-based super-resolution technology has emerged as an active research area. The learning-based approach synthesizes an HR image from a training set of HR and LR image pairs. This approach commonly works on image patches (equal-sized patches divided from the original image, with overlaps between neighbouring patches). Since the learning-based method achieves good performance for HR image recovery, most recent techniques adopt this methodology.

Freeman et al. [1] describe a learning-based method for the low-level vision problem of estimating scenes from images, modeling the relation between a synthetic world of images and its corresponding scenes with a Markov network. This technique uses Bayesian belief propagation to find a local maximum of the posterior probability for the scene of a given image, and shows the benefits of applying machine learning and large datasets to the problem of visual interpretation. Sun et al. [2] use a Bayesian approach to image hallucination, where HR images are hallucinated from generic LR images using a set of training images. For practical applications, the robustness of this Bayesian approach is limited by an inaccurate PSF. To avoid estimating the PSF, Wang et al. [3] propose a framework based on the annealed Gibbs sampling method. This framework utilizes both an SR reconstruction constraint and a patch-based image synthesis constraint in a general probabilistic model, and also has the potential to address other low-level vision problems. A new approach was introduced by Yang et al. [4] for single image super-resolution via sparse representation: with the help of a sparse model of the low-resolution input image, the output high-resolution image can be generated. This method is superior to the patch-based super-resolution method [3].
Zeyde et al. [5] proposed a sparse representation model for the single image scale-up problem; this method reduces the computational complexity and simplifies the algorithmic architecture compared with the model in [6]. The authors of [7] introduce sparsity-based single image super-resolution by proposing a structure-prior-based sparse representation, but this model lags in the estimation of the model parameters and the sparse representation. Freedman and Fattal [8] extend the existing example-based learning framework for up-scaling in single image super-resolution. This method follows a local similarity assumption on images and extracts localized regions from the input image, retaining image quality while reducing the nearest-neighbour search time. Some recent techniques for single image SR learn a mapping from the LR domain to the HR domain through regression. Inspired by the concept of regression [9], Kim [10] and Zhang et al. [11] use regression models to estimate the missing detail information for the SR problem. Yang and Wang [12] presented a self-learning approach for SR, which combines support vector regression (SVR) with image sparse representation to model the relationship between the LR and HR domains; it follows Bayes decision theory to select the SVR model that produces the minimum SR reconstruction error. Kim and Kwon [13] proposed kernel ridge regression (KRR) to train the model parameters for single image SR. He and Siu [14] presented a model which estimates the parameters using Gaussian process regression (GPR). Some efforts have been made to reduce the time complexity. Timofte et al. [15] proposed anchored neighbourhood regression (ANR) with projection matrices for mapping LR image patches onto HR image patches. Yang et al. [16] combined two fundamental SR approaches, learning from external datasets and learning from self-examples; the effects of noise and visual artifacts are controlled by combining the regression on multiple in-place examples for better estimation.
Dong et al. [17], [18] proposed a deep convolutional neural network (CNN) to model the relationship between LR and HR images. This model performs end-to-end mapping, which formulates the non-linear mapping and jointly optimizes all layers.

An important issue of example-learning-based image SR techniques is how to model the mapping relationship between LR and HR image patches: most existing models are either hard to adapt to diverse natural images or consume a lot of time to train the model parameters. The existing regression functions cannot model the complicated mapping relationship between LR and HR images. Considering this problem, a new image super-resolver for single image SR has been developed, consisting of a cascaded linear regression (series of linear regressions) function. In this method, the images are first subdivided into equal-sized image patches, and these image patches are grouped into clusters during the training phase. Then, for each cluster the model parameters are learned by a series of linear regressions, thereby reducing the gap of missing detail information. Linear regression produces a closed-form solution, which makes the proposed method simple and efficient.

The paper is organized as follows. Section II describes the series of linear regressions, results are discussed in Section III and Section IV concludes the paper.

Inspired by the concept of linear regression for face detection [19], a series-of-linear-regression framework is used for image super-resolution. This section explains the framework of cascaded linear regression and how to use it for image SR.

A. Series of Linear Regression Framework

The main idea behind cascaded linear regression is to learn a set of linear regression functions for each cluster, thereby gradually decreasing the gap in high-frequency details between the estimated HR image patches and the ground truth image patches.
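The preparation step of this framework — divide the image into equal-sized overlapping patches, subtract each patch's mean to obtain feature patches, and group them into clusters — can be sketched in Python/NumPy. The paper's own experiments use MATLAB, so this is an illustrative reimplementation with assumed function names; the 5 x 5 patch size and 200 clusters match the settings reported later in the experiments.

```python
import numpy as np

def extract_patches(img, d=5, step=1):
    """Slide a d x d window over img with the given step (overlapping patches)."""
    H, W = img.shape
    patches = []
    for i in range(0, H - d + 1, step):
        for j in range(0, W - d + 1, step):
            patches.append(img[i:i + d, j:j + d].ravel())
    return np.array(patches)

def feature_patches(patches):
    """High-frequency features: subtract each patch's own mean value."""
    return patches - patches.mean(axis=1, keepdims=True)

def kmeans(X, c=200, iters=20, seed=0):
    """Plain k-means returning cluster centres and assignments."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        # assign each feature patch to its nearest centre (Euclidean distance)
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for k in range(c):
            if np.any(labels == k):          # keep old centre if cluster is empty
                centres[k] = X[labels == k].mean(axis=0)
    return centres, labels
```

The mean subtraction mirrors the feature extraction described in the implementation section: regression then only has to model the missing high-frequency residual, not absolute intensities.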
In order to produce the original HR image from the LR input image, the LR image is first interpolated to obtain an interpolated LR image of the same size as the HR image. This method works at the patch level: each linear regressor computes an increment from the previous image patch, and the current image patch is then updated in a cascaded manner,

x^(t) = x^(t-1) + Δx^(t)    (1)
Δx^(t) = A^(t) φ(x^(t-1)) + b^(t)    (2)

where x^(t) denotes the estimated image patch after t stages, Δx^(t) denotes the estimated increment, φ(·) denotes the feature extractor by which the f-dimensional feature vector is obtained, A^(t) and b^(t) are the linear regressor parameters at stage t, and T is the total number of regression stages.

The next step is learning the linear regression parameters A^(t) and b^(t) for the T stages. Relying on these T stages, the regressor parameters are learnt subsequently to reduce the total reconstruction error and to make the currently updated image patch more appropriate for generating the HR patch. Using a least-squares form to optimize A^(t) and b^(t), this can be written as

min over A^(t), b^(t):  Σ_i || x_i* - ( x_i^(t-1) + A^(t) φ(x_i^(t-1)) + b^(t) ) ||^2 + λ || A^(t) ||_F^2    (3)

where the first term is the data fidelity term and the second is the regularization term, which imposes a constraint on the linear regression parameters A^(t) and b^(t) to avoid over-fitting. At each regression stage, a new dataset can be created by recurrently applying the update rule in (1) with the learned A^(t) and b^(t); the parameters of the next stage can then be learned using (3) in a cascaded manner.

Fig. 1. Flow of the cascaded linear regression framework.

B. Pseudo Code for the Cascaded Linear Regression Algorithm

The pseudo code of the cascaded linear regression algorithm for the training phase is given below.

Input: training set, image patch size d x d
for t = 1 to T do
    Apply k-means to obtain c cluster centres
    for i = 1 to c do
        Compute A and b.
        Update the values of A and b in the model.
    end for
end for

The output of this training phase is the set of regressor parameters and cluster centroids.

C. SERF Image Super-Resolver

This section deals with the cascaded linear regression based SERF image super-resolver.
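The training loop in the pseudo code above — fit A and b for each cluster in closed form, then apply the update of equations (1)-(2) before fitting the next stage — might look as follows. This is a hedged Python/NumPy sketch, not the authors' MATLAB code: it keeps a fixed cluster assignment across stages for brevity (the pseudo code re-runs k-means at every stage), and the names `fit_stage`, `train_cascade` and the value of `lam` are illustrative assumptions.

```python
import numpy as np

def fit_stage(F_cur, F_hr, labels, c, lam=0.25):
    """One cascade stage: per cluster, solve the regularized least squares of
    eq. (3) in closed form for the regressor pair (A, b)."""
    f = F_cur.shape[1]
    params = {}
    for k in range(c):
        idx = labels == k
        if not np.any(idx):
            continue
        X = F_cur[idx]                                   # current patches in cluster k
        R = F_hr[idx] - X                                # residual detail to regress
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])    # augmented with 1s for offset b
        reg = lam * np.eye(f + 1)
        reg[-1, -1] = 0.0                                # leave the offset unpenalized
        W = np.linalg.solve(Xa.T @ Xa + reg, Xa.T @ R)   # closed-form ridge solution
        params[k] = (W[:-1].T, W[-1])                    # A is f x f, b has length f
    return params

def train_cascade(F0, F_hr, labels, c, T=3, lam=0.25):
    """Learn T stages; after each stage, apply the update rule (1)-(2) so the
    next stage only fits what is still missing (the cascade)."""
    F, stages = F0.copy(), []
    for _ in range(T):
        params = fit_stage(F, F_hr, labels, c, lam)
        stages.append(params)
        for k, (A, b) in params.items():
            idx = labels == k
            F[idx] = F[idx] + F[idx] @ A.T + b
    return stages
```

Because each stage is an ordinary ridge regression, training reduces to one small linear solve per cluster per stage, which is what makes the method fast compared with iterative or kernel-based regressors.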
The process starts by converting the color image from the RGB space into the YCbCr space, where the Y channel represents luminance and the Cb and Cr channels represent chromaticity. SERF is only applied to the Y channel; the Cb and Cr channels are simply copied from the interpolated LR image.

D. SERF Implementation

The high-frequency details are extracted from each patch by subtracting the mean value from the patch, giving the feature patch y. Since this frequency content is missing from the initially estimated image patches, the goal of the series of linear regressions is to compensate for the high-frequency detail,

min over A^(t), b^(t):  Σ_i || y_i* - ( y_i^(t-1) + A^(t) y_i^(t-1) + b^(t) ) ||^2    (4)

To diminish the error between the HR feature patch and the estimated feature patch, the regression output should be small. Hence, adding a regularization constraint to (4) gives

min over A^(t), b^(t):  Σ_i || y_i* - ( y_i^(t-1) + A^(t) y_i^(t-1) + b^(t) ) ||^2 + λ || A^(t) ||_F^2    (5)

where λ is the regularization parameter, t denotes the regression stage and y denotes the feature patch; the corresponding parameters are set to 1 and 0.25, respectively. A closed-form solution for equation (5) can be computed by setting the partial derivatives of (5) to zero.

In the testing phase, for a given LR image, bicubic interpolation is applied to upsample it by a factor of r. This interpolated image is divided into M image patches, and feature patches are calculated by subtracting the mean value from each image patch. At the t-th stage, each feature patch is assigned to a cluster l according to the Euclidean distance to the cluster centres. The linear regression parameters of that cluster are then applied to compute the increment, and the feature patch is updated using

y^(t) = y^(t-1) + A_l^(t) y^(t-1) + b_l^(t)    (6)

After passing through the T stages, the reconstructed image patches are obtained by adding the mean values back to the final feature patches. All the reconstructed patches are then combined, with the overlapping areas averaged, to generate the original HR image.

E.
Pseudo Code for the SERF Image Super-Resolver Algorithm

The pseudo code of the SERF image super-resolver algorithm is as follows.

Inputs: Y, a, r
for t = 1 to T do
    Assign each feature patch to a cluster.
    Compute the increment.
    Update the feature patch using A and b.
end for

The output is the high-resolution (HR) image.

The simulation of the SERF image super-resolver is done using MATLAB R2013a for various images. The LR image is read from the image folder and is processed using the algorithms explained before; the output HR image is obtained after the regression stages. The implementation considers many reference images. The colour (RGB) image is first converted into the YCbCr space, where the Y channel represents luminance; Cb and Cr are simply copied from the interpolated LR image. The number of clusters is 200, the image patch size is 5 x 5 and the magnification factor is set to 3.

Fig. 2. SERF result under magnification factor 3: (a) LR input, (b) HR output, (c) zooming result.
Fig. 3. SERF result under magnification factor 2: (a) LR input, (b) HR output, (c) zooming result.
Fig. 4. SERF result under magnification factor 1: (a) LR input, (b) HR output, (c) zooming result.
Fig. 5. Comparison results for Butterfly: (a) ground truth image (original size 256 x 256); super-resolution results of (b) SRCNN, (c) ScSR, (d) Zeyde's method, (e) ANR, (f) BPJDL, (g) SPM, and (h) SERF.

Zeyde's method [5] gives a noiseless image, but texture details are not well reconstructed, as shown in Fig. 5(d). The BPJDL method [14] generates sharper edges compared to the other methods, as shown in Fig. 5(f).
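The test phase described above — assign each mean-subtracted feature patch to its nearest cluster centre, apply that cluster's regressor via equation (6) for T stages, then add the means back and overlap-average — can be sketched as follows, together with the PSNR metric used in the evaluation. This is an illustrative Python/NumPy sketch; all function names are assumptions, and `stages` is assumed to hold the per-cluster (A, b) pairs learned in training.

```python
import numpy as np

def apply_cascade(F, centres, stages):
    """Test-time update of eq. (6): at each stage, assign every feature patch
    to its nearest cluster centre (Euclidean) and apply that cluster's (A, b)."""
    F = F.copy()
    for params in stages:                        # stages: list of {cluster: (A, b)}
        d2 = ((F[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        for k, (A, b) in params.items():
            idx = labels == k
            if np.any(idx):
                F[idx] = F[idx] + F[idx] @ A.T + b
    return F

def overlap_average(patches, means, positions, shape, d=5):
    """Add each patch's mean back, paste it at its position, and average the
    overlapping regions to form the final HR image."""
    out, weight = np.zeros(shape), np.zeros(shape)
    for p, m, (i, j) in zip(patches, means, positions):
        out[i:i + d, j:j + d] += p.reshape(d, d) + m
        weight[i:i + d, j:j + d] += 1.0
    return out / np.maximum(weight, 1.0)

def psnr(ref, est, peak=255.0):
    """PSNR in dB, the standard 10*log10(peak^2 / MSE) used in the tables."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Re-assigning clusters at every stage matches the testing pseudo code: as the feature patches gain high-frequency detail, they may migrate to a different cluster and therefore use a different regressor at the next stage.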
Fig. 5(h) shows the zooming result of the SERF method, which performs well for both reconstruction and visual artifact suppression.

TABLE I. PSNR AND SSIM VALUES UNDER MAGNIFICATION FACTORS OF 1, 2 AND 3.

Magnification Factor | PSNR    | SSIM  | Time (s)
3                    | 29.0775 | 0.839 | 0.4323
2                    | 30.5    | 0.812 | 0.4000
1                    | 38.4    | 0.798 | 0.3870

TABLE II. PSNR AND SSIM VALUES UNDER MAGNIFICATION FACTOR OF 3 FOR TESTING IMAGES.

S.No | Image     | PSNR  | SSIM  | Time (s)
1    | Baboon    | 23.63 | 0.532 | 0.3115
2    | Baby      | 35.29 | 0.906 | 0.4148
3    | Butterfly | 26.87 | 0.883 | 0.2018
4    | Comic     | 24.32 | 0.755 | 0.2208
5    | Man       | 28.19 | 0.778 | 0.5468
6    | Zebra     | 29.09 | 0.839 | 0.4324

For a magnification factor of 3, SERF outperforms the ScSR method by an average PSNR gain of 0.43 dB, Zeyde's method [5] by 0.37 dB, ANR [15] by 0.44 dB, the BPJDL method [14] by 0.23 dB and the SPM method [7] by 0.16 dB. SERF gives an average SSIM value of 0.8352 and is faster than the existing methods (Table III).

TABLE III. PSNR AND SSIM COMPARISON OF THE SERF METHOD WITH EXISTING METHODS UNDER MAGNIFICATION FACTOR OF 3.

Method     | PSNR    | SSIM   | Time (s)
ScSR [4]   | 23.69   | 0.8835 | 7.27
Zeyde [5]  | 23.60   | 0.8765 | 0.06
ANR [15]   | 24.32   | 0.8687 | 0.02
BPJDL [14] | 24.17   | 0.8890 | 17.85
SPM [7]    | 24.63   | 0.8982 | 0.74
SERF       | 29.0775 | 0.8352 | 0.23

SERF has few parameters to control the model, which makes it easy to adapt when training a new model with changed experimental settings, zooming factors or databases. The cascaded linear regression algorithm and the SERF image super-resolver have been simulated in MATLAB R2013a. The SERF image super-resolver achieves better performance with sharper details for magnification factors up to 3. The model gradually reduces the gap in high-frequency details between the HR image patch and the LR image patch, and thus recovers the HR image in a cascaded manner; this cascading process ensures the convergence of the SERF image super-resolver. The method can also be applied to other heterogeneous image transformation fields such as face sketch-photo synthesis. Further, this algorithm will be implemented on FPGA by proposing suitable VLSI architectures.

REFERENCES

[1] W. Freeman, E. Pasztor, and O.
Carmichael, "Learning low-level vision," International Journal of Computer Vision, vol. 40, no. 1, pp. 25-47, 2000.
[2] J. Sun, N. Zheng, H. Tao, and H. Shum, "Image hallucination with primal sketch priors," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2003, pp. 729-736.
[3] Q. Wang, X. Tang, and H. Shum, "Patch based blind image super resolution," in Proceedings of the IEEE International Conference on Computer Vision, 2005, pp. 709-716.
[4] J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861-2873, 2010.
[5] R. Zeyde, M. Elad, and M. Protter, "On single image scale-up using sparse-representations," in Proceedings of Curves and Surfaces, 2012, pp. 711-730.
[6] X. Gao, K. Zhang, D. Tao, and X. Li, "Joint learning for single-image super-resolution via a coupled constraint," IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 469-480, 2012.
[7] K. Zhang, X. Gao, D. Tao, and X. Li, "Single image super-resolution with multiscale similarity learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 10, pp. 1648-1659, 2013.
[8] G. Freedman and R. Fattal, "Image and video upscaling from local self-examples," ACM Transactions on Graphics, vol. 28, no. 3, pp. 1-10, 2011.
[9] K. Zhang, D. Tao, X. Gao, X. Li, and Z. Xiong, "Learning multiple linear mappings for efficient single image super-resolution," IEEE Transactions on Image Processing, vol. 24, no. 3, pp. 846-861, 2015.
[10] K. Kim, D. Kim, and J. Kim, "Example-based learning for image super-resolution," in Proceedings of the Tsinghua-KAIST Joint Workshop on Pattern Recognition, 2004, pp. 140-148.
[11] K. Zhang, D. Tao, X. Gao, X. Li, and Z. Xiong, "Learning multiple linear mappings for efficient single image super-resolution," IEEE Transactions on Image Processing, vol. 24, no. 3, pp. 846-861, 2015.
[12] M. Yang and Y. Wang, "A self-learning approach to single image super-resolution," IEEE Transactions on Multimedia, vol.
15, no. 3, pp. 498-508, 2013.
[13] K. Kim and Y. Kwon, "Single-image super-resolution using sparse regression and natural image prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1127-1133, 2010.
[14] H. He and W. Siu, "Single image super-resolution using Gaussian process regression," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 449-456.
[15] R. Timofte, V. De Smet, and L. Van Gool, "Anchored neighborhood regression for fast example-based super-resolution," in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1920-1927.
[16] J. Yang, Z. Lin, and S. Cohen, "Fast image super-resolution based on in-place example regression," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 1059-1066.
[17] C. Dong, C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in Proceedings of the European Conference on Computer Vision, 2014, pp. 184-199.
[18] C. Dong, C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, DOI: 10.1109/TPAMI.2015.2439281, 2015.
[19] P. Viola and M. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.