IMAGE QUALITY METRICS BASED MULTI-FOCUS IMAGE FUSION

Amina Saleem (1), Azeddine Beghdadi (1) and Boualem Boashash (2)
1. L2TI, Institut Galilée, Université Paris 13, Villetaneuse, France
2. Qatar University College of Engineering, Doha, Qatar; Adjunct Professor, University of Queensland, Brisbane, Australia

ABSTRACT

We describe a new method for block based multi-focus image fusion in the spatial domain. The main idea is to use blind (no-reference) blur metrics to drive the fusion. Two blur quality metrics, one based on edge content information and the other on a Radon-Wigner-Ville based mean directional entropy, are developed in the paper and used as activity measures. Fusion is performed by computing the blur based activity measure for each sub-window of the source images and applying a maximum-selection criterion to the magnitudes of these metrics. Furthermore, the defined metrics can be used for quantitative performance assessment of multi-focus fused images. The results of the proposed algorithm are compared to some existing image fusion methods.

Index Terms— Image fusion, image quality, performance evaluation

1. INTRODUCTION

The aim of image fusion algorithms is to combine complementary and redundant information from multiple images (from the same or different sensors) to generate a composite that gives a better description of the scene than any of the individual source images. Image fusion is recognized as a useful tool for improving overall system performance in image-based application areas such as defense surveillance, remote sensing, medical imaging, machine vision and biometrics. Image fusion methods are classified according to the stage at which fusion takes place [2]; they are broadly categorized as pixel level, feature level and decision level fusion [1, 2]. Pixel level fusion integrates low level information such as intensity: the value at each pixel of the fused image is determined from a set of pixels in the source images. Feature level fusion requires the extraction of salient features such as edges or textures; these features from the source images are then fused to obtain the composite image. Decision level fusion is a higher level of fusion in which the input images are processed individually for information extraction and the extracted information is combined by applying decision rules that reinforce a common interpretation. In imaging applications where images are captured by CCD devices, we obtain images of the



same scene that are not in focus everywhere: if one object is in focus, another will be out of focus. This occurs because the sensor cannot image objects at different distances with equal sharpness. One way to overcome this problem is to take the different in-focus parts of a scene and merge them into a composite image that is in focus everywhere. This is also useful in digital camera design and industrial inspection applications, where the need to visualize objects at very short distances complicates the preservation of depth of field. Many techniques for the multi-focus image fusion problem have been proposed in the literature. The wavelet transform [7], wavelet transform contrast [8], contourlet transform [9] and Haar transform [10] are some of the transform based methods used for multi-focus fusion. In [5], the ratio of blurred and original image intensities is used to guide the fusion process. A multi-focus image fusion method using spatial frequency (SF) and morphological operators is proposed in [8]. In [11], the application of artificial neural networks to the pixel level multi-focus image fusion problem is presented. Most image fusion techniques use pixel based methods [3, 4]. The advantage of pixel level fusion is that the composite image contains the original information. Compared to transform based methods, spatial methods are in general computationally less expensive and incur less information loss, as no forward and inverse transformations are involved. In this work, we propose a block based multi-focus image fusion algorithm using no-reference blur quality metrics. The previous methods rely on indirect measures (such as spatial frequency, visibility and transform coefficients) to estimate the sharpness or contrast of the multi-focus images; we instead introduce image quality metrics that directly measure the sharpness or contrast of the source images to yield the fused output. The basic idea is to divide the source images into blocks and then select, for each position, the corresponding block with less blur as measured by the metrics defined in the paper. The performance assessment of image fusion algorithms is a difficult task due to the unavailability of ground truth; however, metrics can be defined for a specific image fusion task. We further show that the blur metrics can be used as quantitative measures for the performance assessment of multi-focus fused images. The paper is organized as follows. The following section introduces


briefly the blur quality metrics. Section 3 describes the multi-focus image fusion framework. The next section is devoted to the performance evaluation of multi-focus image fusion in general and of the proposed method in particular. The results are discussed in Section 5. Finally, we conclude the paper in Section 6.

2. BLUR QUALITY METRICS

Different contrast structures in images provide information on sharp edges and texture components. In [12], A. Turiel and N. Parga present a formalism that leads to a natural hierarchical description of the different contrast structures in images, from textures to sharp edges. A quantity, the edge content, is defined that accumulates all contrast changes, whatever their strength, inside a scale r. Using the edge content information, we define a metric to measure the blur degradation of images in a no-reference scenario, i.e., a metric that can sort a set of blurred images and select the best one. In [13], a blind image quality metric based on measuring image anisotropy is proposed; it is shown that image anisotropy is sensitive to noise and blur, so quality can be measured in this way. The variance of the expected entropy is measured as a function of directionality and taken as an anisotropy indicator. In our work, we use the mean Rényi entropy of the Radon-Wigner-Ville transformed image as an indicator of blur. Based on experiments, we hypothesize that blur reduces both the edge content EC and the mean directional Rényi entropy. Following this line of reasoning, we extend the concepts of EC and anisotropy to serve as activity measures for the multi-focus image fusion problem.

2.1. Edge content

Image-processing techniques emphasize that edges are the most prominent structures in a scene, as they cannot be predicted easily. Changes in contrast are relevant because they occur at the most informative pixels of a scene [12]. In [12], precise definitions of edges and other texture components of images were proposed. Contrast changes are described by a quantity, the edge content EC, that accumulates all of them, whatever their strength, contained inside a scale r:

    EC = \epsilon_r(\vec{x}) = \frac{1}{r^2} \int_{x_1 - r/2}^{x_1 + r/2} dx_1' \int_{x_2 - r/2}^{x_2 + r/2} dx_2' \left| \nabla C(\vec{x}\,') \right|

The contrast C(\vec{x}) is taken as C(\vec{x}) = I(\vec{x}) - \langle I \rangle, where I(\vec{x}) is the field of luminosities and \langle I \rangle is its average value across the ensemble. The bi-dimensional integral on the right-hand side, defined on the set of pixels contained in a square of linear size r, is a measure of that square. It is divided by the factor r^2, which is the Lebesgue measure (denoted by \lambda) of a square of linear size r. The quantity \epsilon_r(\vec{x}) can then be regarded as the ratio between these two measures. More generally, we define the measure \mu(A), the edge measure EM of a subset A, as

    EM = \mu(A) = \int_A d\vec{x}\,' \left| \nabla C(\vec{x}\,') \right|

where d\vec{x}\,' denotes a bi-dimensional integration over the set A. It is also possible to generalize the definition of \epsilon_r to any subset A as

    \epsilon_A = \frac{\mu(A)}{\lambda(A)}

The discrete formulation is given by

    EC = \frac{1}{m \times n} \sum_{x_1} \sum_{x_2} \left| \nabla C(\vec{x}) \right|

where m and n denote the size of the image block over which EC is calculated, with 1 \le x_1 \le m and 1 \le x_2 \le n.

Contrast changes are distributed over images in such a way that EC receives large contributions even from pixels that are very close together, and EC increases with decreasing blur. The behaviour of the metric is tested by applying it to a set of randomly selected natural images, as proposed in [13]: a set of natural images progressively degraded by increasing blur is taken, and the value of EC for this image set is plotted in Figure 1(b). Figure 1(b) shows that the value decreases with increasing blur. Similar results are observed for a number of other natural images and for images from the LIVE database [21] (not included here). The metric was also tested in a full-reference setting on ringing and blur degradations, giving high correlations.


2.2. Radon-Wigner-Ville based blur metric

In this paper we also develop a second methodology, based on anisotropy, for the blind assessment of blur in images. To capture the directional information of an image, Radon profiles are taken ten degrees apart; it has been experimentally validated in [14] that profiles taken ten degrees apart retain all the important information in an image. The one-dimensional Wigner-Ville distribution [18] gives the time-frequency distribution corresponding to each Radon profile, and the Rényi entropy with the parameter \alpha set to 3 is used to determine the directional information in each WVD-transformed Radon profile. The mean (expected) value of these directional entropies is used as a metric for the blind assessment of blur. Let R_I(I)(\theta, s) be the Radon transform of the image I(\vec{x}) and R_I(I)(\theta_i, s_i) the Radon profiles corresponding to the angles \theta = [\theta_1, \theta_2, ..., \theta_p]. Let the Rényi entropy of each Radon profile be denoted by R_3(i). The image quality metric is then defined as

    \bar{R} = \frac{1}{p} \sum_i R_3(i)

where p is the total number of Radon profiles of the image. The discrete Wigner-Ville distribution W_z(n, k) of a function z(n) is given by

    W_z(n, k) = 2 \, \mathrm{DFT}_{m \to k} \{ z[n+m] \, z^*[n-m] \}, \quad m \in \langle N \rangle

where \langle N \rangle means any set of N consecutive integers, and the third-order Rényi information applied to the time-frequency distribution W_z(n, k) is

    R_3 = -\frac{1}{2} \log_2 \left( \sum_n \sum_k W_z^3(n, k) \right)

Third-order entropy is used to avoid problems caused by the presence of oscillatory cross terms, which are ignored by the Rényi entropy based measure for \alpha = 3. The value of the metric \bar{R} decreases with increasing blur, as validated by experiments and simulations. The results are further validated by applying the metric to a set of blur-degraded images from the LIVE database [21] to test whether it correlates with human judgment; the blur metric for the selected LIVE images is given in Table 1. The metric is also compared with two well-known image quality metrics, PSNR and SSIM. The proposed metric matches human judgment well and can therefore be used to sort blurred images according to their visual quality in image processing applications where no reference or ground truth is available. Such no-reference (blind) metrics are particularly suitable for image fusion and enhancement applications, where ground truth is absent. The calculation of this metric involves a transformation to the frequency domain via the Wigner-Ville transform; its complexity remains modest, however, because only a 1D transform is applied to a small number of Radon profiles (exploiting the characteristics of the HVS).
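The sketch below illustrates one way this metric could be computed, assuming scikit-image's radon transform and a direct O(N^2) pseudo-Wigner-Ville implementation; normalizing the distribution to unit sum before taking the Rényi entropy is a conventional step we assume here, and all function names are illustrative.

    import numpy as np
    from skimage.transform import radon

    def wigner_ville(z):
        """Discrete Wigner-Ville distribution of a 1-D signal:
        W_z(n, k) = 2 * DFT_{m->k}{ z[n+m] * conj(z[n-m]) }, one FFT per n."""
        z = np.asarray(z, dtype=complex)
        N = len(z)
        W = np.zeros((N, N))
        for n in range(N):
            m_max = min(n, N - 1 - n)        # lags keeping n+m and n-m in range
            r = np.zeros(N, dtype=complex)
            for m in range(-m_max, m_max + 1):
                r[m % N] = z[n + m] * np.conj(z[n - m])
            W[n] = 2.0 * np.fft.fft(r).real  # real-valued for real input signals
        return W

    def renyi3(W):
        """Third-order Renyi entropy of a time-frequency distribution,
        normalized to unit sum (the cubic sum is assumed positive)."""
        P = W / W.sum()
        return -0.5 * np.log2((P ** 3).sum())

    def rwv_blur_metric(image, step_deg=10.0):
        """Mean directional Renyi entropy over Radon profiles taken
        step_deg degrees apart (10 degrees, following [14])."""
        theta = np.arange(0.0, 180.0, step_deg)
        sinogram = radon(np.asarray(image, dtype=float), theta=theta)
        return float(np.mean([renyi3(wigner_ville(sinogram[:, i]))
                              for i in range(sinogram.shape[1])]))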

𝐢 π‘˜ (𝑦𝐴 , 𝑦𝐡 , 𝛿) = 𝑀𝐴 (𝛿)𝑦1 + 𝑀𝐡 (𝛿)𝑦2 where the weights 𝑀𝐴 (𝛿), 𝑀𝐡 (𝛿) depend on the decision parameter. For our application, the decision map decides that block with the most salient activity measure is the best choice for the composite output image and tells the combination map to select it. { 1 if 𝑆 = arg max𝑠 𝐸𝐢 π‘˜ 𝑀𝑆 (π‘‘π‘˜ (.)) = 0 otherwise where 𝑆 is the index set of the source images. Thus we use a selective rule that picks the most salient component, i.e. the one with largest activity. After applying the combination map we get: βˆ’π‘˜

π‘˜ (.) where 𝑀 = arg max 𝐸𝐢 π‘˜ or 𝑀 = arg max 𝑅 π‘¦πΉπ‘˜ (.) = 𝑦𝑀 𝑠

𝑠

βˆ’

𝐸𝐢 is the edge content and 𝑅 the mean renyi entropy used as activity measures.

3. MULTI-FOCUS IMAGE FUSION

4. PERFORMANCE METRIC FOR MULTI-FOCUS IMAGE FUSION PROBLEM
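Putting the pieces together, a compact sketch of this block-based selection rule might read as follows; the block size is an illustrative choice, and either of the activity measures sketched above (edge_content or rwv_blur_metric) can be passed in as the activity callable.

    import numpy as np

    def fuse_blocks(img_a, img_b, activity, block=32):
        """Block-based multi-focus fusion of two co-registered grayscale
        images: each block is taken from the source whose activity
        measure (EC or R-bar) is largest."""
        fused = np.empty(img_a.shape, dtype=float)
        H, W = img_a.shape
        for i in range(0, H, block):
            for j in range(0, W, block):
                sl = (slice(i, min(i + block, H)), slice(j, min(j + block, W)))
                a, b = img_a[sl], img_b[sl]
                fused[sl] = a if activity(a) >= activity(b) else b  # max-selection
        return fused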

4. PERFORMANCE METRIC FOR THE MULTI-FOCUS IMAGE FUSION PROBLEM

The performance of most fused images is evaluated subjectively. Subjective methods for image quality assessment are known to be complex, time consuming and expensive, while objective assessment is difficult because it requires ground truth. Various image fusion algorithms in the literature [15] are evaluated objectively by constructing an ideal output image through a manual 'cut and paste' process [16, 17]; mean square error is also widely used for evaluation. Mutual information is used in [17] as a no-reference measure of fusion performance. Xydeas and Petrovic [19] proposed a metric based on the relative amount of edge information transferred from the source images to the fused image, and a similarity based image metric for fusion is defined in [20]. Although performance assessment for the image fusion problem is a difficult task, it can be simplified by defining parameters for particular fusion tasks. This amounts to defining reduced-reference image quality metrics for which we have certain information about the source images and the application. Based on this idea, we propose that the blur quality metrics defined in our work can be used as quantitative metrics for the performance of a particular image fusion problem. We calculate the edge content and the Radon-Wigner-Ville based mean directional entropy to evaluate the performance of our proposed algorithm.
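Under these definitions, evaluation reduces to scoring the fused output with the same no-reference metrics used to drive the fusion. A hypothetical usage, reusing the sketches above on a pair of co-registered grayscale sources img_a and img_b (loading assumed to happen elsewhere), might be:

    fused = fuse_blocks(img_a, img_b, activity=edge_content)
    print("EC A:", edge_content(img_a),
          "EC B:", edge_content(img_b),
          "EC fused:", edge_content(fused))  # fused EC should exceed both sources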


5. RESULTS AND DISCUSSION

In this section we present some examples to illustrate the proposed method. We have tested the method on a number of multi-focus image pairs, obtaining similar results; two examples are given here to illustrate the fusion process described above. In the first example (the clock images) EC is used as the activity measure, and in the second example (the cup images) the Radon-Wigner-Ville metric is used. Figure 3 shows two out-of-focus clock and cup images, respectively, and their corresponding composite images. The performance of these methods according to the defined metrics is given in Table 2. From the experiments, we see that the proposed fusion method gives accurate results, as it is based on the selection of in-focus blocks from the input sources, transferring the relevant information from the sources to the composite image without loss, since the fusion is performed in the spatial rather than the transform domain. Next, we compare our method to some existing multiresolution (MR) approaches: Burt's method [23], comprising a Laplacian pyramid decomposition; the steerable pyramid with Burt and Kolczynski's combination algorithm [24]; and the lifting-scheme based MR decomposition on a quincunx lattice [25]. These algorithms are implemented using the MATIFUS toolbox for image fusion [22]. Averaging and weighted-averaging methodologies using PCA have also been proposed in the literature; however, averaging produces a loss of contrast, so comparisons with averaging methods are not presented in this work. The composite images obtained by the proposed method, the steerable pyramid method, the quincunx lifting scheme and the Laplacian pyramid are shown in Figure 2, which shows that the EC based fusion method outperforms the other MR based fusion schemes, followed by the Laplacian pyramid based MR method. Note, for instance, the loss of contrast and the blurring introduced at the edges in Figures 2(b) and 2(c) for the steerable pyramid and the quincunx lifting schemes. We can also observe a loss of contrast for the Laplacian based MR methodology. This can be attributed to the averaging performed on the approximation coefficients in the MR based fusion methods. Some general requirements for image fusion methods are that no salient information be lost and that no artifacts be introduced by the fusion process. The proposed method effectively meets the objectives of image fusion [26] described above, i.e. no information is lost, as the method involves the selection of in-focus blocks from the source images. There is no smoothing of edges or reduction in contrast of the kind introduced by the averaging and transform based


fusion methodologies. Because the method is implemented in the spatial domain, it is computationally much less expensive than transform based fusion methods. The computation of the blur quality metric does involve computing the 1D PWVD of the Radon profiles of an image; it must be emphasized, however, that this transformation is used only to compute the metric, while the fusion itself is performed in the spatial domain, so the complexity of the proposed algorithm remains lower than that of transform based methods, which require forward and inverse 2D transforms. The performance of the proposed method is, however, sensitive to the block size. Some information is lost if the block size is so large that a block overlaps in-focus and out-of-focus regions of the source images. Moreover, the Radon-Wigner-Ville based blur metric is effective for larger block sizes, because the entropy values are not meaningful when the block is too small. The Radon based metric is therefore most useful for complementary blurred images, complementary in the sense that the blurring occurs in the left half of one source and the right half of the other; in this case large block sizes can be used, yielding meaningful entropy values.

6. CONCLUSION

A novel block based multi-focus image fusion method is presented in this work. The idea developed is to use blur image quality metrics to guide multi-focus image fusion in the spatial domain. There is little or no loss of information, as no forward and inverse transformations or averaging operations are involved, and for the same reason the proposed method is computationally less expensive. Moreover, the blur metrics proposed here can be used as quantitative measures for the multi-focus image fusion problem as well as for the blind sorting of blurred images (according to visual quality) in other image processing applications. The proposed method provides results superior to some existing algorithms, as validated by experiments. However, further investigation of the automatic selection of the block size needs to be performed.


Fig. 1. (a) Original image. (b) EC of the ten progressively blurred test images derived from (a).


Fig. 2. Examples of fusion by the proposed and some existing methods: (a) our method, (b) steerable pyramid method, (c) quincunx lifting scheme, (d) Laplacian pyramid.

Fig. 3. (a, b) Multi-focus clock source images; (c) fused image using the mean directional entropy metric R. (d, e) Multi-focus cup source images; (f) fused image using EC.

Table 1. "Coins" and "Dancers" images taken from the LIVE database for blur degradation. The number in brackets next to each image number is the standard deviation of the Gaussian blur kernel.

  Coins         Index      Dancers       Index
  #146 (0.00)   1.0000     #149 (0.00)   1.0000
  #32  (0.56)   0.9520     #5   (0.56)   0.9362
  #101 (0.84)   0.8840     #21  (0.90)   0.8167
  #22  (1.45)   0.6213     #93  (0.96)   0.7869
  #19  (1.70)   0.4862     #128 (1.47)   0.4884
  #123 (2.51)   0.0000     #41  (2.16)   1.0000

Table 2. Values of the proposed metrics for the clock and cup images.

  Images        Metric EC      Images       Metric R
  Clock img 1   6.44           Cup img 1    0.03
  Clock img 2   4.62           Cup img 2    0.02
  Fused img     7.13           Fused img    0.04

Table 3. Blur metric EC for the progressively blurred test images.

  Image      Metric EC     Image     Metric EC
  Orig img   16.3273       Img 5     7.5523
  Img 1      9.7140        Img 6     7.3575
  Img 2      8.6574        Img 7     7.2009
  Img 3      8.1476        Img 8     7.0717
  Img 4      7.8036        Img 9     6.9622

7. REFERENCES

[1] R. C. Luo and M. G. Kay, "Multisensor Integration and Fusion for Intelligent Machines and Systems," Norwood, NJ, USA: Ablex Publishing Corporation, 1995, ISBN 0-89391-863-6.
[2] C. Pohl and J. L. Genderen, "Multisensor image fusion in remote sensing: concepts, methods and applications," International Journal of Remote Sensing, no. 5, pp. 823-854, 1998.
[3] Z. H. Li, Z. L. Jing, G. Liu, S. Y. Sun, and H. Leung, "Pixel visibility based multifocus image fusion," in Proceedings of the International Conference on Neural Networks and Signal Processing, 2003, pp. 1050-1053.
[4] S. Li, J. T. Kwok, and Y. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, issue 3, pp. 169-176, Sep. 2001.
[5] M. Qiguang, W. Baoshu, and R. A. Schowengerdt, "Multi-focus image fusion using ratio of blurred and original image intensities," in Proceedings of SPIE, Visual Information Processing XIV, Orlando, FL, USA, 29-30 March 2005.
[6] B. Yang and S. Li, "Multi-focus image fusion based on spatial frequency and morphological operators," Chinese Optics Letters, vol. 5, issue 8, pp. 452-453.
[7] W. Wang, P. Shui, and G. Song, "Multifocus image fusion in wavelet domain," in International Conference on Machine Learning and Cybernetics, vol. 5, pp. 2887-2890, Nov. 2003.
[8] D. Lu, L. Wang, and J. Lv, "New multi-focus image fusion scheme based on wavelet contrast," in Proceedings (534) Signal and Image Processing, 2006.
[9] L. Yang, B. Guo, and W. Ni, "Multifocus image fusion algorithm based on contourlet decomposition and region statistics," in Fourth International Conference on Image and Graphics (ICIG), pp. 707-712, 2007, Chengdu, Sichuan, China.
[10] C. Toxqui-Quitl, A. Padilla-Vivanco, and G. Urcid-Serrano, "Multifocus image fusion using the Haar wavelet transform," in Applications of Digital Image Processing XXVII, Proceedings of the SPIE, vol. 5558, pp. 796-803, Aug. 2004.
[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, issue 8, pp. 985-997, 2002.
[12] A. Turiel and N. Parga, "The multifractal structure of contrast changes in natural images: from sharp edges to textures," Neural Computation, vol. 12, issue 4, pp. 763-793, April 2000.
[13] S. Gabarda and G. Cristobal, "Blind image quality assessment through anisotropy," JOSA A, vol. 24, issue 12, pp. B42-B51, 2007.
[14] A. Saleem, A. Beghdadi, A. Chetouani, and B. Boashash, "A Radon-Wigner-Ville based image dissimilarity measure," in IEEE Symposium on Computational Intelligence for Multimedia, Signal and Vision Processing (CIMSIVP 2011), Paris, France, April 11-15, 2011.
[15] G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Information Fusion, vol. 4, issue 4, pp. 255-280, 2003.
[16] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[17] O. Rockinger, "Image sequence fusion using a shift invariant wavelet transform," in Proc. IEEE International Conference on Image Processing, Washington, DC, pp. 288-291, 1997.
[18] B. Boashash, Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Elsevier Science, Oxford, 2003.
[19] G. H. Qu, D. L. Zhang, and P. F. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, issue 7, pp. 313-315, 2002.
[20] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81-84, 2002.
[21] http://live.ece.utexas.edu/research/quality/
[22] http://homepages.cwi.nl/~pauldz/index.html
[23] P. J. Burt, "The pyramid as a structure for efficient computation," in Multiresolution Image Processing and Analysis, A. Rosenfeld, Ed., Springer-Verlag, Berlin, 1984.
[24] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proceedings of the 4th International Conference on Computer Vision, pp. 173-182, Berlin, Germany, May 1993.
[25] H. J. A. M. Heijmans and J. Goutsias, "Nonlinear multiresolution signal decomposition schemes, Part II: Morphological wavelets," IEEE Transactions on Image Processing, vol. 9, issue 11, pp. 1897-1913, 2000.
[26] O. Rockinger, "Pixel-level fusion of image sequences using wavelet frames," in Proceedings of the 16th Leeds Annual Statistical Research Workshop, Leeds University Press, pp. 149-154, 1996.
[27] J. A. Richards, "Thematic mapping from multitemporal image data using the principal component transformation," Remote Sensing of Environment, vol. 16, issue 1, pp. 36-46, 1984.