ICGST-GVIP Journal, Volume 9, Issue 5, September 2009, ISSN: 1687-398X

Image Retrieval using Color-Texture Features from DCT on VQ Codevectors obtained by Kekre's Fast Codebook Generation

H.B. Kekre#, Tanuja K. Sarode^, Sudeep D. Thepade*
# Senior Professor, ^ Ph.D. Research Scholar, * Ph.D. Research Scholar & Asst. Professor,
Mukesh Patel School of Technology, Management and Engineering, SVKM's NMIMS University, Mumbai, INDIA
^ Asst. Professor, Thadomal Shahani Engineering College, Mumbai, INDIA
# [email protected], ^ [email protected], * [email protected]

Abstract
A novel technique for image retrieval is proposed, using color-texture features extracted from images by vector quantization with Kekre's Fast Codebook Generation (KFCG). This gives better discrimination capability for Content Based Image Retrieval (CBIR). Each database image is divided into 2x2 pixel windows to obtain 12 color descriptors per window (red, green and blue values per pixel), which form one vector. The collection of all such vectors is a training set. KFCG is then applied to this set to obtain 16 codevectors. The Discrete Cosine Transform (DCT) is applied to these codevectors after converting them to a column vector, and the transformed vector is used as the image signature (feature vector) for retrieval. The method needs far fewer computations than a conventional DCT applied to the complete image, and it captures the color-texture features of the database images with a reduced feature-set size. The proposed method also avoids the resizing of images that any transform-based feature extraction method would otherwise require.

Keywords: CBIR, Vector Quantization, KFCG, Color-Texture Features.

1. Introduction
The large numbers of image collections have posed increasing technical challenges to computer systems to store/transmit and index/manage image data effectively [1],[2],[3]. The storage and transmission challenge is tackled by image compression [4],[5],[6]. The challenge of image retrieval has been studied in the context of image databases [7],[8],[9], and has been attempted by researchers from a wide range of disciplines, from computer vision [10] and image processing [11]-[24] to traditional database areas, for over a decade [16],[18]. Researchers have found that the process of locating a desired image in a large and varied collection can be a source of considerable frustration [20],[25],[26]. Problems with traditional methods of image indexing [1],[14],[16],[26],[27] have led to rising interest in techniques for retrieving images on the basis of automatically derived features such as color, texture and shape, a technology now generally referred to as Content-Based Image Retrieval (CBIR). After a decade of intensive research, CBIR technology is beginning to move out of the laboratory and into the marketplace, in the form of commercial products like QBIC [24] and Virage [23]. However, the technology still lacks maturity and is not yet being used on a significant scale [1],[28]. In the absence of hard evidence on the effectiveness of CBIR in practice, opinion is still sharply divided about its usefulness in handling real-life queries in large and diverse image collections [10]. A wide range of possible applications for CBIR technology has been identified [1],[7],[8]: art galleries, museums and archaeology [21],[29], architecture/engineering design [19],[29], geographic information systems [1], weather forecasting [18],[20], medical imaging [17],[27], trademark databases [15],[20], criminal investigation [13],[15], and image search on the Internet [7],[8],[16],[17]. The problems of image retrieval are becoming widely recognized, and the search for solutions is an increasingly active area of research and development [30].

There are mainly two streams of research in image retrieval. The database community focuses mainly on image indexing [17], whereas image processing groups concentrate on representing the image content in the form of feature descriptors [11]. Most current image indexing practices rely mainly on color, texture or shape features, and the performance of an image retrieval technique improves if these features are combined and considered together [1],[20]. In the usual combination of color and texture features, the red, green and blue planes are considered separately and texture features are then extracted from each color plane. The limitation of such color-texture features is that they represent the individual color planes of the image separately; considering the red, green and blue planes together to obtain color-texture features is a better way of representing the color content of an image.

Section 2 elaborates the theoretical considerations, section 3 gives the codebook generation algorithm, section 4 presents the proposed CBIR method, section 5 gives results and discussion, and section 6 presents the conclusions.


2. Theoretical Considerations

A. Content Based Image Retrieval
The earliest use of the term content-based image retrieval in the literature seems to have been by Kato et al. [31], to describe his experiments in automatic retrieval of images from a database by color and shape features. The term has since been widely used to describe the process of retrieving desired images from a large collection on the basis of features (such as color, texture and shape) that can be automatically extracted from the images themselves. A typical CBIR system performs two major tasks [25],[26]. The first is feature extraction (FE), in which a set of features, called the image signature or feature vector, is generated to accurately represent the content of each image in the database. A feature vector is much smaller than the original image, typically of the order of hundreds of elements (rather than millions). The second task is similarity measurement (SM), in which a distance between the query image and each database image is computed from their signatures so that the top "closest" images can be retrieved. Non-transform based image coding can make image features such as color more easily available. For example, recent work on color image coding using vector quantization [2],[32],[33] has demonstrated that color as well as pattern information can be readily available in the compressed image stream (without performing decoding) to be used as image indices for effective and efficient image retrieval, i.e. image indexing using a colored-pattern appearance model.

B. Vector Quantization (VQ)
VQ [4],[34],[35],[36],[37] can be defined as a mapping function that maps a k-dimensional vector space to the finite set CB = {C1, C2, C3, ..., CN}. The set CB is called the codebook, consisting of N codevectors, where each codevector Ci = {ci1, ci2, ci3, ..., cik} is of dimension k. The key to VQ is a good codebook. A codebook can be generated in the spatial domain by clustering algorithms [2],[5],[6],[9],[38],[39] or using transform domain techniques [40]-[43]. In the encoding phase the image is divided into non-overlapping blocks and each block is converted to a training vector Xi = (xi1, xi2, ..., xik). The codebook is then searched for the nearest codevector Cmin by computing the squared Euclidian distance of the vector Xi to every codevector of the codebook CB, as given in equation (1). This method is called exhaustive search (ES):

d(X_i, C_{min}) = \min_{1 \le j \le N} \{ d(X_i, C_j) \}, \qquad d(X_i, C_j) = \sum_{p=1}^{k} (x_{ip} - c_{jp})^2        (1)

Although the exhaustive search (ES) method gives the optimal result, it involves heavy computational complexity. From equation (1), obtaining the nearest codevector for one training vector requires N Euclidian distance computations, where N is the size of the codebook, so M image training vectors require MxN Euclidian distance computations. If the codebook size is decreased the search time also decreases, at the cost of increased distortion and decreased accuracy. To reduce the search time, various search algorithms are available in the literature [44]-[51]; all of them reduce the computational cost of VQ encoding while keeping the image quality close to that of the exhaustive search algorithm.
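As a concrete illustration of equation (1), a minimal exhaustive-search encoder might look as follows (an illustrative NumPy sketch; the function and variable names are the editor's, not the paper's):

```python
import numpy as np

def exhaustive_search_encode(training_vectors, codebook):
    """For each k-dimensional training vector X_i, find the index of the
    nearest codevector C_min by squared Euclidean distance (equation (1))."""
    indices = []
    for x in training_vectors:                      # M training vectors
        # squared Euclidean distance to every codevector (N distances per vector)
        d = np.sum((codebook - x) ** 2, axis=1)
        indices.append(int(np.argmin(d)))           # index of C_min
    return np.array(indices)

# Toy usage: M = 6 random 12-dimensional vectors, codebook of N = 4 codevectors
rng = np.random.default_rng(0)
X = rng.random((6, 12))
CB = rng.random((4, 12))
print(exhaustive_search_encode(X, CB))              # nearest codevector index per vector
```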

C. Similarity Measure [25],[27],[52]
Many current retrieval systems take a simple approach, using norm-based distances (e.g. the Euclidian distance [25]) on the extracted feature set as the similarity function. The main premise behind such CBIR systems is that, given a "good" set of features extracted from the images in the database, two images are "similar" if their extracted features are "close" to each other. The Euclidian distance and the correlation coefficient [52]-[54] are the most commonly used similarity measures in CBIR. The correlation coefficient measures the cosine of the angle between the two vectors and lies between -1 and 1; when it is 1 the vectors are aligned, but their magnitudes may differ. In contrast, the Euclidian measure gives the distance between the vectors; when it is 0 the vectors are not only aligned but also equal in magnitude. Here the Euclidian distance is preferred as the similarity measure. The direct Euclidian distance between a database image P and a query image Q is given by

ED = \sqrt{ \sum_{i=1}^{n} (V_{pi} - V_{qi})^2 }        (2)

where V_{pi} and V_{qi} are the feature vectors of image P and query image Q respectively, each of size n.
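A small sketch contrasting the two measures on a toy pair of feature vectors (illustrative NumPy code; names are the editor's):

```python
import numpy as np

def euclidean_distance(vp, vq):
    """Direct Euclidean distance between two feature vectors (equation (2))."""
    return float(np.sqrt(np.sum((vp - vq) ** 2)))

def cosine_score(vp, vq):
    """Cosine of the angle between the two vectors: 1 means aligned,
    irrespective of magnitude (the correlation-style measure in the text)."""
    return float(np.dot(vp, vq) / (np.linalg.norm(vp) * np.linalg.norm(vq)))

vp = np.array([1.0, 2.0, 3.0])
vq = np.array([2.0, 4.0, 6.0])          # same direction, different magnitude
print(cosine_score(vp, vq))             # 1.0  (aligned)
print(euclidean_distance(vp, vq))       # > 0  (magnitudes differ)
```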

3. Kekre's Fast Codebook Generation (KFCG) Algorithm
The KFCG algorithm for image data compression was proposed in [45],[46]. It reduces the time for codebook generation and does not use the Euclidian distance at all. The image is divided into blocks and the blocks are converted to vectors of size k. Initially there is one cluster containing the entire training set, with codevector C1 as its centroid. In the first iteration the clusters are formed by comparing the first element of each training vector with the first element of the codevector C1: vector Xi is put into cluster 1 if xi1 < c11, otherwise into cluster 2, as shown in Figure 1a for a two-dimensional codevector space. In the second iteration, cluster 1 is split into two by comparing the second element xi2 of every vector Xi in cluster 1 with the second element of that cluster's centroid codevector; cluster 2 is split in the same way using the second element of its own centroid, as shown in Figure 1b. This procedure is repeated until the codebook reaches the size specified by the user. It is observed that this algorithm gives less error than LBG and requires the least codebook-generation time among the compared algorithms [45], since it requires no Euclidian distance computations.

Figure 1 (a and b): KFCG algorithm for the two-dimensional case.
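A minimal sketch of the KFCG splitting procedure described above (illustrative NumPy code; the handling of empty halves and the exact cycling of the comparison element are the editor's assumptions):

```python
import numpy as np

def kfcg_codebook(training_vectors, codebook_size):
    """Kekre's Fast Codebook Generation (sketch).

    Clusters are split by comparing one vector element against the same
    element of the cluster centroid -- no Euclidean distances are computed.
    The element used for splitting advances by one on every iteration,
    cycling through the k dimensions."""
    X = np.asarray(training_vectors, dtype=float)
    k = X.shape[1]
    clusters = [X]                         # start with one cluster: the whole training set
    element = 0
    while len(clusters) < codebook_size:
        new_clusters = []
        for c in clusters:
            centroid = c.mean(axis=0)      # current codevector of this cluster
            low = c[c[:, element] < centroid[element]]
            high = c[c[:, element] >= centroid[element]]
            # keep only non-empty halves so centroids stay well defined
            new_clusters.extend([part for part in (low, high) if len(part)])
        clusters = new_clusters
        element = (element + 1) % k        # next iteration compares the next element
    return np.array([c.mean(axis=0) for c in clusters[:codebook_size]])

# Toy usage: 12-dimensional training vectors, 16 codevectors as in the paper
rng = np.random.default_rng(1)
codebook = kfcg_codebook(rng.random((500, 12)), 16)
print(codebook.shape)                      # (16, 12)
```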


4. Proposed CBIR Technique

A. Feature Extraction
Figure 2: Flowchart for feature extraction using the proposed technique.

The feature vector space has 16x12 elements, obtained using the following steps of Kekre's Fast Codebook Generation (KFCG) algorithm (a code sketch of the full pipeline follows the list):
1. The image is divided into windows of size 2x2 pixels (each pixel consisting of red, green and blue components).
2. The 12 values of each window are put in a row to form one vector; the collection of these vectors is the training set (initial cluster).
3. Compute the centroid (codevector) of the cluster.
4. Compare the first element of each training vector with the first element of the codevector and split the cluster into two.
5. Compute the centroids of both clusters obtained in step 4.
6. Split both clusters by comparing the second element of the training vectors with the second element of the respective codevectors.
7. Repeat the process until a codebook of size 16 is obtained.
8. The codebook is then converted to one dimension of size 16x12 = 192 and the DCT is applied to it to obtain a feature vector of size 192.
9. The result is stored as the feature vector of the image. In this way the feature vector database is generated.
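A compact end-to-end sketch of steps 1-9 (illustrative code; it reuses the kfcg_codebook function from the sketch in section 3, and the plain O(n^2) DCT-II written here from its textbook definition is an assumption about the transform's exact form, not taken from the paper):

```python
import numpy as np

def dct_1d(x):
    """Plain DCT-II of a 1-D signal, written out from the textbook definition
    (O(n^2) multiplications, matching the operation counts quoted in section 5)."""
    n = len(x)
    k = np.arange(n)
    return np.array([np.sum(x * np.cos(np.pi * (2 * k + 1) * u / (2 * n))) for u in range(n)])

def extract_feature_vector(image):
    """Steps 1-9 above: 2x2 RGB windows -> 12-dim training vectors ->
    16-codevector KFCG codebook -> flatten to 192 values -> 1-D DCT."""
    h, w, _ = image.shape
    blocks = (image[0:h - h % 2, 0:w - w % 2, :]
              .reshape(h // 2, 2, w // 2, 2, 3)
              .swapaxes(1, 2)
              .reshape(-1, 12))                    # one 12-value row per 2x2 window
    codebook = kfcg_codebook(blocks, 16)           # from the KFCG sketch in section 3
    return dct_1d(codebook.reshape(192))           # 192-element image signature

# Toy usage on a random 128x128 RGB image (the database image size used in the paper)
rng = np.random.default_rng(2)
signature = extract_feature_vector(rng.random((128, 128, 3)))
print(signature.shape)                             # (192,)
```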

B. Query Execution
Figure 3: Flowchart for query execution.

For the query image, a 16x12 codebook is generated using Kekre's Fast Codebook Generation algorithm, and the feature vector of the query image is obtained by applying the DCT to this codebook. This feature vector is compared with the feature vectors in the feature database using the Euclidian distance as the similarity measure. Compared with taking the complete DCT of the image, the proposed method takes the DCT of the codebook only, which saves a tremendous number of computations. Since the codebook size is the same for all images, no resizing of images is needed for feature extraction or query execution, whereas resizing is necessary for other transform-based CBIR techniques.
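A small sketch of query execution against the feature database (illustrative code, reusing extract_feature_vector and euclidean_distance from the earlier sketches; top_k is the editor's name for the number of retrieved images):

```python
import numpy as np

def retrieve(query_image, feature_db, top_k=72):
    """Rank database images by Euclidean distance between their 192-element
    DCT-of-codebook signatures; return the indices of the closest images."""
    q = extract_feature_vector(query_image)        # signature of the query image
    distances = [euclidean_distance(q, f) for f in feature_db]
    return np.argsort(distances)[:top_k]           # smallest distance first

# Toy usage: a "database" of 5 random images and a random query
rng = np.random.default_rng(3)
db_images = [rng.random((128, 128, 3)) for _ in range(5)]
feature_db = [extract_feature_vector(img) for img in db_images]
print(retrieve(rng.random((128, 128, 3)), feature_db, top_k=3))
```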

5. Results and Discussion
Vector quantization is used for CBIR because texture is one of the important aspects of image content. A block of 2x2 color pixels is taken as one vector, making the training vector dimension equal to 12. The method is implemented on a machine with an Intel 1.66 GHz processor and 1 GB RAM. The database consists of 1080 images of size 128x128x3, in 15 different categories of 72 images each. To test the proposed method, five query images are selected randomly from every class, so 75 query images are used in all. The performance of the proposed technique is evaluated using precision and recall, whose standard definitions are given by the following equations.


Precision = (Number of relevant images retrieved) / (Total number of images retrieved)        (3)

Recall = (Number of relevant images retrieved) / (Total number of relevant images in the database)        (4)
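A tiny worked example of these two measures (illustrative code; the image identifiers are made up):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall (equations (3) and (4)) for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)                 # relevant images actually retrieved
    return hits / len(retrieved), hits / len(relevant)

# Example: 72 images retrieved, 60 of them belong to the query's 72-image category
retrieved = range(72)
relevant = list(range(60)) + list(range(100, 112))   # 72 relevant images in the database
p, r = precision_recall(retrieved, relevant)
print(round(p, 3), round(r, 3))                      # 0.833 0.833
```

When the number of retrieved images equals the number of relevant images in the database (72 here), precision and recall coincide, which is why the paper reports a single value per category for the first 72 results.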

For a full 2-dimensional DCT of a 3xNxM image, the number of multiplications required is 3xNxMx(N+M) and the number of additions is 3xNxMx(N+M-2). Here a codebook of size 16x12 is generated using KFCG, which is based only on comparisons. The codebook is converted to one dimension of size 192 and the DCT is applied to it to obtain the feature vector of size 192. A 1-D DCT of size 192 requires (192)^2 multiplications and 192x191 additions. The computational complexity of the proposed method is therefore 99% less than that of a full DCT on images of size 128x128x3.

Figure 4 shows a sample of the database obtained by randomly selecting one image from each of the 15 categories; the database contains 1080 images in total. The image database used in the experiments is a subset of the Columbia Object Image Library (COIL-100) [55], which contains 100 different objects (classes), each photographed at 72 different rotation angles, giving 7200 images.

Figure 5 shows the results for a Yellow-Cat query image. The database contains 72 Yellow-Cat images in total, and for this query all 72 relevant images are obtained among the 72 retrieved images. Figure 6 shows plots of precision/recall versus the number of retrieved images for all 15 categories: from each category five images are chosen randomly as queries, precision and recall are computed for every query, and the average over the five queries is plotted against the number of retrieved images. The crossover point of precision and recall varies from 58% to 90% across the categories. Figure 7 shows the net average precision/recall versus the number of retrieved images over all categories; the crossover point obtained is 82%. Table 1 gives the number of relevant images in the set of the first 72 retrieved images for all 15 categories and 5 queries each.
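The multiplication counts quoted above can be verified with a few lines of arithmetic (illustrative sketch using the formulas in this section):

```python
# Multiplications for a full 2-D DCT on a 3 x N x M image vs. the proposed
# 1-D DCT on the flattened 16x12 codebook (192 elements).
N, M = 128, 128
full_dct_mults = 3 * N * M * (N + M)        # 12,582,912
proposed_mults = 192 ** 2                   # 36,864
saving = 100 * (1 - proposed_mults / full_dct_mults)
print(full_dct_mults, proposed_mults, round(saving, 2))   # 12582912 36864 99.71
```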

Figure 4: Sample of the database, one image from each of the 15 categories (1080 images in total). [Category names, left to right and top to bottom: a. Paper-Box, b. White-Stripped-Cup, c. Wooden-Cup, d. Wooden-T, e. Blue-Bottle, f. White-Cat, g. Yellow-Toy-Car, h. Flowery-Cup, i. Yellow-Cat, j. Jar-Cup, k. Red-Toy-Engine, l. Birdy, m. Roller-Object, n. White-Bottle, o. Orange-Toy-Jeep]


Figure 5: Results for a Yellow-Cat query image (query image and retrieved images). The database contains 72 Yellow-Cat images in total; for this query, all 72 relevant images are obtained among the 72 retrieved images.

Table 1. Average Precision/Recall for all 15 categories for first 72 result images

Total relevant images obtained in the first 72 result images:

Category              a     b     c     d     e     f     g     h     i     j     k     l     m     n     o
Q1                   61    52    70    45    56    44    64    72    72    72    61    72    60    37    72
Q2                   61    49    72    54    56    49    64    72    72    72    56    72    59    39    72
Q3                   60    60    71    59    57    39    64    72    72    72    55    72    56    38    72
Q4                   60    57    71    60    54    42    64    72    72    72    51    72    57    41    72
Q5                   59    69    72    62    54    43    65    72    72    72    48    72    57    40    72
Minimum              59    49    70    45    54    39    64    72    72    72    48    72    56    37    72
Maximum              61    69    72    62    57    49    65    72    72    72    61    72    60    41    72
Average              60  57.4    71    56    55    43    64    72    72    72  54.2    72    58    39    72
% Precision/Recall   84  79.7    99    78    77    60    89   100   100   100  75.3   100    80  54.2   100

[Categories: a. Paper-Box, b. White-Stripped-Cup, c. Wooden-Cup, d. Wooden-T, e. Blue-Bottle, f. White-Cat, g. Yellow-Toy-Car, h. Flowery-Cup, i. Yellow-Cat, j. Jar-Cup, k. Red-Toy-Engine, l. Birdy, m. Roller-Object, n. White-Bottle, o. Orange-Toy-Jeep]


Figure 6: Plots of average precision/recall versus number of retrieved images for all 15 categories, one panel per category (a. Paper-Box, b. White-Stripped-Cup, c. Wooden-Cup, d. Wooden-T, e. Blue-Bottle, f. White-Cat, g. Yellow-Toy-Car, h. Flowery-Cup, i. Yellow-Cat, j. Jar-Cup, k. Red-Toy-Engine, l. Birdy, m. Roller-Object, n. White-Bottle, o. Orange-Toy-Jeep). From each category five images are chosen randomly as query images; for every query, precision and recall are computed, and the average of the five is plotted against the number of retrieved images.


Figure 7: Net average precision/recall versus number of retrieved images for all categories.

6. Conclusion
Today we live in the information age, where the advent of technology has created something of an information explosion, and images have a giant share of this information. More precise retrieval techniques are needed to access the large image archives being generated and to find relatively similar images. In this paper a novel image retrieval technique is proposed using vector quantization, a popular technique for data compression. For the VQ step, the KFCG algorithm is used to generate the codebook; it is very fast because it involves no Euclidian distance computations. This CBIR technique has far less complexity than using the full DCT: for images of size 128x128x3 the computational complexity of the proposed method is 99% less than that of the full DCT, which translates into only about 1% of the time per query image for retrieval. The proposed technique also avoids the resizing of images for feature extraction, which is necessary when any transform technique is applied directly to the image.

7. References
[1] H.B. Kekre, Sudeep D. Thepade, "Rendering Futuristic Image Retrieval System", in Proc. of National Conference EC2IT-2009, KJSCOE, Mumbai, 20-21 Mar 2009.
[2] Zhibin P., Kotani K., Ohmi T., "Enhanced fast encoding method for vector quantization by finding an optimally-ordered Walsh transform kernel", ICIP05, IEEE Int. Conf., Vol. 1, pp. I-573, Sept. 2005.
[3] Zaher Al Aghbari and Ruba Al-Haj, "Building SSegTree for Image Representation and Retrieval", ICGST Int. Journal on Graphics, Vision and Image Processing (GVIP), Special Issue on Image Retrieval and Representation, Vol. 6, 2006, pp. 101-109.
[4] H.B. Kekre, Tanuja K. Sarode, "New Fast Improved Clustering Algorithm for Codebook Generation for Vector Quantization", in Proc. of Int. Conf. ICETAETS, Saurashtra University, Gujarat (India), 13-14 January 2008.
[5] Robert Li and Jung Kim, "Image Compression Using Fast Transformed Vector Quantization", IEEE Applied Imagery Pattern Recognition Workshop, 2000 Proceedings, Volume 29, 2000, pp. 141-145.
[6] Chin-Chen Chang, Wen-Chuan Wu, "Fast Planar-Oriented Ripple Search Algorithm for Hyperspace VQ Codebook", IEEE Transactions on Image Processing, Vol. 16, No. 6, June 2007.
[7] H.B. Kekre, Sudeep D. Thepade, "Ubicomp The Future of Computing Technology", TechnoPath: Journal of Science Technology and Management, Volume 1, Issue 2, 2009.
[8] H.B. Kekre, Sudeep Thepade, Bhushan Deshmukh, "WEB 3.0 The Astonishing Avtar of Web", TechnoPath: Journal of Science Technology and Management, Volume 1, Issue 2, 2009.
[9] S. J. Wang and C. H. Yang, "Hierarchy-oriented searching algorithms using alternative duplicate codewords for vector quantization mechanism", Appl. Math. Comput., Vol. 162, No. 234, pp. 559-576, Mar. 2005.
[10] B.G. Prasad, K.K. Biswas, and S. K. Gupta, "Region-based image retrieval using integrated color, shape, and location index", Int. Journal on Computer Vision and Image Understanding, Special Issue: Colour for Image Indexing and Retrieval, Volume 94, Issues 1-3, April-June 2004, pp. 193-233.
[11] Minh N. Do and Martin Vetterli, "Wavelet-Based Texture Retrieval Using Generalized Gaussian Density and Kullback-Leibler Distance", IEEE Transactions on Image Processing, Vol. 11, No. 2, pp. 146-158, Feb 2002.
[12] H.B. Kekre, Tanuja K. Sarode, "Clustering Algorithms for Codebook Generation Using Vector Quantization", National Conference on Image Processing NCIP-2005, TSEC, Mumbai.
[13] H.B. Kekre, Tanuja Sarode, Sudeep D. Thepade, "DCT Applied to Row Mean and Column Vectors in Fingerprint Identification", in Proc. of Int. Conf. on Computer Networks and Security (ICCNS), 27-28 Sept. 2008, VIT, Pune.
[14] Y. Rui, T. S. Huang, S. Mehrotra, M. Ortega, "Automatic matching tool selection using relevance feedback in MARS", in Proc. of Int. Conf. on Visual Information Systems, pp. 109-116, San Diego, CA, Dec. 1997.


[15] J. P. Eakins, J. M. Boardman, and M. E. Graham, "Trademark image retrieval by shape similarity", IEEE Multimedia, Volume 5, Number 2, pp. 53-63, 1998.
[16] M. La Cascia, S. Sethi, S. Sclaroff, "Combining textual and visual cues for content-based image retrieval on the world wide web", in IEEE Workshop on Content-based Access of Image and Video Libraries, pp. 24-28, Santa Barbara, CA, June 1998.
[17] C. M. Thomas, S. Belongie, J. M. Hellerstein, J. Malik, "Blobworld: a system for region-based image indexing and retrieval", in Visual Information and Information Systems (VISUAL), LNCS 1614, pp. 509-516, Amsterdam, The Netherlands, June 1999.
[18] S. Sclaroff, L. Taycher, and M. La Cascia, "ImageRover: a content-based image browser for the world wide web", in IEEE Workshop on Content-based Access of Image and Video Libraries, pp. 2-9, San Juan, Puerto Rico, June 1997.
[19] H.B. Kekre, Sudeep D. Thepade, "Image Blending in Vista Creation using Kekre's LUV Color Space", SPIT-IEEE Colloquium and Int. Conference, SPIT, Andheri, Mumbai, 04-05 Feb 2008.
[20] Gudivada V. N. and Raghavan V. V., "Design and evaluation of algorithms for image retrieval by spatial similarity", ACM Trans. on Information Systems, Vol. 13, No. 2, pp. 115-144, 1995.
[21] Jain A. K. et al., "Multimedia systems for art and culture: a case study of Brihadisvara Temple", in Storage and Retrieval for Image and Video Databases V, Proc. SPIE 3022, pp. 249-261, 1997.
[22] Hirata K. and Kato T., "Query by visual example - content-based image retrieval", in Proc. of Third International Conference on Extending Database Technology, EDBT'92, 1992, pp. 56-71.
[23] Gupta A. et al., "The Virage image search engine: an open framework for image management", in Storage and Retrieval for Image and Video Databases IV, Proc. SPIE 2670, 1996, pp. 76-87.
[24] Flickner M. et al., "Query by image and video content: the QBIC system", IEEE Computer, Volume 28, Number 9, pp. 23-32, 1995.
[25] H.B. Kekre, Sudeep D. Thepade, "Boosting Block Truncation Coding using Kekre's LUV Color Space for Image Retrieval", WASET Int. Journal of Electrical, Computer and System Engineering (IJECSE), Vol. 2, No. 3, Summer 2008. Available online at www.waset.org/ijecse/v2/v2-3-23.pdf
[26] H.B. Kekre, Sudeep D. Thepade, "Image Retrieval using Augmented Block Truncation Coding Techniques", ACM Int. Conf. ICAC3-09, 23-24 Jan 2009, FCRCE, Mumbai. Available on the ACM portal.
[27] H.B. Kekre, Sudeep D. Thepade, "Using YUV Color Space to Hoist the Performance of Block Truncation Coding for Image Retrieval", IEEE International Advanced Computing Conference 2009 (IACC'09), Thapar University, Patiala, INDIA, 6-7 March 2009.
[28] M. Eisa, I. Elhenawy, A.E. Elalafi and H. Burkhardt, "Image Retrieval based on Invariant Features and Histogram Refinement", ICGST Int. Journal on Graphics, Vision and Image Processing (GVIP), Special Issue on Image Retrieval and Representation, Vol. 6, 2006, pp. 7-11.

[29] H.B. Kekre, Sudeep D. Thepade, "Color Traits Transfer to Grayscale Images", in Proc. of IEEE First International Conference on Emerging Trends in Engg. & Technology (ICETET-08), G.H. Raisoni COE, Nagpur, INDIA. Available on IEEE Xplore.
[30] P. Aggarwal, H.K. Sardana, G. Jindal, "Content Based Medical Image Retrieval: Theory, Gaps and Future Directions", ICGST Int. Journal on Graphics, Vision and Image Processing (GVIP), Vol. 9, Issue II, 2009, pp. 27-37.
[31] Hirata K. and Kato T., "Query by visual example - content-based image retrieval", in Proc. of Third International Conference on Extending Database Technology, EDBT'92, 1992, pp. 56-71.
[32] H. B. Kekre, Tanuja K. Sarode, "Fast Codebook Search Algorithm for Vector Quantization using Sorting Technique", ACM Int. Conf. ICAC3-2009, 23-24 Jan 2009, FCRCE, Mumbai. Available on the ACM portal.
[33] H. B. Kekre, Tanuja K. Sarode, Bhakti Raul, "Color Image Segmentation using Kekre's Fast Codebook Generation Algorithm Based on Energy Ordering Concept", ACM Int. Conf. ICAC3-2009, 23-24 Jan 2009, FCRCE, Mumbai. Available on the ACM portal.
[34] R. M. Gray, "Vector Quantization", IEEE ASSP Magazine, April 1984, pp. 4-29.
[35] Momotaz Begum et al., "An Efficient Algorithm for Codebook Design in Transform Vector Quantization", WSCG'2003, February 3-7, 2003.
[36] A. Vasuki, P.T. Vanathi, "Image Compression using Lifting and Vector Quantization", ICGST International Journal on Graphics, Vision and Image Processing (GVIP), Special Issue on Image Compression, Vol. 7, pp. 73-81.
[37] Noha A. Hikal and Roumen Kountchev, "A Method for Digital Image Compression with IDP Decomposition Based on 2D-SOFM VQ", ICGST International Journal on Graphics, Vision and Image Processing (GVIP), Special Issue on Image Compression, Vol. 7, pp. 32-42.
[38] Chang and I. C. Lin, "Fast search algorithm for vector quantization without extra look-up table using declustered subcodebooks", IEE Proc. Vis., Image, Signal Process., Vol. 152, No. 5, pp. 513-519, Oct. 2005.
[39] Y. C. Hu, C. C. Chang, "An effective codebook search algorithm for vector quantization", Imag. Sci. J., Vol. 51, No. 4, pp. 221-234, 2003.
[40] C.H. Hsieh, Y.J. Liu, "Fast search algorithms for vector quantization of images using multiple triangle inequalities and wavelet transform", IEEE Trans. Image Process., Vol. 9, No. 3, 2000, pp. 321-328.
[41] J.S. Pan, Z.M. Lu, S.H. Sun, "An efficient encoding algorithm for vector quantization based on subvector technique", IEEE Trans. Image Processing, Vol. 12, No. 3, 2003, pp. 265-270.
[42] B.C. Song, J.B. Ra, "A fast algorithm for vector quantization using L2-norm pyramid of codewords", IEEE Trans. Image Processing, Vol. 4, No. 12, 2002, pp. 325-327.
[43] Z. Pan, K. Kotani, T. Ohmi, "Fast encoding method for vector quantization using modified L2-norm pyramid", IEEE Signal Process. Lett., Vol. 12, Issue 9, 2005, pp. 609-612.

[44] Y. Chen, B. Hwang, C. Chiang, "Fast VQ codebook search algorithm for grayscale image coding", Image and Vision Computing, Vol. 26, 2008, pp. 657-666.
[45] H. B. Kekre, Tanuja K. Sarode, "New Fast Improved Codebook Generation Algorithm for Color Images using Vector Quantization", International Journal of Engg. & Tech., Vol. 1, No. 1, pp. 67-77, 2008.
[46] H. B. Kekre, Tanuja K. Sarode, "Fast Codebook Generation Algorithm for Color Images using Vector Quantization", Int. Journal of Computer Sci. & Information Tech., Vol. 1, No. 1, pp. 7-12, Jan 2009.
[47] H. B. Kekre, Tanuja K. Sarode, "An Efficient Fast Algorithm to Generate Codebook for Vector Quantization", First Int. Conf. on Emerging Trends in Engineering and Technology (ICETET-2008), Raisoni College of Engg., Nagpur, India, 16-18 July 2008. Available online on IEEE Xplore.
[48] H. B. Kekre, Tanuja K. Sarode, "Speech Data Compression using Vector Quantization", WASET Int. Journal of Computer and Information Science and Engg. (IJCISE), Vol. 2, No. 4, pp. 251-254, 2008. Available: http://www.waset.org/ijcise
[49] H. B. Kekre, Tanuja K. Sarode, "Centroid Based Fast Search Algorithm for Vector Quantization", International Journal of Imaging (IJI), Volume 1, Number A08, pp. 73-83, Autumn 2008. Available: http://www.ceser.res.in/iji.html
[50] H. B. Kekre, Tanuja K. Sarode, "Fast Codevector Search Algorithm for 3-D Vector Quantized Codebook", WASET Int. Journal of Computer and Information Science and Engineering (IJCISE), Vol. 2, No. 4, pp. 235-239, Fall 2008. Available: http://www.waset.org/ijcise
[51] Guoping Qiu, "Color Image Indexing Using BTC", IEEE Transactions on Image Processing, Volume 12, Number 1, pp. 93-101, January 2003.
[52] H.B. Kekre, Sudeep D. Thepade, "Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images", WASET Int. Journal of Electrical, Computer and System Engg. (IJECSE), Volume 2, No. 3, Summer 2008. Available online at www.waset.org/ijecse/v2/v2-3-26.pdf
[53] H.B. Kekre, Sudeep D. Thepade, "Scaling Invariant Fusion of Image Pieces in Panorama Making and Novel Image Blending Technique", Int. Journal on Imaging (IJI), Autumn 2008, Volume 1, No. A08. Available online at www.ceser.res.in/iji.html
[54] H.B. Kekre, Sudeep D. Thepade, "Rotation Invariant Fusion of Partial Images in Vista Creation", WASET Int. Journal of Electrical, Computer and System Engg. (IJECSE), Vol. 2, No. 2, Spring 2008. Available online at www.waset.org/ijecse/v2/v2-2-13.pdf
[55] S. Nene, S. Nayar and H. Murase, "Columbia Object Image Library (COIL-100)", Technical Report CUCS-006-96, Feb 1996. http://www1.cs.columbia.edu/CAVE/software/softlib/coil-100.php

8. Author Biographies
Dr. H. B. Kekre received the B.E. (Hons.) in Telecomm. Engg. from Jabalpur University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960, M.S. Engg. (Electrical Engg.) from the University of Ottawa in 1965 and Ph.D. (System Identification) from IIT Bombay in 1970. He worked for over 35 years as Faculty of Electrical Engineering and then HOD of Computer Science and Engg. at IIT Bombay, and for the last 13 years as a Professor in the Department of Computer Engg. at Thadomal Shahani Engineering College, Mumbai. He is currently Senior Professor at Mukesh Patel School of Technology Management and Engineering, SVKM's NMIMS University, Vile Parle (W), Mumbai, INDIA. His areas of interest are digital signal processing and image processing. He has more than 250 papers in national/international conferences and journals to his credit. Recently six students working under his guidance have received best paper awards. He is currently guiding seven Ph.D. students.

Ms. Tanuja K. Sarode received the M.E. (Computer Engineering) degree from Mumbai University in 2004 and is currently pursuing the Ph.D. from Mukesh Patel School of Technology, Management and Engg., SVKM's NMIMS University, Vile Parle (W), Mumbai, INDIA. She has more than 10 years of teaching experience and is currently working as Assistant Professor in the Dept. of Computer Engineering at Thadomal Shahani Engineering College, Mumbai. She is a member of the International Association of Engineers (IAENG). Her areas of interest are image processing, signal processing and computer graphics. She has 25 papers in national/international conferences and journals to her credit.

Sudeep D. Thepade received the B.E. (Computer) degree from North Maharashtra University with Distinction in 2003 and the M.E. in Computer Engineering from the University of Mumbai in 2008 with Distinction, and is currently pursuing the Ph.D. from SVKM's NMIMS University, Mumbai. He has more than 6 years of experience in teaching and industry. He was a Lecturer in the Dept. of Information Technology at Thadomal Shahani Engineering College, Bandra (W), Mumbai for nearly 4 years, and is currently working as Assistant Professor in Computer Engineering at Mukesh Patel School of Technology Management and Engineering, SVKM's NMIMS University, Vile Parle (W), Mumbai, INDIA. He is a member of the International Association of Engineers (IAENG) and the International Association of Computer Science and Information Technology (IACSIT), Singapore. His areas of interest are image processing and computer networks. He has about 40 papers in national/international conferences and journals to his credit, with a Best Paper Award at the International Conference SSPCCIN-2008 and a Second Best Paper Award at the ThinkQuest-2009 national-level paper presentation competition for faculty.
