
INTERNATIONAL JOURNAL OF COMPUTATIONAL COGNITION (HTTP://WWW.IJCC.US), VOL. 8, NO. 1, MARCH 2010

Collaborative Image Compression in Wireless Sensor Networks

Muhammad Imran Razzak, S. A. Hussain, Abid Ali Minhas and Muhammad Sher

Abstract— The lifetime of a wireless sensor network is limited by its battery constraint. In camera-equipped sensor nodes, battery and processing-power constraints cause the network lifetime to decrease quickly when images are processed and transferred to the destination. Image compression not only helps reduce communication latency in a sensor network but also extends the lifetime of the network. In this paper, a novel technique is presented for collaborative image transmission in wireless sensor networks that avoids the extra energy used in transmitting redundant data. First, a shape-matching method is applied to coarsely register images, sharing only shape contexts to avoid communication overhead. The image is then divided into sub-regions based on gray-scale value, and quantization is performed on the sub-regions. This load is shared with the overlapping camera nodes. Copyright © 2010 Yang's Scientific Research Institute, LLC. All rights reserved.

Index Terms— Image compression, sensor network, node lifetime, battery optimization.

I. INTRODUCTION

WIRELESS sensor networks have gained increasing prominence over the last few years, driven by both theoretical and practical problems in their operating systems, protocols, and distributed signal and image processing. A sensor is a low-cost, small, battery-operated and rechargeable device used for remote monitoring. Sensors are characterized by several constraints: a short transmission range, poor computation and processing power, less reliable and low-rate data transmission, and very limited energy. Sensor nodes are able to communicate with each other in order to detect objects collaboratively and collect information from one another. Wireless sensor networks have become an important technology, especially for environmental monitoring, habitat monitoring, target detection, military applications, and disaster management [4][6]. Distributed processing in wireless sensor networks provides an attractive approach for remote location monitoring. Although computers are much faster than humans, they still struggle with image processing because of the complexities involved and the heavy processing that images require.

Manuscript received January 16, 2009; revised October 5, 2009. Muhammad Imran Razzak and Muhammad Sher, International Islamic University, Islamabad, Pakistan. S. A. Hussain, Air University, Islamabad. Abid Ali Minhas, Bahria University, Islamabad. Emails: [email protected] (Muhammad Imran Razzak), [email protected] (S. A. Hussain), abid [email protected] (Abid Ali Minhas), [email protected] (Muhammad Sher). Publisher Item Identifier S 1542-5908(10)10105-5/$20.00 Copyright © 2010 Yang's Scientific Research Institute, LLC. All rights reserved. The online version was posted on January 21, 2010 at http://www.YangSky.com/ijcc/ijcc81.htm

As for sensor networks, which have limited battery and low

processing power, it is better to send the image to the base station than to compute the results at the sensor node. Since an image is a large piece of data, transferring it increases the communication overhead. Data transmission is one of the most energy-expensive tasks in a WSN; by using data compression techniques, energy can be saved by reducing the number of bits to be sent [3]. Energy-efficient image communication is thus one of the most important issues in WSNs. A number of research efforts have addressed compressing the image at the sensor node to reduce energy consumption and decrease communication overhead. The goal is to minimize energy consumption by compressing the image as much as possible.

Energy-aware data compression has been examined before by Sadler et al. [7] and Barr et al. [1]. Both discuss various lossless data compression algorithms, such as bzip2 and LZO, on constrained embedded platforms. Their results demonstrate significant energy benefits when transmitting and receiving compressed rather than uncompressed data, mainly due to the higher energy cost of communication relative to computation. In parallel distributed computing [2], a problem is divided into multiple sub-problems of smaller size; every node solves its sub-problem by running a local algorithm, and the solution of the original problem is obtained by combining the solutions from the nodes. Min Wu and Chang Wen Chen discussed a novel technique for collaborative image compression in wireless sensor networks [10]. A shape-matching method is applied to collaborative image coding to reduce energy consumption. The background seen by each camera node is segmented using a lightweight image-subtraction method, since the nodes are stationary. The background is sent only once; thereafter only the changes are sent from each camera node, and the image is reconstructed by fusing the background with the transmitted changes. In [9], two methods are implemented in order to compress the image and save energy. In the first method, the data is partitioned [5] into n blocks along the rows and a 1-D wavelet transform is run on each row; the data is then divided into m blocks along the columns and a 1-D wavelet transform is performed again. In the second method, a tiling technique is used with wavelet-based compression. In both proposed methods, the full captured image is sent to the nearest nodes that take part in compression; thus the camera-equipped node's lifetime is decreased by sending the full image. Also, the other nodes'

RAZZAK, HUSSAIN, MINHAS & SHER, COLLABORATIVE IMAGE COMPRESSION IN WIRELESS SENSOR NETWORKS

lifetimes may decrease due to the communication required in the distributed computation. R. S. Wagner et al. [8] presented a novel technique for distributed image compression in wireless sensor networks. Sensor nodes share low-bandwidth descriptors of their views as feature points, which are used to find the common region of overlap. The overlapped region is then compressed locally via spatial down-sampling, and image super-resolution techniques are finally applied at the receiver to recover a full-resolution image from the set of low-resolution images; the feasibility of the algorithm was demonstrated by a prototype implementation. The approach of [8] uses down-sampling at each camera node that shares the common view, so a low-resolution image is sent from every node, which still consumes energy. Min Wu et al. [10] send only the changes from each node sharing the common view; this also incurs communication overhead from each node and assumes a stationary background.

A novel distributed image compression scheme for WSNs is examined in this paper. A common overlapped region is identified by sharing low-bandwidth descriptors [8] as feature points. This overlapped area is divided into small regions. The division depends on the gray level; in other words, the image is tiled into regions whose gray-scale values fall within a threshold. Each sub-region is then quantized and compressed locally, and the image is recovered by re-quantization at the destination.

II. PROPOSED SYSTEM

Energy consumption is one of the most important factors in analyzing the lifetime of a sensor network. Energy optimization in sensor networks, especially camera sensor networks, is complicated because it involves not only reducing energy consumption but also distributing load across the network. Energy-efficient image communication is one of the most important goals for WSNs. A novel distributed image compression scheme for WSNs, shown in Fig. 6, is proposed in this paper, assuming a densely deployed sensor network. Images at neighboring sensors are spatially correlated with overlaps, so independently transmitting the whole image from each node means the received data is redundant. From an energy point of view, transferring the image from every sensor node to the destination decreases the lifetime of the network because of this redundancy; the lifetime can be increased by reducing the redundant data. A common overlapped region is identified by sharing low-bandwidth descriptors [8] as feature points. The joint compression is implemented in six steps: (1) find the feature points and broadcast them to neighbor nodes; (2) identify the cameras with overlapping fields of view; (3) find the common area among the overlapping cameras; (4) divide the image among the cameras based on gray-scale regions; (5) quantize these regions to sixteen or fewer gray levels; (6) reconstruct the final image at the destination from the small quantized images. To find the overlapping cameras, a reference camera is first selected, and the overlapping cameras are identified relative to it as in [8]. The next step is image matching.
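Step (1), extracting feature points with Sobel edge detection, can be sketched as follows. This is a minimal example assuming NumPy/SciPy; the number of points kept (`keep`) and the strongest-response selection rule are illustrative choices, not details fixed by the paper:

```python
import numpy as np
from scipy import ndimage

def sobel_feature_points(image, keep=50):
    """Extract feature points as the `keep` strongest Sobel edge responses.

    The returned (row, col) coordinates are the kind of low-bandwidth
    descriptor a node would broadcast to its neighbors.
    """
    gx = ndimage.sobel(image.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(image.astype(float), axis=0)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    # Indices of the strongest edge pixels, strongest first.
    flat = np.argsort(magnitude.ravel())[::-1][:keep]
    return np.column_stack(np.unravel_index(flat, image.shape))

# Synthetic scene: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:40, 20:40] = 255.0
points = sobel_feature_points(img, keep=20)
```

All detected points land on the boundary of the square, since the Sobel magnitude is zero in flat areas.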


When the feature points are sent from the reference node to the overlapping nodes, each overlapping sensor computes the registration. Sobel edge detection is used to extract the feature points, so that the communication overhead between sensor nodes for registration is reduced, as shown in Fig. 2. Given the two sets of feature points, the first task on the path to deducing an aligning transform is to identify the best association of points in the first set with points in the second. For each point, a shape context is computed in both feature-point sets, i.e., the reference-image feature points and the overlapping-node feature points. The shape context is a coarse histogram description computed over the feature points [9]. The histogram bins are divided in a log-polar fashion, as shown in Fig. 1: the distance from p_i to each of the other points in P is expressed in (r, θ) polar coordinates, and the histogram bin assignment for the angle θ is uniform, while the assignment for the radius r is logarithmic [10]. For a feature point p_i on the shape, the histogram h_i is calculated as

$$h_i(k) = \#\{\, q \neq p_i : (q - p_i) \in \mathrm{bin}(k) \,\} \qquad (1)$$

where each bin k corresponds to a pair of distance and angle ranges, so h_i(k) counts the number of neighbors of p_i whose relationship to p_i falls within the distance and angle thresholds for that bin.

Fig. 1: Polar Histogram [10].
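Equation (1), with the log-polar binning of Fig. 1, can be sketched as follows. This is a hedged example assuming NumPy; the bin counts `n_r = 5`, `n_theta = 12` and the radial range (nearest to farthest neighbor) are illustrative choices the paper does not specify:

```python
import numpy as np

def shape_context(points, idx, n_r=5, n_theta=12):
    """Log-polar shape context histogram h_i(k) of eq. (1) for points[idx]."""
    p = points[idx]
    d = np.delete(points, idx, axis=0) - p        # vectors q - p_i, q != p_i
    r = np.hypot(d[:, 0], d[:, 1])                # radial distances
    theta = np.arctan2(d[:, 1], d[:, 0])          # angles in (-pi, pi]
    # Radial edges spaced logarithmically; angular edges spaced uniformly.
    r_edges = np.logspace(np.log10(r.min()), np.log10(r.max()), n_r + 1)
    r_edges[0], r_edges[-1] = r.min(), r.max()    # guard float round-off
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    h, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return h.ravel()                              # h_i, length n_r * n_theta

pts = np.array([[0, 0], [1, 0], [0, 1], [2, 2], [3, 1]], dtype=float)
h = shape_context(pts, 0)
```

Every one of the other four points falls into exactly one (r, θ) bin, so the histogram counts sum to the number of neighbors.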

Shape contexts have been assigned to the reference image and the overlapping image. The next step is to determine the best match between them. The cost matrix [10] is calculated as

$$C_{i,j} = C(p_i, q_j) = \frac{1}{2} \sum_{k=1}^{K} \frac{[h_i(k) - h_j(k)]^2}{h_i(k) + h_j(k)}$$

where p_i are the feature points in the reference image and q_j are the feature points in the overlapped image. The total cost of matching is minimized to find the best one-to-one match:

$$H = \sum_{i,j} C(p_i, q_j).$$
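The chi-squared cost matrix and a one-to-one matching can be sketched as follows. SciPy's `linear_sum_assignment` is used here as a stand-in for whatever minimization routine the paper intends; the toy histograms are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2_cost_matrix(H_p, H_q):
    """C[i, j] = 0.5 * sum_k (h_i(k) - h_j(k))^2 / (h_i(k) + h_j(k))."""
    C = np.zeros((len(H_p), len(H_q)))
    for i, hi in enumerate(H_p):
        for j, hj in enumerate(H_q):
            denom = hi + hj
            mask = denom > 0                  # treat empty bins as 0/0 := 0
            C[i, j] = 0.5 * np.sum((hi[mask] - hj[mask]) ** 2 / denom[mask])
    return C

# Toy histograms: the q-set is a permutation of the p-set, so a perfect
# one-to-one match with zero total cost exists.
H_p = [np.array([4., 0., 1.]), np.array([0., 3., 2.]), np.array([1., 1., 1.])]
H_q = [H_p[2], H_p[0], H_p[1]]
C = chi2_cost_matrix(H_p, H_q)
rows, cols = linear_sum_assignment(C)         # minimizes the total cost H
total = C[rows, cols].sum()
```

The assignment recovers the permutation (p_0 → q_1, p_1 → q_2, p_2 → q_0) with zero total cost.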

The overlapped area is divided into small regions. The division depends on the gray level; in other words, the image is tiled based on gray-scale values that fall within a threshold, as shown in Fig. 3, so each extracted region contains gray-scale values belonging to the same range. Each sub-region is then quantized to at most sixteen gray levels and compressed locally. This local region is sent towards the destination with



its quantization value and its coordinate information. The quantization value is the value at which the small region was quantized to sixteen gray levels; using this value, the region is reconstructed again. After finding the common regions, the next step is to divide the percentage of the image assigned to each sensor node: first the overlapping nodes compress their own parts of the image, and the area not compressed by the overlapping nodes is handled by the reference node. The next step is to divide the regions based on gray levels. An area whose gray levels lie within some threshold is selected for down-quantization. The selection of a region for quantization works like region growing over an M×N matrix, in two passes: in the first pass, a column is included in the region if its gray-scale values are within the threshold; the second pass does the same over the rows. The quantization is done using the following relation, and each quantized region is sent to the destination with its quantization value and the starting coordinates of the region in the original image. The regions are quantized to sixteen gray levels:

$$G(x, y) = f(x, y) \cdot \frac{\text{quantization levels}}{(\mathrm{Max} - \mathrm{Min})\ \text{gray level in the region}}.$$
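A minimal sketch of this down-quantization and the destination-side up-quantization, assuming NumPy. The subtraction of the region minimum and the rounding rule are illustrative choices not spelled out by the relation above:

```python
import numpy as np

def down_quantize(region, levels=16):
    """Quantize a gray-scale region to `levels` levels.

    Returns the quantized region plus the (min, max) gray levels the
    destination needs to up-quantize. Offsetting by the minimum is an
    assumption made here so the codes start at zero.
    """
    lo, hi = int(region.min()), int(region.max())
    scale = levels / max(hi - lo, 1)              # guard flat regions
    q = np.round((region.astype(int) - lo) * scale).astype(np.uint8)
    return q, lo, hi

def up_quantize(q, lo, hi, levels=16):
    """Destination-side reconstruction from the quantized codes."""
    scale = max(hi - lo, 1) / levels
    return np.round(q * scale + lo).astype(np.uint8)

region = np.array([[100, 120, 140], [160, 180, 200]], dtype=np.uint8)
q, lo, hi = down_quantize(region)
rec = up_quantize(q, lo, hi)
```

The reconstruction error is bounded by roughly half a quantization step, i.e. (Max − Min) / (2 · levels).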

At the destination, the region is up-quantized using the transmitted quantization value, as shown in Figs. 4 and 5. Finally, the small regions are recombined using their coordinate values to form the image at the destination: the final image is recomputed from the sub-regions, each of which is up-quantized before recomposition of the original image.

[Numeric gray-level matrices of the quantized region omitted.]

Fig. 5: Image quantized to 16 levels with quantization value 179.
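Recombining the sub-regions at the destination from their transmitted coordinates can be sketched as below. This is a minimal example; the (row, col, block) tuple format stands in for the paper's unspecified wire format:

```python
import numpy as np

def recombine(canvas_shape, regions):
    """Paste up-quantized sub-regions back at their transmitted coordinates.

    `regions` is a list of (row, col, block) tuples, where (row, col) is
    the top-left coordinate each node sends along with its region.
    """
    canvas = np.zeros(canvas_shape, dtype=np.uint8)
    for r, c, block in regions:
        h, w = block.shape
        canvas[r:r + h, c:c + w] = block
    return canvas

# Two 2x2 sub-regions placed into a 4x4 destination image.
a = np.full((2, 2), 50, dtype=np.uint8)
b = np.full((2, 2), 200, dtype=np.uint8)
img = recombine((4, 4), [(0, 0, a), (2, 2, b)])
```

Pixels not covered by any received region stay at zero; a real receiver would presumably fill them from the reference node's data.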

III. LIMITATIONS

The discussed approach is feasible for images that have large regions with low variation, because the quantization is region-based; it is therefore most suitable for low-contrast images. Unlike [10], which assumes a static background, the proposed approach can be used in every situation. Since data sent from one node is not re-sent by another, the image obtained using the proposed approach contains only each transmitting node's own information, unlike the super-resolution image built from low-resolution images in [8], which combines contributions from every node's image.

IV. CONCLUSIONS

A novel technique has been presented for collaborative image transmission in wireless sensor networks. To increase the lifetime of the network, spatial and temporal correlations are considered. Sobel edge detection is used to extract the feature points, so that the communication overhead between sensor nodes for registration is reduced. The image is divided into sub-regions based on gray-scale value; these sub-regions are then quantized to sixteen gray levels and compressed. Each compressed sub-image is sent to the destination with its spatial coordinates and minimum gray value. Finally, at the destination, the image is reconstructed using the spatial coordinates and up-quantization with the gray value.

REFERENCES

[1] K. Barr and K. Asanović. Energy-aware lossless data compression. ACM Trans. Computer Systems, 24(4):250–291, August 2006.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall Inc., 1989.
[3] M. Chen and M. L. Fowler. The importance of data compression for energy efficiency in sensor networks. In Conference on Information Sciences and Systems, 2003.
[4] D. Estrin, D. Culler, K. Pister, and G. Sukhatme. Connecting the physical world with pervasive networks. IEEE Pervasive Computing, 1(1):59–69, 2002.
[5] F. Marino, V. Piuri, and E. J. Swartzlander. A parallel implementation of the 2-D discrete wavelet transform without interprocessor communications. IEEE Transactions on Signal Processing, 47(11):3179–3184, November 1999.
[6] G. J. Pottie and W. J. Kaiser. Wireless integrated network sensors. Communications of the ACM, 43(5):51–58, 2000.
[7] C. Sadler and M. Martonosi. Data compression algorithms for energy-constrained devices in delay tolerant networks. In Proc. ACM Conf. on Embedded Networked Sensor Systems, 2006.
[8] R. Wagner, R. Nowak, and R. Baraniuk. Distributed image compression for sensor networks using correspondence analysis and super-resolution. In Proceedings of the IEEE International Conference on Image Processing (ICIP'03), volume 1, pages 597–600, Barcelona, Spain, September 2003.
[9] Huaming Wu and A. A. Abouzeid. Energy efficient distributed JPEG2000 image compression in multihop wireless networks. In ASWN'2004, 2004.
[10] Min Wu and Chang Wen Chen. Collaborative image coding and transmission over wireless sensor networks. EURASIP Journal on Advances in Signal Processing, 2007. Article ID 70481.


(a) Reference image.

(b) Overlapped image A.

(c) Overlapped image B.

(d) Sobel edge detection on reference image.

(e) Sobel edge detection on overlapped image A.

(f) Sobel edge detection on overlapped image B.

Fig. 2: Sobel edge detection on overlapped camera images.




Fig. 3: Overlapping of images.

Fig. 4: Gray-scale values of a small image region falling within the threshold.


Fig. 6: System diagram. In-network processing: find feature points and broadcast; identify the cameras with overlapping fields of view; find the common area and divide the quota; find the small regions based on gray level; down-quantize and transmit. At-destination processing: up-quantize the regions; construct the image from the small regions.