MULTIRESOLUTION LOSSLESS COMPRESSION SCHEME

P. Piscaglia and B. Macq

Universite Catholique de Louvain, Unite de Telecommunications et Teledetection, 2 place du Levant, 1348 Louvain-la-Neuve, Belgium. Email: [email protected]

ABSTRACT

This paper presents a multiresolution lossless image compression scheme based on several new tools. An improved multiresolution Haar transform is developed and combined with pre- and post-processing and with a new entropy coder to give the overall compression scheme. This research is mainly useful for image compression in medical imaging, where changes between the original and decompressed images must be avoided.

1. INTRODUCTION

Data compression aims at reducing storage and communication costs by removing redundancy from the data representation. The most efficient compression techniques can achieve very high compression ratios but introduce losses in the decompressed image compared with the original. Lossless compression gives lower compression ratios, but it is crucial in applications such as medical imaging or satellite photography, where faithful reproduction of the images is critical. Compression can be decomposed into two steps: the decorrelation of the information, and the entropy coding of the decorrelated signal. One way of decorrelating an image is the wavelet transform: the Haar transform will be used and improved in this paper. Redundancy can be further removed by predicting the value of the transform coefficients from the past, i.e. from coefficients already coded. This prediction has been embedded into the multiresolution transform, increasing its efficiency. Because compression techniques rely on the similarity in value of nearby pixels, a pre-processing of the color map has been developed for color images.

The second step of the compression is the entropy coding. We have developed a method combining the efficiency of Huffman codes with the universality of codes that are not tied to a single statistic. Called the Multi-Huffman encoder, it is based on several Huffman tables. The decorrelated information is first post-processed in order to homogenize the statistics and then fed into the entropy coder.

2. COLOR PRE-PROCESSING

Medical and satellite images are generally grey-scaled. If not, the colors are usually taken from a palette of 256 or more elements. Nearly all compression techniques rely on the fact that two points that are visually close have similar values. Because of this palettization, that property does not necessarily hold, and compression of such an image will be less efficient. In order to restore this property, we modify the color palette. This does not reduce the entropy of the image (entropy depends only on the probability of meeting a value, not on the value itself), but it always lowers the variance and facilitates the coding, because the differences between neighboring pixels become smaller. The improvement is image dependent. It is never negative, and ranges from no gain, if the color palette of the image was already optimal, up to 0.5 bits per pixel.
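The paper does not specify the criterion used to reorder the palette. A minimal sketch of the idea, assuming a luminance sort as the ordering key (our assumption, not the paper's), could look like this:

```python
# Sketch of the color pre-processing idea: reorder a palette so that
# visually similar colors receive nearby indices, which lowers the variance
# of neighboring-pixel differences without changing the image entropy.
# The luminance sort key is an assumption; the paper leaves the criterion open.

def reorder_palette(palette, pixels):
    """palette: list of (r, g, b) tuples; pixels: list of palette indices.
    Returns (new_palette, remapped_pixels)."""
    # Sort palette entries by approximate luminance.
    order = sorted(range(len(palette)),
                   key=lambda i: 0.299 * palette[i][0]
                               + 0.587 * palette[i][1]
                               + 0.114 * palette[i][2])
    new_palette = [palette[i] for i in order]
    remap = {old: new for new, old in enumerate(order)}  # old index -> new index
    return new_palette, [remap[p] for p in pixels]
```

After remapping, pixels belonging to visually similar colors carry numerically close indices, which is the property the transform and predictor rely on.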

3. MULTI-RESOLUTION TRANSFORM

Afterwards, the Haar multiresolution transform [1] is applied to the image. We decompose it into a low-resolution part and three high-resolution oriented details (the Haar transform is based on the sum and difference of neighboring points).

A first improvement can be made to the Haar transform. It comes from the observation that a+b and a-b have the same parity (they are integer values!). The entropy can be reduced if one of the two parity bits is dropped, which loses no information. The Modified-Haar (m-Haar) and the inverse m-Haar are both realized in two steps, using a temporary image tmp. The decomposition of the original image orig into a transformed image tran is performed as follows:

  mHaar1:  tmp(:, x)             = RS(orig(:, 2x) + orig(:, 2x+1))
           tmp(:, width/2 + x)   = orig(:, 2x) - orig(:, 2x+1)
  mHaar2:  tran(y, :)            = RS(tmp(2y, :) + tmp(2y+1, :))
           tran(height/2 + y, :) = tmp(2y, :) - tmp(2y+1, :)

tmp(:, x) denotes column x of tmp, and tmp(y, :) denotes line y of tmp. RS is the right-shift operator. It is roughly equivalent to an integer division by 2 (the suppression of the parity bit) but is faster and behaves more consistently with positive and negative values. The two steps are the vertical and horizontal processing of the image. If the pixels of the original image are coded unsigned on 8 bits, the horizontal and vertical high resolutions are coded with 9 signed bits, the diagonal one with 10, and the low resolution remains coded with 8 unsigned bits. The gain of the Modified-Haar over the Haar transform can be expressed by the two following comparisons. The entropy of the three high resolutions of "lena" is 5.66 after a Haar transform and drops to 5.0 after an m-Haar; the low-resolution entropy is respectively 9.56 and 7.56. Table 2 shows mean values computed over 6 images. The improvement in bits per pixel (bpp) of the Modified-Haar transform is quite evident.

We further lower the entropy of the multiresolution-transformed images by predicting the values of the coefficients. An element can be predicted from the values of its neighbors, from the values of the low resolution if it belongs to a high-resolution subband, or from a combination of both. The general form of an Nth-order predictor is x^(n) = sum_{j=1..N} h_j x(n-j). Using optimum prediction theory, we obtain the Yule-Walker prediction equations, modified to account for the bi-dimensional process and corrected to take into account the statistical difference between the low resolution (uniform statistic) and the high resolutions (Gaussian statistic). Solving these equations gives the linear predictor coefficients h_j. Results of the Modified-Haar transform plus the prediction are given in Table 2. The more correlation there is in the image, the more efficient the prediction will be. In an m-Haar-transformed image, predictions are performed on images four times smaller than the original. If the prediction is included inside the transform, between the horizontal and vertical decompositions (after step mHaar1 defined above), more information is available: the horizontal prediction can be computed on the half-sized image and will be more efficient. The vertical prediction is performed as in the normal transform-plus-prediction scheme. Figure 1 shows the decomposition. New predictors are calculated for the horizontal step, while the vertical ones are the same as before. Results can be found in Table 2.
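The m-Haar pair step and its exact inverse can be sketched as follows. The function names are ours; Python's arithmetic right shift plays the role of RS, and the parity bit dropped from the sum is recovered from the difference, since a+b and a-b share parity:

```python
# Minimal sketch of the Modified-Haar (m-Haar) pair transform and its exact
# inverse on a 1-D signal of even length. The 2-D transform applies this to
# columns (mHaar1) and then rows (mHaar2).

def mhaar_forward(row):
    """Return (lows, highs): lows = RS(a+b), highs = a - b for each pair."""
    lows  = [(row[2*i] + row[2*i + 1]) >> 1 for i in range(len(row) // 2)]
    highs = [ row[2*i] - row[2*i + 1]       for i in range(len(row) // 2)]
    return lows, highs

def mhaar_inverse(lows, highs):
    """Exact inverse: the parity bit dropped from a+b equals d & 1,
    so a = ((a+b) + (a-b)) / 2 is recovered without loss."""
    row = []
    for s, d in zip(lows, highs):
        a = s + (d + (d & 1)) // 2   # restore parity, then average
        row += [a, a - d]
    return row
```

The arithmetic right shift floors toward minus infinity for negative sums, which is the "more interesting behavior with positive and negative values" that plain truncating division lacks.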

4. INFORMATION REORGANIZATION

The efficiency of entropy coders depends on the length and the statistic of the input sequence. For this reason, the entropy coders will not take all the coefficients of one subband as a single input. The information is separated into sequences of N coefficients, in order to have homogeneous statistics in each sequence. Entropy coders perform better with homogeneous sequences than with inhomogeneous ones. The values are classified using a simple rule, based on the value of the corresponding point in the previous resolution and on the neighbors in the current subband. If MNZSB denotes the Most Non-Zero Significant Bit, p_i a value in the current resolution, and p_{i-1} the corresponding value in the same subband at the previous resolution, the classification is done according to the following formula:

  class = MNZSB( (p_i(y, x-1) + p_i(y-1, x) + p_{i-1}(y, x)) / 3 )

The decoder can classify points in exactly the same way, the classification being based only on already-known information (upper or left points of the image). No side information is needed for this classification if the information arrives at the decoder in the correct order. To this end, a reorganization takes place to send the sequences in an order compatible with the needs of the decoder. The obvious order would be to send each sequence once it is completely filled. After the reorganization (and via a buffering of the information), a sequence is instead sent when its first element is met while scanning the subband, i.e. the first time the sequence is needed by the decoder.
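A sketch of the classification rule, under two assumptions the paper does not spell out: MNZSB is read as the position of the most significant non-zero bit of the magnitude, and the corresponding previous-resolution point is found by halving the coordinates:

```python
# Sketch of the sequence-classification rule. Both the MNZSB reading
# (int.bit_length of the magnitude) and the coordinate halving for the
# previous resolution are our assumptions.

def mnzsb(v):
    """Position of the most significant non-zero bit of |v| (0 for v == 0)."""
    return abs(int(v)).bit_length()

def classify(cur, prev, y, x):
    """Class of point (y, x) of subband `cur`, using only causal context:
    the left and upper neighbors in `cur` and the corresponding point in
    the same subband at the previous (half-sized) resolution `prev`."""
    ctx = (cur[y][x - 1] + cur[y - 1][x] + prev[y // 2][x // 2]) // 3
    return mnzsb(ctx)
```

Because all three context values are already decoded when the decoder reaches (y, x), it can reproduce the same class with no side information, as the text states.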

[Figure 1: the m-Haar2 transform. The diagram shows the original image split by the horizontal step into LRh and HRh, the prediction errors pred err(HRh) computed on the half-sized image, and the vertical step splitting LRh into LRv and HRv with prediction errors pe(HRv).]

Figure 1: m-Haar2 transform
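The residual coding behind the embedded prediction of Section 3 can be illustrated as follows. The coefficients h_j below are placeholders, whereas the paper derives them from the modified Yule-Walker equations:

```python
# Illustrative sketch of coding with an Nth-order linear predictor
# x^(n) = sum_j h_j * x(n - j). The predictor coefficients are placeholders;
# the paper obtains optimal ones from modified Yule-Walker equations.

def prediction_residuals(signal, h):
    """Residuals e(n) = x(n) - round(sum_j h[j] * x(n-1-j)).
    The first N samples are passed through unchanged."""
    N = len(h)
    res = list(signal[:N])
    for n in range(N, len(signal)):
        pred = sum(h[j] * signal[n - 1 - j] for j in range(N))
        res.append(signal[n] - int(round(pred)))
    return res

def reconstruct(res, h):
    """Exact inverse: the decoder forms the same prediction from
    already-reconstructed samples and adds the residual back."""
    N = len(h)
    sig = list(res[:N])
    for n in range(N, len(res)):
        pred = sum(h[j] * sig[n - 1 - j] for j in range(N))
        sig.append(res[n] + int(round(pred)))
    return sig
```

The residuals typically have lower entropy than the samples themselves whenever neighboring coefficients are correlated, which is exactly the gain reported in Table 2.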

5. THE MULTI-HUFFMAN CODER

The last step is the entropy coding of the post-processed transformed image. The Multi-Huffman coder is an entropy coder based on the well-known Huffman coder. The efficiency of a Huffman code is close to 98 % if the statistics of the information to encode are close to the statistics used to build the Huffman table; if not, the efficiency can fall drastically. The main idea of Multi-Huffman (MH) coding is to separate the input into fixed-length sequences (this step is already done in the information reorganization) and to code every sequence with one of the predefined Huffman tables of the MH. Huffman tables are usually built using learning sequences, adapting the codes to their statistics. MH tables are built by classifying learning sequences of various statistics into C classes, with a criterion based on their variance. One Huffman table is associated with each class. If the learning sequences come from different kinds of images (medical, satellite, still images), then each sequence to be coded can find a Huffman table matching its characteristics. The coder computes the code length of each sequence with each of the C tables (this operation does not imply C full coding operations but only a few integer additions and is therefore very fast). The index of the best table is sent as a header for the sequence, which is then Huffman-encoded with this table. The decoder reads the table index and then decodes the sequence, as fast as if a single Huffman code were used. Results can easily be shown as a comparison between the input entropy and the total bitstream length. The efficiency can be higher than 100 % with this measure, because of the split of the input data into sequences: the sum of the entropies of the individual sequences can be smaller than the entropy of the complete input. Table 1 shows that Huffman gives nearly the same results as MH if the Huffman table is computed on the input sequence itself (unrealistic) or on statistics similar to those of the input sequence, but that MH takes a sharp advantage over Huffman when the statistics of the input differ from the training statistics.

                                Efficiency (%)
  Huffman on input                   99.5
  Huffman on similar stat.           97.2
  Huffman on different stat.         68.8
  Multi-Huffman                     100.7

Table 1: Efficiency of Huffman vs. Multi-Huffman encoders
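The table-selection step can be sketched as below. The symbol-to-code-length tables are illustrative stand-ins for tables built from Huffman trees on the training classes, and the escape cost for symbols absent from a table is our assumption:

```python
# Sketch of the Multi-Huffman table-selection step: the cost of coding a
# sequence with a candidate table is just the sum of per-symbol code
# lengths, so choosing the best of C tables needs only integer additions,
# not C full coding passes.

def best_table(sequence, tables, escape_bits=16):
    """tables: list of dicts mapping symbol -> code length in bits.
    Returns (index, total_bits) of the cheapest table for this sequence.
    Symbols missing from a table cost `escape_bits` (an assumption)."""
    costs = []
    for lengths in tables:
        costs.append(sum(lengths.get(s, escape_bits) for s in sequence))
    idx = min(range(len(tables)), key=costs.__getitem__)
    return idx, costs[idx]
```

The chosen index is the per-sequence header mentioned in the text; the decoder reads it and decodes with the corresponding single table at normal Huffman speed.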

6. GLOBAL RESULTS

Global compression results over 6 test images (3 still images (`lena', `couple' and `f8'), one Computed Tomography, one angiographic and one Magnetic Resonance image) can be found in Table 2. The first six lines show the entropy of a 1-resolution decomposed image. The next lines show the efficiency of the entropy coding: the mean number of bits per pixel (bpp) is lower than the entropy of the transformed image, thanks to the post-processing inserted before the Multi-Huffman entropy coder (both steps together have a 103 % efficiency). The mean number of bpp remains constant with the number of multiresolution transforms, while the user-friendliness increases: this is one of the aims of the multiresolution. Loading a multiresolution image will always be visually more interesting, and the multiresolution will improve the efficiency of a search through a database connected to a communication network, for example.

  Original images (bpp)                  8.00
  Mean original entropy                  6.08
  Haar transform                         5.21
  m-Haar transform                       4.35
  m-Haar transform + prediction          3.66
  m-Haar2 transform (1-res)              3.59
  bpp of the complete scheme (0-res)     3.53
  bpp of the complete scheme (1-res)     3.48
  bpp of the complete scheme (2-res)     3.48
  bpp of the complete scheme (3-res)     3.49
  bpp (quasi-lossless JPEG)              3.80
  bpp (Ziv-Lempel)                       4.99
  bpp (Reduced Diff. Pyramid)            3.94

Table 2: entropy and compression results

Comparison with other compression techniques shows the advantage of our scheme over the other schemes (the quasi-lossless JPEG [2], the Ziv-Lempel algorithm [3] and a Reduced Difference Pyramid method [4]): it gives a bitstream containing fewer bits per pixel than any of the three compared methods.

7. CONCLUSION

A multiresolution lossless compression scheme has been developed, combining several new ideas: a color pre-processing that smooths pixel variations across the image, a multiresolution decomposition with an embedded prediction of pixel values, and a new entropy coding based on the Huffman coder. Its performance has been successfully compared with several other well-known compression schemes. It combines good compression speed and a good compression ratio.

8. REFERENCES

[1] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies. Image coding using wavelet transform. IEEE Transactions on Image Processing, 1(2):205-220, 1992.

[2] G. Wallace. Overview of the JPEG (ISO/CCITT) still image compression standard. In SPIE, editor, Image Processing Algorithms and Techniques, volume 1244, pages 220-233, 1990.

[3] J. Ziv and A. Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 23:337-343, 1977.

[4] L. Wang and M. Goldberg. Reduced-difference pyramid: a data structure for progressive image transmission. Optical Engineering, 28(7):708-716, 1989.