Digital Image Interpolation in Communication Systems

Tanta University Faculty of Engineering Electronics and Communications Eng. Dept.

Digital Image Interpolation in Communication Systems

A Thesis Submitted in Partial Fulfillment of the Requirements of the Master of Science Degree in Electrical Engineering (Electronics and Communications Engineering)

By
Eng. Maha Awad Abd El-Hamid Ibrahim
B.Sc. in Electronics and Communications Engineering

Supervisors

Prof. Said E. El-Khamy

Prof. Mustafa M. Abd El-Naby

Electrical Engineering Dept., Faculty of Engineering Alexandria University

Electronics and Communications Engineering Dept., Faculty of Engineering Tanta University

Ass. Prof. Fathi E. Abd El-Samie Electronics and Communications Engineering Dept., Faculty of Electronic Engineering Minufiya University

2011

Tanta University
Faculty of Engineering
Electronics and Communications Eng. Dept.

Researcher Name: Maha Awad Abd El-Hamid Ibrahim
Thesis Title: Digital Image Interpolation in Communication Systems
Degree: Master of Science in Electrical Engineering (Electronics and Communications Engineering)

Supervisors

Prof. Said E. El-Khamy


Electrical Engineering Dept., Faculty of Engineering, Alexandria University

Prof. Mustafa M. Abd El-Naby Electronics and Communications Engineering Dept., Faculty of Engineering, Tanta University

Ass. Prof. Fathi E. Abd El-Samie Electronics and Communications Engineering Dept., Faculty of Electronic Engineering, Minufiya University

2011

Tanta University
Faculty of Engineering
Electronics and Communications Eng. Dept.

Researcher Name: Maha Awad Abd El-Hamid Ibrahim
Thesis Title: Digital Image Interpolation in Communication Systems
Degree: Master of Science in Electrical Engineering (Electronics and Communications Engineering)

Approved by

Prof. Rashed M. El-Awady, Electronics and Communications Engineering Dept., Faculty of Engineering, Mansoura University

Prof. Said E. El-Khamy


Electrical Engineering Dept., Faculty of Engineering, Alexandria University (Supervisor)

Prof. Mustafa M. Abd El-Naby, Electronics and Communications Engineering Dept., Faculty of Engineering, Tanta University (Supervisor)

Prof. El-Sayed M. El-Rabaie Vice Dean for Postgraduate Studies and Research, Faculty of Electronic Engineering, Minufiya University

2011


In the name of Allah, the Most Gracious, the Most Merciful

"They said: Glory be to You; we have no knowledge except what You have taught us. Indeed, You are the All-Knowing, the All-Wise."

(Surat Al-Baqarah, Verse 32)

ACKNOWLEDGMENT

First and foremost, all praises and thanks to Allah for giving me the ability to write this thesis.

I wish to express my gratitude to my supervisors, Prof. Said El-Khamy, Prof. Mostafa Abd El-Naby, and Dr. Fathi Abd El-Samie, for their great efforts, beginning with the selection of the research topic, and for their support during the course of the work. Their advice has been very helpful in organizing the work and preparing good research.

Special thanks to Rajesh Kumar Thakur from the School of Computer Science and Engineering, KIIT University, India for helping me with the software.

Last but not least, my sincere appreciation and gratitude go to my family. I would like to thank my mother and my sisters for supporting me to move forward.

Eng. Maha Awad


LIST OF PUBLICATIONS

1. M. Awad, S. E. El Khamy and M. M. Abd Elnaby, "Neural Modeling of Polynomial Image Interpolation," International Conference on Computer Engineering & Systems, ICCES 2009, pp. 469-474, 2009.

2. M. Awad, S. E. El Khamy, M. M. Abd Elnaby and F. E. Abd El-Samie, "Application of Image Interpolation in Pattern Recognition," submitted for publication in the International Journal of Signal Processing.

3. M. Awad, S. E. El Khamy, M. M. Abd Elnaby and F. E. Abd El-Samie, "Color Pattern Classification as an Application of Color Image Interpolation," submitted for publication in the Journal of the Brazilian Computer Society.


ABSTRACT

In this thesis, the issue of obtaining High-Resolution (HR) images from Low-Resolution (LR) images is studied. We concentrate on the problem known as image interpolation. A survey of existing polynomial-based image interpolation techniques is presented, and the different interpolation techniques, such as basis spline (B-spline) interpolation, are studied. Adaptive interpolation techniques are also studied for gray and color images.

The thesis presents a neural implementation of polynomial-based image interpolation techniques, which depends on a training phase to model the interpolation with a fixed neural structure, and a testing phase to carry out the interpolation process. The performance of the suggested neural image interpolation algorithm is compared to those of the traditional polynomial-based image interpolation techniques.

The thesis also presents a new application of gray and color image interpolation: reconstructing compressed images from databases. Compression in databases can be carried out by decimation, and the images can then be restored to their original sizes by interpolation. A new feature extraction method for interpolated database images is presented, which has a low sensitivity to the synthetic pixels introduced by interpolation. Hence, the size of image databases can be reduced with a guarantee that the features extracted after interpolation will not be severely affected.


SUMMARY

The thesis is concerned mainly with a certain branch of digital image processing, namely image interpolation. By image interpolation, we mean how to obtain an HR image from an LR image. A survey of existing polynomial-based image interpolation techniques is presented. A neural implementation of polynomial-based image interpolation for gray images is also introduced.

A new application of gray and color image interpolation is presented for reconstructing compressed images from databases. Decimation can be used to compress the images in the database, and interpolation is then carried out to reconstruct the images with their original sizes. This approach is also extended to color images.

A survey of the existing polynomial-based image interpolation techniques is given. Bilinear and bicubic interpolation are discussed with their adaptive versions, and an adaptive color image interpolation method is also presented. A neural implementation is suggested for polynomial-based image interpolation techniques, and the performance of the suggested neural image interpolation algorithm is compared to those of the traditional polynomial-based image interpolation techniques.

A new application of image interpolation, the reconstruction of HR images from LR database images, is suggested. A cepstral feature extraction approach that has a low sensitivity to the synthetic pixels in the HR images obtained through interpolation is presented. This approach is also extended to color images.

Finally, concluding comments on the existing as well as the proposed image interpolation techniques, along with future trends in this area, are discussed.


CONTENTS

List of Tables  ix
List of Figures  xi
List of Abbreviations  xxiii

Chapter 1  Introduction  1
  1.1 Introduction  1
  1.2 Objective of the Thesis  3
  1.3 Organization of the Thesis  3

Chapter 2  Review of Interpolation of Gray & Color Images  4
  2.1 Introduction  4
  2.2 Classical Image Interpolation  7
  2.3 B-Spline Image Interpolation  8
    2.3.1 Polynomial Splines  8
    2.3.2 B-Spline Basis Functions  10
      2.3.2.1 Nearest-Neighbor Interpolation  10
      2.3.2.2 Linear Interpolation  10
      2.3.2.3 Cubic Spline Interpolation  11
  2.4 Digital Filter Implementation of B-Spline Interpolation  12
  2.5 LR Image Degradation Model  15
  2.6 Linear Space Invariant Image Interpolation  17
  2.7 Warped Distance Image Interpolation  19
  2.8 Weighted Image Interpolation  19
  2.9 Conventional Method of Color Image Interpolation  21
  2.10 Adaptive Color Image Interpolation Method  23
    2.10.1 Interpolation of Missing Green Pixels  23
    2.10.2 Refinement of Edge Directions  23
    2.10.3 Interpolation of Green Channel  25
    2.10.4 Interpolation of Missing Red and Blue Pixels  25
  2.11 Simulation Results  27

Chapter 3  Neural Modeling of Polynomial Image Interpolation  42
  3.1 Introduction  42
  3.2 Neural Networks: Definitions and Properties  43
  3.3 The Training of Neural Networks  44
  3.4 Neural Network Concepts  45
  3.5 Multi-Layer Feed-Forward Networks  51
  3.6 Error Back-Propagation Algorithm  53
  3.7 Neural Image Interpolation  56
  3.8 Simulation Results  59

Chapter 4  Application of Image Interpolation in Pattern Recognition  99
  4.1 Introduction  99
  4.2 Pattern Recognition System  103
  4.3 Feature Extraction  105
  4.4 Feature Matching using Artificial Neural Networks  109
  4.5 Experimental Results  109

Chapter 5  Conclusions and Future Work  162
  5.1 Conclusions  162
  5.2 Future Work  163

Appendix  Color Filter Array  165
References  174

LIST OF TABLES

Table  Page
2.1  Positions of nearby samples for interpolation.  26
2.2  Bilinear interpolation results of different noise-free images.  36
2.3  Bilinear interpolation results of different noisy images with SNR = 20 dB.  37
2.4  Bicubic interpolation results of different noise-free images.  38
2.5  Bicubic interpolation results of different noisy images with SNR = 20 dB.  39
2.6  Cubic spline interpolation results of different noise-free images.  40
2.7  Cubic spline interpolation results of different noisy images with SNR = 20 dB.  41
3.1  PSNR for different types of space-invariant interpolation of the Woman image.  62
3.2  PSNR for different types of warped distance interpolation of the Woman image.  69
3.3  PSNR for different types of weighted interpolation of the Woman image.  76
3.4  PSNR for different types of space-invariant interpolation of the Bldg image.  84
3.5  MSE for different types of space-invariant interpolation of the Bldg image.  85
3.6  PSNR for different types of warped distance interpolation of the Bldg image.  89
3.7  MSE for different types of warped distance interpolation of the Bldg image.  90
3.8  PSNR for different types of weighted interpolation of the Bldg image.  94
3.9  MSE for different types of weighted distance interpolation of the Bldg image.  95

LIST OF FIGURES

Figure  Page
2.1  1-D signal interpolation. The pixel at position x is estimated using its neighboring pixels and the distance s.  7
2.2  The interpolation kernel sinc(x).  9
2.3  B-spline interpolation basis functions.  12
2.4  Estimation of the cubic spline interpolation coefficients.  14
2.5  Block diagram of B-spline image interpolation.  15
2.6  Down-sampling process from the N×N HR image to the (N/2)×(N/2) LR image.  16
2.7  Signal down-sampling and interpolation.  18
2.8  Reference Bayer CFA sample.  21
2.9  Numeric edge maps.  23
2.10  Interpolation directions.  25
2.11  Test Pattern image.  29
2.12  Bilinear interpolation of Test Pattern image.  30
2.13  Error images for bilinear interpolation of Test Pattern image.  31
2.14  Bicubic interpolation of Test Pattern image.  32
2.15  Error images for bicubic interpolation of Test Pattern image.  33
2.16  Cubic spline interpolation of Test Pattern image.  34
2.17  Error images for cubic spline interpolation of Test Pattern image.  35
3.1  A neuron.  46
3.2  Difference between single and multi-layer networks.  49
3.3  Activation functions.  50
3.4  Neural network learning.  52
3.5  A multi-layer network with l layers of units.  53
3.6  The descent in weight space.  56
3.7  Feed-forward neural network.  58
3.8  Woman image.  59
3.9  Bilinear interpolation.  63
3.10  Error images for bilinear interpolation of Woman image.  64
3.11  Bicubic interpolation.  65
3.12  Error images for bicubic interpolation of Woman image.  66
3.13  Cubic spline interpolation.  67
3.14  Error images for cubic spline interpolation of Woman image.  68
3.15  Bilinear warped distance interpolation.  70
3.16  Error images for bilinear warped interpolation of Woman image.  71
3.17  Bicubic warped distance interpolation.  72
3.18  Error images for bicubic warped interpolation of Woman image.  73
3.19  Cubic spline warped distance interpolation.  74
3.20  Error images for cubic spline warped interpolation of Woman image.  75
3.21  Bilinear weighted interpolation.  77
3.22  Error images for bilinear weighted interpolation.  78
3.23  Bicubic weighted interpolation.  79
3.24  Error images for bicubic weighted interpolation.  80
3.25  Cubic spline interpolation.  81
3.26  Error images for cubic spline weighted interpolation.  82
3.27  Bldg image.  85
3.28  Bilinear interpolation.  86
3.29  Bicubic interpolation.  87
3.30  Cubic spline interpolation.  88
3.31  Bilinear warped distance interpolation.  91
3.32  Bicubic warped distance interpolation.  92
3.33  Cubic spline warped distance interpolation.  93
3.34  Bilinear weighted distance interpolation.  96
3.35  Bicubic weighted distance interpolation.  97
3.36  Cubic spline warped distance interpolation.  98
4.1  Examples of minutiae points.  101
4.2  Steps of the traditional landmine detection approach.  103
4.3  Schematic diagram of the proposed pattern recognition method.  105
4.4  Cepstral transformation of a 1-D pattern signal.  108
4.5  Samples of the fingerprint images used in the training phase.  113
4.6  Samples of the acoustic landmine images.  113
4.7  Recognition rate vs. SNR for the different feature extraction methods from fingerprint images contaminated by AWGN.  114
4.8  Recognition rate vs. the error percentage for the different feature extraction methods from fingerprint images contaminated by impulsive noise.  114
4.9  Recognition rate vs. SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN.  115
4.10  Recognition rate vs. the error percentage for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise.  115
4.11  Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise.  116
4.12  Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise.  116
4.13  Recognition rate vs. SNR for the different feature extraction methods from fingerprint images contaminated by AWGN and interpolated with bilinear method.  117
4.14  Recognition rate vs. the error percentage for the different feature extraction methods from fingerprint images contaminated by impulsive noise and interpolated with bilinear method.  117
4.15  Recognition rate vs. SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN and interpolated with bilinear method.  118
4.16  Recognition rate vs. the error percentage for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise and interpolated with bilinear method.  118
4.17  Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise and interpolated with bilinear method.  119
4.18  Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise and interpolated with bilinear method.  119
4.19  Recognition rate vs. SNR for the different feature extraction methods from fingerprint images contaminated by AWGN and interpolated with bicubic method.  120
4.20  Recognition rate vs. the error percentage for the different feature extraction methods from fingerprint images contaminated by impulsive noise and interpolated with bicubic method.  120
4.21  Recognition rate vs. SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN and interpolated with bicubic method.  121
4.22  Recognition rate vs. the error percentage for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise and interpolated with bicubic method.  121
4.23  Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise and interpolated with bicubic method.  122
4.24  Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise and interpolated with bicubic method.  122
4.25  Recognition rate vs. SNR for the different feature extraction methods from fingerprint images contaminated by AWGN and interpolated with warped bilinear method.  123
4.26  Recognition rate vs. the error percentage for the different feature extraction methods from fingerprint images contaminated by impulsive noise and interpolated with warped bilinear method.  123
4.27  Recognition rate vs. SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN and interpolated with warped bilinear method.  124
4.28  Recognition rate vs. the error percentage for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise and interpolated with warped bilinear method.  124
4.29  Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise and interpolated with warped bilinear method.  125
4.30  Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise and interpolated with warped bilinear method.  125
4.31  Recognition rate vs. SNR for the different feature extraction methods from fingerprint images contaminated by AWGN and interpolated with warped bicubic method.  126
4.32  Recognition rate vs. the error percentage for the different feature extraction methods from fingerprint images contaminated by impulsive noise and interpolated with warped bicubic method.  126
4.33  Recognition rate vs. SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN and interpolated with warped bicubic method.  127
4.34  Recognition rate vs. the error percentage for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise and interpolated with warped bicubic method.  127
4.35  Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise and interpolated with warped bicubic method.  128
4.36  Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise and interpolated with warped bicubic method.  128
4.37  Recognition rate vs. SNR for the different feature extraction methods from fingerprint images contaminated by AWGN and interpolated with neural bilinear method.  129
4.38  Recognition rate vs. the error percentage for the different feature extraction methods from fingerprint images contaminated by impulsive noise and interpolated with neural bilinear method.  129
4.39  Recognition rate vs. SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN and interpolated with neural bilinear method.  130
4.40  Recognition rate vs. the error percentage for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise and interpolated with neural bilinear method.  130
4.41  Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise and interpolated with neural bilinear method.  131
4.42  Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise and interpolated with neural bilinear method.  131
4.43  Recognition rate vs. SNR for the different feature extraction methods from fingerprint images contaminated by AWGN and interpolated with neural bicubic method.  132
4.44  Recognition rate vs. the error percentage for the different feature extraction methods from fingerprint images contaminated by impulsive noise and interpolated with neural bicubic method.  132
4.45  Recognition rate vs. SNR for the different feature extraction methods from blurred fingerprint images contaminated by AWGN and interpolated with neural bicubic method.  133
4.46  Recognition rate vs. the error percentage for the different feature extraction methods from blurred fingerprint images contaminated by impulsive noise and interpolated with neural bicubic method.  133
4.47  Recognition rate vs. the noise variance for the different feature extraction methods from fingerprint images contaminated by speckle noise and interpolated with neural bicubic method.  134
4.48  Recognition rate vs. the noise variance for the different feature extraction methods from blurred fingerprint images contaminated by speckle noise and interpolated with neural bicubic method.  134
4.49  Recognition rate vs. SNR for the different feature extraction methods from landmine images contaminated by AWGN.  135
4.50  Recognition rate vs. the error percentage for the different feature extraction methods from landmine images contaminated by impulsive noise.  135
4.51  Recognition rate vs. SNR for the different feature extraction methods from blurred landmine images contaminated by AWGN.  136
4.52  Recognition rate vs. the error percentage for the different feature extraction methods from blurred landmine images contaminated by impulsive noise.  136
4.53  Recognition rate vs. the noise variance for the different feature extraction methods from landmine images contaminated by speckle noise.  137
4.54  Recognition rate vs. the noise variance for the different feature extraction methods from blurred landmine images contaminated by speckle noise.  137
4.55  Recognition rate vs. SNR for the different feature extraction methods from landmine images contaminated by AWGN and interpolated with bilinear method.  138
4.56  Recognition rate vs. the error percentage for the different feature extraction methods from landmine images contaminated by impulsive noise and interpolated with bilinear method.  138
4.57  Recognition rate vs. SNR for the different feature extraction methods from blurred landmine images contaminated by AWGN and interpolated with bilinear method.  139
4.58  Recognition rate vs. the error percentage for the different feature extraction methods from blurred landmine images contaminated by impulsive noise and interpolated with bilinear method.  139
4.59  Recognition rate vs. the noise variance for the different feature extraction methods from landmine images contaminated by speckle noise and interpolated with bilinear method.  140
4.60  Recognition rate vs. the noise variance for the different feature extraction methods from blurred landmine images contaminated by speckle noise and interpolated with bilinear method.  140
4.61  Recognition rate vs. SNR for the different feature extraction methods from landmine images contaminated by AWGN and interpolated with bicubic method.  141
4.62  Recognition rate vs. the error percentage for the different feature extraction methods from landmine images contaminated by impulsive noise and interpolated with bicubic method.  141
4.63  Recognition rate vs. SNR for the different feature extraction methods from blurred landmine images contaminated by AWGN and interpolated with bicubic method.  142
4.64  Recognition rate vs. the error percentage for the different feature extraction methods from blurred landmine images contaminated by impulsive noise and interpolated with bicubic method.  142
4.65  Recognition rate vs. the noise variance for the different feature extraction methods from landmine images contaminated by speckle noise and interpolated with bicubic method.  143
4.66  Recognition rate vs. the noise variance for the different feature extraction methods from blurred landmine images contaminated by speckle noise and interpolated with bicubic method.  143
4.67  Recognition rate vs. SNR for the different feature extraction methods from landmine images contaminated by AWGN and interpolated with warped bilinear method.  144
4.68  Recognition rate vs. the error percentage for the different feature extraction methods from landmine images contaminated by impulsive noise and interpolated with warped bilinear method.  144
4.69  Recognition rate vs. SNR for the different feature extraction methods from blurred landmine images contaminated by AWGN and interpolated with warped bilinear method.  145
4.70  Recognition rate vs. the error percentage for the different feature extraction methods from blurred landmine images contaminated by impulsive noise and interpolated with warped bilinear method.  145
4.71  Recognition rate vs. the noise variance for the different feature extraction methods from landmine images contaminated by speckle noise and interpolated with warped bilinear method.  146
4.72  Recognition rate vs. the noise variance for the different feature extraction methods from blurred landmine images contaminated by speckle noise and interpolated with warped bilinear method.  146
4.73  Recognition rate vs. SNR for the different feature extraction methods from landmine images contaminated by AWGN and interpolated with warped bicubic method.  147
4.74  Recognition rate vs. the error percentage for the different feature extraction methods from landmine images contaminated by impulsive noise and interpolated with warped bicubic method.  147
4.75  Recognition rate vs. SNR for the different feature extraction methods from blurred landmine images contaminated by AWGN and interpolated with warped bicubic method.  148
4.76  Recognition rate vs. the error percentage for the different feature extraction methods from blurred landmine images contaminated by impulsive noise and interpolated with warped bicubic method.  148
4.77  Recognition rate vs. the noise variance for the different feature extraction methods from landmine images contaminated by speckle noise and interpolated with warped bicubic method.  149
4.78  Recognition rate vs. the noise variance for the different feature extraction methods from blurred landmine images contaminated by speckle noise and interpolated with warped bicubic method.  149
4.79  Recognition rate vs. SNR for the different feature extraction methods from landmine images contaminated by AWGN and interpolated with neural bilinear method.  150
4.80  Recognition rate vs. the error percentage for the different feature extraction methods from landmine images contaminated by impulsive noise and interpolated with neural bilinear method.  150
4.81  Recognition rate vs. SNR for the different feature extraction methods from blurred landmine images contaminated by AWGN and interpolated with neural bilinear method.  151
4.82  Recognition rate vs. the error percentage for the different feature extraction methods from blurred landmine images contaminated by impulsive noise and interpolated with neural bilinear method.  151
4.83  Recognition rate vs. the noise variance for the different feature extraction methods from landmine images contaminated by speckle noise and interpolated with neural bilinear method.  152
4.84  Recognition rate vs. the noise variance for the different feature extraction methods from blurred landmine images contaminated by speckle noise and interpolated with neural bilinear method.  152
4.85  Recognition rate vs. SNR for the different feature extraction methods from landmine images contaminated by AWGN and interpolated with neural bicubic method.  153
4.86  Recognition rate vs. the error percentage for the different feature extraction methods from landmine images contaminated by impulsive noise and interpolated with neural bicubic method.  153
4.87  Recognition rate vs. SNR for the different feature extraction methods from blurred landmine images contaminated by AWGN and interpolated with neural bicubic method.  154
4.88  Recognition rate vs. the error percentage for the different feature extraction methods from blurred landmine images contaminated by impulsive noise and interpolated with neural bicubic method.  154
4.89  Recognition rate vs. the noise variance for the different feature extraction methods from landmine images contaminated by speckle noise and interpolated with neural bicubic method.  155
4.90  Recognition rate vs. the noise variance for the different feature extraction methods from blurred landmine images contaminated by speckle noise and interpolated with neural bicubic method.  155
4.91  Samples of flower images used in the experiments.  156
4.92  Samples of retinal images used in the experiments.  156
4.93  Recognition rate vs. SNR for the different feature extraction methods from flower images contaminated by AWGN.  157
4.94  Recognition rate vs. the error percentage for the different feature extraction methods from flower images contaminated by impulsive noise.  157
4.95  Recognition rate vs. SNR for the different feature extraction methods from blurred flower images contaminated by AWGN.  158
4.96  Recognition rate vs. the error percentage for the different feature extraction methods from blurred flower images contaminated by impulsive noise.  158
4.97  Recognition rate vs. the noise variance for the different feature extraction methods from flower images contaminated by speckle noise.  159
4.98  Recognition rate vs. the noise variance for the different feature extraction methods from blurred flower images contaminated by speckle noise.  159
4.99  Recognition rate vs. SNR for the different feature extraction methods from retinal images contaminated by AWGN.  160
4.100  Recognition rate vs. SNR for the different feature extraction methods from blurred retinal images contaminated by AWGN.  160
4.101  Recognition rate vs. the noise variance for the different feature extraction methods from retinal images contaminated by speckle noise.  161
4.102  Recognition rate vs. the noise variance for the different feature extraction methods from blurred retinal images contaminated by speckle noise.  161

LIST OF ABBREVIATIONS

A/S  Acoustic to Seismic
AP  Antipersonnel
AT  Antitank
AWGN  Additive White Gaussian Noise
CCD  Charge-Coupled Device
CFA  Color Filter Array
CMOS  Complementary Metal Oxide Semiconductor
DCDs  Digital Camera Designers
DCT  Discrete Cosine Transform
DSCs  Digital Still Cameras
DST  Discrete Sine Transform
DWT  Discrete Wavelet Transform
FFT  Fast Fourier Transform
FIR  Finite Impulse Response
HDTV  High Definition Television
HR  High Resolution
LPC  Linear Prediction Coefficients
LPCC  Linear Predictive Cepstral Coefficients
LR  Low Resolution
MFCC  Mel-Frequency Cepstral Coefficients
MLPs  Multi-Layer Perceptrons
MSE  Mean Square Error
PLP  Perceptual Linear Predictive Analysis
PSNR  Peak Signal-to-Noise Ratio
RBF  Radial Basis Function
SNR  Signal-to-Noise Ratio

Chapter 1
Introduction

1.1 Introduction
Most electronic imaging applications require HR images. HR means that the pixel density within an image is high; therefore, an HR image can offer more details than an LR image. HR images are of great importance in applications such as medical imaging, satellite imaging, military imaging, underwater imaging, remote sensing, and High-Definition Television (HDTV).

The demand for HR has been the motivation to find methods for increasing the resolution levels obtainable with the current imaging technology. In past decades, traditional vidicon and image-orthicon cameras were the only available image acquisition devices. These cameras are analog cameras. Since the 1970s, Charge-Coupled Device (CCD) and Complementary Metal Oxide Semiconductor (CMOS) image sensors have been widely used to capture digital images. Although these sensors are suitable for most imaging applications, the current resolution levels and their associated prices are not suitable for future demands. It is required to have very high resolution levels at as low a price as possible.

There are some solutions to increase the resolution level. The direct solution is to reduce the pixel size in sensor manufacturing, so that the number of pixels per unit area is increased [1]. The drawback of this solution is that the amount of light available for each pixel decreases. The decrease in the amount of light leads to the generation of shot noise that severely degrades the image quality. Unfortunately, the pixel size cannot be reduced beyond a certain level, which is about 40 µm² for a 0.35 µm CMOS process, to avoid shot noise. This level has already been reached in the manufacturing process.

There is another solution to the problem of resolution increment, which is to increase the chip size while keeping the pixel size fixed [1]. This solution leads to an increase in the chip capacitance. It is well known that a large capacitance limits the speed-up of the charge transfer rate. The slow rate of charge transfer leads to a great problem in the image formation process. Generally, all hardware solutions to this problem are limited by the high cost of the high-precision optics and the required image sensors.

The most feasible solution to this problem is to integrate the hardware and software capabilities to obtain the required HR level. Making use of as high a resolution level as possible from the hardware carries part of this task, and the rest of the task is performed using software. This is the trend in most up-to-date image capturing devices. Image processing algorithms can be used effectively to obtain HR images. When a single LR image is subjected to image processing to obtain an HR one, this is known as image interpolation.
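As a simple illustration of this operation (a hedged sketch only; the thesis reports MATLAB implementations, and the generic SciPy resampler used below is not the thesis's own method), obtaining an HR image from a single LR image amounts to resampling it on a denser pixel grid:

```python
import numpy as np
from scipy.ndimage import zoom

lr = np.random.rand(64, 64)   # a stand-in 64x64 LR image
hr = zoom(lr, 2, order=1)     # 2x upscaling using bilinear interpolation
print(lr.shape, hr.shape)     # (64, 64) (128, 128)
```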


1.2 Objectives of the Thesis
The main objective of the thesis covers the study of polynomial-based image interpolation, color image interpolation, the neural implementation of interpolation algorithms, and the application of interpolation algorithms to obtain HR images from LR database images in order to reduce the sizes of image databases for both color and gray images.

1.3 Organization of the Thesis
The rest of the thesis is organized as follows. In chapter 2, traditional polynomial-based image interpolation and its adaptive variants for gray-scale and color images are studied.

In chapter 3, neural implementation of gray-scale interpolation techniques is presented and discussed.

In chapter 4, a practical implementation for image interpolation is presented. Interpolation in this case is used to obtain HR images from LR databases for gray-scale and color images. A feature extraction method with low sensitivity to synthetic pixels is used.

In chapter 5, the concluding remarks and the directions for future research are presented.

Chapter 2
Review of Interpolation of Gray & Color Images

2.1 Introduction
This chapter highlights the image interpolation problem. Image interpolation has a wide range of applications in image processing systems. It allows the user to vary the size of images, interactively, to concentrate on some details. Hence, interpolation can be used to obtain an HR image from an LR one. As mentioned in chapter 1, HR images are required in many fields such as medical imaging, remote sensing, satellite imaging, and image compression and decompression.

One of the most important definitions of interpolation describes it as inserting extra values based on existing ones, e.g., introducing extra pixels into an existing digital image. Interpolation has several uses. It can be used to resize an image file when a bit-mapped image is enlarged. It can also be used to give an apparent (but not real) increase in resolution, e.g., in scanned images. It may be needed when the image is rotated and in some other transformations, e.g., animation. In some anti-aliasing techniques, interpolation is necessary. The values of interpolated pixels always depend on the values of neighboring pixels. Interpolation algorithms are chosen to preserve overall dimensions (as in scanners) or to preserve brightness values, depending on the use [2].


There is no general consensus on image interpolation methods. Over the past few decades, a considerable number of studies have been made on image interpolation. Many studies have provided linear and nonlinear interpolation approaches.

Linear interpolation methods, which include the nearest-neighbor, bilinear, cubic spline, and bicubic methods, are widely utilized to increase the resolution of images. These methods cause some annoying blurring artifacts. Although nonlinear interpolation methods can preserve the sharpness of edges, they are unable to prevent blurring effects in the interpolated image. Edges correspond to high frequencies in images, and they are always important for human vision. Several nonlinear interpolation methods focus on edge preservation. These methods utilize different reconstruction rules with different window sizes. Interpolation has been treated in the literature using different approaches [3-30]. One of the most interesting directions of research in image interpolation is adaptive interpolation.

The motivation of the research in the field of adaptive image interpolation is the demand for interpolation results of high visual quality. Recently, a linear space-variant approach was proposed for image interpolation. This approach is based on the evaluation of a warped distance between the pixel to be interpolated and each of its neighbors [30]. The warping process is performed by moving the estimate of the pixel towards the more homogeneous neighboring side. This approach has succeeded to some extent for edge interpolation. It has been applied to all polynomial-based interpolation methods.

There is also another simple method for adaptive polynomial-based image interpolation. It is performed by weighting the pixels used in the interpolation process with different adaptive weights [32]. The adaptation can be implemented with the warped-distance method.

Color interpolation or Demosaicing is a process by which a raw image generated by Digital Still Cameras (DSCs) with the help of the Color Filter Array (CFA) [Appendix] is converted to a full color image by estimating the missing color components of each pixel from its neighbors.

DSCs have been widely used as image input devices. In order to reduce the cost of DSCs, digital camera designers use a single CCD, instead of using three CCDs, with a CFA to acquire color images [33]. The CFA consists of a set of spectrally selective filters that are arranged in an interleaved pattern so that each sensor pixel samples one of three primary color components.

The Bayer CFA pattern [33] is the most frequently used pattern. Since there is only one color element available in each pixel, the two missing color elements must be estimated from the adjacent pixels. This process is called CFA interpolation, or demosaicing. In the Bayer CFA pattern, half of the pixels are assigned to G (green), and the R (red) and B (blue) channels, which are regarded as the chrominance signal, share the other half of the pixels.

In this chapter, we will focus on the classical B-spline and bicubic methods, as they are the most frequently used methods, and also on the adaptive versions of these methods. In addition, a technique of color image interpolation is presented, which we use to recover the missing color components of the pixels of a color image.

2.2 Classical Image Interpolation
The image interpolation process aims at estimating intermediate pixel values between the known pixels, as shown in Fig. (2.1). To estimate the intermediate pixel value at position x, the neighboring pixels and the distance s are incorporated into the estimation process.

For an equally spaced 1-D sampled data sequence $f(x_k)$, several interpolation functions can be used. The value to be estimated, $\hat{f}(x)$, can, in general, be written in the form [17-23]:

$$\hat{f}(x) = \sum_{k=-\infty}^{\infty} c(x_k)\,\beta(x - x_k) \qquad (2.1)$$

where $\beta(x)$ is the interpolation basis function, $x$ and $x_k$ represent the continuous and discrete spatial distances, respectively, and $c(x_k)$ are the interpolation coefficients, which need to be estimated prior to the interpolation process.

Fig. (2.1) 1-D signal interpolation. The pixel at position x is estimated using its neighboring pixels and the distance s.
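A minimal sketch of how Eq. (2.1) is evaluated in practice is given below (Python is used here for illustration only; the function names are ours, and the linear basis is chosen because it is interpolating, so that $c(x_k) = f(x_k)$):

```python
import numpy as np

def beta_linear(t):
    """Linear (triangular) basis: beta(t) = 1 - |t| for |t| < 1, else 0."""
    t = np.abs(t)
    return np.where(t < 1.0, 1.0 - t, 0.0)

def interpolate_1d(f, x):
    """Estimate f_hat(x) = sum_k c(x_k) * beta(x - x_k) on an integer grid.

    For an interpolating basis such as the linear one, c(x_k) = f(x_k)."""
    k = np.arange(len(f))                 # sample positions x_k = 0, 1, 2, ...
    return float(np.sum(f * beta_linear(x - k)))

samples = np.array([10.0, 12.0, 20.0, 18.0])   # f(x_k) at x_k = 0..3
print(interpolate_1d(samples, 1.25))           # 14.0: between f(1) and f(2)
```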


The basis functions are classified into two main categories: interpolating and non-interpolating basis functions. Interpolating functions do not require the estimation of the coefficients $c(x_k)$; the sample values $f(x_k)$ are used instead of $c(x_k)$. On the other hand, non-interpolating basis functions require the estimation of the coefficients $c(x_k)$. From the classical sampling theory, if $f(x)$ is band-limited to $[-\pi, \pi]$, then [23,31]:

$$\hat{f}(x) = \sum_{k} f(x_k)\,\operatorname{sinc}(x - x_k) \qquad (2.2)$$

This is known as ideal interpolation. From a numerical computation point of view, the ideal interpolation formula is not practical for four reasons. First, it relies on the use of ideal filters, if implemented in the frequency domain. Second, the band-limited hypothesis is in contradiction with the idea of finite-duration signals. Third, the band-limiting operation tends to generate Gibbs oscillations, which are visually disturbing. Finally, the rate of decay of the interpolation kernel sinc(x), shown in Fig. (2.2), is slow, which makes computations in the time domain inefficient [22]. So, approximations such as the B-spline basis functions are used as alternatives.

Chapter 2

Review of Interpolation of Gray & Color Images

represent each piece. The continuity of the spline and its derivatives up to order (n-1) must be preserved at the knots [22]. Our concentration is

1

0.8

sinc(x)

0.6

0.4

0.2

0

-0.2

-0.4 -5

-4

-3

-2

-1

0 x

1

2

3

4

5

Fig. (2.2) The interpolation kernel sinc(x). on splines with uniform knots and unit spacing. These splines are uniquely characterized in terms of the B-spline expansion given by the following equation [22]: fˆ ( x ) = ∑ c( x k ) β n ( x − x k )

(2.3)

k∈Z

where Z is a finite neighborhood around x. The B-spline basis function β n (x ) is a symmetrical bell-shaped function obtained by (n+1) fold convolutions of a rectangle pulse β 0 given by [20]:

9

Chapter 2

Review of Interpolation of Gray & Color Images

 1 1  0 β ( x) =  2  0  

1 1 |R 2 -R 4 |
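As an illustration of this construction, the sketch below (our own, assuming a dense uniform grid; it is not the thesis's implementation) builds $\beta^{n}$ numerically by repeated convolution of the rectangular pulse; for n = 3 the central value approaches the well-known 2/3 of the cubic B-spline:

```python
import numpy as np

def bspline_basis(n, step=0.001, support=6.0):
    """Numerically build the B-spline basis beta^n by convolving the
    rectangular pulse beta^0 with itself n times."""
    x = np.arange(-support, support, step)
    beta0 = np.where(np.abs(x) < 0.5, 1.0, 0.0)      # centred unit rectangle
    beta0[np.isclose(np.abs(x), 0.5)] = 0.5
    b = beta0.copy()
    for _ in range(n):
        b = np.convolve(b, beta0, mode="same") * step  # repeated convolution
    return x, b

x, b3 = bspline_basis(3)                 # cubic B-spline kernel
print(np.isclose(np.trapz(b3, x), 1.0))  # each beta^n integrates to one
print(b3[np.argmin(np.abs(x))])          # central value, approximately 2/3
```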

(G 1 +G 2 +G 3 +G 4 )/4,

if | R 1 -R 3 | = |R 2 -R 4 |

171

(1)

Appendix

Color Filter Array

R4 G4

R1

B1

G1

G1

R

G2 R2

B4 G4

B

G3

G3

R3

B3

(a)

(b)

G2 B2

Fig. ( 5) Two possible cases for interpolating G component In other words, we take into account the correlation in the red component to adapt the interpolation method. If the difference between R 1 and R 3 is smaller than the difference between R 2 and R 4 , indicating that the correlation is stronger in the vertical direction, we use the average of the vertical neighbors G 1 and G 3 to interpolate the required value. If the horizontal correlation is larger, we use horizontal neighbors. If neither direction dominates the correlation, we use all four neighbors. Similarly, for Fig. (5) (b) we will have

G(B)=

(G 1 +G 3 )/2,

if | B 1 -B 3 | < |B 2 -B 4 |

(G 2 +G 4 )/2,

if | B 1 -B 3 | > |B 2 -B 4 |

(G 1 +G 2 +G 3 +G 4 )/4,

if | B 1 -B 3 | = |B 2 -B 4 |

(2)

To conclude this section, note that if the speed of execution is the issue, one can safely use simple linear interpolation of the green component from the four nearest neighbors, without any adaptation G = (G 1 +G 2 +G 3 +G 4 )/4

(3) 172

Appendix

Color Filter Array

According to [69], this method of interpolation executes twice as fast as the adaptive method, and achieves only slightly worse performance on real images. For even fast updates only two of the four green values are averaged. However, this method displays false color on edges or zipper artifacts.

173

References

References [1] E. Choi, J. Choi and M. Kang, “Super-Resolution Approach to

Overcome Physical Limitations of Imaging Sensors: An Overview,” 2004 Wiley Periodicals, Inc. [2] http://www.idigitalphoto.com/dictionary/interpolation. Date of

access 20/9/2011, time 10 pm. [3]

T. Sigitani, Y. Iiguni and H. Maeda, “Image Interpolation For Progressive Transmission by Using Radial Basis Function Networks,” IEEE Trans. Neural Networks, vol. 10, No. 2, pp. 381-390, March 1999.

[4]

N. Plaziac, “Image Interpolation Using Neural Networks,” IEEE Trans. Image Processing, vol. 8, No. 11, pp. 1647-1651, November 1999.

[5] W. K. Carey, D. B. Chuang and S. S. Hemami, “Regularity

Preserving Image Interpolation,” IEEE Trans. Image Processing, vol. 8, No.9, pp. 1293-1297, September 1999. [6] G. Chen, R. J. P. de Figueiredo, “A Unified Approach to Optimal

Image

Interpolation

Problems

Based

on

Linear

Partial

Differential Equation Models, ”IEEE Trans. Image Processing, vol. 2, No.1, pp. 41-49, January 1993. [7] C. Lee, M. Eden and M. Unser, “High-Quality Image Resizing

Using Oblique Projection Operators,” IEEE Trans. Image Processing, vol. 7, No.5, pp. 679-692, May 1998. [8] D. Ramanan and K. E. Barner, “Nonlinear Image Interpolation

through Extended Permutation Filters,” in Proc. ICIP, 2002.

174

References [9] A. Munoz, T. Blu and M. Unser, “Least-Squares Image Resizing

Using Finite Differences,” IEEE Trans. Image Processing, vol. 10, No.9, pp. 1365-1378, September 2001. [10] D. Darian and T. W. Parks, “Adaptively Quadratic (Aqua)

Image Interpolation,” IEEE Trans. Image Processing, vol. 13, No.5, pp. 690-698, May 2004. [11] J. Vesma, “A Frequency-Domain Approach to Polynomial-

Based Interpolation and the Farrow Structure,” IEEE Trans. Circuits and Systems--II, vol. 47, No.3, pp. 206-209, March 2000. [12] X.

Pan,

“A

Novel

Approach

for

Multidimensional

Interpolation,” IEEE Trans. Signal Processing Letters, vol. 6, No.2, pp. 38-40, February 1999. [13] T. M. Lehman, C. Conner and K. Spitzer, “Addendum: B-

Spline Interpolation in Medical Image Processing,” IEEE Trans. Medical Imaging, vol. 20, No.7, pp. 660-665, July 2001. [14] T. Bretschneider, C. Miller and O. Kao, “Interpolation of

Scratches in Motion Picture Films,” in Proc. ICASSP, 2000. [15] V. Caselles, J. M. Morel and C. Sbert, “An Axiomatic

Approach

to

Image

Interpolation,”

IEEE

Trans.

Image

Processing, vol. 7, No.3, pp. 376-386, May 2004. [16] H. S. Hou and H. C. Andrews, “Cubic Spline For Image

Interpolation and Digital Filtering,” IEEE Trans. Accoustics , Speech and Signal Processing, vol. ASSP-26 , No.9, pp. 508517, December 1978. [17] M. Unser, A. Aldroubi, and M. Eden “B-Spline Signal

Processing: Part I-Theory,” IEEE Trans. Signal Processing, vol. 41 No.2, pp. 821-833, February 1993. 175

References [18] M. Unser, A. Aldroubi, and M. Eden “B-Spline Signal

Processing: Part II-Efficient Design and Applications,” IEEE Trans. Signal Processing, vol. 41, No.2, pp. 834-848, February 1993. [19] B. Vrcelj and P. P. Vaidyanathan, “Efficient Implementation of

All-Digital Interpolation,” IEEE Trans. Image Processing, vol. 10, No.11, pp.1639-1646, November 2001. [20] P. Thevenaz, T. Blu and M. Unser, “Interpolation Revisited,”

IEEE Trans. Medical Imaging, vol. 19, No.7, pp. 739-758, July 2000. [21] T. Blu, P. Thevenaz and M. Unser, “MOMS: Maximal-Order

Interpolation of Minimal Support,” IEEE Trans. Image Processing, vol. 10, No. 7, pp. 1069-1080, July 2001. [22] M. Unser, “Splines A Perfect Fit For Signal and Image

Processing,” IEEE Signal Processing Magazine November 1999. [23] J. K. Han and H. M. Kim, “Modified Cubic Convolution Scaler

with Minimum Loss of Information,” Optical Engineering. , vol. 40, No. 4, pp. 540-546, April 2001. [24] A. Gotchev, K. Egiazarian, J. Vema and T. Saramaki, “Edge-

Preserving Image Resizing Using Modified B-Splines,” in Proc. ICASSP, 2000. [25] A. Gotchev, J. Vesma, T. Saramaki and K. Egiazarian,

“Digital Image Resampling By Modified B-Spline Functions,” in Proc. ICASSP, 2000. [26] E. Meijering and M. Unser, “A note on Cubic Convolution

Interpolation,” IEEE Trans. Image Processing, vol. 12, No. 4, pp. 477-479, April 2003. 176

References [27] T. Blu, P. Thevenaz and M. Unser, “How a Simple Shift Can

Significantly Improve the Performance of Linear Interpolation,” in Proc. ICIP, pp. III-377-III-380, 2002. [28] K. Ichige, T. Blu and M. Unser, “Interpolation of Signals By

Generalized Piecewise-Linear Multiple Generators,” in Proc. ICASSP, pp. VI-261-VI-264, 2003. [29] T. Blu, B. Thevenaz and M. Unser, “Linear Interpolation

Revitalized,” IEEE Trans. Image Processing, vol. 13, No. 5, pp. 710-719, May 2004. [30] G. Ramponi, “Warped Distance for Space Variant Linear Image

Interpolation,” IEEE Trans. Image Processing, vol. 8, pp. 629639, 1999. [31] B. P. Lathi, “Modern Digital and Analog Communication

Systems,” Rinehart and Winston, Inc., 1998. [32] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, and F. E.

Abd El-Samie, “Adaptive Image Interpolation Based on Local Activity Levels,” URSI National Radio Science Conference, NRSC’03, Cairo, Egypt, March 2003. [33] B. E. Bayer, “Color Imaging Array,” U. S. Patent 3 971 065,

1976. [34] J. H. Shin, J. H. Jung and J. K. Paik, “Regularized Iterative

Image Interpolation and Its Application to Spatially Scalable Coding,” IEEE Trans. Consumer Electronics, vol. 44, No.3,pp. 1042-1047, August 1998. [35] W. Y. V Leung, P. J. Bones “Statistical Interpolation of

Sampled Images,” Optical Engineering, vol. 40, No.4, pp.547553, April 2001. 177

References [36] S. E. El-Khamy,

Salam, and

M. M. Hadhoud, M. I. Dessouky, B. M.

F. E.Abd El-Samie, “Optimization of Image

Interpolation as an Inverse Problem Using The LMMSE Algorithm,” in Proc. of MELECON, pp. 247-250, May 2004. [37] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M.

Salam, and F. E.Abd El-Samie, “A Computationally Efficient Image Interpolation Approach,” International Journal of Signal Processing, 2005. [38] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M.

Salam, and F. E. Abd El-Samie, “Adaptive Least Squares Acquisition of High Resolution Images,” International Journal of Information Acquisition (IJIA), March 2005. [39] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam

and F. E. Abd El-Samie, “Sectioned Implementation of Regularized Image Interpolation,” in Proc. MWSCAS, 2003. [40] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky,

B. M.

Salam, and F. E. Abd El-Samie, “A New Approach For Regularized Image Interpolation,” Journal of The Brazilian Computer Society, 2006. [41] S. E. El-Khamy,

M. M. Hadhoud, M. I. Dessouky, B. M.

Salam, and F. E. Abd El-Samie, “Efficient Implementation of Image Interpolation as an Inverse Problem,” Journal of Digital Signal Processing, vol. 15, No. 2, pp.137-152, March 2005. [42] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky,

B. M.

Salam, and F. E. Abd El-Samie, “A New Approach for Adaptive Polynomial Based Image Interpolation,” International Journal of Information Acquisition, vol. 3, No. 2, pp. 139–159, 2006. 178

[43] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, and F. E. Abd El-Samie, "A New Edge Preserving Pixel-by-Pixel (PBP) Cubic Image Interpolation Approach," URSI National Radio Science Conference, NRSC'04, Cairo, Egypt, March 2004.
[44] S. E. El-Khamy, M. M. Hadhoud, M. I. Dessouky, B. M. Salam, and F. E. Abd El-Samie, "An Adaptive Cubic Convolution Image Interpolation Approach," Journal of Machine Graphics & Vision, vol. 14, No. 3, pp. 235-256, 2005.
[45] R. K. Thakur, A. Tripathy, and A. K. Ray, "A Design Framework of Digital Camera Images Using Edge Adaptive and Directionally Weighted Color Interpolation Algorithm," IEEE Trans. Image Processing, vol. 1, pp. 905-909, 2009.
[46] J. F. Hamilton and J. E. Adams, "Adaptive Color Plane Interpolation in Single Sensor Color Electronic Camera," U.S. Patent 5 629 734, 1997.
[47] T. Acharya and A. K. Ray, "Image Processing - Principles and Applications," Wiley, 2005.
[48] B. Krose and P. Smagt, "An Introduction to Neural Networks," Eighth Edition, November 1996.
[49] A. I. Galushkin, "Neural Networks Theory," Springer-Verlag, Berlin Heidelberg, 2007.
[50] G. Dreyfus, "Neural Networks: Methodology and Applications," Springer-Verlag, Berlin Heidelberg, 2005.
[51] W. Chaohong, S. Zhixin, and V. Govindaraju, "Fingerprint Image Enhancement Method Using Directional Median Filter," Proceedings of the SPIE, vol. 5404, pp. 66-75, 2004.
[52] S. Kasaei, M. Deriche, and B. Boashash, "Fingerprint Feature Enhancement Using Block-Direction on Reconstructed Images," International Conference on Information, Communications and Signal Processing, pp. 721-725, 1997.
[53] L. Hong, Y. Wan, and A. Jain, "Fingerprint Image Enhancement: Algorithm and Performance Evaluation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, No. 8, pp. 777-789, 1998.
[54] H. Kasban, O. Zahran, S. M. S. Elaraby, M. El-Kordy, and F. E. Abd El-Samie, "Automatic Object Detection from Acoustic to Seismic Landmines Images," IEEE International Conference on Computer Engineering & Systems, Cairo, Egypt, November 2008.
[55] H. Kasban, O. Zahran, S. M. S. Elaraby, M. El-Kordy, S. El-Rabie, and F. E. Abd El-Samie, "Efficient Detection of Landmines from Acoustic Images," Progress In Electromagnetics Research C, vol. 6, pp. 79-92, 2009.
[56] N. Xiang and J. M. Sabatier, "An Experimental Study on Antipersonnel Landmine Detection Using Acoustic-to-Seismic Coupling," J. Acoust. Soc. Am., vol. 113, No. 3, March 2003.
[57] N. Xiang and J. M. Sabatier, "Landmine Detection Measurements Using Acoustic-to-Seismic Coupling," Proceedings of SPIE, vol. 4038, pp. 645-655, Orlando, USA, 2000.
[58] Zou Lihua, "A New Fingerprint Image Recognition Approach Using Artificial Neural Network," International Conference on E-Health Networking, Digital Ecosystems and Technologies (EDT), 2010.
[59] R. Vergin, D. O'Shaughnessy, and A. Farhat, "Generalized Mel-frequency Cepstral Coefficients for Large-Vocabulary Speaker-Independent Continuous-Speech Recognition," IEEE Transactions on Speech and Audio Processing, vol. 7, No. 5, pp. 525-532, September 1999.
[60] T. Kinnunen, "Spectral Features for Automatic Text-Independent Speaker Recognition," Licentiate's Thesis, University of Joensuu, Department of Computer Science, Finland, 2003.
[61] R. Chengalvarayan and L. Deng, "Speech Trajectory Discrimination Using the Minimum Classification Error Learning," IEEE Transactions on Speech and Audio Processing, vol. 6, No. 6, pp. 505-515, 1998.
[62] P. D. Polur and G. E. Miller, "Experiments With Fast Fourier Transform, Linear Predictive and Cepstral Coefficients in Dysarthric Speech Recognition Algorithms Using Hidden Markov Model," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 13, No. 4, pp. 558-561, 2005.
[63] S. Dharanipragada, U. H. Yapanel, and B. D. Rao, "Robust Feature Extraction for Continuous Speech Recognition Using the MVDR Spectrum Estimation Method," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, No. 1, pp. 224-234, 2007.
[64] Z. Tufekci, "Local Feature Extraction for Robust Speech Recognition in the Presence of Noise," Ph.D. Dissertation, Clemson University, 2001.
[65] R. Sarikaya, "Robust and Efficient Techniques for Speech Recognition in Noise," Ph.D. Dissertation, Duke University, 2001.
[66] S. Furui, "Cepstral Analysis Technique for Automatic Speaker Verification," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-29, No. 2, pp. 254-272, 1981.
[67] R. Gandhiraj and P. S. Sathidevi, "Auditory-based Wavelet Packet Filter-bank for Speech Recognition using Neural Network," Proceedings of the 15th International Conference on Advanced Computing and Communications, pp. 666-671, 2007.
[68] A. Katsamanis, G. Papandreou, and P. Maragos, "Face Active Appearance Modeling and Speech Acoustic Information to Recover Articulation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, No. 3, pp. 411-422, 2009.
[69] T. Sakamoto, C. Nakanishi, and T. Hase, "Software Pixel Interpolation for Digital Still Cameras Suitable for a 32-bit MCU," IEEE Trans. Consumer Electronics, vol. 44, No. 4, November 1998.

Summary

This thesis is concerned with an important and active area of digital image processing: the reconstruction of high-resolution images from images degraded by different mechanisms. This area has attracted the attention of many researchers in digital image processing because of its great importance in numerous applications such as medical imaging, military applications, remote sensing, and high-definition television.

A principal technique for obtaining high-resolution images is treated in this thesis, namely image interpolation, by which a high-resolution image is estimated from a single degraded image.

The work starts from two basic points: image interpolation with the classical polynomial-based methods, and the interpolation of color images.

The thesis is divided into five chapters:

Chapter One gives a general introduction to the thesis and the sequence of its topics.

Chapter Two reviews the existing polynomial-based image interpolation framework and its different variants, such as B-spline interpolation, as well as a scheme for color image interpolation.

Chapter Three describes the proposed approach, which employs a neural network for image interpolation. The results of several conventional schemes are compared with those of the proposed scheme, and all simulations were carried out in MATLAB ver. 7.0 (a minimal sketch of the neural interpolation idea follows the chapter outline below).

Chapter Four presents a new application in which image interpolation is combined with pattern recognition. Image features can still be extracted after the images have been interpolated to reduce their size, with recognition accuracies that depend on the type of interpolation used. The chapter also shows how features are extracted from interpolated color images and quantifies the accuracy relative to features extracted from the original images.

Chapter Five summarizes the work carried out in the thesis, presents the main conclusions, and gives some suggestions for future studies in the same field. The thesis concludes with the list of references used.
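The neural interpolation idea summarized for Chapter Three can be illustrated with a short MATLAB sketch. The sketch below is only an illustration under assumed choices (the toolbox example image 'cameraman.tif', a 3x3 input window, ten hidden neurons, and the classic Neural Network and Image Processing Toolbox calls newff/train/sim and im2col/col2im); it is not the network architecture or training procedure actually used in the thesis.

% Minimal sketch (assumed settings, not the thesis implementation):
% a feedforward network is trained to refine a crude polynomial
% interpolation by mapping each 3x3 neighborhood of the bilinearly
% upsampled image to the true center pixel of the original image.
ref = im2double(imread('cameraman.tif'));   % example high-resolution image
lr  = imresize(ref, 0.5, 'bilinear');       % simulated low-resolution observation
up  = imresize(lr, 2, 'bilinear');          % initial polynomial-based estimate

P = im2col(up,  [3 3], 'sliding');          % 9 x N matrix of 3x3 neighborhoods
T = im2col(ref, [3 3], 'sliding');
T = T(5, :);                                % 1 x N targets: true center pixels

idx = 1:25:size(P, 2);                      % subsample training pairs to keep training light
net = newff(minmax(P), [10 1], {'tansig', 'purelin'}, 'trainlm');
net.trainParam.epochs = 50;
net = train(net, P(:, idx), T(idx));

est = sim(net, P);                          % network estimate for every neighborhood
out = col2im(est, [3 3], size(ref), 'sliding');   % interior of the refined image
out = min(max(out, 0), 1);                  % clip to the valid intensity range

The point of the sketch is only that a learned mapping can replace a fixed polynomial kernel; in the thesis the window size, network structure, and training data differ.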

Abstract

Keywords: Neural Networks - Image Interpolation - Polynomial-Based Interpolation - Color Images - Pattern Recognition.

This thesis is concerned with an important and active area of digital image processing: obtaining high-resolution images from images degraded by different mechanisms, which is of great importance in many applications such as medical, military, and remote sensing applications.

The thesis treats a principal technique for obtaining high-resolution images, namely image interpolation, by which a high-resolution image is estimated from a single degraded image.

The study begins by concentrating on image interpolation with the classical polynomial-based methods and on the interpolation of color images.

The proposed scheme, which uses a neural network for image interpolation, is then presented, and its results are compared with those of several conventional schemes. The proposed scheme proves superior in terms of both the error level and the computational complexity required.

Finally, a new application is introduced that uses image interpolation in pattern recognition. Image features can still be extracted after the degraded images have been interpolated, with recognition accuracies that depend on the type of interpolation. The extraction of features from interpolated color images is also addressed, and the achieved accuracy is quantified relative to the features extracted from the original images.
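The comparison with conventional polynomial-based schemes mentioned above is expressed in terms of error measures. As a point of reference only, the following MATLAB sketch, under assumed settings (the toolbox example image 'cameraman.tif', 2:1 decimation, intensities scaled to [0,1]), shows how such conventional bilinear and bicubic reconstructions are typically generated and scored with the PSNR; it is not the evaluation code used in the thesis.

% Minimal sketch (assumed image and scale; not the thesis evaluation code):
% conventional polynomial-based interpolation after 2:1 decimation,
% scored with the PSNR against the original image.
orig = im2double(imread('cameraman.tif'));
low  = imresize(orig, 0.5, 'bilinear');        % simulated degraded (decimated) image

rec_lin = imresize(low, 2, 'bilinear');        % first-order polynomial reconstruction
rec_cub = imresize(low, 2, 'bicubic');         % third-order polynomial reconstruction

mse     = @(a, b) mean((a(:) - b(:)).^2);      % mean squared error
psnr_db = @(a, b) 10 * log10(1 / mse(a, b));   % PSNR for images scaled to [0,1]

fprintf('Bilinear PSNR: %.2f dB\n', psnr_db(rec_lin, orig));
fprintf('Bicubic  PSNR: %.2f dB\n', psnr_db(rec_cub, orig));

The same PSNR measure can then be applied to any other reconstruction, which is how fixed-kernel and learned schemes are placed on a common scale.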

