Republic of Iraq
Ministry of Higher Education and Scientific Research
University of Technology
Department of Computer Science

Cuneiform Symbols Recognition Using Pattern Recognition Techniques

A Thesis Submitted to the Department of Computer Science of the University of Technology in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Computer Science

By

Ali Adel Saeid AL-Tammeme

Supervisor

Prof. Dr. Abdul Monem S. Rahma

Asst. Prof. Dr. Abdul Mohssen J. Abdul Hossen

2018

In the name of Allah, the Most Gracious, the Most Merciful:

"Read in the name of your Lord who created; created man from a clinging substance. Read, and your Lord is the most Generous, who taught by the pen; taught man that which he knew not."

Almighty Allah has spoken the truth.

(Surah Al-Alaq: verses 1-5)

Acknowledgements

All my thanks first of all are addressed to Almighty Allah, who has guided my steps towards the path of knowledge; without His help and blessing, this thesis would not have progressed or seen the light. My sincere appreciation is expressed to my supervisor Prof. Dr. Abdul Monem S. Rahma for providing me with support, ideas and inspiration. I am extremely grateful to all members of the Computer Science Department of the University of Technology for their general support. Finally, I would never have been able to finish my thesis without the help of my friends and the support of my family.

Thank you all.

Ali

Supervisor's Certification

We certify that this dissertation entitled (Cuneiform Symbols Recognition Using Pattern Recognition Techniques) was prepared by "Ali Adel Saied AL-temmeme" under our supervision at the Computer Science Department / University of Technology in partial fulfillment of the requirements for the degree of Ph.D. in Computer Science.

Signature:
Name: Prof. Dr. Abdul Monem S. Rahma
Date:   /   /2018

Signature:
Name: Asst. Prof. Dr. Abdul Mohssen J. Abdul Hossen
Date:   /   /2018

Table of Contents

Abstract
List of Contents
List of Tables
List of Figures
List of Abbreviations
List of Algorithms

Chapter One: General Introduction
1.1 Introduction
1.2 Pattern Recognition
1.2.1 Pattern Recognition Approaches
1.2.2 Pattern Recognition and Cuneiform Writing
1.3 Literature Survey
1.4 Aim of Thesis
1.5 Thesis Contributions
1.6 Organization of Thesis

Chapter Two: Theoretical Background
2.1 Introduction
2.2 Character Recognition Types
2.2.1 Online Character Recognition
2.2.2 Offline Character Recognition
2.3 Preprocessing
2.3.1 Image Enhancement
2.3.2 Image Enhancement Filters
2.4 Image Segmentation
2.4.1 Image Thresholding
2.4.2 Image Thresholding Techniques
2.4.3 Image Connected-Component Labeling
2.5 Image Distance Transform
2.5.1 Distance Transforms with Sampled Functions
2.6 Feature Extraction
2.6.1 Statistical Features
2.6.2 Global Transformation and Series Expansion Features
2.6.3 Structural Features
2.7 Classification
2.7.1 Probabilistic Neural Networks (PNN)
2.7.2 Support Vector Machine (SVM)
2.8 Post-Processing
2.9 Image Fusion
2.10 Evaluation Measures of the Cuneiform Recognition System
2.11 Proposed Learning Dataset

Chapter Three: Proposed Assyrian Cuneiform Recognition System
3.1 Introduction
3.2 Architecture of the Proposed System
3.3 Assyrian Cuneiform Recognition System (ACRS) Model 1
3.3.1 Image Acquisition Stage
3.3.2 Preprocessing Stage (1)
3.3.3 Image Thresholding
3.3.4 Preprocessing Stage (2): The Elimination of Rejected Objects
3.3.5 Feature Extraction
3.3.6 Training Stage
3.3.7 Classification Stage
3.4 Post-Processing
3.4.1 Proposed Solution for Duplication Problems
3.4.2 Locating the Density Centroid
3.4.3 Training Stage
3.4.4 Test Feature Extraction Stage
3.4.5 Classification Stage
3.5 Accuracy Support for Cuneiform Symbol Recognition by an Image Fusion Approach

Chapter Four: Experiments and Results Discussion
4.1 Introduction
4.2 Cuneiform Tablet Image Dataset
4.3 Cuneiform Tablet Image Preprocessing
4.3.1 Image Enhancement
4.3.2 Removing Spots
4.3.3 Removing Writing Lines
4.4 Image Thresholding
4.5 Feature Extraction
4.5.1 Elliptical Fourier Descriptors (EFD)
4.5.2 Projection Histograms
4.5.3 Hu's Moments
4.5.4 Zernike Moments (ZM)
4.5.5 Polygon Approximation
4.6 Classification
4.7 Results and Discussion
4.8 Analytical Comparison

Chapter Five: Conclusions and Suggestions
5.1 Conclusions
5.2 Suggestions for Future Work
References

Table of Tables

2.1 SVM kernel discriminant functions
3.1 Threshold values computed against the skewness metric
3.2 The MSE values for each segment corresponding to its length
3.3 Break points generated from border points
3.4 The computed AEV values for each dominant point
3.5 The updated AEV values
3.6 Approximated points
3.7 Structure points
4.1 Comparison of recognition accuracy ratios after applying different LPFs with different sizes
4.2 Comparison of result values against each cut-off frequency value of the ideal filter
4.3 Comparison of results according to each threshold value
4.4 Comparison of results after applying the first line-removal algorithm according to different threshold values
4.5 Comparison of results of applying the following thresholding methods according to their consumed time
4.6 Feature vector constructed by EFD
4.7 Comparison of recognition results with EFD applied, according to each classifier
4.8 Comparison of recognition results with projection histograms applied, according to each classifier
4.9 Comparison of recognition results with Hu's moments applied, according to each classifier
4.10 Comparison of recognition results with ZM applied, according to each classifier
4.11 Experimental results according to each diversity value by the polygon approximation algorithm
4.12 Experimental results according to each diversity value by the proposed polygon approximation algorithm
4.13 Comparison of recognition results and average classification time according to each classifier
4.14 Comparison of recognition results according to different image sizes
4.15 Comparison of recognition accuracy results according to different standard deviation values δ
4.16 Comparison of recognition accuracy results according to the duplicated recognition state
4.17 Comparison of recognition accuracy results according to image size
4.18 Comparison of recognition accuracy results according to each feature extraction method and average classification time

Table of Figures

1.1 Pattern recognition system
2.1 Image enhancement frequency transform steps
2.2 Gaussian kernel mask values
2.3 Frequency domain technique
2.4 Image enhancement with low frequency
2.5 Ideal low-pass filter
2.6 Image segmentation methods
2.7 Image connected-component labeling
2.8 Structure element
2.9 Dilation and erosion process
2.10 Regenerated connected-component segment
2.11 Image distance transform
2.12 The lower envelope of n parabolas
2.13 Distance transform with its states
2.14 Projection histograms feature extraction method
2.15 The computation of the unit disk
2.16 Polygon approximation
2.17 Polygon approximation approaches
2.18 Associated approximation error
2.19 Break points
2.20 General architecture of a PNN
2.21 SVM with hard margin
2.22 Hyperplane H and support vectors
2.23 Maximum margin
2.24 SVM model with soft margin
2.25 Changing the data space
2.26 Wavelet-based image fusion
2.27 Three-dimensional shape model of cuneiform symbols
2.28 Virtual dataset
2.29 Cuneiform symbol versus its virtual symbol probabilities
3.1 Proposed ACRS system architecture
3.2 Architecture of Module 1
3.3 Cuneiform tablet images
3.4 Image enhanced in the frequency domain
3.5 Images enhanced in the frequency domain with different cut-off frequency values
3.6 Image binarization methods
3.7 Binarized image with low quality
3.8 Binarized image after applying histogram equalization
3.9 Cuneiform images of the same character with different features
3.10 Extracted connected components
3.11 Image labeling
3.12 Spot-free cuneiform image
3.13 The effect of the thresholding value
3.14 Cuneiform image segmentation
3.15 Distance transform
3.16 Preface 1
3.17 Erasing cuneiform lines
3.18 Line-free binary cuneiform image
3.19 Problem of the statistical algorithm
3.20 The cuneiform writing line removed after selecting a suitable threshold value
3.21 Preface 2
3.22 MSE line removal
3.23 Boundary extraction
3.24 Edge thinning
3.25 Freeman's chain code
3.26 Break points
3.27 Approximated boundary figure
3.28 Polygon approximation
3.29 Quality of approximation
3.30 Cuneiform patterns
3.31 Approximated feature points
3.32 Approximated points
3.33 Feature vector
3.34 Cuneiform character
3.35 Classification process
3.36 Color cuneiform image
3.37 Enhanced state
3.38 Binarized cuneiform image
3.39 Spot-free cuneiform image
3.40 Cleared cuneiform binary image
3.41 Cuneiform labeled image
3.42 The classification results against each symbol
3.43 Cuneiform character matching-code problem
3.44 Cuneiform patterns with their centroids
3.45 Virtual cuneiform patterns
3.46 Density centroid pixel
3.47 Separated training binary cuneiform symbols
3.48 Extracted search point
3.49 Model (2) post-processing
3.50 Illumination problem
3.51 Classification problem
3.52 Proposed cuneiform image fusion diagram
3.53 Approximated figures output by the proposed method
4.1 Deformation problem
4.2 Cuneiform images with their corresponding enhanced images
4.3 Outputs of different LPFs with different sizes
4.4 Removing spots
4.5 Removing writing lines
4.6 Image binarization methods
4.7 Learning patterns with the same direction
4.8 Feature vector of EFD
4.9 Elliptical Fourier descriptors
4.10 Hu's moments feature vector
4.11 Four ZM values for each square zone
4.12 Polygon approximation
4.13 Approximation steps
4.14 Symbol classification results
4.15 Cuneiform symbol deformation
4.16 Cuneiform character recognition

Table of Algorithms

2.1 Image enhancement by the frequency domain
2.2 Iterative threshold algorithm
2.2.1 Iterative threshold algorithm (applied locally)
2.3 Connected component extraction
2.4 Sampled distance transform
2.5 Reverse polygonization algorithm
3.1 Cuneiform image thresholding
3.2 Spot elimination
3.3 Isolation algorithm
3.4 Statistical line-removal method
3.5 MSE line-removal method
3.6 Cuneiform symbol approximation algorithm
3.7 Structure feature generation
3.8 Training the virtual symbol learning set
3.9 PNN cuneiform symbol classification
3.10 Training post-processing
3.11 Test feature post-processing
3.12 Cuneiform image fusion algorithm

Table of Abbreviations

ACO: Ant Colony Optimization
ACRS: Assyrian Cuneiform character Recognition System
BP: Break Points
BPR: Back Propagation
CBDT: City Block Distance Transform
CNN: Convolution Neural Network
CCL: Connected-Component Labeling
CR: Compression Ratio
DP: Dominant Points
DT: Distance Transform
EDT: Euclidean Distance Transform
EFD: Elliptical Fourier Descriptor
FE: Feature Extraction
FT: Fourier Transform
GFT: Gabor Fourier Transform
HPF: High-Pass Filter
ILPF: Ideal Low-Pass Filter
KNN: k-Nearest Neighbor
LPF: Low-Pass Filter
MS: Matching Score
NN: Neural Networks
OCR: Optical Character Recognition
PDF: Probability Density Function
PSO: Particle Swarm Optimization
PNN: Probabilistic Neural Network
PR: Pattern Recognition
Rr: Recognition Ratio
SVM: Support Vector Machine
SSV: Symbol Structure Vector
TP: True Positive
TS: Tabu Search
WT: Wavelet Transform
ZM: Zernike Moments

Abstract

Writing is one of the oldest and most important inventions of humanity, and it began in the land of Mesopotamia. Writing underwent many stages of development; cuneiform writing took the form of patterns engraved on stone or pressed into clay tablets to form the cuneiform characters. International museums, such as the Iraqi Museum, hold thousands of cuneiform tablets, a large proportion of which have not been translated because of the small number of translators and the difficulty of this language. This thesis therefore presents a proposed cuneiform recognition system as a solution to this problem, based on pattern recognition techniques and in particular optical character recognition (OCR). It also presents a new approach, based on image morphology and the distance transform, to the problem of eroding unwanted objects such as spots and writing lines. The proposed training dataset in this thesis is a new approach built from virtual rectangular shapes distributed over four patterns forming a number of classes; it takes into consideration the shadow probabilities associated with the cuneiform geometric features (a newly proposed factor) that result from reflected light. The thesis presents a comparison between a proposed feature extraction algorithm based on the polygon approximation principle and classical feature extraction methods such as elliptic Fourier descriptors, Zernike moments, Hu's moments, and projection histograms. The classification task is implemented with more than one classifier, namely the probabilistic neural network (PNN) and the support vector machine (SVM) with multiple kernel discriminant functions, to reach a reliable decision when evaluating the newly proposed feature extraction methods.

Another contribution offered in this thesis is a proposed post-processing algorithm to solve the duplicated state, in which different cuneiform characters have the same classification features, by depending on approximated points computed with the distance transform. To evaluate the system performance, the accuracy results obtained from the comparative test of the feature extraction techniques are, respectively: 95% for the proposed approximation technique with PNN, 70% for EFD with an SVM using a polynomial kernel, 57% for the projection histogram with an SVM using an RBF kernel, 35% for Hu's moments with an SVM using an RBF kernel, and 26% for ZM with PNN. After adopting the proposed feature extraction algorithm, the recognition accuracy of the proposed system is 94%. Finally, the high accuracy achieved by the proposed preprocessing algorithms for the erosion of unwanted objects, spots and writing lines, is 95% and 92% respectively.

Chapter one General Introduction

1.1 Introduction
Pattern recognition is one of the important branches of artificial intelligence. It is the science that tries to make machines as intelligent as humans in recognizing patterns and classifying them into desired categories in a simple and reliable way, in order to make the right decision in various applications such as remote sensing and computer vision, through its metrics or developed methods such as statistical estimation and recognition, clustering, fuzzy sets, syntactic recognition, approximate reasoning and neural networks (NN). Humans want to take advantage of this development to automate applications for seeking, manipulating, accessing and analyzing data, and for decision making. Writing has been and remains an essential aspect of humanity because it reflects all aspects of human communication and documentation over time; this supports the attention given to the historical stages of writing development, which is important when implementing automated aspects of writing, archiving and translation.

1.2 Pattern Recognition
Pattern recognition (PR) is a field of machine learning that achieves the learning process through theories and methods for designing a machine to recognize the pattern of an object, which finally leads to assigning the correct pattern to the object. The structure of a PR system can be summarized as follows [Pri13]:



Data acquisition and preprocessing: raw data are taken from the environment and preprocessed to remove noise and unwanted features.
Feature extraction: the relevant data are extracted from the processed data to create the classification features, represented by a feature vector.
Decision making: the decision is made by a classifier or descriptor from the extracted features.
The block diagram of a PR system is shown in Figure (1.1) [Pri13], [Vin12].

Figure 1.1: Pattern recognition system.
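The three stages just listed can be sketched as a minimal pipeline. This is an illustrative sketch only: the toy signal, the two-element feature vector, and the nearest-mean decision rule are assumptions for demonstration, not part of the system described in this thesis.

```python
import numpy as np

def preprocess(raw):
    """Data acquisition and preprocessing: scale the raw measurements to [0, 1]."""
    raw = np.asarray(raw, dtype=np.float64)
    return (raw - raw.min()) / (raw.max() - raw.min() + 1e-12)

def extract_features(signal):
    """Feature extraction: summarize the data as a small feature vector."""
    return np.array([signal.mean(), signal.std()])

def decide(features, class_means):
    """Decision making: assign the label of the nearest class mean."""
    dists = {label: np.linalg.norm(features - m) for label, m in class_means.items()}
    return min(dists, key=dists.get)

# Hypothetical class models: a flat signal has low spread, a ramp has high spread.
class_means = {"flat": np.array([0.5, 0.0]), "ramp": np.array([0.5, 0.3])}
features = extract_features(preprocess([0, 2, 4, 6, 8]))
print(decide(features, class_means))  # an increasing signal is assigned to "ramp"
```

Each stage only consumes the previous stage's output, which is exactly the chain shown in Figure (1.1).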

Generally, PR can be categorized by its training state into two types [Pri13].
Supervised learning: with this type of learning, a training set is provided in which each instance is labeled with the correct output; the learning procedure therefore generates a function model that attempts to satisfy the mapping between the input pattern and the output target pattern.
Unsupervised learning: the learning set is not labeled with target patterns, so the model attempts to find inherent patterns in the dataset that can then be used to assign the correct pattern to new testing data.
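A minimal sketch of the two training types, assuming a toy one-dimensional dataset: the supervised branch fits class centroids to the given labels, while the unsupervised branch runs a small 2-means loop with no labels at all.

```python
import numpy as np

data = np.array([[0.1], [0.2], [0.9], [1.0]])

# Supervised: labels are given, so a model (here, class centroids) is fitted to them.
labels = np.array([0, 0, 1, 1])
centroids = np.array([data[labels == k].mean() for k in (0, 1)])
predict = lambda x: int(np.argmin(np.abs(centroids - x)))

# Unsupervised: no labels; a 2-means loop discovers the inherent grouping itself.
centers = np.array([0.0, 0.5])           # arbitrary initial guesses
for _ in range(10):
    assign = np.argmin(np.abs(data - centers), axis=1)
    centers = np.array([data[assign == k].mean() for k in (0, 1)])

print(predict(0.95))      # the supervised model maps a new point to class 1
print(sorted(centers))    # the unsupervised centers converge near 0.15 and 0.95
```

Both branches end up with similar cluster centers here, but only the supervised one can attach a meaningful class label to them.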


1.2.1 Pattern Recognition Approaches [Ani00]
a- Template Matching Approach. One of the simplest and earliest approaches to pattern recognition is based on template matching. The pattern to be recognized is matched against the stored template while taking into account all allowable pose (translation and rotation) and scale changes; the similarity measure is often a correlation.
b- Statistical Approach. In the statistical approach, each pattern is represented in terms of features or measurements and is viewed as a point in a d-dimensional space.
c- Syntactic Approach. In many recognition problems involving complex patterns, it is more appropriate to adopt a hierarchical perspective, where a pattern is viewed as being composed of simple sub-patterns which are themselves built from yet simpler sub-patterns.
d- Neural Networks Approach. Neural networks can be viewed as massively parallel computing systems consisting of an extremely large number of simple processors with many interconnections. Neural network models attempt to use organizational principles (such as learning, generalization, adaptivity, fault tolerance, and distributed representation and computation) in a network of weighted directed graphs, in which the nodes are artificial neurons and the directed edges (with weights) are the connections between them.
Pattern recognition today is applied in a wide area of science and engineering, with applications in manufacturing, healthcare and the military. Below are some important applications of PR:

Optical character recognition (OCR).
Automatic speech recognition.
Personal identification systems.
Object recognition.
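As an illustration of the template matching approach described above, the following sketch uses normalized cross-correlation as the similarity measure; the toy image and the diagonal wedge-shaped template are assumptions for demonstration.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation: similarity in [-1, 1], robust to brightness shifts."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom else 0.0

def match(image, template):
    """Slide the template over the image and return the best-scoring position."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = {(y, x): ncc(image[y:y + th, x:x + tw], template)
              for y in range(ih - th + 1) for x in range(iw - tw + 1)}
    return max(scores, key=scores.get)

image = np.zeros((8, 8))
image[3, 4] = 1.0                      # a diagonal 2x2 "wedge" starting
image[4, 5] = 1.0                      # at row 3, column 4
template = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
print(match(image, template))          # → (3, 4)
```

A full system would also search over rotations and scales of the template, as the text notes; this sketch only searches over translations.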


1.2.2 Pattern Recognition and Cuneiform Writing
Despite the extensive applications of PR in different directions, this aspect is still limited in the field of cuneiform writing, especially with respect to research and scientific theses. This thesis therefore supports this trend in the recognition field, particularly the OCR technique, for the Assyrian cuneiform writing of the first millennium BC.

1.3 Literature Survey
This section reviews various methods and approaches that have been used for cuneiform writing, including recognition, retrieval and preprocessing.

In 2018, Nils M. Kriege et al. proposed two methods for recognizing cuneiform symbols. The first is a graph model based on the graph edit distance, computed by an efficient heuristic. The second approach uses a convolutional neural network (CNN), presented to overcome the computational cost of the learning phase; the recognition accuracy of this model increases with the size of the dataset. The recognition rate achieved was 90.23% [Nil18].

In 2016, Khalid Fardousse et al. introduced a simple and efficient motif recognition system using features extracted from motif images, based on polygonal approximation and normalized chain codes. The system, evaluated on their polygonal-forms database and a basic-motifs database, was developed for recognizing off-line handwritten craft motifs; different preprocessing and segmentation techniques and neural classifiers with different features were also discussed. The maximum recognition rates achieved were 90% with a Radial Basis Function classifier and 94% with a feed-forward NN [Kha16].

In 2014, Fahimeh Mostofi et al. proposed an intelligent recognition system for Ancient Persian cuneiform characters based on a supervised learning model, a back-propagation (BP) neural network, where the testing dataset was created by subjecting the original learning set to a Gaussian filter with different values of standard deviation. Otsu's binarization model was adopted for computing the global threshold value. The recognition rate achieved was 89-100% [Fah14].

In 2013, Naktal M. Edan proposed methods for recognizing cuneiform symbols depending on statistical and structural features derived from projection histograms, the center of gravity, and connected-component features. To separate the distinguishing features of each class of symbols, k-means clustering was used. A multilayer perceptron (MLP) neural network was applied for the classification task, where the recognition accuracy differed by class, from 83.3% to 90.4% [Nak13].

In 2012, Kawther K. Ahmed proposed more than one method for extracting recognition information from cuneiform tablet images in both off-line and on-line approaches, and evaluated them. The work depended on structural skeleton features defined as the Symbol Structure Vector (SSV), with the k-nearest neighbor (KNN) algorithm as the classification model. The recognition rate achieved was 97% [Kaw12].

In 2006, R. Sanjeev Kunte et al. presented an OCR system developed for the recognition of basic characters (vowels and consonants) in printed Kannada text, which can handle different font sizes and types. Hu's invariant moments and Zernike moments, which have been used progressively in pattern recognition, are used in this system to extract the features of printed Kannada characters, and neural classifiers are used for classification based on the moment features. A recognition rate of 96.8% was obtained [San06].

In 2006, Hilal Yousif et al. suggested a recognition method for handwritten images of cuneiform text. The method makes use of the fact that there is a finite number of images for the symbols and tries to differentiate between them depending on intensity profile curves, which represent the intensities of selected pixels in the image. The accuracy rate achieved was above 90% [Hil06].

In 2001, Al-Aai proposed a recognition approach for cuneiform symbols depending on recognition features generated from binary cuneiform tablet images by applying seven suggested transform forms to each pixel and its neighbors for each cuneiform symbol. Each cuneiform character thus has distinguishing features, related to the number of symbols and their directions, used for the recognition task. The classification process was implemented through association rules, as an indexing process distributed over a tree structure [Ani01].

The results published in the literature show that none of this research took into consideration the geometric shape of the cuneiform symbols, which affects the generation of different segmented patterns depending on the angle of the reflected light. Another gap is the lack of interest in treating the unwanted objects, such as spots and cuneiform writing lines, associated with the cuneiform patterns, where the spots result from image distortion. Finally, a duplicated recognition state occurs when the recognition features of different cuneiform characters correspond. The problems related to shadows and spots are illustrated in Appendix A. This thesis addresses these problems and solves them by proposing algorithms together with a new proposed virtual training dataset.

1.4 Aim of Thesis
The aim of this thesis is to develop an approach to recognize Assyrian cuneiform image tablets by applying pattern recognition techniques, especially optical character recognition (OCR), depending on the proposed virtual training dataset of regular triangle patterns, and assisted by adopting a polygon approximation technique as the feature extraction method and a proposed post-processing algorithm to solve the duplicated recognition state.

1.5 Thesis Contributions:
The main contributions of this thesis can be summarized as follows:
1- Proposing an Assyrian cuneiform character recognition system (ACRS) that applies the principles of optical character recognition (OCR), which creates a clear advantage in supporting search-engine processes.
2- Proposing an efficient virtual training dataset consisting of trigonometric forms reflecting all the possibilities in which the cuneiform symbols are formed as patterns, depending on the analysis of the three-dimensional geometry of the cuneiform symbol and the shadows formed by the effect of the light angle.
3- Proposing preprocessing algorithms for erasing affecting objects such as spots and writing lines: two algorithms for erasing writing lines, depending on the distance transform with a sampled function as a new approach, and a proposed spot-erasing algorithm depending on image labeling through image morphology.
4- Proposing a new feature extraction approach that creates the feature vector by polygon approximation with dominant points (DP), through a proposed algorithm that combines the two approximation approaches.
5- Adopting the probabilistic neural network (PNN) as a new classification approach to classify the cuneiform symbol directions as horizontal, vertical or diagonal.
6- Comparing the recognition accuracy achieved for cuneiform symbols between the commonly used feature extraction methods and the proposed method, employing the PNN and the support vector machine (SVM) for the multi-classification process.
7- Proposing a new post-processing technique for solving the duplicated recognition state, depending on features extracted from specific approximated points for each pattern and the distance transform.
8- Adopting an image fusion technique with a transform approach, the wavelet transform, to increase the recognition accuracy results.
9- Proposing a thresholding algorithm that implements the segmentation stage depending on the Niblack and Sauvola methods, where the selection criterion between them is based on the statistical skewness measure.



1.6 Organization of Thesis
This thesis is organized into five chapters; a brief description of their contents is given here:
Chapter two: briefly describes optical character recognition, its types, and the theoretical background of the steps and algorithms on which the design of the recognition system is based.
Chapter three: describes the proposed algorithms that are used to design the proposed system and the implementation of each one.
Chapter four: discusses the experiments and the results obtained from implementing the proposed recognition system, and compares the results with traditional methods.
Chapter five: presents the conclusions and illustrates a number of suggestions for future work.


Chapter two Theoretical Background

2.1 Introduction
Optical character recognition (OCR) is considered one of the main branches of pattern recognition. Starting from the middle of the twentieth century, specifically in 1950, and continuing until today, this field has been subject to research and development as a result of its support for governmental and institutional applications, which can easily be seen in financial, banking and archiving applications.

There are many definitions of OCR. Some define it as the process of selecting an image segment from a scanned image file and determining the corresponding text character [You12], [Pri17], or as the process of choosing the right pattern for the image segment. Typically, the framework of an optical character recognition system consists of the following steps [Roh12], [Pri17]:

1- Preprocessing.
2- Segmentation.
3- Feature extraction.
4- Classification.
5- Post-processing.



2.2 Character Recognition Types
Character recognition is mainly classified into two types: online and offline character recognition [Mus16], [Dew09].

2.2.1 Online Character Recognition
This is an automatic conversion process that converts the digital information (digital ink) generated by a Personal Digital Assistant (PDA) or tablet into suitable text in real time [Dew09]. The process depends on spatial similarity metrics related to different stroke features such as their number, directions and order. Frequently this digital information is translated as a dynamic representation of the sensor states of the electronic pen tip (up, down and movement), which reduces the time consumed and improves accuracy. Generally, online character recognition is less difficult than offline recognition because of the available dynamic information [Mus16].

2.2.2 Offline Character Recognition
This technique is applied to scanned images of typewritten or handwritten text in order to recognize the characters. Generally, all offline character recognition techniques start by submitting the image to an enhancement process to generate suitable features that agree with the classification model [Dew09]. However, offline recognition is more difficult than online techniques because of the problems related to noise, distortion and different handwriting styles [Mus16].

2.3 Preprocessing
Preprocessing represents the essential step in recognition systems after the image acquisition process. Basically, the preprocessing step is designed and applied so that the next analysis step receives data with a reduced amount of noise while maintaining as much of the significant information as possible. Generally, preprocessing operations include image thinning, edge detection, noise removal and image normalization [Pov14].



2.3.1 Image Enhancement
Image enhancement is one of the most important image processing techniques; through it, the features of a digital image are reconstructed to suit the nature of its application, whether medical, military, or satellite imaging. The primary objective is to treat the problems related to blurring, contrast, and noise [Jan15]. The enhancement task takes two directions: the first uses human vision as the criterion for evaluation, while the second moves toward supporting and improving image qualities used in identification by machine vision [Moh17]. Image enhancement techniques can be classified into two categories [Jan15], [Moh17]:

A. Spatial Domain Techniques
In these techniques, each pixel in the image is treated together with its pixel neighbors. To obtain the required results, methods such as histogram equalization and power and logarithmic transforms are used. The advantage of these techniques is that they are easy to apply and understand; the disadvantage is that they process all components of the image uniformly, which is not useful when the processing should be limited to specific areas of the image [Gur14], [Sne12]. This technique can be formulated according to the mathematical formula:

g(x,y) = T[f(x,y)]    … (2.1)

where f(x,y) represents the input image, g(x,y) the output image, and T the spatial technique's operator.

B. Frequency Domain Techniques
In frequency-domain techniques, image enhancement is implemented by transforming the image to the frequency domain with a discrete transform, such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), or the discrete wavelet transform (DWT); manipulating the transformed coefficients with a selected operator (filter); and then applying the inverse transform. The orthogonal transform of an image has two parts, phase and magnitude, and the phase part is used to restore the real image values. The technique can be represented by the following equation [Raf02], [pin14]:

g(x,y) = h(x,y) * p(x,y)    … (2.2)

where g(x,y) is the transformed image, p(x,y) the original image, and h(x,y) the transformation function. The block-diagram steps of this technique are summarized as follows [pin14]:

Figure 2.1: Image enhancement frequency-transform steps.

Regarding advantages, this technique gives excellent results for smoothing and for eliminating high-frequency noise compared with the spatial approach; its drawback is that it cannot treat all image regions simultaneously to reach a satisfactory result [Moh17]. With the first filter type (low-pass) the processed output is a smoothed version of the original image, whereas the second type (high-pass) sharpens the image features.

2.3.2 Image Enhancement Filters
Filters in the image enhancement paradigm can be categorized into two types: 1- low-pass filters (LPF), 2- high-pass filters (HPF).

1. Low-Pass Filters in the Spatial Domain
Low-pass filters in the spatial domain take two forms, or models: linear and non-linear. In the linear form, the new pixel value is computed with the participation of the neighboring pixels, whereas non-linear methods depend on a predefined selection criterion for choosing the optimal pixel among the neighbors [pin14]. The common low-pass filters in the spatial domain are given below. A linear filter is generally defined as follows [Man15], [Raf02]:

k(c,r) = Σs Σt w(s,t)·f(c+s, r+t)    …(2.3)

where w is the kernel filter with coordinates (s,t), f is the image, and (c,r) are the column and row indexes.

Mean (Averaging) Filter: the average filter divides the sum of the image pixel values inside a local predefined window W by the number of pixels in the window; the computed value becomes the new pixel value:

F(x,y) = (1/N) Σ(s,t)∈W f(s,t)    … (2.4)

where N is the number of pixels in the window.

Median Filter: the median filter replaces each pixel with the median of the ranked pixel values in a window of size 2k+1:

y(n) = med[x(n−k), …, x(n), …, x(n+k)]    … (2.5)

where y(n) is the output image and [x(n−k), …, x(n+k)] are the ranked pixel values in the specified window.

Gaussian Filter: the Gaussian filter smooths the image's edges by attenuating the high frequencies of the image's colors. Its kernel mask approximates the Gaussian function below and is applied through the convolution process of form (2.3), where σ is the standard deviation:

Gσ(x,y) = (1/(2πσ²)) e^(−(x²+y²)/(2σ²))    …(2.6)


Figure 2.2: Gaussian kernel mask values
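Equations (2.4) and (2.5) can be illustrated with a direct, unoptimized implementation; an isolated impulse shows the key difference between the two filters (a minimal Python sketch, not from the thesis; the function names and demo image are assumptions):

```python
import numpy as np

def mean_filter(img, k=3):
    """Average filter (eq. 2.4): each output pixel is the mean of a k x k window."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = p[r:r + k, c:c + k].mean()
    return out

def median_filter(img, k=3):
    """Median filter (eq. 2.5): each output pixel is the median of the ranked window."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(p[r:r + k, c:c + k])
    return out

# An isolated bright spike (impulse noise) is spread out by the mean filter
# but completely removed by the median filter.
img = np.zeros((5, 5))
img[2, 2] = 255.0
```

This contrast is why the median filter is usually preferred for impulse (salt-and-pepper) noise, while the mean and Gaussian filters suit additive noise.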

2. Low-Pass Filters in the Frequency Domain
The gray tones of an image are distributed in the frequency domain over two bands, low and high frequencies, each carrying part of the image content. Most of the image's gray-level tones occupy the low frequencies, while the edges and noise take the high frequencies. In the centered frequency-domain representation, the low frequencies take their place near the origin of the axes while the high frequencies spread away from it, figure (2.3). Therefore, to eliminate or attenuate the noise, the suitable choice is a low-pass filter: it cuts off the high frequencies and allows only the low frequencies to take their place in the new generated image, figure (2.4.d), as in the following algorithm steps [Raf02], [Mil08].

Figure 2.3: Frequency-domain filtering. a) cutting off the high color frequencies, b) allowing the high color frequencies.


Figure 2.4: Image enhancement with low-pass filtering. a) original image, b) histogram after cutting off the high color frequencies, c) histogram after cutting off the low color frequencies, d) blurred (smoothed) enhanced image, e) sharpened enhanced image.

Algorithm (2.1): Image Enhancement by Frequency Domain
Input: gray image
Output: enhanced image
begin
Step 1: read the gray image (Igray).
Step 2: compute the Fourier image F(u,v) by applying the DFT to Igray.
Step 3: multiply F(u,v) by the low-pass filter H(u,v): K(u,v) = F(u,v) · H(u,v).
Step 4: compute the inverse DFT f(x,y) of K(u,v).
Step 5: take the real part of the previous step to create the enhanced image (Im).
Step 6: return (Im).

End

where the DFT and its inverse transform are

F(u,v) = Σ(x=0..M−1) Σ(y=0..N−1) f(x,y) e^(−j2π(ux/M + vy/N))    …(2.7)

f(x,y) = (1/MN) Σ(u=0..M−1) Σ(v=0..N−1) F(u,v) e^(j2π(ux/M + vy/N))    …(2.8)

F(u,v) is the Fourier-transformed image and f(x,y) is its inverse Fourier transform.

3. Smoothing Frequency-Domain Filters

1- Ideal Lowpass Filters (ILPF). 2- Butterworth Lowpass Filters (BLPF). 3- Gaussian Lowpass Filters (GLPF).

Ideal Lowpass Filters (ILPF)
The ideal lowpass filter can be defined as

H(u,v) = { 1 if D(u,v) ≤ D0 ; 0 if D(u,v) > D0 }    …. (2.9)

where D0 is a nonnegative value representing the radius of the cutoff frequency and D(u,v) is the distance from the point (u,v) to the center of the frequency rectangle. The ideal filter passes all low frequencies whose distance is less than or equal to D0, while the others (outside the circle) are attenuated, figure (2.5), [Raf02].

Figure 2.5: Ideal lowpass filter.
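Combining Algorithm (2.1) with the ideal low-pass filter of equation (2.9) gives the following sketch (a minimal Python/NumPy illustration, not the thesis implementation; the function name, cutoff value, and demo image are assumptions):

```python
import numpy as np

def ideal_lowpass_enhance(gray, d0):
    """Algorithm (2.1) with the ideal LPF of eq. (2.9):
    DFT -> multiply by H(u,v) -> inverse DFT -> take the real part."""
    M, N = gray.shape
    F = np.fft.fftshift(np.fft.fft2(gray))          # centered spectrum
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance to spectrum center
    H = (D <= d0).astype(float)                     # ideal LPF: pass D(u,v) <= D0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

# Tiny demo: a constant image contains only the DC (zero-frequency) term,
# so an ideal low-pass filter leaves it unchanged.
img = np.full((8, 8), 100.0)
out = ideal_lowpass_enhance(img, d0=3)
```

On a real tablet image, smaller d0 values blur more aggressively, matching the trade-off shown in figure (2.4).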



2.4 Image Segmentation
Image segmentation techniques partition the image's pixels into regions, each carrying a distinguishing label. This simplification converts the image features into an easier or more meaningful form that supports the advanced analysis and recognition stages [Suj14], [Anj17]. Image segmentation techniques are categorized into two branches, block-based and layer-based segmentation, as seen in the following diagram [Nid15].

Figure 2.6: Image segmentation methods.

2.4.1 Image Thresholding
Thresholding is a popular image segmentation technique adopted by a large number of binarization methods. It separates the image into two groups of regions based on a selected threshold value T: if a pixel's intensity is larger than the threshold it represents the foreground region, and in the opposite case it is considered background, as in the mathematical formula below [Anj17]. Thresholding is implemented in two ways, either locally or globally. In the first, the threshold value is determined in every position of an image window, whereas the global threshold is computed once from all of the image's information. The global method is adopted when the image shows an evident separation between the characters and the image background; on the contrary, the local method gives clear results on images whose color features vary locally [Anj17]. Below, the various thresholding techniques are reviewed.

g(x,y) = { 1 if f(x,y) > T ; 0 if f(x,y) ≤ T }    … (2.10)
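As a concrete illustration of equation (2.10), the sketch below binarizes a tiny array (a minimal Python sketch; the function name and demo values are illustrative, not from the thesis):

```python
import numpy as np

def global_threshold(gray, t):
    """Eq. (2.10): pixels whose intensity exceeds T become foreground (1),
    all others become background (0)."""
    return (gray > t).astype(np.uint8)

# Demo: a single global threshold T = 100 splits the image in two.
img = np.array([[30, 120],
                [200, 90]])
b = global_threshold(img, 100)   # -> [[0, 1], [1, 0]]
```

The methods of the next subsection differ only in how the value T is chosen, globally or per local window.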

2.4.2 Image Thresholding Techniques
A. Niblack Method
In this method the threshold is computed locally in every rectangular image window by calculating the mean and the variance of all intensity values in the window, as follows [Pra06]:

T = m + kσ    … (2.11)

where k is a constant value in [0,1], and m, σ are the mean and standard deviation respectively. Commonly, the size of the local image window is 15 x 15. The disadvantage of this method is that it gives poor results exactly where the original image has degraded features (noise).

B. Sauvola's Method
This method was proposed to solve the noise problem of Niblack's method. Depending on the same parameters reviewed above, the mean and the standard deviation, the threshold is computed with the following formula:

T = m(1 − k(1 − σ/r))    …(2.12)

where k and r are constants, set to 0.5 and 128 respectively. As in the previous method, the window size must be determined in advance, and the method gives poor results where the edge between the background and the character has low contrast [Pra06].
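The two local thresholds of equations (2.11) and (2.12) can be computed side by side as below (an illustrative Python sketch; the naive per-window loop and the demo image are assumptions for clarity, not the thesis implementation):

```python
import numpy as np

def local_thresholds(gray, k=0.2, r=128, win=15):
    """Per-pixel local thresholds over a win x win window:
    Niblack (eq. 2.11): T = m + k*sigma
    Sauvola (eq. 2.12): T = m * (1 - k*(1 - sigma/r))"""
    pad = win // 2
    p = np.pad(gray.astype(float), pad, mode='edge')
    t_nib = np.empty(gray.shape)
    t_sau = np.empty(gray.shape)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            w = p[i:i + win, j:j + win]
            m, s = w.mean(), w.std()
            t_nib[i, j] = m + k * s
            t_sau[i, j] = m * (1 - k * (1 - s / r))
    return t_nib, t_sau

# On a perfectly uniform region sigma = 0, so Niblack gives T = m
# while Sauvola pulls the threshold down to T = m * (1 - k).
img = np.full((4, 4), 100.0)
t_nib, t_sau = local_thresholds(img, k=0.5)
```

This shows why Sauvola is more robust on flat, noisy backgrounds: where the local variance is small, its threshold drops well below the local mean instead of sitting on it.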



C. Otsu's Method
Otsu's method is a global thresholding method that converts a gray image into a binary image. It is a linear discriminant statistical method that separates the image features into two homogeneous color bands, the first related to the foreground (objects, symbols) and the other to the background. Otsu's method starts with an iterative histogram procedure that separates the image colors into two intervals (I0 = dark, I1 = light), where the first interval holds the color densities I0 = {0, 1, 2, …, I} and the second I1 = {I+1, I+2, …, k−1}. The global threshold value is then computed from the following formulas [jam11]:

σ²w(I) = wb(I)·σ²b(I) + wf(I)·σ²f(I)    …(2.13)

wb(I) = Σ(i=0..I) p(i)    …(2.14)

wf(I) = Σ(i=I+1..k−1) p(i)    …(2.15)

µb(I) = Σ(i=0..I) i·p(i) / wb(I)    …(2.16)

µf(I) = Σ(i=I+1..k−1) i·p(i) / wf(I)    …(2.17)

σ²b(I) = Σ(i=0..I) (i − µb(I))²·p(i) / wb(I)    …(2.18)

σ²f(I) = Σ(i=I+1..k−1) (i − µf(I))²·p(i) / wf(I)    …(2.19)

where wα, µα, and σ²α are the weight, mean, and variance of class α, and p(i) is the normalized histogram value of gray level i. The separation process is repeated, choosing new density intervals (e.g., shifting one density level per iteration) and recalculating equations (2.14)-(2.19), until the within-class variance (2.13) reaches its minimum value; this is the selection criterion.

D. Iterative Threshold
The global threshold value can be determined by the iterative threshold technique as in the following algorithm steps [She13]:



Algorithm (2.2): Iterative Threshold
Input: gray image. Output: binary image.
Begin
Step 1: compute the initial threshold value T as the average image intensity.
Step 2: using the threshold value, separate the image into two groups of regions R1, R2.
Step 3: compute the mean values M1, M2 for each group.
Step 4: update the threshold as T = (M1 + M2)/2.
Step 5: repeat steps 2-4 iteratively until M1 and M2 no longer change.
End

A new version of the iterative threshold is obtained by applying it locally, as in the following algorithm (2.2.1) steps [Par97]:

Algorithm (2.2.1): Iterative Threshold (applied locally)
Input: gray image. Output: binary image.
Begin
Step 1: compute the global threshold T by the iterative threshold algorithm (2.2).
Step 2: for each pixel with its eight neighbors, compute an adaptive threshold locally as in steps 3-4:
Step 3: if the difference between the maximum and the minimum neighborhood values is less than T, assign the new pixel value according to the pixel's color density (bright or dark).
Step 4: if the difference between the maximum and the minimum is greater than or equal to T, assign the new pixel white if the old value is nearer the maximum, or black if the old value is nearer the minimum.
End
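Otsu's method (method C above, equations 2.13-2.19) is the most algorithmic of these global techniques; the following Python sketch searches all thresholds for the minimum within-class variance (an illustrative sketch, not the thesis implementation; the function name and demo image are assumptions):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method (eqs. 2.13-2.19): choose the split I that minimizes
    the within-class variance wb*var_b + wf*var_f over the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # normalized histogram p(i)
    levels = np.arange(256)
    best_t, best_within = 0, np.inf
    for t in range(1, 256):
        wb, wf = p[:t].sum(), p[t:].sum()      # class weights (eqs. 2.14, 2.15)
        if wb == 0 or wf == 0:
            continue
        mb = (levels[:t] * p[:t]).sum() / wb   # class means (eqs. 2.16, 2.17)
        mf = (levels[t:] * p[t:]).sum() / wf
        vb = (((levels[:t] - mb) ** 2) * p[:t]).sum() / wb   # variances (2.18, 2.19)
        vf = (((levels[t:] - mf) ** 2) * p[t:]).sum() / wf
        within = wb * vb + wf * vf             # within-class variance (eq. 2.13)
        if within < best_within:
            best_within, best_t = within, t
    return best_t

# Two well-separated gray populations (20 and 200): the chosen threshold
# must fall between them.
img = np.array([[20] * 8] * 4 + [[200] * 8] * 4, dtype=np.uint8)
t = otsu_threshold(img)
```

Minimizing the within-class variance is equivalent to maximizing the between-class variance, which is how Otsu's criterion is often stated.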

2.4.3 Image Connected-Component Labeling
Image connected-component labeling (CCL) represents an important field in pattern recognition, computer vision, and machine intelligence. With this technique, each connected segment in a binary image receives a unique label that distinguishes it from the other labels [KUA03], [Kur15], figure (2.7). The technique is required in different applications such as target identification, diagnosis, and biometric applications. Many theories and algorithms have contributed to the evolution of this technique, especially regarding speed of performance in real-time applications; generally they can be classified into four classes [Lif09]: multi-scan algorithms, two-scan algorithms, hybrid algorithms, and tracing-type algorithms. The first category can be implemented with image morphology [Raf02], depending on the dilation principle, as follows.

Figure 2.7: Image connected-component labeling. a) binary image, b) labeled image.

1. Image Morphology
Image morphology represents an important field of image processing whose theoretical side was introduced in 1964 by two French researchers, Matheron and Serra, who presented a set of formulas for image analysis. Image morphology is a combination of non-linear operations relevant to the form of binary image features; it depends on the structure of the pixel values (geometric and topological features) instead of their color density [Mil08]. The resulting new image features support pattern recognition and image analysis techniques.

2. Structure Elements
Morphological processing of a binary image is applied through a rectangular array structure or kernel mask (the structure element), which takes different patterns of zeros and ones, figure (2.8), [Rav13]. The choice of a suitable element depends on the particular problem. By sliding this mask over the image, the morphological process can take two states. The first occurs when the one-valued pixels of the structure element all match the corresponding neighboring image pixels; this is called the fit state. When the match condition is satisfied for at least a single pixel, the hit state occurs [Jan12].

Figure 2.8: Structure element.

3. Dilation and Erosion [Rav13]
Dilation and erosion are the two major operators in image morphology; they are obtained by applying the convolution of the kernel mask (structure element) over the image. The dilation state is met when an image pixel is in the hit state: the value of the new pixel is set to one, which leads to a growing of the object, figure (2.9.c). For erosion, the value of the image pixel equals one only if the fit state is satisfied (the reverse of the first state), figure (2.9.b).

Figure 2.9: Dilation and erosion process. a) binary image, b) eroded image, c) dilated image.



Dilation: A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }    …(2.20)

Erosion: A ⊖ B = { z | (B)z ⊆ A }    …(2.21)

where ⊕ and ⊖ are the dilation and erosion operators, B is the structure element, and B̂ is its reflection.

4. Extraction of Connected Components
To reach a labeled image, the first strategy (multi-scan algorithms, represented here by connected-component extraction) is applied to a binary image A whose foreground pixels are labeled 1 and whose background pixels are labeled 0. The process is implemented iteratively, with a restriction condition, depending on the dilation concept (2.20). The initial step locates the first foreground pixel p, which represents the seed point for the reconstructed matrix Xk; the structure element B then scans the image A while computing the following form [Mil08], [Raf02], [Shi09]:

Xk = (Xk−1 ⊕ B) ∩ A,  with X0 = p and k = 1, 2, …, n    …(2.22)

This iterative process terminates when the condition Xk = Xk−1 is satisfied. Note: to apply the image labeling concept, each regenerated connected component is assigned its own distinguishing label.
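The fit and hit states behind equations (2.20) and (2.21) can be sketched directly (a minimal Python sketch with a 3x3 all-ones structure element; the function and variable names are illustrative, not from the thesis):

```python
import numpy as np

def fit_hit(binary, se=np.ones((3, 3), dtype=np.uint8)):
    """Erosion (fit) and dilation (hit) of a binary image (eqs. 2.20, 2.21)."""
    m, n = binary.shape
    p = np.pad(binary, 1)
    ero = np.zeros_like(binary)
    dil = np.zeros_like(binary)
    for r in range(m):
        for c in range(n):
            w = p[r:r + 3, c:c + 3]
            ero[r, c] = 1 if np.all(w[se == 1] == 1) else 0  # fit: every SE pixel matches
            dil[r, c] = 1 if np.any(w[se == 1] == 1) else 0  # hit: at least one matches
    return ero, dil

# A 3x3 square object: erosion shrinks it to its single interior pixel,
# dilation grows it to cover the whole 5x5 grid.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1
ero, dil = fit_hit(img)
```

The shrink/grow pair is exactly what the connected-component extraction below exploits: repeated dilation grows a seed, and the intersection with the original image keeps the growth inside one object.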



By applying the algorithm below, the steps regenerate the original connected object, figure (2.10). Image labeling is then achieved by subjecting each connected object to the algorithm with a distinguished label value [Mil08], [Raf02].

Algorithm (2.3): Connected-Components Extraction
Input: binary image.
Output: connected-component segment.
begin
Step 1: read the input binary image IB.
Step 2: locate the first foreground pixel p and its location p(x,y).
Step 3: initialize the structure element B.
Step 4: set k = 0 and initialize the connected-component matrix Xk.
Step 5: set Xk(x,y) = p(x,y).
Step 6: repeat
  Y = Xk
  apply the dilation process to Xk and intersect the result with the original image IB:
  Xk+1 = dilation(B, Xk) ∩ IB
  k = k + 1
until (Y == Xk)
Step 7: set CC_MATRIX = Y.
Step 8: return (CC_MATRIX).
end

Figure 2.10: Regenerated connected-component segment.
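Algorithm (2.3) can be sketched as follows, growing the component from a seed by repeated dilation intersected with the original image, equation (2.22) (an illustrative Python sketch; the seed choice and demo image are assumptions):

```python
import numpy as np

def extract_component(binary, seed):
    """Algorithm (2.3): grow X from a seed pixel by repeated dilation with a
    3x3 structure element, intersected with the image (eq. 2.22), until X
    stops changing."""
    x = np.zeros_like(binary)
    x[seed] = 1
    h, w = binary.shape
    while True:
        p = np.pad(x, 1)
        dil = np.zeros_like(binary)           # dilation with a 3x3 all-ones SE
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                dil |= p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
        nxt = dil & binary                    # intersect with the original image
        if np.array_equal(nxt, x):            # termination: X_k == X_{k-1}
            return x
        x = nxt

# Two separate objects: growing from a seed in the first recovers only it.
img = np.array([[1, 1, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 1]], dtype=np.uint8)
comp = extract_component(img, (0, 0))
```

Running this once per unvisited foreground pixel, and stamping each returned mask with a fresh integer, yields the labeled image of figure (2.7.b).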



2.5 Image Distance Transform
The image distance transform (DT) plays an essential role in many applications such as pattern recognition, computer vision, robotics, and image matching, particularly binary image matching using suitable features created by the matching approaches [Muh00]. The distance transform is a conversion applied to a binary image to produce a gray-level image in which each object pixel holds a real value corresponding to the minimum distance between that pixel and the background, figure (2.11). With I(x,y) ∈ {Ob, Bg}, it can be defined as follows [Don04]:

Id(x,y) = { min{ ||(x,y) − (i,j)|| : I(i,j) ∈ Bg }  if I(x,y) ∈ Ob ;  0  if I(x,y) ∈ Bg }    … (2.23)

where Ob denotes an object pixel and Bg a background pixel.

Figure 2.11: Image distance transform. a) binary image, b) binary value representation, c) image distance transform.

Distance transform algorithms use different distance metrics for computing the real distance, ranging from non-Euclidean metrics like the city-block distance transform (CBDT) and the chamfer distance to the Euclidean distance transform (EDT). Each of them reflects positively or negatively on the output quality in terms of time consumption and precision.
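Equation (2.23) can be realized by a direct, brute-force search over the background pixels, which is slow but makes the definition explicit (an illustrative Python sketch; efficient algorithms such as the sampled-function method of the next subsection avoid this exhaustive search):

```python
import numpy as np

def distance_transform(binary):
    """Brute-force Euclidean DT (eq. 2.23): each object pixel gets the minimum
    Euclidean distance to any background pixel; background pixels stay 0."""
    obj = np.argwhere(binary == 1)
    bg = np.argwhere(binary == 0)
    out = np.zeros(binary.shape, dtype=float)
    for (r, c) in obj:
        d = np.sqrt(((bg - (r, c)) ** 2).sum(axis=1))
        out[r, c] = d.min()
    return out

# A 3x3 object surrounded by background: the center pixel is 2 away
# from the nearest background pixel, the corners are 1 away.
img = np.array([[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]], dtype=np.uint8)
dt = distance_transform(img)
```

The brute-force search costs O(object pixels x background pixels); the sampled-function algorithm below reduces the whole transform to linear time per row and column.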



2.5.1 Distance Transforms with Sampled Functions [Ped12]
The distance transform of a sampled function generalizes the distance transform from binary images to functions defined on a grid (rows, columns). With sampled functions, the basic intuition is that each pixel carries a cost value expressing the presence or absence of a feature. Let £ = {1, 2, 3, …, n} be a one-dimensional grid and f: £ → R a function; then the distance transform of the sampled function is defined as follows:

Df(p) = min(q∈£) ((p − q)² + f(q))    ….(2.24)

where Df is the distance transform value, p is the query point, and q the nearest point. For every point q ∈ £, the distance transform is bounded from above by a parabola rooted at the position (q, f(q)). The distance transform is therefore realized by the lower envelope of these parabolas, figure (2.12), and its value corresponds to the height of the lower envelope.

Figure 2.12: The lower envelope of n parabolas.

To compute the image distance transform, the following two steps must be implemented: 1) calculating the lower envelope of the n parabolas; 2) solving equation (2.24) by substituting the lower envelope's height at each grid position. Two parabolas rooted at grid positions q and r intersect at a single point s, given by equation (2.25):

s = ((f(q) + q²) − (f(r) + r²)) / (2q − 2r)    …(2.25)

The lower envelope is calculated by considering the parabolas sequentially, ordered by their horizontal positions. The parabola rooted at q is compared against the envelope built so far, finding its intersection with the rightmost kept parabola v[k]. Two states can then occur. First, figure (2.13.a): if the intersection position is after z[k], the lower envelope is extended, as in the following algorithm steps. In the second, opposite state, figure (2.13.b), the k-th parabola is deleted, since the parabola at v[k] is no longer contained in the new lower envelope.

Figure 2.13: The distance transform intersection states. a) state 1, b) state 2.

The one-dimensional distance transform of a sampled function on a grid is computed by the following algorithm:



Algorithm (2.4): Sampling Distance Transform
Input: row of image values f(0 … n−1).
Output: distance values D(0 … n−1).
begin
Step 1: k = 0.
Step 2: v[0] = 0.
Step 3: z[0] = −∞.
Step 4: z[1] = +∞.
Step 5: for q = 1 to n−1:
Step 6:   compute the intersection point s = ((f(q) + q²) − (f(v[k]) + v[k]²)) / (2q − 2v[k]).
Step 7:   if s ≤ z[k], set k = k − 1 and return to step 6; otherwise set k = k + 1, v[k] = q, z[k] = s, z[k+1] = +∞.
Step 8: k = 0.
Step 9: for q = 0 to n−1:
Step 10:   while z[k+1] < q, set k = k + 1; then set D(q) = (q − v[k])² + f(v[k]).
end
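Algorithm (2.4) translates to the following one-dimensional Python sketch (illustrative; a full 2-D transform would apply it first along the rows and then along the columns of the squared row results, as in [Ped12]):

```python
import numpy as np

def dt_1d(f):
    """1-D distance transform of a sampled function (Algorithm 2.4):
    D(p) = min_q ((p - q)^2 + f(q)), via the lower envelope of parabolas."""
    n = len(f)
    d = np.zeros(n)
    v = np.zeros(n, dtype=int)          # roots of parabolas in the lower envelope
    z = np.zeros(n + 1)                 # envelope breakpoints
    k = 0
    z[0], z[1] = -np.inf, np.inf
    for q in range(1, n):
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:                # new parabola hides v[k]: delete it
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, np.inf
    k = 0
    for q in range(n):                  # read the envelope height at each point
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

# Indicator-style cost (0 on the "object" sample, very large elsewhere):
# the result is the squared distance to the nearest zero-cost sample.
f = np.array([1e9, 1e9, 0.0, 1e9, 1e9])
d = dt_1d(f)
```

Each parabola is pushed and popped at most once, so the whole pass runs in O(n) time, against O(n²) for the brute-force minimization of equation (2.24).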



1.4 Aim of Thesis
1.5 Thesis Contributions
1.6 Organization of Thesis
Chapter Two: Theoretical Background
2.1 Introduction
2.2 Character Recognition Types
2.2.1 Online Character Recognition
2.2.2 Offline Character Recognition
2.3 Preprocessing
2.3.1 Image Enhancement
2.3.2 Image Enhancement Filters
2.4 Image Segmentation
2.4.1 Image Thresholding
2.4.2 Image Thresholding Techniques
2.4.3 Image Connected-Component Labeling
2.5 Image Distance Transform
2.5.1 Distance Transforms with Sampled Functions
2.6 Feature Extraction
2.6.1 Statistical Features
2.6.2 Global Transformation and Series Expansion Features
2.6.3 Structural Features
2.7 Classification
2.7.1 Probabilistic Neural Networks (PNN)
2.7.2 Support Vector Machine (SVM)
2.8 Post-Processing
2.9 Image Fusion
2.10 Evaluation Measures of the Cuneiform Recognition System
2.11 Proposed Learning Dataset
Chapter Three: Proposed Assyrian Cuneiform Recognition System
3.1 Introduction
3.2 Architecture of the Proposed System
3.3 Assyrian Cuneiform Recognition System (ACRS) Model 1
3.3.1 Image Acquisition Stage
3.3.2 Preprocessing Stage (1)
3.3.3 Image Thresholding
3.3.4 Preprocessing (Stage 2): The Elimination of Rejected Objects
3.3.5 Feature Extraction
3.3.6 Training Stage
3.3.7 Classification Stage
3.4 Post-Processing
3.4.1 Proposed Solution for Duplicated Problems
3.4.2 Locating the Density Centroid
3.4.3 Training Stage
3.4.4 Test Feature Extraction Stage
3.4.5 Classification Stage
3.5 Accuracy Support for Cuneiform Symbol Recognition by an Image Fusion Approach
Chapter Four: Experiments and Results Discussion
4.1 Introduction
4.2 Cuneiform Tablet Image Dataset
4.3 Cuneiform Tablet Image Preprocessing
4.3.1 Image Enhancement
4.3.2 Removing Spots
4.3.3 Removing Writing Lines
4.4 Image Thresholding
4.5 Feature Extraction
4.5.1 Elliptical Fourier Descriptors (EFD)
4.5.2 Projection Histograms
4.5.3 Hu's Moments
4.5.4 Zernike Moments (ZM)
4.5.5 Polygon Approximation
4.6 Classification
4.7 Results and Discussion
4.8 Analytical Comparison
Chapter Five: Conclusions and Suggestions
5.1 Conclusions
5.2 Suggestions for Future Work
References

Table of Tables
2.1 SVM kernel discriminant functions
3.1 Threshold values computed against the skewness metric
3.2 The MSE values for each segment corresponding to its length
3.3 Break points generated from the border points
3.4 The computed AEV values for each dominant point
3.5 The updated AEV values
3.6 Approximated points
3.7 Structure points
4.1 Comparison of recognition accuracy ratios after applying different LPFs with different sizes
4.2 Comparison of result values against each cutoff frequency value of the ideal filter
4.3 Comparison of results according to each threshold value
4.4 Comparison of results after applying the first line-removal algorithm according to different threshold values
4.5 Comparison of results of applying the given thresholding methods according to their time consumption
4.6 Feature vector constructed by EFD
4.7 Comparison of recognition results with EFD according to each classifier
4.8 Comparison of recognition results with projection histograms according to each classifier
4.9 Comparison of recognition results with Hu's moments according to each classifier
4.10 Comparison of recognition results with ZM according to each classifier
4.11 Experimental results according to each diversity value by the polygon approximation algorithm
4.12 Experimental results according to each diversity value by the proposed polygon approximation algorithm
4.13 Comparison of recognition results and average classification time according to each classifier
4.14 Comparison of recognition results according to different image sizes
4.15 Comparison of recognition accuracy results according to different standard deviation values δ
4.16 Comparison of recognition accuracy results according to the duplicated-state recognition
4.17 Comparison of recognition accuracy results according to image size
4.18 Comparison of recognition accuracy results according to each feature extraction method and average classification time

Table of Figures
1.1 Pattern recognition system
2.1 Image enhancement frequency-transform steps
2.2 Gaussian kernel mask values
2.3 Frequency domain technique
2.4 Image enhancement with low frequency
2.5 Ideal lowpass filter
2.6 Image segmentation methods
2.7 Image connected-component labeling
2.8 Structure element
2.9 Dilation and erosion process
2.10 Regenerated connected-component segment
2.11 Image distance transform
2.12 The lower envelope of n parabolas
2.13 Distance transform with its states
2.14 Projection histograms feature extraction method
2.15 The computation of the unit disk
2.16 Polygon approximation
2.17 Polygon approximation approaches
2.18 Associated approximate error
2.19 Break points
2.20 General architecture of a PNN
2.21 SVM with hard margin
2.22 Hyperplane H support vector
2.23 Maximum margin
2.24 SVM model with soft margin
2.25 Changing data space
2.26 Wavelet-based image fusion
2.27 Three-dimensional shape model of cuneiform symbols
2.28 Virtual dataset
2.29 Confirm symbol versus its virtual symbol probabilities
3.1 Proposed ACRS system architecture
3.2 Architecture of module 1
3.3 Cuneiform tablet images
3.4 Enhanced image by frequency domain
3.5 Enhanced images with frequency domain with different cutoff frequency values
3.6 Image binarization methods
3.7 Binarized image with low quality
3.8 Binarized image after applying the histogram equalization process
3.9 Cuneiform images of the same character with different features
3.10 Extracting connected components
3.11 Image labeling
3.12 Spot-free cuneiform image
3.13 The effect of the thresholding value
3.14 Cuneiform image segmentation
3.15 Distance transform
3.16 Preface 1
3.17 Erasing cuneiform lines
3.18 Lines-off binary cuneiform
3.19 Problem of the statistical algorithm
3.20 The cuneiform writing line removed after selecting a suitable threshold value
3.21 Preface 2
3.22 MSE line removal
3.23 Boundary extraction
3.24 Edge thinning
3.25 Freeman's chain code
3.26 Break points
3.27 Approximate boundary figure
3.28 Polygon approximation
3.29 Quality approximation
3.30 Cuneiform patterns
3.31 Approximated feature points
3.32 Approximate points
3.33 Features vector
3.34 Cuneiform character
3.35 Classification process
3.36 Color cuneiform image
3.37 Enhanced state
3.38 Cuneiform binarized image
3.39 Spot-off cuneiform image
3.40 Cleared cuneiform binary image
3.41 Cuneiform labeled image
3.42 The classification results against each symbol
3.43 Cuneiform character matching code problem
3.44 Cuneiform patterns with their centroids
3.45 Virtual cuneiform patterns
3.46 Density centroid pixel
3.47 Separated training binary cuneiform symbols
3.48 Extract search point
3.49 Model (2) post-processing
3.50 Illumination problem
3.51 Classification problem
3.52 Cuneiform image fusion proposed diagram
3.53 Approximated figures output by the proposed method
4.1 Deformation problem
4.2 Cuneiform images with their corresponding enhanced images
4.3 Output of different LPFs with different sizes
4.4 Removing spots
4.5 Removing writing lines
4.6 Image binarization methods
4.7 Learning patterns with the same direction
4.8 Features vector of EFD
4.9 Elliptical Fourier descriptors
4.10 Hu's moments features vector
4.11 Four ZM values for each square zone
4.12 Polygon approximation
4.13 Approximating steps
4.14 Symbols classification results
4.15 Conform symbol deformation
4.16 Character cuneiform recognition

Table of Algorithms
2.1 Image enhancement by frequency domain
2.2 Iterative threshold algorithm
2.2.1 Iterative threshold algorithm (applied locally)
2.3 Connected-components extraction
2.4 Sampling distance transform
2.5 Reverse polygonization algorithm
3.1 Cuneiform image thresholding
3.2 Spots elimination
3.3 Isolation algorithm
3.4 Statistical line-removal method
3.5 MSE line-removal method
3.6 Cuneiform symbol approximation algorithm
3.7 Generate structure features
3.8 Training virtual symbol learning set
3.9 PNN cuneiform symbols classification
3.10 Training post-processing
3.11 Test features post-processing
3.12 Cuneiform image fusion algorithm

Table of Abbreviations
ACO: Ant Colony Optimization
ACRS: Assyrian Cuneiform character Recognition System
BP: Break Points
BPR: Back Propagation
CBDT: City-Block Distance Transform
CNN: Convolution Neural Network
CCL: Connected-Component Labeling
CR: Compression Ratio
DP: Dominant Points
DT: Distance Transform
EDT: Euclidean Distance Transform
EFD: Elliptical Fourier Descriptor
FE: Feature Extraction
FT: Fourier Transform
GFT: Gabor Fourier Transform
HPF: High-Pass Filter
ILPF: Ideal Lowpass Filter
KNN: k-Nearest Neighbor
LPF: Low-Pass Filter
MS: Matching Score
NN: Neural Network
OCR: Optical Character Recognition
PDF: Probability Density Function
PSO: Particle Swarm Optimization
PNN: Probabilistic Neural Network
PR: Pattern Recognition
Rr: Recognition Ratio
SVM: Support Vector Machine
SSV: Symbol Structure Vector
TP: True Positive
TS: Tabu Search
WT: Wavelet Transform
ZM: Zernike Moments

Abstract

Writing is one of the oldest and most important inventions of humanity, and it began in the land of Mesopotamia. Writing underwent many stages of development; cuneiform writing took the form of patterns engraved on stone or pressed into clay tablets, forming the cuneiform characters. International museums, like the Iraqi Museum, hold thousands of cuneiform tablets, a large proportion of which has not been translated as a result of the small number of translators and the difficulty of this language. This thesis therefore presents a proposed cuneiform recognition system as a solution to the mentioned problem, depending on pattern recognition techniques and especially optical character recognition (OCR). It also solves, as a new approach, the problems related to eroding unwanted objects such as spots and writing lines, by depending on image morphology and the distance transform. The proposed training dataset in this thesis consists of virtual rectangular shapes, as a new approach, distributed over four patterns forming a number of classes; it takes into consideration the probabilities of shadows associated with the cuneiform geometric features (as a new proposed factor) resulting from reflected light. This thesis presents a comparison between a proposed feature extraction algorithm based on the polygon approximation principle and classical feature extraction methods such as the elliptic Fourier descriptor, Zernike moments, Hu's moments, and the projection histogram. The classification task is implemented with more than one classifier, such as the probabilistic neural network (PNN) and the support vector machine (SVM) with multiple kernel discriminant functions, to achieve a reliable decision when evaluating the newly proposed feature extraction methods.
Another contribution offered in this thesis is a proposed post-processing algorithm to solve the duplicated state in which cuneiform characters have the same classification features, depending on the computed approximated points with an applied distance transform. To evaluate the system performance and the mentioned comparison of feature extraction techniques, accuracy results were obtained from the comparative test: 95% for the proposed approximation technique with the PNN, 70% for the EFD with an SVM using a polynomial kernel, 57% for the projection histogram with an SVM using an RBF kernel, 35% for Hu's moments with an SVM using an RBF kernel, and 26% for the ZM with the PNN. After adopting the proposed feature extraction algorithm, the recognition accuracy of the proposed system is 94%. Finally, the accuracy achieved by the proposed preprocessing algorithms for eroding unwanted objects, spots and writing lines, is 95% and 92% respectively.

Chapter One: General Introduction

1.1 Introduction
Pattern recognition is one of the important branches of artificial intelligence. It is the science that tries to make machines as intelligent as humans in recognizing patterns and classifying them into desired categories in a simple and reliable way, so as to make the right decision in various applications such as remote sensing, artificial intelligence, and computer vision, through its methods such as statistical estimation and recognition, clustering, fuzzy sets, syntactic recognition, approximate reasoning, and neural networks (NN). Humans therefore want to take advantage of this development to automate applications for faster searching, manipulating, accessing, analyzing, and decision making. Writing has been and remains an essential aspect of humanity because it reflects all aspects of human communication and documentation over time; this supports attention to the historical stages of writing development, which is important when implementing automated approaches to writing, archiving, and translating.

1.2 Pattern Recognition
Pattern recognition (PR) is a field of machine learning in which the learning process is achieved by theories and methods for designing machines that recognize the pattern of an object, leading finally to assigning the correct pattern to the object. The structure of a PR system can be summarized as follows [Pri13].



Data acquisition and preprocessing: the raw data are taken from the environment and preprocessed to remove noise and unwanted features.
Feature extraction: the relevant data are extracted from the processed data to create the classification features, represented by a feature vector.
Decision making: the decision is made by a classifier or a descriptor from the extracted features.
The block diagram of a PR system is shown in Figure (1.1) [Pri13], [Vin12].

Figure 1.1: Pattern recognition system.

Generally, PR can be categorized by training state into two types [Pri13].
Supervised learning: the training set is provided with each instance labeled according to the correct output; the learning procedure therefore generates a function model that attempts to map each input pattern to the target output pattern.
Unsupervised learning: the learning set is not labeled according to a target pattern; the model therefore attempts to find inherent patterns in the dataset that can be used to assign new test data to the correct pattern.


1.2.1 Pattern Recognition Approaches [Ani00]
a. Template matching approach. One of the simplest and earliest approaches to pattern recognition is based on template matching. The pattern to be recognized is matched against a stored template while taking into account all allowable pose (translation and rotation) and scale changes; the similarity measure is often a correlation.
b. Statistical approach. In the statistical approach, each pattern is represented in terms of features or measurements and is viewed as a point in a d-dimensional space.
c. Syntactic approach. In many recognition problems involving complex patterns, it is more appropriate to adopt a hierarchical perspective, where a pattern is viewed as being composed of simple sub-patterns which are themselves built from yet simpler sub-patterns.
d. Neural networks approach. Neural networks can be viewed as massively parallel computing systems consisting of an extremely large number of simple processors with many interconnections. Neural network models attempt to use organizational principles (such as learning, generalization, adaptivity, fault tolerance, and distributed representation and computation) in a network of weighted directed graphs, in which the nodes are artificial neurons and the directed edges (with weights) are the connections between them.
Pattern recognition today is applied across a wide area of science and engineering, with applications in manufacturing, healthcare, and the military. Below are some important applications of PR.

Optical character recognition (OCR).
Automatic speech recognition.
Personal identification systems.
Object recognition.
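The template-matching approach described in (a) can be sketched as a normalized cross-correlation (NCC) search. The following toy pure-Python example is illustrative only (the function names `ncc` and `best_match` are not from the thesis): it slides a template over a grayscale image and returns the best-scoring position.

```python
def ncc(window, template):
    """Normalized cross-correlation between two equal-size gray patches."""
    h, w = len(template), len(template[0])
    n = h * w
    mw = sum(map(sum, window)) / n        # window mean
    mt = sum(map(sum, template)) / n      # template mean
    num = sum((window[i][j] - mw) * (template[i][j] - mt)
              for i in range(h) for j in range(w))
    dw = sum((window[i][j] - mw) ** 2 for i in range(h) for j in range(w)) ** 0.5
    dt = sum((template[i][j] - mt) ** 2 for i in range(h) for j in range(w)) ** 0.5
    return num / (dw * dt) if dw > 0 and dt > 0 else 0.0

def best_match(image, template):
    """Slide the template over the image; return the best (row, col) and score."""
    th, tw = len(template), len(template[0])
    best, pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            window = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(window, template)
            if score > best:
                best, pos = score, (r, c)
    return pos, best
```

An exact occurrence of the template yields a score of 1.0; pose and scale changes would require matching against transformed templates as the text notes.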


1.2.2 Pattern Recognition and Cuneiform Writing
Despite the extensive applications of PR in many directions, this aspect is still limited in the field of cuneiform writing, especially with respect to research and scientific theses. This thesis therefore supports the recognition field, especially OCR techniques applied to the Assyrian cuneiform writing of the first millennium BC.

1.3 Literature Survey
Reviews of the various methods and approaches used for developing cuneiform writing applications, such as recognition, retrieval, and preprocessing, are presented in this section.
In 2018, Nils M. Kriege et al. proposed two methods for recognizing cuneiform symbols. The first is a graph model based on the graph edit distance, computed by an efficient heuristic; the second is a convolutional neural network (CNN), presented to overcome the computational cost of the learning phase. The recognition accuracy of the second model increases with the size of the dataset; the recognition rate achieved was 90.23% [Nil18].
In 2016, Khalid Fardousse et al. introduced a simple and efficient motif recognition system using features extracted from motif images, based on polygonal approximation and normalized chain codes. The system was evaluated on their polygonal forms database and a basic motifs database, and a system for recognizing off-line handwritten craft motifs was developed; different preprocessing and segmentation techniques and neural classifiers with different features were also discussed. The maximum recognition rate achieved was 90% with a radial basis function classifier and 94% with a feed-forward NN [Kha16].
In 2014, Fahimeh Mostofi et al. proposed an intelligent recognition system for Ancient Persian cuneiform characters based on a supervised learning model, a back-propagation (BP) neural network, where the testing dataset was created by subjecting the original learning set to a Gaussian filter with different values of standard deviation. Otsu's binarization model was adopted for computing the global threshold value. The recognition rate achieved was 89–100% [Fah14].

In 2013, Naktal M. Edan proposed methods for recognizing cuneiform symbols depending on statistical and structural features derived from the projection histogram, center of gravity, and connected-component features. To separate the distinguishing features of each class of symbols, k-means clustering was used, and a multilayer neural network (MLP) was applied for the classification task; the recognition accuracy differed by class, from 83.3% to 90.4% [Nak13].
In 2012, Kawther K. Ahmed proposed more than one method for extracting recognition information from cuneiform tablet images, covering both off-line and on-line approaches, and evaluated them. The work depended on a structural skeleton feature vector, defined as the Symbol Structure Vector (SSV), with the k-nearest neighbor (KNN) model used for classification; the recognition rate achieved was 97% [Kaw12].
In 2006, R. Sanjeev Kunte et al. presented an OCR system developed for the recognition of basic characters (vowels and consonants) in printed Kannada text, able to handle different font sizes and types. Hu's invariant moments and Zernike moments, which have been progressively used in pattern recognition, are used in this system to extract the features of printed Kannada characters, and neural classifiers were effectively used for classification based on the moment features. A recognition rate of 96.8% was obtained [San06].
In 2006, Hilal Yousif et al. suggested a recognition method for handwritten images of cuneiform text. The method makes use of the fact that there is a finite number of images for the symbols, and tries to differentiate between them depending on intensity profile curves, which represent the intensities of selected pixels in the image. The accuracy rate achieved was above 90% [Hil06].
In 2001, Al-Aai proposed a recognition approach for cuneiform symbols depending on recognition features generated from the binary cuneiform tablet image, using seven suggested transform forms applied to each pixel with its neighbors for each cuneiform symbol. Each cuneiform character thus obtains distinguishing features, related to the number of its symbols and their directions, that are used for the recognition task. The classification process was implemented through association rules, as an indexing process distributed over a tree structure [Ani01].
The results published in the literature show that none of this research took into consideration the geometric shape of the cuneiform symbols, which therefore affects the generation of different segmented patterns depending on the angle of the reflected light. Another gap is the lack of interest in treating the unwanted objects, such as spots and cuneiform writing lines, associated with the cuneiform patterns, where the spots result from image distortion. Finally, a duplicated recognition state occurs when the recognition features of different cuneiform characters correspond. The problems related to shadows and spots are illustrated in Appendix A. This thesis addresses these problems and solves them by proposing algorithms together with a new proposed virtual training dataset.

1.4 Aim of Thesis
The aim of this thesis is to develop an approach to recognize Assyrian cuneiform image tablets by applying pattern recognition techniques, especially optical character recognition (OCR), depending on a proposed virtual training dataset of regular triangle patterns, an assisting approach that adopts polygon approximation as the feature extraction method, and a proposed post-processing algorithm to solve the duplicated recognition state.

1.5 Thesis Contributions
The main contributions of this thesis can be illustrated as follows:
1- Proposing an Assyrian cuneiform character recognition system (ACRS) based on applying the principles of optical character recognition (OCR), which creates a clear advantage in supporting search-engine processes.
2- Proposing an efficient virtual training dataset consisting of triangular forms that reflect all the possibilities in which cuneiform symbols appear as patterns, based on an analysis of the three-dimensional geometry of the cuneiform symbol and the shadows formed by the effect of the light angle.
3- Proposing preprocessing algorithms for erasing unwanted objects such as spots and writing lines: two algorithms for erasing writing lines based on the distance transform with a sampling function, as a new approach, and a spot-erasing algorithm based on image labeling through image morphology.
4- Proposing a new feature extraction approach that creates the feature vector by polygon approximation with dominant points (DP), through a proposed algorithm that combines the two approximation approaches.
5- Adopting the probabilistic neural network (PNN) as a new classification approach to classify the cuneiform symbol directions (horizontal, vertical, or diagonal).
6- Comparing the recognition accuracy achieved on cuneiform symbols between the common feature extraction methods and the proposed method, employing the PNN and the support vector machine (SVM) for multi-class classification.
7- Proposing a new post-processing technique for resolving the duplicated recognition state, based on features extracted from specific approximated points of each pattern together with the distance transform.
8- Adopting an image fusion technique in the transform domain, using the wavelet transform, to increase the recognition accuracy.
9- Proposing a thresholding algorithm that implements the segmentation stage based on the Niblack and Sauvola methods, where the selection between them is based on the statistical skewness measure.



1.6 Organization of Thesis
This thesis is organized into six chapters; a brief description of their contents is given here:
Chapter Two briefly describes the historical stages of the emergence of cuneiform writing, in addition to the associated problems related to cuneiform recognition.
Chapter Three describes optical character recognition, its types, and the theoretical background, steps, and algorithms on which the design of the recognition system is based.
Chapter Four presents the proposed algorithms used to design the proposed system and the implementation of each one.
Chapter Five discusses the experiments and the results obtained from implementing the proposed recognition system steps, and compares the results with traditional methods.
Chapter Six presents the conclusions and illustrates a number of suggestions for future work.


Chapter Two: Theoretical Background

2.1 Introduction
Optical character recognition (OCR) is considered one of the main branches of pattern recognition. Starting from the middle of the twentieth century, specifically in 1950, and continuing until today, this field has been subject to research and development as a result of its support for institutional and governmental applications, which can easily be seen in financial, banking, and archiving applications.

There are many definitions of OCR. Some define it as the process of selecting an image segment from a scanned image file and determining the corresponding text character [You12], [Pri17], others as the process of choosing the right pattern for the image segment. Typically the framework of an optical character recognition system consists of the following steps [Roh12], [Pri17]:

1- Preprocessing.
2- Segmentation.
3- Feature extraction.
4- Classification.
5- Post-processing.



2.2 Character Recognition Types
Character recognition is mainly classified into two types, online and offline [Mus16], [Dew09].

2.2.1 Online Character Recognition
This is an automatic conversion process that converts the digital information (digital ink) generated by a personal digital assistant (PDA) or tablet into suitable text in real time [Dew09]. The process relies on spatial similarity metrics related to different stroke features, such as their number, directions, and order. Frequently this digital information is translated as a dynamic representation of sensor states of the electronic pen tip (up, down, and movement), which reduces the time consumed and preserves accuracy despite a number of interface challenges. Generally, online character recognition is less difficult than offline recognition thanks to the available dynamic information [Mus16].

2.2.2 Offline Character Recognition
This technique is applied to scanned images of typewritten or handwritten text in order to recognize their characters. Generally, all offline character recognition techniques start by submitting the image to an enhancement process, generating suitable features that agree with the classification model [Dew09]. However, the offline recognition process is more difficult than the online techniques by reason of the problems related to noise, distortion, and the different styles of handwriting [Mus16].

2.3 Preprocessing
Preprocessing represents the essential step in recognition systems after the image acquisition process. Basically, the preprocessing step is designed and applied so that the subsequent analysis step faces a reduced amount of noise while as much significant information as possible is maintained. Generally, the preprocessing operations include image thinning, edge detection, noise removal, and image normalization [Pov14].



2.3.1 Image Enhancement
Image enhancement is one of the most important image processing techniques; through it, the features of a digital image are reconstructed to suit the nature of its application, whether medical, military, or satellite imaging. The primary objective is to treat the problems related to blurring, contrast, and noise [Jan15]. The enhancement task takes two directions: the first uses human vision as the criterion for evaluation, while the second moves toward supporting and improving the image qualities used for identification by machine vision [Moh17]. Image enhancement techniques can be classified into two categories [Jan15], [Moh17]:

A. Spatial Domain Techniques
In these techniques, each pixel in the image is treated together with its neighboring pixels. To obtain the required results, methods such as histogram equalization and power and logarithmic transforms are used in this direction. The advantage of these techniques is that they are easy to apply and understand; the disadvantage is that the implementation treats all components of the image uniformly, which is not useful when the processing should be limited to specific areas of the image [Gur14], [Sne12]. These techniques can be formulated according to the mathematical form:

g(x,y) = T[f(x,y)]        …(2.1)

where f(x,y) represents the input image, g(x,y) the output image, and T the spatial technique's operator.

B. Frequency Domain Techniques
In the frequency domain, the enhancement process is implemented by transforming the image with a discrete transform, such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), or the discrete wavelet transform (DWT), manipulating the transformed coefficients with a selected operator (filter), and then applying the inverse transform. An orthogonal image transform has two parts, phase and magnitude; the phase is used to restore the real image values. The technique can be represented by the following equation [Raf02], [Pin14]:

g(x,y) = h(x,y) * p(x,y)        …(2.2)

where g(x,y) is the transformed image, p(x,y) the original image, and h(x,y) the transformation function. The block diagram of the steps of this technique can be summarized as follows [Pin14]:

Figure 2.1: Image enhancement frequency-transform steps.

Regarding pros, this technique gives excellent results for smoothing and eliminating high-frequency noise compared with the spatial approach; the downside is that it cannot treat all image regions simultaneously to achieve a satisfactory result [Moh17]. With the first filter type the processed image is a smoothed version of the original, whereas the second result sharpens the image features.

2.3.2 Image Enhancement Filters
The filters in the image enhancement paradigm can be categorized into two types:
1- Low-pass filters (LPF).
2- High-pass filters (HPF).

1. Low-Pass Filters in the Spatial Domain
Low-pass filters in the spatial domain take two forms or models, linear and non-linear. In the linear model, the value of the new image pixel comes from a computation involving its neighboring pixels, whereas the non-linear methods depend on a predefined selection criterion for choosing the optimal pixel among the neighbors [Pin14]. The low-pass filters in the spatial domain are given below. Generally, a linear filter is defined as follows [Man15], [Raf02]:

K(x,y) = Σs Σt w(s,t) · f(x+s, y+t)        …(2.3)

where w is the kernel filter with coordinates (s,t) and f is the image pixel at the corresponding row and column indexes.

Mean or Averaging Filter: the average filter is computed by dividing the sum of the image pixels in a local predefined window W by the number of pixels in the window, where the computed value represents the new image pixel value:

F(x,y) = (1/N) Σ(s,t)∈W f(s,t)        …(2.4)

where N is the number of pixels in the window.

Median Filter:

Y(n) = med[X(n−k), …, X(n), …, X(n+k)]        …(2.5)

where Y(n) is the output image and [X(n−k), …, X(n+k)] are the ranked pixel values in a specific window.

Gaussian Filter: the Gaussian filter is used to smooth the image's edges by attenuating the high frequencies of the image's color. The kernel mask is approximated by the Gaussian function below and applied with the convolution mask process of form (2.3), where σ is the standard deviation:

Gσ(x,y) = (1/(2πσ²)) e^(−(x²+y²)/(2σ²))        …(2.6)


Figure 2.2: Gaussian kernel mask values
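The mean and median filters above are simple to state in code. The following pure-Python sketch (function names are illustrative, not from the thesis) applies both with a 3×3 window; border pixels are left unchanged for brevity.

```python
def mean_filter(img):
    """3x3 averaging filter (Eq. 2.4); borders are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            s = sum(img[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = s / 9.0          # divide by N = 9 pixels in the window
    return out

def median_filter(img):
    """3x3 median filter (Eq. 2.5); borders are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            vals = sorted(img[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = vals[4]          # middle of the 9 ranked values
    return out
```

Note how the median filter suppresses an isolated bright pixel completely, while the mean filter only spreads it out; this is why the median is the usual choice for impulse noise.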

2. Low-Pass Filters in the Frequency Domain
The gray tones of an image in the frequency domain are distributed over two bands, low and high frequencies, each incorporating part of the image components. Most of the image's gray-level tones occupy the low frequencies, while the edges and noise take the high frequencies. In the frequency-domain space, the low-frequency components take their place near the origin of the frequency axes, while the high frequencies lie farther from the origin (Figure 2.3). Therefore, to eliminate the noise or attenuate the high frequencies, the suitable choice is to employ a low-pass filter: this filter cuts off the high frequencies and only allows the low frequencies to take their place in the newly generated image (Figure 2.4.d), as in the following algorithm steps [Raf02], [Mil08].

Figure 2.3: Frequency-domain technique. a) cutting off the high color frequencies, b) allowing the high color frequencies.

Figure 2.4: Image enhancement with low-pass filtering. a) original image, b) histogram after cutting off the high color frequencies, c) histogram after cutting off the low color frequencies, d) blurred enhanced image features, e) sharpened enhanced image features.

Algorithm (2.1): Image Enhancement by Frequency Domain
Input: gray image. Output: enhanced image.
Begin
Step 1: read the gray image (Igray).
Step 2: compute the Fourier transform F(u,v) by applying the DFT to Igray.
Step 3: multiply F(u,v) by the low-pass filter H(u,v): K(u,v) = F(u,v) * H(u,v).
Step 4: compute the inverse DFT of K(u,v).
Step 5: take the real part of the previous step to create the enhanced image (Im).
Step 6: return (Im).
End

where

F(u,v) = Σx=0..M−1 Σy=0..N−1 f(x,y) e^(−j2π(ux/M + vy/N))        …(2.7)

f(x,y) = (1/MN) Σu=0..M−1 Σv=0..N−1 F(u,v) e^(j2π(ux/M + vy/N))        …(2.8)
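Algorithm 2.1 can be sketched directly from equations (2.7) and (2.8). The pure-Python example below is illustrative only: it uses a naive O(N⁴) DFT (a real system would use an FFT) and an ideal low-pass mask H; the function names are assumptions, not the thesis implementation.

```python
import cmath

def dft2(img, inverse=False):
    """Naive 2-D DFT of a small image (list of lists); Eqs. (2.7)/(2.8)."""
    M, N = len(img), len(img[0])
    sign = 1 if inverse else -1
    out = [[sum(img[x][y] * cmath.exp(sign * 2j * cmath.pi * (u * x / M + v * y / N))
                for x in range(M) for y in range(N))
            for v in range(N)]
           for u in range(M)]
    if inverse:                                   # 1/(MN) factor of Eq. (2.8)
        out = [[val / (M * N) for val in row] for row in out]
    return out

def ideal_lowpass_enhance(img, d0):
    """Algorithm 2.1: K = F * H with an ideal low-pass H, then inverse DFT."""
    M, N = len(img), len(img[0])
    F = dft2(img)
    for u in range(M):
        for v in range(N):
            du, dv = min(u, M - u), min(v, N - v)   # wrap-around distance to DC
            if (du * du + dv * dv) ** 0.5 > d0:
                F[u][v] = 0                         # H(u,v) = 0 outside the cutoff
    g = dft2(F, inverse=True)
    return [[val.real for val in row] for row in g]  # keep the real part (Step 5)
```

With a small cutoff radius d0 only the low frequencies survive, producing the blurred result of Figure 2.4.d; a cutoff larger than every frequency distance leaves the image unchanged.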


where F(u,v) is the Fourier transform of the image and f(x,y) is its inverse Fourier transform.

3. Smoothing Frequency-Domain Filters
1- Ideal low-pass filter (ILPF).
2- Butterworth low-pass filter (BLPF).
3- Gaussian low-pass filter (GLPF).

Ideal Low-Pass Filter (ILPF)
The ideal low-pass filter can be defined as

H(u,v) = 1   if D(u,v) ≤ D0
H(u,v) = 0   if D(u,v) > D0        …(2.9)

where D0 is a nonnegative value representing the radius of the cutoff frequency and D(u,v) is the distance from the point (u,v) to the center of the frequency rectangle. The ideal filter passes all low frequencies whose distance value is less than or equal to D0, while the others (outside) are attenuated (Figure 2.5) [Raf02].

Figure 2.5: Ideal low-pass filter.


2.4 Image Segmentation
Image segmentation comprises the image processing techniques that partition the image's pixels into regions, each of which receives a distinguishing label. This simplification is used to turn the image features into an easier or more meaningful form that supports the advanced analysis or recognition stages [Suj14], [Anj17]. Image segmentation techniques are categorized into two branches, block-based and layer-based segmentation, as seen in the following diagram [Nid15].

Figure 2.6: Image segmentation methods.

2.4.1 Image Thresholding
Thresholding is a popular image segmentation technique adopted by a large number of binarization methods. It separates the image into two groups of regions based on a selected threshold value (T): if a pixel's intensity is larger than the threshold it represents a foreground region; in the opposite case it is considered background, as in the mathematical formula below [Anj17]. Thresholding is implemented in two ways, either locally or globally. In the first, the threshold value is determined in every window position of the image, whereas a global threshold is computed once from all the image information. The global method is adopted where the image has an evident separation between the characters and the background; on the contrary, the local method gives clear results for images whose color features vary locally [Anj17]. The various thresholding techniques are reviewed below.

g(x,y) = 1   if f(x,y) > T
g(x,y) = 0   if f(x,y) ≤ T        …(2.10)

2.4.2 Image Thresholding Techniques A. Niblack Method The threshold is computed by this method in every rectangle image's window locally through calculating the values of mean and variance for all intensity color pixels in each window as follows[Pra06] . T=M+kσ .

… (2.11)

Where k takes a constant value between [0,1] . The (m, σ) represents the mean and standard deviation respectively. In communally the size of the local image's windows is [15 x15], .The disadvantage side of this method has low results exactly where the original image has a degradation feature (noise) .

B. Sauvola's Method
This method was proposed to solve the noise problem of Niblack's method, depending on the same parameters reviewed above, the mean and standard deviation. The threshold is computed by the following formula:

T = m·(1 − k·(1 − σ/r))        …(2.12)

where k and r are constants, set to 0.5 and 128 respectively. Like the previous method, the window size must be determined beforehand, and it produces poor results where the edge between the background and the character has low contrast [Pra06].
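The two local thresholds of equations (2.11) and (2.12) translate directly into code. This pure-Python sketch (function names are illustrative) computes both for a single local window of gray values:

```python
def window_stats(window):
    """Mean and standard deviation of one local window (list of rows)."""
    vals = [p for row in window for p in row]
    n = len(vals)
    m = sum(vals) / n
    var = sum((p - m) ** 2 for p in vals) / n
    return m, var ** 0.5

def niblack_threshold(window, k=0.2):
    m, s = window_stats(window)
    return m + k * s                       # Eq. (2.11): T = m + k*sigma

def sauvola_threshold(window, k=0.5, r=128.0):
    m, s = window_stats(window)
    return m * (1 - k * (1 - s / r))       # Eq. (2.12): T = m(1 - k(1 - sigma/r))
```

On a flat (zero-variance) window, Niblack returns the mean itself while Sauvola pulls the threshold well below it, which is exactly how Sauvola suppresses the noise that Niblack binarizes in smooth background regions.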


C. Otsu's Method
This is a global thresholding method that converts a gray image to a binary image. It is a linear discriminant statistical method that separates the image features into two homogeneous color bands, the first related to the foreground (objects, symbols) and the other to the background. Otsu's method starts with an iterative histogram procedure that separates the image colors into two intensity intervals (I0 = dark, I1 = light), where the first interval covers the density levels I0 = {0, 1, 2, …, I} and the second I1 = {I+1, I+2, …, k−1}. The global threshold value is computed by minimizing the within-class variance [Jam11]:

σ²w(I) = wb(I)·σ²b(I) + wf(I)·σ²f(I)        …(2.13)

where

wb(I) = Σi=0..I p(i)        …(2.14)

wf(I) = Σi=I+1..k−1 p(i)        …(2.15)

µb(I) = Σi=0..I i·p(i) / wb(I)        …(2.16)

µf(I) = Σi=I+1..k−1 i·p(i) / wf(I)        …(2.17)

σ²b(I) = Σi=0..I (i − µb(I))²·p(i) / wb(I)        …(2.18)

σ²f(I) = Σi=I+1..k−1 (i − µf(I))²·p(i) / wf(I)        …(2.19)

Here p(i) is the normalized histogram, and wα, µα, and σ²α are the weight, mean, and variance of class α. The separation process is repeated for new intensity intervals (shifting the split by one density level each iteration), recalculating equations (2.14)–(2.19), until the minimum value of (2.13) is reached as the selection criterion.
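A minimal sketch of Otsu's criterion: exhaustively evaluate the within-class variance of equation (2.13) for every candidate split I and keep the minimizer. Pure Python, with illustrative names; a production version would use cumulative sums instead of re-summing per split.

```python
def otsu_threshold(pixels, levels=256):
    """Return the split I minimizing the within-class variance (Eq. 2.13)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_i, best_wcv = 0, float("inf")
    for i in range(levels - 1):            # background = levels 0..i, foreground = i+1..
        wb = sum(hist[:i + 1])
        wf = total - wb
        if wb == 0 or wf == 0:
            continue                       # one class empty: no valid split here
        mb = sum(j * hist[j] for j in range(i + 1)) / wb
        mf = sum(j * hist[j] for j in range(i + 1, levels)) / wf
        vb = sum(hist[j] * (j - mb) ** 2 for j in range(i + 1)) / wb
        vf = sum(hist[j] * (j - mf) ** 2 for j in range(i + 1, levels)) / wf
        wcv = (wb / total) * vb + (wf / total) * vf   # Eq. (2.13)
        if wcv < best_wcv:
            best_wcv, best_i = wcv, i
    return best_i
```

For a perfectly bimodal histogram the within-class variance drops to zero at the split between the two modes, which is the threshold returned.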

Where wα , µα, and 𝝳α2 is weight, mean, and variance of class α. Repeating the separation process for choosing new intervals density color (ex.in each iterating shifting one level density color ) iteratively and recalculating the above equations (14-19) until satisfy minimum value of (13) as the selection criterion. D. Iterative threshold The global threshold value can be determined by iterative threshold technique as in the flowing algorithm steps [She13] :


Algorithm (2.2): Iterative Threshold Algorithm
Input: gray image. Output: binary image.
Begin
Step 1: compute the initial threshold value (T) using the average image intensity.
Step 2: using the threshold value, separate the image into two groups of regions, R1 and R2.
Step 3: compute the mean values M1, M2 of each group.
Step 4: update the threshold as T = (M1 + M2)/2.
Step 5: repeat steps 2–4 iteratively until M1 and M2 no longer change.
End

Step1: compute global( T) threshold by iterative threshold algorithm (3-2) . Step 2: for each pixel with it's eight neighbored compute adaptive threshold locally as follow (3-4) steps: Step3: if difference between maximum and minimum is less than T assign new value of new pixel relative to it density color’s pixel (bright or dark) . Step4: if difference between maximum and minimum is greater than or equal T assign new pixel value to be white if the old pixel value is near to maximum or the new takes black value if old pixel value is near minimum. End

2.4.3 Image Connected-Component Labeling
Image connected-component labeling (CCL) represents an important field in pattern recognition, computer vision, and machine intelligence. Using this technique, each connected segment in a binary image is assigned a unique label that distinguishes it from the other segments [Kua03], [Kur15] (Figure 2.7). The technique is required in different applications such as target identification, diagnosis applications, and biometric applications. Many theories and algorithms have contributed to the evolution of this technique, especially regarding speed of performance in real-time applications; generally they can be classified into four classes [Lif09]: multi-scan algorithms, two-scan algorithms, hybrid algorithms, and tracing-type algorithms. The first category can be implemented with image morphology [Raf02], depending on the dilation principle, as follows.

Figure 2.7: Image connected-component labeling. a) binary image, b) labelled image.

1- Image Morphology
Image morphology represents an important field in image processing. Its theoretical side was introduced in 1964 by two French researchers (Matheron and Serra), who presented a set of formulas for image analysis. Image morphology is a combination of non-linear processes relevant to the form of binary image features; it depends on the structural properties of the pixels (geometry and topology features) instead of their colour density values [Mil08]. The result of morphology processing is therefore an image with new features, which supports pattern recognition and image analysis techniques.

2- Structure Elements
The key factor in applying a morphology process to a binary image is a rectangular array structure or kernel mask (structure element), which takes different patterns of zeros and ones, figure (2.8) [Rav13]. The choice of a suitable one depends on the particular problem. By sliding this mask over the image

Chapter Two

Theoretical Background

the morphology process can take two states. The first, when the set pixels (ones) of the structure element all match the corresponding neighbourhood of the image pixel, is called the fit state; when the match condition is satisfied for at least a single pixel, the hit state occurs [Jan12].

Figure 2.8: Structure element.

3- Dilation and Erosion [Rav13]
Dilation and erosion represent the two major operators of image morphology; they result from applying the convolution kernel mask (structure element) to the image. The dilation state is met when the image pixel has a hit state: the value of the new pixel (in the new image) equals one, which leads to a growing of the object, figure (2.9.c). For the erosion state, the value of the image pixel equals one only if the fit state is satisfied (the reverse of the first state), figure (2.9.b).

Figure 2.9: Dilation and erosion. a) binary image, b) eroded image, c) dilated image.



Dilation: A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }        …(2.20)

Erosion: A ⊖ B = { z | (B)z ⊆ A }        …(2.21)
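Equations (2.20) and (2.21) can be sketched directly in Python. This is a minimal sketch, not the thesis implementation; with the symmetric all-ones structuring element used here, the reflection B̂ equals B, so the hit test reduces to a plain window check:

```python
import numpy as np

def dilate(img, se):
    """Binary dilation (eq. 2.20 sketch): the output pixel is 1 when the
    structuring element hits the foreground anywhere in the window."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = int(np.any(win & se))
    return out

def erode(img, se):
    """Binary erosion (eq. 2.21 sketch): the output pixel is 1 only when the
    structuring element fits entirely inside the foreground."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = int(np.all(win[se == 1] == 1))
    return out

se = np.ones((3, 3), dtype=int)   # 3x3 structuring element of ones
img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1                     # a single foreground pixel
d = dilate(img, se)               # grows the pixel into a 3x3 block
e = erode(d, se)                  # shrinks the block back to the centre
```

Dilating a single pixel with the 3×3 element grows it into a 3×3 block; eroding that block recovers only the centre pixel, illustrating the fit/hit duality of the two operators.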

where ⊕ and ⊖ denote dilation and erosion, ∅ is the empty set, B is the structure element, B̂ its reflection and (B)z its translation by z.

4- Extraction of Connected Components
To reach a labelled image, the first strategy (multi-scan algorithms, represented by extraction of connected components) is applied to a binary image A whose foreground pixels have the label value 1 and whose background pixels have the label value 0. The process is implemented iteratively, with a stopping condition, depending on the dilation concept (eq. 2.20). The initial step locates the first foreground pixel p, which becomes the seed point of the reconstructed matrix Xk; with the structure element B, the image A is scanned by computing the following form [Mil08], [Raf02], [Shi09]:

Xk = (Xk-1 ⊕ B) ∩ A,  with X0 = p and k = 1, 2, …, n        …(2.22)

This iterative process terminates when the condition Xk = Xk-1 is satisfied. Note: to apply the image labeling concept, each regenerated connected component is assigned a distinguishing label.



By applying the algorithm below, the steps for regenerating the original binary image can be seen, figure (2.10). Image labeling is achieved by subjecting each connected object to this algorithm with a distinguishing label value [Mil08], [Raf02].

Algorithm (2.3): Connected Components Extraction.
Input: binary image. Output: connected component segment.
begin
Step1: read the input binary image IB.
Step2: locate the first foreground pixel p at location p(x,y).
Step3: initialize the structure element B.
Step4: k = 0.
Step5: initialize the connected component matrix Xk.
Step6: set Xk(x,y) = p(x,y).
Step7: repeat
    Y = Xk.
    Apply the dilation process to Xk and intersect the result with the original image IB: Xk+1 = dilation(B, Xk) ∩ IB.
    k = k + 1.
until (Y == Xk).
Step8: set CC_MATRIX = Y.
Step9: return (CC_MATRIX).
end
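The iteration Xk+1 = (Xk ⊕ B) ∩ A can be sketched as follows. This is a minimal sketch under the assumption of an all-ones 3×3 structuring element (8-connectivity); the helper `dilate` stands in for the morphological dilation of eq. (2.20):

```python
import numpy as np

def dilate(img, se):
    """Minimal binary dilation used by the extraction loop."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    return np.array([[int(np.any(padded[y:y + se.shape[0], x:x + se.shape[1]] & se))
                      for x in range(img.shape[1])]
                     for y in range(img.shape[0])])

def extract_component(img, seed, se):
    """Connected-component extraction (Algorithm 2.3 / eq. 2.22 sketch):
    X_{k+1} = dilate(X_k, B) ∩ A, iterated until X_{k+1} == X_k."""
    x = np.zeros_like(img)
    x[seed] = 1
    while True:
        nxt = dilate(x, se) & img
        if np.array_equal(nxt, x):
            return x
        x = nxt

# Two separate blobs; the seed picks out only the blob it belongs to.
A = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 1]])
se = np.ones((3, 3), dtype=int)
cc = extract_component(A, (0, 0), se)
```

Labeling the whole image then amounts to repeating this extraction from each not-yet-labelled foreground pixel, assigning a fresh label to each returned component.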

Figure 2.10: Regenerated connected component segment.



2.5 Image Distance Transform
The image distance transform (DT) plays an essential role in many applications such as pattern recognition, computer vision, robotics and image matching, particularly binary image matching using suitable features created by the matching approaches [Muh00]. The distance transform is a conversion applied to a binary image to produce a gray-level image in which each pixel holds a real value corresponding to the minimum distance between an object pixel (Ob) and a background pixel (Bg), figure (2.11); with I(x,y) ∈ {Ob, Bg}, it can be defined as follows [Don04]:

Id(x,y) = min { ||(x,y) − (x′,y′)|| : I(x′,y′) = Bg },  for I(x,y) = Ob        …(2.23)

where Ob is an object pixel and Bg a background pixel.

Figure 2.11: Image distance transform. a) binary image, b) binary value representation, c) image distance transform.

Distance transform algorithms use different distance metrics for computing the real distance, from non-Euclidean metrics such as the city block distance transform (CBDT) and the chamfer distance to the Euclidean distance transform (EDT). Each of them affects the output quality, positively or negatively, in terms of time consumption and precision.



2.5.1 Distance Transforms with Sampled Functions [Ped12]
The distance transform of a sampled function is a generalization of the distance transform of a binary image: it is defined over a grid (rows, columns) with an arbitrary cost function instead of binary values. With sampled functions, the basic intuition for computing the image distance transform is a cost assigned to each pixel, reflecting the presence or absence of a feature. Let £ = {1, 2, 3, …, n} be a uniform one-dimensional grid and F: £ → R a function; then the distance transform of the sampled function is defined as follows:

Df(p) = min q∈£ ((p − q)² + f(q))        …(2.24)

where Df is the Euclidean distance transform value, p the testing point and q the nearest point. For every point q ∈ £, the distance transform of F is bounded from above by a parabola rooted at position (q, f(q)). The distance transform is therefore realized by the lower envelope of these parabolas, figure (2.12), and its value corresponds to the height of the lower envelope.

Figure 2.12: The lower envelope of n parabolas.

To compute the image distance transform, two steps must be implemented: 1- calculating the lower envelope of the n parabolas; 2- solving equation (2.24) by substituting the lower envelope's height at each grid position. Two parabolas contributing to the distance transform intersect at a single point, so the intersection position s between two parabolas rooted at grid positions r and q is given by equation (2.25).

s = ((f(r) + r²) − (f(q) + q²)) / (2r − 2q)        …(2.25)

The lower envelope is calculated by sequentially processing the parabolas in the order of their horizontal positions. The parabola rooted at q is considered and its intersection with the last parabola in v[k] is found; two states can then occur. First, figure (2.13.a), if the intersection position is after z[k], the lower envelope is extended as in the following algorithm's steps. The second, opposite state, figure (2.13.b), deletes the k-th parabola: the parabola at v[k] is not contained in the new lower envelope.

Figure 2.13: Lower-envelope construction states. a) state 1, b) state 2.

The one-dimensional distance transform of a sampled function on a grid is computed by the following algorithm:



Algorithm (2.4): Sampled-Function Distance Transform.
Input: row of image pixels f. Output: distance values d.
begin
Step1: k = 0.
Step2: v[0] = 0.
Step3: z[0] = −∞.
Step4: z[1] = +∞.
Step5: for q = 1 to n−1
Step6:   compute the intersection point s = ((f(q) + q²) − (f(v[k]) + v[k]²)) / (2q − 2v[k]).
Step7:   while s ≤ z[k]: set k = k − 1 and recompute s as in Step6.
         Then k = k + 1; v[k] = q; z[k] = s; z[k+1] = +∞.
Step8: k = 0.
Step9: for q = 0 to n−1
Step10:  while z[k+1] < q: k = k + 1.
         d(q) = (q − v[k])² + f(v[k]).
end
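Algorithm 2.4 can be sketched as a runnable Python function. This is a minimal sketch of the lower-envelope computation of equations (2.24)–(2.25); for a binary row, f is taken as 0 on foreground pixels and a large constant elsewhere, so the output is the squared distance to the nearest foreground pixel:

```python
import math

def dt1d(f):
    """1D squared-Euclidean distance transform of a sampled function f
    (lower-envelope method, sketch of Algorithm 2.4)."""
    n = len(f)
    d = [0.0] * n
    v = [0] * n            # grid locations of the envelope parabolas
    z = [0.0] * (n + 1)    # boundaries between adjacent parabolas
    k = 0
    z[0], z[1] = -math.inf, math.inf
    for q in range(1, n):
        # Intersection of the parabola at q with the last envelope parabola.
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            # The previous parabola is not part of the envelope: delete it.
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k] = s
        z[k + 1] = math.inf
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

row = [1e9, 1e9, 0.0, 1e9, 1e9, 1e9]   # 0 on the foreground pixel
dist = dt1d(row)                        # -> [4.0, 1.0, 0.0, 1.0, 4.0, 9.0]
```

Applying `dt1d` to every row and then to every column of the result yields the 2D squared Euclidean distance transform used by the later line-removal algorithms.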

Step2: to each row of f apply the distance sampling procedure (the steps of Algorithm 2.4) and save the result for each row in the calculation matrix (cal).
Step3: apply the same procedure to each column of (cal) to obtain the distance-transformed image.
…
Step9: compute the distance ratio DR as the fraction of object pixels whose distance value is less than the predefined number 7.
Step10: if DR > threshold value then set CS = 0; else
Step11: G = CS.
Step12: return (G).
end

The proposed algorithm starts by subjecting each segment of the spots-off binary cuneiform image (each separated segment), figure (3.17), to the distance transform. The number of distance pixels less than the predefined value 7 is counted to compute the distance ratio DR, as in step (9). The DR of each separated object is then compared with the predefined threshold (α) to erase or retain the object (line or symbol), as in step 10.
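The distance-ratio criterion of steps 9–10 can be sketched as follows. This is a hedged reading of the text: a thin writing line lies everywhere close to the background, so almost all of its distance values are small and its ratio is high; the exact pixel-selection rule is an assumption:

```python
def distance_ratio(dt, cutoff=7):
    """Distance ratio DR (sketch of step 9): the fraction of object pixels
    whose distance-transform value is below the predefined cutoff 7.
    The segment is erased when DR exceeds the threshold alpha (step 10)."""
    obj = [v for row in dt for v in row if v > 0]
    if not obj:
        return 0.0
    return sum(1 for v in obj if v < cutoff) / len(obj)

# A toy distance-transform matrix: three thin-object pixels and one thick one.
dr = distance_ratio([[0, 1, 2],
                     [0, 1, 8]])   # 3 of 4 object pixels are below 7
```

A segment whose DR exceeds α is treated as a writing line and set to zero; otherwise it is kept as a symbol.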

Figure 3.17: Erasing the cuneiform writing line. a) cuneiform image, b) binary image, c) spots-off cuneiform image.


Chapter Three

Assyrian cuneiform recognition system (ACRS)

The writing-line cuneiform symbols are thus erased according to the comparison in step 10 with the predefined threshold value, as seen in figure (3.18).

Figure 3.18: Lines-off binary cuneiform image.

The problem with this proposed algorithm is the predefined threshold value. For example, the figure below illustrates the difference in the quality of the results with the same threshold value of 0.9.

Figure 3.19: Problem of the statistical algorithm. The writing cuneiform line remains in (b) after applying the proposed algorithm (3-4), compared with (a).


The remnant of the writing line can be seen at the bottom of the image after the proposed algorithm is applied. To achieve the optimal result, a pure binary image containing just cuneiform symbols, the threshold value must therefore be changed to a new value, figure (3-20).

Figure 3.20: The cuneiform writing line is removed after selecting a suitable threshold value.

2. MSE Algorithm for Removing the Writing Line
The second proposed method for removing cuneiform lines adopts the mean square error (MSE) as the decision criterion for unwanted cuneiform writing lines. The MSE is computed between the first result matrix, obtained by applying the sampled-function distance transform to each row of the binary image, and the second matrix (the image distance transform), obtained by applying the same transform to each column of the first matrix. The basic principle of this algorithm is that the MSE value computed in this way differs distinctly between vertical and horizontal cuneiform symbols (where the symbol's shape approaches a line), as in the following example: the MSE for a vertical shape has a large value compared with a horizontal shape.

Figure 3.21: The MSE value of the left image (a) equals 6.1, compared with the right image (b), where the MSE equals 1458.2.
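The MSE criterion itself is a straightforward pixel-wise computation between the two distance matrices. A minimal sketch (the matrices here are toy inputs, not the thesis's distance transforms):

```python
def mse(a, b):
    """Mean squared error between two equally sized matrices, used as the
    decision criterion of the MSE line-removal algorithm (sketch)."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

# Row-pass vs column-pass distance matrices (toy values).
err = mse([[0, 0], [0, 0]],
          [[1, 1], [3, 1]])   # (1 + 1 + 9 + 1) / 4
```

A large MSE between the row-wise and column-wise transforms indicates a vertically elongated shape, while a near-zero MSE indicates a horizontal line-like shape, which is the cue used to erase the writing line.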



Following this concept, the second proposed algorithm adopts the MSE computed for each separated cuneiform symbol, compared with the length of the cuneiform symbol, to erase or retain it.

Algorithm 3.5: MSE Algorithm for Removing the Writing Line.
Input: spots-off binary cuneiform image. Output: lines-off binary cuneiform image.
begin
Step1: read the cuneiform image segment CS; set f = CS. <apply distance transform>
Step2: to each row of f apply the distance sampling procedure (Algorithm 2.4) with m iterations and save the result for each row in the calculation matrix (cal):

2-1 to 2-21: the lower-envelope steps of Algorithm 2.4 (k = 0; v[0] = 0; z[0] = −∞; z[1] = +∞; for q = 1 to n−1 compute the intersection point s and update the envelope; then for q = 0 to n−1 read off the distance values).

…
Step11: normalize each cuneiform symbol image to size Si 200×200.
Step12: apply the proposed polygon approximation algorithm (3-6) to generate the test feature vector tv(i).
Step13: determine the number of vertices (number of structure feature points) sp.
Step14: from the learning dataset select the feature learning vectors lv that have the same number of points sp, where z = number of training vectors.
Step15: for j = 1 to z
Step16: apply the normalization process to each element of tv(i) and lv(j).
Step17: for k = 1 to sp
Step18: compute

G(j) = (1 / ((2π)^(d/2) δ^d)) · (1/Ni) Σ(j=1..Ni) exp( −(lvij − tv)ᵀ(lvij − tv) / (2δ²) )

Step19: next k.
Step20: next j.


Step21: find max_val = max(G(j)).
Step22: find the class label corresponding to max_val <the pattern with the maximum probabilistic value for the test pattern through the PNN model>.
Step23: next i.
Step24: generate the recognition code according to the number of symbols and their directions.
end
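The PNN decision of steps 15–22 can be sketched as follows. This is a minimal sketch: each class response is an average of Gaussian kernels centred on its training vectors; the normalising constant 1/((2π)^(d/2) δ^d) is the same for every class, so it is dropped here since it does not change the argmax. The class names and vectors below are illustrative, not from the thesis dataset:

```python
import math

def pnn_classify(test_vec, classes, sigma=1.0):
    """Probabilistic neural network decision (sketch of steps 15-22):
    average a Gaussian kernel over each class's training vectors and
    return the class label with the largest response."""
    def g(train_vecs):
        total = 0.0
        for lv in train_vecs:
            d2 = sum((a - b) ** 2 for a, b in zip(lv, test_vec))
            total += math.exp(-d2 / (2.0 * sigma ** 2))
        return total / len(train_vecs)
    return max(classes, key=lambda label: g(classes[label]))

# Hypothetical two-class training set of normalized feature vectors.
classes = {"vertical": [(0.0, 1.0), (0.1, 0.9)],
           "horizontal": [(1.0, 0.0), (0.9, 0.1)]}
label = pnn_classify((0.05, 0.95), classes)
```

The smoothing parameter sigma plays the role of δ in the formula above: smaller values make the classifier behave like nearest-neighbour matching, larger values smooth the class boundaries.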

By applying the previous proposed algorithms for the recognition task, the cuneiform character image of figure (3-36) is subjected to the recognition stages as follows.

Figure 3.36: Color cuneiform image.

Figure 3.37: Enhancement stages (a-d).


The cuneiform image is enhanced after being subjected to the enhancement process in the frequency domain, as seen in figure (3-37-d): it is initially subjected to the frequency transform, figure (3-37-b), and the result is multiplied by a suitable low-pass filter, figure (3-37-c). The last step computes the enhanced image from the inverse-transformed image. The next step is the binarization process by the proposed algorithm (3.1), whose output can be seen in figure (3-38).

Figure 3.38: Binarized cuneiform image.

To remove the spots resulting from the binarization process, the output of the previous step is subjected to the proposed spot-elimination algorithm (3-2); the resulting spots-off image can be seen in figure (3-39).

Figure 3.39: Spots-off cuneiform image.

To remove the cuneiform writing line, the spots-off cuneiform image is subjected to the proposed algorithm (3-5); the output is illustrated below.



Figure 3.40: Cleared cuneiform binary image.

To separate each symbol into an individual matrix, the CCL labeling algorithm must first be applied; each labelled symbol is then separated to generate its individual matrix, as in figure (3-41).

Figure 3.41: a) labelled cuneiform image, b,c,d) separated cuneiform images.

Each cuneiform symbol is then subjected to the previous classification algorithm to determine its class label according to the training set. The recognized classes against each symbol are reviewed in figure (3.42).


Figure 3.42: The classification results against each symbol.

After these results are achieved, the recognition code is constructed based on the number of symbols together with the individual count for each direction. For the previous example the recognition code is (4v2h2d0O0), where the first number is the total number of cuneiform symbols, followed by the counts classified according to each direction (vertical, horizontal, diagonal and oblique). Finally, the recognized cuneiform image character after constructing the recognition code is illustrated in figure (3.43).

Figure 3.43: Cuneiform character.
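The construction of the recognition code can be sketched as follows. The code layout assumed here, total count followed by letter-count pairs in the fixed order v, h, d, O, is one reading of the example (4v2h2d0O0) given above:

```python
from collections import Counter

def recognition_code(directions):
    """Build the recognition code (sketch): the total symbol count,
    then the per-direction counts in the order v, h, d, O
    (vertical, horizontal, diagonal, oblique)."""
    c = Counter(directions)
    return str(len(directions)) + "".join(
        f"{d}{c.get(d, 0)}" for d in ("v", "h", "d", "O"))

# Two vertical and two horizontal symbols, as in the worked example.
code = recognition_code(["v", "v", "h", "h"])   # -> "4v2h2d0O0"
```

Because the code depends only on symbol counts per direction, distinct characters can share a code, which is exactly the duplication problem that the post-processing model of Section 3.4 addresses.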

3.4 Post-processing Model (2)
This section presents the post-processing procedure to solve the output problem of duplicated recognition codes mentioned earlier. An example is shown in figure (3.44) below, where the matching-code problem is evident: after the cuneiform characters are subjected to classification algorithm (3.8), each of


them has the same recognition code according to the number of symbols and their directions.

Figure 3.44: Matching-code problem. a,b) cuneiform characters with the same recognition code.

3.4.1 Proposed Solution for the Duplication Problem
To find an approach that distinguishes between the previous states, the solution, briefly, depends on the centroid values of each triangle, figure (3.45), as a classification criterion adopted to create a feature vector for the classification task.

Figure 3.45: Cuneiform patterns with their centroids; diagram of cuneiform characters with their centroids according to each triangle.

To extract the coordinates of each triangle's density centroid, the distance transform and the approximate points (structure feature points) are adopted. After the class and its direction are determined through the initial classification stage, a subset of the structure points (search points) is used to determine the single or multiple search points according to the type and direction


of each class, figure (3.46). These search points determine the search path for locating the density centroid coordinates after the DT is applied.

Figure 3.46: Virtual cuneiform patterns: a) single structure point with a single search path, b) two structure points with multiple paths, c) three structure points with three paths, d) four structure points with four search paths.

As seen in the above figure, the starting point of each arrow represents a structure point (search point); their number depends on the type of class.

3.4.2 Locating the Density Centroid
After applying the DT, the density centroid coordinate is the pixel with the maximum distance value, as in the following example, figure (3.47), where the distance value equals 7 at coordinate (36,82).

Figure 3.47: Density centroid pixel equals 7 at coordinate (36,28).
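Locating the density centroid is a simple argmax over the distance-transformed symbol matrix. A minimal sketch (the toy matrix below is illustrative):

```python
def density_centroid(dt):
    """Density centroid (Section 3.4.2 sketch): the coordinate of the
    maximum distance-transform value inside a symbol's matrix.
    Returns (row, column, distance value)."""
    best = max((v, y, x)
               for y, row in enumerate(dt)
               for x, v in enumerate(row))
    return best[1], best[2], best[0]

# Toy distance matrix: the maximum value 3 sits at row 1, column 0.
centroid = density_centroid([[0, 1],
                             [3, 2]])
```

In the post-processing model, this point is computed once per separated symbol, and the collected coordinates form the feature vector used by the second classification stage.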


The post-processing procedure is thus a second recognition stage and consists of two stages.

3.4.3 Training Stage
In this stage, after the recognition code is determined by the first classification task, the training stage is applied to create the training database from separated binary cuneiform images, figure (3.48): each image's symbol (a separated training binary cuneiform symbol) is separated into a distinct matrix and subjected to the DT to determine the density centroid coordinates as mentioned. Each training feature vector in the training database is collected from all the density centroid coordinate points. The proposed learning algorithm follows.

Figure 3.48: Separated training binary cuneiform symbols.



Algorithm 3-10: Training Post-processing.
Input: binary cuneiform image. Output: learning feature vectors.
Begin
Step1: read the separated learning binary cuneiform image Ism.
Step2: apply the connected-components extraction image labeling algorithm.
Step3: separate each connected labelled segment into a distinct matrix Mati(x,y).
Step4: for each labelled segment
  4.1: apply the DT.
  4.2: extract the density centroid pixel coordinates according to the maximum value.
  4.3: save the density centroid pixel coordinates.
  4.4: create the learning feature vector Lfv.
  4.5: end.
Step5: return the learning dataset.
End

3.4.4 Test Feature Extraction Stage
In this stage, before applying the classification task, the test vector must be prepared. The process of determining the centroid points is more complex than in the previous (training) stage because of the variety of cuneiform class topologies. The initial step therefore determines the type of class and its direction, which fixes the starting position of the search process that reaches the centroid position. For example, as seen in figure (3.49), the triple cuneiform symbols class can take different forms according to the light reflected on the image or the writer's style; the proposed feature extraction method for the test stage must take all these considerations into account, depending on the approximate structure points selected by the previous classification, as mentioned, for each cuneiform class. Each of the symbols in the following



figure has a different collection of approximate structure points, which are relied on to extract the search points.

Figure 3.49: Extracting search points. a,b,c,d) cuneiform symbols with different styles; e,f,g,h) the search points determined from each set of approximate structure points (legend: approximate structure point; approximate structure point selected as a search point; search path).

As seen in the above figure, the number of approximate points differs with the style of the cuneiform symbols, being (7, 9, 7, 7) for figures (e, f, g, h) respectively. This changes the procedure for selecting the search points, which are a subset of all the structure points. The direction of each cuneiform symbol, which imposes the direction of the search path, must also be taken into account. The proposed algorithm for test feature extraction follows:


Algorithm 3-11: Testing Feature Extraction.
Input: binary cuneiform testing image. Output: testing feature vectors.
begin
Step1: read the test binary cuneiform image Tcm.
Step2: apply the connected-components extraction image labeling algorithm.
Step3: separate each connected labelled segment into a distinct matrix Mati(x,y).
Step4: for each labelled segment i in Mati(x,y)
  4.1: determine the cuneiform class with its direction.
  4.2: generate the search points sp as a subset of the approximate structure points.
  4.3: determine the search direction path for the search process.
  4.4: apply the DT.
  4.5: extract the density centroid pixel coordinates.
  4.6: save the density centroid pixel coordinates.
  4.7: end.
Step5: return the testing feature vector Lfv.
Step6: end.

3.4.5 Classification Stage
After generating the testing feature vector and the learning dataset, the classification process applies the PNN classifier to determine the class label between the testing vector and the training dataset vectors, which leads to selecting the right cuneiform image character. The post-processing diagram follows.



Figure 3.50: Model (2), the post-processing classification model. Training state: input the training dataset images; read each cuneiform binary learning image; apply the image labeling procedure and separate each symbol into a distinct matrix; apply the DT procedure to each matrix; extract the density centroid points for each symbol; generate a feature vector for each training image; when the termination condition (size of training images) is reached, construct the learning dataset. Testing state: read the cuneiform binary testing image; apply the image labeling procedure and separate each symbol into a distinct matrix; from the previous classification stage determine the structure points and classes for each image; apply the DT procedure to each matrix; generate the feature vector for the testing image; apply classification by the PNN classifier.


3.5 Accuracy Support for Cuneiform Symbol Recognition by an Image Fusion Approach
This section presents the proposed solution to the problem of inaccurate recognition of cuneiform symbols. The problem is evident when low-quality illumination affects the segmentation process negatively, as in figure (3.51.a). When the ACRS is run on the mentioned cuneiform image character, an inaccurate result is obtained because of insufficient segmented patterns, figure (3.51.b); these results lead to selecting a wrong pattern instead of the right one, figure (3.52).

Figure 3.51: Illumination problem. a) cuneiform color image, b) binary segmented image.

Figure 3.52: Classification problem. a,b) cuneiform patterns; c,d) inaccurate classes; e,f) correct classes (table columns: segment number, cuneiform image segment, wrongly selected pattern, rightly selected pattern).


To solve this problem, an image fusion technique is applied to improve the quality of the colour density, which leads to the correct selection decision in the recognition process. The fusing process starts by providing two cuneiform images of the same scene, figure (3.53): photos of the same cuneiform character taken at different light angles. This procedure helps to complete the segmented cuneiform pattern. The initial processing step applies the wavelet transform to each of the two images to produce a fused coefficient map based on fusion rules. The last step constructs the fused image by the inverse transform. The fusing process can be illustrated as follows:

Figure 3.53: Proposed cuneiform image fusion diagram (wavelet transform of each input image, two coefficient maps, a fused coefficient map, inverse transform to a gray fused image of the cuneiform text character, then the ACRS segmentation step).


Lastly, the fused image is subjected to the recognition process by the ACRS, and the recognition results then satisfy the correct state. The proposed cuneiform image fusion algorithm for supporting recognition accuracy follows.

Algorithm 3-12: Cuneiform Image Fusion Algorithm.
Input: cuneiform testing images. Output: classified image.
begin
Step1: read the cuneiform images (Im1, Im2).
Step2: apply the wavelet transform to each image: w_Im1 = wtransform(Im1), w_Im2 = wtransform(Im2).
Step3: compute the coefficient map of each transformed image: Out1 = CoefficientMap(w_Im1), Out2 = CoefficientMap(w_Im2).
Step4: apply the fused coefficient map to Out1 and Out2: Fused_Out = FusedCoefficientMap(Out1, Out2).
Step5: apply the symbol classification algorithm (4.9) to the fused image.
Step6: return the classified image.
end
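The fusion rule of Step4 can be sketched as follows. The thesis does not spell out its rule, so this sketch assumes the common "max-abs" choice for wavelet coefficients: at each position keep the coefficient of larger magnitude, i.e. the image that locally carries more detail wins:

```python
def fuse_coefficients(c1, c2):
    """Coefficient-map fusion rule sketch (assumed max-abs rule): keep, at
    each position, the coefficient with the larger magnitude. c1 and c2 are
    same-shaped coefficient maps from the two differently lit photographs."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(r1, r2)]
            for r1, r2 in zip(c1, c2)]

# Toy coefficient rows from the two transformed images.
fused = fuse_coefficients([[1, -5]],
                          [[2,  3]])   # -> [[2, -5]]
```

The fused map is then inverse-transformed to obtain the gray fused image that enters the ACRS segmentation step.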


Chapter Four Experiments and Results Discussion

4.1 Introduction
This chapter presents the results obtained after applying the proposed recognition system's algorithms presented earlier. They are distributed over the following sections according to preprocessing, segmentation, feature extraction and classification. The proposed system is implemented in Matlab 2017a; the experiments were performed on an Intel Core i7, 64-bit operating system, 2.5 GHz processor and 8 GB RAM.

4.2 Cuneiform Tablets Images Dataset
The proposed ACRS is evaluated on a dataset divided into a training set, consisting of four main patterns distributed over 17 main virtual rectangular classes reflecting all cuneiform symbol probabilities related to their direction, and a testing set separated into two groups. The first group (G1) consists of 240 cuneiform images of 75 characters, where the third factor (appendix A) is taken into account for testing. The second group (G2) consists of 280 binary images reflecting all the states or probabilities of the cuneiform symbol classes; it represents the adopted testing domain for evaluating the optimal feature extraction method, by matching patterns and determining their direction, through Hu's moments, ZM, projection histograms, EFD and polygon approximation. Supervised classifiers such as PNN and SVM are used, evaluated according to the two main factors considered for recognition: accuracy and processing time. This thesis adopted the mentioned virtual triangular-form training dataset, which agrees with the approximated output figures of the cuneiform symbols produced by the proposed feature



extraction method, evaluated with the previous metrics, as seen in the following figures. The approximated figures below (figure 4.1) are close to triangular forms.

Figure 4.1: Approximated output figures produced by the polygon approximation method (columns: binary images and their approximated outputs).



4.3 Cuneiform Tablet Image Preprocessing
In this section several proposed algorithms are applied to enhance the image features and remove unwanted ones, such as the cuneiform writing lines and spots, to support the classification task. The evaluation of these algorithms is explained in the following sections; the testing set is (G1).

4.3.1 Image Enhancement
To determine the optimal domain for the enhancement filters, spatial or frequency, the first evaluation starts by applying the ACRS system without any enhancement filter, to indicate whether one is required at all. After applying the ACRS this way, the recognition ratio equals 0.631, due to the deformation of the symbols' edge features, which leads to selecting incorrect classes, as seen in figure (4.2.e) below.

Figure 4.2: Deformation problem. a) original image; b,c,d) cuneiform binary symbols; e) wrongly classified class; f,g) correctly classified classes.

To evaluate the spatial domain filters, the low-pass filters (Median, Gaussian, Average) are now applied to dataset (G1) with different sizes, as seen in figure (4.3); the recognition results achieved for each filter type and size are given in table (4.1).

Figure 4.3: A cuneiform image and the corresponding enhanced output images for different LPFs (Median, Gaussian, Average) with different sizes (3×3, 5×5, 7×7).

Table 4.1: Comparison of recognition accuracy ratios after applying different LPFs with different sizes.

Filter size   3×3     5×5     7×7
Median        0.594   0.643   0.633
Gaussian      0.668   0.618   0.512
Average       0.512   0.475   0.426

Table (4.1) shows the weakness of the recognition results when spatial-domain LPFs are applied. In contrast, the recognition value is higher when the LPF is applied in the frequency domain, especially with the ideal filter. The recognition accuracy ratios after applying the frequency-domain filter with different cut-off frequency values are evaluated in table (4.2).


Table 4.2: Recognition accuracy for each cut-off frequency value of the ideal filter.

Cut-off frequency   Recognition accuracy
0.2                 0.500
0.4                 0.614
0.6                 0.940
0.8                 0.627
0.9                 0.676

The table above shows that the highest recognition accuracy is achieved with a cut-off frequency value of 0.6, compared with the other values.
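The frequency-domain pipeline behind Table 4.2 (FFT, ideal low-pass mask, inverse FFT) can be sketched as follows. This is a minimal sketch; the convention that the mask radius is `cutoff` times half the smaller image dimension is an assumption, since the thesis only reports the cut-off values themselves:

```python
import numpy as np

def ideal_lowpass(img, cutoff=0.6):
    """Ideal low-pass enhancement sketch: transform to the frequency
    domain, zero every frequency farther from the centre than the cut-off
    radius, and transform back (Table 4.2 reports cutoff 0.6 scoring best)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    mask = r <= cutoff * min(h, w) / 2.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A constant image has only a DC component, so it passes through unchanged.
smooth = ideal_lowpass(np.ones((8, 8)), 0.6)
```

On a real tablet photograph the mask suppresses the high-frequency noise around the wedge edges while keeping the overall symbol shapes, which is what raises the recognition ratio over the unfiltered baseline.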



4.3.2 Removing Spots
As previously mentioned, to remove spots from the cuneiform image and obtain uniform results (binary images free of spots), the proposed algorithm (3.2) is evaluated using equation (2.43) between the binary cuneiform test images and the expected binary cuneiform images, the latter produced by a manual erosion process in which all spots have been removed, as shown by the examples in figure (4.4).

Figure (4.4): removing spots. a,d) cuneiform color images; b,e) binary images; c,f) expected spot-free images produced by a manual erosion process.

The results achieved after applying the proposed algorithm with different threshold values can be seen in the following table.


Table 4.3: Comparison of results for each threshold value within the optimal interval [0.003, 0.007].

Threshold value    Accuracy rate
0.003              0.910
0.005              0.950
0.007              0.906

4.3.3 Removing writing lines. This section reviews the evaluation of the proposed algorithms for erasing the ruled writing lines that run among the cuneiform symbols. The evaluation applies equation (2.43) between the expected binary image, generated manually by removing every writing line, and the cuneiform binary image produced by the previous process (each image already clear of spots), as seen in figure (4.5).

Figure (4.5): removing writing lines. a,b) cuneiform tablet images; c,d) binary images clear of spots; e,f) expected binary images clear of writing lines.

After applying the proposed algorithms (3.4, 3.5) for erasing the cuneiform writing lines, it can be concluded from the accuracy values that the (MSE) algorithm erases the writing lines with a higher confidence level than the first (statistical) algorithm. The first algorithm was run five times, producing a different erosion accuracy value for each threshold, as seen in table 4.4, computed with equation (2.44).

Table 4.4: Comparison of results after applying the first line-removal algorithm with different threshold values.

Threshold value    Accuracy
0.5                0.67
0.6                0.68
0.7                0.55
0.8                0.37
0.9                0.21

The erosion accuracy value after applying the second algorithm (MSE) is 0.92.

4.4 Image Thresholding

This section compares the results achieved by the thresholding techniques of Otsu, Niblack, Sauvola and the iterative method against the proposed cuneiform image thresholding algorithm (4.1). The selection criterion is to pick the optimal method according to its accuracy level. Table (4.5) shows the recognition accuracy achieved with each thresholding method.

Table 4.5: Comparison of recognition accuracy after applying each thresholding method.

Thresholding method              Accuracy level
Otsu's                           0.504
Niblack                          0.614
Sauvola                          0.897
Iterative                        0.471
Cuneiform image thresholding     0.940

As seen in the previous table, the suggested algorithm (4.1) is the optimal one. For illustration, the figure below shows typical outputs of the thresholding methods: the amount of unwanted elements in the binarized image is small with Sauvola and Niblack.
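Of the compared baselines, Otsu's method is the simplest to state: pick the threshold that maximizes the between-class variance of the gray-level histogram. A minimal NumPy sketch (an illustrative re-implementation, not the thesis's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold t maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0    # class means
        m1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Otsu is global; Niblack and Sauvola instead compute a local threshold per window from the local mean and standard deviation, which is why they cope better with the uneven illumination of tablet photographs, as table (4.5) reflects.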

Figure 4.6: image binarization methods. a) cuneiform image; b,c,d,e) images binarized by Otsu's, Niblack, Sauvola and the iterative technique respectively, with different quality results.

4.5 Feature extraction

This section reviews a comparative evaluation of the classical feature extraction methods used to construct the feature vector, in order to compare them with the new proposed approximation feature extraction method. The assessment is based on recognition accuracy, tested on the proposed virtual training dataset, initially on the patterns alone and then together with their directions. The training dataset adopted in this test consists of rectangular virtual shapes distributed, as mentioned, over four main patterns (figure 4.7), while the testing set (G2) reflects all possible probabilities of cuneiform symbols, each represented by a binary image. The recognition task is implemented by supervised classifiers, a PNN and an SVM with discriminant kernel functions (linear, polynomial, RBF); the evaluation of each extraction method is explained as follows.

a- Elliptical Fourier Descriptors.
b- Projection histograms.
c- Hu's moments.
d- Zernike moments (ZM).

4.5.1 Elliptical Fourier Descriptors (EFD):

This section reviews the evaluation of EFD; the training pattern set for this method can be seen in figure (4.7). The aim is to determine whether EFD is appropriate for adoption by the recognition system, based on the accuracy results it achieves. The feature vector generated by EFD is built from the quadruple Fourier coefficients (ai, bi, ci, di) defined by equations (40, 41, 42, 43), where a predefined number of harmonics n determines the size of the feature vector: the vector consists of four parts, one per coefficient in sequence, and the size of each part equals the number of harmonics (figure 4.8).

Figure 4.7: learning patterns with the same direction.

Figure 4.8: the EFD feature vector and its coefficient content: four parts [ai | bi | ci | di], each of size n.

Initially, to generate the quadruple Fourier coefficients (n = harmonic size), the boundary of each cuneiform image symbol is extracted (Canny operator) as a closed contour, from which a Freeman chain code is generated; the chain code determines the length of the chain (T). Figure (4.9) shows the steps of the EFD process.

Figure 4.9: Elliptical Fourier Descriptors. a) binary cuneiform symbol; b) boundary image; c) quadruple Fourier curve; d) matching class.

As an example, a Fourier coefficient descriptor feature vector with four harmonics (n = 4) is shown in the following table.

Table 4.6: feature vector constructed from the (EFD) coefficients.

(a) coefficient:  -30.6489    12.0626   -34.5081     4.6260
(b) coefficient:   55.5311    27.0116   -16.3438    -3.2365
(c) coefficient:   64.9505    24.0365     6.9418    -0.5701
(d) coefficient:    6.9000    45.2830     5.3341     1.0962

After applying EFD as the feature extraction approach, the recognition accuracy results for the cuneiform symbols dataset are illustrated in table (4.7) for the PNN classifier and the SVM with each of its kernel functions, with the harmonic coefficient set to 10.
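The quadruple coefficients above follow the standard Kuhl-Giardina elliptic Fourier formulation. A compact NumPy sketch of that computation, taking the closed contour as an (N, 2) point list rather than the chain code the thesis starts from (an assumption of this sketch, as is the function name):

```python
import numpy as np

def elliptic_fourier(contour, n_harmonics=4):
    """Kuhl-Giardina elliptic Fourier coefficients of a closed contour.
    Returns the feature vector laid out as [a_1..a_n, b_1..b_n, c_1..c_n, d_1..d_n],
    matching the four-part layout of figure (4.8)."""
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)  # close the loop
    dt = np.hypot(d[:, 0], d[:, 1])                         # segment lengths
    dt[dt == 0] = 1e-12
    t = np.concatenate([[0.0], np.cumsum(dt)])              # cumulative arc length
    T = t[-1]                                               # total perimeter (chain length)
    coeffs = np.zeros((n_harmonics, 4))
    for n in range(1, n_harmonics + 1):
        c = T / (2 * n ** 2 * np.pi ** 2)
        arg = 2 * n * np.pi * t / T
        dcos = np.cos(arg[1:]) - np.cos(arg[:-1])
        dsin = np.sin(arg[1:]) - np.sin(arg[:-1])
        coeffs[n - 1] = [
            c * np.sum(d[:, 0] / dt * dcos),   # a_n
            c * np.sum(d[:, 0] / dt * dsin),   # b_n
            c * np.sum(d[:, 1] / dt * dcos),   # c_n
            c * np.sum(d[:, 1] / dt * dsin),   # d_n
        ]
    return coeffs.T.ravel()
```

For a circular contour of radius R, the first-harmonic magnitude recovers approximately R, which is a convenient sanity check.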

Table 4.7: Comparison of recognition results and average classification time for EFD with each classifier.

Classification approach         Accuracy    Average classification time (s)
SVM with RBF kernel             0.6590      1.106
SVM with polynomial kernel      0.6623      1.073
SVM with linear kernel          0.5550      1.066
PNN                             0.7020      0.293

As seen in the above table, the highest recognition accuracy is achieved by the PNN.

4.5.2 Projection histograms

The dataset is now evaluated with another feature extraction method, the projection histograms. As mentioned in chapter three, in the adopted approach the length of the feature vector equals the total number of rows and columns, and each of its values is the count of foreground pixels in one row or column, with respect to the changing color factor. After testing the dataset with the PNN classifier and the SVM with its kernel functions, the recognition accuracy is illustrated in the following table.

Table 4.8: Comparison of recognition results and average classification time for projection histograms with each classifier.

Classification approach         Accuracy    Average classification time (s)
SVM with RBF kernel             0.5734      5.533
SVM with polynomial kernel      0.4910      5.366
SVM with linear kernel          0.3790      5.333
PNN                             0.4444      1.466

As seen in the above table, the highest recognition accuracy is achieved by the SVM with the RBF kernel function.
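The projection-histogram feature vector described above reduces to one line of NumPy; the function name is illustrative.

```python
import numpy as np

def projection_features(binary):
    """Feature vector = foreground counts per row, then per column,
    so its length is rows + cols, as section 4.5.2 describes."""
    return np.concatenate([binary.sum(axis=1), binary.sum(axis=0)])
```

For a 3x4 binary image this yields a 7-element vector (3 row counts followed by 4 column counts).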

4.5.3 Hu's moments

With the Hu's moments metric, the generated feature vector consists of seven parts, each carrying one moment value computed by applying equations (29-35) (figure 4.10). The evaluation concerns the matching accuracy between the proposed training class and the testing class, to judge Hu's moments as a feature extraction method for the recognition system. The recognition accuracy values are shown in table (4.9) for each classifier.

Figure 4.10: Hu's moments feature vector (seven components G1-G7).

Table 4.9: Comparison of recognition results and average classification time for Hu's moments with each classifier.

Classification approach         Accuracy    Average classification time (s)
SVM with RBF kernel             0.3512      0.211
SVM with polynomial kernel      0.3440      0.200
SVM with linear kernel          0.3470      0.210
PNN                             0.3010      0.048

As seen in the above table, the highest recognition accuracy is achieved by the SVM with the RBF kernel function; the low values overall relate to the small size of the feature vector, which is composed of only seven values.

4.5.4 Zernike moments (ZM)

To evaluate the ZM, the feature vector is constructed by computing the ZM values, with respect to the phase angle moment, on each square zone produced by the zoning process, as shown in figure (4.11). After the four zone values are computed, the feature vector is assembled from them by applying equation (2.26); this scheme is used for the training stage and again for testing. The recognition accuracy values are shown in table (4.10) for each of the previous classifiers.

Figure (4.11): four ZM values, one for each square zone (0.934, 0.155, 0.0968, 0.1812).

Table 4.10: Comparison of recognition results and average classification time for ZM with each classifier.

Classification approach         Accuracy    Average classification time (s)
SVM with RBF kernel             0.2007      0.221
SVM with polynomial kernel      0.1397      0.214
SVM with linear kernel          0.1111      0.213
PNN                             0.2616      0.058

As seen in the above table, the highest recognition accuracy is achieved by the PNN.


4.5.5 Polygon approximation

In this section the feature vector is constructed by a newly suggested approach based on the polygon approximation methodology, specifically the dominant points method. The section illustrates a comparative evaluation between the approximate points generated by algorithm (2.5) and by the suggested algorithm (3.6), after applying each of them to the experimental cuneiform symbols testing set (G2). The main goal is to find a set of approximate points whose count equals the required predefined numbers (3, 5, 7, 9) corresponding to each pattern (figure 3.32). First, the approximating algorithm (3.5) is applied to the testing set (G2) over several trials, each with an individual diversity value (ɛ) taken from the interval [0, 1]. The reported experimental values are the compression ratio (CR), defined by equation (2.45), and the accuracy (the ratio of correctly approximated images to all tested images), defined by equation (2.44), where a correct image is one whose number of approximate points satisfies the expected cuneiform image class (figure 4.12).
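The classic Douglas-Peucker scheme is the textbook instance of diversity-threshold polygon approximation and serves here only as an illustration of the idea; the thesis's algorithms (2.5)/(3.6) differ in detail, and the function names are assumptions of this sketch. A small wrapper also illustrates the stopping criterion of shrinking ɛ until the point count reaches a target.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Keep a point whenever its distance to the current chord exceeds eps."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    ab = b - a
    norm = np.hypot(ab[0], ab[1]) or 1e-12
    # perpendicular distance of every point to the chord a-b
    dist = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / norm
    i = int(np.argmax(dist))
    if dist[i] <= eps:
        return np.array([a, b])          # flat enough: keep only the endpoints
    left = douglas_peucker(pts[:i + 1], eps)
    right = douglas_peucker(pts[i:], eps)
    return np.vstack([left[:-1], right])  # drop the duplicated split point

def approximate_to(points, target, eps=0.05, step=0.001):
    """Decrease eps by a fixed step until the simplified polygon reaches
    at least `target` points (the stopping-criterion idea of algorithm (3.6))."""
    while eps > 0:
        approx = douglas_peucker(points, eps)
        if len(approx) >= target:
            return approx
        eps -= step
    return np.asarray(points, dtype=float)
```

For an L-shaped polyline the result collapses to the three dominant points, the corner being the farthest point from the chord.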

Figure (4.12): polygon approximation. a) boundary cuneiform image; b) approximate figure with five approximate points (marked points are the approximation points).

The table below reviews the average compression ratio and accuracy for each experiment, i.e. for each diversity value.

Table 4.11: Experimental results of the polygon approximation algorithm for each diversity value.

Experiment   Diversity value (ɛ)   Compression ratio   Accuracy
1            0.0010                166.9059            0.444
2            0.0020                166.9059            0.444
3            0.0030                166.9059            0.444
4            0.0040                166.9059            0.444
5            0.0050                166.9059            0.444
6            0.0060                166.6461            0.444
7            0.0070                165.4961            0.444
8            0.0080                163.1622            0.444
9            0.0090                157.6265            0.444
10           0.0100                149.5529            0.444
11           0.0200                89.1224             0.121
12           0.0300                62.1263             0
13           0.0400                47.9412             0
14           0.0500                38.8383             0
15           0.0600                32.6839             0
16           0.0700                28.2009             0
17           0.0800                24.8047             0
18           0.0900                22.1224             0
19           0.0920                21.6944             0
20           0.0940                21.0281             0
21           0.0980                20.2186             0
22           0.1000                20.6050             0

From the above table it can be concluded that the best trade-off between compression ratio (CR) and accuracy is achieved in experiments (5-10), where the diversity value runs from 0.005 to 0.01. This diversity interval [0.005, 0.01] is therefore adopted as the proposed testing interval for evaluating the proposed approximation algorithm (3.6).

The proposed algorithm (3.6) is now applied to evaluate its accuracy in generating the approximate points for each pattern. Based on the main diversity interval mentioned above, the following table covers 13 proposed diversity values. The evaluation subjects the cuneiform testing set (G2) to the proposed algorithm, which approximates the boundary region of each cuneiform image symbol. For each individual diversity value ɛ, the process is repeated a number of times, starting from the initial diversity value and decreasing it by a fixed step until the stopping criterion is satisfied (the number of approximate points equals one of the predefined numbers (3, 5, 7, 9)). The decreasing step value is set to 0.001. After the approximate points are created, the proposed algorithm (3.6) proceeds to generate the structure features. The table below lists, for each experiment, the CR, the number of iterations (Noi) and the accuracy (equation (2.44)).

Table 4.12: Experimental results for each diversity value.

Experiment   Diversity value (ɛ)   CR        Noi      Accuracy
1            0.0040                535.211   1.032    0.401
2            0.0050                507.127   1.103    0.419
3            0.0060                444.275   1.107    0.580
4            0.0070                405.430   1.136    0.730
5            0.0080                369.251   1.121    0.878
6            0.0090                356.954   1.161    0.953
7            0.0092                354.662   1.154    0.960
8            0.0094                356.208   1.175    0.956
9            0.0097                354.665   1.234    0.964
10           0.0100                353.556   3.231    0.972
11           0.0200                350.119   3.487    0.870
12           0.0300                335.424   5.258    0.731
13           0.0400                337.911   8.1075   0.69

As seen in table (4.12), the highest accuracy, 0.972, is achieved with a diversity value of 0.01. Across the different independent diversity values listed in the table, increasing the value gradually leads to a polygon approximation that matches the original boundary shape; in the following figure the reader can follow the reconstruction process for individual diversity values.

Figure 4.13: approximation steps. a) binary cuneiform symbol; b) approximate figure for experiment 2; c) approximate figure for experiment 4; d) approximate figure for experiment 8.

The testing set (G2) is then subjected to the testing stage, with the proposed polygon approximation algorithm (3.6) adopted as the feature extraction model. The recognition accuracy results and average classification times after applying the PNN and SVM classifiers are illustrated in table (4.13).

Table 4.13: Comparison of recognition results and average classification time for each classifier.

Classification approach         Accuracy    Average classification time (s)
SVM with RBF kernel             0.950       0.332
SVM with polynomial kernel      0.795       0.322
SVM with linear kernel          0.573       0.320
PNN                             0.950       0.088

As shown in table (4.13), the recognition accuracy is nearly equal for the SVM with its RBF discriminant function and for the PNN, but the PNN's classification time is less than one third of the SVM's. The difference in processing time is due to the fact that the first classifier model (SVM) is more complicated than the second (PNN); this appears with each discriminant kernel function (RBF, polynomial and linear), with the highest accuracy achieved by the RBF function. Figure (4.14) shows some classification results between the testing set (G2) and the virtual classes.

Figure 4.14: symbol classification results.

The testing set (G2) is then subjected to a resizing process to evaluate the proposed polygon approximation feature extraction with the PNN classifier. The table below shows the average recognition accuracy for each size.


Table 4.14: Comparison of recognition results for different image sizes.

Image size (pixels)    Accuracy
200x200                0.950
128x128                0.900
64x64                  0.887

The Gaussian filter is then applied to the testing set (G2) several times with different standard deviation values δ, as seen in figure (4.15). The accuracy value for each δ is given in table (4.15).

Figure 4.15: cuneiform symbol deformation: the Gaussian filter applied to each binary cuneiform symbol with different standard deviation values (δ = 1, 3, 7).

Table 4.15: Comparison of recognition accuracy for different standard deviation values δ.

Standard deviation value    Accuracy
1                           0.860
3                           0.867
4                           0.820
7                           0.770

As seen in table (4.15), the accuracy values decrease as the standard deviation increases: a larger standard deviation deforms each tested shape in set (G2), which reflects negatively on the accuracy value.

4.6 Classification

As mentioned in the previous sections, the optimal feature extraction method for generating the feature vector is the proposed algorithm, based on the high recognition accuracy it achieved, and the PNN classifier is adopted as a result of its accuracy and its saving of processing time. Based on this, the proposed (ACRS) adopts the metrics of the previous proposed algorithms, where they satisfy a high accuracy; the target of the proposed system is to generate the recognition code illustrated in (4.3.6). The proposed system is evaluated by testing on set (G1); the recognition accuracy after applying the proposed system equals 0.94, and some correct states are shown in figure (4.16). For the duplicated state, the evaluation achieved the accuracy values and processing times illustrated in table (4.16), where the SVM and PNN are applied in the post-processing state (model 2).
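The PNN adopted above is, in its simplest form, a Parzen-window density estimator per class: each training sample contributes a Gaussian kernel, and a test point is assigned to the class with the largest summed response. A minimal NumPy sketch (illustrative only; the thesis's network and its smoothing parameter are not reproduced here, and the function name is an assumption):

```python
import numpy as np

def pnn_classify(train_X, train_y, test_X, sigma=0.1):
    """Minimal probabilistic neural network: one Gaussian Parzen window per
    training sample; each test point goes to the class whose kernels respond
    most strongly in total."""
    classes = np.unique(train_y)
    preds = []
    for x in np.atleast_2d(test_X):
        d2 = np.sum((train_X - x) ** 2, axis=1)       # squared distances
        k = np.exp(-d2 / (2 * sigma ** 2))            # kernel responses
        scores = [k[train_y == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

There is no iterative training, only distance evaluations against the stored samples, which is why the PNN's classification times in the tables above stay well below the SVM's; the cost grows with the training-set size instead.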

Table 4.16: Comparison of recognition accuracy and classification time for the duplicated recognition state.

Classification approach         Accuracy    Classification time (s)
SVM with RBF kernel             98%         4.333
SVM with polynomial kernel      90%         4.566
SVM with linear kernel          93%         4.300
PNN                             100%        2.800

As seen in table (4.16), the PNN classifier outperforms the second classifier on both the accuracy and the processing-time factors. The processing time of the SVM, with each of its discriminant kernel functions, exceeds four seconds as a result of the complexity of this classifier compared with the other one (PNN); this fact becomes clearer as the size of the training set increases. The accuracy factor is likewise satisfied by the (PNN) classifier.

Figure (4.16): cuneiform character recognition.

Lastly, the recognition results achieved for different cuneiform image sizes (G2) are illustrated in the following table.

Table 4.17: Comparison of recognition accuracy for different image sizes.

Image size (pixels)    Accuracy
200x200                0.720
300x300                0.782
400x400                0.940

4.7 Results and Discussion

The system testing process was completed according to the proposed algorithms, which cover the relevant aspects: pre-processing, feature extraction, classification and post-processing. For image enhancement, adopting the frequency domain with a low pass filter (the ideal filter) leads to a higher recognition rate than the spatial-domain filters. To achieve a uniform testing form of images, free of spots and writing lines, the proposed algorithms based on image morphology and distance transforms achieved a highly accurate erosion process. The proposed feature extraction algorithm, polygon approximation with dominant points, results in a higher recognition rate than the other feature extraction methods (EFD, Hu's moments, ZM and the projection histogram). The proposed vertical triangular dataset, evaluated through the proposed system, leads to high accuracy results, equal to 0.94. In the classification stage, the SVM with its discriminant kernel functions and the PNN were adopted for evaluating the feature extraction methods. The PNN satisfied the most accurate results with the minimum average processing time for the proposed polygon approximation algorithm, while the SVM achieved nearly equal accuracy through its RBF function among the other kernel functions, but with an average recognition time higher than that of the last


classifier (PNN). For the other feature extraction methods, the evaluated recognition results reach 0.702 for EFD, with an average classification time of 0.293 s, where the PNN is depended on. With the projection histograms the highest result equals 0.5734, with an average classification time of 5.533 s, using the SVM with the RBF kernel function. With Hu's moments the recognition result reaches 0.3512, with an average classification time of 0.211 s, using the SVM with the RBF kernel, and with ZM the recognition rate reaches 0.2616, with an average classification time of 0.058 s, where the PNN is the adopted classifier. Lastly, the recognition accuracy for the duplicated state equals 100% when the PNN classifier is adopted. The table below presents the summary accuracy result for each feature extraction metric with its average classification time.

Table 4.18: Comparison of recognition accuracy and average classification time for each feature extraction method.

Feature extraction metric       Accuracy    Classification approach    Classification time (s)
Polygon approximation           0.950       SVM with RBF kernel        0.332
Elliptic Fourier Descriptor     0.702       PNN                        0.293
Projection histograms           0.5734      SVM with RBF kernel        5.533
Hu's moments                    0.3512      SVM with RBF kernel        0.211
Zernike moments                 0.2616      PNN                        0.058

4.8 Analytical comparison

This section presents an analytical comparison between the results achieved in this thesis and the research mentioned in chapter one [Nak13]. The comparison is distributed over the following points.

Dataset: the dataset adopted in this thesis consists of real cuneiform images of clay or stone tablets, while the dataset used in the compared research consists of cuneiform symbols handwritten on paper.

Feature extraction: this thesis presents a new approach that approximates and extracts the structure features by polygon approximation methods, while the comparative research depends on structural and statistical features based on connected component labeling, histogram projection and center of gravity metrics.

Classifier: the classifier model applied in this thesis is the PNN, a supervised learning model, like the comparative research, which depends on a multilayer perceptron (MLP) neural network (backpropagation) model.

Accuracy results: the recognition accuracy achieved in this thesis equals 94%, while in the comparative research it ranges from 83% to 95% depending on the cuneiform class, with an average recognition rate of 89.86%.


Chapter Five: Conclusions and Suggestions for Future Work

5.1 Conclusions

This chapter summarizes the evaluation of the thesis results and reviews the contributions of the proposed work (ACRS), based on the achieved results. The conclusions can be illustrated as follows:

1. The proposed recognition system, which depends on OCR principles to implement its task, achieved a high recognition rate equal to 94%. Each cuneiform symbol (wedge) is subjected to a recognition process according to two factors: the first relates to recognizing the cuneiform symbol's pattern, and the second concerns determining its direction and position, while considering the constructed patterns of wedges as the result of the reflected light angle and the style of the writer, as a new analysis approach.

2. The image enhancement preprocessing technique is an important factor affecting the recognition accuracy results; enhancement in the frequency domain is more sufficient and better supports the recognition task.

3. The proposed virtual training dataset is an appropriate new approach, contributing to solving the shadow problem of cuneiform symbols during the recognition process.

4. The Sauvola and Niblack image thresholding methods are more sufficient than the other thresholding methods when they are adopted by the proposed cuneiform thresholding algorithm with the skewness statistical metric as a decision criterion.


5. It is preferable to subject the cuneiform tablet image to an erosion process for unwanted objects (spots and writing lines) by the proposed algorithms (3.2, 3.5), as these objects affect the accuracy of the recognition results.

6. Relying on the proposed spot-erasing algorithm (3.2), with its threshold value, as the only erosion process (for both spots and writing lines) can erase some important features (symbols). Therefore the proposed writing-line erosion algorithm (3.5) must be applied after the proposed spot-erasing algorithm (3.2).

7. Inefficient, low recognition results were achieved after testing Zernike's and Hu's moments as feature extraction methods for generating the feature vector, according to the evaluated recognition results.

8. High recognition results were achieved after adopting the proposed polygon approximation algorithm (3.6) with dominant points (DP) as the feature extraction method, with both the PNN and the SVM with the RBF discriminant function.

9. The optimal diversity interval lies within [0.0090, 0.0100]; this interval was adopted by the proposed algorithm (3.6) to identify the optimal diversity value according to the compression and accuracy values.

10. The PNN classifier is adopted by the proposed recognition system as a result of satisfying the two mentioned criteria (accuracy and processing time); its processing time compared with the SVM and its discriminant functions was about one to four.

11. The duplicated-symbols problem in cuneiform recognition is solved in the post-processing stage as a new approach, depending on applying the DT and the PNN classifier model, with an accuracy result equal to 100%.


5.2 Suggestions for Future Work

The proposed system can be improved in the following directions:

1. To adopt other polygon approximation methods for generating the feature vector, such as heuristic, dynamic programming and split approaches, where the selection among them depends on the recognition accuracy.

2. To apply a synchronized recognition task for cuneiform symbols by implementing parallel processing techniques to reduce the processing time.

3. To adopt fuzzy set principles for evaluating the optimal cutoff frequency value in the enhancement process, and for the approximation approaches in polygon approximation as a feature extraction method.


References

[Abd14] Abdelhadi Lotfi and Abdelkader Benyettou, "A reduced probabilistic neural network for the classification of large databases", Turkish Journal of Electrical Engineering & Computer Sciences, (2014) 22: 979-989.
[Aga00] P. K. Agarwal and K. R. Varadarajan, (2000) "Efficient Algorithms for Approximating Polygonal Chains", Discrete & Computational Geometry, Volume 23, Issue 2, pp. 273-291.
[Alx03] Alexander Kolesnikov and Pasi Fränti, "Polygonal Approximation of Closed Contours", 13th Scandinavian Conference, SCIA 2003, Halmstad, Sweden, June 29 - July 2, 2003.
[And07] Andrew George A., (2007), "Babylonian and Assyrian: history of Akkadian", University of London.
[Ani01] AL-Ani S., 2001, "Image enhancement and recognition of cuneiform writing", PhD Thesis, Institute of Higher Students for Computer and Information.
[Ani00] Anil K., Robert P., and Jianchang Mao, (2000) "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1.
[Anj17] Er. Anjna and Er. Rajandeep Kaur, (2017) "Review of Image Segmentation Technique", International Journal of Advanced Research in Computer Science, Volume 8, No. 4, May 2017.
[Asi08] Asif Masood, "Dominant point detection by reverse polygonization of digital curves", Image and Vision Computing 26 (2008) 702-715.
[Bal02] B. Ballarò, P. G. Reas and D. Tegolo, "Elliptical Fourier Descriptors for shape retrieval in biological images", Conf. on Electronics, Control & Signal, 2002.
[Bim94] Bimal Kr. Ray, Kumar S. Ray, and Dutta Majumder, "An optimal algorithm for polygonal approximation for digitized curves", Pattern Recognition Letters, Volume 15, Issue 8, August 1994, Pages 743-750.
[Bin14] Bincy Bavachan, Prem Krishnan, "A Survey on Image Fusion Techniques", International Journal of Research in Computer and Communication Technology, Vol. 3, No. 3 (2014).
[Dan06] Daniel V. Hahn, Donald D. Duncan, and Kevin C. Baldwin, "Digital Hammurabi: Design and Development of a 3D Scanner for Cuneiform Tablets", SPIE 6056, Three-Dimensional Image Capture and Applications VII, 60560E.
[Chr15] Christopher Woods, (2015), "Visual language", Oriental Institute, Chicago.
[Dee12] Deepak Kumar Sahu, M. P. Parsai, "Different Image Fusion Techniques - A Critical Review", International Journal of Modern Engineering Research, Vol. 2, Issue 5, Sep.-Oct. 2012, pp. 4298-4301.
[Des16] C. N. Deshmukh and N. G. Bajad, "A Comparative Study of Different Image Fusion Techniques for Tone-Mapped Images", International Journal of Scientific & Engineering Research, Volume 7, Issue 2, February 2016.
[Dew09] Dewi Nasien, Habibollah Haron, and Siti Sophiayati Yuhaniz, (2009) "The Study of Handwriting Character Recognition (HCR) and Support Vector Machine (SVM)", Allen Institute for Artificial Intelligence.
[Dha13] Dhaval Salvi, Jun Zhou, Jarrell Waggoner, and Song Wang, "Handwritten Text Segmentation using Average Longest Path Algorithm", Applications of Computer Vision (WACV), IEEE Workshop, pp. 15-17, 2013.
[Don04] Donald G. Bailey, "An Efficient Euclidean Distance Transform", conference paper, Lecture Notes in Computer Science, "Radio Electronics & Info Communications" (UkrMiCo) International Conference, December 2004.
[Fah14] Fahimeh Mostofi and Adnan Khashman, (2014) "Intelligent Recognition of Ancient Persian Cuneiform Characters", NCTA Conference.
[Gau14] Gaurav Kumar and Pradeep Kumar Bhatia, "A Detailed Review of Feature Extraction in Image Processing Systems", Fourth International Conference on Advanced Computing & Communication Technologies, IEEE, 2014.
[Geo04] V. L. Georgiou, N. G. Pavlidis, K. E. Alevizos, and M. N. Vrahatis, "Optimizing the Performance of Probabilistic Neural Networks in a Bioinformatics Task", Proc. of the EUNITE 2004 Conference.
[Gri03] Grigore Ovidiu and Remco C. Veltkamp, (2003), "On the Implementation of Polygonal Approximation Algorithms", Department of Information and Computing Sciences, Utrecht University, technical report UU-CS-2003-005.
[Gur14] Gurpreet Kaur and Rajdavinder Singh, (2014), "Image Enhancement and Its Techniques - A Review", International Journal of Computer Trends and Technology (IJCTT), Volume 12, Number 3.
[Hai06] Haithem Abdul Lateef Al-Ani, "Data Extraction of Cuneiform Tablets Digital Images", PhD Thesis, Al Rasheed College, University of Technology, September 2006.
[Ham13] Hamid Abbasi, Mohammad Olyaee and Hamid Reza Ghafari, "Rectifying Reverse Polygonization of Digital Curves for Dominant Point Detection", IJCSI International Journal of Computer Science Issues, Vol. 10, Issue 3, No. 2, May 2013.
[Hil06] Hilal Yousif, Abdul Munim Rahma, and Haithem Alani, "Cuneiform Symbols Recognition Using Intensity Curve", The International Arab Journal of Information Technology, Vol. 3, No. 3.
[Jan15] P. Janani, J. Premaladha, and K. S. Ravichandran, (2015) "Image Enhancement Techniques: A Study", Indian Journal of Science and Technology, Vol. 8, Num. 22.
[Jan12] Jan Bartovsky, "Hardware Architectures for Morphological Filters with Large Structuring Elements", PhD thesis, University of West Bohemia in Pilsen, November 2012.
[Jam11] Jamileh Youse, (2011), "Image Binarization using Otsu Thresholding Algorithm", University of Guelph, Ontario, Canada.
[Jia98] Jianming Hu and Hong Yan, (1998), "Structural Primitive Extraction and Coding for Handwritten Numeral Recognition", Pattern Recognition, Vol. 31, No. 5, pp. 493-509.
[Jia16] Jiao-Hong Yi, Jian Wang and Gai-Ge Wang, (2016), "Improved probabilistic neural networks self-adapt strategies for transformer fault diagnosis problem", Advances in Mechanical Engineering, Vol. 8(1), 1-13.

References

[JON04] Jonathan Cohen, Donald Duncan, Dean Snyder, Jerrold Cooper Subodh Kumar, Daniel Hahn, Yuan Chen, Budirijanto Purnomo, and John Graettinger (2004), "iClay: Digitizing Cuneiform",The 5th International Symposiumon Virtual Reality, Archaeology Cultural Heritage VAST. [Kal13] Kalaivani Selvakuma and Bimal Kumar Ray," SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVE " International Journal of Information Technology, Modeling and Computing (IJITMC) Vol.1, No.2, May 2013. [Kaw12] Kawther K. Ahmed, (2012) "Online Sumarians Cuneiform Detection Based on Symbol Structural Vector Algorithm ", J. Of College Of Education For Women vol. 23 (2) 2012. [Kha 16] Khalid FARDOUSSE, Hassan QJIDAA” Handwritten motives image recognition using polygonal approximation and chain-code”, WSEAS TRANSACTIONS on SYSTEMS and CONTROL,vol 11, E-ISSN: 2224-2856. [Kin16] L W King, M.A., and FS A.," Assyrian language " copywriter FB and ClTd, 2016. [KUA03] KUANG-BORWANG, TSORNG-LINCHIA, and ZENCHEN.," Parallel Execution of a Connected Component LabelingOperation on a Linear Array Architecture "JOURNAL OF INFORMATION SCIENCE ANDENGINEERING19, 353-370. [Kur15] Kurt Schwenk., Felix Huber,( 2015), " Connected Component Labeling Algorithm for very complex and high resolution images on an FPGA platform ", High-Performance Computing in Remote Sensing, V, 964603. [Lif09] Lifeng He., Yuyan Chao, Kenji Suzuki, and Kesheng Wu,(2009)," Fast connected-component labeling", Pattern Recognition 42. [Leo15] Leonard Rothacker, Denis Fisseler., Gerfrid G.W, Müller G, Frank Weichert,(2015)," Retrieving Cuneiform Structures in a SegmentationfreeWord Spotting Framework", 3rd International Workshop on Historical Document Imaging and Processing , August 22, Pages 129-136. 951

References

[Mar13] Marc Van De Mieroop. (1999), "Cuneiform Texts and the Writing of History",Routledge. [Mil08] Milan Sonka, Vaclav Hlavac, and Roger Boyle" Imaga processing , Analysis and Machine Vision ", Thomson ,Third edition , 2008. [Muh00] Muhammad H. Alsuwaiyel, Marina Gavrilova. "On the Distance Transform of Binary Images ".The 2000 international conference on imaging science system and Technology. Vol. I. Las Vegas, Nev. p 83–6. [Mus16] Mustafa Salam Kadhm AL-Shammari .(2016)," Arabic Handwritten Text Recognition and Writer Identification", PhD Thesis, University of technology [Moh17] Mohammad Abdul Ramiz. and Ruhina Quazi,( 2017)," Design Of An Efficient Image Enhancement Algorithms Using Hybrid Technique " , International Journal on Recent and Innovation Trends in Computing and Communication, Volume: 5 Issue: 6 , June. [Nab08] Nabih N. Abdelmalek, and William A. Malek, Numerical Linear Approximation in C, Taylor & Francis Group,2008. [NAG09] PV NAGESWARA RAO, T UMA DEVI and Kaladhar D., and DSVGK KALADHAR" A probabilistic neural network approach FOR PROTEIN SUPERFAMILY CLASSIFICATION ",Journal of Theoretical and Applied Information Technology,2009. [Nak13] Naktal M.Edan, (2013),"Cuneiform Symbols Recognition Based on KMeans and Neural Network " , Fifth Scientific Conference Information Technology,. , Vol. 10, No. 1, 2013. [Nid15] Nida M. Zaitoun and Musbah J. Aqel,(215)" Survey on Image Segmentation Techniques",International Conference on Communication, Management and Information Technology , Procedia Computer Science 65 (797 – 806). [Nil18] Nils .M. Kriege, Matthias Fey, Denis Fisseler, Petra Mutzel and Frank Weichert,(2018) " Recognizing cuneiform Signs Using Graph Based Methods " , Computer Vision and Pattern recognition, 9 Mar 2018 . [OIV96] OIVIND DUE TRIER., ANIL K. JAIN TORFINN TAXT," FEATURE EXTRACTION METHODS FOR CHARACTERRECOGNITION--A SURVEY" . Pattern Reco#nition, Vol. 29, No. 4, pp. 641-662, 1996. 951

References

[Pau97] Paul L. Rosin.," Techniques for Assessing Polygonal Approximations of Curves "., IEEE Transactions on Pattern Analysis and Machine Intelligence, Jun 1997. [Ped12] Pedro F. Felzenszwalb, and Daniel P. Huttenlocher, " Distance Transforms of Sampled Functions" ., THEORY OF COMPUTING, Volume 8 (2012), pp. 415–428. [pin14] Pinki Agrawal, Vishakha Chourasia, Ravikant Kapoor, and Sanjay Agrawal," A Comprehensive Study of the Image Enhancement Techniques. ",(2014), International Journal of Advance Foundation and Research in Computer Volume 1, Issue. [Pra06] B. Gatos,I. Pratikakis, and S.J. Perantonis, " Adaptive degraded document image Binarization ", Pattern Recognition 39 (2006) 317 – 327. [Pri13] Priyanka Sharma and Manavjeet Kau.,2013, " Classification in Pattern Recognition: A Review", International Journal of Advanced Research in Computer Science and Software Engineering. Volume 3, Issue 4. [Pri17] Pritpal Singh., Sumit Budhiraja ,( 2017) ," Feature Extraction and Classification Techniques in O.C.R Systems for Handwritten Gurmukhi Script " , International Journal of Engineering Research and Applications, Vol. 1, Issue 4, pp. 1736-1739. [pov14] Poovizhi P.," A Study on Preprocessing Techniques for the Character Recognition ",( 2014 ), International Journal of Open Information Technologies vol. 2, no. 12. [Raf02] Rafael C. Gonzalez , Richard E. Woods, (2002), "Digital Image Processing", Prentice Hall. [raj15] M P Raj, P R Swaminarayan, J R Saini and D. K. Parmar", (2015), "Applications of Pattern Recognition Algorithms in Agriculture: A Review", Int. J. Advanced Networking and Applications Volume: 6 Issue: 5. [Ram17] Rama Gaur and V.S. Chouhan ,(2017), "Survey on Feature Extraction Techniques for Handwritten Character Recognition" international Journal on Recent and Innovation Trends in Computing and Communication, Volume: 5 Issue 5. [Rav13] Ravi y S., and A M Khan,(2013)" Morphological Operations for Image Processing",NCVSComs-13 CONFERENCE PROCEEDINGS. 
[Roh12] Rohit Verma , and Jahid Ali,(2012)" A-Survey of Feature Extraction 954

References

and Classification Techniques in OCR Systems " International Journal of Computer Applications & Information Technology Vol. I, Issue III. [Sab13] Sabhara R., Lee C., and Lim K.," Comparative Study of Hu Moments and Zernike Moments in Object Recognition ",Smart Computing Review, vol. 3, no. 3, June 2013. [SAN07] SANJEEV KUNTE .p, D SUDHAKER SAMUEL, 2007 “A simple and efficient optical character recognition systemfor basic symbols in printed Kannada text” Sadhan, journal of Indian Academy of science, Vol. 32, Part 5. [Sam10] Sammut C., and Webb G., “Encyclopedia of Machine Learning”, US: Springer 2010. Sciences, Vol. 3, No. 1. [SEA02]Sean Eron Anderson and Marc Levoy (2002), "Unwrapping and Visualizing Cuneiform Tablets ",Stanford University 2002. [She13] Sheenam Bansal, and Raman Maini,(2013)," A Comparative Analysis of Iterative and Ostu’s Thresholding Techniques ", International Journal of Computer Applications Volume 66– No.12. [Shi09] Shih.Y.Shih, "image processing and mathematical morphology fundamental and application" , Tyler & Francis group 2009. [Sne12] Snehal O.Mundhada, and V. K.Shandilya, 2012, " Spatial and Transformation Domain Techniques for Image Enhancement ", International Journal of Engineering Science and Innovative Technology (IJESIT) Volume 1, Issue 2. [Suj14] Sujata Saini., Komal Arora, , 2014, " A Study Analysis on the Different Image Segmentation Techniques ",International Journal of Information & Computation Technology Volume 4, Number 14. [You12] Youssef Bassil, and Mohammad Alwani, (2012)," OCR Post-processing error correction algorithm using googles's online spelling suggestion " Journal of Emerging Trends in Computing and Information. [Vin12] Vinita Dutt., Vikas Chaudhry., and Imran Khan 2012," Pattern Recognition :an Overview", American Journal of Intelligent Systems, Volume 2 No 1.

955

References

956

Appendix (A): Assyrian Language

1. Introduction

Cuneiform writing is considered one of the oldest writing systems in the world, originating in the land of Mesopotamia in the third millennium B.C. This writing relied on impressing cuneiform symbols into clay tablets, or carving them into stone, to form groups of symbols that conveyed the basic meanings of the language. The surviving tablets record commercial and historical transactions as well as civil-rights legislation and charters. The term cuneiform comes from the Latin "cuneus", meaning wedge, and "forma", meaning form. The writing was invented in the Sumerian city of Uruk, in southern Iraq, around 3000 B.C. [Chr15], [Mar13]. Sumerian cuneiform writing was not initially composed of wedge-shaped symbols; it took the form of line drawings whose shapes expressed the meaning of things such as the sun, a farm, a bull, a star, or an ox [Chr15], figure (1).

(a)

(b)

Figure (1): Clay tablets of the Sumerian language.


Drawing these pictographic shapes on clay while preserving their characteristics proved difficult. The first Sumerian writing therefore underwent many stages of development, which led to the emergence of Akkadian cuneiform writing, the basis of the later cuneiform languages (Babylonian and Assyrian) [And07]. Development here means the re-representation of graphic shapes as cuneiform symbols, figure (2) [Kin16], which reduced the number of symbols from 1,200 to 500 [Leo15].

(Figure (2) is a table comparing, for the signs Fish, Mountain, and Man, the outline character of B.C. 4500 with its cuneiform form of B.C. 2500 and its Assyrian (B.C. 700) and Babylonian (B.C. 500) renderings.)

Figure (2): The stages of cuneiform writing development.

The Babylonian and Assyrian cuneiform languages can be classified according to their stages of historical development as follows [And07]:
1. Old stage (B.C. 2000-1500): Old Babylonian, Old Assyrian.
2. Middle stage (B.C. 1500-1000): Middle Babylonian, Middle Assyrian.
3. Neo stage (B.C. 1000-600): Neo-Babylonian, Neo-Assyrian.


2. Neo-Assyrian Cuneiform Language

The Assyrian cuneiform language represents one of the stages of the development of cuneiform writing in Mesopotamia, lasting from the beginning of the first millennium to 600 B.C. It relies on impressing cuneiform symbols into clay tablets, or carving them into stone, from left to right, to form groups of symbols that convey the basic meanings of the language. The language consists of a set of characters (see Appendix B), each composed of one or more cuneiform symbols. These symbols, or wedges, are arranged in different directions: horizontal, vertical, oblique, or diagonal [Mar13], [Leo15], [JON04]. The characters therefore differ from one another according to the number of their symbols, the directions of those symbols, and their locations, figure (3).

(a)

(b)

(c)

Figure (3): Cuneiform writing features. (a-c) Cuneiform symbols with their direction styles, taken from the Iraqi Museum.

The effect of the location factor (the third factor) is evident in figure (4), where the two characters have the same number of cuneiform symbols with the same directions, yet differ according to the positions of those symbols [Hai06].

(b)

(a)

Figure (4): Cuneiform characters distinguished by the position feature: (a, b) two characters whose symbols differ only in position.

Applying these distinguishing factors to the Assyrian cuneiform characters (Appendix B) confirms the significance of the third (position) factor. The Assyrian cuneiform language also differs from hieroglyphic writing: its characters may represent a shape, a meaning, or a sound, and a character can take a different meaning depending on the combination of characters in which it appears [And07].

3. Graphic Representation of Cuneiform

Visualizing cuneiform tablets is an important task, especially when documenting their images or performing cuneiform recognition. Unfortunately, the task is accompanied by a set of obstacles. The first problem relates to the nature of the writing medium, whether stone or clay, which carries a three-dimensional form of writing. The second relates to the geometry of the cuneiform symbol itself, which takes a three-dimensional form of three surfaces [Leo15], figure (5, a). The last problem is the cuneiform writing style: writing is not confined to one face of the tablet but may cover all of its surfaces, figure (5, b) [JON04].

(a)

(b)

Figure (5): Cuneiform writing geometry. a) The 3-D geometric shape of a cuneiform symbol. b) The style of cuneiform writing on more than one surface of a tablet.

Digital files of cuneiform tablets can be obtained in two ways. The first, the autograph method, starts by manually copying the tablet's symbols and characters to produce a drawing similar to the tablet, figure (6), which is then scanned to produce a digital image. The major disadvantages of this method are its time cost and its dependence on the author's interpretation [SEA02], so it suffers from mistakes. The second, the photographic method, produces a two-dimensional image of the cuneiform tablet. It suffers from the shadow problem that results from the three-dimensional geometric nature of cuneiform writing. To solve this problem, the photography process registers images under different lighting angles in order to accumulate adequate 3D information; this technique is known as prototype scanning, figure (7) [Dan06]. The technique maintains a fixed distance between the camera and the tablet, leading to an accurate registration process, while the illumination source varies as it rotates. Finally, the acquired information is subjected to preprocessing against noise and non-linearity problems [Dan06].

(a)

(b)

Figure (6): Autograph method. a) Cuneiform tablet image. b) Hand-drawn copy.


Figure (7): Prototype scanning model.

4. Cuneiform Sign Structure [Hai06]

The Assyrian writing system combines different types of wedges according to their direction styles: upright, horizontal, diagonal, or sloping, as seen in figure (8). Approximately 1,200 signs of this writing system are known. At first, symbols were written from top to bottom; later they were turned onto their sides and written from left to right, and in later periods harder materials were also used. Five basic wedge orientations are applied: horizontal, two diagonals, a hook, and a vertical stroke.

Figure (8): Cuneiform symbol styles.


5. Cuneiform Symbol Problems

Many problems and constraints accompany images of cuneiform tablets and negatively affect the recognition and analysis of cuneiform symbols. They are reviewed below case by case.

Shadow problem: As mentioned previously, cuneiform symbols have a three-dimensional geometric form, so changing the angle of light changes the properties of the cuneiform image, depending on the angle of the reflected light, figure (9). The resulting differences in shadow are reflected in the values of the segmented features. This directly affects character recognition: when two images of the same character, taken under different light angles, are segmented, the results differ. Figure (10) shows how the shadow areas differ depending on the direction of the light beam, producing different segmentation results.

(a)

(b)

Figure (9): Effect of the light angle. a) Upper-triangle shadow produced by a right light angle. b) Left-triangle shadow produced by a left light angle.


Figure (10): Shadow vs. segmentation problem. (a-d) Original cuneiform images taken from the Iraqi Museum; (e-h) the corresponding segmented images after binarization, whose results differ because of the variation in lighting.
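The sensitivity of binarization to lighting can be illustrated with Otsu's thresholding [She13], which picks the gray level that maximizes the between-class variance of the image histogram. The sketch below uses synthetic histograms (an illustrative assumption, not the thesis's actual tablet images) to show how the same wedge photographed under two light angles yields two different thresholds, and hence different segmentations:

```python
def otsu_threshold(hist):
    """Return the gray level maximizing between-class variance (Otsu)."""
    total = sum(hist)
    sum_all = sum(g * n for g, n in enumerate(hist))
    sum_b = 0.0          # cumulative gray*count of the background class
    w_b = 0              # cumulative pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_all - sum_b) / w_f     # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic 256-bin histograms of the same wedge under two light angles:
# deep shadows under frontal light, lighter shadows under raking light.
hist_frontal = [0] * 256
hist_frontal[50] = 300    # wedge-shadow pixels
hist_frontal[200] = 700   # clay-surface pixels

hist_raking = [0] * 256
hist_raking[80] = 300     # shadows brightened by the raking light
hist_raking[180] = 700    # surface slightly darkened

print(otsu_threshold(hist_frontal))  # 50
print(otsu_threshold(hist_raking))   # 80
```

Because the threshold shifts with the lighting, the binarized wedge regions, and therefore the extracted features, differ between the two images, which is exactly the inconsistency seen in figure (10).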

Font problem: There is no fixed specification for the cuneiform font. An observer can see different symbol sizes for the same character, depending on the writer's own judgment. Figure (11) shows the variation in symbol size from one image to another, in addition to the effect of shadows, especially in the right image.

Figure (11): Cuneiform images with different fonts.

Distortion problem: As a result of the aging of cuneiform tablets over the thousands of years since their writing, some cuneiform symbols suffer from erosion or breakage, which negatively affects their features.


(a)

(b)

Figure (12): The impact of distortion .

Writing-lines problem: A fundamental problem related to cuneiform symbols is the ruled writing lines that sometimes accompany the characters. These lines have different properties and vary from one image to another, so their presence affects the recognition stage, particularly the stage of generating the feature vectors.

(a)

(b)

Figure (13): The writing-line problem; note the difference between the two images in the presence and absence of lines.
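One standard way to suppress such thin ruled lines while keeping the thicker wedge marks is a morphological opening (erosion followed by dilation), the family of operations the thesis draws on [Rav13], [Shi09]. Below is a minimal pure-Python sketch on a toy binary image; the structuring element and the image are illustrative assumptions, not the thesis's actual line-removal algorithm:

```python
def erode(img, se):
    """Binary erosion: keep a pixel only if every SE offset lands on a 1."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def dilate(img, se):
    """Binary dilation: set a pixel if any reflected SE offset lands on a 1."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y - dy < h and 0 <= x - dx < w and img[y - dy][x - dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def opening(img, se):
    """Opening = erosion then dilation; removes shapes thinner than the SE."""
    return dilate(erode(img, se), se)

# 3x1 vertical structuring element: only shapes at least 3 pixels tall
# survive, so a 1-pixel-thick horizontal ruling line vanishes.
SE = [(-1, 0), (0, 0), (1, 0)]

img = [[0] * 8 for _ in range(7)]
for r in range(3):                 # a 3x3 "wedge" blob at rows 0-2, cols 4-6
    for c in range(4, 7):
        img[r][c] = 1
img[5] = [1] * 8                   # a thin writing line across row 5

cleaned = opening(img, SE)
# The writing line is gone; the wedge blob survives intact.
```

The choice of a vertical structuring element encodes the prior that ruling lines are horizontal and thin; a real pipeline would have to combine such priors with the variable line properties noted above.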


Appendix (B)

List of Assyrian Cuneiform Symbols


Abstract

Writing is considered one of the oldest inventions of humanity. It began in the land of Mesopotamia, specifically in southern Iraq in the region of Uruk, in the third millennium B.C. Across the ages, writing underwent a series of stages of development in which the old Sumerian pictographic symbols were gradually transformed into cuneiform symbols, reducing the number of pictographic symbols to a smaller set of cuneiform ones. This development culminated in the birth of Akkadian cuneiform writing, the mother of the Babylonian and Assyrian writings. Cuneiform writing takes the form of wedge symbols in horizontal, oblique, or vertical directions, produced by pressing the symbols into clay or carving them into stone. The world's museums, including the Iraqi Museum, hold many such writings on clay or stone tablets; because these symbols are difficult to interpret and interpreters are few, the need arose to solve this problem.

Accordingly, this research presents a proposed system for recognizing these symbols in images of cuneiform tablets using pattern recognition techniques, specifically optical character recognition, by means of the proposed algorithms. Algorithms are also proposed for enhancing and standardizing the image: removing spots and writing lines, features whose presence varies from one image to another and which must therefore be removed to obtain a unified form before recognition, using image morphology and distance-transform techniques in a newly proposed approach. The research further proposes a training image set in the form of hypothetical triangular shapes that reflect all the possible patterns formed by the reflection of light on the three-dimensional cuneiform symbol (a factor adopted here for the first time), distributed over four main patterns forming seventeen classes, each differing from the others in direction.

The research compares the recognition accuracy of cuneiform characters when the feature vector is built, for the first time, using the polygon approximation technique, against traditional feature-vector construction methods: Hu and Zernike moments, histogram projection, and elliptic Fourier descriptors. More than one classifier was adopted to increase the reliability of deciding which feature extraction method is best: the probabilistic neural network (PNN) and the support vector machine (SVM). After running tests on the test image set, the highest recognition rate (95%) was obtained when polygon approximation was adopted as the feature extractor with the PNN classifier, while the other methods gave lower rates with the aforementioned classifiers. The research also proposes three algorithms for removing spots and cuneiform writing lines to reach a unified form before recognition: spot removal achieved a reliably high accuracy of 95%, while for writing-line removal the second algorithm outperformed the first with an accuracy of 92%, and is additionally preferred because it does not depend on a predefined threshold. The last aspect presented in this research is a proposed algorithm for solving the problem of one cuneiform character's features matching those of another.
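The polygonal-approximation feature extractor referred to above belongs to the family of dominant-point algorithms surveyed in [Kal13]. A common member of that family is the Ramer-Douglas-Peucker split scheme, sketched below as a generic illustration; the thesis's exact approximation algorithm and tolerance are not reproduced here:

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def rdp(points, eps):
    """Ramer-Douglas-Peucker: approximate a curve by a polygon whose
    vertices deviate from the original points by at most eps."""
    if len(points) < 3:
        return list(points)
    # split at the point farthest from the chord joining the endpoints
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]      # the chord fits well enough
    left = rdp(points[:idx + 1], eps)
    right = rdp(points[idx:], eps)
    return left[:-1] + right                # merge, dropping the shared vertex

# An L-shaped contour collapses to its three true corners:
corners = rdp([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], eps=0.1)
print(corners)  # [(0, 0), (2, 0), (2, 2)]
```

The surviving vertices (dominant points) give a compact description of a symbol's boundary, which is the property that makes this family attractive as a feature extractor for wedge-shaped symbols.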

Ministry of Higher Education and Scientific Research
University of Technology
Department of Computer Science

Cuneiform Symbols Recognition Using Pattern Recognition Techniques

A thesis submitted to the Department of Computer Science of the University of Technology in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science

By
Ali Adel Saeed Al-Temimi

Supervised by
Prof. Dr. Abdul Monem S. Rahma
Asst. Prof. Dr. Abdul Mohssen J. Abdul Hossen

2018 A.D.    1439 A.H.