Intelligent segmentation of industrial radiographic images using neural networks

Shaun W. Lawson and Graham A. Parker
Mechatronic Systems and Robotics Research Group, Department of Mechanical Engineering, University of Surrey, Guildford, Surrey, GU2 5XH, United Kingdom.

ABSTRACT

An application of machine vision, incorporating neural networks, which aims to fully automate real-time radiographic inspection in welding processes is described. The current methodology adopted comprises two distinct stages - the segmentation of the weld from the background content of the radiographic image, and the segmentation of suspect defect areas inside the weld region itself. In the first stage, a backpropagation neural network has been employed to adaptively and accurately segment the weld region from a given image. The training of the network is achieved with a single image showing a typical weld in the run which is to be inspected, coupled with a very simple schematic weld 'template'. The second processing stage utilises a further backpropagation network which is trained on a test set of image data previously segmented by a conventional adaptive threshold method. It is shown that the two techniques can be combined to fully segment radiographic weld images.

Keywords: automated radiographic inspection, backpropagation neural networks, image segmentation, defect detection.

1. INTRODUCTION

Radiographic inspection is widely used in industry to non-destructively examine all types of welded joints for possible flaws or defects such as cracks, porosity, lack of fusion and solid inclusions. Increasingly, real time radiography (RTR) systems, featuring the use of video cameras, are being used which allow for on-line inspection of critical weldments, for example in lay barge pipe construction, where it is imperative that any defects are located as soon as possible after the weld bead is laid. This enables any faults that are present to be quickly rectified without serious capital loss. Such real time systems have also enabled the concept of on-line digital radiography to become a reality, where image processing can be used to enhance image quality to aid the manual inspection procedure.
The current advances in computer processing power and the advent of relatively inexpensive dedicated video image processors will result in the increased speed and sophistication, but reduced cost, of such systems. In addition to the relatively simple operation of image enhancement, sophisticated image analysis for digital radiography is a widely studied research field, with much recent research being directed at fully automated inspection systems 1-5. Such systems aim to eliminate the variability in inspection cycles in which human observers are currently used. In such cases the outcome of the manual inspection of a component is subject to factors such as operator fatigue and experience, poor radiographic quality and so on. The majority of published work in fully automated inspection has concentrated on defect detection, which is clearly the critical stage of any inspection system. Once potential defects have been located, it is also clear that shape analysis techniques, coupled with pattern recognition, are potentially highly successful in recognising different defect types 6. This paper details a novel approach to two of the stages required in an automated weld inspection scheme - weld localisation and defect detection. The approach used in each of these tasks is based on a multi-layer artificial neural network (ANN) trained as an image segmentation tool. Each of the two tasks can be considered as a pixel classification problem - the class of the pixel depending on its value and that of its neighbourhood region. Section

2 of the paper gives an overview of image segmentation based on neural techniques, and a brief description of multi-layer backpropagation networks. Section 3 describes the development of a network to solve the problem of weld segmentation in radiographic images. Section 4 details the development of a second ANN which is trained to detect potential defective areas within the weld region of the image. An example of the combined use of the two networks for full image segmentation is given in section 5. Finally the performance of the networks is discussed in the context of an on-line automated radiographic inspection system, along with future developments to the work.

2. NEURAL NETWORKS IN IMAGE SEGMENTATION

Artificial neural networks, as pattern classifiers, have been described at great length in the literature. When it is remembered also that an image segmentation operation can be regarded as a pixel classification procedure, then it is not surprising that neural networks have also been used to discern between various objects and regions within a scene. Waard7 describes a backpropagation network trained to classify pixels in a text segmentation application. Similarly Yan and Wu8 successfully use a multi-layer perceptron network to segment text and line features from coloured digital images of maps, whilst Silverman and Noetzel9 describe the use of a 4-layer backpropagation network trained to locate tumors in medical ultrasound images.

Fig(1) - typical 3-layer multi-layer perceptron (MLP) neural network architecture

A different approach is adopted by many researchers who favour large self organising networks whose dimensions match those of the raw input image itself; thus if the image is 512x512 pixels in size then the input layer of the network will comprise 512x512 processing elements. The procedure then is to adopt some energy minimisation approach to iteratively classify neighbouring like groups of pixels within the image. The advantage of this approach over multi-layer perceptron networks is that the segmentation of an image can be made without previous training. However, it is clear that these approaches all share a similar inherent iterative computation scenario and therefore share a heavy and expensive computational burden. Indeed, simulation of neural networks of the size required in many industrial inspection applications (512x512 elements) is clearly not achievable on many computing platforms. For automated inspection applications therefore the most promising approach appears to be via the route of the relatively simple multi-layer perceptron (MLP) network employed to classify individual pixels as being of one object class or another. The topology of an MLP neural network of the type used in the work presented here is given in fig(1) above (an exhaustive description of multi-layer perceptron artificial neural networks and the backpropagation rule is not within the scope of this paper - a comprehensive explanation is given in Wasserman10 for example). The figure shows a three layer network incorporating a single hidden layer of processing elements (PEs). Generally each PE in one layer has a weighted connection to every PE in the next layer. Each PE performs a summation of its

inputs and passes the result through a transfer function - this is usually a linear function at the input layer and a non-linear sigmoid function at every other layer. Hence if

X_sj is the weighted sum of inputs to the jth PE in layer s, Y_sj is the output of the jth PE in layer s, and w_sjk is the weighting from the kth PE in layer (s-1) to the jth PE in layer s, then

Y_sj = f(X_sj) = f( Σ_k w_sjk . Y_(s-1)k )    (1)

where the sum is over the k PEs in the preceding layer. The transfer function is chosen to be a sigmoid:

f(z) = 1 / (1 + e^(-z))    (2)
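Equations (1) and (2) together define the forward pass of the network. The following is a minimal pure-Python sketch; the layer sizes and weight values are illustrative only, not taken from the paper:

```python
import math

def sigmoid(z):
    # equ(2): f(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, weights):
    """Propagate input vector x through successive layers.
    weights[s][j][k] connects PE k in layer s-1 to PE j in layer s,
    so each layer applies equ(1): Y_sj = f(sum_k w_sjk * Y_(s-1)k)."""
    y = x
    for W in weights:
        y = [sigmoid(sum(w_jk * y_k for w_jk, y_k in zip(row, y)))
             for row in W]
    return y

# toy 3-input, 2-hidden, 1-output network with fixed illustrative weights
W_hidden = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
W_out = [[1.0, -1.0]]
out = forward([0.2, 0.7, 0.1], [W_hidden, W_out])
```

The input layer in the paper's networks uses a linear transfer function, so the raw input vector is passed to the first weighted layer directly, as here.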

The training method used in this work is the backpropagation rule, where initially the weightings on all connections are random. A set of training data is used to adaptively modify the weights of the network so that it is able to produce an output pattern which is a satisfactory response to a set of input stimuli. This assumes that prior classification of the input data is available - thus this type of network training is often termed 'supervised learning'. During the training stage the backpropagation rule modifies the weightings of the network PEs so as to iteratively reduce the error between the desired output response and the actual output response, given a particular input stimulus. The problem with this approach is deciding which weights in the network to alter and by what magnitude - that is to say, assigning individual blame for the total error at the output to the individual PEs. The backpropagation method solves this problem by assuming that every PE and connection in the network is to some extent to blame for the total network error. The modification of the weights is performed backwards through the network, hence the term 'backpropagation'. Firstly the individual errors at each of the output PEs are calculated and the connected weights adjusted accordingly. Next the errors at the preceding layer are calculated, and so on until the input layer is reached. The modification of the weighting at each of the PEs in the network is based on a gradient descent rule and is given by

w_sjk(t+1) = w_sjk(t) + lcoeff . e_sj . Y_(s-1)k    (3)

where lcoeff is a learning coefficient and e_sj is the local error associated with the PE. For PEs in the output layer

e_sj = Y_sj . (1 - Y_sj) . (d_sj - Y_sj)    (4)

where d_sj is the desired output of the jth PE in the output layer. For all other PEs

e_sj = Y_sj . (1 - Y_sj) . Σ_n ( e_(s+1)n . w_(s+1)jn )    (5)
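The update rules (3)-(5) can be written directly as scalar functions. The sketch below is a literal transcription; the names and the example values in the usage are illustrative only:

```python
def output_error(y, d):
    # equ(4): e_sj = Y_sj (1 - Y_sj)(d_sj - Y_sj), output-layer PEs
    return y * (1.0 - y) * (d - y)

def hidden_error(y, next_errors, next_weights_j):
    # equ(5): e_sj = Y_sj (1 - Y_sj) * sum_n e_(s+1)n * w_(s+1)jn,
    # where next_weights_j holds the weights from this PE (index j)
    # into each PE of layer s+1
    return y * (1.0 - y) * sum(e * w for e, w in zip(next_errors, next_weights_j))

def update_weight(w, lcoeff, e_sj, y_prev):
    # equ(3): w_sjk(t+1) = w_sjk(t) + lcoeff * e_sj * Y_(s-1)k
    return w + lcoeff * e_sj * y_prev
```

Applying these in sequence from the output layer back to the input layer constitutes one backpropagation step for one training sample.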

where the sum is over the number of PEs in the (s+1)th layer. When a set of input stimuli is applied to the input layer of a trained network the resultant outputs of the interconnected PEs are propagated layer by layer through to the final output layer using equ(1). During training the internal nodes of the network architecture form a structured representation of the input pattern classes - this can be thought of as the knowledge base of the system. A trained multi-layer network is able to generalise about different classes of object even if the input vector used does not linearly discern between the classes. Each output PE is usually trained to signify a particular class - thus the 'winner takes all' scenario is the simplest method of selecting the output class, where the output node with the largest value is taken as the resultant classification. In the application of defect detection a network can be configured to have only a single output - registering a maximum when the classification is a defect and a minimum when non-defect. In the case of larger scale region segmentation, such as the weld localisation problem, again a single output could register object or non-object. It is apparent that the response of such networks is triggered at the input by a combination of the pixel and its

immediate neighbouring population. If we can also generalise about the object position in a given image then the relative location of the pixel under classification may also be used as an input stimulus. Such a descriptor is used in the weld localisation neural network which is described in the following section.

3. LOCALISATION OF THE WELD REGION

The task of accurately locating the weld bead within the radiographic image is essential in subsequent stages of processing in order (a) to substantially reduce the volume of pixel data to be subsequently processed (we are not generally interested in defects outside of the weld region), and (b) to produce an objective description of the weld profile and shape - this in itself is a critical inspection task. It is interesting to note the relative lack of published work on the localisation of welds in automated radiographic inspection applications. Koshimizu and Yoshida11 describe an approach based on simple edge detection in the 'expected' area of the weld region. For many applications this is unsuitable since the component is likely to change orientation and position during the inspection procedure. Furthermore, if the component being welded is of complex geometry then simple edge detection of the weld boundaries may prove insufficient. Additional problems arise if the weld image contains significant amounts of noise or if the transition between weld and base material is somewhat diffuse. For these reasons a novel technique of adaptively recognising the weld region has been developed which is based on a multi-layer artificial neural network (ANN). The development of this network is described in this section.

Fig(2) - (a) typical real time radiographic image of pipe weld, (b) manual interpretation of the weld region, (c) template image, showing weld region and 'don't know' areas, used to train the neural network.

A typical digital radiographic image of a pipe weld acquired using a real time radiography system is shown in fig(2a). In this work we assume that the weld region is always oriented in a horizontal left to right position, roughly located near the centre of the image. Each radiographic image is digitised to 512 x 512 pixels x 8 bits. We also assume that the x-ray image behaves as in film radiography, where low density regions, such as cavities, appear darker than their corresponding backgrounds. Hence the non-dressed weld region, which is typically thicker than the surrounding host material, will appear much lighter than the surrounding parent metal. Typical images may contain other objects as well as the weld and parent material - the image shown in fig(2a) has two Image Quality Indicators (IQIs) towards the bottom of the picture. These are used by human inspectors to measure the performance of the radiographic process. A typical human response when presented with a weld image would be first to look at the overall grey level, or intensity, distribution, recognise the light coloured area as the weld and, if necessary, label any given pixel in the image as either weld or non-weld. The main ambiguity in this procedure arises in the transition region, where it is a subjective decision as to where the weld starts and the base material stops and vice-versa. The confidence of the pixel classification is quite clearly based on the position of the pixel within the image - fig(2b) illustrates a schematic of the regions likely to be labelled by a human observer given the example weld in fig(2a).
Such an observer is able to label pixels in the transition area based only on their intuition and experience, coupled with the local area information contained in the grey levels in that region - this is a subjective operation which is open to varying interpretation.

3.1 Training data

Clearly if a manual segmentation of a weld image were to be used as the training data for a neural network then any errors in the segmentation would be unavoidably incorporated into the network's ability as a classifier. Secondly, the training of a network had to be based on real image data, since simulation of an entire weld image was not only impractical but liable to be misleading. Therefore some means had to be devised to classify typical image data by hand with 100% confidence. The technique used was based on a simple 'template' image used to label pixels in a training image as one of three classes: weld (white), non-weld (grey), and don't know (black). The template generated for the image in fig(2a) is shown in fig(2c). Training data could then be generated from the original and the 'segmented' image using only definitely labelled pixels - the 'don't know' areas were ignored at the training stage, with the prediction that the neural network would generalise about those areas when presented with test data.

3.2 Network topology and training

The problem of the segmentation was approached as a pixel classification problem, with the input to the classifier arranged as a feature vector describing the 'attributes' of a given pixel. The input parameters were chosen so as to try and emulate the response of the human observer; therefore the simplest input vector consisted of only three inputs: the pixel intensity, I, at position (i,j), and the image coordinates i and j. However, to compensate for noise spikes and to give greater general flexibility, a combination of the pixel and its 8 immediate neighbours was used. The feature vector used as an input to the network eventually comprised 6 elements: horizontal and vertical image position, local mean, median, maximum pixel value, and actual pixel value. A three layer perceptron was used for the network architecture.
A single hidden layer comprising 5 processing elements was chosen, with a single output element, the desired output being a binary state - pixels that are non-weld (labelled 0), and pixels that are weld (labelled 1). The output could then be scaled to represent values from 0-255 for visual evaluation of the segmentation.
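Assuming the local statistics are taken over the pixel and its 8 immediate neighbours (a 3x3 window), the 6-element feature vector described above might be computed as follows; the function name and the ordering of the elements are assumptions for illustration:

```python
import statistics

def feature_vector(image, i, j):
    """Build the six inputs used by the weld-localisation network:
    column i, row j, local mean, median and maximum over the 3x3
    neighbourhood, and the centre pixel value itself.
    image is a list of rows of grey levels; (i, j) must not lie on
    the image border."""
    window = [image[y][x] for y in (j - 1, j, j + 1)
                          for x in (i - 1, i, i + 1)]
    return [i, j,
            sum(window) / 9.0,        # local mean
            statistics.median(window),  # local median
            max(window),                # local maximum
            image[j][i]]                # actual pixel value
```

The two position elements let the network exploit the assumption that the weld runs horizontally near the centre of the image.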

Fig(3) - automatic segmentation of image shown in fig(2) by neural network. (a) after 15,000 training samples (b) after 40,000 training samples.

The network was trained using the backpropagation technique described in section 2 on sample data from three different weld images. Training was forced to use a ratio of 4:1 example background pixels to weld pixels. The network was subjected to a maximum of 40,000 training samples from the three images.

3.3 Results on training and test images

Once trained the network could be applied to any weld image. By experimentation it was found that the following sequence of operations segmented the weld cleanly from the background in the majority of images:

a) Perform a (5x5) low pass filter over the entire image so as to reduce the image noise.
b) Pass the smoothed image through the trained ANN.
c) Threshold the segmentation at grey level 128.

The result of the application of the neural network to the training image shown in fig(2) is given in fig(3). Fig(3a) shows the segmentation after training to 15,000 samples - at this stage an IQI was still partially labelled as weld region. Also the weld area is expanded near to the edges of the image - this is most likely due to the 'blooming' effect suffered by real time radiography systems 12. If the network was trained to 40,000 samples then the segmentation (shown in fig(3b)) was much more accurate. Figs(4) and (5) show further segmentation examples on

images not shown to the network during the training stage. Fig(4a) shows a weld which is fairly well defined but exhibits a curious gap near the left of the image. The segmentation result is shown in fig(4b) with the gap, and the weld clearly labelled. The image shown in fig(5a) has many IQI artefacts and also a number of cavity-like defects along the centre of the weld bead. The segmentation in fig(5b) successfully ignores the IQI's and also, correctly, labels the defects as part of the weld structure.
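The three-step sequence of section 3.3 can be sketched as below, with the trained network abstracted behind a classify callback returning a grey level in 0-255; the callback interface and border handling are assumptions for illustration:

```python
def low_pass_5x5(image):
    """Step (a): 5x5 mean filter; border pixels are left unfiltered
    for brevity (how borders were handled is not stated in the paper)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for j in range(2, h - 2):
        for i in range(2, w - 2):
            out[j][i] = sum(image[j + dj][i + di]
                            for dj in range(-2, 3)
                            for di in range(-2, 3)) / 25.0
    return out

def segment(image, classify):
    """Steps (b) and (c): run the trained classifier over the smoothed
    image and threshold its scaled output at grey level 128."""
    smoothed = low_pass_5x5(image)
    h, w = len(image), len(image[0])
    return [[255 if classify(smoothed, i, j) >= 128 else 0
             for i in range(w)] for j in range(h)]
```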

Fig(4) - segmentation of image not used during training by neural network. (a) original image - note gap in weld bead (b) segmentation result.

Fig(5) - segmentation of further example image not used during training. (a) original image with numerous image quality indicators (IQIs) (b) segmentation result.

4. DEFECT DETECTION

As already stated in section 1, defect detection, or flaw segmentation, is arguably the most critical step in an automated radiographic inspection system. As such, many researchers have concentrated their efforts in this area, some adopting generic adaptive thresholding schemes developed from other applications - Jain and Dubuisson13 for example used a modified version of Yanowitz and Bruckstein's method14 of fitting a threshold surface to an edge detected version of the original image, with impressive results. This approach however contains an iterative surface fitting procedure which is very computationally expensive and which severely limits its application in an on-line inspection process. Kehoe15 describes another generic, but much faster, approach based on scanning a 'window' over the image. In this method the variance and gradient properties in the window are calculated and a decision on the classification of the centre pixel as being defect or non-defect is made based on these properties. In this way, the technique searches the image for 'unusual' pixel areas - thus it is liable to produce false alarms around component edges and also around defects themselves. The advantage of both Jain and Dubuisson's and Kehoe's approaches is that they do not rely on any a priori knowledge of the characteristic appearance of a defect and thus they can adaptively locate any possible defective area regardless of its size and shape. The by-product of doing this however is that both techniques are liable to produce false alarms when presented with innocuous, though 'unusual', areas such as component structures and weld edges. Other researchers have avoided this approach and adopted methods based on prior knowledge of the component or defect structure.
Gayer et al3 describe several methods of defect detection including image

background subtraction - this is a frequently used technique in image segmentation which assumes that the varying background of the image can be approximated in some way. The background model can either be generated beforehand using prior knowledge of the image content, or more accurately using the actual image under analysis. Accurate generation of background images by either method is a computationally non-trivial affair, involving for example least squares or spline approximations to the varying two dimensional image surface when approximating the current image, or radiographic image formation modelling for the prior modelling of the image content. Gayer et al also describe a template matching approach where a series of 'defect templates' is used to try and match defective areas within the image - this works well for the defect types which are included in the template set but will clearly fail when presented with defects of a type or shape not represented. The main problem therefore in the development of an optimal defect detection scheme appears to be in achieving the balance between speed of execution and accuracy of the segmentation. This section describes the development of a neural network based technique for adaptive defect detection in radiographic images. The segmentation approach used is based on the method originally proposed by Silverman and Noetzel9 for segmentation of medical ultrasound images.

Fig(6) - two of the image pairs used to train the defect detection neural network. Shown in each case is the original image and the output from Kehoe's adaptive threshold operator (desired output). (a) cavity (b) transverse cracking.

4.1 Training data

Since supervised learning by backpropagation was to be used, once again a means of accurately segmenting a selection of training images had to be devised. Since defect detection is often a very subtle and subjective procedure, it is difficult to manually label the exact extent of flaw areas in a radiographic image. Therefore, another means of automated defect detection was used to classify the training data - the operator used was Kehoe's adaptive threshold method15. Two of the images used in the training of the network are shown in fig(6) along with their respective segmentation by the adaptive threshold method. This segmentation was achieved using a window size of 15x15 pixels. A further 3 training images were used, each of size 128x128 pixels and containing different defect examples from different images. In each case the desired output was generated by the adaptive threshold method.

4.2 Network topology and training

The input layer to the network comprises the n^2 grey level values to be found in an n x n sub-image centred on the current pixel under scrutiny. The sub-image is moved across the image in a raster fashion, centred on every pixel in turn. Thus a classification for each pixel in the entire image is achieved. Initially the network was trained with a 15x15 input window size in order to match the performance of the adaptive threshold operator. In order to reduce execution time this was systematically reduced and it was found that no degradation of performance occurred until the window size was less than 9x9 pixels. Experiments with smaller sub-images (3x3, 5x5, 7x7) resulted in the network not converging at the training stage.
Following the guidelines of Silverman and Noetzel, two hidden layers, each of 10 processing elements, were used. A single output element indicated the classification of the pixel: ideally either a '0' for background, or a '1' for a defect.
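The raster-scanned n x n input described above can be sketched as a generator yielding one flattened input vector per interior pixel; the helper name and the decision to skip border pixels (where the window does not fit) are assumptions for illustration:

```python
def window_inputs(image, n=9):
    """Raster-scan an n x n sub-image over every pixel it fits around,
    yielding (i, j, flattened n*n grey-level vector) triples to feed
    the defect-detection network.  image is a list of rows."""
    h, w, r = len(image), len(image[0]), n // 2
    for j in range(r, h - r):
        for i in range(r, w - r):
            yield i, j, [image[j + dj][i + di]
                         for dj in range(-r, r + 1)
                         for di in range(-r, r + 1)]
```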

Fig(7) - segmentation of training images shown in fig(6) by fully trained neural network. Shown in each case is the raw output from the network and the final thresholded segmentation.

4.3 Performance of segmentation

The optimum network performance was achieved using 50,000 random sets of inputs generated from the training images. The performance of the trained network was then tested on the training images - two examples of this (the images shown in fig(6)) are given in fig(7). It should be noted that the raw output (varying from 0.0 to 1.0) has been scaled to show grey levels varying from 0 to 255. The final result of the segmentation is achieved by thresholding the scaled output according to the following rule:

if (255 - scaled_output) > (original pixel value) then pixel is a defect (label white)
else pixel is non-defect (label black)
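The thresholding rule above translates directly into code; both arguments are assumed to be lists of rows of 0-255 grey levels, and the function name is illustrative:

```python
def label_defects(original, scaled_output):
    """Final thresholding rule of section 4.3: a pixel is labelled a
    defect (white, 255) when (255 - scaled_output) exceeds the
    original pixel value, otherwise non-defect (black, 0)."""
    h, w = len(original), len(original[0])
    return [[255 if (255 - scaled_output[j][i]) > original[j][i] else 0
             for i in range(w)] for j in range(h)]
```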

Comparing the results of the segmentation with the output of Kehoe's technique reveals that the detection of the defects by the network actually appears to be improved over its supervisor. The boundaries of the two definite cavities in fig(7a) appear much smoother than in fig(6a), as such defects should appear. The segmentation of the transverse cracks in fig(7b) appears slightly more erratic though the defects are at least detected. Some examples of testing the trained network on 'new' images are given in fig(8) alongside the performance of Kehoe's adaptive threshold operator and also an implementation of Yanowitz & Bruckstein's (YB) technique14. It should be noted that a fast, though not entirely satisfactory, surface fitting technique has been used in the implementation of the latter method so the performance is not optimum. However, in terms of speed of execution, this implementation is comparable with that of the neural network. Fig(8a) shows the results of the three segmentation methods on an image containing an area of porosity, or small cavities (air holes). All three methods locate the defects - however the YB method seems to 'shrink' the air holes, whilst Kehoe's technique produces some spurious signals around the weld edge. The neural network appears to perform the best segmentation in this case. Another obvious defect example is shown in fig(8b) - which illustrates an example of lack of weld penetration in the metal joint. In this example all three techniques appear to locate the defect to a high degree. Fig(8c) shows an example of very severe cracking in the weld region. Kehoe's method locates the main components of the cracking but also seems to produce some false signals whilst the YB method seems to actually miss some of the defect areas. Once more the neural network appears to produce the 'cleanest' and most accurate segmentation result. The last two examples in fig(8) show more subtle defects which are difficult to detect.
Indeed, in the first case, a transverse crack in fig(8d), the YB method fails to locate the defect at all. Kehoe's method fares better but once more produces some false signals around the weld edge. The neural network on the other hand successfully segments the defect. Finally, fig(8e) shows a longitudinal crack. Once again the YB method does not detect the defect area to any degree. Kehoe's method does detect it (again with false alarms around the weld boundary), and to some extent so does the neural network, though without any false alarms. This is the only case in the five examples where it may be argued that the neural network approach is outperformed by one of the other two techniques.

Fig(8) image grid - columns: original image, Kehoe's adaptive threshold, Yanowitz & Bruckstein's method, neural network; rows: (a) porosity, (b) lack of weld penetration, (c) severe cracking, (d) faint transverse crack, (e) faint longitudinal crack.

Fig(8) - performance of the trained neural network on 5 different defect examples in radioscopic weld images. Comparison is made with the original image, the result of Kehoe's adaptive threshold operator and also an implementation of Yanowitz and Bruckstein's method.

5. FULL RADIOGRAPHIC IMAGE SEGMENTATION

The two techniques described respectively in sections 3 and 4 can be combined to fully segment a given radiographic weld image into the weld region and any internal defects. Fig(9) illustrates an example of this where a portion of the image shown in fig(5) has been segmented first using the weld localisation network (fig(9b)) and then the defect detection network (fig(9c)). Any 'defect' areas located outside of the weld region can hence be discarded, with the final segmentation shown in fig(9d), onto which the located boundaries of the weld and parent metal have been superimposed. Clearly in an on-line situation the defect detection procedure will only be applied to the area found by the weld localisation method so as to substantially reduce computation time.
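The discarding step amounts to intersecting the two binary masks: a defect pixel is only reported where the weld localisation network also fired. A minimal sketch, assuming both masks use 255 for a positive classification (the function name is illustrative):

```python
def combine(weld_mask, defect_mask):
    """Keep only defects inside the weld region (section 5): a pixel
    is reported as a defect only where both the weld-localisation and
    defect-detection masks are positive (255)."""
    h, w = len(weld_mask), len(weld_mask[0])
    return [[255 if weld_mask[j][i] == 255 and defect_mask[j][i] == 255
             else 0
             for i in range(w)] for j in range(h)]
```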

Fig(9) - combination of weld localisation and defect detection results. (a) original image (b) weld segmentation (c) defect detection (d) combination of (b) and (c) showing defects in weld region and weld/metal boundaries.

6. CONCLUSIONS AND FURTHER WORK

Two applications of neural networks to the automated segmentation of radiographic images have been described. The first, a multi-layer perceptron backpropagation network, has been trained to locate the weld region based on the grey level and spatial structure of a given image. The second, again a multi-layer perceptron, has been trained using the labelling of a conventional adaptive threshold technique to detect suspect defect areas within the image. It has been shown that the combination of the two networks is potentially successful in fully segmenting radiographic weld images. The two techniques have been developed as part of on-going research work aiming to fully automate an on-line weld inspection procedure using a real time radiography system for image acquisition. The research is ultimately aimed at the automation of 'in-progress' weld inspection, rather than inspection after the welding process is complete. For this reason the speed of inspection must aim to be comparable with the speed of the welding. Clearly the computation time of any algorithms developed for such a system is a critical factor in their performance. In both of the techniques described in this work the neural network must undergo a lengthy training period. However this is an off-line procedure, and once the networks are trained their performance is only measured by the propagation of data through their architecture. The weld localisation network has only three layers and a relatively small input vector. The elements of the input vector (mean, median etc.) however require calculation for each pixel in the image.
The current execution time for this network on a full 512x512 image is in the order of 20 seconds on a Sun SPARC 10 workstation. The defect detection network has, by comparison, a large input vector and 2 internal layers. The elements of the input vector in this case are raw pixel values. The current execution time for this network is in the region of 60 seconds for a 512x512 image. It should be remembered that the data input to a defect detection scheme will be substantially reduced by first localising the weld region. However, in both cases, work is underway to try to improve the speed of execution of the networks. This involves the experimental 'pruning' of the network architecture to try and obtain some speed increase by reducing the number of processing elements, or the input vector size. The requirement for speed improvement must also be balanced with the success of the segmentation. In the case of the defect detection network, further work will investigate the use of a different labelling scheme at the training stage. Kehoe's adaptive threshold does not give a highly accurate segmentation of most defects and it is likely that any network trained by this technique will also suffer inherent segmentation inaccuracies. Therefore work is underway

to develop a more satisfactory way of labelling the training data, including optimised variations on the Yanowitz and Bruckstein method, and also unsupervised learning networks.

7. ACKNOWLEDGEMENTS

The authors wish to acknowledge the funding of this work by the Commission of the European Communities under the Brite-EuRam II (Industrial & Materials Technologies) shared cost project scheme (project number: BRE2-0319). In addition the authors wish to thank two of the partners of this project, the Institut de Soudure and Isotopen Technik Dr. Sauerwein GmbH, for kindly providing image data for the work.

8. REFERENCES

1. H. Boerner and H. Strecker, "Automated x-ray inspection of aluminium castings", IEEE Trans. on Pattern Anal. & Machine Intelligence, PAMI-10(1), 1988, pp. 79-91.
2. K. Demandt and L.K. Hansen, "Real-time x-ray system with fully automated defect detection and quality classification", X-ray Real Time Radiography and Image Processing, Proc. of Symposium, ed. by R. Halmshaw, Newbury, Berkshire, 1988, pp. 96-119.
3. A. Gayer, A. Sayer and A. Shiloh, "Automatic recognition of welding defects in real time radiography", NDT International, vol. 23(3), 1990, pp. 131-136.
4. J.H. Builtjes, P. Rose and W. Daum, "Automatic evaluation of weld radiographs by digital image processing", X-ray Real Time Radiography and Image Processing, Proc. of Symposium, ed. by R. Halmshaw, Newbury, Berkshire, 1988, pp. 63-72.
5. J.J. Munro et al, "Weld inspection by real time radioscopy", Materials Evaluation, vol. 45(11), 1987, pp. 1303-1309.
6. A. Kehoe and G.A. Parker, "An intelligent knowledge based approach for the automated radiographic inspection of castings", NDT & E International, vol. 25(1), 1992, pp. 23-36.
7. W.P. de Waard, "Neural techniques and postal code detection", Pattern Recognition Letters, 15, 1994, pp. 199-205.
8. H. Yan and J. Wu, "Character and line extraction from color map images using a multi layer neural network", Pattern Recognition Letters, 15, 1994, pp. 97-103.
9. R.H. Silverman and A.S. Noetzel, "Image processing and pattern recognition in ultrasonograms by backpropagation", Neural Networks, vol. 3, 1990, pp. 593-603.
10. P.D. Wasserman, Neural Computing: Theory and Practice, Van Nostrand Reinhold, New York, 1989.
11. H. Koshimizu and T. Yoshida, "A method for visual inspection of welding by means of image processing of x-ray photograph", Trans. of IECE of Japan, vol. E-66(11), 1983, pp. 641-648.
12. D. Throup, "Improving images from real time radioscopic sources", X-ray Real Time Radiography and Image Processing, Proc. of Symposium, ed. by R. Halmshaw, Newbury, Berkshire, 1988, pp. 87-95.
13. A.K. Jain and M-P. Dubuisson, "Segmentation of x-ray and c-scan images of fiber reinforced composite materials", Pattern Recognition, 25, 1992, pp. 257-269.
14. S.D. Yanowitz and A.M. Bruckstein, "A new method for image segmentation", Computer Vision, Graphics and Image Processing, 46, 1989, pp. 82-95.
15. A. Kehoe, "The detection and evaluation of defects in industrial images", PhD Thesis, University of Surrey, 1990.