Indian Journal of Engineering & Materials Sciences Vol. 14, October 2007, pp. 337-345

Automatic license plate location and recognition

Z X Chen^a*, C Y Liu^b, G Y Wang^a & J G Liu^a

a Huazhong University of Science and Technology, State Education Commission Laboratory for Image Processing & Intelligent Control, Institute of Pattern Recognition & Artificial Intelligence, Wuhan 430074, China
b Wuhan University of Science and Technology, College of Science, Wuhan 430081, China

*For correspondence (E-mail: [email protected])

Received 21 September 2006; accepted 17 September 2007

Automatic license plate recognition (LPR) plays an important role in intelligent transport systems. In this study, a novel method to recognize license plates is presented. The proposed LPR technique consists of three main phases. First, a segmentation phase, which includes the one-dimension cycle clear (1-DCC) method, locates the license plate within the image. Then, a procedure based upon feature projection separates the license plate into seven single characters. Finally, the character recognizer extracts feature points and uses a multi-stage classifier to obtain a robust solution under varying acquisition conditions. Experiments have been conducted for the respective modules. Combining the rates of the individual modules, the overall success rate of the algorithm is 92.5%. The experiments show that the proposed method for LPR is correct and efficient.

Automatic LPR is a necessary capability for the realization of unattended tollgates. A vision system for car identification can also help a human operator and improve the overall quality of a service. Highways, parking areas, bridges and tunnels are places where such a system can be applied. Any situation requiring automatic control of the presence and identity of a vehicle carrying a license plate represents a potential application1-3. If a payment operation is also expected, then the use of a credit card or an automatic cashier might be a solution, but a number of different situations have to be considered, for example, the machine is out of order or the customer does not have the proper money. In such a case, a picture of the car can be taken with a TV camera and then analyzed by a human operator or by a vision system in order to send an invoice to the car owner.

Typically, an LPR process consists of three main stages: (i) locating the license plate, (ii) segmenting the license characters and (iii) recognizing the license characters. In the first stage, license plate candidates are determined based on the features of license plates. The features commonly employed are derived from the license plate format and from the characters constituting the plate. Features regarding the license plate format include shape, symmetry4, height-to-width ratio5,6, colour7,6, texture of grayness8, spatial frequency9, and variance of intensity values10,11.

Character features include line12, blob13, the sign transition of gradient magnitudes, the aspect ratio of characters14, the distribution of intervals between characters15 and the alignment of characters16. In practice, a small set of robust, easy-to-detect and multistage object features is adequate. In this paper a vision system for the recognition of Chinese license plates is presented. The paper mainly addresses the issue of reading the license characters within an acquired image. This system provides a new and efficient way to pay the imposed toll on Chinese highways, by means of a radio link between the motor vehicle and the toll-station.

System Overview

In this section a brief description of Chinese license plates is given. The styles of license plate considered in this study are discussed, followed by a brief description of the proposed LPR process. A Chinese car license plate consists of seven characters in four fields: the first character is the abbreviation of the province; the second character is one of the 26 letters; the third and fourth characters are a mix of letters and numbers; the last three characters are numbers. Character sequence location is therefore required.
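As an illustration of this format (not code from the paper), the following minimal Python sketch checks whether a candidate seven-character string matches the layout described above. The exact character classes, in particular the Unicode range used for the province abbreviation, are assumptions.

```python
import re

# Hypothetical plate-format check based on the description above:
#   field 1: one Chinese character (province abbreviation)
#   field 2: one of the 26 letters
#   fields 3-4: each either a letter or a digit
#   field 5-7: three digits
PLATE_PATTERN = re.compile(r'[\u4e00-\u9fff][A-Z][A-Z0-9]{2}[0-9]{3}')

def looks_like_plate(text: str) -> bool:
    """Return True if the string matches the assumed seven-character layout."""
    return PLATE_PATTERN.fullmatch(text) is not None
```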


The presented system has been designed to automatically recognize the characters written on the license plates of motor vehicles. The image acquisition is performed by a CCD TV camera mounted on the framework of a tollgate. The analog image is then converted into a digital signal to be processed later by a computer. A frame grabber is used to obtain the proper data. In order to manage such a difficult recognition task, the system is composed of three modules: the license plate area location module, the character segmentation module and the character recognition module (see Fig. 1).

Module 1: It locates the position of the license plate within the acquired image. The estimate of the image portion containing the license plate is not yet very accurate, because the algorithm does not segment the single characters but only the whole license plate area.

Module 2: In this module, the seven characters are isolated using feature projection.

Module 3: The recognition phase employs a template matching technique in order to find a match for the characters in the license plate. If it fails, a reject message is output.
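The three modules can be viewed as a simple pipeline. The sketch below is an illustration only, not the paper's code: every function name is hypothetical, and the stub bodies stand in for the module implementations described in the following sections.

```python
def locate_plate(image):
    """Module 1 (stub): return the sub-image believed to contain the plate, or None."""
    ...

def segment_characters(plate_region):
    """Module 2 (stub): split the plate region into single-character images."""
    ...

def classify_character(char_image):
    """Module 3 (stub): template matching; return the recognized character or None."""
    ...

def recognize_plate(image):
    """Hypothetical top-level pipeline mirroring the three modules."""
    plate_region = locate_plate(image)
    if plate_region is None:
        return None                      # no plate located
    characters = segment_characters(plate_region)
    labels = [classify_character(c) for c in characters]
    if any(label is None for label in labels):
        return None                      # reject message on a failed match
    return ''.join(labels)
```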

License Plate Location

In this section the technique used to locate the license plate within the input image is presented. As shown in Fig. 1, the technique for automatically detecting the license plate (LP) consists of four steps, which are described in detail below.

Vertical edge detection

If an image consists of objects of interest on a contrasting background, an edge is a transition from background to object or vice versa. The image containing the LP may also include other components of the vehicle front (such as the grille), which have very strong horizontal edges. These edges have a great effect on LP localization.

Fig. 1 — The tasks of the system


Therefore we use Eq. (1) to detect the vertical edge map in order to suppress horizontal noise. The result of vertical edge extraction is shown in Fig. 2. Before vertical edge detection, we use a line filter to smooth the image and apply illumination normalization to reduce the influence of lighting.

g(x, y) = f(x − d, y) − 2 f(x, y) + f(x + d, y),   x ∈ [1, M], y ∈ [1, N]                …(1)

where f(x, y) represents the gray-level image of the input after denoising, g(x, y) represents the vertical edge map, and the image size is M×N. The vertical edges are enhanced by Eq. (1), and d usually takes a value in the range 1–4.
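A minimal NumPy sketch of Eq. (1) on a denoised gray image is given below. This is an illustration, not the paper's code; the zero handling at the image border and the choice of which image axis corresponds to x (here the horizontal axis, so that vertical edges are enhanced) are assumptions.

```python
import numpy as np

def vertical_edge_map(f: np.ndarray, d: int = 2) -> np.ndarray:
    """Sketch of Eq. (1): g(x, y) = f(x - d, y) - 2 f(x, y) + f(x + d, y).

    f: denoised gray image as a 2-D float array.
    d: pixel offset, typically 1-4 as stated in the text.
    """
    f = f.astype(np.float32)
    g = np.zeros_like(f)
    # second difference along the horizontal direction; borders stay zero
    g[:, d:-d] = f[:, :-2 * d] - 2.0 * f[:, d:-d] + f[:, 2 * d:]
    return np.abs(g)  # keep only the edge strength
```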

Horizontal projection

Scanning the image from top to bottom, we measure the fluctuation at each pixel of each row. If the fluctuation is larger than a threshold TH1, we set its value to '1'; otherwise we set it to '0'. Then we compute the sum of '1's in each row; if the sum is larger than a threshold TH2, we

set the row's projection value to '1'; otherwise we set it to '0'. The horizontal projection matrix Hor is thus an M×1 column vector that consists only of '0's and '1's. The projection map is shown in Fig. 3(a).
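A minimal sketch of this projection step follows. It is not the paper's code: taking the per-pixel "fluctuation" to be the vertical edge magnitude of g, and the parameter names th1/th2, are assumptions.

```python
import numpy as np

def horizontal_projection(g: np.ndarray, th1: float, th2: int) -> np.ndarray:
    """Build the M x 1 binary projection vector Hor from the edge map g.

    A pixel is marked '1' when its fluctuation exceeds TH1; a row is marked
    '1' when it contains more than TH2 such pixels.
    """
    fluct = (np.abs(g) > th1).astype(np.uint8)   # per-pixel binarization with TH1
    row_counts = fluct.sum(axis=1)               # number of '1's in each row
    hor = (row_counts > th2).astype(np.uint8)    # per-row binarization with TH2
    return hor                                   # shape (M,), values in {0, 1}
```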

1-DCC

In the full image there are some regions whose texture is similar to that of the LP, but these regions are narrower. This shows up as scattered '1's in the matrix Hor. It is well known that the gray-level fluctuations inside an LP are concentrated rather than scattered (owing to the 10 Arabic digits and 26 letters). To remove this clutter, the one-dimension cycle clear (1-DCC) method is used in this paper. The method counts the length of every run of consecutive '1's in the matrix Hor after scanning it only once, and then clears every run whose length is smaller than a threshold TH3 set according to the LP height.

Step 1: Initialize the access flag array A {Assis(m)=0}, where 0 denotes that position m has not yet been scanned. Initialize stacks S1 and S2, where S1 records coordinates and S2 records the number of consecutive '1's;

Fig. 2 — (a) Original image and (b) vertical edge map

Fig. 3 — (a) The projection map before clearing and (b) the projection map after clearing


Step 2: Scan every element hor(i) of the matrix Hor in turn from top to bottom. If hor(i) = 1 and Assis(i) = 0, we push the coordinate of hor(i) onto stack S1 and add 1 to the count in stack S2; at the same time we set the access flag of this coordinate to 1, i.e., Assis(i)=1;

Step 3: Check the neighbouring elements of this coordinate; if their values are '1' and their access flags are '0', go to Step 2; else go to Step 4;

Step 4: Check the value SUM in stack S2; if SUM is smaller than the threshold TH3, the run of '1's recorded in S1 is cleared (set to '0').
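The stack-based scan above amounts to run-length filtering of the binary vector Hor. The following Python sketch is an illustration rather than the paper's code: it replaces the explicit stacks with a direct run-length scan, and the function and parameter names are assumptions.

```python
import numpy as np

def one_dimensional_cycle_clear(hor: np.ndarray, th3: int) -> np.ndarray:
    """1-DCC sketch: remove runs of consecutive '1's in Hor shorter than TH3.

    hor: binary vector of shape (M,) produced by the horizontal projection.
    th3: minimum run length to keep, chosen according to the LP height.
    """
    cleaned = hor.copy()
    i, m = 0, len(hor)
    while i < m:                       # single pass over Hor
        if cleaned[i] == 1:
            start = i
            while i < m and cleaned[i] == 1:
                i += 1                 # grow the current run of '1's
            if i - start < th3:
                cleaned[start:i] = 0   # run too short: clear it
        else:
            i += 1
    return cleaned
```

Applying this to the projection vector produced in the previous step gives a cleaned map of the kind shown in Fig. 3(b).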