2010 International Conference on Artificial Intelligence and Computational Intelligence

An Accurate Shadow Removal Method for Vehicle Tracking

Zhou Zhu
School of Transportation, Southeast University, Nanjing, China
[email protected]

Xiaobo Lu
School of Automation, Southeast University, Nanjing, China
Corresponding author: [email protected]

Abstract—This paper proposes an accurate shadow removal method for vehicle tracking. First, we detect and remove shadow using optical gain based gradient analysis; in this process, some parts of the vehicle that are similar to the shadow region in color space may be detected as shadow, and removing these parts leaves holes in the vehicle region. We then fill these holes using the vehicle's skeleton information. Finally, the shadow edge is eliminated by geometric scanning. Experimental results show that shadow can be removed precisely using the proposed method.

Keywords - Shadow removal, Shadow detection, Background subtraction, Vehicle tracking, Skeleton

I. INTRODUCTION

With the development of the social economy, video monitoring is used more and more widely in transportation information collection. Video-based vehicle tracking is an important technology for collecting traffic parameters and detecting traffic accidents, and one of its main challenges is shadow detection and removal. A shadow may be erroneously detected as part of the vehicle region it is adjacent to, which affects vehicle tracking in two ways: first, it makes the vehicle's shape and trajectory inaccurate; second, the shadow may connect two adjacent vehicles so that they are detected as one vehicle. It is therefore imperative to remove shadow accurately for stable vehicle tracking.

The shadow region differs from the vehicle region in color, texture, gray level, location and other properties. These differences can be used to detect and remove the shadow region, and many shadow removal methods already exist, which may be broadly classified as model-based and color-based. An illumination model assuming parallel incoming light and a constant illumination direction is used to detect shadow edges of the vehicle in [1]. A simplified 2D vehicle/shadow model of six types projected onto a 2D image plane is used in [2], and a 3D projection model is used in [3]. These model-based methods can remove shadow effectively in limited environments. In contrast, color-based methods can handle more conditions: color information in the HSV color space [4] and the RGB color space [5] is used to detect shadow, and an optical gain based region growing method is used in [6]. Though these methods can remove shadow precisely in most cases, many of them face a challenging problem: some parts of the vehicle (for instance, the vehicle window) that are similar to the shadow region in color space may be detected and removed as shadow, leaving holes in the vehicle region. To handle this problem, this paper presents a novel method that extracts the vehicle's skeleton and uses it to fill the holes left in the vehicle region.

In the next section, we detect the vehicle and shadow area together using the background subtraction method. In Section 3, the optical gain based gradient analysis method is used to detect and remove shadow preliminarily. In Section 4 we first extract the skeleton of the vehicle and use it to fill the holes remaining in the vehicle, and then eliminate the shadow edge by geometric scanning. Experimental results are shown in Section 5. Finally, Section 6 outlines the conclusion. Fig. 1 shows the flowchart of the proposed method.


Figure 1. Flowchart of the proposed method: background subtraction, vehicle detection, preliminary shadow detection and removal, hole filling using skeleton information, and shadow edge removal.

II. VEHICLE DETECTION

Before shadow detection and removal, it is necessary to segment vehicles from the background image. We use the background subtraction method to do this; at the same time, the shadow regions are also detected, because the difference between their gray level and that of the background image often exceeds the segmentation threshold.


The background image used for detecting vehicles is obtained by the averaging method shown in (1):

$$B(i,j) = \frac{1}{N}\sum_{k=1}^{N} H_k(i,j) \qquad (1)$$

where $H_k$ is the k-th image used for background initialization and $N$ is the number of images used.
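As a rough illustration of (1) (not code from the paper), the following Python sketch averages N grayscale frames to initialize the background; the frame list and its loading are assumed to exist.

```python
import numpy as np

def initialize_background(frames):
    """Average N grayscale frames, as in (1): B(i,j) = (1/N) * sum_k H_k(i,j)."""
    # frames: a list of HxW uint8 grayscale images (assumed already loaded)
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)  # the background estimate B
```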


Figure 2. The extracted background image


Fig. 2 shows the extracted background image; this background image is updated using the method proposed in [7]:

$$B_{k+1}(i,j) = \begin{cases} B_k(i,j) + M & \text{if } I_k(i,j) > B_k(i,j) \\ B_k(i,j) - M & \text{if } I_k(i,j) < B_k(i,j) \end{cases} \qquad (2)$$

where $B_k$ is the k-th background image, $I_k$ is the k-th image needing shadow removal, and $M$ is a modification factor reported to be 1 or 2 in [7].
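A minimal sketch of the update rule (2), assuming grayscale arrays stored as floats; M = 1 follows the value reported in [7], and the function name is ours.

```python
import numpy as np

def update_background(B_k, I_k, M=1.0):
    """Move the background toward the current frame by +/- M per pixel, as in (2)."""
    B_next = B_k.copy()
    B_next[I_k > B_k] += M   # background darker than the frame: brighten by M
    B_next[I_k < B_k] -= M   # background brighter than the frame: darken by M
    return B_next            # pixels with I_k == B_k are left unchanged, as (2) leaves this case open
```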

Figure 3. Current image

Figure 4. Result of vehicle detection

With the updated background image, we use binary segmentation to separate the vehicle and its shadow from the background image. Fig. 3 is the current image with vehicles passing; the white regions in Fig. 4 show the segmented vehicle and its shadow. We detect and remove the shadow region roughly in the next section, and then refine the remaining vehicle region in Section 4.
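The segmentation itself is a plain thresholded background difference; a possible sketch is below, where the threshold value is a placeholder we chose, not one given in the paper.

```python
import numpy as np

def segment_foreground(current_gray, background_gray, threshold=25):
    """Binary mask (vehicle + shadow): 1 where |current - background| exceeds the threshold."""
    diff = np.abs(current_gray.astype(np.int16) - background_gray.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```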

III. PRELIMINARY SHADOW DETECTION AND REMOVAL

There are some differences between a vehicle and its shadow, such as color, texture, location, gray scale and so on, and the most apparent one is the difference in gray scale. We divide the gray scale of the current image by that of the background image and obtain a matrix of optical gain $D$ as shown in (3). The shadow region and the background image have a similar color tone; that is, each RGB component of the shadow region equals the corresponding component of the background image multiplied by a stable optical gain factor $k$, so the gray level of the shadow region can be seen as the gray level of the background image multiplied by $k$, as shown in (4). This means that the optical gain factor $k$ is distributed uniformly in the shadow region. In contrast, because the color tone of the vehicle is distinctly different from that of the background image [8][9], the optical gain factor is distributed non-uniformly in the vehicle region. According to the above analysis, we use the optical gain matrix $D$ to detect the shadow region.

$$D(i,j) = \frac{Gray_{curr}(i,j)}{Gray_{back}(i,j)} \qquad (3)$$

where $D$ is the optical gain matrix, $Gray_{curr}$ is the gray level of the current image, and $Gray_{back}$ is the gray level of the background image.

$$\begin{aligned} Gray_{shadow} &= 0.299R_{shadow} + 0.587G_{shadow} + 0.114B_{shadow} \\ &= 0.299k \cdot R_{back} + 0.587k \cdot G_{back} + 0.114k \cdot B_{back} \\ &= k \cdot (0.299R_{back} + 0.587G_{back} + 0.114B_{back}) \\ &= k \cdot Gray_{back} \end{aligned} \qquad (4)$$

where $k$ is an optical gain factor smaller than 1; some studies report that $k$ lies between 0.4 and 1 in the shadow region [8]. The variables $R_{back}$, $G_{back}$, $B_{back}$ and $Gray_{back}$ refer to the background image, while the variables with subscript "shadow" refer to the shadow region. According to the histogram of the optical gain matrix $D$, a rough interval $[T_1, T_2]$ is determined to detect the shadow region as follows:

$$Sh1(i,j) = \begin{cases} D(i,j) & \text{if } T_1 < D(i,j) < T_2 \\ 0 & \text{else} \end{cases} \qquad (5)$$

The non-zero elements of $Sh1$ compose the preliminary shadow region, shown as the white region in Fig. 5; some parts of this region actually belong to the vehicle because their optical gain is close to that of the shadow region.
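A compact sketch of (3) and (5): the optical gain is the current-to-background gray-level ratio, and pixels whose gain lies in (T1, T2) form the preliminary shadow map Sh1. Restricting the test to the detected foreground and the default interval values (chosen inside the 0.4-1 range quoted from [8]) are our assumptions; the paper reads the interval off the histogram of D.

```python
import numpy as np

def preliminary_shadow(current_gray, background_gray, fg_mask, T1=0.4, T2=0.9):
    """Optical-gain shadow detection, eqs. (3) and (5)."""
    eps = 1e-6                                     # guard against division by zero
    D = current_gray.astype(np.float64) / (background_gray.astype(np.float64) + eps)
    in_range = (D > T1) & (D < T2) & (fg_mask == 1)
    Sh1 = np.where(in_range, D, 0.0)               # keep the gain value inside the interval, 0 elsewhere
    return D, Sh1
```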

Figure 5. Initial shadow region

Figure 6. Shadow after gradient analysis

To handle this problem, we use gradient analysis [7] to remove the vehicle regions that are mistakenly detected as shadow, as defined in (6); the result is shown in Fig. 6.


$$Sh2(i,j) = \begin{cases} 1 & \text{if } grad(i,j) < T_3 \\ 0 & \text{else} \end{cases} \qquad (6)$$

where $grad$ is the gradient of the image $Sh1$ and $T_3$ is the threshold. If the gradient of a pixel in $Sh1$ is smaller than $T_3$, it is considered to belong to the shadow region. Comparing Fig. 5 and Fig. 6, we find that certain regions belonging to the vehicle that were detected as shadow in Fig. 5 have been reduced in Fig. 6, but some similar regions remain; these remaining regions resemble the shadow region not only in optical gain but also in gradient. We remove the detected shadow region in Fig. 6 by subtracting it from the white region in Fig. 4, obtaining the vehicle region shown in Fig. 7. There are some holes in the vehicle region in Fig. 7 because some regions that belong to the vehicle were detected as shadow in Fig. 6 and subtracted.
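A sketch of the gradient test (6): pixels of Sh1 with low gradient magnitude are kept as shadow. Using a Sobel gradient, restricting the test to the non-zero pixels of Sh1, and the value of T3 are our assumptions about details the paper does not spell out; the vehicle region of Fig. 7 is then the foreground mask with the Sh2 pixels removed.

```python
import cv2
import numpy as np

def gradient_shadow(Sh1, fg_mask, T3=10.0):
    """Keep low-gradient pixels of Sh1 as shadow (eq. 6) and subtract them from the foreground."""
    gx = cv2.Sobel(Sh1.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(Sh1.astype(np.float32), cv2.CV_32F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    Sh2 = ((grad < T3) & (Sh1 > 0)).astype(np.uint8)           # final shadow map
    vehicle = ((fg_mask == 1) & (Sh2 == 0)).astype(np.uint8)   # Fig. 7: foreground minus shadow
    return Sh2, vehicle
```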

Figure 7. Vehicle region

Figure 8. Vehicle after shadow removal

Fig. 8 shows the vehicle image extracted according to the vehicle region in Fig. 7. The extraction method is as follows: if a pixel's value in Fig. 7 is 1, the corresponding pixel of Fig. 3 is copied into Fig. 8; if the pixel's value in Fig. 7 is 0, the corresponding pixel of Fig. 2 is copied into Fig. 8. In Fig. 8 there are holes remaining in the vehicle, the same as the holes in Fig. 7. These holes make the vehicle area inaccurate, and sometimes they may even split the vehicle into two vehicles. In fact, besides our shadow detection method, other methods also tend to leave holes in the vehicle, because no single attribute can segment shadow from vehicle with absolute precision.
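The extraction rule of Fig. 8 is a per-pixel selection: where the vehicle mask is 1 take the current frame, otherwise take the background. A small sketch assuming NumPy arrays:

```python
import numpy as np

def extract_vehicle_image(mask, current_img, background_img):
    """Copy current-frame pixels where mask == 1 and background pixels elsewhere (Fig. 8)."""
    m = (mask == 1)
    if current_img.ndim == 3:       # broadcast a 2-D mask over the color channels
        m = m[..., None]
    return np.where(m, current_img, background_img)
```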



In addition to the holes, the shadow edge also remains in Fig. 8. In the next section we fill the remaining holes, eliminate the shadow edge, and obtain a more accurate vehicle region.

IV. HOLE FILLING AND SHADOW EDGE REMOVAL

A. Hole Filling

It is necessary to fill the holes left in the vehicle region for accurate vehicle tracking. Many hole filling methods are based on topological information. From Fig. 8 we can see that, besides the holes inside the vehicle, there is also a big hole adjacent to the vehicle caused by the shadow edge. If a topology-based hole filling method were used, this big hole would be filled too, and the shadow region removed in the previous section would be recovered.

Considering that the vehicle region is texture-rich while the shadow region has little texture, we detect the texture information, called the vehicle's skeleton, and use it to fill the holes left in the vehicle region. At the same time, the big hole caused by the shadow edge is preserved, because it belongs to the shadow region, which carries little texture information. First, a Sobel operator is applied to the background subtraction image to detect the skeleton of the vehicle region, shown in Fig. 9. Then we apply erosion and dilation to expand the vehicle's skeleton, with an expansion factor of 5 pixels; the expanded skeleton is shown as the white region in Fig. 10. This expanded skeleton is then added to Fig. 7. The result in Fig. 11 shows that the holes in the vehicle region are mostly filled, while the big hole caused by the shadow edge remains.

Figure 9. Vehicle's skeleton

Figure 10. Expanded skeleton

Figure 11. Vehicle region after hole filling
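A sketch of the hole-filling step under our reading of the text: Sobel edges of the background-difference image serve as the skeleton, which is expanded by 5 pixels and OR-ed into the vehicle mask of Fig. 7. The edge threshold, the square structuring element, and the use of dilation alone (the paper also mentions an erosion step) are our assumptions.

```python
import cv2
import numpy as np

def fill_holes_with_skeleton(diff_gray, vehicle_mask, edge_thresh=40.0, expand_px=5):
    """Detect the texture 'skeleton' and use it to fill holes in the vehicle mask (Figs. 9-11)."""
    gx = cv2.Sobel(diff_gray.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(diff_gray.astype(np.float32), cv2.CV_32F, 0, 1, ksize=3)
    skeleton = (np.sqrt(gx ** 2 + gy ** 2) > edge_thresh).astype(np.uint8)   # Fig. 9
    kernel = np.ones((2 * expand_px + 1, 2 * expand_px + 1), np.uint8)
    expanded = cv2.dilate(skeleton, kernel)                                   # Fig. 10
    return ((vehicle_mask > 0) | (expanded > 0)).astype(np.uint8)             # Fig. 11
```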

B. Shadow Edge Removal

In this subsection we scan the image to detect and eliminate the shadow edge in the following steps.

1. Obtain the smallest rectangle that contains both the vehicle region and the shadow region, as shown in Fig. 12.

2. Scan the image in the left and right directions, starting from the rectangle's right and left sides respectively, as shown in Fig. 12. When scanning from the vehicle region to the shadow region, the pixel value changes from 1 to 0; conversely, when scanning from the shadow region to the background region, the pixel value changes from 0 to 1. So if the pixel value first changes from 1 to 0, then remains unchanged for a certain distance, and finally changes from 0 to 1, we consider the pixels whose values remain 0 over that distance to belong to the shadow edge and remove them. Note that this distance is actually the width of the shadow edge, which is assumed to lie within a range, e.g. [7, 13] pixels in Fig. 12.

3. Scan the image in the up and down directions, starting from the rectangle's bottom and top sides respectively. Likewise, if the pixel value first changes from 1 to 0, then remains unchanged for a certain distance, and finally changes from 0 to 1, we consider the pixels whose values remain 0 over that distance to belong to the shadow edge and remove them. The scanning process is shown in Fig. 13; a code sketch of the horizontal pass follows this list.
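As referenced in step 3, here is a minimal sketch of one horizontal pass under a literal reading of step 2: a run of 0s bracketed by 1-valued pixels is marked as shadow edge when its width falls in the assumed [7, 13] pixel range; the function name and how the marked pixels are finally discarded are our assumptions. The vertical scan of step 3 applies the same loop to columns (for example, on the transposed mask).

```python
import numpy as np

def detect_shadow_edge_rows(mask, min_w=7, max_w=13):
    """Mark horizontal runs of 0 bounded by 1-pixels whose width lies in [min_w, max_w] (step 2)."""
    edge = np.zeros(mask.shape, dtype=bool)
    for r in range(mask.shape[0]):
        ones = np.flatnonzero(mask[r] == 1)
        for a, b in zip(ones[:-1], ones[1:]):   # consecutive 1-pixels bracketing a run of 0s
            gap = b - a - 1
            if min_w <= gap <= max_w:
                edge[r, a + 1:b] = True          # treat the bounded run as shadow edge
    return edge
```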



Figure 12. Horizontal scanning

Figure 13. Vertical scanning

After the shadow edge is removed, we obtain the final precise vehicle region, shown as the white region in Fig. 14; Fig. 15 shows the corresponding vehicle image.

Figure 14. Vehicle region detected

Figure 15. Vehicle image

V. EXPERIMENTAL RESULTS

Some experimental results are shown below. The original vehicle images are shown in Fig. 16, and the vehicles with their shadows removed are shown in Fig. 17. These results show that the proposed shadow detection and removal method can eliminate shadow precisely.

Figure 16. Original vehicle images: (a), (b), (c)

Figure 17. Vehicles after shadow removal: (a), (b), (c)

VI. CONCLUSIONS

In shadow removal, some parts of the vehicle may be detected as shadow because they are similar to the shadow region in color space, and removing these parts leaves holes in the vehicle region, which may make the vehicle area inaccurate. To handle this problem, an effective shadow removal method for stable vehicle tracking is proposed in this paper. The difference in optical gain between the vehicle region and the shadow region is used to detect and remove shadow; the vehicle's skeleton information is then extracted and used to fill the holes left in the vehicle after shadow removal; and the shadow edge is eliminated by geometric scanning. Experiments show that this approach can fill most of the holes left in the vehicle region and remove shadow accurately.

ACKNOWLEDGMENT

This work is supported by the National Natural Science Foundation of China under grant No. 60972001.

REFERENCES

[1] D. Koller, K. Daniilidis, H. H. Nagel, "Model-based Object Tracking in Monocular Image Sequences of Road Traffic Scenes," International Journal of Computer Vision, 1993, vol. 10(3), pp. 257-281.
[2] A. Yoneyama, C. H. Yeh, C.-C. J. Kuo, "Moving Cast Shadow Elimination for Robust Vehicle Extraction based on 2D Joint Vehicle/Shadow Models," Proc. IEEE Conf. on Advanced Video and Signal Based Surveillance (AVSS'03), 2003, pp. 229-236.
[3] S. Nadimi, B. Bhanu, "Physical Models for Moving Shadow and Object Detection in Video," IEEE Trans. on Pattern Analysis and Machine Intelligence, 2004, vol. 26(8), pp. 1079-1087.
[4] R. Cucchiara, C. Grana, M. Piccardi, "Improving Shadow Suppression in Moving Object Detection with HSV Color Information," Proc. 4th IEEE Int. Conf. on Intelligent Transportation Systems, 2001, pp. 334-339.
[5] T. Horprasert, D. Harwood, L. S. Davis, "A Statistical Approach for Real-Time Robust Background Subtraction and Shadow Detection," IEEE Frame-Rate Applications Workshop, Kerkyra, Greece, 1999.
[6] P. L. Rosin, T. Ellis, "Image Difference Threshold Strategies and Shadow Detection," Proc. 6th British Machine Vision Conference, 1995, vol. 99, pp. 347-356.
[7] Y. J. Jung, Y. S. Ho, "Traffic Parameter Extraction using Video-based Vehicle Tracking," Proc. IEEE Conf. on Intelligent Transportation Systems (ITSC), 1999, pp. 764-769.
[8] A. Bevilacqua, R. Roffilli, "Robust Denoising and Moving Shadows Detection in Traffic Scenes," Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) - Technical Sketches, Kauai, Hawaii, 2001, pp. 1-4.
[9] M. Izadi, P. Saeedi, "Robust Region-based Background Subtraction and Shadow Removing Using Color and Gradient Information," ICPR 2008, pp. 1-5.