
Multiple-Sensor Collision Avoidance System for Automotive Applications Using an IMM Approach for Obstacle Tracking

Dr. Angelos Amditis, Aris Polychronopoulos, Ioannis Karaseitanidis, Dr. George Katsoulis
ICCS/I-SENSE group, National Technical University of Athens, 9 Iroon Polytechniou St., 15773 Athens, Greece. {angelos,arisp,gkara,gkats}

b) Electromagnetic techniques (microwave radars): unlike the optical techniques, they perform well under adverse environmental conditions. Despite its relatively high cost, FMCW radar seems to be the best technique for long-range distance measurement in automotive applications. It can also be used at short and medium range, making it quite a flexible technique.

Abstract – In this paper a multi-sensor collision avoidance system is presented for automotive applications. For obstacle detection and tracking, a Millimeter Wave (MMW) radar and a Far Infrared (FIR) camera are chosen, each providing object lists to its respective tracker. The algorithms for track management, data association, filtering and prediction for both sensors are also presented, focusing on the Kalman filtering. An Interacting Multiple Model (IMM) filter is designed to cope with all possible modes in which a car may be. Finally, a distributed fusion architecture, using a central track file for the objects’ tracks, is adopted and described analytically. The results of the work presented will be used, among others, for the purposes of the European co-funded project “EUCLIDE”.

c) Acoustic techniques (ultrasonic): well suited to applications where only short relative distance measurements are required, because they provide high resolution at relatively low cost.

The state of the art in CW/A systems offers single-sensor techniques. In this work a multiple-sensor approach is adopted, merging the functionality of the different sensors – on both the data fusion and the human-machine interaction level – in a way that takes advantage of their complementarity. In particular we consider:

Keywords: Tracking, Collision avoidance, data fusion, IMM, MMW radar, FIR camera.



Introduction

Great effort has recently been devoted to developing collision warning and avoidance (CW/A) systems that give effective support to vehicle drivers. The main aim of such systems is to address the strong social need to reduce the total number of accidents. A collision warning system generally operates in the following manner: a sensor installed at the front end of a vehicle constantly scans the road ahead for vehicles or obstacles. If such an obstacle is found, the system determines whether the vehicle is in imminent danger of crashing, and if so, a collision warning is given to the driver. The sensors used fulfil the tasks of obstacle detection and tracking, which is the basis of collision warning; the sensing techniques can be classified into three main groups:

An FMCW 77 GHz MMW radar sensor, which offers accurate range, velocity and angular measurements of the objects in the area of interest, as well as a reconstruction of the road borders.

A far infrared sensor based system, which offers good-resolution images of the road scenario (based on the thermal map of the scene), equipped with an image processing unit providing angle measurements and an estimate of the range of each detected object in the field of view.

One part of the added value gained by integrating the two sensors can be readily estimated from Table 1 ([6] and [7]), where “!” denotes appropriate functioning and “?” a specific problem. Another effect occurs in cases of overlapping coverage, where sensor data fusion yields enhanced performance compared to single-sensor operation. Really heavy rain and snow can decrease infrared image quality, since big droplets are seen as “spots” of different temperature.

a) Optical techniques (passive infrared, laser radar and vision systems): they all suffer from the disadvantage of being sensitive to external environmental conditions. Passive infrared and vision systems cannot provide a direct measurement of the distance to an object.

ISIF © 2002

Dr. Evangelos Bekiaris Hellenic Institute of Transport 6th km Thessaloniki-Thermi St. Thessaloniki Greece [email protected]


In fog, the behavior of the infrared sensor depends on the fog droplet dimensions, so the advantage (in terms of increased visibility) may be reduced in the presence of the particular type of fog formed by large droplets.

This output data, together with the image collected by the IR camera, is applied to a suitable graphic tool, which produces the output image presented by the HMI to the driver. The fusion module is presented thoroughly in the remainder of the paper.

Table 1: Sensor vs. weather conditions (anticollision radar, far IR sensor and multisensor system compared under fog, heavy rain and day/night operation; “!” denotes appropriate functioning, “?” a specific problem)

Figure 1: System dataflow (the radar sensor, the far infrared sensor and the vehicle sensors feed the image processing, path prediction and data fusion modules, followed by the threat assessment and warning strategies modules and the human-machine interaction)

The proposed system is hosted on two PCs, one for the fusion process and the other for the image processing. To achieve real-time performance, the Internet Protocol (IP) is selected as the main communication channel between the two PCs. The use of IP and standard 100 Mbit network cards provides a cheap and robust solution for high-speed communication. One candidate protocol over IP is UDP, which provides no error correction or retransmission but is very reliable over short links and imposes very low channel load and delay. An alternative is TCP, which provides a more robust link, but the system may “hang” if the computers do not communicate correctly. Vehicle data are read by the PCs from the vehicle’s CAN bus. The network architecture is shown in Figure 2.
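The fire-and-forget UDP behavior described above can be sketched as follows. The wire format, addresses and port are illustrative assumptions; the paper specifies only that object lists travel over UDP/IP on a dedicated 100 Mbit link.

```python
import socket

def send_object_list(payload: bytes, addr=("127.0.0.1", 5005)) -> None:
    """Send one object-list datagram to the Data Fusion PC.

    No error correction or retransmission, matching the low-delay,
    low-channel-load behavior described in the text; the address and
    payload encoding are hypothetical.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, addr)
```

On a short dedicated link between two PCs, datagram loss is rare enough that this trade-off against TCP's head-of-line blocking is reasonable.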


This paper is organised as follows. First, the general concept is presented through the design of the system architecture and the inter-module dataflow. Then the fusion scheme is presented, explaining the choice of a distributed fusion structure but focusing on the role of the tracking modules, while sensor synchronisation – needed because of the sensors’ different refresh rates – is solved. Finally, an Interacting Multiple Model (IMM) algorithm is presented in a form applicable to a CW/A system.


System Architecture


For the purposes of the CW/A system, an open and interoperable architecture is designed. It consists of several modules, including modules for the different sensors (among them the vehicle sensors) and an innovative fusion module, as shown in Figure 1.




Following Figure 1, the radar and the camera provide separate object lists (vehicles, pedestrians, stationary objects etc.) to the data fusion module, assuming the following formats: [time stamp; object s/n; velocity; azimuth; range] for the radar and [time stamp; object s/n; azimuth; range] for the camera. The result of the data fusion process, a single object list, is fed to computational algorithms that calculate the time-to-collision and the predicted minimum distance for every object, to which risk levels/criteria are applied. A critical situation is declared when these parameters cross predefined thresholds.
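The two risk parameters above can be computed, for example, under a straight-line relative-motion assumption. The paper does not give its exact formulas; the function below is an illustrative sketch using the standard closed-form minimum of the relative distance.

```python
import math

def ttc_and_min_distance(x, y, vx, vy):
    """Time-to-collision and predicted minimum distance for one object.

    (x, y): relative position [m]; (vx, vy): relative velocity [m/s].
    Assumes constant relative velocity (an illustrative simplification).
    """
    speed2 = vx * vx + vy * vy
    rng = math.hypot(x, y)
    if speed2 == 0.0:
        return math.inf, rng            # no relative motion
    # Time at which the relative distance is minimal.
    t_star = -(x * vx + y * vy) / speed2
    if t_star <= 0.0:
        return math.inf, rng            # object is moving away
    d_min = math.hypot(x + vx * t_star, y + vy * t_star)
    range_rate = (x * vx + y * vy) / rng
    ttc = -rng / range_rate             # range over closing speed
    return ttc, d_min
```

A warning threshold would then be applied to `ttc` and `d_min` per object.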

Table 2: Sensor vs. traffic scenarios (anticollision radar, far IR sensor and multisensor system compared for urban traffic, traffic density and number of lanes; “!” denotes appropriate functioning)

Figure 2: Network architecture (the radar sensor and the vehicle sensors – odometer, yaw rate, steering angle – connect over the vehicle’s CAN bus; the image processing PC and the data fusion PC exchange data over IP; video and audio feed the HMI)


Sensors’ synchronization

With the term synchronisation we mean the referencing of sensor data to a common time and spatial origin. Since the algorithms to be implemented (using coordinate transformations) can easily construct a common spatial origin, the main problem encountered is the difference between the refresh rate of the IR camera and that of the radar. The detected objects must be checked for matching while they originate from a radar sensor with a 10 Hz refresh rate and an IR camera with a 25 Hz refresh rate. In Figure 3, a time frame of one-second duration is shown, with the output from each sensor represented by a dot. In this figure the two sensors are assumed to start their operation synchronised, an assumption that, if guaranteed, allows simpler mechanisms and algorithms.

So at each 10 ms time frame the variable time_radar (time_IR) either retains its last value, with flag_radar (flag_IR) set to zero, or updates its value, with flag_radar = 1 (flag_IR = 1), when new data arrive from the radar (IR) sensor. If at a specific time frame new data arrive simultaneously from both sensors (flag_radar = 1 and flag_IR = 1), a common time base is already established and equals both time_radar and time_IR. The variable elapsed_time is set to the difference between the time the new data arrived (time_radar or time_IR) and the last time base, held in the variable time_base. Then the track files of both sensors are updated, with the tracks advanced in time by the elapsed time, using the programs radar_track and IR_track.


However, when data from the sensors arrive asynchronously, a different course of action is followed. In that case one sensor leads and the other lags, so in order to maintain a common time base, the sensor providing the new data updates its track file normally, while the lagging sensor updates its track file based on the last known motion parameters of its tracks.
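The common-time-base bookkeeping described above can be sketched as a loop over 10 ms frames. The `radar_track`/`ir_track` callables stand in for the paper's radar_track and IR_track programs; their signature here is an assumption.

```python
def synchronize(events, radar_track, ir_track):
    """Maintain a common time base across two asynchronous sensors.

    events: iterable of (flag_radar, time_radar, flag_ir, time_ir) rows,
    one per 10 ms frame, as delivered by the sensor interfaces.
    radar_track/ir_track(elapsed, new_data): advance a sensor's track
    file by `elapsed` seconds, incorporating new data when new_data=True
    or coasting on last known motion parameters when new_data=False.
    """
    time_base = 0.0
    for flag_radar, time_radar, flag_ir, time_ir in events:
        if not (flag_radar or flag_ir):
            continue  # no new data in this frame; time base unchanged
        new_time = time_radar if flag_radar else time_ir
        elapsed = new_time - time_base
        time_base = new_time
        # Leading sensor updates normally; lagging sensor only coasts.
        radar_track(elapsed, new_data=bool(flag_radar))
        ir_track(elapsed, new_data=bool(flag_ir))
    return time_base
```

When both flags are set, both track files are updated with new data over the same elapsed interval, which is exactly the synchronous case in the text.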

Figure 3: Sensor synchronisation (over a one-second frame the IR camera delivers 25 samples and the radar 10, each sample shown as a dot)


The problem of time synchronisation arises because not only do the radar and IR sensors have different refresh rates, but the interval at which each sensor updates its detection file may vary as well. This can happen for various reasons: for example, due to technical or weather conditions the IR sensor may be unable to update its detection file for two or three consecutive time frames.

Data Fusion

In our approach, we consider a distributed fusion architecture more suitable for such an application. Sensor-level tracks are combined into a composite (central-level) track file by performing track-to-track association and track fusion. Thus, tracking – including data association, gating and filtering – is performed for every sensor, providing a local track file for the fusion (Figure 4).

The need to maintain a common time base is obvious: to proceed to the data fusion of tracks coming from different sensors, tracks corresponding to a common time reference must be compared. An algorithm is proposed that has the advantage of not assuming time synchronisation. It works as follows. Initially the common time base is set to zero. It is then updated throughout the program each time new data arrive from a sensor, and the track files of both sensors are advanced in time to the new time base. If a sensor provides new data, its track file is updated normally using this new information. Otherwise, the track file of the lagging sensor is advanced to the new time base, with its tracks updated using their last known motion parameters. To further describe the algorithm, it is assumed that at a 10 ms refresh rate the synchronisation program is informed (by either a function or a file) whether the radar sensor, the IR sensor or both have new data to provide, via four parameters presented sequentially in a row vector:

Figure 4: Sensor tracking (at scan k the observation array passes through observation-to-track assignment and track management – tentative and confirmed tracks – then filtering and prediction to k+1, feeding the local track array and local track history)

[flag_radar, time_radar, flag_IR, time_IR]

Whenever an observation for a new target is present, a new track is initialised as a tentative track. The tracker must then collect sufficient additional hits to allow the track to be promoted to a firm track. For this reason, our track initiation scheme follows, in a sense, the “m out of n” criterion, where m hits out of n scans are required to form a firm track. For our application, a tentative track is confirmed when hit = 6 and the track has not met any of the deletion criteria. The deletion criteria are:

1. (hit + miss) = 5 and (miss > 0)

2. (hit = 1) and (miss = 1)

3. miss = 5 (that means 5 misses for a firm track)
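The confirmation and deletion rules above can be sketched as a small state machine. The `Track` class and its cumulative hit/miss counters are illustrative, not the authors' implementation; in particular, whether miss counts reset on a hit is not specified in the text, so here they do not.

```python
class Track:
    """Tentative/firm/deleted track lifecycle per the stated criteria."""

    def __init__(self):
        self.hit = 0
        self.miss = 0
        self.status = "tentative"   # -> "firm" or "deleted"

    def update(self, associated: bool) -> None:
        if self.status == "deleted":
            return
        if associated:
            self.hit += 1
        else:
            self.miss += 1
        # Deletion criteria 1-3 from the text (criterion 1 can only
        # apply to tentative tracks, since confirmation needs 6 hits).
        if (self.hit == 1 and self.miss == 1) \
           or (self.status == "tentative"
               and self.hit + self.miss == 5 and self.miss > 0) \
           or self.miss == 5:
            self.status = "deleted"
        elif self.status == "tentative" and self.hit >= 6:
            self.status = "firm"
```

Six consecutive hits promote a tentative track to firm; any miss within the first five scans, or five misses later on, deletes it.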

The tracks kept by the track management function are:


Multiple model estimation addresses motion uncertainty by using a bank of filters to represent the possible system behavior patterns (referred to as the modes of the system). We assume a set of M models for the estimator, run a bank of filters, each based on a unique model, and combine the estimates from these filters. Each filter obeys the following equations [2]:

x_{k+1} = F_k^i x_k + G_k^i w_k^i    (1)

z_k = H_k^i x_k + v_k^i    (2)

where w and v are the process and measurement noise respectively, z is the measurement vector, x is the state vector, F is the transition matrix from state k to state k+1, and the superscript i denotes quantities pertaining to model m_i. The jumps of the system mode are assumed to have the following transition probabilities:


a) updated with the Kalman filter (updating the state vector x and covariance P), if they are assigned to an observation in the current scan, or

P{m_{k+1}^j | m_k^i} = P{s_{k+1} = m_j | s_k = m_i} = π_ij    (3)

b) updated using the last known state vector and the state transition matrix Φ (the covariance is updated as well: P_{k+1} = Φ P_k Φ^T), if they are not paired with observations but survive. This approach has many similarities with the Multiple Hypothesis Tracking (MHT) technique introduced by Reid [5]. In MHT, a number of hypotheses (tentative tracks) are formed, and a tentative track is maintained only until no additional measurements associate with it or one of the tracks is promoted to a firm track.
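Case (b) above, coasting a surviving but unpaired track, amounts to propagating state and covariance through Φ alone. The constant-velocity Φ below is an illustrative choice; the paper does not fix the model at this point.

```python
import numpy as np

def coast(x, P, dt):
    """Advance an unpaired track by dt using only the transition matrix.

    x: state [position, velocity]; P: covariance. Implements
    x <- Phi x and P <- Phi P Phi^T with a 1D constant-velocity Phi
    (an assumed example model).
    """
    Phi = np.array([[1.0, dt],
                    [0.0, 1.0]])
    return Phi @ x, Phi @ P @ Phi.T
```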



where m_k^i denotes the event that model m_i matches the system mode at time k. A detailed description of the multiple model estimator follows later in this paper. Coming back to the distributed fusion approach adopted in this work, the fusion scheme is described in Figure 5. According to this figure, the radar tracker and the IR tracker report their confirmed track files to the fusion module. The sub-module “create association/fusion matrix” is similar to the function used in the single-sensor case. After the synchronisation function, where the state estimates and covariance matrices have been aligned in time, statistical distances are formed between pairs of tracks (i, j), and the state difference vector is computed:

The observation-to-track sub-module finds the assignment of observations to tracks using the following algorithm, which is similar to the auction algorithm introduced by Bertsekas and presented in [3]: in each row of the matrix we first find the minimum value and its column index. If two rows select the same column, a conflict occurs, since two observations pair with the same track. In that case we keep the minimum of the two, set the losing entry of that column to a maximum value, and the loop continues. This technique guarantees that the observation with the least difference from a certain track “steals” the assignment.
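The greedy, auction-like assignment described above can be sketched as follows. The distance-matrix representation and return convention are assumptions; only the steal-on-conflict rule comes from the text.

```python
import math

def assign(dist):
    """Greedy auction-like observation-to-track assignment.

    dist[i][j]: distance between observation i and track j.
    Returns a list mapping each observation to a track index (or None).
    On a conflict, the closer observation keeps the track; the loser's
    entry is pushed to infinity and it re-bids on remaining tracks.
    """
    n_obs = len(dist)
    cost = [row[:] for row in dist]     # working copy
    assignment = [None] * n_obs
    pending = list(range(n_obs))
    while pending:
        i = pending.pop(0)
        j = min(range(len(cost[i])), key=lambda c: cost[i][c])
        if math.isinf(cost[i][j]):
            continue                    # no track left for this observation
        holder = next((k for k, a in enumerate(assignment) if a == j), None)
        if holder is None:
            assignment[i] = j
        elif dist[i][j] < dist[holder][j]:
            assignment[i] = j           # closer observation steals the track
            assignment[holder] = None
            cost[holder][j] = math.inf
            pending.append(holder)      # loser bids again
        else:
            cost[i][j] = math.inf
            pending.append(i)
    return assignment
```

Each conflict blanks one matrix entry, so the loop terminates after at most one pass per entry.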

x_ij = x_i − x_j    (4)


Thus, the statistical distance between tracks (i, j) is [3]:

The core of target tracking is the Kalman filter, where the state of the target is estimated both at the current time (filtering) and at points in the future (prediction). The state estimation is conducted in the presence of two types of uncertainty:

d²_statistical = x_ij^T (P_i + P_j − P_ij − P_ij^T)^{-1} x_ij    (5)


where P_i and P_j are the state estimation covariance matrices and P_ij is the cross covariance matrix. Gating procedures follow to find fusion candidates, and logic is imposed for track-to-track association. In this case the “one-to-one” rule of observation-to-track assignment is not valid, since one radar object may correspond to more than one IR object, or vice versa.
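Equation (5) and the gating step can be sketched directly. Setting the cross covariance P_ij to zero when it is unavailable is a common simplifying assumption, not the paper's stated choice.

```python
import numpy as np

def statistical_distance(xi, xj, Pi, Pj, Pij=None):
    """Track-to-track statistical distance, equation (5)."""
    xij = np.asarray(xi, float) - np.asarray(xj, float)
    Pi, Pj = np.asarray(Pi, float), np.asarray(Pj, float)
    if Pij is None:
        Pij = np.zeros_like(Pi)       # assumed when cross covariance unknown
    S = Pi + Pj - Pij - Pij.T
    return float(xij @ np.linalg.inv(S) @ xij)

def gate(xi, xj, Pi, Pj, threshold):
    """Tracks (i, j) are fusion candidates if d^2 is below the threshold."""
    return statistical_distance(xi, xj, Pi, Pj) < threshold
```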

a) Measurement origin uncertainty, due to the sensory system or countermeasures of the target

b) Target motion uncertainty, due to the maneuvering of the target



The role of “measurement” is performed by the state vectors of the fusion output objects, which are introduced, together with the composite track file formed in a previous frame, to the gating, association and filtering procedures. These procedures (the fusion track module) follow the sensor tracking scheme presented in Figure 4. The composite track file is updated and presented through the HMI to the driver, either directly or filtered with respect to time-to-collision or predicted-minimum-distance thresholds.


Interacting Multiple Model

For the purposes of the CW/A system, as well as for other applications (e.g. [1], [8]), the Interacting Multiple Model estimator is chosen. IMM is considered to provide superior tracking performance compared to maneuver detection schemes. In IMM all filters in the bank work in parallel, and in any sampling period the state estimate is a weighted combination of the estimates of the individual filters.


Vehicles have two basic motion modes [8]: uniform motion and maneuver. The former refers to motion at constant speed and can be represented by a near-constant-velocity model (or white noise acceleration model – WNA); the latter refers to starting, stopping or turning (e.g. overtaking). Modeling the turns of a vehicle is a difficult design issue. In this paper we assume that turns can be modeled with a coordinated turn (CT) model, a WNA model or a Singer model (SM).

Figure 5: Distributed fusion scheme

Gating is based on track courses. It is applied to exclude from possible association those tracks whose courses differ by more than a given threshold. This is necessary because the distance metric between two tracks may be small even though their courses differ greatly, in which case no association should be allowed. Consider, for example, an incoming target on a direct closing course: it should by no means be allowed to associate with a target moving away.

Taking into account the aforementioned vehicle “behavior”, the IMM estimator consists of two discrete filters, one per vehicle mode. The block diagram of such a filter is shown in Figure 6.
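The IMM bookkeeping for such a two-filter bank, mixing, model-matched filtering, mode-probability update, and weighted combination, can be sketched as below. The per-model Kalman step is abstracted behind a `run_filter` callable (an assumption for compactness), so this shows the IMM structure of Figure 6 rather than a full tracker.

```python
import numpy as np

def imm_step(x_prev, P_prev, mu_prev, pi, run_filter, z):
    """One IMM cycle.

    x_prev, P_prev: per-model states/covariances (lists of arrays);
    mu_prev: mode probabilities (np.ndarray); pi: transition matrix;
    run_filter(j, x0, P0, z) -> (x, P, likelihood) is the model-j
    Kalman update (hypothetical interface).
    """
    M = len(mu_prev)
    # 1) Mixing: mu_mix[i, j] = P(model i at k-1 | model j at k).
    c = pi.T @ mu_prev                       # predicted mode probabilities
    mu_mix = (pi * mu_prev[:, None]) / c
    x0 = [sum(mu_mix[i, j] * x_prev[i] for i in range(M)) for j in range(M)]
    P0 = []
    for j in range(M):
        Pj = np.zeros_like(P_prev[0])
        for i in range(M):
            d = x_prev[i] - x0[j]
            Pj += mu_mix[i, j] * (P_prev[i] + np.outer(d, d))
        P0.append(Pj)
    # 2) Model-matched filtering with the mixed initial conditions.
    out = [run_filter(j, x0[j], P0[j], z) for j in range(M)]
    # 3) Mode-probability update with the filter likelihoods.
    lik = np.array([o[2] for o in out])
    mu = lik * c
    mu /= mu.sum()
    # 4) Combined (weighted) state estimate.
    x = sum(mu[j] * out[j][0] for j in range(M))
    return x, mu, out
```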

Then, the combined state estimate is computed:

x̂_c = x̂_i + C (x̂_j − x̂_i)    (6)

Following other papers’ results and the bibliography on vehicle motion (e.g. [8]), a variety of combinations can be chosen to provide sufficient tracking performance for the IMM estimator. Some such paradigms are mentioned here:

C = (P_i − P_ij)(P_i + P_j − P_ij − P_ij^T)^{-1}    (7)


Finally, the covariance matrix for the fused state estimate x̂_c is:

P_c = P_i − C (P_i − P_ij)^T    (8)
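Equations (6)-(8) can be implemented directly. Setting P_ij to zero for the example is a simplifying assumption (independent track errors), not the paper's statement.

```python
import numpy as np

def fuse_tracks(xi, Pi, xj, Pj, Pij=None):
    """Track-to-track fusion per equations (6)-(8)."""
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    Pi, Pj = np.asarray(Pi, float), np.asarray(Pj, float)
    if Pij is None:
        Pij = np.zeros_like(Pi)                  # assumed cross covariance
    C = (Pi - Pij) @ np.linalg.inv(Pi + Pj - Pij - Pij.T)   # (7)
    xc = xi + C @ (xj - xi)                                  # (6)
    Pc = Pi - C @ (Pi - Pij).T                               # (8)
    return xc, Pc
```

With equal covariances and zero cross covariance, the fused state lands midway between the two track states and the fused covariance is halved, as expected.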

• Two WNA filters – the process noise power spectral densities are q1 = 0.01 m²/s³ and q2 = 5 m²/s³ respectively. The transition matrix is:

π = | 0.9  0.1 |
    | 0.1  0.9 |


The objects that are not fused (i.e. have not “passed” the gating threshold) keep the state vector (and all statistics) from their sensor tracker. Thus each output object carries a flag (0, 1 or 2) indicating whether it is a fused object or belongs to the radar or IR local track file respectively. In a certain sense, one can say that a new and better sensor has been developed, which needs its own tracker.



• A WNA and a CT model – the process noise power spectral density is q1 = 0.01 m²/s³ for the uniform motion. For the CT model there is an additive process noise q = 0.01 m²/s³, and the standard deviation of the turn-rate process is 1°/s². The transition matrix is the same.

[2] Bar-Shalom Y., Blair W.D., 2001, “Multitarget Multisensor Tracking: Applications and Advances”, Volume III, Artech House.

• A WNA and a Singer model – the process noise power spectral densities are q1 = 0.01 m²/s³ and q2 = 2 m²/s³ respectively. The transition matrix remains the same.

[3] Blackman S.S. and Popoli R., 1999, “Design and Analysis of Modern Tracking Systems”, Norwood, MA: Artech House.

[4] Bar–Shalom Y., Rong Li X., Kirubarajan T., 2001, “Estimation with applications to Tracking and Navigation”, John Wiley and Sons Inc.



[5] Reid D.B., 1977, “A multiple hypothesis filter for tracking multiple targets in a cluttered environment”, Sunnyvale, CA: Lockheed Missiles and Space Company, Report No. LMSC D-560254.


[6] Amditis A., Andreone L., Bekiaris A.: “Using aerospace technology to improve obstacle detection under adverse environmental conditions for car drivers”, ITS Conference, New Orleans, 2001.


[7] Andreone L., Amditis A., Bekiaris A., Wildroiter H., Laurentini A., “EUCLIDE: Fusing data from radar and IR sensors for enhancing automotive driver’s vision under night and adverse weather conditions”, IEEE M2VIP Conference, Hong Kong SAR, 2001.

Figure 6: IMM structure (measurements feed filter 1 (uniform motion) and filter 2 (maneuver) via the interaction/mixing step with probabilities π_ij, followed by probability calculation and estimate fusion)



Although, as explained in [3], distributed fusion is not as robust as centralized fusion in terms of probabilistic analysis, our approach favors the former. Influenced by the application of tracking techniques in military systems, we believe there will always be constraints on relying solely upon the “artificial sensor” provided by data fusion techniques. One can always imagine scenarios in which an operator needs to resort to tracking information extracted at sensor level, even by simultaneously watching monitors displaying fused tracks as well as sensor-level tracks. We therefore devoted much time and effort to tuning the parameters affecting sensor-level tracking before merging the individual sensor information into a new signal processing entity.

[8] Lin X., Kirubarajan T., Bar-Shalom Y., Li X., “Enhanced Accuracy GPS Navigation Using the Interacting Multiple Model Estimator”, in Proc. 2001 IEEE Aerospace Conference, Big Sky, MT, 2001.

Using an IMM estimator yields near-perfect tracking quality in normal highway traffic. However, the traffic congestion in urban areas renders the application of our system impractical there.

References

[1] Polychronopoulos A., Karaseitanidis I., Amditis A., Katsoulis G. and Uzunoglu N., “Interacting Multiple Model Filtering for a double-active-sensor Tracking System in sea environment”, SET-049 Symposium on Radar and Radar Complementarity, Prague, 2002.

