AN AUTONOMOUS SYSTEM FOR AERIAL IMAGE ACQUISITION AND GEOREFERENCING

Mohamed M.R. Mostafa and Klaus-Peter Schwarz
Department of Geomatics Engineering, The University of Calgary
Calgary, Alberta, Canada T2N 1N4
Email: [email protected] and [email protected]

ABSTRACT

This paper describes the development and testing of a fully digital multi-sensor system for aerial image capture and georeferencing, for applications in remote sensing, digital mapping, and GIS. The system has been developed at The University of Calgary to acquire and georeference digital imagery in remote areas where ground control is neither available nor needed. It incorporates a navigation-grade strapdown Inertial Navigation System (INS), GPS receivers, and two high-resolution digital cameras. The two digital cameras capture strips of overlapping nadir and oblique images; they are configured in such a way as to overcome the geometric limitation of their small image format. Image exposure times and INS digital records are time-tagged in real time by GPS. The INS/GPS-derived trajectory allows the determination of the six exterior orientation parameters for each single image of both cameras. This approach eliminates the need for ground control when determining the 3D positions of objects that appear in the field of view of the digital cameras. Test flights were conducted over the campus of The University of Calgary in October 1998. In this paper, the multi-sensor system configuration and calibration are presented and preliminary results are discussed. The accuracy estimates indicate that major applications of such a system are the mapping of utility lines, roads, and pipelines (at image scales of 1:10 000 and smaller) and the generation of digital elevation models for engineering applications. In addition, topographic mapping projects at image scales of 1:10 000 and smaller can be carried out with this technique.

INTRODUCTION

Due to advances in digital image acquisition technology, it has become feasible to capture aerial images in fully digital form, which allows immediate computer access to the imagery after the acquisition process. Compared to film-based photographs, digitally acquired imagery is advantageous since no time is needed for film development and image scanning. Further, digital image processing and computer vision techniques have been successfully utilized to automate procedures applied to digital aerial images. In addition, a number of sophisticated digital photogrammetric workstations are commercially available, e.g., the Leica/Helava DPW (cf. DeVenecia et al., 1996). Among the currently available digital imaging sensors, still-video Digital Frame Cameras (DFCs) are the most convenient for aerial applications because they freeze motion during the exposure time. Reconstructing a 3D model from 2D imagery acquired by DFCs requires, in principle, only the modeling and estimation of the camera interior and exterior orientation parameters. DFCs have been extensively used in close-range applications, utilizing self-calibration, network design optimization schemes, and sophisticated image processing techniques for target mensuration; results have been reported by Fraser (1997) and Lichti and Chapman (1997). In airborne applications, however, DFCs have not been used frequently, due to limitations in geometry, resolution, and data rate. The current pixel size of commercial CCD chips is typically 7-15 µm, which is almost an order of magnitude poorer than that of film. DFC performance has been analyzed in tests reported by Novak (1992), King et al. (1994), Mills et al. (1996), Maas and Kersten (1997), and Mostafa et al. (1997).
On the other hand, direct georeferencing by GPS/INS has received significant attention over the past few years in land-based and airborne applications; see Schwarz et al. (1993) for the mathematical model and proposed applications, and Lechner and Lahmann (1995), Schwarz (1995), El-Sheimy (1996), Škaloud et al. (1996), Cramer et al. (1997), Mostafa et al. (1997), Toth and Grejner-Brzezinska (1998), and Reid et al. (1998) for applications. Recently, fully digital systems comprising GPS, INS, and digital image acquisition components have been implemented. Independently, Cramer et al. (1997) and Mostafa et al. (1997) showed comparable positioning accuracies at the meter level for ground objects directly georeferenced by GPS/INS. Furthermore, The Ohio State University Center for Mapping is currently developing the AIMS system for large-scale mapping purposes (cf. Toth and Grejner-Brzezinska, 1998).

1. DIRECT IMAGE GEOREFERENCING MODEL

In the sequel, note that, except for orientation matrices, subscripts refer to a specific point, while superscripts denote the coordinate frame in which the coordinate components are given. Matrix subscripts and superscripts indicate the rotation from the subscript system to the superscript system. Upper-case letters refer to a mapping frame (M-frame), while lower-case letters refer to a camera frame (c-frame). Bold-faced lower-case letters are used for vectors, while bold-faced upper-case letters are used for rotation matrices. Furthermore, t denotes time, while T denotes vector transposition. Direct georeferencing of digital images can be described by the formula:

$$\mathbf{r}_G^M = \mathbf{r}_{E_i}^M(t) + s_{g_i}\,\mathbf{R}_{c_i}^M(t)\,\mathbf{r}_{g_i}^c(t) \tag{1}$$

where $\mathbf{r}_G^M$ is the georeferenced 3D position vector of an arbitrary object point G in the M-frame, expressed by:

$$\mathbf{r}_G^M = \begin{bmatrix} X_G^M & Y_G^M & Z_G^M \end{bmatrix}^T \tag{2}$$

$\mathbf{r}_{E_i}^M(t)$ is the 3D position vector of the exposure station $E_i$ of imaging sensor i at the instant of exposure, in the M-frame, represented by:

$$\mathbf{r}_{E_i}^M(t) = \begin{bmatrix} X_{E_i}^M(t) & Y_{E_i}^M(t) & Z_{E_i}^M(t) \end{bmatrix}^T \tag{3}$$

and $\mathbf{r}_{g_i}^c(t)$ is the 3D coordinate vector of the image point g in the c-frame of imaging sensor i, expressed by:

$$\mathbf{r}_{g_i}^c(t) = \begin{bmatrix} x_{g_i}^c(t) - x_{o_i}^c & \left(y_{g_i}^c(t) - y_{o_i}^c\right)k_y & -f_i \end{bmatrix}^T \tag{4}$$

where $x_{g_i}^c(t)$ and $y_{g_i}^c(t)$ are the image coordinates of the image point g corresponding to the arbitrary object point G, as projected by imaging sensor i; $x_{o_i}^c$ and $y_{o_i}^c$ are the principal point offsets from the CCD format center of imaging sensor i; $k_y$ is a factor accounting for the non-squareness of the CCD pixels; $f_i$ is the calibrated focal length of the lens in use in imaging sensor i; $s_{g_i}$ is an image point scale factor implicitly derived during the 3D photogrammetric reconstruction of objects using image stereopairs; and $\mathbf{R}_{c_i}^M(t)$ is the orientation matrix rotating the $c_i$-frame of imaging sensor i into the M-frame, parameterized by the three $c_i$-frame orientation angles ω_i(t), φ_i(t), and κ_i(t) and the primary, secondary, and tertiary elementary rotation matrices $\mathbf{R}_1$, $\mathbf{R}_2$, and $\mathbf{R}_3$, respectively. Thus, this orientation matrix can be expressed by:

$$\mathbf{R}_{c_i}^M(t) = \mathbf{R}_3\!\left(\kappa_i(t)\right)\mathbf{R}_2\!\left(\phi_i(t)\right)\mathbf{R}_1\!\left(\omega_i(t)\right) \tag{5}$$
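To make the chain of Equations 1-5 concrete, the following minimal Python sketch assembles Equation 1 for a single image ray. All numerical values are illustrative only (they are not taken from the paper's test data), and the scale factor $s_g$ is simply assumed known here, whereas in practice it falls out of the stereo intersection of two or more rays.

```python
import numpy as np

def rot_x(a):
    """Primary elementary rotation R1 (about the x-axis)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    """Secondary elementary rotation R2 (about the y-axis)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    """Tertiary elementary rotation R3 (about the z-axis)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_to_mapping(omega, phi, kappa):
    """Equation 5: R_c^M = R3(kappa) R2(phi) R1(omega)."""
    return rot_z(kappa) @ rot_y(phi) @ rot_x(omega)

def georeference(r_E_M, s_g, omega, phi, kappa, xy, pp, k_y, f):
    """Equation 1: object position r_G^M from a single image ray.
    xy: measured image coordinates (m); pp: principal point offsets (m);
    f: calibrated focal length (m); s_g: image point scale factor."""
    x, y = xy
    xo, yo = pp
    r_g_c = np.array([x - xo, (y - yo) * k_y, -f])    # Equation 4
    return r_E_M + s_g * (camera_to_mapping(omega, phi, kappa) @ r_g_c)

# Illustrative numbers only: a perfectly nadir-looking camera 400 m above
# the mapping-frame origin, f = 28 mm, an image point 2 mm off-center,
# and s_g = H / f for flat ground.
r_G = georeference(np.array([0.0, 0.0, 400.0]), s_g=400.0 / 0.028,
                   omega=0.0, phi=0.0, kappa=0.0,
                   xy=(0.002, 0.0), pp=(0.0, 0.0), k_y=1.0, f=0.028)
print(r_G)   # -> approx. [28.57  0.    0.  ]
```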

2. GEOMETRIC ANALYSIS OF SMALL FORMAT DIGITAL CAMERAS

In aerial applications, small format cameras have not been favored by photogrammetrists over the past decades because their geometric characteristics are poor compared with those of the routinely used large-format aerial cameras (approx. 23 cm). Therefore, their use is still limited to reconnaissance work and to spectral rather than metric information extraction (cf. Mausel et al., 1992). Two reasons explain the attention given to these cameras today, namely their capability of capturing imagery in digital form and their cost-effectiveness. Their main geometric limitations become evident when analyzing space resection and space intersection with such cameras (Mostafa et al., 1998). The quality of space resection becomes a problem when computing the orientation angles ω, φ, and κ for an image using Ground Control Points (GCPs), while the quality of space intersection directly affects the accuracy with which object positions are determined. The latter depends on the Base/Height (B/H) ratio, which is shown in Figure 1 for different DFCs (1k, 2k, and 4k) and for an Aerial Film-based Camera (AFC). The larger the image size, the larger the B/H ratio, and the better the quality of space intersection for height determination from imagery. To compute the B/H ratio, the focal length f and the pixel size p are needed. For the 1k and 2k cameras, f = 28 mm and p = 9 µm are typical for the Kodak DCS used here, while for the 4k camera, f = 50 mm and p = 15 µm, as used by Grejner-Brzezinska (1998), are more representative. The quality of space intersection can better be demonstrated by the intersection (parallax) angle γ between the individual camera exposure stations and an arbitrary single ground object, which can be expressed by:

$$\gamma = 2\tan^{-1}\!\left[\frac{w\,p\,(1-O)}{2f}\right]\cos\frac{LFOV}{2}, \qquad LFOV = \begin{cases} 0^{\circ}, & \text{GCP at image center}\\ \tan^{-1}\!\left(L_p/2f\right), & \text{GCP at image edge}\end{cases} \tag{6}$$

where γ is the space intersection angle; w is the image size parallel to the flight direction; p is the pixel size; O is the overlap percentage; LFOV is the camera lateral field-of-view angle; L_p is the lateral location of a GCP in an image; and f is the calibrated focal length of the lens in use. Table 1 lists the values of γ for different image sizes and overlap percentages. Note that γ changes by only 1.5% in the case of a 1k x 1k DFC but by 16% in the case of an AFC, due to a change in the lateral target location in an image. That is, the change of geometry in the case of an AFC is approximately 10 times that of a 1k DFC. Further, note that γ is mainly a function of w, for the same O and f.

Table 1. Intersection Angle γ for Different Image Sizes and Forward Overlap

Overlap      w = 1k     w = 2k     w = 3k     w = 4k     w = 15k (AFC)
O = 40%      11.00°     21.83°     32.26°     39.60°     48.08°
O = 60%       7.40°     14.65°     21.83°     27.00°     33.12°
O = 80%       3.66°      7.36°     11.02°     13.68°     16.90°
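As a numerical cross-check of Equation 6, the short Python sketch below recomputes the intersection angles of Table 1 for a GCP at the image center. The 1k-3k cases use the Kodak DCS parameters given in the text (f = 28 mm, p = 9 µm), and the 4k case uses f = 50 mm and p = 15 µm; the AFC parameters (f = 153 mm, p = 15 µm) are an assumption chosen to approximate a standard 23 cm camera, so that column matches Table 1 only roughly.

```python
import math

def intersection_angle(w_px, p_mm, f_mm, overlap, lfov_rad=0.0):
    """Intersection (parallax) angle gamma of Equation 6, in degrees;
    lfov_rad = 0 corresponds to a GCP at the image center."""
    gamma = 2.0 * math.atan(w_px * p_mm * (1.0 - overlap) / (2.0 * f_mm))
    return math.degrees(gamma * math.cos(lfov_rad / 2.0))

# (w in pixels, p in mm, f in mm); the AFC values are assumed, see above.
cameras = [(1000, 0.009, 28.0), (2000, 0.009, 28.0), (3000, 0.009, 28.0),
           (4000, 0.015, 50.0), (15000, 0.015, 153.0)]
for O in (0.4, 0.6, 0.8):
    row = [intersection_angle(w, p, f, O) for (w, p, f) in cameras]
    print(f"O = {O:.0%}:", "  ".join(f"{g:5.2f} deg" for g in row))
# e.g. the O = 40% row: 11.01, 21.83, 32.27, 39.60, 47.64 deg (cf. Table 1)
```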

Since positioning accuracy, especially for the height component, depends on the intersection geometry γ (which is itself a function of image size), the theoretical accuracy of height determination can be derived by standard error propagation techniques. This yields:

$$\sigma_{Z_G} = \frac{1.4\,\sigma_{pxl}}{p}\cdot\frac{Z_E - Z_G}{w\,(1-O)} \tag{7}$$

The new terms of this equation are σ_{Z_G}, the height standard deviation of a ground object, and σ_pxl, the image measurement precision. Note that the effects of image position and orientation errors have been neglected in order to separate and emphasize the effect of intersection geometry. Although a similar formula can be derived to express σ_{Z_G} as a function of γ, it is more convenient for design purposes to express σ_{Z_G} as a function of image format size. In Equation 7, the first term on the right-hand side represents the effect of image measurement precision on object height accuracy, while the second term represents the effect of intersection geometry. Figure 2 shows σ_{Z_G} values for different image sizes and mean elevations, for a standard overlap of 60% and (σ_pxl / p) = 1. From Figure 2, one can conclude that σ_{Z_G} improves by approximately the same factor for all image sizes w.

Figure 1. Base/Height Ratio for Different Image Sizes (B/H ratio vs. percentage of overlap for w = 1k, 2k, 4k, and an AFC)

Figure 2. Height Positioning Accuracy for Different Image Sizes (height error in m vs. flight height in m for w = 1k, 2k, 4k; flat terrain, Zav = 0.1 H, and Zav = 0.2 H)

The standard deviation σ_{Z_G} is a linear function of (σ_pxl / p); if (σ_pxl / p) decreases by an arbitrary factor, σ_{Z_G} will improve by the inverse of that factor. Further, note that the smaller the image format size, the more the object height accuracy is affected by the mean terrain elevation (Zav). In other words, for a 1k x 1k imaging sensor the effect of the area topography on positioning accuracy is larger than for a 4k x 4k sensor. Therefore, a larger image size yields not only better accuracy, but also less dependence on the area topography and, thus, more consistent results across different mapping missions. Figure 2 shows that at 1000 m flying height and 10% mean elevation, σ_{Z_G} for w = 1k should be about 2.8 m. Mostafa et al. (1997) reported RMS_Z = 3.6 m from real test data using a digital camera with w = 1k; this number included the positioning and orientation error contribution from GPS/INS. On the other hand, at 300 m altitude, σ_{Z_G} is about 0.25 m for a DFC with w = 4k. Grejner-Brzezinska (1998) reported RMS_Z = 0.32 m, which again included the positioning and orientation error contribution from GPS/INS. It is worth mentioning that the results from real data reported by both aforementioned authors, together with the theoretical results shown in Figure 2, indicate that the error contribution of high-precision GPS/INS to height determination accuracy is about 28% of the total error budget.
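A one-function Python sketch of Equation 7 reproduces the two cases just discussed, taking (Z_E - Z_G) as the flight height minus the mean terrain elevation; the results agree with the values read off Figure 2 to within figure-reading precision.

```python
def height_std(sigma_pxl_over_p, flight_height_m, mean_elev_m, w_px, overlap):
    """Height standard deviation of a ground object, Equation 7, with
    (Z_E - Z_G) taken as flight height minus mean terrain elevation."""
    return (1.4 * sigma_pxl_over_p * (flight_height_m - mean_elev_m)
            / (w_px * (1.0 - overlap)))

# (sigma_pxl / p) = 1 and O = 60%, as in Figure 2:
print(height_std(1.0, 1000.0, 100.0, 1000, 0.6))  # 1k camera, Zav = 0.1 H: ~3.2 m
print(height_std(1.0, 300.0, 0.0, 4000, 0.6))     # 4k camera, flat terrain: ~0.26 m
```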

Since 4k DFCs are still too expensive, and 1k and 2k DFCs are more convenient in a number of applications due to their better data rate and smaller digital image files, Mostafa et al. (1998) proposed an imaging system that incorporates a 3k x 2k nadir camera and a 1.5k x 1k oblique camera to enhance the intersection geometry, especially for height determination. The proposal included a validation through computer simulation, which showed that at 25°-30° obliquity the height accuracy can be improved by a factor of about 3.5 compared with using a nadir camera only. Another advantage of the additional oblique imagery is that it provides information that can be used in a number of digital photogrammetric applications (e.g., reconstruction of surfaces from imagery) as well as in remote sensing applications. The proposed system has been developed at The University of Calgary and was tested over the university campus in October 1998. The multi-sensor system configuration, test description, system calibration, and preliminary results are presented in the following.

3. EQUIPMENT SELECTION FOR DATA ACQUISITION

Aboard a Cessna 310 twin-engine airplane, shown in Figure 3, two GPS receivers were used as the positioning component, namely an Ashtech Z XII and a Trimble 4000 SSI, connected to a dual-frequency antenna through a splitter. The University of Calgary's two Kodak DCS 420 1.5k x 1k digital cameras were used as the imaging components. A navigation-grade INS, a Honeywell LRF III, was used as the inertial component. The GPS receivers, the INS, and the digital cameras were interfaced to three laptops, which controlled the different tasks required for data acquisition, as shown in Figure 4.

Figure 3. Cessna 310 Aircraft used in October Test Flights Over University of Calgary Campus

Figure 4. The Multi-Sensor System used in October Test Flights Over The University of Calgary Campus

4. THE MULTI-SENSOR SYSTEM CALIBRATION

The overall system calibration is required to relate GPS-derived positions, INS-derived attitude parameters, and image-derived object point coordinates. In addition, the digital cameras have to be calibrated to establish their interior geometry and to determine their lens distortion.

4.1 Camera Calibration

Both digital cameras were calibrated by capturing 36 images per camera of 24 precisely surveyed circular retro-reflective targets randomly mounted on the walls of the engineering building loading dock, shown in Figure 5, from 9 different camera locations. Four shots were taken at each camera location, with a 90° rotation difference between shots. A total of 900 image observations, together with approximate camera locations and precise GCP coordinates, were used in a self-calibrating free-network bundle adjustment software package (Lichti and Chapman, 1997) to determine the interior geometry and the lens distortion parameters of both digital cameras. Table 2 shows the calibrated parameters and their associated standard errors. These figures show the potential of the cameras at hand for photogrammetric applications. Lens distortions are only about one order of magnitude larger than those of an aerial optical camera, keeping in mind that these are SLR camera lenses. Although the decentering distortions are much smaller than the radial ones, they should not be trivialized, since their effect is projectively equivalent to a shift in the principal point location.

Figure 5. The Loading Dock Used for Camera Calibration and INS/Camera Orientation Calibration

Figure 6. The INS/Imaging System Calibration Setup

Table 2. Calibration Results of the Digital Cameras

Interior Geometry Parameters (Calibrated Value in mm, σ in µm)

Parameter    Nadir Camera             Oblique Camera
x_o          0.522   (σ = 3.83)       1.422   (σ = 4.85)
y_o          0.028   (σ = 3.43)       0.054   (σ = 5.48)
f            28.486  (σ = 2.39)       52.16   (σ = 5.8)

Lens Distortion Parameters (Calibrated Value, σ)

Parameter            Nadir Camera                     Oblique Camera
K1 (radial)          -1.8591e-008  (3.9407e-010)      -1.1779e-008  (4.3602e-010)
K2 (radial)           2.8115e-014  (1.2763e-015)       2.8058e-014  (1.2765e-015)
K3 (radial)          -3.0324e-020  (1.1791e-021)      -3.0897e-020  (1.0386e-021)
P1 (decentering)      9.9692e-007  (1.5425e-007)       1.5534e-006  (2.1216e-007)
P2 (decentering)      1.1254e-006  (2.2421e-007)      -3.6279e-007  (1.7994e-007)

Adjustment Parameters
Nadir Camera:   #d.f. = 591, #EO = 108, σ̄_EO = 10 cm, σ̄_pxl = 2.8 µm
Oblique Camera: #d.f. = 477, #EO = 108, σ̄_EO = 10 cm, σ̄_pxl = 2 µm
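The K1-K3 and P1-P2 parameterization in Table 2 corresponds to the widely used radial plus decentering (Brown) distortion model; the paper does not spell out the correction formula, so the sketch below is an assumption of the standard form. Note also that the coordinate units and the sign convention of the calibration are not stated in the paper, so both are left to the caller.

```python
def correct_distortion(x, y, K1, K2, K3, P1, P2):
    """Radial (K1-K3) plus decentering (P1, P2) distortion correction in
    the Brown parameterization; x, y are image coordinates relative to
    the principal point, in the same units as the calibration."""
    r2 = x * x + y * y
    radial = K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3
    dx = x * radial + P1 * (r2 + 2.0 * x * x) + 2.0 * P2 * x * y
    dy = y * radial + P2 * (r2 + 2.0 * y * y) + 2.0 * P1 * x * y
    return x + dx, y + dy

# Nadir-camera parameters from Table 2 (coordinate units assumed to match
# whatever the calibration report used):
x_c, y_c = correct_distortion(700.0, 500.0,
                              K1=-1.8591e-8, K2=2.8115e-14, K3=-3.0324e-20,
                              P1=9.9692e-7, P2=1.1254e-6)
```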

4.2 Position Offsets for INS/GPS/Cameras

Since the INS, the GPS antenna, and the two digital cameras cannot occupy the same spot in 3D space, their position offsets have to be determined. Thus, before testing the system in flight, the GPS/INS/camera position offsets have to be measured and accounted for during the 3D photogrammetric reconstruction of objects using Equation 1. This requires a simple modification of Equation 1, augmenting it by an offset vector from either the INS center or the GPS antenna to the origin of the coordinate system of the imaging sensor frame. As shown in Figure 4, the INS is closer to the digital cameras than the GPS antenna is. Further, the INS and the digital cameras were tightly coupled in a metal frame that is strapped down to the aircraft platform through specially designed shock mounts (Mostafa et al., 1997) that isolate the INS and the cameras from the aircraft vibration environment. Thus, the INS center was chosen as the reference for the position offsets to the digital cameras and the GPS antenna. This modifies Equation 1 to:

$$\mathbf{r}_G^M = \mathbf{r}_{E_i}^M(t) + \mathbf{R}_{INS}^M(t)\,\mathbf{a}_i^{INS} + s_{g_i}\,\mathbf{R}_{c_i}^M(t)\,\mathbf{r}_{g_i}^c(t) \tag{8}$$

where $\mathbf{a}_i^{INS}$ is the position offset vector between the INS center and imaging sensor i, and $\mathbf{R}_{INS}^M(t)$ will be defined in Section 4.3 below. The position offsets between the INS, the GPS antenna, and the digital cameras were surveyed at the Calgary airport, using standard survey procedures from a GPS-fixed baseline, with the INS-derived attitude angles used for azimuth transfer and for leveling of the survey. The survey was repeated before and after the test flight, and the results were consistent to 5 mm for the different components of the offset vectors. Both $\mathbf{a}_i^{INS}$ and $\mathbf{b}^{INS}$, the GPS/INS position offset, are used to transform the GPS/INS-derived positioning information to each imaging sensor i, as shown in Figure 7.
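The lever-arm term of Equation 8 is a one-line operation once the INS attitude matrix is available. The following sketch applies it with purely illustrative numbers (the actual surveyed offsets of the installation are not listed in the paper):

```python
import numpy as np

def camera_position(r_INS_M, R_INS_M, a_i_INS):
    """Lever-arm term of Equation 8: the camera perspective center in the
    M-frame, given the INS position, the INS attitude matrix R_INS^M, and
    the surveyed INS-to-camera offset a_i^INS in the INS body frame."""
    return r_INS_M + R_INS_M @ a_i_INS

# Illustrative values only: INS at 400 m height with a level, north-aligned
# attitude; camera assumed 0.35 m forward of and 0.12 m below the INS center.
r_cam = camera_position(np.array([0.0, 0.0, 400.0]), np.eye(3),
                        np.array([0.0, 0.35, -0.12]))
print(r_cam)   # -> [  0.     0.35 399.88]
```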

4.3 INS/Cameras Orientation Offset

When using the direct georeferencing approach, the image plane orientation angles are independently computed by the INS. The INS-derived matrix $\mathbf{R}_{INS}^M(t)$ (rotating the INS-frame into the M-frame) is obtained from the INS/GPS integration scheme shown in the middle panel of Figure 7, using the KINGSPAD software of The University of Calgary. For details on the strapdown INS mechanization and on GPS/INS integration modeling and error estimation, see Schwarz (1998). $\mathbf{R}_{INS}^M(t)$ is the product of the elementary rotation matrices corresponding to the vehicle axes, using the navigation conventions (Y_INS pointing forward, X_INS pointing east, and Z_INS pointing upward). It can be expressed by:

$$\mathbf{R}_{INS}^M(t) = \mathbf{R}_3\!\left(\psi(t)\right)\mathbf{R}_1\!\left(\theta(t)\right)\mathbf{R}_2\!\left(\varphi(t)\right) \tag{9}$$

where φ, θ, and ψ are the three Euler angles roll, pitch, and yaw, respectively. These Euler angles have to be related to the c-frame angles ω_i(t), φ_i(t), and κ_i(t) of each imaging sensor, using the formula:

$$\mathbf{R}_{c_i}^M(t) = \mathbf{R}_{INS}^M(t)\,\mathbf{R}_{c_i}^{INS} \tag{10}$$

where $\mathbf{R}_{c_i}^{INS}$ is the transformation matrix rotating the c-frame of imaging sensor i into the INS body-frame (following the subscript/superscript convention of Section 1); it will be called the INS/camera orientation offset in the sequel. Substituting Equation 10 into Equation 8 yields:

$$\mathbf{r}_G^M = \mathbf{r}_{E_i}^M(t) + \mathbf{R}_{INS}^M(t)\,\mathbf{a}_i^{INS} + s_{g_i}\,\mathbf{R}_{INS}^M(t)\,\mathbf{R}_{c_i}^{INS}\,\mathbf{r}_{g_i}^c(t) \tag{11}$$

Three methods can be used to compute the INS/camera orientation offsets $\mathbf{R}_{c_i}^{INS}$, namely:
1. Measuring the orientation offsets using an additional onboard sensor (Schwarz et al., 1993). In this case, Equation 10 can be modified to accommodate $\mathbf{R}_{c_i}^{INS}$ as a function of time, and the INS and the camera can thus be allowed to rotate with respect to each other.
2. Tightly coupling the INS and the cameras during photography. Using aerotriangulation by GCPs, the image exterior orientation parameters can be determined and, together with the INS attitude angles, used to compute the orientation offsets.
3. The same as method 2, except that the implementation is in close-range fashion.
By tightly coupling the INS and the cameras during photography, their orientation offsets can be kept fixed over time. The orientation offsets can thus be computed once per flight using a target field, either in aerial or in close-range fashion. If the INS and the cameras are detached from their frame between flights, the offset has to be recomputed for each flight. If these sensors are permanently attached to their mount, the offsets are determined only once after equipment installation and can be checked on a regular basis. The procedures, differences, advantages, and drawbacks of methods 2 and 3 are listed in Table 3.
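For methods 2 and 3, each calibration image yields one sample of the orientation offset via Equation 10 rearranged as $\mathbf{R}_{c_i}^{INS} = (\mathbf{R}_{INS}^M)^T\,\mathbf{R}_{c_i}^M$. The sketch below computes such a sample and averages several of them; the SVD-based averaging step is our own illustrative choice, not a procedure described in the paper.

```python
import numpy as np

def boresight_sample(R_INS_M, R_c_M):
    """Equation 10 rearranged: R_c^INS = (R_INS^M)^T R_c^M, one sample
    per calibration image (R_c^M from aerotriangulation by GCPs,
    R_INS^M from the INS at the same epoch)."""
    return R_INS_M.T @ R_c_M

def average_rotation(samples):
    """Average several boresight samples by projecting the element-wise
    mean back onto the rotation group via SVD (an illustrative choice
    of estimator, not the paper's procedure)."""
    U, _, Vt = np.linalg.svd(np.mean(samples, axis=0))
    R = U @ Vt
    if np.linalg.det(R) < 0.0:        # guard against an improper rotation
        U[:, -1] *= -1.0
        R = U @ Vt
    return R
```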

Figure 7. Direct Image Georeferencing Data Flow

Table 3. INS/Camera Orientation Offset Calibration

Method: In-Flight (aerial)

Procedure:
1. Establish a targeted GCP field (in the mapped area, close to the airport, etc.).
2. Pre-calibrate the digital camera in close-range mode.
3. Fly over the pre-surveyed target field before or after the mission.
4. Compute the camera exterior orientation using aerotriangulation by GCPs, with Equation 1 rearranged for $\mathbf{R}_{c_i}^M(t)$ and $\mathbf{r}_{E_i}^M(t)$.
5. Extract the image orientation angles from the INS/GPS-derived trajectory.
6. Compute the INS/camera orientation offset from steps 4 and 5, using Equation 10 rearranged for $\mathbf{R}_{c_i}^{INS}$.

Advantages:
• Consistent with the operational environment
• Cancels INS errors developed over time before photography
• Horizontal maneuvers improve the INS-derived azimuth precision
• Much more convenient if the system is used with different aircraft

Drawbacks:
• GCPs needed for calibration
• Calibration accuracy is dependent on the camera geometry (poorer results for smaller image sizes)
• Time-consuming process, especially if the target field is far from the mapped area

Method: Post-Mission (close-range)

Procedure:
1. Establish a targeted GCP field (in the lab, e.g., Figure 5).
2. Capture a set of images and run the static INS (applying ZUPTs and CUPTs) during each image acquisition.
3. Simultaneously compute the camera interior geometry and exterior orientation parameters using aerotriangulation by GCPs only, with Equation 1 rearranged for $\mathbf{R}_{c_i}^M(t)$ and $\mathbf{r}_{E_i}^M(t)$ in self-calibrating mode.
4. Compute the INS-derived attitude parameters at the instant of capture of each image.
5. Compute the INS/camera orientation offset and the camera interior geometry from steps 3 and 4, using Equation 10 rearranged for $\mathbf{R}_{c_i}^{INS}$.

Advantages:
• No GCPs needed during aerial photography
• Stronger network geometry for aerotriangulation
• Higher-precision target mensuration in imagery (typically 1/20 - 1/50 pixel)
• Much more controlled than the in-flight approach

Drawbacks:
• Geometry is different from that obtained during aerial image acquisition
• Not consistent with the operational environment
• INS-derived azimuth is of poorer quality than that achieved in the airborne case
• Any error in equipment installation results in the loss of the calibration parameters, and the calibration has to be repeated

5. TESTING THE MULTI-SENSOR SYSTEM

A test flight was conducted over the university campus in Calgary, which covers an area of about 150 hectares. According to the characteristics of the digital cameras (9 mm x 13 mm image size and f = 28 mm and 52 mm, respectively, which yield the image scales shown in Figure 8) and their data rate (typically 4 seconds per image), the flight pattern was designed to cover the entire campus by a block with standard 60% endlap and 40% sidelap, using repeated flight lines. The flight pattern is shown in Figure 9. A total of 124 nadir and 108 oblique images were collected. Figures 10 and 11 show a stereopair of nadir and oblique images that was used for the preliminary results discussed in Section 6.

Figure 8. Image Scale and Ground Resolution (average image scale and ground resolution in m vs. flight altitude in m, for the nadir and oblique cameras)

Figure 9. GPS/INS-derived Trajectory (latitude vs. longitude, showing the Calgary Airport and the U of C campus)

This test flight was intended to assess the system performance and its capability to produce large-scale digital mapping products. The flight altitude was therefore chosen to be about 400 m, which yielded a 13 cm pixel resolution for the nadir camera and 7 cm for the oblique camera.
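These figures follow directly from the camera geometry: the nominal image scale is f/H and the ground pixel size is p·H/f. A short check (ignoring the tilt of the oblique camera, so its value is the nominal one at the image center):

```python
# Nominal image scale (f / H) and ground pixel size (p * H / f) at the
# 400 m flight altitude, with p = 9 um pixels.
for name, f_mm in (("nadir", 28.0), ("oblique", 52.0)):
    scale = f_mm * 1.0e-3 / 400.0
    gsd_m = 9.0e-3 * 400.0 / f_mm            # p[mm] * H[m] / f[mm] -> m
    print(f"{name}: scale 1:{1.0 / scale:.0f}, ground pixel {gsd_m:.2f} m")
# nadir: 1:14286, 0.13 m; oblique: 1:7692, 0.07 m (cf. 13 cm and 7 cm)
```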

Figure 10. A Nadir Digital Image Captured Over the U of C Campus, from the October 1998 Test Flight

Figure 11. An Oblique Digital Image Captured Over the U of C Campus, from the October 1998 Test Flight

5.1. GCP Preparation

There are three main reasons to establish GCPs with high accuracy in the test area. First, an independent reference is needed to test the overall system performance using the GPS/INS-derived trajectory and both the nadir and oblique image stereopairs. Secondly, the possibility of calibrating the digital cameras in flight can be tested and compared to the close-range approach followed here. Lastly, the in-flight aerial calibration approach for the entire system, as discussed in Section 4.3, can be tested in this way. The U of C campus was chosen because of its detailed urban character as well as the ease with which GCPs could be placed on roads, in parking lots, and on building rooftops. Almost a hundred GCPs were painted all over the campus. They were distributed in such a way that large height differences between adjacent points could be obtained (e.g., on the ground and on building rooftops), in order to be able to calibrate the digital cameras and the entire system in flight, as shown in Figure 10. All GCPs were surveyed using fast-static GPS techniques utilizing multiple master stations. A network adjustment was finally performed to double-check the GPS-fixed baselines. The network adjustment yielded GCP coordinate precisions at the level of 6 mm in the horizontal and 1.0 cm in height.

5.2. Forward Image Motion Compensation

Since the digital cameras at hand were not designed for aerial photography, no image motion compensation is implemented in these cameras. This contaminates the imagery with a motion blur, which can be computed as:

$$\beta = \frac{\tau\,f}{Z_E^M}\sqrt{v_n^2 + v_e^2} \tag{12}$$

where β is the image blur; τ is the camera shutter speed; and v_n and v_e are the north and east velocity components, respectively. Figure 12 shows the β value for both cameras at different flying altitudes and indicates that image blur is inevitable, especially at low flight altitudes. In this test, the nadir camera had an automatic SLR lens; the shutter speed was therefore constrained to 1/500 s, and the f-stop was adjusted automatically in real time. The oblique camera, on the other hand, was used with an ordinary 52 mm lens set at f/5.6, so that no adaptation to different illumination conditions was possible. A byproduct of the GPS/INS trajectory determination is the high-precision velocity of the carrier platform. As shown in the third panel of Figure 7, these velocity values can be used to restore the digital imagery prior to its processing. Such a restoration is a software-level emulation of the forward image motion compensation that AFCs implement in hardware. It is both simple and accurate when driven by the high-precision INS/GPS velocity, because in fixed-wing applications the blur is dominated by a single direction and can therefore be compensated by a linear shift in image coordinates opposite to the flight direction.
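As a quick numerical check of Equation 12 under the test conditions, the sketch below evaluates the blur for the nadir camera at the 400 m flight altitude; the 60 m/s ground speed is an assumed, typical value for the survey aircraft, not a figure from the paper.

```python
import math

def image_blur_px(tau_s, f_mm, height_m, v_north, v_east, p_um):
    """Image blur of Equation 12, converted from mm to pixels."""
    beta_mm = tau_s * f_mm / height_m * math.hypot(v_north, v_east)
    return beta_mm / (p_um * 1.0e-3)

# Nadir camera at the test conditions (1/500 s, f = 28 mm, H = 400 m,
# p = 9 um); the 60 m/s ground speed is an assumption.
print(image_blur_px(1.0 / 500.0, 28.0, 400.0, 60.0, 0.0))   # ~0.93 pixel
```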

Figure 12. Digital Image Blur (image blur in pixels vs. flight height in m, for the nadir and oblique cameras)

5.3. GPS/INS Processing Results

Two master stations were kept running during the test period to allow for differential GPS positioning. One, located close to the airport, was occupied by a Trimble 4000 SSI; the other, on the rooftop of the U of C engineering building, was occupied by an Ashtech Z XII receiver. This resulted in two independent GPS trajectories which, when compared, provided consistent results. The Ashtech GPS raw data together with the INS specific force and angular

velocity outputs were processed using the KINGSPAD software. Figure 13 shows the Kalman filter-derived estimates of the positioning precision for the three components as well as the PDOP during the entire test flight.


Figure 13. GPS/INS Positioning Accuracy

6. MULTI-SENSOR SYSTEM PERFORMANCE

The INS/camera orientation offsets were determined using one image over the campus and its time-synchronized INS-derived attitude angles. The system calibration parameters were then applied to compensate for the spatial position offsets of the GPS/INS/cameras and for the INS/camera orientation offsets. The reduced exterior orientation parameters were then introduced to the nadir and oblique stereopair shown in Figures 10 and 11. Image coordinates of the five GCPs commonly visible in that stereopair, and of 18 tie points, were manually measured one image at a time. The reduced exterior orientation parameters, camera interior geometry, camera lens distortions, and image coordinates were used in a bundle adjustment to solve for the GCP coordinate values. The independently computed GCP coordinates were then compared to those surveyed by GPS. The differences are RMS_X = 54 cm, RMS_Y = 61 cm, and RMS_Z = 78 cm. This accuracy level is still poorer than the results from computer simulations; manual mensuration of the targets in the imagery contributes a major part of the error budget. The accuracy is, however, sufficient for applications that need results in near real-time and for cases where image target mensuration is done manually, as shown here. To achieve the ultimate system accuracy, all the GCPs should be measured on a digital photogrammetric softcopy workstation, e.g., PCI or the Leica/Helava DPW. Further, a stereopair of vertical photographs augmented by an oblique image would be a better processing strategy, since it increases redundancy and improves the geometry, especially for the horizontal components. The three sources of error which most strongly affect the overall system accuracy are the accuracy of the navigation component, the resolution and geometry of the imaging component, and the overall system calibration. Figure 14 depicts these factors in some detail. Processing different stereopairs, single strips, double strips, and blocks of images is planned for the next few weeks, and the results will be presented at the meeting.

Figure 14. Factors Affecting Direct Image Georeferencing By GPS/INS
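For reference, the RMS values quoted above correspond to a straightforward per-axis comparison of the bundle-derived check point coordinates against their GPS-surveyed values, e.g.:

```python
import numpy as np

def rms_per_axis(computed_xyz, surveyed_xyz):
    """Per-axis RMS of coordinate differences between directly
    georeferenced points and independently surveyed GCPs."""
    d = np.asarray(computed_xyz) - np.asarray(surveyed_xyz)
    return np.sqrt(np.mean(d ** 2, axis=0))

# rms_x, rms_y, rms_z = rms_per_axis(bundle_coords, gps_coords)
# (bundle_coords and gps_coords are hypothetical n x 3 arrays; with the
# stereopair of Figures 10 and 11 this comparison gave 0.54, 0.61, 0.78 m)
```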

7. SUMMARY AND FUTURE WORK

The multi-sensor system developed in this research incorporates GPS, INS, and two digital cameras. The system was tested over The University of Calgary campus in October 1998. The accuracy achieved for ground object positioning using a single stereopair of a nadir and an oblique image is RMS_X = 0.54 m, RMS_Y = 0.61 m, and RMS_Z = 0.78 m, as compared to GPS-derived coordinates. The accuracy achieved with the system so far meets the requirements of most resource applications, with a much faster turn-around time than comparable missions with AFCs. It is expected that better results can be achieved when using strip or block adjustment techniques. The planned future work is as follows:
1. To investigate the system performance using the two different calibration strategies explained in Section 4.3;
2. To mensurate image targets on a digital photogrammetric softcopy workstation;
3. To process different stereopairs of overlapping vertical and oblique images;
4. To process single strips of digital images for corridor mapping purposes; and
5. To process an entire block of images using the GPS/INS data with a minimal GCP configuration, to investigate by how much the overall system accuracy can be improved.

ACKNOWLEDGMENT

Financial support for this research was obtained through an NSERC operating grant of the second author, and through an Egyptian scholarship and Graduate Research Scholarships and Special Awards from The University of Calgary to the first author. Mrs. T. Ludwig, J. Yom, A. Bruton, and J. Škaloud are gratefully acknowledged for their cooperation during the Calgary test flight period. Geodesy Remote Sensing Inc., Calgary, is gratefully acknowledged for their help in this project.

REFERENCES

Cramer, M., D. Stallmann, and N. Haala, 1997. High Precision Georeferencing Using GPS/INS and Image Matching. Proceedings of the International Symposium on Kinematic Systems in Geodesy, Geomatics and Navigation, Banff, Canada, pp. 453-462.

DeVenecia, K., S. Miller, R. Pacey, and S. Walker, 1996. Experiences With a Commercial Package for Automated Aerial Triangulation. Technical Papers of the ACSM/ASPRS Annual Convention, 1:548-557.

Fraser, C.S., 1997. Digital Camera Self-Calibration. ISPRS Journal of Photogrammetry & Remote Sensing, 52:149-159.

Grejner-Brzezinska, D.A., 1998. Direct Platform Orientation with Tightly Integrated GPS/INS in Airborne Applications. ION GPS-98, Nashville, Tennessee, Sept. 15-18, pp. 885-894.

King, D., P. Walsh, and F. Ciuffreda, 1994. Airborne Digital Frame Camera for Elevation Determination. PE&RS, 60(11):1321-1326.

Lechner, W. and P. Lahmann, 1995. Airborne Photogrammetry Based on Integrated DGPS/INS Navigation. Proceedings of the 3rd International Workshop on High Precision Navigation (K. Linkwitz and U. Hangleiter, eds.), pp. 303-310.

Lichti, D. and M.A. Chapman, 1997. Constrained FEM Self-Calibration. PE&RS, 63(9):1111-1119.

Maas, H.-G. and T. Kersten, 1997. Aerotriangulation and DEM/Orthophoto Generation from High-Resolution Still-Video Imagery. PE&RS, 63(9):1079-1084.

Mausel, P.W., J.H. Everitt, D.E. Escobar, and D.J. King, 1992. Airborne Videography: Current Status and Future Perspectives. PE&RS, 58(8):1189-1195.

Mills, J.P., I. Newton, and R.W. Graham, 1996. Aerial Photography for Survey Purposes With a High Resolution, Small Format, Digital Camera. Photogrammetric Record, 15(88):575-587.

Moffit, F. and E.M. Mikhail, 1980. Photogrammetry. Harper and Row, Inc.

Mostafa, M.M.R., K.P. Schwarz, and P. Gong, 1997. A Fully Digital System for Airborne Mapping. Proceedings of the International Symposium on Kinematic Systems in Geodesy, Geomatics and Navigation, Banff, Canada, pp. 463-471.

Mostafa, M.M.R., K.P. Schwarz, and M.A. Chapman, 1998. Development and Testing of an Airborne Remote Sensing Multi-Sensor System. International Archives of Photogrammetry and Remote Sensing, 32(2):217-222.

Novak, K., 1992. Application of Digital Cameras and GPS for Aerial Photogrammetric Mapping. International Archives of Photogrammetry and Remote Sensing, 29(B4):5-9.

Reid, D.B., E. Lithopoulos, and J. Hutton, 1998. Position and Orientation System for Direct Georeferencing (POS/DG). Proceedings of the ION 54th Annual Meeting, pp. 445-449.

Schwarz, K.P., M.A. Chapman, M.E. Cannon, and P. Gong, 1993. An Integrated INS/GPS Approach to the Georeferencing of Remotely Sensed Data. PE&RS, 59(11):1667-1674.

Schwarz, K.P., 1995. Integrated Airborne Navigation Systems for Photogrammetry. Photogrammetric Week '95 (D. Fritsch and D. Hobbie, eds.), Wichmann, pp. 139-153.

Schwarz, K.P., 1998. Mobile Multi-Sensor Systems: Modelling and Estimation. Proceedings of the International Association of Geodesy Special Commission 4, Eisenstadt, Austria, pp. 347-360.

Shufelt, J.A., 1996. Projective Geometry and Photometry for Object Detection and Delineation. Ph.D. Thesis, CMU-CS-96-164, School of Computer Science, Carnegie Mellon University, PA, USA.

Škaloud, J., M. Cramer, and K.P. Schwarz, 1996. Exterior Orientation by Direct Measurement of Position and Attitude. International Archives of Photogrammetry and Remote Sensing, 31(B3):125-130.

Toth, C. and D.A. Grejner-Brzezinska, 1998. Performance Analysis of the Airborne Integrated Mapping System (AIMS). International Archives of Photogrammetry and Remote Sensing, 32(2):320-326.