INT. J. REMOTE SENSING, 20 JULY 2004, VOL. 25, NO. 14, 2833–2853

Photogrammetric exploitation of IKONOS imagery for mapping applications

C. VINCENT TAO*†, Y. HU‡ and W. JIANG§

†Geospatial Information and Communication Technology (GeoICT) Lab, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3
‡Department of Geomatics Engineering, The University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4
§LIESMARS, Wuhan University, 129 Luoyu Road, Wuhan, Hubei, China, 430079

(Received 1 November 2002; in final form 11 July 2003)

Abstract. The launch of IKONOS by Space Imaging opened a new era of high-resolution satellite imagery collection and mapping. The IKONOS satellite simultaneously acquires 1 m panchromatic and 4 m multi-spectral images in four bands that are suitable for high-accuracy mapping applications. Space Imaging uses the rational function model (RFM), also known as the rational polynomial camera model, instead of the physical IKONOS sensor model to communicate the imaging geometry. As recent studies by several researchers have shown, the RFM retains the full capability of performing photogrammetric processing in the absence of the physical sensor model. This paper presents RFM-based processing methods and mapping applications developed for 3D feature extraction, orthorectification and RPC model refinement using IKONOS imagery. Comprehensive experiments are performed to assess the accuracy of 3D reconstruction and orthorectification and to validate the feasibility of the model refinement techniques.

1. Introduction

The launch of IKONOS by Space Imaging opened a new era of high-resolution satellite imagery collection and has promoted the mapping applications of satellite imagery. The IKONOS satellite simultaneously acquires 1 m panchromatic and 4 m multi-spectral images in four bands with 11-bit radiometric resolution, which are suitable for high-accuracy photogrammetric processing and mapping applications. Space Imaging provides a broad range of mono and stereo products, including geo-rectified, ortho-rectified and stereo imagery products at different accuracy levels (Dial 2000, Grodecki and Dial 2001). Initial investigations of IKONOS imagery exploitation for mapping applications have been performed. Dial et al. (2001) studied automated road extraction, mainly in suburban areas, using IKONOS imagery. Hofmann (2001) combined a lidar digital surface model (DSM) with pan-sharpened IKONOS data to detect buildings and roads in urban areas using

*Corresponding author; e-mail: [email protected]

International Journal of Remote Sensing ISSN 0143-1161 print/ISSN 1366-5901 online © 2004 Taylor & Francis Ltd http://www.tandf.co.uk/journals DOI: 10.1080/01431160310001618392

2834

C. V. Tao et al.

the segmentation and object-oriented classification techniques provided by the software eCognition. Baltsavias et al. (2001) reported their work on 3D building reconstruction. Instead of delivering the interior and exterior orientation geometry and other physical properties associated with the physical IKONOS sensor, Space Imaging uses the rational function model (RFM), also known as the rational polynomial camera (RPC) model, to communicate the imaging geometry. The RFM supplied by Space Imaging with the imagery is determined by a terrain-independent approach and approximates the physical IKONOS sensor model very well. There are two different ways to determine the physical IKONOS sensor model, depending on the availability and usage of ground control points (GCPs). Without GCPs, the orientation parameters are derived from the satellite ephemeris and attitude. The satellite ephemeris is determined using onboard global positioning system (GPS) receivers and sophisticated ground processing of the GPS data. The satellite attitude is determined by optimally combining star tracker data with measurements taken by the onboard gyros. When GCPs are used, the modelling accuracy can be significantly improved, as shown in Dial (2000). The specified horizontal accuracies of the Geo, Reference, Pro, Precision and Precision Plus IKONOS products are 50 m, 25 m, 10 m, 4 m and 2 m CE90 (Circular Error, 90%), respectively. The Geo product is projected with no terrain model used, so its stated accuracy is exclusive of terrain displacement. The Reference, Pro, Precision and Precision Plus products are all ortho-rectified using digital terrain models (DTMs). The Standard stereo product does not use GCPs, and its specified accuracy is 25 m CE90 horizontally and 22 m LE90 (Linear Error, 90%) vertically. The Precision stereo product uses GCPs and has a planimetric accuracy of 2 m CE90 and a vertical accuracy of 3 m LE90.
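The CE90 and LE90 figures above can be related to one-sigma standard deviations if the errors are assumed normally distributed — a common simplification, not something the product specifications themselves state. A minimal sketch under that assumption:

```python
from math import sqrt, log
from statistics import NormalDist

def sigma_from_le90(le90):
    """1D vertical case: LE90 = 1.6449 * sigma for a zero-mean normal error."""
    return le90 / NormalDist().inv_cdf(0.95)

def sigma_from_ce90(ce90):
    """2D circular case with equal per-axis sigmas: the radial error is
    Rayleigh distributed, so CE90 = sigma * sqrt(-2 ln 0.1) ~= 2.1460 * sigma."""
    return ce90 / sqrt(-2.0 * log(0.1))

# e.g. the Precision product: 2 m CE90 horizontal, 3 m LE90 vertical
sigma_h = sigma_from_ce90(2.0)  # per-axis horizontal sigma, just under 1 m
sigma_v = sigma_from_le90(3.0)  # vertical sigma, roughly 1.8 m
```

The 2.146 factor applies only to the circular case with equal, uncorrelated error components in the two horizontal axes.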
In addition, the stereo products are resampled to epipolar geometry with epipolar resampling errors minimized. Use of the RFM to 'replace' rigorous physical sensor models has been in practice for over a decade (Paderes et al. 1989). The RFM was initially used in the US military intelligence community. Recently, the RFM has gained considerable interest in the remote sensing and Geographical Information System (GIS) communities, mainly because some commercial satellite data vendors (e.g. Space Imaging and Digital Globe, CO, USA) have adopted it as a replacement sensor model for image exploitation. Such a strategy may help keep information about the sensors confidential, as it is not possible or practical to derive the physical sensor parameters from the RFM. On the other hand, the RFM facilitates the application of high-resolution satellite imagery owing to its high fitting capability and simplicity, and enables photogrammetric interoperability of imagery from different vendors owing to its generality. A number of publications have become available to researchers, developers and users, mainly in the last four years (Madani 1999, Dowman and Dolloff 2000, Yang 2000, Tao and Hu 2001a, b). The least-squares solution to the nonlinear RFM has been derived and described (Tao and Hu 2001a, Hu and Tao 2002). The numerical properties and accuracy assessments of the use of the RFM for replacing physical sensor models are reported in Tao and Hu (2001a, b), Yang (2000) and Dowman and Dolloff (2000). In Tao and Hu (2001a) it is reported that, under the terrain-independent scenario, the RPC model yields a worst-case error below 0.055 pixels for SPOT imagery compared with its rigorous model. For IKONOS imagery, the RPC model differs from its rigorous model by a worst-case error of 0.04 pixels, and an RMS error below 0.01 pixels, under all possible acquisition conditions (Grodecki 2001). Numerous tests have shown that the RFM


can approximate the rigorous sensor models with no distinguishable loss of accuracy and also retains the full capability of performing photogrammetric processing in the absence of the physical sensor model. Orthorectification and DSM generation are the two most important methods for imagery exploitation and mapping applications. The use of the RFM for image rectification is discussed in Yang (2000) and Tao and Hu (2001a, b). RFM-based 3D reconstruction has been implemented in some softcopy photogrammetric software packages (Paderes et al. 1989, Madani 1999, ERDAS 2001, Tao and Hu 2001a) for the generation of DSMs with the aid of image matching techniques. Yang (2000) described an RFM-based iterative procedure to compute the object point coordinates from a stereo pair. In his method, an inverse form of the RFM is used to establish the 3D reconstruction. Tao and Hu (2002) discussed 3D reconstruction algorithms based on the inverse and forward RFM forms and compared the performance of the two reconstruction methods using several stereo pairs of aerial photography and IKONOS imagery. In Di et al. (2001) and Baltsavias et al. (2001), procedures similar to the above forward or inverse RFM 3D reconstruction are described, with tests using aerial imagery, simulated IKONOS data, high resolution stereo colour (HRSC) imagery and an IKONOS stereo pair. In Toutin et al. (2001), the authors state that they have developed a parametric sensor model that reflects the physical reality of the complete viewing geometry for the IKONOS sensor, in the absence of detailed sensor information. Thus, for stereo images, both co-linearity and co-planarity conditions can be used to simultaneously compute the interior and exterior orientation parameters in a least-squares bundle adjustment. In addition, users may wish to update or improve existing RFM solutions (provided, for example, by the vendors) when they have proprietary GCPs.
Hu and Tao (2002) proposed an incremental technique to correct the RPCs directly. When the covariance matrix of the RPCs is available, users can achieve a reliable update by controlling the system's sensitivity to new control information in the Kalman filtering process. Fraser et al. (2001) proposed solving this problem by computing their own parameters for the (relief-corrected) affine projection and (extended) DLT (direct linear transformation) models, and found that these first-order models could yield sub-pixel positioning accuracy because of the very small field of view of the IKONOS sensor (<1°). Grodecki and Dial (2003) proposed the block adjustment of RPC models, which has been shown by simulated and real numerical results to be an accurate and effective method. Their work is important because it shows that, by adding some adjustable terms that are physically significant, the RPC model becomes competent for traditional photogrammetric processing. The past difficulty that the RFM was not suitable for direct adjustment by analytical triangulation (OGC 1999a, Tao and Hu 2001b) is thus overcome in an alternative way. In Fraser and Hanley (2003), two bias parameters are introduced to compensate for the horizontal displacement in image space due to the lateral shift of the IKONOS platform and to generate bias-corrected RPCs. One benefit of this direct updating of the coefficients is that exploitation algorithms developed on the basis of the RFM can be used without any change. Rational Mapper™ has been developed at York University to ease map production from high-resolution satellite imagery by implementing the above algorithms using the RFM as its image geometry model. In this paper, important processing methods and mapping applications are presented that have been developed for 3D feature extraction, orthorectification and RFM refinement, all


based on the RFM. The basic equations of the forward and inverse RFM forms are given first, and two computational scenarios for determining the RPCs are discussed. Then, the principles and derivations of the two RFM-based 3D reconstruction methods, namely the forward RFM method and the inverse RFM method, are described briefly. In a later section, two approaches are presented for refining an existing RFM when additional GCPs are available, given that the physical sensor model is unknown. Finally, the exploitation results are shown using an aerial photograph and two IKONOS stereo pairs. As an application example, the forward RFM is used to perform orthorectification, 3D feature extraction and RFM refinement for the IKONOS stereo pairs based on the RPCs supplied by Space Imaging.

2. Imagery exploitation based on the RFM

2.1. Background

A sensor model relates 3D object point positions to their corresponding 2D image positions; it describes the geometric relationship between the image space and the object space. Physical sensor models and generalized sensor models are the two broad categories of sensor models in use (McGlone 1996). A well-designed sensor model ensures that 3D feature extraction and orthorectification products generated from the imagery are accurate. As described in the OpenGIS document (OGC 1999a), there are three main replacement sensor models: the grid interpolation model, the RFM and the universal real-time sensor model. These models are all generalized, i.e. the model parameters do not carry the physical meanings of the imaging process. The term 'replacement sensor model' can sometimes be confusing. In fact, from an end-user's perspective, with the replacement sensor model provided, the user can perform photogrammetric processing without needing to know the rigorous sensor model, the sensor type or the physical imaging process.
The primary purpose of using replacement sensor models is their sensor independence, high fitting accuracy and real-time calculation capability (Madani 1999, Dowman and Dolloff 2000, Tao and Hu 2001b). To be able to replace the rigorous physical sensor model, the rigorous model is often used to determine the unknown coefficients contained in the replacement sensor model. Thus, the achievable accuracy of photogrammetric imagery exploitation using the replacement sensor model is mainly governed by the accuracy of the original rigorous sensor model used for solving those unknown coefficients. Orthorectification and DSM generation, the two most important methods for imagery exploitation and applications, are discussed following a brief introduction to the concepts of the RFM.

2.1.1. Rational functions

The RFM relates object point coordinates (X, Y, Z) to image pixel coordinates (r, c), as any other sensor model does, but in the form of rational functions, i.e. ratios of polynomials. For the ground-to-image transformation, the ratios of polynomials have the forward form (OGC 1999a):

$$r = \frac{p_1(X, Y, Z)}{p_2(X, Y, Z)}, \qquad c = \frac{p_3(X, Y, Z)}{p_4(X, Y, Z)} \qquad (1)$$


where r and c are the row and column indices of pixels in the image, respectively, and X, Y and Z are the coordinates of points in object space. Each numerator and denominator in equation (1) has 20 terms in the third-order case. In order to minimize the introduction of errors during computation, the two image coordinates and the three object point coordinates are each offset and scaled to fit the range from −1.0 to +1.0 over an image or image section (OGC 1999a, NIMA 2000). Several names are used to refer to these polynomial coefficients, namely rational function coefficients (RFCs), rational polynomial coefficients (RPCs), rapid positioning capability (RPC) data and rational polynomial camera (RPC) data. In the existing literature, the RPC model often refers to rational functions with third-order polynomials, whereas the RFM often refers to general rational functions with some variations. The RFM and the universal sensor model (USM) have been adopted by the OpenGIS Consortium (OGC 1999a) as part of the standard image transfer format owing to their capability of maintaining the full accuracy of physical sensor models and their sensor independence. Imagery vendors compute the RPCs for each image product and deliver them together with the image and other related metadata. Although the ordering of the polynomial terms is arbitrary and differs across the literature, the order defined in NIMA (2000) has been adopted by Space Imaging and Digital Globe and has thus become the industry standard. For the image-to-ground transformation, an inverse form is defined below (Yang 2000):

$$X = \frac{p_5(r, c, Z)}{p_6(r, c, Z)}, \qquad Y = \frac{p_7(r, c, Z)}{p_8(r, c, Z)} \qquad (2)$$
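As an illustration, the forward form in equation (1) can be evaluated with a few lines of Python. The 20-term ordering and the metadata field names below are assumptions for readability; production code must follow the exact RPC term order and tags defined in NIMA (2000):

```python
import numpy as np

def poly20(x, y, z, coeffs):
    """Evaluate a 20-term cubic polynomial in normalized coordinates.

    NOTE: this term ordering is illustrative only; real RPC data uses the
    ordering defined in NIMA (2000)."""
    terms = np.array([
        1, x, y, z, x*y, x*z, y*z, x*x, y*y, z*z,
        x*y*z, x**3, x*y*y, x*z*z, x*x*y, y**3, y*z*z, x*x*z, y*y*z, z**3,
    ])
    return float(np.dot(coeffs, terms))

def rfm_forward(lon, lat, h, rpc):
    """Forward RFM: ground (lon, lat, h) -> image (row, col), as in eq. (1).

    `rpc` is a dict holding the offsets/scales and the four 20-term
    coefficient vectors; the key names are hypothetical, not the vendor's."""
    # Normalize ground coordinates to roughly [-1, +1]
    X = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    Y = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    Z = (h - rpc["h_off"]) / rpc["h_scale"]
    r = poly20(X, Y, Z, rpc["line_num"]) / poly20(X, Y, Z, rpc["line_den"])
    c = poly20(X, Y, Z, rpc["samp_num"]) / poly20(X, Y, Z, rpc["samp_den"])
    # De-normalize the image coordinates
    return (r * rpc["line_scale"] + rpc["line_off"],
            c * rpc["samp_scale"] + rpc["samp_off"])
```

The inverse form of equation (2) is evaluated the same way, with (r, c, Z) as the normalized inputs and (X, Y) as the outputs.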

The inverse form expresses the planimetric object point coordinates as rational functions of the image coordinates and the vertical object coordinate.

2.1.2. Determination of the RPCs

The RPCs can be solved for with or without the physical sensor model. With a known physical sensor model, an image grid covering the full extent of the image can be established, and its corresponding 3D object grid can be generated with each grid point's coordinates calculated from its conjugate image point using the physical sensor model. The RPCs are then estimated using a direct least-squares solution, with the object grid points (X, Y, Z) and the conjugate image grid points (r, c) as input. The RFM acts as a fitting function between the image grid and the object grid. Tests have shown that the RFM determined using this approach can achieve a very high approximation accuracy to the original physical sensor model; that is, the RFM retains the full accuracy of the physical sensor model. Therefore, when the RFM is used for imagery exploitation such as orthorectification and 3D reconstruction, the achievable accuracy is largely determined by the physical sensor model used (i.e. the accuracy of the object grid generated using the physical sensor model). No actual terrain information is required for this approach; it is therefore called the terrain-independent solution to the RFM. This terrain-independent computational scenario makes the RFM a near-perfect and safe replacement for the original physical sensor model, and it has been widely used to determine the unknown coefficients of the RFM (Paderes et al. 1989, Madani 1999, Yang 2000, Grodecki and Dial 2001, Tao and Hu 2001a, Hu and Tao 2002). Without knowing the physical sensor model, the 3D object grid cannot be


generated. Therefore, the GCPs on the terrain surface have to be collected in a conventional way (e.g. from maps or an actual DTM). An iterative least-squares solution with regularization is then used to solve for the RPCs. In this case, the RFM tries to approximate the complicated imaging geometry across the image scene using its plentiful polynomial terms, and the solution is highly dependent on the actual terrain relief and on the number and distribution of GCPs across the scene. In this context, the RFM behaves as a complex fitting model, and over-parametrization may cause instability in the least-squares solution. A regularization technique is used to improve the condition number of the design matrix. This approach is called the terrain-dependent solution to the RFM. Unless a large number (e.g. twice the number of unknowns) of densely distributed GCPs is available, the terrain-dependent approach may not provide a sufficiently accurate and robust solution to the RFM. That is, an RFM established by this approach may not be usable as a replacement sensor model if high accuracy is required. However, the terrain-dependent approach can serve as a general tool for image registration, with some advantages and unique characteristics compared with regular polynomial-based methods (Tao and Hu 2001b).

2.1.3. Photogrammetric exploitation and interoperability

Multiple image geometry models are needed to exploit different image types under different conditions (OGC 1999b). There are many different types of imaging geometry, including frame, panoramic, pushbroom, whiskbroom and so on. Many of these imaging geometries have multiple subtypes (e.g. multiple small images acquired simultaneously) and multiple submodels (e.g. atmospheric refraction and lens distortion). These imaging geometries are sufficiently different that somewhat different rigorous image geometry models are required.
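To picture what a standardized image geometry model buys interoperable software, the sketch below has exploitation code depend only on a minimal, hypothetical interface rather than on any particular sensor model; all names are illustrative, and a real replacement model behind the interface would be an RFM:

```python
from typing import Protocol, Tuple

class ImageGeometryModel(Protocol):
    """Hypothetical minimal interface: any geometry model (rigorous or
    replacement) that maps between ground and image space fits here."""
    def ground_to_image(self, x: float, y: float, z: float) -> Tuple[float, float]: ...
    def image_to_ground(self, row: float, col: float, z: float) -> Tuple[float, float]: ...

class AffineModel:
    """Toy stand-in for a replacement sensor model."""
    def __init__(self, scale: float, r0: float, c0: float) -> None:
        self.scale, self.r0, self.c0 = scale, r0, c0

    def ground_to_image(self, x, y, z):
        return self.r0 + y * self.scale, self.c0 + x * self.scale

    def image_to_ground(self, row, col, z):
        return (col - self.c0) / self.scale, (row - self.r0) / self.scale

def project_point(model: ImageGeometryModel, x, y, z):
    """Exploitation code sees only the interface, not the sensor type."""
    return model.ground_to_image(x, y, z)
```

Any image delivered with RPCs can then be plugged into the same exploitation pipeline, which is the interoperability argument made in this section.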
Furthermore, different cameras of the same basic geometry can require different rigorous image geometry models. When the interoperation of several software packages is needed to complete one imagery exploitation service, it is necessary to standardize the image geometry models that are expected to have widespread use in interoperable software. OGC (1999b) has adopted a specification for the standardization of image geometry models. However, problems remain when combined adjustments among different image geometry models are needed. Because of its sensor independence, the use of the RFM would be a driving force towards photogrammetric interoperability among imagery exploitation software and service providers. If each sensor image comes with a set of RPCs (solved for or supplied by the data vendor), end-users and developers will be able to perform the subsequent photogrammetric processing without knowing the original sophisticated physical sensor model or taking account of the submodels associated with the sensors used to acquire the images. For example, stereo intersection can be done using multiple different types of images (e.g. an IKONOS image and a QuickBird image) as long as they overlap. This is very beneficial in terms of making photogrammetric processing interoperable, and makes it easy for users and service providers to integrate images acquired by different types of sensors from multiple data vendors. Therefore, one can develop a software component with a generic interface to handle the RFMs for various sensors, provided that the RPCs are supplied with each image. Of course, the differences in image scales and in the error estimates associated with the RPCs may not be appropriately handled by weighting during the adjustment,


and may result in decreased processing accuracy. However, this technique is of unique value for those exploitation conditions where high accuracy is not the most important criterion. A feasibility study of this approach has been carried out, and a detailed report will be provided in a separate publication. Inspired by the above idea, a standalone software package called Rational Mapper™ has been developed, which uses the RFM as the internal geometry model for imagery exploitation, including orthorectification, 3D reconstruction and feature extraction, 3D measurement using single images with the aid of DTMs, and tie point measurement. Figure 1 demonstrates the 3D feature extraction result for a nuclear plant using an IKONOS Standard stereo pair, visualized from the same viewpoint as a real aerial photograph taken in 2000. The ortho-rectified IKONOS image is draped, and the extracted buildings, buckets and chimneys are extruded, on the DTM. While the first version of Rational Mapper is still under beta testing, the next version will include more functions, including bundle adjustment (Grodecki and Dial 2003), automatic tie point collection, automatic DSM generation and so on, which will allow end-users to take full advantage of RFM-enabled imagery from any source. The current user interface of Rational Mapper is shown in figure 2. Rational Mapper is being commercialized and became available in early 2004.

2.2. Orthorectification

An original un-rectified aerial or satellite image does not show features in their correct locations owing to displacements caused by the tilt of the sensor and the relief of the terrain. Orthorectification transforms the central projection of the image into an orthogonal view of the ground with uniform scale, thereby removing the distorting effects of the tilted optical projection and the terrain relief. RFM-based orthorectification is relatively straightforward. Either the forward form or the inverse form of the RFM can be employed.
It results in two different rectification approaches: direct rectification – from the original image space (r, c) with elevation Z to the object space (X, Y); and indirect rectification – from the object space (X, Y, Z) to the original image space (r, c). There is no significant difference between these two

Figure 1. Airscape of a nuclear plant: (a) real aerial photography; (b) scene reconstructed by 3D feature extraction and orthorectification and virtualized using ERDAS VirtualGIS.


Figure 2. User interface of Rational Mapper (Version 1.0).


approaches in terms of the final results. The advantages and disadvantages of each approach, together with the resampling methods, can be found in Novak (1992). For IKONOS imagery, indirect rectification is often used because only the RPCs and normalization parameters for the forward form are provided by Space Imaging. This one-step ground-to-image computation can speed up the orthorectification significantly, especially for pushbroom or line scanner sensors such as SPOT PAN and Landsat Thematic Mapper (TM), for which an iterative process is necessary with rigorous sensor models.

2.3. Three-dimensional reconstruction

IKONOS stereo images have a large base-to-height ratio of 0.54 to 0.83, and have the potential to generate DTMs with an accuracy of about 2 m for producing national mapping products (Ridley et al. 1997). Three-dimensional reconstruction can be performed from the corresponding points in the stereo pair (or in the fore-, nadir- and aft-looking images) using the forward or the inverse form. It is worth noting that this can be extended to the case of multiple images, where each image is available with its own RFM. In contrast to conventional stereo intersection, there is no actual intersection of light rays occurring at the object point. Therefore, the term '3D reconstruction' is used instead of 'stereo intersection' throughout the paper to represent this virtual intersection.

2.3.1. Three-dimensional reconstruction with the forward form

Space Imaging provides only the values of the RPCs and normalization parameters for the forward RFM. The 3D ground position of an object point can be iteratively reconstructed from its conjugate image points after the RPCs of the forward RFM form in equation (1) are solved.
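The iterative reconstruction of §2.3.1 can be sketched numerically as follows. This is a simplified illustration, not the authors' implementation: it uses unit weights and a finite-difference Jacobian in place of the analytic partial derivatives, and the two callables stand in for the images' forward RFMs:

```python
import numpy as np

def reconstruct_point(forward_l, forward_r, obs_l, obs_r, x0, iters=8, eps=1e-4):
    """Iterative least-squares 3D reconstruction from a stereo pair.

    forward_l/forward_r: forward models mapping (X, Y, Z) -> (row, col).
    obs_l/obs_r: measured conjugate image points (row, col).
    x0: initial approximate ground coordinates [X, Y, Z].
    """
    x = np.asarray(x0, dtype=float)
    # Observation vector l, ordered [r_l, r_r, c_l, c_r]
    l_obs = np.array([obs_l[0], obs_r[0], obs_l[1], obs_r[1]], dtype=float)
    for _ in range(iters):
        def predict(p):
            rl, cl = forward_l(*p)
            rr, cr = forward_r(*p)
            return np.array([rl, rr, cl, cr])
        f0 = predict(x)
        # Finite-difference Jacobian: one column per ground coordinate
        A = np.empty((4, 3))
        for j in range(3):
            dp = x.copy()
            dp[j] += eps
            A[:, j] = (predict(dp) - f0) / eps
        # Unit-weight least-squares correction, as in equations (3)-(4)
        dx, *_ = np.linalg.lstsq(A, l_obs - f0, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

A production implementation would use the analytic partials of the rational functions and the weight matrix W discussed below, but the structure of the iteration is the same.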
First-order approximations are obtained by applying the Taylor expansion of r and c with respect to the three input variables X, Y, Z in equation (1), and then formulating the error equations for two conjugate image points (r_l, c_l) and (r_r, c_r) as

$$\begin{bmatrix} v_{r_l} \\ v_{r_r} \\ v_{c_l} \\ v_{c_r} \end{bmatrix} = \begin{bmatrix} \partial r_l/\partial X & \partial r_l/\partial Y & \partial r_l/\partial Z \\ \partial r_r/\partial X & \partial r_r/\partial Y & \partial r_r/\partial Z \\ \partial c_l/\partial X & \partial c_l/\partial Y & \partial c_l/\partial Z \\ \partial c_r/\partial X & \partial c_r/\partial Y & \partial c_r/\partial Z \end{bmatrix} \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix} - \begin{bmatrix} r_l - \hat{r}_l \\ r_r - \hat{r}_r \\ c_l - \hat{c}_l \\ c_r - \hat{c}_r \end{bmatrix}, \quad \text{i.e. } v = Ax - l \qquad (3)$$

where the estimates $\hat{r}$, $\hat{c}$ are obtained by substituting the initial approximate X, Y, Z coordinates into equation (1). The least-squares solution is

$$x = \begin{bmatrix} \Delta X & \Delta Y & \Delta Z \end{bmatrix}^{\mathrm{T}} = (A^{\mathrm{T}} W A)^{-1} A^{\mathrm{T}} W l \qquad (4)$$

where W is the weight matrix for the image points. It may be an identity matrix when the points are measured on images of the same type. However, higher weights should be assigned to points measured in images of higher resolution when performing a hybrid adjustment using images of different ground resolutions. The left and right images of a stereo pair are usually normalized separately, using different offset and scale values for the ground coordinates. Therefore, the equations should be modified so that those original, instead of separately normalized, ground


coordinates are used in the error equations for the left and right images (Tao and Hu 2002). The initial approximate values for the X, Y, Z coordinates may be obtained by solving the RFM with only the constant and first-order terms, by solving for X and Y from one image given an elevation value Z, or by setting them to the median values of the ground coordinate ranges. In most cases, eight iterations are enough to converge. When the initial approximate values are obtained from the inverse RFM reconstruction described below, two iterations are usually enough.

2.3.2. Three-dimensional reconstruction with the inverse form

After the RPCs of the inverse RFM form in equation (2) are solved for each image, the 3D object point coordinates can be iteratively calculated from the conjugate image points in the stereo pair. Similarly to the previous section, the Taylor expansion of X and Y with respect to the input variable Z is applied in equation (2), and the error equations for a pair of conjugate image points (r_l, c_l) and (r_r, c_r) are formulated as

$$\begin{bmatrix} v_X \\ v_Y \end{bmatrix} = \begin{bmatrix} \partial X_r/\partial Z - \partial X_l/\partial Z \\ \partial Y_r/\partial Z - \partial Y_l/\partial Z \end{bmatrix} \left[\Delta Z\right] - \begin{bmatrix} \hat{X}_l - \hat{X}_r \\ \hat{Y}_l - \hat{Y}_r \end{bmatrix} \qquad (5)$$

where $\hat{X}$, $\hat{Y}$ are estimated by substituting an initial value of Z into equation (2). The least-squares solution for the correction $\Delta Z$ is

$$\Delta Z = \frac{w_X \left(\hat{X}_l - \hat{X}_r\right)\left(\frac{\partial X_r}{\partial Z} - \frac{\partial X_l}{\partial Z}\right) + w_Y \left(\hat{Y}_l - \hat{Y}_r\right)\left(\frac{\partial Y_r}{\partial Z} - \frac{\partial Y_l}{\partial Z}\right)}{w_X \left(\frac{\partial X_r}{\partial Z} - \frac{\partial X_l}{\partial Z}\right)^2 + w_Y \left(\frac{\partial Y_r}{\partial Z} - \frac{\partial Y_l}{\partial Z}\right)^2} \qquad (6)$$

where $w_X$ and $w_Y$ are the weights in the X and Y directions. When the stereo images are resampled to epipolar geometry, the weight in the Y direction should be higher, because the parallax in the Y direction is expected to be zero. The reconstruction procedure with the inverse RFM form is described in Yang (2000), with a faster computation of the correction. Since it allows only one explicit least-squares solution, for Z, discrepancies may occur in the X and Y directions, so the inverse RFM reconstruction described above may not yield the best solution. As observed in the previous section, the forward form allows a simultaneous least-squares adjustment of all three coordinates of an object point, so a better solution can be expected by treating the result of the inverse RFM reconstruction as the initial approximation for the forward RFM reconstruction.

3. Refinement of the RFM using GCPs

The RPCs provided by imagery vendors can be updated using two methods when additional control information becomes available, given no knowledge of the physical sensor model (Hu and Tao 2002). Assuming that the values of the RPCs have been pre-determined, they can be re-solved using the batch method when both the original and the additional GCPs are available. This can be accomplished by simply incorporating all of the GCPs into the determination of the RPCs by the terrain-independent or terrain-dependent approaches, with appropriate weighting for the original and new GCPs. In fact, this method may be used by the vendor when both the original and the new GCPs are known. When only the additional GCPs are available, an incremental method can be applied. Here, the


original GCPs refer to those used to compute the initial RPC values, whereas the additional GCPs are independently collected and are not used to solve for the initial RPC values. These two methods update the RPCs directly and are thus called direct refinement methods; the updated RPCs can be transferred without any change to the existing image transfer format. When the RPC values are determined using only the satellite ephemeris and attitude (i.e. no GCPs are employed in building the rigorous sensor model), the errors are mainly systematic, as observed by Grodecki and Dial (2003). Indirect refinement methods, which introduce complementary transformations in ground space or image space, are most suitable for this case; they do not change the original RPCs directly. The affine transformation is used as an example below. In addition, a few tie points can be measured on both images of a stereo pair, and both models may be refined, resulting in a better relative orientation. Grodecki and Dial (2003) formulated a block adjustment mathematical model using both GCPs and tie points. The proposed block adjustment model uses two adjustable first-order polynomials defined in image space to accurately model the effects originating from the satellite ephemeris and attitude of the IKONOS sensor. Each of the polynomial coefficients is shown to have physical significance and can absorb one or more effects, such as in-track effects causing offsets in the line direction, cross-track effects causing offsets in the sample direction, and the small effects due to gyro drift. Fraser and Hanley (2003) introduced two bias parameters to compensate for the lateral shift of the sensor platform in two orthogonal directions, under the assumption that the biases manifest themselves, for all practical purposes, as image coordinate perturbations. This bias removal process requires, as a minimum, a single GCP.
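The simplest case of this bias removal — estimating a constant image-space offset from one or more GCPs — can be sketched as follows; the function names are illustrative, not from Fraser and Hanley (2003):

```python
import numpy as np

def estimate_image_bias(projected, measured):
    """Estimate a constant image-space bias (line and sample offsets)
    between RFM-projected and measured GCP positions.

    With one or two GCPs only the translation (a0, b0) is recoverable;
    `projected` and `measured` are (N, 2) arrays of (row, col) coordinates.
    """
    projected = np.asarray(projected, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # The least-squares offset is the mean residual over the GCPs
    a0, b0 = (measured - projected).mean(axis=0)
    return a0, b0

def apply_bias(rc, a0, b0):
    """Correct a forward-RFM prediction by the estimated offset."""
    r, c = rc
    return r + a0, c + b0
```

With three or more GCPs, the same residuals can instead feed a full six-parameter affine fit, as in the concatenated transformation of §3.2.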
The bias parameters can then be used to correct the original RPCs, achieving nominally metre-level absolute accuracy without reference to additional terms. In fact, the two-step procedure of indirect refinement can be combined into one by recalculating the RPCs from a 3D ground grid and the corresponding 2D image grid established for each image. Using the known ground coordinate ranges stored in the metadata files, a regular 3D ground grid can be built; the image grid, whose systematic biases are removed by the additional affine transformation, can then be generated by the two-step procedure, with image grid points outside the image extent discarded. A new set of RPCs can then be computed to replace the original RPCs. A detailed comparison of the updated RPCs obtained in this way with those from the direct refinement methods will be presented elsewhere.

3.1. Incremental methods

Existing RFM solutions computed using GCPs can be updated in an incremental manner, provided that both the RPCs and their associated covariance matrix are known. This method can be used by end-users to update existing RPCs with new GCPs, which may be collected from time to time. For each group of new GCPs, the values of the RPCs can be updated by an incremental technique based on the Kalman filter or sequential least-squares techniques, with the new GCPs assimilated by appropriate weighting. As described in Hu and Tao (2002), the error propagation on the RPCs and the GCPs can also be recorded during the updating process, in addition to the simple statistics (e.g. standard deviations) computed from the final numerical results.
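A single step of such a sequential (Kalman-style) update can be sketched as follows; the shapes and names are illustrative, and a production implementation would add the regularization and weighting discussed above:

```python
import numpy as np

def sequential_update(x, P, A, l, R):
    """One sequential least-squares (Kalman-style) update of parameter
    estimates when a new group of control observations arrives.

    x: current parameter estimates (e.g. the RPC vector)
    P: covariance of x
    A: design matrix of the new observations l
    R: observation covariance of l (controls how strongly the new GCPs
       pull the solution, i.e. the sensitivity to new control information)
    """
    S = A @ P @ A.T + R                 # innovation covariance
    K = P @ A.T @ np.linalg.inv(S)      # gain: prior trust vs. new data
    x_new = x + K @ (l - A @ x)         # assimilate the new observations
    P_new = (np.eye(len(x)) - K @ A) @ P  # propagate the covariance
    return x_new, P_new
```

Each new group of GCPs is absorbed by one call, so the RPCs and their covariance stay current without re-solving from all observations.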

2844

C. V. Tao et al.

3.2. Concatenated transformations

As will be shown in the experiments section, systematic biases such as a simple displacement are often the major error source for IKONOS products at the Geo and Standard levels, whose coefficient values are determined using only satellite ephemeris and attitude information. Thus, a simple concatenated transformation (e.g. a first- or second-order polynomial, or a 2D or 3D affine model) can be introduced in ground space after 3D reconstruction, or in image space after the ground-to-image transformation by the forward RFM. Because Space Imaging provides the forward RFM only, it is straightforward to apply an additional transformation in image space, for example, the 2D affine transformation

r' = a0 + a1·r + a2·c
c' = b0 + b1·r + b2·c                    (7)

where r, c are the image line and sample coordinates output by the forward RFM with the ground coordinates of the GCPs as input, and r', c' are the line and sample coordinates of the corresponding GCPs measured by human operators. All these coordinates are normalized using the same offset and scale parameters as those used by the RFM (contained in the metadata). The six coefficients a0, a1, a2, b0, b1, b2 in equation (7) can be solved using three or more GCPs by a least-squares adjustment. When only one or two GCPs are available, the coefficients a0, b0 represent the horizontal offset between the measured and the RFM-projected image positions of the GCP(s); the coefficients a1, b2 are simply set to 1, and a2, b1 to 0. The ground-to-image transformation thus becomes a concatenated transformation, with the original forward RFM transform as the first step and the additional (affine or translation) transform as the second step. It is noteworthy that the inverse of equation (7) should be used in 3D reconstruction.

4. Experiments and evaluation

4.1.
Results on aerial photography data

Two direct refinement methods were tested using an aerial photography stereo pair provided by ERDAS Inc., USA. The original stereo pair, at a scale of 1:40 000, was taken over Colorado Springs, CO, and both photos were scanned at 100 μm per pixel. The overlap between the two images was about 68%. The scanned size was 2313×2309 pixels, and the ground sampling distance was 4.5 m. ERDAS performed a photogrammetric bundle block adjustment with OrthoBase, and the rigorous collinearity equations with orientation parameters for both images were obtained. In the overlapping area of the stereo pair, the 3D coordinates of 7499 object points were intersected using the rigorous collinearity equations. The corresponding 7499 points in both the left and right images were also available.

4.1.1. RFM fitting

The terrain-independent approach was used to solve for the RPCs of each image since the collinearity equations were known. An image grid, as well as the corresponding 3D control grid and check grid in object space, was generated using the rigorous collinearity equations. The image grid contains 11×11 points across the full extent of each image. The 3D control grid contains five terrain layers, each with 11×11 points, and the 3D check grid contains 10 terrain layers, each with 20×20 points. The layers cover the full range of terrain relief. The unknown RPCs


in equation (1) and equation (2) were determined using the image grid points and the 3D object grid points, respectively. The maximum errors at the check grids are 5.52×10⁻⁸ pixels for the forward form and 6.55×10⁻⁷ m for the inverse form, respectively. Both the inverse and the forward RFM were found to fit the collinearity equation model with extremely high accuracy, and the rational polynomial form produces better accuracy than the regular polynomial form. It was also found that use of the coordinate normalization technique achieves much better accuracy than omitting normalization. As a result, the accuracy loss is hardly distinguishable when the RFM is used to 'replace' the collinearity equation model.

4.1.2. Three-dimensional reconstruction

The 7499 conjugate points measured in the left and right images were input to both the forward RFM and the inverse RFM 3D reconstruction procedures. All 7499 object points were used to check the accuracy of the reconstruction results. Again, use of normalization obtains much better results than unnormalized coordinates for both inverse and forward RFM reconstruction. The results also show no significant differences between the use of different denominators and the same denominator. The maximum absolute errors of the reconstructed 3D coordinates are 5.67×10⁻⁷ m for the forward form and 3.90625 m for the inverse form, respectively. An interesting point from the results is that the regular polynomial form may obtain the same or better reconstruction accuracy than both the inverse and the forward RFM when normalization is not applied. The experimental results demonstrate that both methods can produce reconstruction results with no distinguishable loss of accuracy compared to the physical sensor model. The reconstruction method with the forward form obtains higher accuracy than that with the inverse form, except for the regular polynomial case.
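Forward-RFM reconstruction amounts to a small nonlinear least-squares intersection per ground point. A minimal Gauss-Newton sketch (our own illustration, with numerical Jacobians and generic forward-model callables standing in for the actual RPC evaluation):

```python
import numpy as np

def reconstruct_point(rfm_left, rfm_right, rc_left, rc_right,
                      x0, iterations=10, eps=1e-6):
    """Gauss-Newton intersection of one ground point from a stereo pair.
    Each rfm(X, Y, Z) -> (line, sample) is a forward ground-to-image model
    (for IKONOS this would be the supplied RPCs); x0 is an approximate
    ground position, e.g. the scene centre at mid terrain height."""
    x = np.asarray(x0, float)
    obs = np.hstack([rc_left, rc_right])            # four image observations
    def f(p):
        return np.hstack([rfm_left(*p), rfm_right(*p)])
    for _ in range(iterations):
        r = obs - f(x)                              # misclosure vector
        J = np.empty((4, 3))
        for j in range(3):                          # numerical Jacobian
            dp = np.zeros(3); dp[j] = eps
            J[:, j] = (f(x + dp) - f(x - dp)) / (2 * eps)
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)  # least-squares update
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

For a model that is nearly linear over the scene, the iteration converges in very few steps regardless of the starting value, which is consistent with the convergence behaviour reported in the text.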
However, the inverse form converges faster, with less computational burden and fewer iterations. A comparison of these two reconstruction methods is summarized in Tao and Hu (2002).

4.2. Results on IKONOS data

The test work was done in the region surrounding a power development site in southern Ontario, Canada. Two IKONOS stereo pairs (referred to as scene 1 and scene 2) were acquired on 12 July 2001, each consisting of two panchromatic 11-bit images that had been geocorrected to the Space Imaging Standard level without using GCPs. There is a 20% overlap between the two scenes. The RPCs supplied with the images represent the imaging geometry to a planimetric accuracy of 25 m CE90 and a vertical accuracy of 22 m LE90. Because the RPCs are provided only in the forward RFM form, the forward RFM methods are used for 3D feature extraction and orthorectification.

4.2.1. Three-dimensional reconstruction

Absolute 3D positioning accuracy

Several vector layers in ESRI Shapefile format were obtained from the Canada Data Alignment Layer (CDAL). The CDAL dataset consists of points derived from the National Topographic Database (NTDB) topographic maps. For


the absolute 3D positioning accuracy analysis, the CDAL points derived from road intersections were used, because road intersections can usually be identified more accurately and more easily than other features on the IKONOS imagery. The CDAL points have a stated accuracy of 10 m CE95 (CDAL 1999), which is roughly equivalent to a 4.1 m rms error. The line (row) and sample (column) coordinates of 28 conjugate points in scene 1 and 27 conjugate points in scene 2 were measured in both the left and right images of each scene. The conjugate points were selected to correspond with the CDAL road intersection points. The line and sample coordinates of the conjugate point pairs were input to the 3D forward RFM reconstruction software, which output the latitude, longitude and height coordinates of the object points. To facilitate accurate distance measurements, the CDAL points and the output object points were both projected into the Universal Transverse Mercator (UTM) projection, zone 17N. The horizontal differences in ground space between the 55 corresponding points in the two respective sets were calculated, and the statistics are summarized in table 1. The CDAL points are plotted in figure 3, along with residual vectors showing the horizontal discrepancies. It is easily observable from figure 3 that a horizontal displacement is the main bias resulting from the terrain-independent determination of RPCs using only satellite ephemeris and attitude information without GCPs. The mean errors are 2.34 m in easting and 5.74 m in northing, respectively. For the vertical absolute accuracy analysis, a DTM from the Canadian Digital Elevation Data (CDED) was obtained for the area. The CDAL intersection points were overlain on the DTM to obtain orthometric heights relative to the Canadian Geodetic Vertical Datum of 1928 (CGVD28) at each of these points. Using the software (i.e.
GPS-HT) supplied by the Canadian Geodetic Survey Division, the heights were converted into ellipsoidal heights relative to the World Geodetic System of 1984 (WGS84) ellipsoid. A comparison was made between these heights and those computed by the 3D reconstruction software, and the statistics are also shown in table 1. It can be seen from table 1 that the results derived by the forward RFM are consistent with the accuracy specification from Space Imaging (Grodecki 2001).

Relative accuracy

The relative planimetric accuracy assessment was done on man-made objects by measuring their dimensions using the forward RFM reconstruction and comparing them to the known dimensions. Two large buildings, four squares and four rectangular ponds were carefully measured. The rms error and standard deviation of the dimension differences of all the measured objects are 0.44 m and 0.13 m, respectively.

Table 1. Absolute accuracy assessment for 3D reconstruction of two IKONOS scenes (in m).

                          Easting   Northing   Horizontal   Vertical
Mean error                  2.34      5.74        6.82       −3.29
rms error                   3.60      6.20        7.17        3.91
Standard deviation          2.76      2.37        2.23        2.13
Minimum absolute error      0.01      0.30        1.43        0.63
Maximum absolute error      6.95     10.87       11.20        6.84

Figure 3. Residual vectors at CDAL road intersection points in two scenes.

In figure 4, the dimensions of the building rooftop are 54.86 m in width and 445.00 m in length (read from the engineering design map), while the dimensions computed by 3D RFM reconstruction are 55.36 m in width and 445.14 m in length, giving differences of 0.50 m in width and 0.14 m in length, respectively. To assess the relative vertical accuracy of the 3D feature extraction, conjugate points were collected manually on the rooftops of two large buildings in scene 1 and scene 2. The distribution of 20 points on the rooftop of the building in scene 1 is also shown in figure 4. It is known from the engineering design map that the building rooftops are planar surfaces, so each building's rooftop points were fitted with a plane. The modelled roof of the building in scene 1 is shown in figure 5, with three-times exaggeration in the vertical direction. The roof model in figure 5(a) was obtained using points that were manually measured as conjugate points, while that in figure 5(b) was obtained using points placed at the intersections of

Figure 4. Sub-image from scene 1 with building rooftop points measured.


Figure 5. Two roof models of the building in scene 1 fitted using object points reconstructed from (a) directly measured conjugate points and (b) intersected points of lines.

linear features that had been traced manually. This latter technique helped to reduce the matching error between the selected conjugate points and, as can be seen in figure 5(b), the second model is much closer to the true flat shape. Using the second rooftop point collection technique, the rms errors of the plane-fitting residuals in the vertical direction for both building roofs are 0.68 m. The high relative planimetric and vertical accuracies demonstrated in extracting the building features show that these IKONOS stereo pairs have high geometric integrity and the potential for reconstructing high-quality cartographic features.

4.2.2. RFM refinement

Direct refinement

The incremental method was used to update the RPCs provided with the IKONOS images. Among the 28 CDAL points collected from scene 1, 20 points were used as GCPs and the remaining eight points were used for accuracy checking. First, the accuracy of the RPCs supplied with each image was evaluated at the eight checkpoints before updating. Then, the 20 GCPs, in groups of five, were used to update the RPCs of the left image and the right image, respectively, using the incremental method. The accuracy of each new set of RPCs was evaluated at the eight checkpoints for the left and right images, respectively. Part of the results is listed in table 2 under the incremental updating columns. Finally, the RPCs of the stereo pair before and after updating were used for 3D reconstruction with the forward RFM form. The accuracies of the 3D reconstruction results are shown in table 3 in the incremental updating row. The use of the initial RPCs of the IKONOS stereo pair yields a horizontal accuracy of 11.1 m CE90 and a vertical accuracy of 5.6 m LE90. Using the incremental method with the additional 20 GCPs, the absolute accuracies become 6.1 m CE90 and 3.8 m LE90, respectively.
The numerical results show that both the ground-to-image transformation and the 3D reconstruction are improved after assimilating the 20 new GCPs. Although the accuracy of the new GCPs used in this test is limited, the improvement is still satisfactory. This shows that the incremental method can improve the RPCs accompanying IKONOS Standard level stereo products using additional GCPs. More findings on the accuracy improvement can be found in Hu and Tao (2002).
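The CE90 and LE90 figures quoted here can be computed from checkpoint residuals; one common (though not the only) definition takes the 90th percentile of the horizontal radial and absolute vertical errors, sketched below:

```python
import numpy as np

def ce90(de, dn):
    """CE90: 90th percentile of the horizontal radial checkpoint errors
    (one common definition; Gaussian scale-factor variants also exist)."""
    return float(np.percentile(np.hypot(de, dn), 90))

def le90(dh):
    """LE90: 90th percentile of the absolute vertical checkpoint errors."""
    return float(np.percentile(np.abs(dh), 90))
```

Here `de`, `dn` and `dh` are the easting, northing and height residuals at the checkpoints.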

Table 2. Errors at eight checkpoints after refinements (in pixels).

                         Incremental updating              Affine transformation
                         Line            Sample            Line            Sample
Image  GCPs           rms     Max     rms     Max       rms     Max     rms     Max
Left   0 (original)  2.391   4.418   6.387   9.839     2.391   4.418   6.387   9.839
       5             2.271   4.159   4.839   8.469     2.621   4.543   3.587   6.334
       10            2.661   5.864   4.157   7.865     2.161   3.473   2.637   4.609
       15            1.879   4.456   3.204   7.195     2.096   3.754   2.736   4.727
       20            2.038   3.156   3.282   6.968     1.994   3.525   2.796   4.779
Right  0 (original)  2.339   5.038   8.140  10.722     2.339   5.038   8.140  10.722
       5             2.300   3.361   7.066   9.447     2.761   6.201   4.636   7.387
       10            2.224   5.272   3.754   5.552     2.106   4.555   3.864   5.253
       15            3.762   7.690   3.228   4.847     2.469   5.284   3.086   5.597
       20            2.780   6.105   3.389   5.999     2.227   4.803   3.185   5.527

Table 3. Errors at eight checkpoints for 3D reconstruction (in m).

                                   Easting          Northing                Height
Refinement method     GCPs      rms     Max       rms     Max       rms     Max     Mean
Initial estimate        0      3.325   5.924     6.603   9.012     3.383   5.440   −2.694
Incremental updating   20      2.318   4.690     3.158   5.278     2.282   4.299    1.023
Affine transform        4      2.444   4.820     4.043   6.570     2.944   4.625   −1.793
                        8      2.220   3.686     4.391   7.018     2.503   4.812   −1.888
                       12      2.016   3.693     2.921   5.946     2.610   4.473   −1.357
                       20      2.098   4.054     3.245   6.335     2.106   4.397   −1.124

Indirect refinement

Equation (6) was used in this test. Among the same set of 28 CDAL points in scene 1, 4–20 points were input as GCPs, and the eight points used as checkpoints in the direct refinement tests were again treated as checkpoints. First, the accuracy of the RPCs supplied with each image was evaluated at the eight checkpoints before refinement. Then, different configurations of 4–20 GCPs were used to solve the twelve coefficients of the two affine transformations for the left image and the right image, respectively. For each configuration, the accuracy of the refined model was evaluated at the same set of eight checkpoints for the left and right images, respectively. The results with 5, 10, 15 and 20 GCPs are listed in table 2 under the affine transformation columns. Finally, the refined models of the stereo pair were used for 3D reconstruction with the forward RFM form; the results are listed in table 3 in the affine transform rows. Using eight GCPs, the refined model of the stereo pair yields a planimetric accuracy of 7.5 m CE90 and a vertical accuracy of 4.2 m LE90. In figure 6, the left image in scene 1 is overlain with vector layers of the nuclear plant region, aligned by the original model and by the refined model, respectively. Comparing the two snapshots demonstrates that the displacement bias, which is mainly an offset in the sample direction, has been corrected.
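Solving the six coefficients of the affine refinement in equation (7) from three or more GCPs is an ordinary linear least-squares problem; a minimal numpy sketch (illustrative names, normalized coordinates assumed):

```python
import numpy as np

def fit_affine(rc_rfm, rc_meas):
    """Least-squares solution of r' = a0 + a1*r + a2*c, c' = b0 + b1*r + b2*c.
    rc_rfm:  (n, 2) RFM-projected GCP coordinates (r, c), n >= 3
    rc_meas: (n, 2) measured GCP coordinates (r', c')"""
    r, c = rc_rfm[:, 0], rc_rfm[:, 1]
    A = np.column_stack([np.ones_like(r), r, c])
    a, *_ = np.linalg.lstsq(A, rc_meas[:, 0], rcond=None)   # a0, a1, a2
    b, *_ = np.linalg.lstsq(A, rc_meas[:, 1], rcond=None)   # b0, b1, b2
    return a, b

def apply_affine(a, b, rc):
    """Concatenated second step: correct forward-RFM output coordinates."""
    r, c = rc[:, 0], rc[:, 1]
    return np.column_stack([a[0] + a[1] * r + a[2] * c,
                            b[0] + b[1] * r + b[2] * c])
```

With two images, the same fit is run per image, giving the twelve coefficients used in the test above.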

Figure 6. Left image in scene 1 overlain with vector layers aligned by (a) the original model; (b) the refined model.

The numerical results show that the additional affine transform method produces comparable accuracy improvements in both the ground-to-image transformation and the 3D reconstruction across the different GCP configurations; using more GCPs does not help much. The likely reason is that the biases in IKONOS Geo and Standard level imagery take simple linear forms, in the simplest case a translation, as demonstrated in figure 3. Tables 2 and 3 show that, compared with the incremental updating, the affine transformation produces better improvements in image space and comparable improvements in ground space, at a significantly lower computational cost. The relative accuracy assessment procedure described above was repeated after refining the RPC models: a total squared error of 0.7427 m² over the 20 dimensions was observed when comparing the new measurements with the old ones. Assuming that the standard deviation of the measurement errors in image space is equivalent to 0.3 m, the chi-square statistic becomes 8.25, which is less than χ²₀.₉₅(20) = 31.41. This shows that the refinement does not alter the relative accuracy at the 5% significance level.

4.2.3. Orthorectification

Orthorectified images were generated from the right image in scene 1 using the indirect orthorectification approach with both the original and the refined RPCs. The CDED DTM covering a smaller region was first re-sampled to match the desired output image resolution of 1 m. Then, the DTM grid points were used as input to the original forward RFM to obtain the line/sample coordinates of the corresponding pixels and their grey values in the original image. The orthorectified images were then used to collect coordinates for the same set of CDAL road intersection points used in the absolute accuracy analysis. Because the ortho-image covers a smaller area, only 18 points were used in this analysis.
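The indirect scheme just described can be sketched as a per-cell loop: take each output cell's ground position and DTM height, project it into the source image with the forward RFM (plus any refinement), and resample a grey value. A hedged illustration with nearest-neighbour resampling and a generic `forward_rfm` callable, not the actual production code:

```python
import numpy as np

def orthorectify(dtm_z, ground_xy, forward_rfm, image, out_shape):
    """Indirect orthorectification sketch.
    dtm_z:       terrain heights for each output cell
    ground_xy:   (rows, cols, 2) ground (X, Y) of each output cell
    forward_rfm: callable (X, Y, Z) -> (line, sample) in the source image
    image:       source image grey values"""
    ortho = np.zeros(out_shape, dtype=image.dtype)
    H, W = image.shape
    for idx in np.ndindex(*out_shape):
        X, Y = ground_xy[idx]              # ground coordinates of this cell
        Z = dtm_z[idx]                     # terrain height from the DTM
        line, samp = forward_rfm(X, Y, Z)  # ground-to-image transformation
        i, j = int(round(line)), int(round(samp))
        if 0 <= i < H and 0 <= j < W:      # discard points outside the image
            ortho[idx] = image[i, j]       # nearest-neighbour resampling
    return ortho
```

In practice bilinear or cubic resampling would replace the nearest-neighbour lookup, and the refined model would be substituted for `forward_rfm` when generating the refined ortho-image.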
The orthorectified coordinates show discrepancies from the original CDAL point coordinates in ground space; the statistics are summarized in table 4 in the ortho-image rows. To facilitate a comparative analysis, the statistics for the same set of points reconstructed with the forward RFM form from conjugate points are also listed in table 4 in the stereo rows.

Table 4. Planimetric accuracy assessment for orthorectification of scene 1 using 18 CDAL points (in m).

                               Mean    Standard    rms     Minimum     Maximum
                               error   deviation   error   abs. error  abs. error
Original model  Ortho-image    8.17      2.96      8.67      3.39       12.98
                Stereo         7.09      2.63      7.54      1.43       11.20
Refined model   Ortho-image    2.14      3.21      3.86      0.26        6.78
                Stereo         0.98      2.54      3.62      0.63        5.71

The relative accuracy

difference between these two cases is at the sub-metre level. The orthorectification results are slightly worse than the 3D reconstruction results, most likely because the 3D reconstruction is done by an adjustment using two sets of RPCs from the two images of a stereo pair, whereas the ortho-image generation uses only one set of RPCs from a single image to derive the image coordinates from the DTM grid point coordinates, without an adjustment. After refinement, the mean errors, rms errors and maximum errors are greatly reduced, indicating a large improvement in 3D positioning accuracy with the refined model over the original model, which contains systematic biases. However, as the standard deviations show, the relative accuracy became slightly worse for the generated ortho-image, and only a little improvement was achieved for the stereo configuration. This again confirms the geometric integrity of the IKONOS stereo products.

5.

Conclusions

This study investigates the use of the RFM for photogrammetric exploitation of IKONOS imagery and presents several RFM-based exploitation applications, including orthorectification, stereo feature extraction and model refinement techniques. The RFM is found to satisfy the high modelling accuracy requirements of photogrammetric processing. From the viewpoint of imagery exploitation service providers, the RFM technology enables extensive interoperability between images, regardless of the type of sensor used (e.g. frame, pushbroom, whiskbroom) and how the images are acquired, by defining a standard set of RPCs and associated error estimates.

Two methods for RFM-based 3D reconstruction and feature extraction were used: the forward RFM method and the inverse RFM method. Experiments using a real aerial stereo pair validated the feasibility of both reconstruction methods, and each was shown to have its own strengths and weaknesses. Both methods converge well and are insensitive to their initial approximate values, provided the initial values are set reasonably. Both can obtain reconstruction results with no distinguishable loss of accuracy compared to the physical sensor model, although better results can be expected from the forward RFM form. In addition, the reconstruction result from the inverse RFM may be input to the forward RFM to speed the convergence of the latter method and to gain higher accuracy.

Two RFM refinement approaches, namely the direct and indirect methods, are proposed to update existing RPC values in the absence of the original physical sensor model. The batch method incorporates all GCPs, including those used to calculate the initial estimate of the RPCs, simultaneously into its estimation process, while the incremental method assimilates new GCPs as they become available. The method applying a concatenated (affine) transformation


adds an additional step in image space. The accuracy of RFM solutions can be improved using the two direct refinement methods when a significant number of new GCPs are available, while a few GCPs are enough for a concatenated affine transformation when the distortions are mainly offsets in linear forms. The direct refinement methods will result in better accuracy at checkpoints provided the additional GCPs are appropriately weighted and the covariance matrices associated with the RPCs are known. Based on the tests using IKONOS imagery, a satisfactory refinement using the incremental method can be expected even if the covariance of the RPCs is not provided. In addition, the direct refinement approach has the advantage of leaving the RFM-based exploitation algorithms untouched. The concatenated affine transforms often produce comparable accuracy with fewer GCPs and lower computational cost. The experiments have shown that the IKONOS RPCs have high relative accuracy at the sub-metre level, and that the 3D mapping capability can be greatly enhanced by achieving higher absolute accuracy through the absorption of a few GCPs.

Acknowledgments

The authors wish to thank Mr Steve Schnick for his great assistance in completing the IKONOS data tests. Thanks also go to the Rational Mapper development team, particularly Dr Arie Croitoru and Dr Feng Wang, for their assistance in the research and development of the Rational Mapping technology, and to Mr Zhizhong Xu for his assistance in usability testing of Rational Mapper. Special thanks go to Dr Bob Truong, Canadian Nuclear Safety Commission, for providing the stereo IKONOS image pairs.

References

BALTSAVIAS, E., PATERAKI, M., and ZHANG, L., 2001, Radiometric and geometric evaluation of IKONOS Geo images and their use for 3D building modeling. Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space 2001, 19–21 September (Hannover: International Society of Photogrammetry and Remote Sensing (CD-ROM)), pp. 15–35.

CDAL (CGDI Data Alignment Layer), 1999, Creating the CGDI Data Alignment Layer. http://cdal.cgdi.gc.ca/html/frames-e.html.

DI, K., MA, R., and LI, R., 2001, Deriving 3-D shorelines from high resolution IKONOS satellite images with rational functions. Proceedings of the 2001 ASPRS Annual Convention, 24–27 April (St. Louis: American Society of Photogrammetry and Remote Sensing (CD-ROM)).

DIAL, G., 2000, IKONOS satellite mapping accuracy. Proceedings of the ASPRS Annual Convention, 22–26 May (Washington DC: American Society of Photogrammetry and Remote Sensing (CD-ROM)).

DIAL, G., GIBSON, L., and POULSEN, R., 2001, IKONOS satellite imagery and its use in automated road extraction. In Automated Extraction of Man-made Objects from Aerial and Space Images, edited by E. Baltsavias, A. Gruen, and L. Van Gool (Lisse: Balkema), pp. 349–358.

DIGITAL GLOBE, 2001, QuickBird Imagery Products Product Guide. http://www.digitalglobe.com/downloads/QuickBird Imagery Products – Product Guide.pdf

DOWMAN, I., and DOLLOFF, J. T., 2000, An evaluation of rational functions for photogrammetric restitution. International Archives of Photogrammetry and Remote Sensing, 33, 254–266.

ERDAS, 2001, IKONOS Sensor Model Support Tour Guide. http://www.unc.edu/atn/gis/arcview/pc_manuals_33/IKONOS.pdf


FRASER, C. S., and HANLEY, H. B., 2003, Bias compensation in rational functions for Ikonos satellite imagery. Photogrammetric Engineering and Remote Sensing, 69, 53–58.

FRASER, C. S., HANLEY, H. B., and YAMAKAWA, T., 2001, Sub-metre geo-positioning with IKONOS Geo imagery. Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space, 19–21 September (Hannover: International Society of Photogrammetry and Remote Sensing (CD-ROM)), pp. 61–68.

GRODECKI, J., 2001, IKONOS stereo feature extraction – RPC approach. Proceedings of the ASPRS Annual Conference, 24–27 April (St. Louis: American Society of Photogrammetry and Remote Sensing (CD-ROM)).

GRODECKI, J., and DIAL, G., 2001, IKONOS geometric accuracy. Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space, 19–21 September (Hannover: International Society of Photogrammetry and Remote Sensing (CD-ROM)), pp. 77–86.

GRODECKI, J., and DIAL, G., 2003, Block adjustment of high-resolution satellite images described by rational functions. Photogrammetric Engineering and Remote Sensing, 69, 59–69.

HOFMANN, P., 2001, Detecting buildings and roads from IKONOS data using additional elevation information. GIS, 6, 28–33.

HU, Y., and TAO, C. V., 2002, Updating solutions of the rational function model using additional control information. Photogrammetric Engineering and Remote Sensing, 68, 715–723.

MADANI, M., 1999, Real-time sensor-independent positioning by rational functions. Proceedings of the ISPRS Workshop on Direct versus Indirect Methods of Sensor Orientation, 25–26 November (Barcelona: International Society of Photogrammetry and Remote Sensing), pp. 64–75.

MCGLONE, C., 1996, Sensor modeling in image registration. In Digital Photogrammetry: An Addendum, edited by C. W. Greve (Bethesda: American Society of Photogrammetry and Remote Sensing), pp. 115–123.

NIMA, 2000, The Compendium of Controlled Extensions (CE) for the National Imagery Transmission Format (NITF), version 2.1. http://www.ismc.nima.mil/ntb/superceded/STDI-0002_v2.1.pdf.

NOVAK, K., 1992, Rectification of digital imagery. Photogrammetric Engineering and Remote Sensing, 58, 339–344.

OPENGIS CONSORTIUM, 1999a, The OpenGIS Abstract Specification – Topic 7: The Earth Imagery Case. http://www.opengis.org/public/abstract/99-107.pdf.

OPENGIS CONSORTIUM, 1999b, The OpenGIS Abstract Specification – Topic 16: Image Coordinate Transformation Services. http://www.opengis.org/public/abstract/99-116r2.pdf.

PADERES, JR., F. C., MIKHAIL, E. M., and FAGERMAN, J. A., 1989, Batch and on-line evaluation of stereo SPOT imagery. Proceedings of the ASPRS–ACSM Convention, April, Baltimore, pp. 31–40.

RIDLEY, H. M., ATKINSON, P. M., APLIN, P., MULLER, J. P., and DOWMAN, I., 1997, Evaluating the potential of the forthcoming commercial U.S. high-resolution satellite sensor imagery at the Ordnance Survey. Photogrammetric Engineering and Remote Sensing, 63, 997–1005.

TAO, C. V., and HU, Y., 2001a, A comprehensive study of the rational function model for photogrammetric processing. Photogrammetric Engineering and Remote Sensing, 67, 1347–1357.

TAO, C. V., and HU, Y., 2001b, Use of the rational function model for image rectification. Canadian Journal of Remote Sensing, 27, 593–602.

TAO, C. V., and HU, Y., 2002, 3-D reconstruction methods based on the rational function model. Photogrammetric Engineering and Remote Sensing, 68, 705–714.

TOUTIN, T., CHENIER, R., and CARBONNEAU, Y., 2001, 3-D geometric modeling of Ikonos GEO images. Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space, 19–21 September (Hannover: International Society of Photogrammetry and Remote Sensing (CD-ROM)).

YANG, X., 2000, Accuracy of rational function approximation in photogrammetry. Proceedings of the ASPRS Annual Convention, 22–26 May (Washington DC: American Society of Photogrammetry and Remote Sensing (CD-ROM)).