In-flight Geometric Calibration and Orientation of ALOS/PRISM Imagery with a Generic Sensor Model

P.V. Radhadevi, Rupert Müller, Pablo d'Angelo, and Peter Reinartz

Abstract
Self-calibration is a powerful technique to exploit the geometric potential of optical spaceborne sensors. This paper explains the methodology of expanding a sensor model for in-orbit geometric calibration of the PRISM radiometers on the Japanese ALOS satellite. PRISM has three optical systems for forward, nadir, and backward views, each with a 2.5 m nominal spatial resolution. Algorithms for the geometric processing of the PRISM images are proposed and implemented. It is shown how self-calibration and orientation of the sensor can be done without precise knowledge of the payload geometry and attitude data. Several cases and procedures are studied with the established sensor model, including weight matrices, attitude offsets, attitude drifts, and focal length estimation. It is concluded that self-calibration of the PRISM cameras can be done effectively with a rigorous sensor model. Even if the post-launch parameters are not available, sub-pixel geometric accuracy can be achieved.

Introduction
The Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) on the Japanese Advanced Land Observing Satellite (ALOS) combines high-resolution imagery (2.5 m) with the capability to generate DEMs by providing three optical line scanners. The ALOS satellite is equipped with high-quality orbital position and attitude determination devices, such as a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), and a Star Tracker System (STS). The exterior orientation measured by the GPS and the combined STS/IMU, together with the other sensor model parameters (e.g., instrument internal geometry, mounting angles, and thermal modeling), leads to good accuracies at the system level. In-flight calibration helps to increase the geometric accuracy further, as well as to monitor system-critical parameters (e.g., focal length and bore-sight angles). Existing methods have to be extended in order to describe the PRISM imaging geometry correctly. A generic sensor model for georeferencing of linear Charge Coupled Device (CCD) array images has been developed at the Advanced Data processing Research INstitute (ADRIN) of the Indian Space Research

P.V. Radhadevi is with ADRIN (Advanced Data processing Research INstitute), Department of Space, Manovikasnagar P.O., Secunderabad 500 009, India ([email protected]). Rupert Müller, Pablo d'Angelo, and Peter Reinartz are with the German Aerospace Center (DLR), Remote Sensing Technology Institute, 82234 Wessling, Germany.

Organization (ISRO). Initially, the model was developed for the rectification of SPOT imagery with an approach similar to the one proposed by Westin (1990). Later, the model was made flexible and generic with attitude correction alone, and it has been successfully used for the orientation of IRS (Indian Remote Sensing) satellites (Radhadevi, 1999; Radhadevi and Solanki, 2008). This model is expanded and adapted for the PRISM geometry.

In-flight calibration is important for satellite sensors because, during the launch, the environmental conditions change rapidly and drastically, which usually causes changes in the internal sensor geometry estimated during pre-launch calibration. The mathematical model used for georeferencing should respect the current geometry.

Within the last few years, many studies have been carried out towards the calibration, orientation, and DEM/orthoimage generation of ALOS PRISM data. Examples can be found in Aksakal and Gruen (2008), Weser et al. (2008), Tadono (2004), and Gruen et al. (2007). Kocaman and Gruen (2008a and 2008b) explain a sensor model which compensates for the displacement of the relative positions of the CCD chips by employing six additional parameters per image. Weser et al. (2008) present a sensor model with self-calibration for the orientation of PRISM and AVNIR-2 imagery, and conclude that if the original calibration data are used as initial values, self-calibration can improve the accuracy of the results from pixel to sub-pixel level. DLR's Remote Sensing Technology Institute takes part as a principal investigator in the European Space Agency (ESA)/Japan Aerospace Exploration Agency (JAXA) AO program to evaluate the performance and potential of the three-line stereo scanner PRISM and the multispectral imaging sensor AVNIR-2. The processing chain for ALOS PRISM images developed at DLR is explained by Schneider et al. (2008) and Schwind et al. (2009).
In most of these papers, precise orbit and attitude data, along with the coordinate conversion matrices and geometric parameters provided by JAXA after the post-launch calibration, are used. The motivation of this paper is to show that self-calibration is a powerful technique which can be applied even without precise knowledge of the payload geometry and exterior orientation. It also allows achieving sub-pixel geometric accuracy for special applications. Additionally, the proposed methods contribute to geometric

Photogrammetric Engineering & Remote Sensing Vol. 77, No. 5, May 2011, pp. 531–538. 0099-1112/11/7705–0531/$3.00/0 © 2011 American Society for Photogrammetry and Remote Sensing


processing using “generic” processors without the development of new PRISM-specific software. Nevertheless, the metadata provided by JAXA are available to end-users and are also reliable.

Figure 1 illustrates the configuration of the six CCD chips. The stereo angles are ±23.8 degrees (forward/backward), which results in a base-to-height ratio of 1. The nominal focal length is 1.939 m.
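As a quick sanity check on these figures, the base-to-height ratio of the forward/backward pair follows directly from the stereo angles (a minimal sketch; the ratio of "1" quoted above is the rounded result):

```python
import math

# Forward and backward telescopes are tilted +/-23.8 deg from nadir, so the
# stereo base of the F/B pair is about 2*H*tan(23.8 deg) for flying height H.
tilt_deg = 23.8
b_over_h = 2.0 * math.tan(math.radians(tilt_deg))
print(round(b_over_h, 2))  # approximately 0.88, i.e. close to 1
```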

The PRISM Instrument and Imaging

Calibration Approach
Many vendors of satellite imagery provide precise information about the interior orientation in metadata files, although a few satellite agencies do not. If calibrated data are provided, they are generally good enough for direct georeferencing within certain limits. As the definitions of the parameters describing the relationship between the object and the image coordinate systems vary between vendors, the vendor-specific definitions have to be mapped to those used in the individual sensor models. If the complete camera geometry is not provided by the vendor, the missing parameters can be included in the sensor model and computed with GCPs.

Self-calibration is an efficient and powerful technique to calibrate photogrammetric imaging systems. This method uses the laboratory calibration data (if available) as initial input into the adjustment. The influence of system calibration on direct sensor orientation is explained by Yastikli and Jacobsen (2005). In this paper, an approach is followed in which systematic ground residual patterns are first identified and quantified, and then applied to sensor modeling with bundle adjustment. The detailed in-flight calibration methodology for the different cameras of the IRS-P6 satellite using this method is explained in Radhadevi and Solanki (2008). Bore-sight alignment parameters and focal plane geometric parameters are part of the collinearity equations used in the sensor model. During in-flight calibration, these parameters are also updated in a self-calibration approach which can be applied to multiple cameras and payloads with little or no change to the bundle adjustment pattern. Another advantage of this approach is that it can consider systematic effects on the image coordinates from any source, and not only those which depend on the modeling of the optical geometry. Therefore, it is better to use the same sensor model for in-flight calibration as well as for operational product generation.
The disadvantage of this approach, however, is that the parameters are highly influenced by the distribution of the GCPs. During the commissioning phase of the satellite, this method can be applied to many test datasets with a good distribution of control points in order to fix the camera parameters. Full sets of radial and tangential distortion parameters are difficult to address because they correlate with each other. Therefore, the appropriate parameters must be selected based on an analysis of their correlations and quality. It is important that the treatment of the deformation parameters and the analysis of the correlations and accuracies are efficiently implemented in the software. Bore-sight alignment angles and the focal length are the main calibration parameters.
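The correlation analysis mentioned above can be illustrated with a small sketch (the design matrix here is hypothetical, not the actual PRISM adjustment): the parameter covariance from the normal equations reveals parameter pairs that cannot be separated reliably, so only one of them should be carried in the adjustment.

```python
import numpy as np

# Two-parameter least-squares problem whose columns are nearly proportional,
# mimicking two calibration parameters with almost identical effects on the
# residuals (illustrative numbers only).
A = np.array([[1.0, 1.01],
              [2.0, 2.00],
              [3.0, 3.01],
              [4.0, 4.00]])
cov = np.linalg.inv(A.T @ A)    # parameter covariance (unit weight)
sd = np.sqrt(np.diag(cov))      # parameter standard deviations
corr = cov / np.outer(sd, sd)   # correlation matrix
print(corr[0, 1])               # close to -1: keep only one of the two
```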

The PRISM instrument is one of three instruments onboard the Japanese satellite ALOS, which was launched in January 2006. The other instruments are AVNIR-2, a multispectral radiometer, and PALSAR, an L-band radar sensor. PRISM consists of three independent panchromatic radiometers for nadir (N), backward (B), and forward (F) viewing. Each radiometer is composed of six (N) or eight (F, B) CCD arrays with 4,992 pixels for the nadir and 4,928 pixels for the forward/backward views, respectively. Images are acquired by pushbroom scanning, in which the imaging geometry is characterized by nearly parallel projection in the along-track direction and perspective projection in the cross-track direction. Depending on the imaging mode, four or six CCD chips are used for recording. These sub-images share their exterior orientation, camera mounting parameters, and focal length. PRISM data are delivered in different levels of processing: Level 1A, where no correction is done; Level 1B1, where the images are radiometrically corrected; and Level 1B2, where the images are radiometrically and geometrically corrected. Imagery and ancillary data are given in CEOS format, partly ASCII and partly binary coded. Joining these sub-images results in 16,000 image rows and 14,496 columns in a Level 1B1 product, referred to as a “scene.” For our purposes, Level 1B1 data are the most suitable.

The attitude and position estimates are based on Star Tracker and GPS receiver data. The sensor alignment parameters are defined in relation to the satellite coordinate system. The knowledge of the sensor's relative alignment parameters is crucial to transform the platform position and orientation data into the camera coordinate system, which originates at the perspective center. However, each CCD chip has its own framelet coordinate system, and thus the coordinates of the principal point can be different for each of the sub-images.
Although these parameters are provided in the image supplementary files, they are not fully employed in our sensor model. The unused pixels on the right and left CCD arrays, respectively, are regarded as dummy pixels and are excluded from processing. The image coordinate system used here is the same as that defined in most of the ALOS papers referenced (Kocaman and Gruen, 2008a and 2008b; Weser et al., 2008). A spatial resolution of 2.5 m is provided with a swath width of 35 km in triplet mode. To realize the resolution of 2.5 m, the pixel size of the CCD sensor is 7 µm in the cross-track direction and 5.5 µm in the along-track direction.

Figure 1. Configuration of six individual CCD chips of ALOS PRISM.


Correlation between the physical parameters of the camera and the bore-sight parameters is very significant; for example, the focal length and the bore-sight angle in the yaw direction (Δκ) are correlated. The effects of certain parameters cannot be measured explicitly. Instead, a resulting total effect is measured and assigned only to the selected parameters. Therefore, the sensor model is analyzed with different sets of parameters. The details are explained in the next section. The significance of a certain parameter is evaluated by comparing the parameter value to its standard deviation and Root Mean Square Error (RMSE); e.g., if the RMSE value is large, the parameter is considered significant. Otherwise, the parameter is not considered, and its effect is accounted for through another significant parameter with which it is correlated. Therefore, the in-flight calibrated parameters may not provide the best characterization of the individual camera parameters, but they will provide the most accurate total pointing correction. An important aspect of this assessment is to help decide which parameters of the camera geometry most probably need adjustment. The criteria below are followed to decide the alignment trend for each camera.
• If the system-level errors are high in the initial-phase operations of the satellite, the major cause is the bore-sight misalignment angles of the cameras or variations in the focal length.
• If images from all the cameras show errors of the same order over the same ground area, then there is probably a bias in the body attitude determination.
• If errors are consistent over different images (with different viewing configurations) of the same camera, but vary from camera to camera, the error can be due to a combination of payload-to-body residuals of the different cameras and their attitude determination.
• If the height errors from direct intersection show a pattern over the length of the detector array, the reason can be a small change in the effective focal length or a yaw rotation of the detector array in the focal plane.

Sensor Model
A generic sensor model for the georeferencing of linear CCD array images has been developed at ADRIN. This model is very flexible and has been successfully used for the orientation determination of SPOT-1, IRS-1C/1D, TES, IRS-P6, Cartosat-1, and Cartosat-2. It is integrated into the software system called VAPS (Value Added Product generation System). The algorithm is based on the viewing geometry of the satellite combined with the principles of the photogrammetric collinearity equations. This sensor model is described below with a focus on the requirements for precise georeferencing of ALOS PRISM imagery.

The collinearity equations express the fundamental relationship that the perspective centre, the image point, and the object point lie on a straight line. They relate an object point $P_O = (X, Y, Z)^T$ in an Earth-centered object coordinate system to the position of its projection $P_I = (x_s, y_s, -f)^T$ in the image coordinate system through the projection center at the satellite position $P_S = (X_P, Y_P, Z_P)^T$ in an inertial coordinate system. The collinearity equation is defined as:

$$\begin{pmatrix} x_s \\ y_s \\ -f \end{pmatrix} = d \cdot M^T \cdot \begin{pmatrix} X - X_P \\ Y - Y_P \\ Z - Z_P \end{pmatrix} \quad (1)$$

where $d$ is the scale factor and $M^T$ is the orthogonal rotation matrix that transforms the geocentric into the sensor coordinate system. The rotation matrix is defined by

$$M^T = \{Q_{GI} \cdot Q_{IO} \cdot Q_{OB} \cdot Q_{BT} \cdot Q_{TC}\}^T$$

where $Q_{TC}$ denotes the CCD-to-telescope transformation matrix. If the post-launch (or pre-launch) CCD alignment angles of the telescopes are available, they can be used in $Q_{TC}$ as rotation angles around the X-axis and Y-axis. Otherwise, those angles can be set to zero. $Q_{BT}$ denotes the telescope-to-body transformation matrix, which is a function of the bore-sight angles. Nominal pitch angles of +23.8° and −23.8° for the forward and backward views are used, and the roll and yaw offsets are initialized to zero. $Q_{OB}$ represents the body-to-orbit transformation matrix, where $Q_{OB} = Q_{OI} \cdot Q_{IB}$. $Q_{OI}$ is the inertial-to-orbit coordinate transformation matrix, which can be computed from the position $(X_P, Y_P, Z_P)$ and velocity $(V_X, V_Y, V_Z)$ of the satellite. The series of transformations a pixel undergoes before it is projected onto the ground is pictorially represented in Radhadevi and Solanki (2008).

The orbit reference frame quaternion with respect to the inertial frame can be computed from the direction cosine matrix. If this is multiplied with the given quaternion of the body frame with respect to the inertial frame, we obtain the quaternion with respect to the orbit frame. If $q_1$, $q_2$, $q_3$, and $q_4$ are the computed quaternion components of the body frame with respect to the orbit frame, the direction cosine matrix is given by:

$$DC(q) = \begin{pmatrix} q_1^2 - q_2^2 - q_3^2 + q_4^2 & 2(q_1 q_2 + q_3 q_4) & 2(q_1 q_3 - q_2 q_4) \\ 2(q_1 q_2 - q_3 q_4) & -q_1^2 + q_2^2 - q_3^2 + q_4^2 & 2(q_2 q_3 + q_1 q_4) \\ 2(q_1 q_3 + q_2 q_4) & 2(q_2 q_3 - q_1 q_4) & -q_1^2 - q_2^2 + q_3^2 + q_4^2 \end{pmatrix} \quad (2)$$

where the roll, pitch, and yaw angles are extracted from the above matrix as follows:

$$\text{roll} = -\arcsin(DC_{23}), \quad \text{pitch} = -\arctan(DC_{13}/DC_{33}), \quad \text{yaw} = -\arctan(DC_{21}/DC_{22}) \quad (3)$$
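Equations 2 and 3 can be sketched in code as follows (a minimal sketch assuming a unit quaternion with scalar part q4; NumPy is used for brevity):

```python
import numpy as np

def quat_to_dcm(q1, q2, q3, q4):
    """Direction cosine matrix of Equation 2 (unit quaternion, scalar q4)."""
    return np.array([
        [q1*q1 - q2*q2 - q3*q3 + q4*q4, 2*(q1*q2 + q3*q4),              2*(q1*q3 - q2*q4)],
        [2*(q1*q2 - q3*q4),             -q1*q1 + q2*q2 - q3*q3 + q4*q4, 2*(q2*q3 + q1*q4)],
        [2*(q1*q3 + q2*q4),             2*(q2*q3 - q1*q4),              -q1*q1 - q2*q2 + q3*q3 + q4*q4]])

def attitude_angles(dc):
    """Roll, pitch, yaw of Equation 3 (indices in the text are 1-based)."""
    roll = -np.arcsin(dc[1, 2])              # -arcsin(DC23)
    pitch = -np.arctan2(dc[0, 2], dc[2, 2])  # -arctan(DC13/DC33)
    yaw = -np.arctan2(dc[1, 0], dc[1, 1])    # -arctan(DC21/DC22)
    return roll, pitch, yaw

# Identity quaternion gives zero attitude angles
print(attitude_angles(quat_to_dcm(0.0, 0.0, 0.0, 1.0)))
```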

The angles at imaging time are computed by linear interpolation. The matrix $Q_{OB}$ at imaging time is then built as follows:

$$Q_{OB} = R_y(\text{pitch}) \cdot R_x(\text{roll}) \cdot R_z(\text{yaw}) \quad (4)$$

where:

$$R_y(\text{pitch}) = \begin{pmatrix} \cos(\text{pitch}) & 0 & \sin(\text{pitch}) \\ 0 & 1 & 0 \\ -\sin(\text{pitch}) & 0 & \cos(\text{pitch}) \end{pmatrix},$$

$$R_x(\text{roll}) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\text{roll}) & -\sin(\text{roll}) \\ 0 & \sin(\text{roll}) & \cos(\text{roll}) \end{pmatrix},$$

$$R_z(\text{yaw}) = \begin{pmatrix} \cos(\text{yaw}) & -\sin(\text{yaw}) & 0 \\ \sin(\text{yaw}) & \cos(\text{yaw}) & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
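The composition of Equation 4 can be transcribed directly (a sketch; angles in radians, NumPy assumed):

```python
import numpy as np

def rot_x(a):  # roll
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # yaw
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def q_ob(roll, pitch, yaw):
    # Equation 4: QOB = Ry(pitch) . Rx(roll) . Rz(yaw)
    return rot_y(pitch) @ rot_x(roll) @ rot_z(yaw)
```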

$Q_{IO}$ denotes the orbit-to-inertial coordinate transformation matrix, and $Q_{GI}$ represents the Earth-centered inertial to Earth-centered Earth-fixed transformation matrix, which is a function of the sidereal angle, precession, and nutation of the Earth. The image coordinate $x_s$ is taken as zero, as each


recorded image row is a central projection of the Earth's surface recorded at time $t$, and $y_s = (\text{pixel number} - \text{scene centre pixel}) \times \text{detector size}$.

In the present study, an individual camera alignment is performed, and not a transformation from the individual telescope frame to a telescope-centered frame. Therefore, the principal point of the sensor coordinate system can be defined in two different ways. The column numbers of the GCPs in the merged image can be used as they are, and half of the total number of detectors, i.e., 7,248, can be defined as the center pixel. Here, we consider the active part of the CCDs (see Figure 1) as a single virtual array. Alternatively, the center of the full width of the CCDs of the telescope can be used as the principal point, converting the image pixel into absolute values by adding the start pixel location of the first active CCD. In either case, the initial CCD alignment angles at the center pixel are taken as zero and computed as part of the self-calibration. If we define the origin of the image coordinate system as the center of the full width of the CCDs of the telescope, the computed CCD alignment angles will be closer to the values provided by JAXA, since the measured value is given in CCD coordinates.

It is confirmed by JAXA that the accuracy of the positional elements determined by the on-board GPS is within the specified accuracy of 1 m. However, the pointing elements determined by the on-board STS, and the PRISM sensor alignment against the STS, have not yet satisfied the specified accuracy (2.0e−4°). The major factor of the pointing error is that the sensor alignment changes depending on thermal conditions. Therefore, bore-sight alignment angles along with interior orientation parameters are included in the self-calibration adjustment. Information on the payload geometry provided by JAXA is not used in the model. The nominal pre-launch camera focal length serves as the initial approximation.
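The image-coordinate convention above can be sketched as follows (the detector size and centre-pixel value are illustrative, taken from the 14,496-column merged virtual array and the 7 µm cross-track pixel size described earlier):

```python
# Cross-track image coordinate for a column of the merged Level 1B1 image,
# with the principal point at the centre of the virtual CCD array.
DETECTOR_SIZE_M = 7.0e-6   # 7 um cross-track pixel size (illustrative)
CENTRE_PIXEL = 7248        # half of the 14,496 merged columns

def ys(column):
    # ys = (pixel number - scene centre pixel) * detector size
    return (column - CENTRE_PIXEL) * DETECTOR_SIZE_M

print(ys(7248))  # 0.0 at the principal point
```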
Bore-sight alignment angles are initialized to zero, except for the nominal view directions of the telescopes. Satellite ephemeris, attitude, image start time, and the number of dummy pixels of the first and last detector arrays are used. With these values, an initial fitting of the trajectory is done. The given telemetry values are fitted so that they can be predicted at any required time, i.e., to make them continuous from the given discrete values. The quaternions are given with a 10 Hz update rate. A generic polynomial model is used so that, by selecting the order of the polynomial, it can be adapted for different types of sensors. The initial values of all the parameters are derived by a least squares adjustment to the ephemeris/attitude data using a generalized polynomial model:

$$\alpha = a_0 + a_1 \cdot t + \cdots + a_n \cdot t^n$$

(5)

where $t$ is the scan time and $\alpha$ corresponds to the satellite attitude (ω, φ, κ), position $(X_P, Y_P, Z_P)$, and velocity $(V_X, V_Y, V_Z)$. The order for the initial fitting of the parameters, and the order up to which the coefficients have to be corrected, can be selected. The variation in the attitude angles may not be constant, and therefore the time-dependent coefficients are also updated with GCPs. Additional refinements to the coefficients of the computed attitude angles ω, φ, and κ are done as corrections to the constant term and the first-order term, along with corrections to the interior orientation parameters, using a few GCPs. If the post-launch calibrated values of the payload parameters are included for a particular sensor, the interior orientation parameters can be excluded from the adjustment:

$$\begin{aligned} \omega &= (\omega_0 + \Delta\omega_0) + (\omega_1 + \Delta\omega_1)\, t \\ \varphi &= (\varphi_0 + \Delta\varphi_0) + (\varphi_1 + \Delta\varphi_1)\, t \\ \kappa &= (\kappa_0 + \Delta\kappa_0) + (\kappa_1 + \Delta\kappa_1)\, t \end{aligned} \quad (6)$$
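The trajectory fit of Equation 5 and the correction terms of Equation 6 can be sketched as follows (synthetic attitude samples; `np.polyfit` stands in for the least squares adjustment, and the correction values are hypothetical):

```python
import numpy as np

# Synthetic 10 Hz roll samples following a linear trend (Equation 5, n = 1).
t = np.arange(0.0, 2.0, 0.1)       # scan time [s]
omega = 1.0e-4 + 2.0e-5 * t        # omega0 + omega1 * t

# Least-squares fit of the polynomial coefficients (highest order first).
omega1, omega0 = np.polyfit(t, omega, 1)

# GCP-derived corrections (hypothetical values) applied as in Equation 6.
d_omega0, d_omega1 = 3.0e-6, -1.0e-7

def omega_corrected(ti):
    return (omega0 + d_omega0) + (omega1 + d_omega1) * ti
```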


Let $P = [\Delta\omega_0, \Delta\omega_1, \Delta\varphi_0, \Delta\varphi_1, \Delta\kappa_0, \Delta\kappa_1, \Delta f]$ be the set of interior and exterior orientation parameters to be estimated in the present study. Linearized forms of the condition equations are developed using a Taylor series expansion around the observations and parameters. Then, a first least squares solution of the system is estimated, which is then iterated. Using the updated satellite parameters, the ground coordinates of the GCPs are calculated for each iteration step. If the differences between the calculated and original values are negligible (below a threshold), the iterations are terminated. Using the generic sensor model described in the previous section, the following models are studied to decide and compute the in-flight calibration parameters:
1. Model M1: Two parameters are updated, the roll and pitch biases: (Δω0, Δφ0).
2. Model M2: Three parameters are updated, the roll, pitch, and yaw biases: (Δω0, Δφ0, Δκ0).
3. Model M3: Three parameters are updated: (Δω0, Δφ0, f). Instead of the yaw bias of M2, the focal length is included in M3.
4. Model M4: Four parameters are corrected: (Δω0, Δφ0, Δκ0, f).
5. Model M5: The constant and first-order coefficients of the attitude angles are refined: (Δω0, Δφ0, Δκ0, Δω1, Δφ1, Δκ1).
6. Model M6: The same parameters as in M5 are refined, but this model includes weight matrices for the parameters. The general trend of the attitude variation is captured from the given INS (Inertial Navigation System) data, and the time dependency of these parameters can also be analyzed. We include a co-factor matrix for the observations and a weight matrix for the parameter estimates in the system. The values of the co-factor matrix for the observations represent the uncertainty in the control point measurements, i.e., its diagonal elements are the inverses of the variances of the measurement errors of the control point coordinates. Similarly, the a priori weights of the parameters represent the uncertainty in the attitude information.
Appropriate weights for the parameters define threshold limits (given by the uncertainty of the attitude data) within which each individual parameter can vary. Therefore, the number of required GCPs is minimal in this model.
7. Model M7: All seven parameters are updated with GCPs; the constant and first-order terms of the attitude coefficients and the focal length are refined.
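The role of the weight matrix in M6 can be illustrated with a toy normal-equation sketch (hypothetical numbers; P encodes the a priori uncertainty of each parameter, so a tightly weighted parameter barely moves):

```python
import numpy as np

# Toy linearized system A dx = l for two attitude corrections.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
l = np.array([0.01, 0.02, 0.03])   # misclosures at the GCPs
W = np.eye(3)                      # observation weights (inverse variances)

# M5-style solution: no a priori constraint on the parameters.
dx_free = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)

# M6-style solution: the second parameter (e.g. a yaw term) is tightly
# constrained by a large a priori weight and therefore barely changes.
P = np.diag([1.0, 1.0e6])
dx_weighted = np.linalg.solve(A.T @ W @ A + P, A.T @ W @ l)
print(dx_free, dx_weighted)
```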

Table 1 summarizes the different combinations of parameters considered for models M1 to M7.

Ground Coordinates Computation
An extraction of ground point information using only one image is termed a mono (monoscopic) extraction. The mono extraction solution corresponds to the intersection of the modeled line of sight (LOS) with a height estimate, which is just the application of the sensor model's image-to-ground function. More specifically, the mono adjustment inputs the image point specified as image coordinates (line, sample), or equivalently (row, column), where the target/object is seen

TABLE 1. SUMMARY OF DIFFERENT MODELS AND THE CORRESPONDING PARAMETERS CONSIDERED

Model   Parameters Considered
M1      Δω0, Δφ0
M2      Δω0, Δφ0, Δκ0
M3      Δω0, Δφ0, f
M4      Δω0, Δφ0, Δκ0, f
M5      Δω0, Δφ0, Δκ0, Δω1, Δφ1, Δκ1 (without weight matrix)
M6      Δω0, Δφ0, Δκ0, Δω1, Δφ1, Δκ1 (with weight matrix)
M7      Δω0, Δω1, Δφ0, Δφ1, Δκ0, Δκ1, Δf

PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING

in the image. It then projects the image point along the modeled LOS and intersects it with a specified height provided by an external source, such as a Digital Elevation Model (DEM). This projection results in a ground point, usually specified in geodetic ground coordinates (latitude, longitude). When an object is viewed in multiple images (forward, backward, nadir), the multi-image extraction process is not only able to produce a more accurate solution, but can do so without external height information. This is done through a stereo adjustment in which a set of tie points from the forward, backward, and nadir images is used to correct the attitude data.

Test Data
Three datasets were used for the investigation. Because the image data are lossy JPEG compressed on board the satellite, compression artifacts are visible in the imagery. The first site is located in Catalonia near Barcelona (Spain). One set of radiometrically corrected triplet images of PRISM was available, acquired during October 2006. In addition, five orthophotos with a pixel size of 0.5 m and an accuracy better than 1 pixel (1σ), as well as a DEM of the test site with 15 m resolution and an orthometric height accuracy of 1.1 m (1σ), were provided by the Institut Cartogràfic de Catalunya (ICC). Twenty-eight distributed points were identified which can be used as GCPs or checkpoints.

The second test site is the area around the city of Shibam (northwest of Sana'a) in Yemen, acquired during March 2008. In-situ ground control point (GCP) measurements are not available for the Yemeni test site. The water resource management over this test site using stereo evaluation was reported by Müller et al. (2009). Ikonos stereo images acquired in March 2007 with a ground resolution of 1 m were provided.
Therefore, the geocoded Ikonos scene (using the Rational Polynomial Coefficients (RPCs) for orthorectification), together with bilinearly interpolated terrain height information from DEMs derived from SRTM data, serves as the source for GCP determination, with a specified geometric accuracy of about 7 m (CE90). For each of the three stereo partners, ten points were manually measured. Unfortunately, the Ikonos image covers only the first half of the PRISM image. The third test site is located


near Marseille (France), acquired in March 2007 and provided to DLR by the company GAEL Consultant (France). Coordinates of six GCPs were provided, measured with GPS with accuracies better than 1 m as well as a DEM with 100 m resolution. Dataset 3 is used only for the computation of effective focal length. It is not used for evaluation of other results, as the number of available check points is too low.

Results and Discussion
The first step is to compute the effective focal length of the cameras. For this, models M3, M4, and M7 were used. Table 2 shows the effective focal lengths computed for the forward, backward, and nadir views with different combinations of GCPs and models. The possible reason for the slight variations (within 0.0009 m) between different models applied to a particular camera might be the variation in accuracy between different points (within a pixel). Taking the average, the computed focal length values are 1.93745 m, 1.93820 m, and 1.93666 m for the nadir, forward, and backward PRISM cameras, respectively. The corresponding standard deviations of the estimated focal lengths are 0.000238 m, 0.000126 m, and 0.000489 m, respectively.

A focal length error, if it exists, will show up mainly as an error in the across-track direction in the mono adjustment. Figure 2 demonstrates this behavior for the nadir viewing camera. Here, models M1, M2, M5, and M6 are used, which do not account for the focal length as a parameter. The other parameters are corrected after fixing the computed focal length from the other models. The same correction procedure is repeated after fixing the nominal focal length value of 1,939 mm. The error in the X direction (across track) clearly shows an improvement with the computed focal lengths. Therefore, these values are fixed as the default for all models. The error in the Y direction (along track) is insignificant between the computed and nominal focal lengths.

Table 3 shows the accuracies of the mono adjustment with different combinations of parameters and GCPs. Table 4 shows the stereo adjustment accuracies. Interesting error patterns can be seen in Table 3 and Table 4 with the different models, which indicate the influence of the different parameters

TABLE 2. FOCAL LENGTH COMPUTED WITH DIFFERENT MODELS (M3, M4, M7) AND GCP COMBINATIONS FOR FORWARD, BACKWARD, AND NADIR VIEWS

Computed focal length (in meters):

Model, #GCPs   D1 Forward  D1 Backward  D1 Nadir   D2 Forward  D2 Backward  D2 Nadir
M3, 3          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M3, 4          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M3, 5          1.93784     1.93696      1.93808    1.93816     1.9367       1.93808
M3, 6          1.93785     1.93696      1.93808    1.93817     1.9367       1.93808
M3, 7          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M4, 3          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M4, 4          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M4, 5          1.93779     1.93664      1.93813    1.93835     1.93664      1.93813
M4, 6          1.9378      1.93695      1.93813    1.93836     1.93648      1.93813
M4, 7          1.9382      1.93695      1.93745    1.9382      1.93648      1.93745
M4, 8          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M4, 9          1.93808     1.93664      1.93741    1.93821     1.93664      1.93741
M7, 3          1.9382      1.9371       1.93745    1.9382      1.93666      1.93745
M7, 4          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M7, 5          1.93765     1.93664      1.93814    1.93841     1.93664      1.93814
M7, 6          1.93766     1.93717      1.93814    1.93841     1.93652      1.93814
M7, 7          1.9382      1.93717      1.93745    1.9382      1.93652      1.93745
M7, 8          1.9382      1.93664      1.93745    1.9382      1.93664      1.93745
M7, 9          1.93786     1.93664      1.93827    1.9382      1.93664      1.93827

Dataset 3 (values in the same row order; combinations beyond the available GCPs are marked "-"):
Forward:  1.9382, 1.9382, 1.9382, 1.9382, 1.9382, 1.93799, 1.9382, 1.9382, 1.93832, -
Backward: 1.93664, 1.93664, 1.93886, 1.93886, 1.93664, 1.93664, 1.93664, 1.93895, 1.93896, 1.93664, 1.93664, 1.9382, -
Nadir:    1.93745, 1.93745, 1.93742, 1.93742, 1.93745, 1.93745, 1.93745, 1.93745, 1.9375, 1.9375, 1.93745, -


From Figure 3 we observe that the errors in M3 are larger than in M2. In M3, the roll and pitch biases are refined along with the focal length. Since we are using the computed focal length as the initial value, the final refinement of the focal length is very small. For the same reason, the errors with M1 and M3 are almost the same. Similarly, the errors from models M2 and M4 are almost of the same order after fixing the correct focal length. Thus, comparing the first four models, it can be concluded that if the precise focal length is available, the bore-sight alignment angles in the roll, pitch, and yaw directions are to be included in the self-calibration. However, the along-track errors are not sufficiently reduced with these parameters alone. The parameters which can cause an along-track error are time, the pitch angle, and a component of the yaw angle. If there is an error in the time tagging of the scene start or attitude start, it will get corrected through the refinement of the constant term of the pitch angle. In the models M1 through M4, the constant term of the pitch offset is already corrected. Therefore, we refine the time-dependent terms of the attitude angles through models M5 through M7. The along-track accuracies improve in all cases, indicating that when a trajectory of 2 to 3 minutes is modeled and corrected with a single set of parameters, the time-dependent terms of the coefficients should also be refined.

Comparing the results of models M5, M6, and M7, it is seen that M6 is better in all cases. The difference between M5 and M6 is only the inclusion of the weight matrix for the parameters in M6. Refinements of the coefficients of roll and pitch are given more weight compared to the coefficients of yaw, assuming that the yaw steering of the spacecraft takes care of the major part of the yaw variations, which is already reflected in the attitude values provided. As M6 uses weight matrices, the number of required GCPs is also minimal.
Along with the six attitude terms (constant and time-dependent), the focal length is also included in M7. M5 and M7 behave in a similar fashion because the focal length is already refined in the first step and included in the camera model. Thus, if no information about the camera parameters is available for a particular sensor, self-calibration can be done with M4 for a few datasets to fix the focal length and bore-sight alignment offsets in a first step. During the product generation in the

Figure 2. Effect of focal length variation on along-track errors (y) and across-track errors (x) for models M1, M2, M5, and M6. (The connecting dotted lines are drawn only to show the trend.)

on the viewing geometry. These errors must be analyzed critically when deciding the parameters for self-calibration and the model to be used for operational product generation. The accuracy in the X-direction is better than a pixel for all cases, which shows that the effective focal lengths computed are precise. With model M1, the distortions in the along-track direction are not improved significantly even with a larger number of GCPs. The error in the Y-direction for models M1 through M4 can be attributed to the fact that only the constant terms of the attitude are corrected, whereas for models M5 through M7 the corrections include both the constant and the first-order (time-dependent) terms.
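To get a feel for the magnitudes involved: for a near-nadir view, a constant pitch bias Δφ maps into a roughly constant along-track ground shift of H·tan(Δφ), where H is the orbital altitude (about 692 km for ALOS). The numbers below are a back-of-envelope illustration, not values from the paper:

```python
import math

H = 692e3                         # nominal ALOS orbital altitude in metres
d_pitch = math.radians(1.0/3600)  # an illustrative 1-arcsecond pitch bias
shift = H * math.tan(d_pitch)     # along-track ground displacement, near nadir
print(round(shift, 2))            # ~3.35 m, already more than one 2.5 m pixel
```

This is why even sub-arcsecond constant and drift terms of the attitude must be refined to reach sub-pixel geometric accuracy.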

TABLE 3. MONO ADJUSTMENT ACCURACIES FOR DIFFERENT COMBINATIONS OF MODELS AND GCPS

RMSE (m) for Dataset 1:

Model, #GCPs   Backward       Forward        Nadir
                Y     X        Y     X       Y     X
M1, 3          6.48  1.47    11.55  3.69    7.06  2.07
M2, 3          5.87  1.43     4.73  2.34    4.83  1.69
M3, 3          6.52  1.44    11.52  3.80    7.04  2.11
M4, 3          5.89  1.44     4.72  2.37    4.82  1.71
M6, 3          1.84  1.59     2.53  1.42    2.10  1.15
M1, 4          5.66  1.48    10.13  3.48    6.32  2.04
M2, 4          4.98  1.46     4.50  2.20    4.40  1.68
M3, 4          5.59  1.74    10.25  3.08    6.32  2.03
M4, 4          5.00  1.42     4.46  2.20    4.40  1.68
M5, 4          1.86  1.28     2.14  1.30    1.46  1.14
M6, 4          1.83  1.28     2.15  1.30    1.46  1.14
M7, 4          1.86  1.27     2.14  1.31    1.46  1.15
M1, 5          5.64  1.63    10.23  3.68    6.39  2.21
M2, 5          5.00  1.58     4.52  2.27    4.41  1.78
M3, 5          5.59  1.84    10.40  3.12    6.42  2.11
M4, 5          5.03  1.51     4.45  2.29    4.40  1.79
M5, 5          1.88  1.33     2.20  1.30    1.51  1.16
M6, 5          1.86  1.32     2.21  1.29    1.50  1.17
M7, 5          1.84  1.31     2.22  1.31    1.49  1.16

RMSE (m) for Dataset 2:

Model, #GCPs   Backward       Forward        Nadir
                Y     X        Y     X       Y     X
M1, 3          5.31  1.35     5.91  2.35    4.03  0.97
M2, 3          5.33  1.37     3.25  2.78    4.44  1.02
M3, 3          5.32  1.12     5.81  2.31    4.08  1.35
M4, 3          5.34  1.13     3.25  2.52    4.55  1.19
M6, 3          1.81  1.55     2.70  1.89    3.17  1.08
M1, 4          5.23  1.31     5.68  2.24    3.51  0.97
M2, 4          5.23  1.31     3.23  2.61    4.50  1.09
M3, 4          5.23  1.10     5.58  2.25    3.61  1.43
M4, 4          5.23  1.11     3.24  2.35    4.68  1.17
M5, 4          5.70  1.50     5.33  1.82    2.22  1.22
M6, 4          1.76  1.49     2.72  1.74    1.01  1.03
M7, 4          6.09  1.16     5.06  1.72    2.20  1.30
M1, 5          5.24  1.26     5.73  2.09    3.28  0.99
M2, 5          5.24  1.26     3.45  2.31    4.17  1.06
M3, 5          5.24  1.07     5.59  2.21    3.37  1.52
M4, 5          5.24  1.07     3.43  2.10    4.35  1.26
M5, 5          3.55  1.20     4.04  1.95    2.29  0.87
M6, 5          1.82  1.38     2.65  1.47    2.01  0.95
M7, 5          3.78  0.76     3.94  1.85    2.61  1.79


TABLE 4. STEREO ADJUSTMENT ACCURACIES FOR DIFFERENT COMBINATIONS OF MODELS AND GCPS

RMSE (m):

Model, #GCPs   Dataset 1            Dataset 2
                Y     X     Z        Y     X     Z
M1, 3          4.81  1.55  6.59     8.03  2.34  7.36
M2, 3          3.90  1.66  5.16     4.83  1.89  2.08
M3, 3          4.80  1.55  6.60     8.04  2.31  7.36
M4, 3          3.90  1.66  5.16     4.84  1.89  2.07
M6, 3          1.34  1.34  4.56     2.17  1.80  2.00
M1, 4          4.53  1.50  6.58     7.14  2.60  6.91
M2, 4          3.75  1.59  5.08     4.62  2.03  1.61
M3, 4          4.51  1.50  6.58     7.14  2.61  6.90
M4, 4          3.75  1.57  5.08     4.62  2.03  1.61
M5, 4          5.58  1.09  2.66     1.79  1.68  1.58
M6, 4          1.06  1.27  4.53     1.76  1.68  1.62
M7, 4          5.63  1.15  2.89     1.78  1.69  1.58
M1, 5          4.58  1.55  6.61     7.40  3.01  7.00
M2, 5          3.95  1.57  5.02     4.77  2.27  1.64
M3, 5          4.57  1.55  6.62     7.42  2.91  6.99
M4, 5          3.95  1.57  5.02     4.76  2.27  1.64
M5, 5          1.02  1.18  7.64     1.76  1.79  1.58
M6, 5          1.18  1.26  4.48     1.74  1.79  1.60
M7, 5          1.03  1.15  7.75     1.76  1.79  1.58

operational scenario, M6 can be used so that it corrects the remaining static and dynamic error components with a minimum number of GCPs. We could not fix the bore-sight alignments precisely, because the three datasets used were acquired in different years (2006, 2007, and 2008), and datasets 2 and 3 appear to have slightly different offsets compared to dataset 1 (the earliest image). In any event, with the orientation model M6 all of these offsets are accounted for. Another experiment was done without using the given attitude values: using M6, the constant and first-order terms were computed. The comparison of planimetric errors is tabulated in Table 5. It shows that even if the attitude measurements are not used, the same accuracy can be achieved, provided the constant and time-dependent terms of the attitudes are corrected with GCPs. The RMS discrepancies in planimetric positioning are less than 0.1 pixel in all cases (except for one case, they are generally at the 0.01 pixel level). Table 6 gives the location accuracies of the mono and stereo adjustments with model M6. It shows that sub-pixel accuracy could be achieved with the self-calibration approach with just three GCPs. The slightly higher RMSE in Z for dataset 2 in

Figure 3. RMS errors with different models using four GCPs for correcting the nadir image of Dataset 1. (The connecting dotted lines are drawn only to show the trend.)

TABLE 5. COMPARISON OF ACCURACIES WITH AND WITHOUT USING THE GIVEN ATTITUDE DATA (MODEL M6)

                               RMS error using       RMS error without
                               given attitude data   attitude data
Data       Camera   #GCPs      Y (m)    X (m)        Y (m)    X (m)
Dataset 1  ALOS_F   4          2.150    1.302        2.146    1.301
           ALOS_N   4          1.458    1.137        1.457    1.136
           ALOS_B   4          1.837    1.277        1.828    1.276
Dataset 2  ALOS_F   4          2.716    1.740        2.717    1.741
           ALOS_N   4          0.829    1.072        1.005    1.027
           ALOS_B   4          1.737    1.490        1.758    1.494

the stereo adjustment is due to a lack of well-distributed GCP information. Out of ten points, three to five are used as GCPs. Higher errors (a maximum of two pixels) at one or two points strongly influence the final RMSE value.
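The sensitivity of the RMSE to a few large residuals is easy to demonstrate. The residual values below are synthetic, chosen only to mimic the situation described (one check point off by about two pixels among otherwise sub-pixel residuals):

```python
import numpy as np

# Synthetic check-point residuals in metres (PRISM pixel = 2.5 m):
# four sub-pixel residuals plus one ~2-pixel outlier.
res = np.array([0.5, 0.7, 0.6, 0.4, 5.0])
rmse_all = float(np.sqrt(np.mean(res**2)))
rmse_wo_outlier = float(np.sqrt(np.mean(res[:-1]**2)))
print(round(rmse_all, 2), round(rmse_wo_outlier, 2))  # 2.29 0.56
```

Because the residuals are squared before averaging, a single two-pixel error dominates the statistic, which is why a sparse and unevenly distributed set of check points can make the reported Z accuracy look worse than the model itself warrants.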

TABLE 6. ACHIEVABLE LOCATION ACCURACIES WITH MODEL M6 IN MONO AND STEREO ADJUSTMENTS

                            Mono adjustment           Stereo adjustment (Forward & Nadir)
Data       Camera   #GCPs   Y (m)   X (m)     #GCPs   Y (m)   X (m)   Z (m)
Dataset 1  ALOS_F   4       2.149   1.302     3       1.957   1.756   1.972
           ALOS_N   4       1.457   1.136     4       1.778   1.666   1.595
           ALOS_B   4       1.836   1.276     5       1.759   1.790   1.578
Dataset 2  ALOS_F   4       2.715   1.739     3       1.343   1.334   4.536
           ALOS_N   4       0.828   1.072     4       1.066   1.271   4.508
           ALOS_B   4       1.736   1.490     5       1.190   1.257   4.456


Conclusion

Self-calibration is a very powerful method for sensor model refinement. In this paper, a successful self-calibration approach for ALOS PRISM imagery is presented. Seven models with different parameter combinations were analyzed to determine the best set of parameters. It is shown that sub-pixel accuracy can be achieved with a careful selection of parameters for self-calibration, even without knowledge of the precise payload geometry and the given attitude data. Overall, the proposed self-calibration leads to a very high geometric accuracy for ALOS PRISM images with just three GCPs. Increasing the number of GCPs does not further improve the accuracies, which indicates that the model is stable and consistent. Further research is required to correct the exterior orientation of the cameras simultaneously through a tied-camera approach, taking advantage of the same-pass acquisition.

Acknowledgments

The authors would like to express their thanks to the Director, ADRIN, and the Director, Remote Sensing Technology Institute, DLR, for providing the opportunity for this collaborative research. Special thanks go to DAAD for providing assistance for this study. The first author would like to extend her special acknowledgements to D. Sudheer Reddy for his support and critical reviews in bringing the paper into its present form.

References

Aksakal, S., and A. Gruen, 2008. Self-calibration and sensor model validation of ALOS/PRISM imagery, Proceedings of the ALOS PI 2008 Symposium, Island of Rhodes, Greece, 03–07 November (ESA SP-664, January 2009).
Gruen, A., S. Kocaman, and K. Wolff, 2007. Calibration and validation of early ALOS/PRISM images, The Journal of the Japan Society of Photogrammetry and Remote Sensing, 46(1):24–38.
JAXA, 2006. ALOS Product Format Description, URL: http://www.jaxa.jp/projects/sat/alos/index_e.html (last date accessed: 01 December 2010).


Kocaman, S., and A. Gruen, 2008a. Geometric modeling and validation of ALOS/PRISM imagery and products, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, Vol. XXXVII, Part B1.
Kocaman, S., and A. Gruen, 2008b. Orientation and self-calibration of ALOS PRISM imagery, The Photogrammetric Record, 23(123):323–340.
Müller, R., M. Schneider, P.V. Radhadevi, P. Reinartz, and F. Schwonke, 2009. Stereo evaluation of ALOS PRISM and IKONOS in Yemen, Proceedings of the IEEE International Geoscience & Remote Sensing Symposium, 12–17 July, Cape Town, South Africa.
Radhadevi, P.V., 1999. Pass processing of IRS-1C/1D PAN subscene blocks, ISPRS Journal of Photogrammetry & Remote Sensing, 54(4):289–297.
Radhadevi, P.V., and S.S. Solanki, 2008. In-flight calibration of multiple cameras of IRS-P6, The Photogrammetric Record, 23(121):69–89.
Schneider, M., M. Lehner, R. Müller, and P. Reinartz, 2008. Stereo evaluation of ALOS/PRISM data on ESA-AO test sites - First DLR results, Proceedings of the ALOS PI 2008 Symposium, Island of Rhodes, Greece, 03–07 November (ESA SP-664, January 2009).
Schwind, P., M. Schneider, G. Palubinskas, T. Storch, R. Müller, and R. Richter, 2009. Processors for ALOS optical data: Deconvolution, DEM generation, orthorectification, and atmospheric correction, IEEE Transactions on Geoscience and Remote Sensing, 47(12):4074–4082.
Tadono, T., M. Shimada, M. Watanabe, T. Hashimoto, and T. Iwata, 2004. Calibration and validation of PRISM onboard ALOS, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 35(1):13–18.
Weser, T., F. Rottensteiner, J. Willneff, and C.S. Fraser, 2008. An improved pushbroom scanner model for precise georeferencing of ALOS PRISM imagery, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B1, pp. 723–729.
Westin, T., 1990. Precision rectification of SPOT imagery, Photogrammetric Engineering & Remote Sensing, 56(2):247–253.
Yastikli, N., and K. Jacobsen, 2005. Influence of system calibration on direct sensor orientation, Photogrammetric Engineering & Remote Sensing, 71(5):629–633.

(Received 16 April 2010; accepted 22 July 2010; final version 30 September 2010)
