CONSTRAINED BUNDLE ADJUSTMENT OF PANORAMIC STEREO IMAGES FOR MARS LANDING SITE MAPPING

Kaichang Di, Fengliang Xu and Rongxing Li

Mapping and GIS Laboratory, CEEGS, The Ohio State University
470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210-1275 – {di.2, xu.101, li.282}@osu.edu

Commission II, WG II/1

KEY WORDS: Mars Landing Site Mapping, Bundle Adjustment, Panoramic Image, DEM, Orthoimage

ABSTRACT:

In Mars exploration missions, high-precision landing-site topographic information is crucial for engineering operations and the achievement of scientific goals. Detailed topographic information of landing sites is usually provided by ground panoramic images acquired by lander or rover stereo cameras. This technology was employed for the 1997 Mars Pathfinder (MPF) mission and will also be used in the current 2003 Mars Exploration Rover (MER) mission. Photogrammetric bundle adjustment is a key technique for achieving high-precision topographic products. This paper presents a special constrained bundle-adjustment method that supports high-precision Mars landing-site mapping. A complete set of constraint equations is derived to model the unique geometric characteristics of the stereo cameras. This constrained bundle adjustment is then applied to the panoramic image network to provide high-precision exterior orientation (EO) parameters of the images as well as ground positions of the tie points. A fast bundle-adjustment algorithm is proposed in which the two kinds of unknowns (EO parameters and ground positions) are solved iteratively in order to avoid the large-scale matrix computations that would be necessary in a simultaneous adjustment. The method and the software are tested using panoramic lander data obtained from the MPF mission and FIDO (Field Integrated Design & Operations) panoramic rover data acquired on Earth. Test results show that sub-pixel to 1-pixel accuracy can be achieved and that the fast algorithm is more than 100 times faster than the simultaneous solution, yet still provides the same or even better accuracy. Methods for automatic tie point selection and DEM and orthoimage generation are also briefly described. The proposed methods and developed software (with minor modifications) will be used during the 2003 MER mission.

1. INTRODUCTION

High-precision topographic information is crucial in Mars landed exploration missions for the achievement of scientific goals as well as for engineering operations. In particular, large-scale landing-site mapping will be extremely important for the current and future landed missions such as the 2003 Mars Exploration Rovers (MER), the European Beagle 2 Lander of the 2003 Mars Express, the 2009 Mars Science Laboratory mission, and sample-return missions beyond 2010.

Before landing, topographic mapping of a landing site is performed using orbital data such as Viking, MOC (Mars Orbiter Camera), and THEMIS orbital images with MOLA (Mars Orbiter Laser Altimeter) altimetric data. After landing, detailed topographic information of the landing site is usually provided by ground panoramic images and traversing images acquired by lander and/or rover stereo cameras. This technology was employed for the 1997 Mars Pathfinder (MPF) mission and will also be used in the ongoing MER mission after the two rovers land on Mars in January 2004. In the MPF mission, the lander imager IMP (Imager for Mars Pathfinder) acquired over 16,500 lander images and the rover Sojourner acquired 550 rover images (Golombek et al., 1999). Among the IMP stereo images is a full panorama with multiple overlaps.
The two MER rovers will use their Pancam and Navcam stereo cameras at the landing sites to take both full/partial panoramic and traversing images. These ground-based (lander and rover) panoramic and traversing images provide landing-site information in unprecedented detail, far greater than that of orbital images. The methods used to derive topographic information from ground images are extremely important to the resulting topographic accuracy. Photogrammetric bundle adjustment is a key technique for achieving high-precision topographic products. Because the camera positions and attitudes of the panoramic stereo images are highly correlated, modeling the correlation is highly desirable in the bundle adjustment to ensure high precision and reliability of the least-squares solution.

After the MPF mission surface operations, the USGS carried out photogrammetric and cartographic processing of the IMP images and produced a variety of topographic products including spectral cubes, panoramic maps and other topographic data (Gaddis et al., 1999; Kirk et al., 1999). A bundle adjustment was performed to revise the pointing data of the IMP images. Pointing errors of the raw IMP images are at least 5 pixels and commonly are as large as 15 pixels. After the bundle adjustment, the RMS residual between the measured image coordinates and the calculated ground points projected back into the image space was found to be about 0.5 pixel (Kirk et al., 1999). The German Aerospace Center (DLR) has produced a multispectral panoramic image map and an orthoimage map of the MPF landing site using similar photogrammetric and image processing techniques (Kuschel et al., 1999).

Since 1998, the Mapping and GIS Laboratory at The Ohio State University has been developing a bundle-adjustment method with relevant techniques for the near real-time processing of Mars descent and surface images for rover localization and landing-site mapping. In order to verify our algorithm and software, field tests were conducted in April 1999 and May 2000 at Silver Lake, CA. Using the simulated descent and rover images obtained, a rover localization accuracy of approximately 1 m was achieved for a traverse length of 1 km from the landing center (Li et al., 2000, 2002; Ma et al., 2001). In addition, we have tested our methods and software with actual Mars data. Intermediate results of landing-site mapping and rover localization from IMP data have been reported in Di et al. (2002) and Xu et al. (2002). In Li et al. (2003), we reported the developed techniques for automatic landing-site mapping and showed the final DEM and orthoimage generated from the panoramic IMP images. In this paper, we describe the details of the bundle adjustment of panoramic images. Results using IMP and FIDO data are given to show the effectiveness of the constrained adjustment. The developed methods and software will be employed in the 2003 MER mission for landing-site mapping and rover localization through a MER Participating Scientist project.

2. PANORAMIC IMAGE DATA

In this investigation, we use two sets of panoramic image data to verify our algorithm and software. The first is MPF IMP panoramic images. We downloaded IMP images from the PDS (Planetary Data System) web site. The German DLR also provided us with a complete panorama chosen from a vast number of IMP images. We selected 129 IMP images (64 stereo pairs and one single image) that form a 360º (azimuth) panorama. The tilt angle for the upper panorama ranged from 69º to 90º and that for the lower panorama from 50º to 69º. Overlap exists in both the vertical and horizontal directions. Figure 1 shows a mosaic of the images provided by German DLR.

Figure 1. MPF IMP images (mosaic courtesy of German DLR; segmented into two parts for larger view)

The second data set is from the Athena Science Team's FIDO test conducted in August 2002. In this test, the rover traversed about 200 m in 20 Sols (Martian days), taking more than 960 Navcam and Pancam images and collecting much additional data. Some of the collected Navcam and Pancam images are panoramic. We selected a 360º Navcam panorama at Site 5 to test our software; the results are reported in this paper. Figure 2 is a rough mosaic of these images. We have also been adding additional Navcam and Pancam panoramic images along with traversing images to test the mapping and rover localization capabilities of our methodology.

Figure 2. A rough mosaic of the Navcam panoramic images

There are several frames of reference used in the MPF image pointing data, including the camera head coordinate system, the lander (L) frame, the Martian Local Level (M) coordinate frame, the Mars Surface Fixed (MFX) coordinate frame, and the Landing Site Cartographic (LSC) coordinate system. Bundle adjustment and topographic products of our models are based on the LSC system as defined by the U.S. Geological Survey. We developed a program to convert the pointing data of the PDS images to exterior orientation data; this is accomplished by a chain of translations and rotations through the above frames of reference (R. Kirk, 2001, personal communication). The converted exterior orientation values were then used as initial values in the bundle adjustment.

For the FIDO data, the WITS (Web Interface for Tele-Science) system has converted the original telemetry data to CAHV and CAHVOR camera models. We developed a program to convert the CAHV and CAHVOR models to the conventional photogrammetric model, which consists of interior orientation parameters, lens distortion parameters, and exterior orientation parameters. Bundle adjustment and topographic products of the FIDO data are based on a local coordinate system defined by the Athena Science Team.

3. BUNDLE ADJUSTMENT

3.1 Basic Bundle-Adjustment Model

The basic model for the bundle adjustment is based on the well-known collinearity equations (Wolf and Dewitt, 2000):

$$x_P = -f\,\frac{m_{11}(X_P - X_O) + m_{12}(Y_P - Y_O) + m_{13}(Z_P - Z_O)}{m_{31}(X_P - X_O) + m_{32}(Y_P - Y_O) + m_{33}(Z_P - Z_O)}$$

$$y_P = -f\,\frac{m_{21}(X_P - X_O) + m_{22}(Y_P - Y_O) + m_{23}(Z_P - Z_O)}{m_{31}(X_P - X_O) + m_{32}(Y_P - Y_O) + m_{33}(Z_P - Z_O)} \qquad (1)$$

where (XP, YP, ZP) are ground coordinates; (xP, yP) are image space coordinates; (XO, YO, ZO) are the coordinates of the camera center in the object space; f is the focal length of the camera; and mij are the elements of a rotation matrix that is entirely determined by the three rotation angles (ω, ϕ, κ). (XO, YO, ZO, ω, ϕ, κ) are called the exterior orientation parameters. The image coordinates (xP, yP) are calculated from the pixel coordinates using the pixel size and lens distortion corrections.

The linearized observation equation is expressed in matrix form as

$$V = AX - L, \quad P \qquad (2)$$

where P is the a priori weight matrix of the observations, which reflects the measurement quality and the contribution of each observation to the final result. For the same image feature, measurements from higher-resolution images have greater weights than those from lower-resolution images. In this bundle-adjustment model, all of the unknowns (the camera positions and orientations of all the images and the 3D ground positions of the tie points) are adjusted together after all the images are acquired; we therefore call it the integrated bundle-adjustment model.
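To make the model of Equations (1) and (2) concrete, the following minimal Python sketch (our illustration, not the authors' code) builds the rotation matrix from (ω, ϕ, κ) in the common omega-phi-kappa sequence and projects a ground point into image space according to Equation (1). The interface, angle convention and example numbers are assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix M = R(kappa) R(phi) R(omega); its elements are the m_ij of Eq. (1)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1.0, 0.0, 0.0], [0.0, co, so], [0.0, -so, co]])
    R_phi   = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    R_kappa = np.array([[ck, sk, 0.0], [-sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return R_kappa @ R_phi @ R_omega

def collinearity_project(ground_pt, eo, f):
    """Project a ground point (X_P, Y_P, Z_P) to image coordinates (x_P, y_P), Eq. (1).

    eo = (X_O, Y_O, Z_O, omega, phi, kappa); image coordinates are assumed to be
    already reduced to the principal point and corrected for lens distortion.
    """
    Xo, Yo, Zo, omega, phi, kappa = eo
    u, v, w = rotation_matrix(omega, phi, kappa) @ (
        np.asarray(ground_pt, float) - np.array([Xo, Yo, Zo]))
    return np.array([-f * u / w, -f * v / w])

# Illustrative use: a camera 2 m above the origin with zero rotation angles
# (looking down the -Z axis in this convention) and a 23 mm focal length.
if __name__ == "__main__":
    x_p, y_p = collinearity_project((0.5, 0.3, 0.0), (0.0, 0.0, 2.0, 0.0, 0.0, 0.0), f=0.023)
    print(x_p, y_p)   # both positive: the point lies in the +x, +y image quadrant
```

The design matrix A of Equation (2) then holds the partial derivatives of these two projection functions with respect to the exterior orientation parameters and the ground coordinates.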

Because there is no absolute ground control available on the Martian surface, the adjustment is a free-net adjustment in which the normal matrix is rank deficient. We used the Singular Value Decomposition (SVD) technique to solve the normal equation, applying the minimum-norm criterion together with the least-squares principle (Li et al., 2002).

3.2 Constrained Bundle Adjustment

In order to reduce the correlation between the parameters of the IMP images, we incorporate constraint equations in the bundle adjustment. The constraints come from the fact that the cameras spin around the gimbal center and that there is a rigid camera base. By carefully examining the camera model and the calibration report of the IMP stereo cameras (Reid et al., 1999; Burkland et al., 2001; R. Kirk, 2001, personal communication), we established a set of constraint equations that can reduce or even eliminate the correlation.

Each camera spins around the gimbal center. The first constraint can therefore be represented as

$$\begin{bmatrix} X_O \\ Y_O \\ Z_O \end{bmatrix} - R^{T}\begin{bmatrix} C_1 \\ C_2 \\ C_3 \end{bmatrix} - \begin{bmatrix} X_G \\ Y_G \\ Z_G \end{bmatrix} = 0 \qquad (3)$$

where (XO, YO, ZO) is the camera center; R is the rotation matrix of the camera in the mapping coordinate system; (C1, C2, C3) is the vector from the camera to the gimbal center, which is fixed for each camera but different for the left and right cameras; and (XG, YG, ZG) is the gimbal position, which is fixed for the images taken on the same Sol but may differ between Sols. These parameters are estimated from the given calibration report. In the bundle adjustment, (XO, YO, ZO) and the angles in R are unknowns, which makes the above constraint equation nonlinear.

Between the left and right cameras, which are mounted on the same bar, the camera base constraint is represented as

$$\begin{bmatrix} X_O \\ Y_O \\ Z_O \end{bmatrix}_L - \begin{bmatrix} X_O \\ Y_O \\ Z_O \end{bmatrix}_R - R_L^{T}\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix} = 0 \qquad (4)$$

where (E1, E2, E3) are the baseline components, which are fixed for all stereo images. The fixed distance between the two cameras is thus included in the above equation. Note that the fixed relative rotation between the two cameras is used in the derivation of the equation, so only one rotation matrix (RL) appears in it. Equation (4) is likewise nonlinear.

In the observation Equation (2), there are 12 exterior orientation parameters to be solved for one stereo pair. We now have three constraint equations for each image from Equation (3) and three for each stereo pair from Equation (4). Overall, there are nine constraints for one stereo pair, which eliminate all the correlation between the 12 exterior orientation parameters. All the constraint equations are nonlinear; we linearized them and implemented them in our bundle-adjustment software for the IMP images.

For the FIDO data, we cannot derive strict constraints of the form of Equation (3) because we have no access to the detailed documentation of the cameras and the original calibration reports. By analyzing the exterior orientation parameters converted from the CAHV and CAHVOR models, we derived the camera base constraint as represented in Equation (4). We also fitted two virtual spin centers for the high- and low-looking angles, the distances from the camera centers to the corresponding virtual centers, and the distance between the two camera centers. These are then used in the adjustment.

In the software implementation, the constrained bundle adjustment is transformed into the unconstrained adjustment model of Equation (2) by treating the constraints as observations with very small variances (very large weights). This implementation gives adjustment results equivalent to those of the strict adjustment model with constraints; any difference is negligible (Koch, 1999). The advantage of this implementation is that very little change to the software is needed when the constraints are added.
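The constraints of Equations (3) and (4) reduce to simple three-component residuals. The sketch below is our illustration under assumed calibration inputs, not the mission code; in the pseudo-observation implementation described above, these residuals would be appended to the image observations with very large weights.

```python
import numpy as np

def gimbal_constraint(camera_center, R, c, gimbal_center):
    """Equation (3): camera center - R^T c - gimbal center = 0 (3 residuals).

    c = (C1, C2, C3) is the calibrated camera-to-gimbal vector; R is the
    camera's rotation matrix in the mapping frame."""
    return (np.asarray(camera_center, float)
            - np.asarray(R, float).T @ np.asarray(c, float)
            - np.asarray(gimbal_center, float))

def base_constraint(center_left, center_right, R_left, e):
    """Equation (4): left center - right center - R_L^T e = 0 (3 residuals).

    e = (E1, E2, E3) are the calibrated baseline components."""
    return (np.asarray(center_left, float)
            - np.asarray(center_right, float)
            - np.asarray(R_left, float).T @ np.asarray(e, float))

# Pseudo-observation weighting (illustrative only): stack the nine constraint
# residuals of a stereo pair under its image observations with a very large weight,
# e.g.  P = np.diag(np.r_[np.ones(n_image_obs), 1e8 * np.ones(9)])
```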

3.3 A Fast Bundle Adjustment Algorithm

In the integrated bundle adjustment, the computational time increases significantly with the number of images. For example, on a Pentium IV machine (1.8 GHz), about 25 hours were needed to adjust an IMP panoramic network consisting of 129 images and 655 tie points. This is because all of the unknowns were solved simultaneously, which involved the computation (SVD decomposition) of a very large matrix (around 2,800 × 2,800). In order to speed up the bundle adjustment, we have developed a fast bundle-adjustment algorithm in which the two kinds of unknowns (EO parameters and ground positions) are solved iteratively to avoid the large matrix computation. In the EO adjustment step, the ground positions of the tie points are treated as knowns, and the EO parameters are solved stereo pair by stereo pair, a process in which only a very small matrix (12 × 12) is used. In the ground coordinate adjustment step, all the EO parameters are treated as knowns, and the tie point positions are solved point by point. This strategy greatly reduces the computational burden. Test results show that only about 12 minutes are needed to complete the adjustment of the same image network. For a smaller network of, for example, 20~30 images, the previous version of the software would take more than 5 minutes, while the fast version needs only 5~6 seconds. This indicates that the new method is more than 100 times faster than the previous one (depending on the number of images, the number of tie points, and the network structure). The attainable accuracy of this method is about the same, but the result is more stable.
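As an illustration of this alternating scheme (not the actual mission software), the self-contained toy sketch below refines EO parameters with tie-point positions held fixed and then tie-point positions with EO parameters held fixed, using small Gauss-Newton steps with numerical Jacobians. For brevity it works image by image (6 parameters at a time) rather than stereo pair by stereo pair with the constraints of Section 3.2, and all interfaces, iteration counts and step settings are assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    # Same omega-phi-kappa convention as in the collinearity sketch above.
    co, so, cp, sp = np.cos(omega), np.sin(omega), np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(eo, X, f):
    u, v, w = rotation_matrix(*eo[3:]) @ (np.asarray(X, float) - np.asarray(eo[:3], float))
    return np.array([-f * u / w, -f * v / w])

def gauss_newton(p, residual, iters=5, eps=1e-7):
    """A few Gauss-Newton steps with a forward-difference Jacobian (small systems only)."""
    p = np.asarray(p, float).copy()
    for _ in range(iters):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        p -= np.linalg.lstsq(J, r, rcond=None)[0]
    return p

def fast_bundle_adjust(eos, points, obs, f, outer_iters=3):
    """obs: list of (image index i, point index k, measured x, measured y)."""
    for _ in range(outer_iters):
        for i in range(len(eos)):                      # EO step: tie points held fixed
            rows = [o for o in obs if o[0] == i]
            if rows:
                eos[i] = gauss_newton(eos[i], lambda p, rows=rows: np.concatenate(
                    [project(p, points[k], f) - (x, y) for _, k, x, y in rows]))
        for k in range(len(points)):                   # ground step: EO held fixed
            rows = [o for o in obs if o[1] == k]
            if rows:
                points[k] = gauss_newton(points[k], lambda X, rows=rows: np.concatenate(
                    [project(eos[i], X, f) - (x, y) for i, _, x, y in rows]))
    return eos, points
```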

3.4 Automatic Tie Point Selection

In order to build a geometrically strong image network and ensure the success of the bundle adjustment, a sufficient number of well-distributed tie points must be extracted and selected to link all the surface images to be processed. We have developed a systematic method for automatic tie point selection (Xu et al., 2002; Di et al., 2002; Li et al., 2003). The procedure for selecting tie points within one stereo pair (intra-stereo tie points) includes: interest point extraction using the Förstner operator, interest point matching using normalized cross-correlation coefficients, verification based on the consistency of parallaxes, and final selection by gridding. Figure 3 shows an example of automatically selected intra-stereo tie points.
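The matching and verification steps of this procedure can be illustrated by the following simplified sketch (our illustration, not the production code): interest points are taken as given, near-horizontal epipolar geometry is assumed so the search runs along the image row, the window size, search range and thresholds are assumed values, and the final gridding step is omitted. Each left-image interest point is matched by maximizing the normalized cross-correlation coefficient, and matches whose parallax deviates strongly from the median are rejected.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else -1.0

def match_point(left, right, r, c, half=7, search=40, min_ncc=0.7):
    """Find the best right-image column for the left-image interest point (r, c)."""
    tpl = left[r - half:r + half + 1, c - half:c + half + 1]
    if tpl.shape != (2 * half + 1, 2 * half + 1):
        return None                                     # too close to the image border
    best_score, best_c = -1.0, None
    for cc in range(max(half, c - search), min(right.shape[1] - half - 1, c + search) + 1):
        score = ncc(tpl, right[r - half:r + half + 1, cc - half:cc + half + 1])
        if score > best_score:
            best_score, best_c = score, cc
    return best_c if best_score >= min_ncc else None

def select_intra_stereo_tie_points(left, right, interest_pts, max_parallax_dev=3.0):
    """Return (row, col_left, col_right) matches that pass a parallax-consistency check."""
    cand = []
    for r, c in interest_pts:
        cc = match_point(left, right, r, c)
        if cc is not None:
            cand.append((r, c, cc, c - cc))             # column parallax
    if not cand:
        return []
    median_p = np.median([p for _, _, _, p in cand])
    return [(r, c, cc) for r, c, cc, p in cand if abs(p - median_p) <= max_parallax_dev]
```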

Figure 3. Intra-stereo tie points in IMP images

Tie points between adjacent stereo images (inter-stereo tie points) are extracted and selected with the help of a coarse DEM generated from the individual stereo pairs using the approximate orientation parameters. Figure 4 shows an example of automatically selected inter-stereo tie points.

Figure 4. Inter-stereo tie points of IMP images

Figure 5 shows an example of tie points automatically selected from FIDO Navcam images. Blue crosses are intra-stereo tie points and red crosses are inter-stereo tie points.

It should be noted that in some cases, especially where there are significant differences in illumination and/or sun angle, some tie points still need to be selected manually. Our experience in processing IMP and FIDO panoramic images indicates that over 95 percent of the tie points of a panoramic image network can be selected automatically.

3.5 Bundle Adjustment Results

Since there are no absolute ground checkpoints, we used the following three methods to evaluate the precision of the bundle adjustment. 1) 2D residuals of back-projected tie points: the bundle-adjusted 3D coordinates of the tie points in the object space are back-projected to the stereo images, and the back-projected points are compared with their corresponding measured image points. 2) 3D differences of inter-stereo tie points: after bundle adjustment, the 3D position of each inter-stereo tie point is triangulated from both the left and the right stereo pair using the adjusted orientation parameters, and the 3D coordinates obtained from the two pairs are compared. 3) 2D differences of inter-stereo tie points: similar to method 2, for a tie point in the left pair we triangulate the 3D position and then back-project it to the right pair; the back-projected point is then compared with the corresponding tie point in the right-pair image. In all three methods, the averages of the absolute coordinate differences are calculated to depict the precision.

The entire IMP panorama consists of 129 images that form either an upper panorama and a lower panorama with horizontal links, or an entire panorama with both horizontal and vertical links. In the image network there are 655 tie points, 633 of which were selected automatically and 22 manually. A comparison of precision before and after adjustment is listed below.

Method 1, using 694 checkpoints, average image differences:
(3.00, 2.70, 4.61) pixels in (x, y, distance) before adjustment
(0.58, 0.41, 0.80) pixels in (x, y, distance) after adjustment

Method 2, using 655 checkpoints, average ground differences:
(0.040, 0.046, 0.028) meters in (x, y, z) before adjustment
(0.030, 0.036, 0.018) meters in (x, y, z) after adjustment

Method 3, using 655 checkpoints, average image differences:
(6.58, 5.46, 9.57) pixels in (x, y, distance) before adjustment
(1.10, 0.75, 1.49) pixels in (x, y, distance) after adjustment

From this comparison, we can see that the bundle adjustment improves the precision in both image space and object space, with the improvement in image space being the more significant. Overall, the accuracy after adjustment is around 1 pixel in image space and 3 cm in object space.
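For reference, the averages reported above for method 1 can be computed with a few lines of NumPy once the adjusted points have been back-projected (for example with the collinearity sketch in Section 3.1); the function below is a trivial illustration, not the evaluation code used in these tests.

```python
import numpy as np

def mean_abs_residuals(measured_xy, backprojected_xy):
    """Average absolute differences in x, y and radial distance (pixels), as in method 1."""
    d = np.asarray(measured_xy, float) - np.asarray(backprojected_xy, float)
    return np.abs(d[:, 0]).mean(), np.abs(d[:, 1]).mean(), np.hypot(d[:, 0], d[:, 1]).mean()
```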

Figure 5. Automatically selected tie points from Navcam images

In the FIDO data processing, a panoramic image network is built by linking 36 Navcam images (18 pairs) at Site 5 with 249 automatically selected intra- and inter-stereo tie points. Before adjustment, the precision is 3.36 pixels in the image space (distance between measured and back-projected image points from Method 1) and 0.26 meter in the object space. After bundle adjustment, the precision is 0.74 pixel (2D accuracy) in the image space and 0.10 meter in the object space (3D accuracy). Thus precision has been improved in both image and object space.

Bundle-adjustment experiments at other FIDO sites have also been performed. The general result is that for single-site adjustment, 2D and 3D accuracy are both improved, especially the 2D accuracy, which improves to the sub-pixel to 1-pixel level. For multi-site adjustment, the 2D accuracy is 1 to 2 pixels and the 3D accuracy is about the same as, or slightly worse than, before adjustment.

4. DEM AND ORTHOIMAGE GENERATION

4.1 DEM Generation

After bundle adjustment, image matching is performed to find dense conjugate points for DEM generation. The matching is accomplished in a coarse-to-fine manner. First, the matched and verified interest points from the tie point selection process are sorted and used to fit a parallax curve and find parallax ranges along the rows of the points. Then, this parallax curve and these parallax ranges are applied to restrict the search range in a coarse-grid (e.g., 5 pixel × 5 pixel) image matching. Next, a fine-grid (1 pixel × 1 pixel) matching is performed in which the search is restricted to a very small range based on the results of the coarse-grid matching. In both coarse- and fine-grid matching, epipolar geometry is employed to restrict the search in the row direction. The 3D ground coordinates of the matched points are then calculated by photogrammetric triangulation using the adjusted EO parameters. Finally, the DEM is generated from the dense 3D mass points using the Kriging method. The resultant DEM of the MPF landing site is shown as a color-coded image in the left of Figure 6.

The Kriging interpolation works well for close-range areas such as the MPF landing site, which extends about 10 meters around the center. But at the FIDO site (shown in Figure 2), some distant mountains are about 100 meters away from the rover. In these far-range areas, the distribution of ground points is extremely anisotropic: dense in the azimuth direction, very sparse in the range direction. The Kriging interpolation works poorly in these far-range areas. Therefore, we use a surface interpolation within equi-parallax lines for areas more than 20 meters away from the rover. The resultant DEM of the FIDO site is shown as a grayscale image in the left of Figure 7.

4.2 Orthoimage Generation

The orthoimage is generated by back-projecting the grid points of the refined DEM onto the left image. A corresponding grayscale value is found and assigned to each grid point. In the area of overlap between adjacent images, a grid point will be projected to two or more images. The grayscale value is then picked from the image in which the projected point is closest to the image center. Figure 6 (right) shows the orthoimage of the MPF landing site. Figure 7 (right) shows the orthoimage of the FIDO site. Through visual checking, we find no seams between image patches. This demonstrates the effectiveness of the bundle adjustment.
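The orthoimage resampling rule just described can be sketched as follows. This is an illustration only: the camera projection is passed in as a callable, the grid layout and interfaces are assumptions, and a nearest-neighbor grayscale lookup is used for simplicity.

```python
import numpy as np

def make_orthoimage(dem_x, dem_y, dem_z, cameras, images):
    """dem_x, dem_y, dem_z: 2-D arrays of grid coordinates and heights.
    cameras: list of callables mapping a ground point to image (row, col) or None.
    images:  list of grayscale arrays, one per camera."""
    ortho = np.zeros(dem_z.shape, dtype=float)
    for i in range(dem_z.shape[0]):
        for j in range(dem_z.shape[1]):
            ground = (dem_x[i, j], dem_y[i, j], dem_z[i, j])
            best_dist, best_val = None, 0.0
            for cam, img in zip(cameras, images):
                rc = cam(ground)
                if rc is None:
                    continue
                r, c = rc
                if 0 <= r < img.shape[0] and 0 <= c < img.shape[1]:
                    # prefer the image whose projected point is nearest its center
                    d = np.hypot(r - img.shape[0] / 2.0, c - img.shape[1] / 2.0)
                    if best_dist is None or d < best_dist:
                        best_dist, best_val = d, img[int(r), int(c)]
            ortho[i, j] = best_val
    return ortho
```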

Figure 6. DEM and orthoimage of the MPF landing site

Figure 7. DEM and orthoimage of the FIDO site

We also compared our IMP orthoimage with the JPL mosaic map and the DLR orthoimage by measuring the ground coordinates of five rocks. The comparison shows that the points from our orthoimage are very close to those from DLR.

5. SUMMARY

We have presented a special constrained bundle-adjustment method for processing panoramic stereo images to support high-precision Mars landing-site mapping. The constraints eliminate or reduce the correlation between the parameters of the panoramic stereo images, thus improving the accuracy and reliability of the bundle adjustment. A fast bundle-adjustment algorithm is proposed in which the two kinds of unknowns are solved alternately to avoid large-scale matrix computations. The method and software have been tested using panoramic images obtained from the 1997 MPF mission and the 2002 FIDO field test. Test results show that sub-pixel to 1-pixel accuracy is achieved and that the fast algorithm is more than 100 times faster than the simultaneous solution, yet still provides the same or even better accuracy. In addition, the methods for automatic tie point selection and for DEM and orthoimage generation have proven effective and efficient. We will further enhance our algorithms and software and make the software ready for use during the 2003 MER mission. The results from processing actual Mars images will be presented at the symposium.

ACKNOWLEDGEMENTS

We would like to acknowledge the help of Monika Hoyer, Marita Waehlisch and Dr. Jurgen Oberst of German DLR, and Dr. Ray Arvidson and Bethany Ehlmann of Washington University in St. Louis for providing data and valuable information on MPF IMP data processing. We would particularly like to thank Dr. Randy Kirk of USGS for his help with the conversion of the IMP camera models. This research is being supported by JPL/NASA.

REFERENCES

Burkland, M.K., et al., 2001. Computer Modeling of the Imager for Mars Pathfinder. http://imp.lpl.arizona.edu/imp_Team/report/SEC2/S2_07_03.htm (accessed 12 June 2001).

Di, K., R. Li, L.H. Matthies and C.F. Olson, 2002. A Study on Optimal Design of Image Traverse Networks for Mars Rover Localization. 2002 ASPRS-ACSM Annual Conference and FIG XXII Congress, April 22-26, Washington, DC (CD-ROM).

Di, K., R. Li, F. Xu, L.H. Matthies and C. Olson, 2002. High-precision landing-site mapping and rover localization by integrated bundle adjustment of MPF surface images. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Ottawa, Canada, Vol. 34, Part 4, pp. 733-737.

Förstner, W. and E. Guelch, 1987. A Fast Operator for Detection and Precise Location of Distinct Points, Corners and Centers of Circular Features. ISPRS Intercommission Workshop, Interlaken, Switzerland, pp. 281-305.

Gaddis, L.R., et al., 1999. Digital mapping of the Mars Pathfinder landing site: Design, acquisition, and derivation of cartographic products for science applications. Journal of Geophysical Research, 104(E4), pp. 8853-8868.

Golombek, M.P., et al., 1999. Overview of the Mars Pathfinder Mission: Launch through landing, surface operations, data sets, and science results. Journal of Geophysical Research, 104(E4), pp. 8523-8553.

Kirk, R.L., et al., 1999. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions. Journal of Geophysical Research, 104(E4), pp. 8869-8887.

Koch, K.-R., 1999. Parameter Estimation and Hypothesis Testing in Linear Models. Springer, Berlin Heidelberg, pp. 176-177.

Kuschel, M., J. Oberst, E. Hauber and R. Jaumann, 1999. An Image Map of the Mars Pathfinder Landing Site. ISPRS WG IV/5 Extraterrestrial Mapping Workshop "Mapping of Mars 1999", July 23-24, 1999, Caltech, Pasadena, CA.

Li, R., F. Ma, F. Xu, L. Matthies, C. Olson and Y. Xiong, 2000. Large Scale Mars Mapping and Rover Localization Using Descent and Rover Imagery. Proceedings of the ISPRS 19th Congress, Amsterdam, July 16-23, 2000 (CD-ROM).

Li, R., F. Ma, F. Xu, L.H. Matthies, C.F. Olson and R.E. Arvidson, 2002. Localization of Mars Rovers Using Descent and Surface-based Image Data. Journal of Geophysical Research - Planets, 107(E11), pp. FIDO 4.1-4.8.

Li, R., K. Di and F. Xu, 2003. Automatic Mars Landing Site Mapping Using Surface-Based Images. ISPRS WG IV/9 Extraterrestrial Mapping Workshop "Advances in Planetary Mapping 2003", March 22, Houston, Texas.

Ma, F., K. Di, R. Li, L. Matthies and C. Olson, 2001. Incremental Mars Rover Localization Using Descent and Rover Imagery. ASPRS Annual Conference 2001, April 25-27, St. Louis, MO (CD-ROM).

Olson, C.F., L.H. Matthies, Y. Xiong, R. Li, F. Ma and F. Xu, 2001. Multi-resolution Mapping Using Surface, Descent and Orbital Imagery. The 2001 International Symposium on Artificial Intelligence, Robotics and Automation in Space, Montreal, June 2001.

Reid, R.J., et al., 1999. Imager for Mars Pathfinder (IMP) Image Calibration. Journal of Geophysical Research, 104(E4), pp. 8907-8925.

Wolf, P.R. and B.A. Dewitt, 2000. Elements of Photogrammetry with Applications in GIS, Third Edition. McGraw-Hill, 608 p.

Xu, F., F. Ma, R. Li, L. Matthies and C. Olson, 2001. Automatic Generation of a Hierarchical DEM for Mars Rover Navigation. Proceedings of the 3rd Mobile Mapping Conference, Cairo, Egypt, January 3-5, 2001.

Xu, F., K. Di, R. Li, L. Matthies and C. Olson, 2002. Automatic Feature Registration and DEM Generation for Martian Surface Mapping. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Xi'an, China, Vol. 34, Part 4, pp. 549-554.