Doctoral Thesis

Ph.D. Programme: Geodesy and Cartography Branch of study: Geodesy and Cartography Supervisor: Prof. Dr. Ing. Karel Pavelka


Abstract

In photogrammetric applications, measurements are conducted on images given by optical imaging systems (cameras). In order to ensure the demanded accuracy, these cameras have to be well calibrated geometrically. A new method for measuring glass lens distortion based on using a distortion-free pinhole lens is proposed, where the measure of the distortion progress is derived by comparing two images of the calibration field given by the pinhole lens and the glass lens. Such a simple procedure of lens calibration is advantageous, as no additional parameters that could influence the reliability of the estimation have to be considered. These additional parameters are usually the elements of exterior orientation, which in certain camera component configurations correlate highly with the lens distortion characteristics. Furthermore, such an approach, based on measuring relative values given by the comparison of two images, does not require the absolute positions of the calibration field targets, which removes additional sources of error introduced by target surveying. In particular, the lens distortion variation induced by a change in magnification or aperture can be easily obtained. The proposed method for measuring lens distortion is applied in an experimental investigation, confirming the validity of the theoretical development. The results of the investigation open a promising new possibility for accurate lens calibration.

Keywords: Photogrammetry, Computer vision, Camera calibration, Lens distortion, Pinhole lens.


Abstrakt (Czech abstract, translated)

In photogrammetry, measurements are performed on images acquired by an optical imaging system (a camera). To achieve the required measurement accuracy, a geometric calibration of this system is necessary. This thesis presents a new method for measuring lens distortion based on the use of a pinhole lens, which is known to image without distortion. The distortion is measured by comparing two images of a calibration field, where the first image was taken with the pinhole lens and the second with the glass lens under test. The proposed method is advantageous mainly in that no other parameters enter the distortion computation, so no correlations arise between them. These are usually the exterior orientation parameters of the camera, which correlate with the principal point. A further advantage of the proposed method is that the targets of the calibration field do not have to be surveyed. The first part of this thesis describes the new method theoretically; the second part presents experiments that confirm the validity of the proposed method.

Klíčová slova (Keywords): Photogrammetry, Computer vision, Camera calibration, Lens distortion, Pinhole lens.


Acknowledgements

I would like to express my deepest gratitude to my advisor Prof. Karel Pavelka for his unwavering support during my time at CTU in Prague. As my advisor, he kept faith in my ideas when I had lost my way. His endless patience and goodwill gave me the freedom to explore. I would like to extend my gratitude to my colleague and friend Dr. Bronislav Koska from the neighboring Department of Special Geodesy for the countless discussions we had, which helped me clarify many of the problems I was dealing with and also resulted in many new ideas. His sincerity and open-minded attitude made my time at CTU in Prague very pleasant. I am also grateful to Prof. Antonín Mikš from the Department of Physics, who gave me great lessons in optics theory and moral support. I would also like to thank Jan Buriánek at AV Media, whose enthusiasm and great knowledge in the field of computer vision inspired me many times. Thanks also to the Ministry of Education, Youth and Sports for their financial support under Research scheme MSM6840770040. Finally, I would like to thank my parents for their continuous encouragement and support.


Acronyms

IO   – Interior orientation
EO   – Exterior orientation
PPS  – Principal point of best symmetry
CFL  – Calibrated focal length
AP   – Additional parameters
OS   – Optical system
IOS  – Ideal optical system
POS  – Pinhole optical system
GOS  – Glass optical system
PSF  – Point spread function
FOV  – Field of view
USGS – United States Geological Survey
GPS  – Global Positioning System
DN   – Digital number
PRNU – Pixel response non-uniformity
FPN  – Fixed pattern noise
TIF  – Tagged Image File
2D   – Two-dimensional
3D   – Three-dimensional
SD   – Standard deviation
SW   – Software
PVC  – Polyvinyl chloride


Terminology and Notation

In most of the literature concerning camera calibration, the term “pinhole camera” is used as a synonym for an ideal optical system. In this thesis, we propose the use of a real mechanical pinhole element, which could cause confusion in terminology. For this reason, we specify here the most common terms used in relation to the pinhole element in this thesis.

Pinhole lens: Lens mount with a pinhole element, held in place with a retaining ring.

Glass lens (lens): Lens assembled from several spherical or aspherical glass elements.

Optical system: System consisting of some sort of optical lens mounted on a camera body with the imaging sensor inside.

Pinhole optical system: Optical system consisting of a pinhole lens mounted on a camera body with the imaging sensor inside.

Glass optical system: Optical system consisting of a glass lens mounted on a camera body with the imaging sensor inside.

The term “glass lens” will be used here only when we want to avoid confusion with the pinhole lens; in most cases, we will use the term “lens”. The term “camera” will sometimes be used as a synonym for “optical system”. The term “sensor” does not represent the whole optical system, but only the physical element in the camera body which collects the light intensity. By “camera calibration”, we mean the calibration of the interior orientation elements only (in the field of computer vision, “camera calibration” usually means calibration of the interior and also the exterior orientation elements). By “lens calibration”, we mean measuring lens distortion, following [Eisenhart 1963]. Besides the terms regarding the theory of errors, we adopt the terminology given by [Newby 2012], which has been accepted by the International Society for Photogrammetry and Remote Sensing. Because the terminology regarding the theory of errors is not unified, we decided to adopt it from the well-known publication “Observations and Least Squares” [Mikhail et al. 1976]. This implies the use of the term “standard deviation” instead of “standard error”. Although the term “standard error” is quite common in geodesy, we prefer the more general term “standard deviation” in order to avoid confusion with other branches (see for example [Nagele 2003]).

Table of Contents

ABSTRACT
ABSTRAKT
ACKNOWLEDGEMENTS
ACRONYMS
TERMINOLOGY AND NOTATION
1. INTRODUCTION
   1.1 Motivation
   1.2 Objectives
   1.3 Interior Orientation
   1.4 Lens Distortion
   1.5 Pinhole Lens
   1.6 Mechanical Stability and Repeatability of Optical System Components
   1.7 Related Work
      1.7.1 Goniometer Method
      1.7.2 Multi-collimator Method
      1.7.3 Stellar Method
      1.7.4 Plumb-line Method
      1.7.5 Cross-ratio Invariant Method
      1.7.6 Vanishing Points Method
      1.7.7 Methods Based on Targets Resection
      1.7.8 Conclusion
2. DESCRIPTION OF NEW PROPOSED METHOD OF MEASURING LENS DISTORTION
   2.1 Introduction and Main Idea
   2.2 Calibration Procedure and Arrangement
   2.3 Conditions for Choice of Principal Distance of Pinhole Lens
   2.4 Conclusion
3. CALIBRATION OF THE GLASS OPTICAL SYSTEM
   3.1 Coordinate Systems Definition and Notation
   3.2 Introduction of Ideal Optical System as Intermediate Step in Transformation Process
   3.3 Transformation between Pinhole and Ideal Optical Systems
      3.3.1 Error Propagation
      3.3.2 Position of Ideal Optical System
   3.4 Transformation between Glass and Ideal Optical System
4. CALIBRATION OF PINHOLE OPTICAL SYSTEM
   4.1 Introduction
   4.2 Method of Recovery of Interior Orientation Elements Using Perspective Views of Rectangle
      4.2.1 Estimation of Vanishing Points and Covariance Matrix
      4.2.2 Method for Correction of Vanishing Points Position
      4.2.3 Computer Simulation
   4.3 Conclusion
5. VALIDATION OF RESULTS GIVEN BY PROPOSED METHOD
   5.1 Cross-ratio Validation Test
      5.1.1 Cross-ratio Bar and Target Description
      5.1.2 Cross-ratio Validation Test Procedure
      5.1.3 Error Given by Middle Targets
      5.1.4 Cross-ratio Unity Variance Estimate
   5.2 Validation of Pinhole Optical System
   5.3 Computer Simulation of Cross-ratio Validation Test
   5.4 Conclusion
6. TARGET DETECTOR
   6.1 Image Pre-processing Steps
   6.2 Ellipse Fitting Method
   6.3 Centroid Method
   6.4 Post-processing Steps after Target Detection
7. EXPERIMENTS
   7.1 Target Detector Tests
      7.1.1 Pinhole Optical System
      7.1.2 Glass Optical System
      7.1.3 Conclusion
   7.2 Pinhole Optical System Validation
      7.2.1 Conclusion
   7.3 Lens Mount Repeatability Test and Standard Deviation of Image Coordinate Measurement Estimation
      7.3.1 Conclusion
   7.4 Chromatic Aberration
      7.4.1 Conclusion
   7.5 Glass Lens Calibration Experiment
      7.5.1 Calibration Field
      7.5.2 Description of the Experiment Procedures
      7.5.3 Description of Results Given in Tables and Figures
      7.5.4 Conclusion
   7.6 Validation of the Glass Lens Calibration Results
      7.6.1 Conclusion
8. CONCLUSION AND FUTURE WORK
BIBLIOGRAPHY
LIST OF FIGURES
LIST OF TABLES
APPENDIX A
APPENDIX B
APPENDIX C
APPENDIX D
APPENDIX E


1. Introduction

In photogrammetric applications, measurements are conducted on images given by optical imaging systems (cameras). In order to ensure the demanded accuracy, these cameras have to be well calibrated geometrically, and for certain applications even radiometrically. This thesis deals with geometric camera calibration.

1.1 Motivation

Camera calibration has been one of the most extensively researched topics in the fields of photogrammetry and computer vision in recent decades. There exists a plethora of prior work on each of the sub-topics concerning camera calibration methods. It seems that there is no approach to the problem of camera calibration that has not already been discovered and investigated. While doing other research with the distortion-free pinhole lens, we conceived the idea of using this component for measuring (glass) lens distortion. After reviewing both the photogrammetric and the optical literature, we did not find any mention of using a pinhole lens for that kind of measurement. Thus, we decided to investigate the possibilities and limitations of this approach and to propose a new method for measuring lens distortion.

1.2 Objectives

The aim of this thesis can be summarized in three points:

- To propose a new method of measuring lens distortion by using a pinhole lens.
- To conduct an experiment leading to validation of the proposed method.
- To investigate the advantages and limitations of the proposed method from the theoretical and practical points of view.


1.3 Interior Orientation

The interior orientation (IO) elements can be defined as “all characteristics that affect the geometry of the photograph” [Slama et al. 1980, p. 244]. The most important elements of IO include the following: sensor dimensions, principal distance, principal point position, and lens distortion characteristics. The list of all characteristics varies with different types of cameras. For example, airborne cameras are usually assembled from many more components than standard metric cameras for close-range photogrammetry and therefore need many more parameters for the characterization of their interior geometry. Thus, depending on the camera type and the demanded accuracy, additional characteristics may be needed: fiducials, axis scale, reseau coordinates, point spread function (PSF), sensor unflatness characteristics, sensor noise characteristics (e.g. FPN, PRNU), forward motion compensation characteristics, etc. This thesis is focused primarily on measuring lens distortion. Under certain circumstances, which will be clarified later, the proposed method can also be used for measuring the principal distance and principal point position. All other elements of the IO will not be considered in this thesis, in order to clarify the basic relations; their influence should be studied in future work.

1.4 Lens Distortion

Lens distortion is a natural property of an optical system (OS) consisting of glass elements: it displaces the centroid of the PSF given by the projection of an object point from the position given by an ideal OS. This definition can be described functionally as

(x'_G, y'_G) = D^β(x'_I, y'_I; p),    (1.1)

where (x'_I, y'_I) are the image coordinates of the PSF centroid corresponding to the ideal OS, (x'_G, y'_G) are the image coordinates of the PSF centroid corresponding to the glass OS, and the function D^β with parameters p is the mapping of the image space given by the ideal OS to the image space given by the glass OS, where the ideal OS is the best approximation of the glass OS. The word “best” implies that various approaches to the approximation exist; our approach will be presented in Section 3.4. The superscript β in D^β emphasizes the known fact that the distortion of an OS depends on its magnification [Magill 1954]. Lens calibration is therefore the process of finding the function D^β and its parameters p. Even if the function and parameters are theoretically known from the lens design project (e.g. by the ray tracing method), which is rarely the case, in practice the manufacturing errors in producing the lens (e.g. centering errors in the assembly of the lens elements) prevent their use for highly accurate applications.

The photogrammetric literature of past decades contains a large number of mathematical model proposals. Many of them were derived from the physical nature of the OS using the principal ray approximation, but it has to be stated that all of them are only a more or less correct approximation of reality. In fact, an OS assembled from several glass elements cannot be fully described in analytic form by a universally valid mathematical formula. Pupils and other OS characteristics are usually treated as constants, although they are functions of magnification, aperture, incident angle of rays, wavelength, etc. Moreover, principal ray approximations neglect the influence of the coma aberration, as shown in [Miks et al. 2012]. The demand for an a priori known lens distortion description in analytical form is usually driven by the use of an analytical approach to camera calibration (e.g. in-situ calibration, self-calibration), first presented by the well-known photogrammetrist Duane C. Brown in his work [1956]. His publication is an extension of previous work by another well-known photogrammetrist, Hellmut Schmid [1953]. The distortion of the OS can be described theoretically in analytical form more accurately, considering the whole bundle of rays, as given by Miks and Novak [2012]:

(1.2)

(1.3)

(1.4)

where the quantities appearing in relations (1.2)–(1.4) are the image coordinates of the centroid, the angular magnification of the OS, the angular magnification of the OS in pupils, the field of view (FOV), the maximum numerical aperture of the OS in the object space, the f-number of the OS for the object at infinity, the third-order aberration coefficient of coma, and the third-order aberration coefficient of distortion. These formulas, given by Miks and Novak, are valid within the third-order aberration theory and for a rotationally symmetric OS. As can be seen from relation (1.3), the distortion is also influenced by coma.

Our approach to measuring lens distortion directly provides the deviations of the corresponding target images given by glass OS and ideal OS. Having the dataset of deviations a priori, it is possible to choose an arbitrary mathematical model that best approximates the distortion progress, without worrying about correlations between the IO and EO parameters and the physical nature of the lens.
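To illustrate the last point, the sketch below fits one arbitrarily chosen radial model, Δr = k1·r³ + k2·r⁵, to a set of pinhole/glass centroid pairs by linear least squares. The function name, coordinate conventions, and synthetic data are our own assumptions for this example, not the model or implementation used in the thesis.

```python
import numpy as np

def fit_radial_distortion(xy_ideal, xy_glass, order=2):
    """Fit Delta r = k1*r^3 + k2*r^5 + ... to measured deviations.

    xy_ideal : (N, 2) target centroids from the pinhole (ideal) image
    xy_glass : (N, 2) corresponding centroids from the glass-lens image
    Coordinates are assumed to be reduced to the principal point.
    """
    r = np.hypot(xy_ideal[:, 0], xy_ideal[:, 1])           # radial distance
    dr = np.hypot(*(xy_glass - xy_ideal).T)                # deviation magnitude
    # sign of the radial deviation (barrel vs. pincushion)
    sign = np.sign(np.einsum('ij,ij->i', xy_glass - xy_ideal, xy_ideal))
    A = np.column_stack([r ** (2 * i + 3) for i in range(order)])
    k, *_ = np.linalg.lstsq(A, sign * dr, rcond=None)
    return k

# synthetic check: generate deviations with known k1, k2 and recover them
rng = np.random.default_rng(0)
xy_i = rng.uniform(-10, 10, size=(50, 2))                  # ideal coords [mm]
r = np.hypot(xy_i[:, 0], xy_i[:, 1])
k_true = np.array([1e-4, -1e-7])
dr = k_true[0] * r**3 + k_true[1] * r**5                   # radial displacement
xy_g = xy_i * (1 + dr / np.where(r == 0, 1, r))[:, None]   # displaced radially
k_est = fit_radial_distortion(xy_i, xy_g)
```

Because the deviations are measured directly, the model (polynomial order, or an entirely different function) can be swapped without re-running any bundle adjustment.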


1.5 Pinhole Lens

The main drawback of pinhole imaging lies in its low spatial frequency response and high f-number in comparison to glass lenses, due to the diffraction phenomenon, which has a much stronger influence in the case of a pinhole lens. However, the resolution quality of the pinhole OS is not important in our proposal, because it is used only for imaging a special target, which is a relatively small object point (1 mm). Basically, we only need to recognize the boundary between signal and darkness. The pinhole lens used for the experiments in Chapter 7 was assembled from components provided by Edmund Optics Inc. and from a steel Canon lens mount taken from an old lens (see Figure 1-1).

Figure 1-1. Pinhole lens with Canon bayonet mount.

The diameter of the pinhole (300 µm) was chosen according to the well-known Rayleigh formula, for a principal distance of 55 mm and a wavelength of 550 nm. The f-number is then 183. This formula yields the highest possible resolution of such an OS. However, since resolution is not important for us, a larger diameter would probably be more appropriate, because of the lower f-number.
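The numbers above can be checked with one common form of the Rayleigh pinhole criterion, d ≈ c·√(λ·f) with c ≈ 1.9; the exact constant varies between sources (the thesis does not state which form it used, and its 300 µm lies near, but not exactly at, the value below). The function name and constant are our assumptions.

```python
import math

def rayleigh_pinhole_diameter(focal_mm, wavelength_nm, c=1.9):
    """Optimal pinhole diameter d = c * sqrt(lambda * f), in metres.
    The constant c varies between sources (roughly 1.5 to 2)."""
    return c * math.sqrt(wavelength_nm * 1e-9 * focal_mm * 1e-3)

d = rayleigh_pinhole_diameter(55, 550)   # about 330 um for c = 1.9
f_number = 55e-3 / 300e-6                # the thesis pinhole: 55 mm / 300 um
```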

1.6 Mechanical Stability and Repeatability of Optical System Components

The mechanical stability and repeatability of the OS components play a major role in the camera calibration procedure, as they limit the achievable accuracy of the OS. OS are provided with either a fixed or a variable parameter configuration. Generally speaking, systems with fixed parameters achieve higher precision, as there is no need for tolerances between the elements that enable mechanical movement of certain groups in the OS.


In the case of a system having fixed parameters, the mechanical stability of the system is important, while in the second case, with a variable parameter system, the demand is not only for overall mechanical stability but also for repeatability of the OS components. Mechanical stability, depending on the quality of the construction and fixation of all elements of the system, applies for example to these components: the camera body (a camera body shell made from magnesium alloy certainly provides better stability than a body made of plastic materials), the digital sensor (instability driven by an imperfect joint with the camera back), the lens mount (the bayonet or screw-threaded joint mechanism for mounting the lens on a camera body is manufactured with certain tolerances in dimensions, which induces further instability between the camera body shell and the lens), mechanical play within the lens elements, etc. Mechanical repeatability, depending primarily on the magnitude of the mechanical play, applies mainly to these systems or features: zoom and varifocal lenses, lenses with thermal correction, lenses with an image stabilization system, sensor vibration for removal of dust particles, and image stabilization systems which command an actuator to move the sensor in order to reduce the blur caused by camera shake during image acquisition. It is important to mention the issue of mechanical stability and repeatability of the OS here, because our proposed method of lens calibration is based on a fundamental step where the measured lens has to be interchanged with the pinhole lens. Although we a priori expect that the repeatability of the connection of the lens with the camera body is not precise enough, there are OS whose repeatability is good enough (usually screw-threaded types of mount) for less accurate applications.
For those OS, the proposed method of measuring lens distortion can be expanded to also measure the principal distance and principal point position. The detailed description will be given in the next chapters. The mechanical stability, or rather the mechanical repeatability, of the lens mount therefore determines which parameters characterizing the OS geometry can be measured using our method. This implies two possible results:

- First, if the mechanical repeatability of the lens mount is poor, then it is possible to measure only the lens distortion.
- Second, if the mechanical repeatability of the lens mount is precise enough, then it is also possible to measure the principal distance and principal point position.

More details about the issue of mechanical stability and repeatability of the OS can be found for example in [Eisenhart 1963], [Shortis et al. 1997], [Shortis et al. 2006], and [Rieke-Zapp et al. 2009]. Finally, it has to be pointed out that mechanical variations induced by ambient conditions (temperature, pressure, etc.) are not part of what is meant in this work by “mechanical stability and repeatability” and have to be treated separately in calibration procedures.


1.7 Related Work

An overview of the most common techniques for camera calibration is given here. We divide these methods into two groups. The first group comprises methods that measure the lens distortion directly. The second group contains methods based on computing the parameters of an a priori given mathematical calibration model.

First group:

- Goniometer and multi-collimator based methods
- Stellar method

Second group:

- Projective invariant based methods (plumb-line, cross-ratio, vanishing points)
- Targets resection (in-situ calibration, self-calibration)
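The cross-ratio mentioned in the second group is the projective invariant of four collinear points: it keeps its value under any perspective projection, which is what makes it usable for distortion measurement without known target positions. A minimal numerical check, with a function name and projective map chosen by us for illustration:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC)/(AD/BD) of four collinear points,
    given by their 1-D coordinates along the line."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

# invariance under a projective map x -> (p*x + q) / (r*x + s)
pts = [0.0, 1.0, 3.0, 7.0]
proj = lambda x: (2 * x + 1) / (0.5 * x + 3)
before = cross_ratio(*pts)                     # 9/7 for these points
after = cross_ratio(*(proj(x) for x in pts))   # unchanged by the projection
```

A distorted image breaks this equality, so the residual of the cross-ratio serves as a distortion indicator (the basis of the validation test in Chapter 5).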

1.7.1 Goniometer Method

There are basically two approaches. The first is based on visual inspection of the nodes of a plane-parallel grid plate clamped at the focal plane of the camera (see Figure 1-2), while the second is based on projecting targets onto the image plane. The first (historic) approach is limited to analog cameras, because it is not possible to remount a digital sensor.

Figure 1-2. Goniometer scheme (source: [Cramer 2004]).

In the first case, the goniometer consists of a long-focal-length telescope mounted on a turntable which provides very precise movement in the horizontal or vertical direction (depending on the type). The camera is placed on the calibrator instrument and the photographic plate is interchanged with a plane-parallel grid plate measured with high accuracy. The system is then autocollimated. Each node of the grid of the plane-parallel plate is observed with the telescope and the corresponding angle is recorded. These data are then used for solving the parameters of the interior geometry of the camera.
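A sketch of how the recorded angle/radius pairs yield a distortion curve, assuming the simple radial definition Δr = r − f·tan θ (the function name and the reduction to this one-parameter form are our own simplification; a full adjustment would also solve for the principal distance and principal point):

```python
import math

def radial_distortion_from_angles(angles_deg, radii_mm, focal_mm):
    """Radial distortion dr = r - f*tan(theta) at each observed grid node.

    angles_deg : goniometer angles measured to the grid nodes
    radii_mm   : radial distances of the same nodes on the grid plate
    focal_mm   : principal distance of the camera
    """
    return [r - focal_mm * math.tan(math.radians(t))
            for t, r in zip(angles_deg, radii_mm)]

# a distortion-free (purely projective) camera gives dr = 0 everywhere
f = 150.0
angles = [0.0, 5.0, 10.0, 15.0]
radii = [f * math.tan(math.radians(t)) for t in angles]
dr = radial_distortion_from_angles(angles, radii, f)
```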

Figure 1-3. Zeiss goniometer (left), © Zeiss (source: [Cramer 2004]). Historical Wild AKG goniometer for visual calibration tasks (right; source: [Slama et al. 1980]).

In the second case, the cross-hair of a theodolite or another pattern is projected onto the image plane and measured on a comparator (analog) or automatically (digital) with subpixel precision (see Figure 1-3). The measurements are usually conducted in four different planes: horizontal, vertical, and two diagonals. Since autocollimation of a digital camera is problematic, the measurements are usually performed twice with the camera head rotated by 180°, and the subsequent adjustment of the interior orientation elements is expanded by including three rotation angles as unknowns. Two examples of goniometer systems are shown in Figure 1-3.

Figure 1-4. Multi-collimator scheme (source: [Cramer 2004]).


1.7.2 Multi-collimator Method

A multi-collimator consists of an array of collimators (simulating targets at infinity) accurately aligned in such a way that all collimator axes intersect in a single center (see Figure 1-4). An autocollimating telescope is used to position the camera so that the focal plane is perpendicular to the central collimator. The calibration procedure consists of several steps. The camera is placed on the calibrator instrument and aligned in such a way that the entrance pupil of the lens is located at the point of intersection of the array of collimators and the focal plane is perpendicular to the axis of the central collimator. Then, the photographic plate is exposed. The image coordinates of the targets given by the collimators are then used for solving the parameters of the interior geometry of the camera. The image of the central collimator is the principal point of autocollimation. An example of a multi-collimator system is shown in Figure 1-5.

Figure 1-5. Multi-collimator at the USGS. © USGS (source: [Cramer 2004]).

More information about methods employing a goniometer or multi-collimator device can be found in [Cramer 2004], [Slama et al. 1980], [Clarke et al. 1998] or [Sandau et al. 2010].

1.7.3 Stellar Method

The stellar method of camera calibration was frequently used during the 1950s by Hellmut Schmid and Duane C. Brown for calibrating the ballistic cameras used for tracking test rockets. This method takes advantage of the stars, which can be considered ideal targets at infinite distance, with positions given in astronomical tables. If the camera is placed at a known location (longitude and latitude) and directed to the zenith, the image of the sky gives several hundred star images, which, after atmospheric corrections, can be used for solving the parameters describing the deviations of the interior camera geometry from the projective geometry model. A detailed description of this method can be found in [Fritz et al. 1974].


1.7.4 Plumb-line Method

The plumb-line method for camera calibration was introduced by Duane C. Brown in his most cited article "Close-Range Camera Calibration" [Brown 1971]. The method is based on the straightness invariant of projective geometry. The calibration field, consisting of a set of plumb-lines (see Figure 1-6), is imaged by the camera being calibrated. The images of the plumb-lines are distorted by the lens, which is used for the analytical estimation (by adjustment with constraints) of the parameters describing the lens distortion. The principal point position can also be computed, but with lower precision (the precision of the estimate depends primarily on the magnitude of the distortion: higher distortion leads to higher precision). The principal distance cannot be recovered here. The method is well suited for calibrating cameras used in close-range photogrammetric applications.
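A toy version of the idea can make the principle concrete. The sketch below assumes a single-term radial model x_u = x_d(1 + k1 r²) and uses a grid search in place of Brown's constrained adjustment; all numbers are assumed for illustration:

```python
import numpy as np

def straightness(pts):
    """Smallest singular value of the centered point set: zero iff collinear."""
    q = pts - pts.mean(axis=0)
    return np.linalg.svd(q, compute_uv=False)[-1]

def estimate_k1(pts_d, k1_grid):
    """Pick the k1 whose undistortion x_u = x_d*(1 + k1*r^2) makes the
    imaged line straightest (grid search, illustrative only)."""
    def score(k1):
        r2 = (pts_d ** 2).sum(axis=1)
        return straightness(pts_d * (1.0 + k1 * r2)[:, None])
    return min(k1_grid, key=score)

# synthetic plumb-line image distorted with k1_true = -1e-8 (pixel units,
# radii up to ~1100 px); the inverse distortion is found by fixed-point iteration
k1_true = -1e-8
x = np.linspace(-1000.0, 1000.0, 21)
line = np.column_stack([x, 0.25 * x + 120.0])    # ideal straight line image
pts_d = line.copy()
for _ in range(60):
    r2 = (pts_d ** 2).sum(axis=1)
    pts_d = line / (1.0 + k1_true * r2)[:, None]
k1_est = estimate_k1(pts_d, np.linspace(-2e-8, 2e-8, 41))
```

In the real adjustment all line images are used simultaneously and the full distortion polynomial is estimated, but the driving criterion is the same straightness condition.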

Figure 1-6. Plumb-line calibration field (source: [Gruen et al. 2001]).

1.7.5 Cross-ratio Invariant Method

In 2003, Zhang et al. published an article in which they proposed a method for camera calibration based on the cross-ratio invariant. They propose to capture an image of a chessboard-pattern calibration field, choose any four collinear points (corners of the squares) and compute two cross-ratios: one corresponding to the object space (the real chessboard) and the second to the image space. The differences between those two values are attributed to the lens distortion, which is analytically estimated by constrained adjustment. Ricolfe-Viala et al. [2010] improved the estimation by incorporating all possible combinations of cross-ratios that can be found on a chessboard-pattern calibration field. Unlike Zhang et al., who derived the cross-ratio formulation separately for each coordinate x and y, Ricolfe-Viala et al. derived only one cross-ratio formulation using absolute values of distances instead of their separate x and y components. In their proposal, they also combine the cross-ratio invariant with the straight-line invariant.
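The invariant itself is easy to compute. A small sketch using absolute distances, in the spirit of the Ricolfe-Viala et al. formulation (the point values are assumed for illustration):

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio CR = (d13 * d24) / (d14 * d23) of four collinear points,
    computed from absolute (Euclidean) distances along the line."""
    d = lambda a, b: float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))

# four collinear "object space" points and their image under an affine
# (hence projective) map of the line: the cross-ratio is unchanged (4/3)
cr_obj = cross_ratio((0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0))
cr_img = cross_ratio((10.0, 5.0), (12.0, 4.0), (14.0, 3.0), (16.0, 2.0))
```

Lens distortion breaks this equality between object and image space, and that discrepancy is exactly what the constrained adjustment minimizes.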

1.7.6 Vanishing Points Method

This method is based on the relations between the perspective projection and vanishing points. Among many approaches, we will describe the method proposed in [Tan et al. 1995], which recovers the IO elements by using perspective views of a rectangle. The method requires only a very simple calibration field, which consists of four targets arranged in the shape of a rectangle and lying in a common plane. In projective geometry, a view of the rectangle constitutes two vanishing points. Each of them defines a vector having its origin in the projection center. Those vectors are perpendicular in Euclidean geometry, which means that their dot product has to be zero. This can be written as a condition equation, where each of the two vectors is composed of the measured coordinates of the rectangle corners and the IO elements. Each view of the rectangle constitutes such a condition, which leads to a system of equations. This system is then solved, for example, by least-squares adjustment. A different approach can be found in [Pajdla et al. 1999].
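A sketch of this orthogonality condition for the special case of a known principal point placed at the image origin (an assumption made here for brevity; [Tan et al. 1995] treat the general case with the principal point as an unknown):

```python
import numpy as np

def vanishing_point(a, b, c_, d):
    """Intersection of lines ab and cd, via homogeneous line/point duality."""
    h = lambda p: np.array([p[0], p[1], 1.0])
    v = np.cross(np.cross(h(a), h(b)), np.cross(h(c_), h(d)))
    return v[:2] / v[2]

def principal_distance_from_rectangle(corners):
    """corners: the 4 image corners of a rectangle, ordered around it.
    The two vanishing-point rays (x, y, c) from the projection center must be
    perpendicular, i.e. x1*x2 + y1*y2 + c^2 = 0, which is solved here for c."""
    p1, p2, p3, p4 = corners
    v1 = vanishing_point(p1, p2, p4, p3)   # vanishing point of sides p1p2 / p4p3
    v2 = vanishing_point(p2, p3, p1, p4)   # vanishing point of sides p2p3 / p1p4
    return float(np.sqrt(-(v1 @ v2)))
```

One view suffices for the principal distance in this simplified setting; several views are combined in the full method so that the principal point can be estimated as well.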

1.7.7 Methods Based on Targets Resection

The methods based on target resection rely on solving the well-known projective equations expanded by a set of additional parameters of a certain polynomial, which characterize the deviation of the real interior geometry of the camera from the ideal central projection. The projective equations express the relation between the 3D Cartesian coordinates of the targets in object space and the 2D coordinates of the corresponding targets in image space. The calibration procedure is usually based on acquiring a set of images of the targets taken from different views. If the targets are surveyed with demanding accuracy and spread around the measured object, then those targets are known as "control points" or "ground control points" and such an arrangement is known as the "in situ" calibration method. The calibration is here performed simultaneously with the object reconstruction. In aerial photogrammetry, the ground control points are set up on the ground and measured with GPS or surveyed with a theodolite. In close-range photogrammetric applications, the control points are usually provided by proprietary, highly accurate instruments in the shape of crosses, cages or bars.
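As an illustration of such expanded projective (collinearity) equations, the sketch below appends a two-term Brown radial polynomial as the additional parameters; the sign conventions, frame definitions and parameter values are assumed for the example and vary between software packages:

```python
import numpy as np

def project(Xw, R, X0, c, pp, k1, k2):
    """Collinearity projection of object points Xw (n x 3), followed by a
    two-term Brown radial polynomial acting as the additional parameters.
    R: rotation matrix, X0: projection center, c: principal distance,
    pp: principal point offset, k1/k2: radial distortion coefficients."""
    Xc = (R @ (Xw - X0).T).T                   # object -> camera frame
    x = c * Xc[:, :2] / Xc[:, 2:3]             # central projection (camera looks along +z)
    r2 = (x ** 2).sum(axis=1, keepdims=True)
    return x * (1.0 + k1 * r2 + k2 * r2 ** 2) + pp   # distort, shift to principal point
```

In the adjustment, equations of this form are written for every observed target in every image and solved for pose, object coordinates and the additional parameters together.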


Figure 1-7. Terrestrial calibration field used for digital aerial camera UltracamD calibration, © Vexcel (source: [Cramer 2004]).

The calibration method known as "self-calibration" enables simultaneous reconstruction and calibration even without known 3D Cartesian coordinates of the targets, provided the geometry of the configuration (see [Fraser 1984]) and the additional calibration polynomial are designed properly. This is now extensively used in the field of computer vision for the "structure from motion" method. Both above-mentioned methods can, of course, be used for the purposes of camera calibration only (see Figure 1-7). The most important investigations in the field of simultaneous camera calibration and object reconstruction were made several decades ago by the famous photogrammetrist Duane C. Brown within his company DBA Systems, Inc. (now Geodetic Systems, Inc.), and his findings, e.g. [Brown 1956, 1966, 1968, 1971], are used today as a standard. More information about the method of self-calibration can be found in [Fraser 1997] or [Pollefeys 1999]. A great source of information is provided by the books [Gruen et al. 2001] and [Luhmann et al. 2007].

1.7.8 Conclusion

An overview of the most common techniques for camera calibration was given in the previous sections. After reviewing hundreds of papers from all kinds of branches, we did not find any mention of using a pinhole lens for camera calibration; therefore, it is not possible to directly relate our proposed method to a previously published one. Some similarities can be found with the method employing the multi-collimator device and with the stellar method. Both are based on known angles between the targets constituting a "bundle of rays". However, those methods are not suitable for calibrating cameras for close-range photogrammetric applications, because their targets lie at infinity. In the analytical approach to camera calibration (the second group), which is today the most popular, several problems can arise, such as the choice of a proper mathematical model describing the optical system distortion. An inappropriate choice can lead to over-parameterization or to high correlations among parameters. In most cases, the well-known Brown polynomial model is sufficient. However, in the case of a special lens (fish-eye), or when working at close distances where the influence of the magnification on the distortion is high, this model is not valid. Many papers proposing modifications or different models (AP's) exist, e.g. [Tang et al. 2012], [Gennery 2003], [Fraser et al. 1992], [Stamatopoulos et al. 2011], [Bräuer-Burchardt et al. 2006] or [Willson 1994].


2. Description of New Proposed Method of Measuring Lens Distortion

2.1 Introduction and Main Idea

The best OS for use in photogrammetry would be an ideal OS having both principal planes coincident. In reality, this can be realized by an OS with a pinhole lens (see Figure 2-1, left). However, the numerical aperture of a pinhole lens is very low, which brings a high amount of noise on the sensor during long exposures. Additionally, the resolution performance of a pinhole lens is, due to the diffraction phenomenon, very poor compared to the performance of a glass lens. These are the main drawbacks of the pinhole OS, preventing its wider use in practice. On the other hand, if the imaging process is limited to small dots surrounded by a uniform background, then high resolution performance is not needed. Moreover, the long exposures can be shortened by introducing active light targets providing direct light. These two enhancements enable us to use the pinhole OS for lens calibration. Imaging with the pinhole lens provides two significant advantages: the mapping of the 3D object space to the 2D image space is an outright realization of projective geometry, and it is free of any distortion.

Figure 2-1. Pinhole optical system (left) and glass optical system (right).

The main idea of the proposed method is based on comparing two images of the calibration field targets acquired by the stable OS being calibrated, where one image is acquired with a single small change: the glass lens is replaced with the distortion-free pinhole lens.

2.2 Calibration Procedure and Arrangement

The instruments used for the proposed lens calibration procedure are as follows:
The OS being calibrated, consisting of a sensor mounted in a camera body and the glass lens. The camera body needs to be equipped with a lens mount (usually of the screw-threaded or bayonet type) that provides interchangeability of the lens.
A pinhole lens with the same type of lens mount and a similar focal length (the "focal length" of a pinhole lens is given as the flange focal distance of the camera body plus the lens-flange-to-pinhole-center distance; see Figure 2-3).
A planar calibration field consisting of tens of uniformly distributed small (2 mm) active light targets on a black background. The calibration field is situated in front of the camera at the working distance given by the demanded magnification and is parallel to the camera sensor (see Figure 2-2). The number and spacing of the targets are selected in such a way that the target images are uniformly distributed over the whole sensor area.

Figure 2-2. Arrangement of the proposed calibration method.

The procedure of the proposed method comprises several simple steps. The calibration field is captured twice from a single, tightly fixed pose. The first image is acquired with the pinhole lens attached to the camera body. Then, the pinhole lens is carefully interchanged with the glass lens and the second image is acquired (from the same fixed pose). The tight fixture of the camera body is necessary in order to prevent any movement during the lens change.

In the next step, the image coordinates of the target images are detected in both images. Two different sets of image coordinates for the same targets are thus obtained. The differences in image coordinates of corresponding targets between the two sets are a function of the differences of the IO elements (e.g. principal distance, principal point, lens distortion) between the pinhole OS and the glass OS. By proper treatment, it is possible to obtain those differences of the IO elements. If the first set of image coordinates is transformed (using scale and translation) to the second in such a way that the Euclidean norm (also known as the L2 norm, or least squares) of the residuals is minimal, we get the differences of the principal distance and principal point. Then, the vector differences between corresponding target images after the transformation can be attributed directly to the lens distortion, because the pinhole OS has no distortion. More details about the transformation process will be given in Chapter 3.
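The comparison step just described can be sketched in a few lines (a minimal illustration with a synthetic example; the full adjustment is detailed in Chapter 3):

```python
import numpy as np

def distortion_vectors(xy_pin, xy_glass):
    """Fit a scale s and translation (tx, ty) mapping the pinhole target
    coordinates onto the glass-lens ones in the least-squares (L2) sense;
    the remaining residual vectors are attributed to lens distortion."""
    n = len(xy_pin)
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = xy_pin[:, 0]; A[0::2, 1] = 1.0   # model s*x + tx
    A[1::2, 0] = xy_pin[:, 1]; A[1::2, 2] = 1.0   # model s*y + ty
    params, *_ = np.linalg.lstsq(A, xy_glass.reshape(-1), rcond=None)
    s, tx, ty = params
    mapped = s * xy_pin + np.array([tx, ty])
    return xy_glass - mapped, (s, tx, ty)

# synthetic, distortion-free check: a grid scaled by 1.02 and shifted (3, -2)
grid = np.stack(np.meshgrid(np.linspace(-1000.0, 1000.0, 5),
                            np.linspace(-800.0, 800.0, 5)), axis=-1).reshape(-1, 2)
res, (s, tx, ty) = distortion_vectors(grid, 1.02 * grid + np.array([3.0, -2.0]))
```

With real images, the recovered scale and translation correspond to the differences of principal distance and principal point, and the residual field is the distortion map.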

Figure 2-3. Schematic figure of the camera holding the pinhole lens, illustrating the principal distance cpin and the FOV denoted as α.

If the repeatability of the lens mount is precise enough, then the differences of the principal distance and principal point can be added to the corresponding values of the pinhole OS, which gives us, in addition to the lens distortion, an estimate of the principal distance and principal point of the glass OS. However, those IO elements of the pinhole OS have to be known. One possible method of calibrating the pinhole OS is given in Chapter 4.

2.3 Conditions for Choice of Principal Distance of Pinhole Lens

The principal distance cpin of the pinhole lens (or, more precisely, of the pinhole OS), which can be changed by using a set of extension tubes, has to be set in such a way that the following conditions relating to the principal distance cglass of the calibrated glass lens are met. In the case where cpin is larger than cglass, the pinhole OS has a narrower FOV and is able to cover only a portion of the object space covered by the glass OS. In practice this means that the pinhole OS captures a lower number of targets of the calibration field. The calibrated values would then be valid not for the whole FOV of the calibrated glass OS, but only for a smaller part; the rest of the field would have to be extrapolated. The principal distance cpin therefore has to be set in such a way that the calibrated values will be valid for at least a pre-defined portion of the FOV of the calibrated glass OS, for example 90 % of the field. In the case where cpin is smaller than cglass, the pinhole OS has a wider FOV and the image of the calibration field is proportionally smaller than that acquired by the calibrated glass OS, which leads to a decreased accuracy of the calibrated values. The principal distance cpin therefore has to meet also a second condition: it has to be set in such a way that the accuracy of the calibrated values will be equal to or higher than the demanded values given, for example, by an a priori error propagation analysis.
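The FOV comparison behind these conditions follows from the relation α = 2 arctan(d / 2c) implied by Figure 2-3; a quick sketch (function names and the linear-coverage formula are illustrative assumptions):

```python
import math

def fov_deg(sensor_extent_mm, c_mm):
    """Full field of view in degrees across a sensor extent d,
    for principal distance c: alpha = 2 * atan(d / (2*c))."""
    return 2.0 * math.degrees(math.atan(sensor_extent_mm / (2.0 * c_mm)))

def field_coverage(c_pin, c_glass):
    """Linear fraction of the glass-OS field seen by the pinhole OS on the
    same sensor; 1.0 whenever the pinhole FOV is at least as wide
    (i.e. c_pin <= c_glass), else c_glass / c_pin."""
    return min(1.0, c_glass / c_pin)
```

For instance, a pinhole principal distance of 60 mm against a 50 mm glass lens covers only 5/6 of the glass-OS field, so a 90 % coverage requirement would restrict cpin to at most about 55.6 mm.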

2.4 Conclusion

The proposed method of lens calibration does not incorporate the EO elements in the estimation in any way, which is advantageous, because those parameters are prone to correlations with the IO elements, which can lead to unreliable estimates. This usually happens when calibrating an OS having a narrow FOV or when using an over-parameterized model. The targets of the calibration field do not have to be measured, which increases the estimation accuracy, as no target measurement error is present. The main drawback of the proposed method is the requirement of changing the lens. This issue is described in Section 1.6. There is also a limitation in the range of FOV: a very wide FOV cannot be calibrated. The pinhole OS is not able to refract light; therefore, the smallest possible principal distance of such an OS (or, more intuitively, the largest FOV) is limited by the flange focal distance of the camera body (see Figure 2-3).


3. Calibration of the Glass Optical System

3.1 Coordinate Systems Definition and Notation

The global coordinate system for the calibration arrangement is defined as follows: the origin lies in the projection center of the ideal OS (the ideal OS will be introduced in Section 3.2), which is, by definition, the same as that of the glass OS. The axis orientation is defined as usual in matrix systems, where the origin lies in the upper left corner of the image with the x axis directed downward (rows) and the y axis directed to the right (columns). However, this axis orientation is valid only for positive images. For negative images (as physically registered on the sensor), shown in Figure 3-1, the origin lies in the lower right corner of the image with the x axis directed upward and the y axis directed to the left. The z axis has been chosen using the right-hand rule. This definition allows us to read the image coordinates directly in the matrix system (images given by the camera are already in positive orientation), which avoids many problems and confusion with orientation in programming. However, it has to be pointed out that only the axis orientation is adopted; the origin stays in the projection center.

Figure 3-1. Illustration of the global coordinate system definition, where the origin is located in the projection center of the ideal OS and the camera sensor is the negative. xred and yred denote the axes of the global coordinate system, while x and y denote the axes of the local coordinate system.

Instead of the traditional Cartesian system of coordinates we will use the homogeneous (also called projective) system of coordinates. Image coordinates will be defined in the same global coordinate system defined herein, which is unusual, but very practical. The z coordinate of every image point is therefore equal to the principal distance of the ideal OS. The x and y coordinates have their origin in the sensor plane at the location of the principal point of the ideal OS. Coordinates in the global coordinate system will be denoted with the superscript red, symbolizing the reduction of the measured coordinates in the local system to this origin. The local system uses the same axis orientation as the global coordinate system, but the x and y component of the origin lies in the upper left corner of the (positive) image (see Figure 3-1).

component of the origin lies in the upper left corner (on positive image) of the image (see Figure 3-1). Now, the basic relations and the notation relating to Figure 3-1, Figure 3-2, Figure 3-3, Figure 3-4, Figure 3-5, Figure 3-6, Figure 3-8 and Figure 3-10, respectively, will be defined. One dot and two dots above the symbol denotes the applicability of the variable to the object space and image space, respectively. denotes the object space coordinates of point , while coordinates of point

and

and

denotes the image space

given by pinhole OS illustrated in Figure 3-3 and glass OS illustrated in Figure 3-4

and ideal OS illustrated in Figure 3-5, respectively. The point

is defined in global coordinate system by relations:

(3.1)

and in the local coordinate system:

(3.2)

where (similarly for

):

(3.3)

where

and

local system and matrix

denotes the x an y coordinates of the principal point of the ideal OS in the denotes the transformation matrix used for local to global 32

transformations.

and

and

denotes the principal point of pinhole OS and glass OS

and ideal OS in global coordinate system, respectively:

(3.4)

And in the local coordinate system:

(3.5)

where (in global coordinate system):

(3.6)

where

is the pinhole projection center. Position of the pinhole OS projection center in the global

coordinate system is shown in Figure 3-2. Principal distance of pinhole OS

is then:

(3.7)

33

Figure 3-2. Illustration of the pinhole OS projection center position with respect to the global coordinate system.


Figure 3-3. Illustration of the projection of the object point p to sensor by the pinhole optical system.

Figure 3-4. Illustration of the projection of the object point p to sensor by the glass optical system.


Figure 3-5. Illustration of the projection of the object point p to the sensor by the ideal optical system. Points denoted with H are the principal points of the ideal optical system (these differ from the principal point used in photogrammetry!).

Figure 3-6. Illustration of the projection of the object point p to the sensor by pinhole and glass optical systems.


3.2 Introduction of Ideal Optical System as Intermediate Step in Transformation Process

The ideal OS is designed and intended as the best approximation of the glass OS. This OS is only virtual and does not exist in reality. The relation between the real pinhole OS and the ideal OS is shown in Figure 3-8, while the relation between the real glass OS and the ideal OS is shown in Figure 3-10. The introduction of the ideal OS as an intermediate step has the advantage that the transformation process between the pinhole OS and the glass OS can be divided into two consecutive parts:

(3.8)

where:

(3.9)

where the first part (3.9) is a simple linear transformation between the pinhole OS and the ideal OS. If we accept the existence of the ideal OS, then the elements of this transformation matrix are actually a function of the differences of the principal distance and the principal point position between the pinhole and the ideal OS (this premise is valid in our situation, where both the ideal and the pinhole OS have the same image plane, the sensor). The second part of the transformation process is realized by the function fd and its parameters. This function maps the glass OS to the ideal OS and represents the distortion of the glass OS induced by the optical and mechanical aberrations. The superscript in the function fd and its parameters emphasises the known fact that the distortion of the glass OS is dependent on the magnification of such an OS.

3.3 Transformation Between Pinhole and Ideal Optical Systems

The transformation between the pinhole OS and the ideal OS is generally a projection, induced by a combination of two projections (3.13): the projection of the object space by the pinhole OS (3.10), shown in Figure 3-3, and by the ideal OS (3.11), shown in Figure 3-5:

(3.10)

(3.11)


After combining equations (3.10) and (3.11):

(3.12)

where:

(3.13)

Let's begin with deriving the first part:

(3.14)

therefore:

(3.15)

Now we will derive the second part:

(3.16)

where the first matrix is used for translating the point to have its origin in the pinhole projection center, the second matrix constitutes the projection of the shifted point from the object space to the image space using the homogeneous coordinate system, and the third matrix translates the projection (via the pinhole projection centre) of the point from the global coordinate system to the local coordinate system.

After multiplying these matrices:

(3.17)

and the inversion is:

(3.18)

finally, we get:

(3.19)

Because we are using homogeneous coordinates, we can divide by the last component:

(3.20)

For the third component (which is, according to (3.12), the same on both sides), we can write:

(3.21)

which after trivial simplification becomes:

(3.22)

Therefore, only the first and second components constitute a system of independent equations which can be solved. Those two equations, according to (3.12), are given by:

(3.23)

where:

(3.24)

and

(3.25)

and

(3.26)

The superscript symbolises the dependence of these parameters on the magnification, because:

(3.27)

Now, two approaches to the solution of (3.23) exist. The first one, which is the primary one in this thesis, is based on the assumption that the principal distance and the principal point cannot be measured with sufficient reliability due to the poor mechanical repeatability of the lens mount. Therefore, we compute the parameters (which do not have a direct physical meaning) only in order to be able to perform the intermediate step in the computation of the lens distortion (see eqs. (3.8) and (3.9)). Simply put: the measured image coordinates of the targets of the calibration field given by the pinhole OS are scaled and translated to the positions given by the ideal OS. Then, each pair of corresponding coordinates belonging to a target of the calibration field constitutes the distortion measure applicable at that target's position.

The second approach considers measuring not only the lens distortion of the glass OS but also its principal distance and principal point. In this approach, the estimated parameters, together with the a priori known IO elements of the pinhole OS and the measured z coordinate, are used as an input to the system of equations given by (3.24), (3.25) and (3.26), where the parameters are replaced by simple functions:

(3.28)

(3.29)

(3.30)

(3.31)

where zpc is the z axis distance between the pinhole projection center (the center of the real mechanical pinhole element) and the point, which can be easily measured in a real situation. Then, the system of three equations contains only three unknown parameters, which leads to a unique solution. This system of three non-linear equations can be solved by the estimator given in Appendix A. The functions, partial derivatives and other details of the adjustment can be found in Appendix B.

3.3.1 Error Propagation

Several possible situations have been simulated in order to get the SD estimates of the unknown parameters. We are primarily interested in how accurately the distance zpc has to be known. The estimator and methods described in Appendix A have been used. For each configuration given in Table 3-1, the parameters were computed. Then, the tested SD (sx) of the selected input parameter was set to the desired value, while the SD values of all other input parameters were set to minimal values (1e-9 mm). The results of the error propagation for the evaluated configurations are given in Table 3-2.

Table 3-1. Parameters of the evaluated configurations (columns: config. no.; ppPOS ([x y]) [mm]; cPOS [mm]; ppIOS ([x y]) [mm]; cIOS [mm]; zpc [mm]).

Table 3-2. Results of error propagation for various configurations (for each tested input SD, i.e. spp_x_POS [µm], spp_y_POS [µm], sc_POS [µm] and two values of sz_pc [mm], the resulting spp_IOS ([x y]) [µm] and sc_IOS [µm] are listed per configuration).

The results given in Table 3-2 show that the distance zpc does not need to be measured with high accuracy; for the evaluated configurations, an accuracy in the order of millimeters is sufficient. Lower magnifications (compare configurations 1 v. 2 and 3 v. 4) demand less accuracy of zpc. A closer configuration of the pinhole OS to the ideal OS (glass OS) also demands less accuracy of zpc (compare configurations 1 v. 3 and 2 v. 4), which is logical. The influence of the SD values of the principal point (spp_x_POS, spp_y_POS) and of the principal distance (sc_POS) of the pinhole OS on the corresponding values of the ideal OS is almost linear and does not depend much on the configuration differences between the two OS or on the magnification.

3.3.2 Position of Ideal Optical System

The system of two equations given by (3.23) cannot be solved directly using the ideal-OS coordinates, because those coordinates do not actually exist, as was stated previously. Inasmuch as the ideal OS is only an approximation of the glass OS and therefore also does not exist in reality, some criterion has to be chosen which defines the exact position of the ideal OS in relation to the glass OS. For example, the following criterion can be chosen: the Euclidean norm (also known as the L2 norm, or least squares) of the differences between the coordinates corresponding to the glass OS and the coordinates corresponding to the ideal OS is minimal. This can be written as:

(3.32)

where n is the number of targets. After substitution of

from (3.12) we get:

(3.33) Finally it is possible to estimate the unknown parameters

using relations (3.23) and (3.33). The

solution can be performed for example by an estimator given in Appendix A. The functions, partial derivatives and other details of the adjustment can be found in Appendix C. Our approach to defining the position of the ideal OS in relation to the glass OS is actually a more general form of defining "principal point of best symmetry (PPS)" and "calibrated focal length (CFL)" – wellknown terms in photogrammetry. Then, the parameters

and

are related to PPS, while

relates to

CFL. We would like to pointed out again, that the ideal OS and glass OS share in common the principle distance and the principal point position:

(3.34)

(3.35)


Figure 3-7. Illustration of the transformation (equation (3.12)) of the target images given by the pinhole optical system (red circles in the left image) to the target images given by the ideal optical system (red circles in the right image; those target images do not exist in reality). Blue circles denote the target images given by the glass optical system; those target images are used for defining the position of the ideal optical system.

Figure 3-8. Comparison of the ideal optical system with the pinhole optical system. Points denoted with H are the principal points of the ideal optical system (these differ from the principal point used in photogrammetry!).


3.4 Transformation Between Glass and Ideal Optical System

The transformation between the glass and the ideal OS is described by the formula given in (3.8):

(3.36)

Having the function fd, it is possible to estimate its unknown parameters. The same criterion (Euclidean norm) as was used in the previous section can be used here. Therefore:

(3.37)

The following condition has to be fulfilled if the demanded accuracy shall be reached:

(3.38)

where the left-hand side is given by (3.12) and where the bound depends on the demanded accuracy. If a certain function fd does not satisfy the condition (3.38), a different one has to be chosen. A lower bound leads to more complicated functions. Our approach to measuring lens distortion directly provides the deviations of the corresponding target images given by the glass OS and the ideal OS; hence the function fd approximating the distortion progress with respect to local deformations can be chosen arbitrarily. Moreover, no function fd is needed at all if we tabulate the distortion values and apply a proper interpolation method.
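As an illustration of the tabulation alternative, the sketch below interpolates the measured distortion vectors by simple inverse-distance weighting; this particular interpolation scheme is an assumed example, not one prescribed by the method:

```python
import numpy as np

def interpolate_distortion(xy_targets, d_vectors, xy_query, p=2.0):
    """Inverse-distance-weighted interpolation of tabulated distortion
    vectors d_vectors (n x 2) measured at xy_targets (n x 2),
    evaluated at the query positions xy_query (m x 2)."""
    diff = xy_query[:, None, :] - xy_targets[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    w = 1.0 / np.maximum(dist, 1e-9) ** p        # clamp avoids division by zero
    w /= w.sum(axis=1, keepdims=True)            # normalize weights per query
    return w @ d_vectors

# tabulated distortion at four targets (assumed toy values, pixels)
targets = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vectors = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
corr = interpolate_distortion(targets, vectors, np.array([[10.0, 0.0], [5.0, 5.0]]))
```

Querying exactly at a target returns its tabulated vector, and a query equidistant from all targets returns their mean; denser target grids make any reasonable interpolant adequate.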

Figure 3-9. Illustration of the transformation (equation (3.36)) of the target images given by the glass optical system (blue circles in the left image) to the position given by the ideal optical system (red circles). The magnitude of the residual vectors after the transformation depends on the applicability of the function fd. Blue circles in the right image denote the corrected positions after the transformation.

Figure 3-10. Comparison of the ideal optical system with the glass optical system.


4. Calibration of Pinhole Optical System

4.1 Introduction

As was mentioned in Section 3.3, in order to estimate the principal distance and the principal point of the glass OS, the principal distance and the principal point of the pinhole OS have to be known. The calibration procedure of such a specific OS is given in this chapter. The notation and the global and local coordinate system definitions are the same as in the previous chapter, with one exception: the origin is located in the projection center of the pinhole OS. The subscripts (POS, GOS) used to distinguish the targets given by the pinhole and the glass OS are not needed in this chapter. The visualization of the global and local coordinate systems is shown in Figure 4-1.

Figure 4-1. Illustration of the global coordinate system definition, where the origin is located in the projection center of the pinhole OS and the camera sensor is the negative. xred and yred denote the axes of the global coordinate system, while x and y denote the axes of the local coordinate system.


The projection realized by a camera with a mounted pinhole lens is free of the aberrations caused by a glass lens. The only imperfection of the projection is due to the very small diameter of the aperture, which causes a much stronger diffraction effect in comparison to projections with a glass lens. However, the diffraction effect has no influence on the position of the projected point. Therefore, the IO elements of the pinhole OS include only the principal distance and the principal point, which are grouped together and denoted as in (4.1). The simplicity of the interior geometry then enables the use of simple calibration methods.

(4.1)
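For illustration, the interior geometry described above reduces to two quantities. The following sketch projects an object-space point through an aberration-free pinhole model; the symbols and the sign convention are assumptions made for the example, not taken from the thesis.

```python
# Minimal pinhole projection: the only IO elements are the principal
# distance c and the principal point (x0, y0). Sign convention assumed.

def pinhole_project(X, Y, Z, c, x0, y0):
    """Project object point (X, Y, Z), given in the camera frame with the
    origin in the projection center, onto the sensor plane."""
    if Z == 0:
        raise ValueError("point lies in the projection-center plane")
    x = x0 + c * X / Z
    y = y0 + c * Y / Z
    return x, y

# Example with the nominal values used later in Table 4-1.
x, y = pinhole_project(100.0, 200.0, 1000.0, 50.0, 12.0, 18.0)
# x = 12 + 50*0.1 = 17 mm, y = 18 + 50*0.2 = 28 mm
```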

We have reviewed a large number of methods and found one that seems most suitable for our case. This method recovers the IO elements by using perspective views of a rectangle.

4.2 Method of Recovery of Interior Orientation Elements Using Perspective Views of a Rectangle

We decided to inspect the method of recovery of the IO elements using perspective views of a rectangle, exploiting the geometric relations of perspective projection and vanishing points given in [Tan et al. 1995]. This method does not incorporate the EO elements in the estimation, which is advantageous, because those parameters are prone to correlations with the IO elements, which can lead to unreliable estimates. This usually happens when calibrating an OS having a narrow FOV or when using an over-parameterized model. The method requires only a very simple calibration field, which consists of four targets composed in the shape of a rectangle and lying in a common plane.

Figure 4-2. Illustration of the projection of the rectangle to the sensor plane and the two vanishing points constituted by the opposite sides of the rectangle.

In projective geometry, the view of the rectangle constitutes two vanishing points (see Figure 4-2). Each of them defines a vector having its origin in the projection center. These vectors are perpendicular in Euclidean geometry, which means that the dot product of the two vectors has to be zero. This can be written as a conditional equation:

(4.2)

where the vectors constituting the vanishing points are:

(4.3)

Then, expanding the conditional equation (4.2) by the terms in (4.3) gives:

(4.4)

which may be expressed functionally:

(4.5)

Each view of the rectangle constitutes one condition (4.4), which leads to a system of equations, where the measurements are the image coordinates of the vanishing points and the unknown parameters are the IO elements. Having more than three views of the rectangle, these parameters can be solved for by the estimator given in Appendix A. The functions, partial derivatives and other details of the adjustment can be found in Appendix D.
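The essence of the condition can be sketched numerically. Assuming the standard orthogonality constraint of [Tan et al. 1995], a view with vanishing points (xp, yp) and (xq, yq) satisfies xp·xq + yp·yq − x0(xp + xq) − y0(yp + yq) + K = 0 with K = x0² + y0² + c², so three or more views determine (x0, y0, K) linearly and c follows as a square root. The sketch below is our own illustration of this idea on synthetic vanishing points generated from orthogonal space directions; it is not the estimator of Appendix A.

```python
# Sketch: recovering IO elements (x0, y0, c) from vanishing-point pairs of
# orthogonal space directions via the linearized constraint
#   xp*xq + yp*yq - x0*(xp+xq) - y0*(yp+yq) + K = 0,  K = x0^2 + y0^2 + c^2.
# Synthetic data; sign convention and direction choices are assumptions.

def solve3(A, b):
    """Gaussian elimination (partial pivoting) for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for cc in range(k, 4):
                M[r][cc] -= f * M[k][cc]
    x = [0.0] * 3
    for k in (2, 1, 0):
        x[k] = (M[k][3] - sum(M[k][j] * x[j] for j in range(k + 1, 3))) / M[k][k]
    return x

def vanishing_point(d, c, x0, y0):
    """Image of the point at infinity in space direction d = (dx, dy, dz)."""
    return x0 + c * d[0] / d[2], y0 + c * d[1] / d[2]

c_true, x0_true, y0_true = 50.0, 12.0, 18.0     # nominal values of Table 4-1
pairs = [((1, 0, -1), (1, 1, 1)),               # each pair: two orthogonal
         ((0, 1, -1), (1, 1, 1)),               # space directions (dot = 0)
         ((1, 1, -1), (1, 0, 1)),
         ((1, -0.5, -1), (1, 0, 1))]

# Build the normal equations A^T A u = A^T b for u = (x0, y0, K).
AtA = [[0.0] * 3 for _ in range(3)]
Atb = [0.0] * 3
for d1, d2 in pairs:
    xp, yp = vanishing_point(d1, c_true, x0_true, y0_true)
    xq, yq = vanishing_point(d2, c_true, x0_true, y0_true)
    row = [-(xp + xq), -(yp + yq), 1.0]
    rhs = -(xp * xq + yp * yq)
    for i in range(3):
        Atb[i] += row[i] * rhs
        for j in range(3):
            AtA[i][j] += row[i] * row[j]

x0_est, y0_est, K = solve3(AtA, Atb)
c_est = (K - x0_est**2 - y0_est**2) ** 0.5
```

With noise-free synthetic data the nominal values are recovered exactly; with real measurements the weighted estimator of Appendix A is used instead.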

4.2.1 Estimation of Vanishing Points and Covariance Matrix

The method of recovery of the IO elements using perspective views of a rectangle requires only a very simple calibration field, which consists of four targets composed in the shape of a rectangle and lying in a common plane. The projection of the rectangle from object space to image space is a trapezoid, which makes it possible to constitute two vanishing points in image space. The image coordinates of the vanishing points are computed as the intersections of the lines defined by the opposite sides of the projected rectangle, as can be seen in Figure 4-2. Now, it is clear that the real measurements are the image coordinates of the target points and not the image coordinates of the vanishing points. Therefore, the function (4.5) should be changed to:

(4.6)

However, such a function would be quite large, due to the line intersection function, and therefore the substitution with vanishing points is used instead. The input values to the estimation will then be computed as follows:

(4.7)

where the substituted function computes the intersection of two lines. The covariance matrix corresponding to the vanishing points is, according to the error propagation law:

(4.8)

where the covariance matrix of the real measurements and the following Jacobian matrix enter the propagation:

(4.9)

where m = 4 × number of images (each image constitutes two vanishing points, giving four coordinates) and n = 8 × number of images (four image points give eight measurements).

4.2.2 Method for Correction of Vanishing Points Position

The theory given in the previous sections assumes that the shape of the calibration field is an ideal rectangle. However, in reality, it is not possible to adjust the targets to a perfect rectangular shape without error. Therefore, in a real situation, the opposite sides of the rectangle are not parallel and the estimation involves a systematic error. The magnitude and other characteristics of such a systematic error, given by misaligned targets, cannot be modeled by the error propagation law that was used in relation (4.8). Therefore, the covariance matrix given in relation (4.8) cannot be used as a reliable characteristic of the accuracy relations of the problem.

Not all four targets of the calibration field have to be accurately adjusted. The first two targets a and b do not need to be adjusted at all, because they define the first side of the rectangle. The third target c has to be adjusted only in one axis (the direction parallel to the first side of the rectangle), because the position in the other direction has no influence on the perpendicularity of the sides ab and ac, and because the z coordinate of c defines (together with the z coordinates of a and b) the plane in which the rectangle lies. Only the fourth target d has to be adjusted in each axis in order to fulfill the parallelism of the opposite sides of the rectangle. The adjustment can be conducted by using highly accurate positioning stages and a standard geodetic theodolite. For the reasons given in the first paragraph of this section, we propose a modified arrangement of the calibration field and a special mathematical treatment, leading to a significantly lower demand on the accuracy of the target adjustment. Then, the xy positioning stages are not needed. The proposed technique is based on correcting the misaligned coordinates of the targets c and d. However, it has to be pointed out that the presented procedure is able to correct the target coordinates only in the x and y axis directions; the z coordinates of the targets have to be adjusted in any case.

Figure 4-3. Projective geometry of four points, inducing the cross-ratio invariant.

The basic idea of the correction method lies in using a projective geometry invariant – the cross-ratio – which is defined as the double ratio of lengths lying on a common line. The cross-ratio is invariant to all changes of the collinear points constituting the lengths that are induced by a projective transformation. This is illustrated in Figure 4-3, where:

(4.10)

This approach of using a projective geometry invariant has been chosen because the pinhole OS is an exact realization of projective geometry.
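The invariance can be checked numerically. The sketch below uses one common convention of the cross-ratio, CR = (AC·BD)/(BC·AD), and an arbitrary 1D projective map; the specific numbers are illustrative only.

```python
# Cross-ratio of four collinear points (1D coordinates) and its invariance
# under a projective transformation x -> (a*x + b)/(c*x + d).
# The convention CR = (AC*BD)/(BC*AD) is one common choice.

def cross_ratio(a, b, c, d):
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def projective(x, m=(2.0, 1.0, 0.1, 3.0)):
    a, b, c, d = m
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 3.0, 7.0]
cr_before = cross_ratio(*pts)                        # 18/14 = 9/7
cr_after = cross_ratio(*(projective(x) for x in pts))
# cr_before == cr_after up to rounding: the cross-ratio is invariant.
```

An affine change (scale and shift) is a special case of the projective map, so the same check covers the simpler transformations as well.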

Figure 4-4. Calibration field for the method of interior orientation elements recovery using perspective views of a rectangle, extended with three more points used for the parallelism correction.

In order to use the cross-ratio invariant, the calibration field has to be extended by three more points, placed approximately in the middle of the rectangle sides (see Figure 4-4). Then, the position of all seven points of the calibration field has to be surveyed with the appropriate accuracy given by the error propagation analysis. Figure 4-5 shows the situation of the calibration field in 3D Euclidean space (object space), while Figure 4-6 shows the image of the calibration field in 2D projective space (image space). Black dots denote the real targets (measurements), while the circles denote auxiliary points computed as a function of the measurements. The derivation of the formulas will be given here only for the correction of the point c; the derivation for the point d is similar.

Figure 4-5. Situation of the calibration field in 3D Euclidean space, showing the real targets as black dots and auxiliary points as circles, including vanishing point q and corrected points ccor and dcor.


As a first step, the auxiliary points in the object space shown in Figure 4-5 will be computed. The computed points, together with the baseline, constitute the ideal rectangle; they are obtained as intersections of the corresponding vectors. In case a couple of lines is close to parallelism, which can cause numerical instability, the directions of the vectors will be used instead of the directions given by the point pairs. Now it is possible to compute the cross-ratios, where:

(4.11)

(4.12)

The next part of the derivation will be performed in the image space and is therefore related to Figure 4-6. All points are given by the projection of the object space. Here, we are looking for the position of the corrected points ccor and dcor. It is obviously not possible to compute the position of those two points as easily as in the object space. But if we compute the same auxiliary points as in the object space, then the point ccor can be computed as an intersection of the corresponding vectors (and similarly for the point dcor).

Figure 4-6. Situation of the projected calibration field in 2D projective space, showing the target images as black dots and the auxiliary points as circles, including the vanishing points p, q and the corrected point ccor.

The auxiliary points are computed similarly as in the object space. The points ccor and dcor will be computed with the help of the cross-ratios, which are already known from the object space. It is known that:

(4.13)

where:

(4.14)

The equation (4.14) should be extended with the constraint that all four points are collinear; alternatively, the points, lying on a common line, can be rotated into the direction of the x axis. Then, the equation (4.14) can be rewritten for the x component only and solved in one dimension:

(4.15)

where:

(4.16)

then, from (4.15):

(4.17)

and finally:

(4.18)


The corrected point in the image space is derived analogously. Then, as was mentioned above, the point ccor can be computed as an intersection of the corresponding vectors. The point dcor is derived analogously to the point ccor.

Now, the function (4.6) containing the real measurements is extended to include the additional measurements needed for the correction of the points ccor and dcor:

(4.19)

However, such a function is enormously large, and therefore the substitution with vanishing points will be used again. The input values to the estimation will then be computed as follows:

(4.20)

The covariance matrix of the vanishing points would contain three sources of error: the error given by the target adjustment, the error given by the target surveying and the error given by measuring the image coordinates of the target images. Because the first two sources of error lead to a systematic influence on the estimate, the error propagation law incorporating all three sources of error would give an unreliable estimate of the error characteristics. Therefore, such a covariance matrix (whose derivation is very complicated, considering all the relations given in this section) is of no practical use.

4.2.3 Computer Simulation

In order to verify the theoretical development given in the previous sections and to estimate the accuracy of the method for various configurations, a computer simulation has been developed. The Monte-Carlo method has been used in the simulations, because a closed-form solution based on the error propagation law is not able to propagate the systematic error given by the adjustment of the calibration field targets. The errors have been simulated by using the function normrnd given in SW Matlab, which generates data having a normal distribution. In the first step, a large dataset of EO elements is simulated. Then, the best configuration (3–n images, where each image corresponds to unique EO elements) out of all the simulated EO elements is found. Finally, the accuracy of the adjusted parameters (using the function (4.5) and the procedure described in Appendix A) is estimated by using the Monte-Carlo method. The input data for the simulation of the EO elements are given by the pinhole OS parameters (sensor size, principal distance and principal point) and the calibration field target coordinates. The set of EO elements is generated under these conditions:

- visibility of all calibration field targets on the sensor,
- a minimum ratio (for example 20 %) of the projected calibration field area to the sensor area,
- the distance of the camera pose to the center of the calibration field is within pre-specified boundaries,
- each camera pose is located in a nodal point of a virtual 3D grid, where the spacing is set manually,
- the camera in each pose is rotated in such a way that the z axis of the camera coordinate system is directed towards the center of the calibration field,
- the EO elements at each pose can be multiplied by rotating the camera around the z-axis k-times (see Figure 4-8).

One of the advantages of the method of recovery of the IO elements using perspective views of a rectangle is that the calibration field can be viewed from only one of the four quadrants of the object space. This greatly reduces the demand on the needed space. An example of an automatically generated set of EO elements, which will then be used in the following simulations, is shown in Figure 4-7.

Figure 4-7. An example of a large dataset of simulated camera poses, symbolised by green dots. Black dots symbolise the calibration field targets.

In the next step, the best configuration (a set of 3–n images) out of all the simulated EO elements is found. The program selects at least three simulated camera views and estimates the parameters. This sequence is repeated multiple times in order to find the camera view configuration providing the best estimate. The selection of camera views providing the smallest sample variance (the highest of the parameter sample variances is chosen for the comparison) out of every other selection is considered the best configuration. In the program, there are two options of how the selection of the camera views can be made. The first option is to try all combinations. This guarantees that the best possible estimate will be found (within the pre-specified parameters), but the extremely high number of combinations limits the grid spacing used for generating the camera poses and also allows the use of only a triple of camera views. The second option selects the camera views randomly, either for a certain amount of time T or just n-times, and records the best estimate. For our purposes, the second variant was chosen. When the best configuration was searched for, the error of the calibration field target adjustment was considered to be zero. This approach is necessary in order to use the covariance matrix given in (4.8). The Monte-Carlo method could not be used here, because each configuration needs a new covariance matrix. The closed-form solution given in (4.8) is able to compute that matrix fast, contrary to Monte-Carlo.

Figure 4-8. An example of a simulation of three images of the calibration field, generated by rotating the camera around the z-axis.

Having the best configuration of camera views, it is possible to start the main simulation leading to the covariance matrix of the estimated parameters. Such a covariance matrix is computed statistically from a large set of simulated estimates. The process of simulating those values is as follows: Firstly, the error given by the target adjustment is simulated on the Euclidean coordinates of the calibration field targets. Then, the targets are projected into each image by using the known EO elements from the given best configuration. Then, the error arising from the image coordinate measurement is simulated and the image coordinates of the vanishing points are computed as intersections of the corresponding target images. Finally, those image coordinates of the vanishing points enter the adjustment, which results in one parameter estimate. This process is repeated until a sufficient amount of data is reached. The demanded covariance matrix is then computed by well-known relations. In case the correction of the misaligned targets by using the cross-ratio invariant is applied, the above described process is more complicated; the detailed description of the procedure can be found in the previous section.
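The Monte-Carlo loop described above can be sketched for its innermost building block: the line intersection yielding a vanishing point. All numbers below (noise level, point layout, sample count) are illustrative assumptions, not the thesis configuration.

```python
import random

# Monte-Carlo sketch: propagate image-coordinate noise through the line
# intersection that yields a vanishing point, and estimate the sample
# covariance of the intersection statistically. Values are illustrative.

def intersect(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (2D points)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / den,
            (a * (y3 - y4) - (y1 - y2) * b) / den)

random.seed(1)
sigma = 0.001                      # SD of image coordinate measurement
base = [(0.0, 0.0), (10.0, 1.0),   # one side of the projected rectangle
        (0.0, 5.0), (10.0, 4.0)]   # the opposite side
samples = []
for _ in range(20000):
    noisy = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
             for x, y in base]
    samples.append(intersect(*noisy))

mx = sum(s[0] for s in samples) / len(samples)
my = sum(s[1] for s in samples) / len(samples)
cov_xx = sum((s[0] - mx) ** 2 for s in samples) / (len(samples) - 1)
cov_yy = sum((s[1] - my) ** 2 for s in samples) / (len(samples) - 1)
```

In the full simulation the sample quadruple is replaced by the projected calibration field targets and the intersection result enters the adjustment, but the statistical bookkeeping is the same.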

57

The covariance matrix of the input values (the image coordinates of the vanishing points) that enter the adjustment process described in the previous paragraph is computed statistically from a large set of simulated values. The procedure is exactly the same as that given in the previous paragraph with one exception: Because the simulated dataset contains the image coordinates of the vanishing points and not the estimated parameters, the last step (the least-squares adjustment) is skipped.

The validity of the implementation of the Monte-Carlo method has been confirmed by a simple test, based on the comparison of two covariance matrices. The first covariance matrix was computed as a result of the least-squares adjustment, which uses the covariance matrix of the input values given by the error propagation law. The second covariance matrix was computed by using the Monte-Carlo method and the procedure described in the previous paragraphs. The results of this test are shown in Table 4-4.

Experimental simulations have been performed for three values of the principal distance – 35 mm, 50 mm and 70 mm. Three tests have been conducted having the same configuration parameters, with the following exceptions: The SD of the target adjustment was increased in the x and y axis directions in the second and the third test. The parallelism correction was applied in the third test. Table 4-5 shows the results of these tests. The parameters of the pinhole optical system, the EO elements simulation and the test parameters used in the presented experiments are given in Table 4-1, Table 4-2 and Table 4-3, respectively.

Table 4-1. Parameters of the pinhole optical system.
Nominal principal distance: 35 mm, 50 mm, 70 mm
Sensor size: (24 × 36) mm
Principal point: pp_x = 12 mm, pp_y = 18 mm

Table 4-2. Parameters of the exterior orientation elements simulation.
Grid spacing: 300 mm
Angle of camera rotation: 30°
Overall number of simulated camera views: 6 856 (35 mm), 20 665 (50 mm), 58 301 (70 mm)
Calibration field side length: 2 000 mm

Table 4-3. Test parameters.
Standard deviation of targets adjustment – test I: [0.5 0.5 0.5] mm (x, y, z axis directions)
Standard deviation of targets adjustment – tests II, III: [10.0 10.0 0.5] mm (x, y, z axis directions)
Standard deviation of targets surveying: [0.5 0.5] mm (x, y axis directions)
Standard deviation of image coordinate measuring: 0.64 µm
Number of camera views: 50
Number of tried combinations in best configuration search: 100 000
Number of simulation steps in Monte-Carlo method: 100 000


Table 4-4. Validity test results. For each nominal principal distance (35 mm, 50 mm, 70 mm), the table compares Method I (no parallelism correction, error propagation law) with Method II (no parallelism correction, Monte-Carlo), listing the parameter standard deviations [µm] and the correlation matrices [%].

Table 4-5. Results of three tests on the accuracy of the interior orientation parameters of the pinhole optical system, computed by using the method of imaging a rectangle-shaped calibration field. For each nominal principal distance (35 mm, 50 mm, 70 mm), the table lists the parameter standard deviations [µm] and the correlation matrices [%] for Test I (no parallelism correction, Monte-Carlo), Test II (no parallelism correction, Monte-Carlo) and Test III (parallelism correction, Monte-Carlo).

4.3 Conclusion

The results of the validity test presented in Table 4-4 show a good match of the two independent methods – the error propagation law and Monte-Carlo – which confirms the correctness of the given theory and, of course, of the program code. The results of the estimation of the accuracy of the pinhole OS interior orientation elements, performed by the Monte-Carlo method, are given in Table 4-5. Highly accurate estimates can be achieved by using the presented method if the targets of the calibration field (2000 mm × 2000 mm) are adjusted within a reasonable tolerance – 0.5 mm in all axis directions. The obvious tendency of decreasing accuracy with increasing principal distance is logical, because a longer principal distance leads to a decreased accuracy of the vanishing point estimates. In certain cases, where the configuration and the demanded accuracy limit the accurate adjustment of the xy position of the calibration field targets, it is possible to apply the proposed parallelism correction, which significantly increases the achieved accuracy (compare tests II and III). This is advantageous in situations where surveying the targets is much simpler than adjusting them.


5. Validation of Results Given by the Proposed Method

The lens distortion estimate given by the proposed method has to be validated by an independent test. The validation can be resolved in three different ways:
- Calibrate the lens by using a well-known reliable method for lens calibration and compare the estimates.
- Validate the results by using the German guideline VDI/VDE 2634 part 1 [VDI/VDE 2634].
- Test the calibrated lens against a projective geometry invariant.

We decided to choose the third option for the following reasons. Each of the well-known methods for lens calibration has some limitations (e.g. correlation between the estimated parameters, a priori knowledge of the mathematical model describing the lens distortion, etc.) and therefore the estimates cannot be considered as ground truth. The second option relates to the procedure of the German guideline VDI/VDE 2634 part 1, which recommends the use of seven measuring lines calibrated with a high accuracy (“uncertainty in the line dimension shall be less than one-fifth of the maximum permissible length measurement error of the tested optical measuring system”). Validating the results given by the proposed method by using the acceptance testing of the guideline VDI/VDE 2634 part 1 would probably be the best choice, because this guideline is widely accepted as a standard. Unfortunately, there is no realization of this guideline in the Czech Republic and the budget of this thesis is too low for creating a new one. For these reasons, we have decided to propose a validation procedure which is based on testing the cross-ratio invariant.

5.1 Cross-ratio Validation Test

It has to be pointed out that the cross-ratio invariant is not used here for camera calibration (unlike the method described in the “Related work” section) but for validation purposes only. We propose to use only one set of four collinear targets constituting a unique cross-ratio value, which is invariant on all images acquired with an ideal OS from different views. This premise is not valid for an aberrated OS, which distorts the positions of the target images. This leads to a variability of the cross-ratio value depending on the camera view (or, more precisely, depending on the target positions on the sensor). The amount of such variability can be quantified directly by using the cross-ratio sample variance given as a result of the cross-ratio estimation from a set of images. So this is our strategy: to estimate the cross-ratio value from a set of images acquired by the validated OS from different views and compare the sample variance estimate with the expected variance (the simulations in Section 5.3 will show that the cross-ratio sample variance is not a good parameter to test; therefore, the sample variance of the image coordinate measurement will be used instead). The main advantage of the proposed solution lies in the fact that only the amount of deviation from the invariability of the cross-ratio is tested and not the absolute value. Then, we do not need a cross-ratio bar calibrated to a high accuracy, which is expensive, especially if several bars would be needed for a wide range of optical system FOVs. The proposed cross-ratio bar prototype (Figure 5-1) is able to change its length very easily and can then be used for a large range of scales.

Figure 5-1. Left – the cross-ratio bar prototype. Right – a detailed view of the target mount, assembled from the support plates (white parts) holding the LED (rear side) and the black tube with a solid core end light optical fiber (front side). The mount can be adjusted in position by using a screw connecting the mount to the aluminum bar.

Figure 5-2. Detailed view of the target used in the cross-ratio bar and also in the main calibration field.


5.1.1 Cross-ratio Bar and Target Description

The calibration artifact, called here the “cross-ratio bar”, consists of four targets mounted on a bar in a collinear position (see Figure 5-1). The adjustment of the targets to the collinear position is made with a theodolite. The target is a small luminous point (2 mm in diameter of the luminous area) surrounded by a black background. The realization of the luminous point was accomplished by using a 25 mm long PVC solid core end light optical fiber having a black plastic isolation around it. This cable is put inside a black plastic distance tube of diameter 8 mm, having the same length as the optical fiber cable. Both sides of the fiber cable are manually brushed to a matt finish (see Figure 5-2). The targets are put inside the support plate and lit from the rear side by a white LED (see Figure 5-1). This calibration artifact fulfils the requirements for a stable realization of the cross-ratio, enabling the detection of the targets with high accuracy not only for the glass OS but also for the pinhole OS.

5.1.2 Cross-ratio Validation Test Procedure

The procedure of the proposed cross-ratio validation test is very simple. The cross-ratio bar is captured by the camera from different poses in such a way that the target images uniformly cover the sensor area (this takes approximately several tens of views). The camera poses have to be chosen in such a way that the cross-ratio bar is always perpendicular to the optical axis and at the distance for which the camera is calibrated/validated (see Figure 5-3).

Figure 5-3. Cross-ratio validation test arrangement.

5.1.3 Error Given by Middle Targets

When considering the highest accuracy, we have to take into account the fact that the two middle targets of the cross-ratio bar cannot be adjusted to the collinear position with the first and last one without a certain amount of error. The error in the direction of the camera axis is insignificant, but the error in the direction perpendicular to the camera axis and to the cross-ratio bar has to be considered. This problem was overcome by capturing the cross-ratio bar only in the horizontal or vertical direction (see Figure 5-5). Then, the dimension of the measurements is decreased from 2D to 1D by selecting only the x coordinates (in the case of the horizontal direction) or the y coordinates (in the case of the vertical direction). This way, the error in the position of the middle targets of the cross-ratio bar, which is perpendicular to the direction of the selected coordinates, is minimized. In order to be sure that this error can be neglected, it can be enumerated for each test by using the following formula:

(5.1)

where α is the angle specifying the maximum deviation from the horizontal or vertical direction, the second quantity is the erroneous distance (scaled to the image reference frame) of the middle point in the direction perpendicular to the cross-ratio bar, and e is the resulting error. The geometry of this problem is shown in Figure 5-4. Several values of e will be given later in the chapter with the experiments.

Figure 5-4. Illustration of the middle point error e due to a non-zero angle α.


5.1.4 Cross-ratio Unity Variance Estimate

The relation between the cross-ratio and the measurements is:

(5.2)

where the measurements are the image coordinates of the cross-ratio bar targets in 1D (we will use the same symbol for both directions of the cross-ratio bar – horizontal and vertical). Under the assumption that the real errors of the measured image coordinates are not correlated and have a normal distribution, the accuracy of the cross-ratio value depends on the length between the edge targets (A, B) on the image. Therefore, the cross-ratio will be estimated as a weighted arithmetic mean:

(5.3)

where the first vector contains all cross-ratio values computed from the set of m images using relation (5.2) and the second is the diagonal of the weight matrix of the cross-ratio values. In accordance with the theory of errors [Mikhail et al. 1976], we can write for the weights:

(5.4)

where the numerator is the cross-ratio variance of unity weight and the denominator is the cross-ratio variance. We will define here the unity variance as the variance of a cross-ratio of length 1 (such scaling provides higher values, which are easier to compare). Now, it will be shown that the weights are equal to the squared cross-ratio length:

(5.5)

If we expand the equation (5.4), we get:

(5.6)

where the two vectors contain the partial derivatives of the cross-ratio function for the unity cross-ratio and for the ith cross-ratio, respectively. Inasmuch as we expect the variance of the image coordinate measurement to be the same for all measurements, we can remove the measurement weight matrix from the equation:

(5.7)

Expanding the quadratic form, we get:

(5.8)

where the partial derivatives of the cross-ratio function for each image coordinate are given by:

(5.9)

(5.10)

(5.11)

(5.12)

Now, the relation between the quadratic form of the unity cross-ratio function and that of the ith cross-ratio will be found. If the first partial derivatives are taken and all coordinates are scaled (using the cross-ratio length) in order to get the partial derivatives of the ith cross-ratio function, then:

(5.13)

which leads to:

(5.14)

which gives the proof of our statement in (5.5).

Figure 5-5. Schematic image of the cross-ratio bar in the horizontal direction and perpendicular to the optical axis.

Having the weighted mean estimate, the cross-ratio sample unity variance can finally be computed:

(5.15)

where the residual vector contains the cross-ratio residuals:

(5.16)
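The estimate (5.3) and the residual-based variance (5.15) can be sketched directly, using the result (5.5) that the weight of each cross-ratio equals its squared bar-image length. The data values, the cross-ratio convention and the divisor (m − 1) below are illustrative assumptions.

```python
# Sketch of the weighted cross-ratio estimate: each image i yields a
# cross-ratio value cr[i] and the image length between the edge targets;
# by (5.5) the weight is that length squared. Data values are illustrative.

def cross_ratio_1d(xa, xb, xc, xd):
    # one common convention; the thesis formula (5.2) may differ in ordering
    return ((xc - xa) * (xd - xb)) / ((xc - xb) * (xd - xa))

images = [                       # four 1D target coordinates per image [mm]
    (0.0, 9.0, 18.0, 30.0),
    (5.0, 20.0, 35.0, 55.0),     # same configuration, scaled and shifted
    (-10.0, 2.0, 14.0, 30.0),
]
crs = [cross_ratio_1d(*img) for img in images]
lengths = [img[3] - img[0] for img in images]
weights = [l ** 2 for l in lengths]

cr_mean = sum(w * k for w, k in zip(weights, crs)) / sum(weights)
residuals = [cr_mean - k for k in crs]
m = len(images)
unity_var = sum(w * v ** 2 for w, v in zip(weights, residuals)) / (m - 1)
```

Because the three sample images are affine copies of one configuration, all cross-ratios coincide and the unity variance vanishes; real aberrated measurements yield a positive value.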

5.2 Validation of Pinhole Optical System

Up to now, the pinhole OS has been considered as an ideal OS without any aberrations (up to differences due to the physical limitations of the pinhole element, which are beyond the considered accuracies). The proposed validation test is perfectly suited to confirm this premise. The validation of the pinhole OS can also be interesting for the following reason. Any deviations from the expected values given by the projective geometry cross-ratio invariant can then be attributed only to sensor or target imperfections or to target detector limitations. Therefore, the pinhole OS can be advantageous when also used for testing those sources of error (having highly precise targets), which are not easy to estimate, usually due to high correlations with the parameters describing the lens distortion. Besides the cross-ratio unity variance characteristic, it is possible to estimate also the sample variance of the image coordinate measurement. This characteristic can be derived from relation (5.6):

(5.17)

Under the assumption that all image coordinates have the same variance, we can write:

(5.18)

which implies:

(5.19)

where the quadratic form can be simply computed by enumerating the partial derivatives given in (5.9)–(5.12). However, because the cross-ratio unity variance is derived for a cross-ratio length of 1, all values in the partial derivatives have to be scaled accordingly.
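The back-projection of the cross-ratio scatter onto the image-coordinate variance can be sketched as follows. The first-order relation var(cr) ≈ var(xy) · Σᵢ(∂cr/∂xᵢ)² stands in for (5.17)–(5.19); the point layout, noise level and the numerical evaluation of the partial derivatives are illustrative assumptions.

```python
import random

# Sketch: estimating the image-coordinate measurement SD from the observed
# cross-ratio scatter via the first-order relation
#   var(cr) ~= var(xy) * sum_i (d cr / d x_i)^2,
# with the partial derivatives evaluated numerically. Illustrative data.

def cross_ratio_1d(x):
    a, b, c, d = x
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def quad_form(x, h=1e-6):
    """Sum of squared partial derivatives of the cross-ratio function."""
    q = 0.0
    for i in range(4):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        q += ((cross_ratio_1d(xp) - cross_ratio_1d(xm)) / (2 * h)) ** 2
    return q

random.seed(7)
base = [0.0, 30.0, 60.0, 100.0]   # target coordinates [mm], illustrative
sigma_true = 0.01                 # true measurement SD [mm]

crs = [cross_ratio_1d([x + random.gauss(0, sigma_true) for x in base])
       for _ in range(20000)]
mean = sum(crs) / len(crs)
var_cr = sum((k - mean) ** 2 for k in crs) / (len(crs) - 1)

sigma_est = (var_cr / quad_form(base)) ** 0.5
# sigma_est should be close to sigma_true for small noise
```

For noise levels small relative to the target spacing, the linearization is excellent and the recovered SD matches the simulated one closely.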

5.3 Computer Simulation of Cross-ratio Validation Test

A computer simulation is presented here to verify the theoretical development given in the previous sections and to derive reference values of the cross-ratio unity variance for several simulated arrangements. The input data are given by a set of m images, where each image contains four target images corresponding to the targets of the cross-ratio bar (Figure 5-6). The image coordinates of the four targets are generated randomly using the Matlab function rand under these conditions:

• all four points are collinear in the horizontal direction,
• all points lie in the area defined by the imaging sensor size and
• the cross-ratio computed from these points is the same for all images.

The length defined by the first and last point can differ between images, provided the cross-ratio value is preserved. This simulates the real conditions, where a small change in the orientation of the camera relative to the cross-ratio bar changes the distance between the first and last target. In the next step, noise is added to each image coordinate using the Matlab function normrnd, which generates values from a normal distribution. The input data are then processed by the cross-ratio validation test program written according to the algorithms described in the previous sections. The computer simulation was developed in SW Matlab v7.9.0. Twelve simulations have been conducted in total, with different a priori values. The input parameters and results are given in Table 5-1.
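The generation of one simulated input bar can be sketched as follows. This is a Python stand-in for the Matlab rand/normrnd calls; the cross-ratio convention and the sampling ranges are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_ratio(x):
    x1, x2, x3, x4 = x
    return ((x3 - x1) / (x3 - x2)) / ((x4 - x1) / (x4 - x2))

def simulate_bar(target_cr, sensor=60.0):
    """Four collinear horizontal abscissae with a prescribed cross-ratio,
    placed randomly inside a sensor of the given size [mm]; the overall
    length varies between calls while the cross-ratio stays fixed."""
    x1 = rng.uniform(0.0, sensor / 4)
    x4 = rng.uniform(3 * sensor / 4, sensor)
    x2 = x1 + rng.uniform(0.25, 0.5) * (x4 - x1)
    r = target_cr * (x4 - x1) / (x4 - x2)    # solve cross_ratio(...) = target
    x3 = (x1 - r * x2) / (1.0 - r)
    return np.array([x1, x2, x3, x4])

def add_noise(points, sd=0.001):
    # counterpart of Matlab's normrnd: zero-mean Gaussian coordinate noise
    return points + rng.normal(0.0, sd, size=points.shape)
```

Solving the cross-ratio equation for the third point guarantees the invariant holds exactly before the noise is added, which mirrors the conditions listed above.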

Figure 5-6. An example of simulated input data (sample of five images; both axes span 0–60 mm).

Table 5-1. Simulated results of the cross-ratio validation test; sensor size: (60 × 60) mm, pixel size: 10 µm, number of images (m) = 10000. The tabulated quantities are, for each simulation number and bar length [mm], the a priori given population variance of the image coordinate measurement [µm], its sample variance estimate [µm] with the difference [%], and the cross-ratio sample unity variance.

5.4 Conclusion

The results of the several simulations of the cross-ratio validation test clearly show that the cross-ratio sample unity variance cannot be used as a reference value for the comparison. This value is evidently dependent not only on the absolute length of the cross-ratio bar image, but also on the value of the cross-ratio itself, which is something we did not expect. We believe that this could be overcome by improving the weighting function. On the other hand, the simulations show that we can use the sample variance of the image coordinate measurement instead. This value is independent of the cross-ratio validation test arrangement, which is our requirement. Therefore, we will use this characteristic.


6. Target Detector

In the experiments given in Chapter 7, the target images given by the pinhole and glass OS (see Figure 6-1) have to be detected. We decided to write our own code, which can be easily tuned to our requirements, for the following reasons: The target detection process is of primary importance in these experiments, and the unusual targets we are using (the detailed description is given in Section 5.1.1) may require special treatment. The ability to adjust the code and detector parameters is especially important for targets given by the pinhole OS, because of the long exposure times and the specific shape of the target image, which is unusual in standard photogrammetry. The PSF of the pinhole OS is driven by the diffraction phenomenon and has a well-known shape, described by the Rayleigh-Sommerfeld diffraction formula.

Figure 6-1. The visualization of target images given by the pinhole and glass optical system. The bottom part of the figure shows four images of a selected target. The first two images were acquired by the glass optical system from the distances defined by magnifications m1 (left) and m6 (right), while the next pair of images was acquired by the pinhole optical system from the same distances. The upper part of the figure shows the intensity profiles of the corresponding target images.


The detection of both kinds of target shapes, given by the glass and pinhole OS, will be tested with the known ellipse fitting and centroid methods in order to find out which of the two provides better results in terms of accuracy. Before applying the target detector, the raw images will be pre-processed in several steps, described in the following section. The whole procedure of target detection, including the pre-processing and post-processing steps, is graphically illustrated in Figure 6-6.

6.1 Image Pre-processing Steps

The first step is the conversion of the raw images to TIF format using the SW DCRAW developed by David J. Coffin [2013], with the following configuration: '-D -4 -T', where -D means "document mode without scaling (totally raw)", -4 means "write 16-bit linear instead of 8-bit gamma" and -T means "write TIFF instead of PPM". This gives us a one-channel, 16-bit, linear image without any scaling (no white balance, no darkness and saturation level, no color matrix applied etc.) in the original Bayer scheme (RGGB).

In the next step, the process called "dark frame subtraction" is performed, which minimizes the image noise (fixed pattern noise, thermal noise etc.) given primarily by the long exposure times. A dark frame is an image exposed under the same conditions as the one being repaired, but with the sensor in total darkness (totally covered lens and viewfinder). Some camera manufacturers, for example Canon, add a constant offset to the measured intensities (1024 DN for Canon images), which is automatically removed by the dark frame subtraction process.

The next processing step is the repair of so-called "hot pixels" (pixels with erroneous intensity), which is handled simply by averaging the intensity values of the neighboring pixels. The list of hot pixels is given by a self-developed script, which is part of our calibration program.

Figure 6-2. Schematic picture of the color channel separation process. This figure also shows that the dimensions of the individual channels can differ, which is caused by an odd number of rows or columns of the raw image.

Next, each color channel (red (R), green1 (G1), green2 (G2) and blue (B)) is separated into an independent sub-matrix (the resolution of such a sub-matrix is usually a quarter of the full image resolution, see Figure 6-2) and transformation matrices are attached which describe the transformation of each local coordinate system, given by each separate channel, to the global coordinate system (full image). Local and global coordinate systems are shown in Figure 6-3. The transformation matrices (using homogeneous notation) are then:

$$T_R = \begin{bmatrix} 2 & 0 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 1 \end{bmatrix}, \quad T_{G1} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & -1 \\ 0 & 0 & 1 \end{bmatrix}, \quad T_{G2} = \begin{bmatrix} 2 & 0 & -1 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T_B = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (6.1)$$

The following example shows the use of these matrices. Assume target image coordinates given in the local coordinate system of the red channel. Then, the transformation to the global system (full resolution image) is performed in this way:

$$p_{global} = T_R\, p_{local} = \begin{bmatrix} 2 & 0 & -1 \\ 0 & 2 & -1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 5 \\ 1 \end{bmatrix} \qquad (6.2)$$

This example can be easily visualized in Figure 6-3 (a). The separation process, shown in Figure 6-2, is particularly important for an OS where chromatic aberration has to be considered.
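The channel separation and the local-to-global transformation can be sketched as follows (a minimal Python illustration; only the red-channel matrix is fixed by the example plocal = [2 3] → pglobal = [3 5], so the off-diagonal terms of the remaining channels are an assumption about the RGGB mosaic orientation):

```python
import numpy as np

# Homogeneous local->global transforms for an RGGB Bayer mosaic with
# 1-indexed pixel coordinates. Only the red matrix is confirmed by the
# example p_local = [2, 3] -> p_global = [3, 5]; the offsets of the
# other channels assume a particular mosaic orientation.
TFM = {
    "R":  np.array([[2, 0, -1], [0, 2, -1], [0, 0, 1]]),
    "G1": np.array([[2, 0,  0], [0, 2, -1], [0, 0, 1]]),
    "G2": np.array([[2, 0, -1], [0, 2,  0], [0, 0, 1]]),
    "B":  np.array([[2, 0,  0], [0, 2,  0], [0, 0, 1]]),
}

def split_channels(raw):
    """Separate a raw RGGB mosaic into four quarter-resolution planes
    (planes can differ in size when the raw image has odd dimensions)."""
    return {"R": raw[0::2, 0::2], "G1": raw[0::2, 1::2],
            "G2": raw[1::2, 0::2], "B": raw[1::2, 1::2]}

def to_global(p_local, channel):
    """Map an (x, y) target coordinate from a channel's local system
    to the full-resolution image, cf. (6.2)."""
    g = TFM[channel] @ np.array([p_local[0], p_local[1], 1.0])
    return tuple(g[:2])
```

Splitting an odd-sized mosaic also reproduces the size differences between channels mentioned in the Figure 6-2 caption.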

(panels (a)–(d); example in panel (a): plocal = [2 3], pglobal = [3 5])

Figure 6-3. Local coordinate systems of the red (a), green1 (b), green2 (c) and blue (d) channels, described by the transformation matrices given in (6.1). Panel (a) also shows the example provided in (6.2) and the global coordinate system.

The next, optional, step is an image convolution with a Gaussian kernel, which acts as a simple denoising filter. This process can be repeated several times.

The last pre-processing step is frame averaging. This step is performed only when more than one frame has been captured from the same pose. Averaging several frames minimizes the noise given by various sources (more information about the various sources of noise in digital imaging can be found in [Janesick 2007]). The corresponding intensities are averaged by a simple arithmetic mean. The previous step using Gaussian convolution should be omitted if several images are available for averaging.
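The dark-frame subtraction, hot-pixel repair and frame-averaging steps can be sketched together as (a minimal Python illustration, not the thesis' actual implementation):

```python
import numpy as np

def preprocess(frames, dark, hot_pixels=()):
    """Average repeated exposures after dark-frame subtraction; each
    listed hot pixel is replaced by the mean of its 4-neighbourhood."""
    repaired = []
    for f in frames:
        img = f.astype(np.float64) - dark        # removes fixed-pattern noise
        for r, c in hot_pixels:
            neigh = [img[r + dr, c + dc]
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < img.shape[0]
                     and 0 <= c + dc < img.shape[1]]
            img[r, c] = np.mean(neigh)
        repaired.append(img)
    return np.mean(repaired, axis=0)             # simple arithmetic mean
```

Subtracting the dark frame first also removes any constant manufacturer offset (such as the 1024 DN of Canon images), exactly as described above.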

6.2 Ellipse Fitting Method

This method is based on fitting an ellipse to a chain of pixels which surrounds the target image. The procedure can be divided into two steps:

• finding the contour pixels which best represent the target position and shape and
• ellipse fitting.

The second step, ellipse fitting, is a quite straightforward operation which can hardly be improved. On the other hand, finding the contour pixels which best represent the target position and shape is crucial for an accurate detection and can be performed in various ways. A large number of articles deal with this topic; for more information, we recommend [Quellet 2009].

Figure 6-4. This figure shows a selected target image binarised at ten different levels. The last levels will be rejected due to the low number of contour pixels. Blue circles symbolize the ellipse fitting process.

We use an improved method of binarising the image with pre-defined thresholds. Our improvement is based on binarising the image several times with different threshold values ranging between the minimum and maximum intensity values (see Figure 6-4). This approach detects targets with large differences in overall light intensity much more easily. After thresholding the image, border pixels fulfilling the following requirement are searched for: a minimum/maximum length of the contour enclosing white pixels (in the case of white targets on a black background). Then, the ellipse fitting algorithm (based on minimizing the sum of squared residuals) is applied to each set of contour pixels. If the estimation of the ellipse parameters ends up with statistical characteristics (ellipse shape, reference variance, etc.) fulfilling the pre-defined tolerances, the center of the ellipse is accepted as a valid position of the target. If the image of the target does not saturate the sensor too much (which leads to high contrast) and has a high dynamic range histogram, the target will be found at several binarised levels. The final estimate of the target position is then given as a weighted arithmetic mean of all positions, where the weight is defined by the relation:

$$w_i = l_i^{\,C} \qquad (6.3)$$

where $l_i$ is the length of the contour pixels surrounding the target and $C$ is an arbitrarily chosen exponent (usually 2). This approach improves the accuracy of the estimate, as more data (more contours) are used.
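The fusion of the positions found at the individual binarisation levels can be sketched as (a minimal Python illustration of the contour-length weighting in (6.3); the function name is ours):

```python
import numpy as np

def fuse_centers(centers, contour_lengths, exponent=2):
    """Weighted arithmetic mean of the ellipse centres found at several
    binarisation levels; the weight of each level is its contour length
    raised to an arbitrarily chosen exponent (usually 2), cf. (6.3)."""
    c = np.asarray(centers, dtype=float)              # shape (levels, 2)
    w = np.asarray(contour_lengths, dtype=float) ** exponent
    return (w[:, None] * c).sum(axis=0) / w.sum()
```

With centres (1, 1) and (3, 3) found on contours of length 1 and 3, the fused centre is pulled towards the longer (more reliable) contour.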


6.3 Centroid Method

The position of the target in the centroid method is computed as a weighted arithmetic mean of the image coordinates of the pixels whose intensity comes from the target. The coordinates are weighted by the intensity values of the corresponding pixels. The formula is given by:

$$\hat{x} = \frac{1}{S}\sum_i I_i^{\,A}\, x_i, \qquad \hat{y} = \frac{1}{S}\sum_i I_i^{\,A}\, y_i, \qquad \text{where } S = \sum_i I_i^{\,A} \qquad (6.4)$$

where $(\hat{x}, \hat{y})$ are the estimated image coordinates of the target, $I_i$ is the ith intensity value, $S$ is the sum of all intensity values, $(x_i, y_i)$ are the image coordinates of the ith pixel and the exponent $A$ (usually set to 2) is arbitrarily chosen.

Although the definition is simple, the realization of the centroid detector also has to consider the presence of noise and use procedures which eliminate it as much as possible. The first step of our implementation is finding the approximate position of the target. This is performed by convolving the image with a unity circle-shaped kernel and finding the regional intensity maxima afterwards. Then, the algorithm selects a square-shaped area, defined by side length a, around the approximate position of the target. The value of a has to be chosen large enough to ensure that the pixels within the area contain the whole target image together with some dark pixels (pixels not exposed to light) around it. The dark pixels are very important, as they are used for estimating the average value of the noise $\bar{n}$. They are selected from the four corners of the square, given as the complement of the area drawn by the circle inscribed in the square (see Figure 6-5). In addition to the average value, the SD of the noise $\sigma_n$ is computed from the selection. A function of these values then defines the minimum intensity value (noise threshold) of the pixels used for computing the target position by formula (6.4). The function computing the noise threshold is:

$$t = \bar{n} + B\,\sigma_n \qquad (6.5)$$

where $B$ is a multiplication factor with arbitrarily chosen values (usually 2 or 3). More information about the centroid method for target detection can be found for example in [Shortis 1995] or [Lee 2002].
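The whole centroid step can be sketched as (a minimal Python illustration of (6.4) and (6.5); the symbol names and the simplified noise handling are our assumptions):

```python
import numpy as np

def centroid(window, exponent_a=2, factor_b=3.0):
    """Intensity-weighted centroid of a target inside a square window.
    The noise level is estimated from the window corners (pixels outside
    the inscribed circle), and only pixels above the noise threshold
    t = mean + B * SD contribute to the weighted mean, cf. (6.5)."""
    win = window.astype(np.float64)
    n = win.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    r = (n - 1) / 2.0
    dark = (xx - r) ** 2 + (yy - r) ** 2 > r ** 2    # corner (dark) pixels
    thr = win[dark].mean() + factor_b * win[dark].std()
    w = np.where(win > thr, win, 0.0) ** exponent_a  # weights I_i^A
    s = w.sum()
    return (w * xx).sum() / s, (w * yy).sum() / s
```

For a noise-free window the threshold degenerates to zero and the result is the plain intensity-weighted mean of the blob pixels.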


Figure 6-5. This figure shows the area selected around the approximate position of the target image. The red pixels are used for estimating the amount of noise present.

6.4 Post-processing Steps After Target Detection

Finally, the detected image coordinates of the target images have to be transformed from the local (sub-matrix) to the global (full resolution image) coordinate system, as described in example (6.2), using the embedded transformation matrices shown in (6.1). Then, the image coordinates of the targets have to be compared between channels and properly referenced with corresponding indexes. Targets not detected on all channels are rejected. Image coordinates of targets detected on both green channels are averaged. If chromatic aberration can be neglected, it is possible to average the image coordinates of corresponding targets from all channels.


Figure 6-6. Target detector scheme:
• PRE-PROCESSING: raw image extraction using DCRAW sw → dark frame subtraction → hot pixels repair → color sub-channels separation → image filtering → image averaging.
• ELLIPSE FITTING METHOD: image binarisation → contour pixels detection → target candidates selection → ellipse fitting → target candidates validation → weighted mean of corresponding targets.
• CENTROID METHOD: target candidates selection → target area selection → noise threshold estimation → pixels with signal selection → target position computation.
• POST-PROCESSING: transformation to global system → target referencing between color channels → image coordinates averaging.

7. Experiments

Several experiments have been conducted in order to prove the theoretical development of the proposals given in the previous chapters. All image coordinates are related to the global coordinate system (full resolution; for details see Section 6.1).

7.1 Target Detector Tests

Several tests have been performed in order to decide which method of target detection provides more accurate results and how to set its input parameters. Two methods providing the sample SD of the image coordinate measurement have been used. The first is the cross-ratio validation test proposed in Section 5.1. The second is a 2D projective transformation, described in Section 7.3. The input images to these two methods have been processed several times with different target detector configurations. The target detectors have been tested on images given by the pinhole and glass OS. All images were processed according to the procedure described in Chapter 6.

7.1.1 Pinhole Optical System

The previously proposed cross-ratio validation test, providing a reliable, uncorrelated estimate of the sample SD of the image coordinate measurement, is used here for the accuracy estimation. This characteristic is then used for comparison between the various configurations. As input data, we have used the images (corresponding to magnification m1) acquired primarily for the experiment validating the pinhole OS (given in Section 7.2). Additionally, we have performed a second test, which is based on a 2D projective transformation between two datasets given by two images of the calibration field. Both images were acquired from the distance given by magnification m1, with a little shake of the tripod between the exposures. The parameters of the pinhole OS are given in Section 7.5. The imaging parameters related to the first and second test are given in Table 7-8 and Table 7-1, respectively. The target detector configuration for each test is given in Table 7-2, while the results are given in Table 7-3. Only the green channel has been used in the processing.
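The 2D projective transformation of the second test can be estimated with a standard direct linear transform (DLT). A minimal Python sketch follows; the thesis does not prescribe this particular algorithm, and the function names are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 2D projective transformation (homography) mapping
    src targets onto dst via the direct linear transform, and return it
    together with the SD of the image-coordinate residuals."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # u*(h3.X) - h1.X = 0  and  v*(h3.X) - h2.X = 0
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)               # null-space solution
    p = np.column_stack([src, np.ones(len(src))]) @ H.T
    proj = p[:, :2] / p[:, 2:3]
    res = proj - dst
    dof = 2 * len(src) - 8                 # 8 homography parameters
    sd = np.sqrt((res ** 2).sum() / dof)
    return H, sd
```

The residual SD returned here plays the role of the sample SD of the image coordinate measurement used for comparing detector configurations.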


Table 7-1. Pinhole OS imaging parameters related to the second test: image format: raw (*.CR2); shutter speed: 4 sec; f-number: 183; ISO speed: 400.

Table 7-2. Target detector parameters. Columns: config. no., target detector type (configs 1–8: centroid, 9–16: ellipse fit.), gaussian smooth sigma σ, gaussian smooth repetitions, binarisation levels, exponent, side length a [px], multiplication factor.

Table 7-3. Target detector test results given by the cross-ratio validation test and the 2D projective transformation method (sample SDs in µm for each configuration).

7.1.2 Glass Optical System

The cross-ratio validation test cannot be used here, because the images are distorted. Therefore, we have performed only the second test, which is based on a 2D projective transformation between two datasets given by two images of the calibration field. Both images were acquired from the distance given by magnification m1, with a little shake of the tripod between the exposures. The parameters of the glass OS are given in Section 7.5. The imaging parameters are given in Table 7-4. The target detector configuration for each test is given in Table 7-5, while the results are given in Table 7-6. Only the green channel has been used in the processing.

Table 7-4. Glass OS imaging parameters: image format: raw (*.CR2); shutter speed: 1/30 sec; f-number: 16; ISO speed: 100.


Table 7-5. Target detector parameters. Columns as in Table 7-2 (configs 1–8: centroid, 9–16: ellipse fit.).

Table 7-6. Target detector test results given by the 2D projective transformation method (sample SDs in µm for each configuration).

7.1.3 Conclusion

The ellipse fitting method should not be as sensitive to noise as the centroid method, but it uses a smaller amount of data for computing the target position. This contrasting behavior was one of the reasons why we chose these particular methods for testing. Especially imaging with the pinhole OS is a largely unexplored field and had to be tested: long exposures lead to a high amount of noise, and the shape of the target differs from that given by a standard glass lens (see Figure 6-1).

In the case of the pinhole OS, the results of the cross-ratio validation test, shown in Table 7-3, show that the sample SD is not sensitive to configuration changes of either target detector type. This is probably caused by the imperfection of the targets, because the results of the second test show a good sensitivity to the target detector configuration. Therefore, the results given by the second test are in this case more reliable for drawing conclusions. Considering the second test results, the centroid method gives much better results than the ellipse fitting method. The best result of the centroid method is achieved with configuration no. 6, but we will use no. 2, because it uses only one smoothing step while its sample SD is almost the same. We think that in the case of the centroid detector, the smoothing should be minimized. The largest variation has been induced by changing exponent A (compare 3, 5 and 8). The multiplication factor B seems to have no measurable impact on the results (compare 3 and 7). The tests related to the ellipse fitting method prove the influence of the number of binarisation steps (compare 13 and 15), while exponent C shows no impact (compare 10 and 14). More smoothing yields better results, as can be seen from the comparison of 10, 11, 12 and 16. This is in accordance with our expectations. We have also tried the parameters given in 13 with the smoothing configuration given in 16, but the improvement was insignificant.

In the case of the glass OS, the centroid method also gives much better results than the ellipse fitting method. It is interesting that the results given by the glass OS seem to be almost the same as those given by the pinhole OS. Configurations no. 3, 6 and 7 give the best results; we will use no. 3. The worst result of the centroid method is given by no. 1, where no smoothing has been applied. The other results do not show a significant influence of any other parameter. The tests related to the ellipse fitting method provide the same findings as in the case of the pinhole OS tests.


7.2 Pinhole Optical System Validation

Throughout this whole work, the pinhole OS has been considered a distortion-free representation of an ideal OS. This premise is theoretically true, but there are other sources of error which should be evaluated, for example sensor manufacturing imperfections, deviations in the circularity of the pinhole element and also target detector limitations. This experiment aims to characterize the influence of those errors by validating the pinhole OS using the cross-ratio invariant test described in Section 5.1. The outcome of this test, the SD estimate of the image coordinate measurement, characterizes the influence of the above mentioned errors. The pinhole OS has been tested for each magnification m that is tested in the main experiment, the lens distortion estimation. The target detector configuration is given in Table 7-7 and the imaging parameters in Table 7-8. The parameters of the cross-ratio validation test for each magnification are given in Table 7-9 and the results in Table 7-10. All images were processed according to the procedure described in Chapter 6. The length of the cross-ratio bar instrument was set to approximately 450 mm for magnifications m1–m4 and to approximately 300 mm for magnifications m5–m6.

Table 7-7. Target detector configuration used for the pinhole OS validation test. Columns: target detector type (centroid), gaussian smooth sigma σ, gaussian smooth repetitions, side length a [px], exponent, multiplication factor.

Table 7-8. Pinhole OS imaging parameters: image format: raw (*.CR2); shutter speed: 2 sec (m1–m4), 1 sec (m5–m6); f-number: 183; ISO speed: 400 (m1–m4), 200 (m5–m6).


Table 7-9. Parameters of the cross-ratio validation test for magnifications m1–m6. Columns: magnification, target-to-sensor distance [mm], number of images (hr/vr), length diff. [%], maximum angle α [deg], e for d = 0.2 mm [µm].

Table 7-10. Results of the cross-ratio validation test for magnifications m1–m6: sample SD of the image coordinate measurement [µm] per color channel (red, green, blue and rgb).

Figure 7-1. The cross-ratio bar target locations and residual vectors for m1–m6. The scale of the residual vectors is 1:1000. (Panel SD estimates: m1 = 1.0 µm, m6 = 3.20 µm; the remaining panels carry 1.10, 1.20, 1.80 and 2.10 µm.)

7.2.1 Conclusion

The results given in Table 7-10 show higher values of the SD estimate than we expected. As stated in the section on the target detector tests, we think these higher values are caused by target imperfections. The continuous increase of the SD estimate with higher magnification supports this explanation. The target error averaged over the magnifications and SD estimates is 33 µm (i.e. 1.7 % of the 2 mm diameter of the luminous point), which is realistic, because the targets have been brushed manually and the black plastic insulation does not fit perfectly. The results show no significant deviation between the color channels or axes. The small differences are probably caused by the different amounts of light coming through the RGB filter placed in front of the sensor. The values of e given in Table 7-9, which characterize the amount of error caused by the misaligned middle targets of the cross-ratio bar, are too small to have any impact on the results. However, with more precise targets these values could become a problem and should be decreased by a more precise target alignment (d = 0.1 mm instead of 0.2 mm). As can be seen in Table 7-9, the number of vertically oriented cross-ratio bars is quite small. This is caused by our imperfect design of the bar movement in the vertical direction.


7.3 Lens Mount Repeatability Test and Standard Deviation of Image Coordinate Measurement Estimation

The mechanical stability and repeatability of the OS components play a major role in the camera calibration procedure, as they limit the achievable precision and accuracy of the system. Therefore, an experiment testing the repeatability of the lens mount is given here, in order to decide which parameters of the OS (usually the principal point position) are meaningless to calibrate for. This experiment has been integrated into the main one, which estimates the lens distortion. The test procedure is as follows: Two images of the calibration field are acquired by the tested OS, where the imaging "optics" (either the glass lens or the pinhole lens) have been remounted before each exposure. Then, the target images are detected and transformed onto each other using the Euclidean transformation (scale and translation). After the transformation, the image coordinate residuals of the corresponding targets are computed and characterized by an SD estimate. Two outputs are therefore given: the transformation parameters (scale and translation) and the SD estimate of the image coordinate measurement. The first output characterizes the repeatability of the OS, while the second characterizes the precision of the target detection.

The results of this experiment are given in Table 7-11 (pinhole OS repeatability test) and Table 7-12 (glass OS repeatability test). Based on these results, the procedure was repeated using the 2D projective transformation instead of the Euclidean one. Those results are given in Table 7-13 (pinhole OS repeatability test) and Table 7-14 (glass OS repeatability test). The sequence of images taken during the main test is P1 – P2 – G1 – G2, where the pair P2 – G2 is used for the main test, while the pairs P1 – P2 and G1 – G2 are used for testing the mount repeatability and the SD estimate of the image coordinate measurement. Because the SD estimate of the image coordinate measurement of the pinhole OS has already been estimated in the two previous experiments, we give a comparison of all estimates in Table 7-13 and Table 7-14. In the case of the pinhole OS, all color channels have been averaged during processing, while in the case of the glass OS, only the green color channel has been processed. The imaging parameters and the target detector configuration are given in the section containing the main test (7.5).
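The transformation step of the mount test can be sketched as a linear least-squares fit. The "Euclidean" transformation is taken exactly as described here, scale plus translation, and the function name is illustrative (a minimal Python sketch):

```python
import numpy as np

def fit_scale_translation(src, dst):
    """Least-squares scale q and translation (tx, ty) mapping the src
    targets onto dst, plus the SD of the coordinate residuals."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = 1.0     # x rows: q*x + tx
    A[1::2, 0] = src[:, 1]; A[1::2, 2] = 1.0     # y rows: q*y + ty
    b = dst.reshape(-1)                          # interleaved x1,y1,x2,y2,...
    (q, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    res = q * src + np.array([tx, ty]) - dst
    sd = np.sqrt((res ** 2).sum() / (2 * n - 3)) # 3 estimated parameters
    return q, (tx, ty), sd
```

Because the model is linear in all three parameters, a single least-squares solve yields both outputs of the test: the transformation parameters (repeatability) and the residual SD (target detection precision).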


Table 7-11. The pinhole lens mount test using the Euclidean transformation, magnifications m1–m6. All values are in micrometers.

Figure 7-2. This figure shows the residual vectors given by the Euclidean transformation of the datasets provided by the pinhole OS, for m1–m6. The scale of the residual vectors is 1:10000. (Panel SD estimates: 0.076, 0.110, 0.077, 0.052, 0.120 and 0.110 µm.)

Table 7-12. The glass lens mount test using the Euclidean transformation, magnifications m1–m6. All values are in micrometers.

Figure 7-3. This figure shows the residual vectors given by the Euclidean transformation of the datasets provided by the glass OS, for m1–m6. The scale of the residual vectors is 1:10000. (Panel SD estimates: 0.052, 0.130, 0.078, 0.058, 0.120 and 0.082 µm.)

Table 7-13. The pinhole lens mount test using the 2D projective transformation, magnifications m1–m6: I – the results of the target detector test, II – the results of the mount test (Euclidean transformation), III – the results of the mount test (2D projective transformation). All values are in micrometers.

Figure 7-4. This figure shows the residual vectors given by the 2D projective transformation of the datasets provided by the pinhole OS, for m1–m6. The scale of the residual vectors is 1:10000. (Panel SD estimates: 0.075, 0.070, 0.077, 0.054, 0.050 and 0.049 µm.)

Table 7-14. The glass lens mount test using the 2D projective transformation, magnifications m1–m6: I – the results of the target detector test, II – the results of the mount test (Euclidean transformation), III – the results of the mount test (2D projective transformation). All values are in micrometers.

Figure 7-5. This figure shows the residual vectors given by the 2D projective transformation of the datasets provided by the glass OS, for m1–m6. The scale of the residual vectors is 1:10000. (Panel SD estimates: 0.048, 0.048, 0.065, 0.041, 0.054 and 0.055 µm.)

7.3.1

Conclusion

The results given in Table 7-11 and Table 7-12 show that the repeatability of both the pinhole and the glass lens mount is poor, and it is therefore meaningless (in the following experiments) to attempt to calibrate also the principal point position and the principal distance of the glass OS. Although the steel mount component used in the pinhole lens is exactly the same type as the one used for the glass lens, the repeatability in the pinhole lens mount test is better than in the glass lens mount test. We think that this is caused by the movement of the glass elements inside the lens assembly during the change. The residual vectors in Figure 7-2 clearly show the presence of an unmodeled rotation in the tests for magnifications m2, m4 and m6. This is probably caused by an imperfect fixation of the tripod, or of the connection between the tripod and the camera body. Therefore, in order to model this kind of error as well, we additionally performed the same test using the 2D projective transformation instead of the Euclidean one. The SD estimates resulting from the 2D projective transformation, given in Table 7-13, confirm the presence of rotation in the tests for magnifications m2, m4 and m6: they decreased to values comparable with the remaining magnifications. The SD estimates given in Table 7-13 are slightly smaller than the SD estimate given by the previous target detector test. This is probably because the corresponding targets of the two datasets were much closer to each other than during the target detector test. Figure 7-3 shows an interesting pattern of the residual vectors given by the Euclidean transformation (for magnifications m2, m3 and m5). We think that this effect points to a typical pattern of decentering distortion caused by the movement of the glass elements inside the lens assembly; the influence of the lens change on the stability of the glass elements can thus be seen directly. With the 2D projective transformation, the decentering effect is no longer visible for magnifications m2 and m5 (see Figure 7-5), but it still remains for magnification m3. This is also supported by the corresponding SD estimate in Table 7-14, which remains quite high compared to the others. The SD estimates given in Table 7-14 are approximately half of those given by the target detector test. This probably has two reasons: the first is the same as in the case of the pinhole mount test (target closeness); the second could be the movement of the glass elements during tripod shaking in the previous target detector test. Finally, we have to point out that the presented findings are, as far as we know, not correlated with the magnification and would probably appear even if this mount test were repeated several times using a single pose (magnification) only.
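The switch from the Euclidean model to a 2D projective transformation works because an image-plane rotation is not representable by scale plus translation, but is a special case of a homography. The following sketch (an illustration, not the thesis implementation) estimates a 2D projective transformation by the direct linear transform from target correspondences and shows that a small unmodeled rotation is fully absorbed:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 2D projective transformation (homography) by the direct
    linear transform from point correspondences; src, dst are (N, 2)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# A small in-plane rotation cannot be absorbed by scale + translation,
# but it is a special case of a homography, so the residuals vanish.
rng = np.random.default_rng(0)
src = rng.uniform(0, 5000, (40, 2))            # synthetic target coordinates [px]
theta = np.deg2rad(0.05)                       # small unmodeled rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([3.0, -2.0])
res = apply_homography(fit_homography(src, dst), src) - dst
print(np.abs(res).max() < 1e-3)                # True: rotation fully absorbed
```

With noisy real measurements the rotation would instead show up as the reduction of the SD estimate reported in Table 7-13.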


7.4 Chromatic Aberration

The procedures given in the previous section can also be used to independently estimate and visualize the chromatic aberration between the color channels by using the Euclidean transformation. We decided to test the image given by the glass OS corresponding to magnification m3 (this image contains all targets of the calibration field). The comparison of the color channels (red v. green, red v. blue, green v. blue and green1 v. green2) without any transformation is visualized by residual vectors connecting the corresponding targets in Figure 7-6. Results given by the Euclidean transformation are shown in Table 7-15 and Figure 7-7. The comparison of the two green color channels was performed in order to check their independence.
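The "Euclidean" transformation used throughout this chapter is parameterized by a common scale λ and translations tx, ty (see Table 7-24). Since the model is linear in these parameters, it can be fitted in closed form by least squares. The sketch below (illustrative only, with synthetic coordinates) aligns one color channel's target coordinates to another's, leaving any lateral chromatic aberration in the residuals:

```python
import numpy as np

def fit_scale_translation(src, dst):
    """Least-squares fit of dst ~ lam * src + t with a common scale lam and
    translation t = (tx, ty); src and dst are (N, 2) target coordinates."""
    n = len(src)
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = 1.0    # x-equations: lam*x + tx
    A[1::2, 0] = src[:, 1]; A[1::2, 2] = 1.0    # y-equations: lam*y + ty
    p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    lam, tx, ty = p
    res = dst - (lam * src + np.array([tx, ty]))
    return lam, (tx, ty), res

# Synthetic check: a "red" grid related to the "green" grid by a tiny
# scale difference and a shift, mimicking lateral chromatic aberration.
g = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2) * 1000
r = 1.0002 * g + np.array([1.5, -0.8])
lam, t, res = fit_scale_translation(g, r)
print(round(lam, 6))                            # 1.0002, recovered exactly
```

With real data the residuals do not vanish; their systematic part is the chromatic aberration pattern shown in Figure 7-7.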

[Figure: four residual-vector plots, panels (a)–(d); axes in pixels, −500–4000 by 0–6000.]

Figure 7-6. Visualization of the chromatic aberration effect given by the color channel comparison. a – R v. G, b – R v. B, c – G v. B, d – G1 v. G2. The residual vector scale is: a, b, c – 1:300, d – 1:10000.


Table 7-15. The chromatic aberration measure: parameters of the Euclidean transformation performed between color channels. All values are in micrometers.

[Table body: one row per channel pair (R v. G, R v. B, G v. B, G1 v. G2).]

[Figure: four residual-vector plots, panels (a)–(d), with SD estimates: (a) R v. G = 0.550 µm, (b) R v. B = 1.600 µm, (c) G v. B = 1.100 µm, (d) G1 v. G2 = 0.058 µm; axes in pixels.]

Figure 7-7. Visualization of the chromatic aberration effect given by the color channel comparison after the Euclidean transformation. a – R v. G, b – R v. B, c – G v. B, d – G1 v. G2. The residual vector scale is: a, b, c – 1:1000, d – 1:10000.

7.4.1 Conclusion

The parameters of the Euclidean transformation given in Table 7-15 indicate differences in the principal point position between the color channels. The comparison of the two green channels shows no systematic tendency. The SD estimate given by the comparison of those two channels can be used as a measure of the target detector precision; the value obtained here is in accordance with the previous findings.

7.5 Glass Lens Calibration Experiment

7.5.1 Calibration Field

The calibration field (see Figure 7-8) consists of a solid wooden desk (800 mm × 1200 mm), painted matte black on the front side. The desk is drilled at regularly spaced intervals (60 mm), constituting 240 holes. On the front side of the desk, each drilled hole holds a target, described in Section 5.1.1, while from the rear side the targets are lit by white LEDs connected in a chain and powered from a standard mains outlet.

Figure 7-8. The arrangement of the lens calibration test, consisting of a stabilized tripod holding the optical system (camera body with lens) and the calibration field prototype.

7.5.2 Description of the Experiment Procedures

The proposed method for measuring lens distortion is tested in this experiment. The procedure, arrangement, instrumentation and theoretical development are given in Chapters 2 and 3. The lens distortion is measured for six magnifications in order to see the progress of the results. For practical reasons, the procedure of this experiment is extended by incorporating the mount repeatability test and both cross-ratio validation tests (pinhole OS validation and glass OS validation). The procedure is then as follows: The first image of the calibration field is acquired with the pinhole OS. Then the pinhole lens is remounted and the second image is acquired (pinhole lens mount repeatability test procedure). Then the pinhole lens is interchanged with the glass lens and the third image is acquired (glass lens calibration procedure). A fourth image is taken after remounting the glass lens (glass lens mount repeatability test procedure). The arrangement of the lens calibration test is shown in Figure 7-8. Then the calibration field is interchanged with the cross-ratio bar instrument (the platform holding the cross-ratio bar is made in such a way that the targets are at exactly the same distance from the camera as the targets of the calibration field). The bar is captured first in the horizontal direction and then in the vertical direction. After replacing the calibration field, several images of the cross-ratio bar are acquired following the rules given in Section 5.1.2 (cross-ratio validation test procedure). Then the glass lens is again interchanged with the pinhole lens and the cross-ratio validation test procedure is repeated. It should be pointed out that the cross-ratio bar target images should cover exactly that part of the sensor of the glass OS that is to be calibrated: no more and no less. The arrangement of the cross-ratio validation test is shown in Figure 7-9. The extended procedure described above is then repeated for each magnification, after changing the pose of the tripod holding the camera.

Figure 7-9. The arrangement of the cross-ratio validation test, where the support stand holding the calibration field is used for moving the cross-ratio bar.

The specifications of the instruments used (camera body, pinhole lens and the calibrated glass lens) are given in Table 7-17, Table 7-18 and Table 7-19. Imaging parameters are given in Table 7-20 and Table 7-21. The specifications of the experiment (calibrated magnifications, target-to-sensor distance and image sequence ordering) are given in Table 7-22. All images were processed with the procedure described in Chapter 6, using the target detector parameters given in Table 7-16. The length of the cross-ratio bar instrument was set to approximately 450 mm for magnifications m1–m4 and to approximately 300 mm for magnifications m5–m6. The glass lens was focused to a distance of 1140 mm and the focusing ring fixed with tape.

Table 7-16. Target detector parameters.

  parameter              pinhole OS   glass OS
  target detector type   centroid     centroid
  smoothing              gaussian     gaussian

[Remaining rows: smooth sigma σ, repetitions, side length a [px], exponent, multiplication factor.]

Table 7-17. Digital still camera (DSC) body specifications.

  DSC camera name                                      Canon EOS 5D Mark II
  serial number                                        1431022862
  pixel size                                           6.4 µm
  number of total pixels                               5792 × 3804 px
  number of effective pixels (including ring pixels)   5634 × 3753 px
  number of recorded pixels                            5616 × 3744 px
  Color Filter Array (CFA)                             RGGB
  sensor format                                        35 mm full-frame (approximately 36 × 24 mm)

Table 7-18. Pinhole lens specifications.

  pinhole manufacturer             Edmund Optics
  approximate principal distance   55 mm
  pinhole diameter                 300 µm
  f-number                         183
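The tabulated f-number follows from the other two rows: for a pinhole, N = c/D (principal distance divided by pinhole diameter). A quick consistency check:

```python
# Pinhole f-number N = c / D: principal distance over pinhole diameter.
c_mm, d_mm = 55.0, 0.300          # 55 mm principal distance, 300 µm pinhole
print(round(c_mm / d_mm))         # 183, matching the tabulated value
```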

Table 7-19. Glass lens specifications.

  lens name                                 Canon EF 40 mm F2.8 STM
  nominal focal length                      40 mm
  focused to distance (target-to-sensor)    1140 mm
  FOV: diagonal                             56.8° (full frame)
  focus adjustment                          Linear Stepper Motor
  lens construction                         6 elements / 4 groups
  number of diaphragm blades                7
  aperture range                            f/2.8–22
  weight                                    130 g
  year introduced                           2012


Table 7-20. Pinhole OS imaging parameters related to the glass lens calibration experiment (m1–m6).

  image format    raw (*.CR2)
  shutter speed   4 sec
  f-number        183
  ISO speed       400

Table 7-21. Glass lens OS imaging parameters related to the glass lens calibration experiment (m1–m6).

  image format    raw (*.CR2)
  shutter speed   1/30 sec
  f-number        16
  ISO speed       100

Table 7-22. Experiment specifications (the magnification value [ ] and target-to-sensor distance [mm] columns are not reproduced here).

  magnification   image sequence ordering
  m1              P1, P2, G1, G2, 35×Ghor, 10×Gver, 35×Phor, 9×Pver
  m2              P1, P2, G1, G2, 30×Ghor, 8×Gver, 30×Phor, 8×Pver
  m3              P1, P2, G1, G2, 30×Ghor, 7×Gver, 30×Phor, 7×Pver
  m4              P1, P2, G1, G2, 25×Ghor, 9×Gver, 25×Phor, 9×Pver
  m5              P1, P2, G1, G2, 35×Ghor, 7×Gver, 35×Phor, 7×Pver
  m6              P1, P2, G1, G2, 21×Ghor, 6×Gver, 21×Phor, 6×Pver

7.5.3 Description of Results Given in Tables and Figures

The results of this experiment are shown in several tables and graphs described hereinafter:

- An example of the computed glass lens distortion values for magnification m6, together with a sample SD, is given in Table 7-23. The distortion values (misclosures of the condition equations, see Appendix C) are directly provided by the Euclidean transformation of the set of target images given by the pinhole OS onto the set of target images given by the glass OS.
- Visualized distortion values, provided by the Euclidean transformation of the pinhole and glass OS datasets, are given in Figure 7-10 for each magnification.
- Visualized distortion values, provided by the 2D projective transformation of the pinhole and glass OS datasets, are given in Figure 7-11 for each magnification. (The reason for the additional use of the 2D projective transformation is given in the conclusion.)
- Figure 7-12 shows the residual vectors derived by applying the Brown polynomial model correction to the results given by the Euclidean transformation.
- Figure 7-13 shows the residual vectors derived by applying the Brown polynomial model correction to the results given by the 2D projective transformation.
- Table 7-24 shows the results of the lens calibration based on the Euclidean transformation, followed by the Brown polynomial parameter estimate.
- Table 7-25 shows the results of the lens calibration based on the 2D projective transformation, followed by the Brown polynomial parameter estimate.
- Figure 7-14 compares the progress of the radial component of the distortion given by the Brown polynomial for each color channel, corresponding to m1.
- Figure 7-15 compares the progress of the radial component of the distortion given by the Brown polynomial for each magnification, using only the green channel.
- Figure 7-16 is a hypsometry type of visualization of the absolute values of the distortion for each magnification (except m6).

The data acquired in this experiment were processed according to the theoretical development given in Chapter 3. A brief description of the main procedure is given here: The image coordinates of the target images given by the pinhole OS were transformed onto the corresponding target images given by the glass OS by using the Euclidean transformation (scale λ and translations tx, ty). The vector differences between corresponding target images after the transformation can be attributed directly to the lens distortion and were therefore stored as the result of the lens calibration process. In order to visually show the progress of the radial component of the distortion (neglecting local deformations, however) and to estimate the residuals given by applying the Brown polynomial correction, we used the tabulated distortion values for the estimation of the parameters of the Brown polynomial function. The estimation was performed using the least-squares estimator given in Appendix A; the functions, partial derivatives and other details of the adjustment can be found in Appendix E. The hypsometry images were generated by interpolating the distortion values onto a uniform grid given by the sensor size. The interpolation of the scattered dataset given by the tabulated distortion values (the target images are not projected in a perfect grid, due to the distortion and to minor errors in perpendicularity to the calibration field) was performed using the function TriScatteredInterp provided by Matlab.


Table 7-23. Computed lens distortion values (by using the Euclidean transformation) corresponding to m6, with the sample SD. Subscripts R, G, B denote the color channels. Coordinates are given in mm; distortion values and SDs in µm.

[Table body: target coordinates [mm] with per-channel distortion values and SDs [µm].]

[Figure: six distortion-vector plots (panels m1–m6) with SD estimates: m1 = 9.3 µm, m2 = 9.9 µm, m3 = 9.0 µm, m4 = 8.1 µm, m5 = 8.7 µm, m6 = 9.4 µm; axes in pixels.]

Figure 7-10. Illustration of the distortion progress (for green channel) given by using Euclidean transformation. The scale of the distortion vectors is 1:50.


[Figure: six distortion-vector plots (panels m1–m6) with SD estimates: m1 = 9.0 µm, m2 = 9.6 µm, m3 = 8.8 µm, m4 = 7.9 µm, m5 = 8.4 µm, m6 = 9.3 µm; axes in pixels.]

Figure 7-11. Illustration of the distortion progress (for green channel) given by using 2D projective transformation. The scale of the distortion vectors is 1:50.


[Figure: six residual-vector plots (panels m1–m6) with SD estimates: m1 = 0.42 µm, m2 = 0.54 µm, m3 = 0.60 µm, m4 = 0.38 µm, m5 = 0.46 µm, m6 = 0.60 µm; axes in pixels.]

Figure 7-12. Illustration of the residual vectors (for green channel) given by applying Brown polynomial correction after Euclidean transformation. The scale of residual vectors is 1:1000.


[Figure: six residual-vector plots (panels m1–m6) with SD estimates: m1 = 2.2 µm, m2 = 2.0 µm, m3 = 2.0 µm, m4 = 1.7 µm, m5 = 1.8 µm, m6 = 1.7 µm; axes in pixels.]

Figure 7-13. Illustration of the residual vectors (for green channel) given by applying Brown polynomial correction after 2D projective transformation. The scale of residual vectors is 1:1000.


Table 7-24. Results of the lens calibration by using the Euclidean transformation, followed by estimation of the Brown polynomial parameters (A0, A1, A2, P1, P2). sIxy denotes the SD estimate of the residual vectors given by the Euclidean transformation only; sIIxy denotes the SD estimate of the residual vectors after applying the Brown polynomial correction. The estimated parameters of the Euclidean transformation (λ, tx, ty) are not shown, because their values are meaningless in this experiment. Brown polynomial parameters were computed only for the green channel (except m1).

[Table body: rows m1–m6 for the red, green and blue channels; columns sIxy [µm], A0 [ ], A1 [mm⁻²], A2 [mm⁻⁴], P1 [mm⁻¹], P2 [mm⁻¹], sIIxy [µm].]

Table 7-25. Results of the lens calibration by using the 2D projective transformation, followed by estimation of the Brown polynomial parameters (A0, A1, A2, P1, P2). sIxy denotes the SD estimate of the residual vectors given by the 2D projective transformation only; sIIxy denotes the SD estimate of the residual vectors after applying the Brown polynomial correction. The estimated parameters of the 2D projective transformation (hii) are not shown, because their values are meaningless in this experiment. Brown polynomial parameters were computed only for the green channel (except m1).

[Table body: rows m1–m6 for the red, green and blue channels; columns sIxy [µm], A0 [ ], A1 [mm⁻²], A2 [mm⁻⁴], P1 [mm⁻¹], P2 [mm⁻¹], sIIxy [µm].]

[Figure: radial distortion [µm] (−20 to 40) versus radial distance [mm] (0–16) for the red, green and blue channels, with the maximum calibrated distance marked.]

Figure 7-14. Radial distortion progress (magnification m1) in each color channel, given by the Brown polynomial parameters estimated after the Euclidean transformation.

[Figure: radial distortion [µm] (−30 to 50) versus radial distance [mm] (0–16) for magnifications m1–m6, with the maximum calibrated distance marked.]

Figure 7-15. Radial distortion progress for each magnification (for the green channel), given by the Brown polynomial parameters estimated after the Euclidean transformation.


[Figure: five hypsometric maps of the absolute distortion values, panels m1–m5.]

Figure 7-16. Hypsometry form of visualization of the absolute values of the distortion for magnifications m1–m5 (for the green channel).


7.5.4 Conclusion

All results provided by this experiment are in accordance with our expectations. In addition to the Euclidean transformation process described in Section 3.3, we also tested the 2D projective transformation in order to suppress the effect of tripod instability detected in the camera mount repeatability test given in Section 7.3. Furthermore, we estimated the Brown polynomial parameters for these reasons:

- to visualize the progress of the radial component of the distortion;
- to estimate the residual vectors given by incorporating the Brown polynomials.

The approximate value of the principal point position of the glass OS was computed together with the Brown polynomial parameters. The residual vectors in Figure 7-12 indicate an unmodeled systematic effect, which prevents lower SD estimates. This systematic effect is probably caused either by the fact that the two transformations (Euclidean and Brown polynomials) were processed separately, or by the fact that the Brown polynomials are not sufficient to model the lens distortion of this specific glass lens. The same conclusion can be made when using the 2D projective transformation (see Figure 7-13), where the best SD estimate is much higher: 1.7 µm. The maximum radial distance for which the calibration values are valid is 15 mm, although the maximum value given by the sensor dimensions is approximately 21.6 mm. This is caused by the fact that the glass lens focal length is smaller than the pinhole OS principal distance. Figure 7-16 shows the hypsometry form of visualization of the absolute values of the distortion. Here, the distortion values are given directly by interpolation of the tabulated values, not influenced by the Brown polynomials, so the possible presence of local deformations can be evaluated. However, in this particular experiment, the local deformations visible in the dark blue area are more probably caused by outliers in the dataset than by local deformations of the glass lens or camera sensor.
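The 15 mm limit can be checked with rough back-of-the-envelope arithmetic (an illustrative scale argument only): the glass lens (nominal focal length 40 mm) images the calibration field at a smaller scale than the 55 mm pinhole OS, so targets that fill the pinhole image cover only the central part of the glass-lens image.

```python
import math

half_diag = math.hypot(36 / 2, 24 / 2)     # full-frame half-diagonal [mm]
print(round(half_diag, 1))                 # 21.6: the "approximately 21.6 mm"
print(round(half_diag * 40 / 55, 1))       # 15.7: consistent with the 15 mm limit
```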

7.6 Validation of the Glass Lens Calibration Results

Validation of the lens calibration process was realized by using the cross-ratio validation test described in Section 5.1. The calibrated lens was validated for each magnification m1–m6. The validation procedure was incorporated into the main test in such a way that the glass lens was not interchanged between acquiring the calibration and the validation images. This is important, because changing the lens moves the principal point and most probably also the glass elements inside the lens assembly. The target detector parameters and the specifications of the validated glass OS and the cross-ratio bar are given in the previous section. The imaging parameters are given in Table 7-26. Parameters of the cross-ratio validation test for each magnification are given in Table 7-27 and the results in Table 7-28. The image coordinates of the recorded targets were corrected before processing the test in four different ways, in order to compare the different approaches:

- No correction applied (I).
- Correction using the parameters given by the Euclidean transformation followed by the Brown polynomial transformation (II).
- Correction using the tabulated values given by the 2D projective transformation (III).
- Correction using the tabulated values given by the Euclidean transformation (IV).

The interpolation of the tabulated distortion values was performed using the Matlab function TriScatteredInterp.
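Matlab's TriScatteredInterp performs triangulation-based linear interpolation of scattered samples; scipy.interpolate.griddata is a common Python analogue. As a dependency-free illustration of the idea (a stand-in, not the method actually used), the sketch below looks up a correction at an arbitrary image point from tabulated distortion vectors by inverse-distance weighting:

```python
import numpy as np

def idw(pts, vals, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered samples.
    pts: (N, 2) target positions, vals: (N, 2) tabulated distortion
    vectors, query: (M, 2) points at which a correction is needed."""
    d = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ vals

# Four tabulated distortion vectors at the corners of a cell; the query
# point at the centre receives the mean correction.
pts  = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vals = np.array([[0.1, 0.0], [0.3, 0.0], [0.1, 0.2], [0.3, 0.2]])
corr = idw(pts, vals, np.array([[5.0, 5.0]]))
print(corr)  # approximately [[0.2, 0.1]]
```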

Table 7-26. Glass OS imaging parameters.

  image format    raw (*.CR2)
  shutter speed   1/125 sec (m1–m6)
  f-number        16
  ISO speed       100

Table 7-27. Parameters of the cross-ratio validation test.

[Table body: rows m1–m6; columns target-to-sensor distance [mm], number of images (hor/ver), length diff. [%], maximum angle α [deg], e for d = 0.2 mm [µm].]

Table 7-28. Results of the cross-ratio validation test. I – no correction applied, II – Euclidean + Brown, III – 2D projective (interpolation), IV – Euclidean (interpolation). All values are in micrometers. Values for the second type of correction were computed only for the green channel, except for m1.

[Table body: rows m1–m6; SD estimates for corrections I–IV in the red, green and blue channels.]

7.6.1 Conclusion

The results given in Table 7-28 show higher values of the SD estimate than we expected. As stated in the section related to the target detector test, we think that these higher values are caused by target imperfections. The average target error over the magnifications, expressed as an SD estimate, is 30 µm (using the green channel), which corresponds to the 33 µm given in the pinhole OS validation test in Section 7.2. The values of e given in Table 7-27, which characterize the amount of error caused by the misaligned middle targets of the cross-ratio bar, are too small to have any impact on the results. However, with more precise targets these values could become a problem and should be decreased by a more precise target alignment (d = 0.1 mm instead of 0.2 mm). The results show no significant deviation between the different correction approaches; the probably poor precision of the targets, however, prevents a reliable comparison. The only significant difference between the resulting SD estimates can be found between the uncorrected and the corrected estimates. This proves the validity of the proposed method for measuring lens distortion.
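The test rests on the projective invariance of the cross-ratio of four collinear points: an ideal (distortion-free) perspective projection preserves it, while residual lens distortion breaks it. A minimal numeric illustration:

```python
import numpy as np

def cross_ratio(x):
    """Cross-ratio of four collinear points given by 1D coordinates x."""
    a, b, c, d = x
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

x = np.array([0.0, 1.0, 3.0, 7.0])            # four collinear bar targets

def proj(x, p=2.0, q=1.0, r=0.1, s=1.5):
    """An arbitrary 1D projective map x -> (px + q) / (rx + s)."""
    return (p * x + q) / (r * x + s)

# Invariant under the projective map ...
print(np.isclose(cross_ratio(x), cross_ratio(proj(x))))           # True
# ... but not under a distortion-like nonlinear perturbation:
print(np.isclose(cross_ratio(x), cross_ratio(x + 0.01 * x**3)))   # False
```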


[Figure: six residual-vector plots (panels m1–m6) with SD estimates: m1 = 0.85 µm, m2 = 0.96 µm, m3 = 1.10 µm, m4 = 1.50 µm, m5 = 2.10 µm, m6 = 2.80 µm; axes in pixels, 0–5000.]

Figure 7-17. Cross-ratio bar target locations and residual vectors for m1–m6. The scale of the residual vectors is 1:1000. All vertical measurements corresponding to magnification m6 had to be removed because they were outside the calibrated area; therefore no residual vectors in the vertical direction are shown for m6.


8. Conclusion and Future Work

A new method for measuring lens distortion has been proposed and validated. In addition, we proposed a new method for lens distortion validation: the cross-ratio validation test. However, the imperfections of the targets of the cross-ratio bar prevented a full assessment of the potential of this test, so the validation of the lens distortion estimate was limited; more precise targets should be made and the cross-ratio validation test evaluated again. The proposed method for measuring lens distortion is advantageous in that no additional parameters that could influence the reliability of the estimation have to be considered. Those additional parameters are usually the elements of EO, which in certain camera component configurations correlate highly with the lens distortion characteristics. Furthermore, our approach, based on measuring relative values given by the comparison of two images, does not require the absolute positions of the calibration field targets, which removes the additional sources of error given by target surveying. In particular, the lens distortion variation induced by a change in magnification or aperture can be easily measured. Our approach directly provides the deviations of corresponding target images given by the glass OS and the ideal OS; the method is therefore not limited by a pre-defined functional model describing the distortion progress, and the modeling of the distortion can be done a posteriori, using the tabulated distortion values. The main drawback of the proposed method is the requirement of changing the lens, which in most cases leads to a significant change of the IO elements, as was also shown in the experimental chapters. Another significant limitation lies in the fact that a wider FOV cannot be calibrated: the pinhole OS is not able to refract light, and therefore the smallest possible principal distance of such an OS (or, more intuitively, the largest FOV) is limited by the flange focal distance of the camera body.
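As a numeric illustration of this limit (assuming the flange focal distance of the Canon EF mount, approximately 44 mm, and the full-frame sensor used here):

```python
import math

def diag_fov_deg(principal_distance_mm, sensor_w=36.0, sensor_h=24.0):
    """Diagonal field of view of a pinhole OS over a full-frame sensor."""
    half_diag = math.hypot(sensor_w / 2, sensor_h / 2)
    return 2 * math.degrees(math.atan(half_diag / principal_distance_mm))

print(round(diag_fov_deg(55.0), 1))   # 42.9: the 55 mm pinhole OS used here
print(round(diag_fov_deg(44.0), 1))   # 52.4: even at the flange distance,
                                      # still below the glass lens's 56.8 deg
```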


Bibliography

BRÄUER-BURCHARDT, C., M. HEINZE, C. MUNKELT, P. KÜHMSTEDT AND G. NOTNI (2006). Distance Dependent Lens Distortion Variation in 3D Measuring Systems Using Fringe Projection. In: Proceedings of the British Machine Vision Conference, pp. 34.1–34.10. BMVA Press.

BROWN, D. C. (1956). The Simultaneous Determination of Lens Distortion of a Photogrammetric Camera. AF Missile Test Center Technical Report No. 56-20, Patrick AFB, Florida, USA.

BROWN, D. C. (1966). Decentering Distortion of Lenses. In: Photogrammetric Engineering 32 (3), pp. 444–462.

BROWN, D. C. (1968). Advanced Methods for the Calibration of Metric Cameras. Final Report, Part 1, under Contract DA-44-009-AMC-1457(X) to U.S. Army Engineering Topographic Laboratories, Fort Belvoir, Va., USA.

BROWN, D. C. (1971). Close-Range Camera Calibration. In: Photogrammetric Engineering 37 (8), pp. 855–866.

CLARKE, T. A. AND J. F. FRYER (1998). The development of camera calibration methods and models. In: Photogrammetric Record, 16 (91), pp. 51–66.

COFFIN, D. J. (2013). DCRAW software. http://www.cybercom.net/~dcoffin/dcraw/

CRAMER, M. (2004). EuroSDR network on Digital Camera Calibration. Final Report – Phase I. Institute for Photogrammetry (ifp), University of Stuttgart.

EISENHART, CH. (1963). Realistic Evaluation of the Precision and Accuracy of Instrument Calibration Systems. Journal of Research of the National Bureau of Standards – C. Engineering and Instrumentation, Vol. 67C, No. 2.

FRASER, C. S. (1984). Network Design Considerations for Non-Topographic Photogrammetry. In: Photogrammetric Engineering and Remote Sensing, 50 (8), pp. 1115–1126.

FRASER, C. S. AND M. R. SHORTIS (1992). Variation of Distortion within the Photographic Field. In: Photogrammetric Engineering and Remote Sensing, 58 (6), pp. 851–855.

FRASER, C. S. (1997). Digital camera self-calibration. In: ISPRS Journal of Photogrammetry and Remote Sensing 52, pp. 149–159.

FRITZ, L. W. AND H. H. SCHMID (1974). Stellar Calibration of the Orbigon Lens. In: Photogrammetric Engineering 40 (1), pp. 101–115.

GENNERY, D. B. (2002). Generalized Camera Calibration Including Fish-Eye Lenses. In: International Journal of Computer Vision, vol. 68, pp. 239–266.

GRUEN, A. (2001). Calibration and Orientation of Cameras in Computer Vision. Springer Series in Information Sciences (Book 34). Springer.


JANESICK, J. R. (2007). Photon transfer DN →λ. Published by SPIE, Bellingham, Washington, USA LEE, S. (2002). Pointing Accuracy Improvement Using Model-Based Noise Reduction Method. In: Proceeding SPIE 4635, Free-Space Laser Communication Technologies XIV, 65. LUHMANN, S. (2007). Close Range Photogrammetry: Principles, Techniques and Applications. Published by Wiley. MAGILL, A. A. (1954). Variation in Distortion with Magnification. Journal of Research of the National Bureau of Standards, Vol. 54, No. 3, March 1955, pp. 135–142, Research Paper 2574 MIKHAIL, E. M. AND F. ACKERMANN (1976). Observations and Least Squares. Published by IEP–A DunDonnelley, 666 Fifth Avenue, New York, USA MIKS, A. AND J. NOVAK (2012). Dependence of camera lens induced radial distortion and circle of confusion on object position, Optics and Laser Technology, Volume 44, Issue 4, pp. 1043–1049. NAGELE, P. (2003). Misuse of standard error of the mean (SEM) when reporting variability of a sample. A critical evaluation of four anaesthesia journals, British Journal of Anaesthesia, Volume 90, Issue 4, pp. 514–516. NEWBY, PAUL R. T. (2012). Photogrammetric Terminology: Second Edition, The Photogrammetric Record, Volume 27, Issue 139, pp. 360–386. PAJDLA, T. AND M. URBAN. (1999). Camera Calibration from Bundles of Parallel Lines, CMP Technical Report, Center for Machine Perception, Department of Cybernetics, Czech Technical University in Prague. POLLEFEYS, M., R. KOCH AND L. VAN GOOL. (1999). Self-Calibration and Metric Reconstruction Inspite of Varying and Unknown Intrinsic Camera Parameters, In: International Journal of Computer Vision, 32 (1), pp. 7–25. QUELLET, J. N. AND P. HÉBERT (2009). Precise ellipse estimation without contour point extraction. In: Machine Vision and Applications. Volume 21, Issue 1, pp. 59–67 RICOLFE-VIALA, C. AND A. J. SÁNCHEZ-SALMERÓN (2010). Correcting non-linear lens distortion in cameras without using a model. In: Optics & Laser Technology 42, pp. 628–639. 
RIEKE-ZAPP, D., W. TECKLENBURG, J. PEIPE, H. HASTEDT AND C. HAIG (2009). Evaluation of the Geometric Stability and the Accuracy Potential of Digital Cameras – Comparing Mechanical Stabilisation Versus Parameterisation. In: ISPRS Journal of Photogrammetry and Remote Sensing 64, pp. 248–258.
SANDAU, R. (2010). Digital Airborne Camera: Introduction and Technology. Published by Springer.
SCHMID, H. H. (1953). An Analytical Treatment of the Orientation of a Photogrammetric Camera. Ballistic Research Laboratories, Report No. 880, Aberdeen Proving Ground, Maryland, USA.
SHORTIS, M. R., T. A. CLARKE AND S. ROBSON (1995). Practical Testing of the Precision and Accuracy of Target Image Centring Algorithms. In: Videometrics IV, SPIE 2598.


SHORTIS, M. R. AND H. A. BEYER (1997). Calibration stability of the Kodak DCS420 and 460 cameras. In: Videometrics V, SPIE 3174, pp. 94–105.
SHORTIS, M. R., C. J. BELLMAN, S. ROBSON, G. J. JOHNSTON AND G. W. JOHNSON (2006). Stability of Zoom and Fixed Lenses Used With Digital SLR Cameras. In: IAPRS Volume XXXVI, Part 5, pp. 285–290, Dresden.
SLAMA, CH. C., CH. THEURER AND S. W. HENRIKSEN (1980). Manual of Photogrammetry. Published by American Society of Photogrammetry, Falls Church, Virginia, USA.
STAMATOPOULOS, C. AND C. FRASER (2011). Calibration of Long Focal Length Cameras in Close Range Photogrammetry. In: The Photogrammetric Record, 26 (135), pp. 339–360.
TAN, T. N., G. D. SULLIVAN AND K. D. BAKER (1995). Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles. In: Proceedings of the British Machine Vision Conference, Vol. 1, pp. 177–186.
TANG, R., D. FRITSCH AND M. CRAMER (2012). New Rigorous and Flexible Fourier Self-calibration Models for Airborne Camera Calibration. In: ISPRS Journal of Photogrammetry and Remote Sensing 71, pp. 76–85.
VDI/VDE 2634. Optische 3D-Messtechnik. VDI/VDE-Richtlinie, Blatt 1, Beuth Verlag, Berlin.
WILLSON, R. G. (1994). Modeling and Calibration of Automated Zoom Lenses. PhD thesis, Carnegie Mellon University.
ZHANG, G., J. HE AND X. YANG (2003). Calibrating camera radial distortion with cross-ratio invariability. In: Optics & Laser Technology, Volume 35, Issue 6, pp. 457–461.


List of Figures

Figure 1-1. Pinhole lens with Canon bayonet mount. .......... 18
Figure 1-2. Goniometer scheme (source: [Cramer 2004]). .......... 20
Figure 1-3. Zeiss goniometer (left), © Zeiss (source: [Cramer 2004]). Historical Wild AKG goniometer for visual calibration tasks (right; source: [Slama et al. 1980]). .......... 21
Figure 1-4. Multi-collimator scheme (source: [Cramer 2004]). .......... 21
Figure 1-5. Multi-collimator at the USGS. © USGS (source: [Cramer 2004]). .......... 22
Figure 1-6. Plumb-line calibration field (source: [Gruen et al. 2001]). .......... 23
Figure 1-7. Terrestrial calibration field used for calibration of the UltracamD digital aerial camera, © Vexcel (source: [Cramer 2004]). .......... 25
Figure 2-1. Pinhole optical system (left) and glass optical system (right). .......... 27
Figure 2-2. Arrangement of the proposed calibration method. .......... 28
Figure 2-3. Schematic figure of the camera holding the pinhole lens, illustrating the principal distance cpin and the FOV denoted as α. .......... 29
Figure 3-1. Illustration of the global coordinate system definition, where the origin is located in the projection center of the ideal OS and the camera sensor is the negative. xred and yred denote the axes of the global coordinate system, while x and y denote the axes of the local coordinate system. .......... 31
Figure 3-2. Illustration of the pinhole OS projection center position with respect to the global coordinate system. .......... 34
Figure 3-3. Illustration of the projection of the object point p to the sensor by the pinhole optical system. .......... 35
Figure 3-4. Illustration of the projection of the object point p to the sensor by the glass optical system. .......... 35
Figure 3-5. Illustration of the projection of the object point p to the sensor by the ideal optical system. Points denoted with H are the principal points of the ideal optical system (they differ from the principal point used in photogrammetry!). .......... 36
Figure 3-6. Illustration of the projection of the object point p to the sensor by the pinhole and glass optical systems. .......... 36
Figure 3-7. Illustration of the transformation (equation (3.12)) of the target images given by the pinhole optical system (red circles in the left image) to target images given by the ideal optical system (red circles in the right image; those target images do not exist in reality). Blue circles denote the target images given by the glass optical system; those target images are used for defining the position of the ideal optical system. .......... 44
Figure 3-8. Comparison of the ideal optical system with the pinhole optical system. Points denoted with H are the principal points of the ideal optical system (they differ from the principal point used in photogrammetry!). .......... 44
Figure 3-9. Illustration of the transformation (equation (3.36)) of the target images given by the glass optical system (blue circles in the left image) to the position given by the ideal optical system (red circles). The magnitude of the residual vectors after the transformation depends on the applicability of the function fd. Blue circles in the right image denote the corrected positions after the transformation. .......... 45
Figure 3-10. Comparison of the ideal optical system with the glass optical system. .......... 46
Figure 4-1. Illustration of the global coordinate system definition, where the origin is located in the projection center of the pinhole OS and the camera sensor is the negative. xred and yred denote the axes of the global coordinate system, while x and y denote the axes of the local coordinate system. .......... 47
Figure 4-2. Illustration of the projection of the rectangle to the sensor plane and the two vanishing points constituted by opposite sides of the rectangle. .......... 48
Figure 4-3. Projective geometry of four points, inducing the cross-ratio invariant. .......... 51
Figure 4-4. Calibration field for the method of interior orientation elements recovery using perspective views of a rectangle, extended with three more points used for parallelism correction. .......... 52
Figure 4-5. Situation of the calibration field in 3D Euclidean space, showing the real targets as black dots and auxiliary points as circles, including the vanishing point q and the corrected points ccor and dcor. .......... 52
Figure 4-6. Situation of the projected calibration field in 2D projective space, showing the target images as black dots and auxiliary points as circles, including the vanishing points p, q and the corrected point ccor. .......... 53
Figure 4-7. An example of a large dataset of simulated camera poses, symbolised by green dots. Black dots symbolise the calibration field targets. .......... 56
Figure 4-8. An example of the simulation of three images of the calibration field, generated by rotating the camera around the z-axis. .......... 57
Figure 5-1. Left – cross-ratio bar prototype. Right – detailed view of the target mount, assembled from the support plates (white parts) holding the LED diode (rear side) and the black tube with a solid-core end-light optical fiber (front side). The mount can be adjusted in position by using a screw connecting the mount to the aluminum bar. .......... 62
Figure 5-2. Detailed view of the target used in the cross-ratio bar and also in the main calibration field. .......... 62
Figure 5-3. Cross-ratio validation test arrangement. .......... 63
Figure 5-4. Illustration of the middle point error e due to a non-zero angle α. .......... 64
Figure 5-5. Schematic image of the cross-ratio bar in the horizontal direction and perpendicular to the optical axis. .......... 67
Figure 5-6. An example of simulated input data (sample of five images). .......... 69
Figure 6-1. Visualization of target images given by the pinhole and glass optical systems. The bottom part of the figure shows four images of a selected target. The first two images were acquired by the glass optical system from the distances defined by magnifications m1 (left) and m6 (right), while the next couple of images were acquired by the pinhole optical system from the same distances. The upper part of the figure shows the intensity profiles of the corresponding target images. .......... 71
Figure 6-2. Schematic picture of the color channel separation process. This figure also shows that different dimensions of each channel can occur; this is caused by an odd number of rows or columns of the raw image. .......... 72
Figure 6-3. Local coordinate systems of the red (a), green1 (b), green2 (c) and blue (d) channels described by the tfm matrices given in (6.1). Figure (a) also shows the example provided in (6.2) and the global coordinate system. .......... 74
Figure 6-4. This figure shows the selected target image binarised at ten different levels. The last levels will be rejected due to the low number of contour pixels. Blue circles symbolize the ellipse fitting process. .......... 75
Figure 6-5. This figure shows an area selected around the approximate position of the target image. Red pixels are used for estimating the amount of present noise. .......... 77
Figure 6-6. Target detector scheme. .......... 78
Figure 7-1. The cross-ratio bar target locations and residual vectors for m1–m6. The scale of the residual vectors is 1:1000. .......... 86


Figure 7-2. This figure shows the residual vectors given by the Euclidean transformation of the datasets provided by the pinhole OS. The scale of the residual vectors is 1:10000. .......... 89
Figure 7-3. This figure shows the residual vectors given by the Euclidean transformation of the datasets provided by the glass OS. The scale of the residual vectors is 1:10000. .......... 90
Figure 7-4. This figure shows the residual vectors given by the 2D projective transformation of the datasets provided by the pinhole OS. The scale of the residual vectors is 1:10000. .......... 91
Figure 7-5. This figure shows the residual vectors given by the 2D projective transformation of the datasets provided by the glass OS. The scale of the residual vectors is 1:10000. .......... 92
Figure 7-6. Visualization of the chromatic aberration effect given by the color channel comparison. a – R v. G, b – R v. B, c – G v. B, d – G1 v. G2. The residual vector scale is: a, b, c – 1:300; d – 1:10000. .......... 94
Figure 7-7. Visualization of the chromatic aberration effect given by the color channel comparison after the Euclidean transformation. a – R v. G, b – R v. B, c – G v. B, d – G1 v. G2. The residual vector scale is: a, b, c – 1:1000; d – 1:10000. .......... 95
Figure 7-8. The arrangement of the lens calibration test, consisting of a stabilized tripod holding the optical system (camera body with lens) and the calibration field prototype. .......... 96
Figure 7-9. The arrangement of the cross-ratio validation test, where the support stand holding the calibration field is used for moving the cross-ratio bar. .......... 97
Figure 7-10. Illustration of the distortion progress (for the green channel) given by using the Euclidean transformation. The scale of the distortion vectors is 1:50. .......... 102
Figure 7-11. Illustration of the distortion progress (for the green channel) given by using the 2D projective transformation. The scale of the distortion vectors is 1:50. .......... 103
Figure 7-12. Illustration of the residual vectors (for the green channel) given by applying the Brown polynomial correction after the Euclidean transformation. The scale of the residual vectors is 1:1000. .......... 104
Figure 7-13. Illustration of the residual vectors (for the green channel) given by applying the Brown polynomial correction after the 2D projective transformation. The scale of the residual vectors is 1:1000. .......... 105
Figure 7-14. Radial distortion progress (magnification m1) in each color channel, given by the Brown polynomial parameters estimated after the Euclidean transformation. .......... 108
Figure 7-15. Radial distortion progress for each magnification (for the green channel), given by the Brown polynomial parameters estimated after the Euclidean transformation. .......... 108
Figure 7-16. Hypsometric visualization of the absolute values of the distortion for magnifications m1–m5 (for the green channel). .......... 109
Figure 7-17. The cross-ratio bar target locations and residual vectors for m1–m6. The scale of the residual vectors is 1:1000. All vertical measurements corresponding to magnification m6 had to be removed because they were outside the calibrated area; therefore, no residual vectors in the vertical direction are shown. .......... 113


List of Tables

Table 3-1. Parameters of the evaluated configurations. .......... 42
Table 3-2. Results of error propagation for various configurations. .......... 42
Table 4-1. Parameters of the pinhole optical system. .......... 58
Table 4-2. Parameters of the exterior orientation elements simulation. .......... 58
Table 4-3. Test parameters. .......... 58
Table 4-4. Validity test results. .......... 59
Table 4-5. Results of three tests on the accuracy of the interior orientation parameters of the pinhole optical system, computed by using the method of imaging a rectangle-shaped calibration field. .......... 59
Table 5-1. Simulated results of the cross-ratio validation test, with sensor size (60 × 60) mm, pixel size 10 µm and number of images (m) = 10000. The table lists the a priori given population variance of the image coordinate measurement, the sample variance estimate of the image coordinate measurement, and the cross-ratio sample unity variance. .......... 70
Table 7-1. Pinhole OS imaging parameters related to the second test. .......... 80
Table 7-2. Target detector parameters. .......... 80
Table 7-3. Target detector test results given by the cross-ratio validation test and the 2D projective transformation method. .......... 81
Table 7-4. Glass OS imaging parameters. .......... 81
Table 7-5. Target detector parameters. .......... 82
Table 7-6. Target detector test results given by the 2D projective transformation method. .......... 82
Table 7-7. Target detector configuration used for the pinhole OS validation test. .......... 84
Table 7-8. Pinhole OS imaging parameters. .......... 84
Table 7-9. Parameters of the cross-ratio validation test. .......... 85
Table 7-10. Results of the cross-ratio validation test. .......... 85
Table 7-11. The pinhole lens mount test using the Euclidean transformation. All values are in micrometers. .......... 89
Table 7-12. The glass lens mount test using the Euclidean transformation. All values are in micrometers. .......... 90
Table 7-13. The pinhole lens mount test using the 2D projective transformation. I – results of the target detector test, II – results of the mount test (Euclidean transformation), III – results of the mount test (2D projective transformation). All values are in micrometers. .......... 91
Table 7-14. The glass lens mount test using the 2D projective transformation. I – results of the target detector test, II – results of the mount test (Euclidean transformation), III – results of the mount test (2D projective transformation). All values are in micrometers. .......... 92
Table 7-15. The chromatic aberration measure. Parameters of the Euclidean transformation performed between color channels. All values are in micrometers. .......... 95
Table 7-16. Target detector parameters. .......... 98
Table 7-17. Digital still camera (DSC) body specifications. .......... 98
Table 7-18. Pinhole lens specifications. .......... 98
Table 7-19. Glass lens specifications. .......... 98
Table 7-20. Pinhole OS imaging parameters related to the glass lens calibration experiment (m1–m6). .......... 99
Table 7-21. Glass OS imaging parameters related to the glass lens calibration experiment (m1–m6). .......... 99
Table 7-22. Experiment specifications. .......... 99
Table 7-23. Computed lens distortion values (by using the Euclidean transformation) corresponding to m6 and sample SD (only for …). Subscripts R, G, B denote the color channels. .......... 101
Table 7-24. Results of the lens calibration by using the Euclidean transformation, followed by estimation of the Brown polynomial parameters (A0, A1, A2, P1, P2). sIxy denotes the SD estimate of the residual vectors given by using the Euclidean transformation only; sIIxy denotes the SD estimate of the residual vectors given by applying the Brown polynomial correction after the Euclidean transformation. The estimated parameters of the Euclidean transformation (λ, tx, ty) are not shown here, because their values are meaningless in this experiment. Brown polynomial parameters were computed only for the green channel (except m1). .......... 106
Table 7-25. Results of the lens calibration by using the 2D projective transformation, followed by estimation of the Brown polynomial parameters (A0, A1, A2, P1, P2). sIxy denotes the SD estimate of the residual vectors given by using the 2D projective transformation only; sIIxy denotes the SD estimate of the residual vectors given by applying the Brown polynomial correction after the 2D projective transformation. The estimated parameters of the 2D projective transformation (hii) are not shown here, because their values are meaningless in this experiment. Brown polynomial parameters were computed only for the green channel (except m1). .......... 107
Table 7-26. Glass OS imaging parameters. .......... 111
Table 7-27. Parameters of the cross-ratio validation test. .......... 111
Table 7-28. Results of the cross-ratio validation test. I – no correction applied, II – Euclidean + Brown, III – 2D projective (interpolation), IV – Euclidean (interpolation). All values are in micrometers. Values for the second type of correction have been computed only for the green channel, besides m1. .......... 112
Table 0-1. Adjustment parameters. .......... 127
Table 0-1. Adjustment parameters. .......... 131
Table 0-1. Adjustment parameters. .......... 132
Table 0-1. Adjustment parameters. .......... 133


Appendix A

Throughout this work, several problems have been set up whose solution is not unique, owing to a large number of redundant observations. Such cases need to be treated with an adjustment method. We chose the method of "Adjustment with Conditions Only – General Case (Adjustment of Observations and Functionally Independent Parameters)", fully described in [Mikhail et al. 1976]. In this chapter, we briefly describe the basics of this approach. Most of the following text is taken directly from that book. For the purpose of this chapter, we introduce selected notation used in the reference book:

c ......... number of condition equations
n ......... number of observations
u ......... number of unknown parameters
r ......... the redundancy, or degrees of freedom
l ......... observations
v ......... vector of residuals
Delta ..... unknown parameters
A ......... Jacobian matrix corresponding to the observation variables
B ......... Jacobian matrix corresponding to the unknown parameters
Q ......... cofactor matrix
Sigma ..... covariance matrix
W ......... weight matrix

Each problem in this work that has to be solved is described by a functional model, leading to a set of condition equations in the linearized form:

A v + B Delta = f                                        (0.1)

where

f = -F(l, Delta0)                                        (0.2)

is a column vector of constants (the misclosures, evaluated from the observations l and the approximate parameter values Delta0). The matrix dimensions in equations (0.1) and (0.2) are:

A: c x n,  B: c x u,  v: n x 1,  Delta: u x 1,  f: c x 1    (0.3)

Since c < n + u for the cases treated in this work, there exist many solutions of equation (0.1). A unique least squares solution is obtained by incorporating the following criterion:

v^T W v = minimum                                        (0.4)

After a derivation (based on the method of constrained minima using Lagrange multipliers) described in the reference book, we get the solution for the unknown parameters:

Delta = N^-1 t                                           (0.5)

where:

N = B^T We B,  t = B^T We f                              (0.6)

The vector of Lagrange multipliers is:

k = We (f - B Delta)                                     (0.7)

where:

We = (A Q A^T)^-1                                        (0.8)

The vector of residuals is:

v = Q A^T k                                              (0.9)

where:

Q = W^-1                                                 (0.10)

The vector of adjusted observations is:

l_adj = l + v                                            (0.11)

The estimate of the reference variance s0^2 is:

s0^2 = (v^T W v) / r,  with r = c - u                    (0.12)


The cofactor matrices of Delta, v and the adjusted observations are:

Q_DeltaDelta = (B^T We B)^-1                             (0.13)

Q_vv = Q A^T (We - We B Q_DeltaDelta B^T We) A Q         (0.14)

Q_ll(adj) = Q - Q_vv                                     (0.15)

where the cofactor, covariance and weight matrices are related by:

Sigma = sigma0^2 Q                                       (0.16)

W = Q^-1                                                 (0.17)

where sigma0^2 denotes the a priori reference variance.

The variance of the estimated parameter Delta_i is then:

s^2(Delta_i) = s0^2 (Q_DeltaDelta)_ii                    (0.18)
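The adjustment described above lends itself to a compact numerical sketch. The following NumPy function (the name `condition_adjustment` is illustrative, not part of the thesis software) assumes the linearized A, B, f and the weight matrix W have already been built:

```python
import numpy as np

def condition_adjustment(A, B, f, W):
    """General-case condition adjustment (after Mikhail et al. 1976):
    A v + B Delta = f, minimising v^T W v.
    A: (c, n) Jacobian wrt observations, B: (c, u) Jacobian wrt parameters,
    f: (c,) misclosure vector, W: (n, n) weight matrix of the observations."""
    Q = np.linalg.inv(W)                  # cofactor matrix of the observations
    We = np.linalg.inv(A @ Q @ A.T)       # equivalent weights of the conditions
    N = B.T @ We @ B
    delta = np.linalg.solve(N, B.T @ We @ f)   # unknown parameters
    k = We @ (f - B @ delta)                   # Lagrange multipliers
    v = Q @ A.T @ k                            # residuals
    c, _ = A.shape
    r = c - B.shape[1]                         # redundancy
    s0_sq = float(v @ W @ v) / r if r > 0 else float('nan')
    Q_dd = np.linalg.inv(N)                    # cofactor matrix of the parameters
    return delta, v, s0_sq, Q_dd
```

As a sanity check, with A = -I, B a column of ones and f = l, the model reduces to n direct observations of a single unknown, and the function returns their weighted mean.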


Appendix B

Notation: lbd, tx, ty, cp, pxp, pyp, z, ci, pxi, pyi

Condition equations are defined by the following functions:

f1 = ci*z / (cp*(ci - cp + z)) - lbd                                  (0.1)

f2 = pxi - tx - ci*(pxp*z + cp*(pxi - pxp)) / (cp*(ci - cp + z))      (0.2)

f3 = pyi - ty - ci*(pyp*z + cp*(pyi - pyp)) / (cp*(ci - cp + z))      (0.3)

Table 0-1. Adjustment parameters.

As can be seen in Table 0-1, there are no redundant observations in this problem and therefore no least squares solution is necessary. However, it is practical to use our Matlab program for condition adjustment for two reasons: 1/ the system of condition equations is non-linear and needs to be linearized, and the linearization process is implemented in the program; 2/ the error propagation process is also already implemented in the program. All partial derivatives were computed in SW Matlab v7.9.0 (Mathworks co.).


Partial derivatives of function f1:

diff(f1,lbd) = -1                                                       (0.4)

diff(f1,tx) = 0                                                         (0.5)

diff(f1,ty) = 0                                                         (0.6)

diff(f1,cp) = ci*z / (cp*(ci - cp + z)^2) - ci*z / (cp^2*(ci - cp + z)) (0.7)

diff(f1,pxp) = 0                                                        (0.8)

diff(f1,pyp) = 0                                                        (0.9)

diff(f1,z) = ci / (cp*(ci - cp + z)) - ci*z / (cp*(ci - cp + z)^2)      (0.10)

diff(f1,ci) = z / (cp*(ci - cp + z)) - ci*z / (cp*(ci - cp + z)^2)      (0.11)

diff(f1,pxi) = 0                                                        (0.12)

diff(f1,pyi) = 0                                                        (0.13)


Partial derivatives of function f2:

diff(f2,lbd) = 0                                                        (0.14)

diff(f2,tx) = -1                                                        (0.15)

diff(f2,ty) = 0                                                         (0.16)

diff(f2,cp) = ci*(pxp*z + cp*(pxi - pxp)) / (cp^2*(ci - cp + z)) - ci*(pxp*z + cp*(pxi - pxp)) / (cp*(ci - cp + z)^2) - ci*(pxi - pxp) / (cp*(ci - cp + z))    (0.17)

diff(f2,pxp) = ci*(cp - z) / (cp*(ci - cp + z))                         (0.18)

diff(f2,pyp) = 0                                                        (0.19)

diff(f2,z) = ci*(pxp*z + cp*(pxi - pxp)) / (cp*(ci - cp + z)^2) - ci*pxp / (cp*(ci - cp + z))    (0.20)

diff(f2,ci) = ci*(pxp*z + cp*(pxi - pxp)) / (cp*(ci - cp + z)^2) - (pxp*z + cp*(pxi - pxp)) / (cp*(ci - cp + z))    (0.21)

diff(f2,pxi) = 1 - ci / (ci - cp + z)                                   (0.22)

diff(f2,pyi) = 0                                                        (0.23)


Partial derivatives of function f3:

diff(f3,lbd) = 0                                                        (0.24)

diff(f3,tx) = 0                                                         (0.25)

diff(f3,ty) = -1                                                        (0.26)

diff(f3,cp) = ci*(pyp*z + cp*(pyi - pyp)) / (cp^2*(ci - cp + z)) - ci*(pyp*z + cp*(pyi - pyp)) / (cp*(ci - cp + z)^2) - ci*(pyi - pyp) / (cp*(ci - cp + z))    (0.27)

diff(f3,pxp) = 0                                                        (0.28)

diff(f3,pyp) = ci*(cp - z) / (cp*(ci - cp + z))                         (0.29)

diff(f3,z) = ci*(pyp*z + cp*(pyi - pyp)) / (cp*(ci - cp + z)^2) - ci*pyp / (cp*(ci - cp + z))    (0.30)

diff(f3,ci) = ci*(pyp*z + cp*(pyi - pyp)) / (cp*(ci - cp + z)^2) - (pyp*z + cp*(pyi - pyp)) / (cp*(ci - cp + z))    (0.31)

diff(f3,pxi) = 0                                                        (0.32)

diff(f3,pyi) = 1 - ci / (ci - cp + z)                                   (0.33)
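The derivatives above were produced with Matlab's symbolic toolbox; as a sketch, the same Jacobian entries can be re-derived and cross-checked with SymPy (symbol names follow the notation of this appendix):

```python
import sympy as sp

lbd, tx, ty, cp, pxp, pyp, z, ci, pxi, pyi = sp.symbols(
    'lbd tx ty cp pxp pyp z ci pxi pyi')

# condition equations (0.1)-(0.3)
f1 = ci*z/(cp*(ci - cp + z)) - lbd
f2 = pxi - tx - ci*(pxp*z + cp*(pxi - pxp))/(cp*(ci - cp + z))
f3 = pyi - ty - ci*(pyp*z + cp*(pyi - pyp))/(cp*(ci - cp + z))

# e.g. equation (0.22): d f2 / d pxi should equal 1 - ci/(ci - cp + z)
d = sp.simplify(sp.diff(f2, pxi) - (1 - ci/(ci - cp + z)))
print(d)  # 0
```

The same pattern (differentiate, subtract the tabulated expression, simplify to zero) verifies each entry of the Jacobian.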


Appendix C

Notation: lbd, tx, ty, xp, yp, xg, yg

Condition equations are defined by the following functions:

f1 = tx - xg + lbd*xp    (0.1)

f2 = ty - yg + lbd*yp    (0.2)

Table 0-1. Adjustment parameters, where … is the number of targets.

Because both functions (0.1) and (0.2) are linear, the partial derivatives are trivial and will not be shown here.

The distortion values dx and dy are given directly by the misclosures of the condition equations:

dx = xg - tx - lbd*xp,  dy = yg - ty - lbd*yp    (0.3)

The variance of the distortion value follows from the law of error propagation applied to (0.3).    (0.4)
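Since (0.1) and (0.2) are linear in lbd, tx and ty, the whole adjustment reduces to an ordinary least squares fit; a minimal NumPy sketch (the function name `fit_scale_translation` is illustrative), assuming xp, yp are pinhole-image and xg, yg glass-image target coordinates:

```python
import numpy as np

def fit_scale_translation(xp, yp, xg, yg):
    """Estimate lbd, tx, ty from f1 = tx - xg + lbd*xp and
    f2 = ty - yg + lbd*yp, then return the misclosures, i.e. the
    per-target distortion components dx, dy."""
    n = len(xp)
    # design matrix for the stacked equations xg = lbd*xp + tx, yg = lbd*yp + ty
    A = np.zeros((2 * n, 3))
    A[:n, 0], A[:n, 1] = xp, 1.0
    A[n:, 0], A[n:, 2] = yp, 1.0
    b = np.concatenate([xg, yg])
    (lbd, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    dx = xg - (tx + lbd * xp)   # distortion value, x component
    dy = yg - (ty + lbd * yp)   # distortion value, y component
    return lbd, tx, ty, dx, dy
```

With distortion-free synthetic data the misclosures vanish; with real glass-lens data they trace out the distortion progress directly, which is the point of the method.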


Appendix D

Notation: pxp, pyp, cp, xp, yp, xq, yq

The condition equation is defined by the following function:

f1 = cp^2 + (pxp - xp)*(pxp - xq) + (pyp - yp)*(pyp - yq)    (0.1)

Table 0-1. Adjustment parameters, where … is the number of images.

Partial derivatives of function f1:

diff(f1,pxp) = 2*pxp - xp - xq    (0.2)

diff(f1,pyp) = 2*pyp - yp - yq    (0.3)

diff(f1,cp) = 2*cp                (0.4)

diff(f1,xp) = xq - pxp            (0.5)

diff(f1,yp) = yq - pyp            (0.6)

diff(f1,xq) = xp - pxp            (0.7)

diff(f1,yq) = yp - pyp            (0.8)
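Because the derivatives (0.2)–(0.8) are so simple, they are easy to verify numerically; a small self-contained sketch comparing the analytic derivative against a central finite difference (the argument ordering is an assumption made for illustration):

```python
def f1(pxp, pyp, cp, xp, yp, xq, yq):
    # condition equation (0.1)
    return cp**2 + (pxp - xp)*(pxp - xq) + (pyp - yp)*(pyp - yq)

def num_diff(f, args, i, h=1e-6):
    # central finite difference with respect to the i-th argument
    a_plus = list(args);  a_plus[i]  += h
    a_minus = list(args); a_minus[i] -= h
    return (f(*a_plus) - f(*a_minus)) / (2 * h)

# arbitrary evaluation point: pxp, pyp, cp, xp, yp, xq, yq
args = [1.2, -0.4, 35.0, 3.0, 5.0, -2.0, 1.0]
pxp, pyp, cp, xp, yp, xq, yq = args
analytic = 2*pxp - xp - xq              # equation (0.2)
assert abs(num_diff(f1, args, 0) - analytic) < 1e-5
```

Since (0.1) is quadratic, the central difference agrees with the analytic derivative up to rounding error.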


Appendix E

Notation: pxg, pyg, xg, yg, xi, yi, A0, A1, A2, A3, P1, P2

Condition equations are defined by the following functions:

f1 = xi - pxg - xg_red*(1 + A0 + A1*r2 + A2*r2^2 + A3*r2^3) - P1*(r2 + 2*xg_red^2) - 2*P2*xg_red*yg_red    (0.1)

f2 = yi - pyg - yg_red*(1 + A0 + A1*r2 + A2*r2^2 + A3*r2^3) - P2*(r2 + 2*yg_red^2) - 2*P1*xg_red*yg_red    (0.2)

where: xg_red = xg - pxg, yg_red = yg - pyg, r2 = xg_red^2 + yg_red^2

(0.3)

Table 0-1. Adjustment parameters, where … is the number of targets.

Partial derivatives of function f1:

diff(f1,pxg) = A0 + A1*r2 + A2*r2^2 + A3*r2^3 + (pxg - xg)*(A1 + 2*A2*r2 + 3*A3*r2^2)*(2*pxg - 2*xg) - 2*P2*(pyg - yg) - P1*(6*pxg - 6*xg)    (0.4)

diff(f1,pyg) = (pxg - xg)*(A1 + 2*A2*r2 + 3*A3*r2^2)*(2*pyg - 2*yg) - 2*P2*(pxg - xg) - P1*(2*pyg - 2*yg)    (0.5)

diff(f1,xg) = -(A0 + A1*r2 + A2*r2^2 + A3*r2^3) - (pxg - xg)*(A1 + 2*A2*r2 + 3*A3*r2^2)*(2*pxg - 2*xg) + 2*P2*(pyg - yg) + P1*(6*pxg - 6*xg) - 1    (0.6)

diff(f1,yg) = 2*P2*(pxg - xg) - (pxg - xg)*(A1 + 2*A2*r2 + 3*A3*r2^2)*(2*pyg - 2*yg) + P1*(2*pyg - 2*yg)    (0.7)

diff(f1,xi) = 1    (0.8)

diff(f1,yi) = 0    (0.9)

diff(f1,A0) = pxg - xg    (0.10)

diff(f1,A1) = r2*(pxg - xg)    (0.11)

diff(f1,A2) = r2^2*(pxg - xg)    (0.12)

diff(f1,A3) = r2^3*(pxg - xg)    (0.13)

diff(f1,P1) = -3*(pxg - xg)^2 - (pyg - yg)^2    (0.14)

diff(f1,P2) = -2*(pxg - xg)*(pyg - yg)    (0.15)

where r2 = (pxg - xg)^2 + (pyg - yg)^2, as defined in (0.3).


Partial derivatives of function f2:

diff(f2,pxg) = (pyg-yg)*(A1*(2*pxg-2*xg) + 2*A2*(2*pxg-2*xg)*((pxg-xg)^2 + (pyg-yg)^2) + 3*A3*(2*pxg-2*xg)*((pxg-xg)^2 + (pyg-yg)^2)^2) - 2*P1*(pyg-yg) - P2*(2*pxg-2*xg)   (0.16)

diff(f2,pyg) = A0 + A1*((pxg-xg)^2 + (pyg-yg)^2) - 2*P1*(pxg-xg) + A2*((pxg-xg)^2 + (pyg-yg)^2)^2 + A3*((pxg-xg)^2 + (pyg-yg)^2)^3 + (pyg-yg)*(A1*(2*pyg-2*yg) + 2*A2*(2*pyg-2*yg)*((pxg-xg)^2 + (pyg-yg)^2) + 3*A3*(2*pyg-2*yg)*((pxg-xg)^2 + (pyg-yg)^2)^2) - P2*(6*pyg-6*yg)   (0.17)

diff(f2,xg) = 2*P1*(pyg-yg) - (pyg-yg)*(A1*(2*pxg-2*xg) + 2*A2*(2*pxg-2*xg)*((pxg-xg)^2 + (pyg-yg)^2) + 3*A3*(2*pxg-2*xg)*((pxg-xg)^2 + (pyg-yg)^2)^2) + P2*(2*pxg-2*xg)   (0.18)

diff(f2,yg) = 2*P1*(pxg-xg) - A1*((pxg-xg)^2 + (pyg-yg)^2) - A0 - A2*((pxg-xg)^2 + (pyg-yg)^2)^2 - A3*((pxg-xg)^2 + (pyg-yg)^2)^3 - (pyg-yg)*(A1*(2*pyg-2*yg) + 2*A2*(2*pyg-2*yg)*((pxg-xg)^2 + (pyg-yg)^2) + 3*A3*(2*pyg-2*yg)*((pxg-xg)^2 + (pyg-yg)^2)^2) + P2*(6*pyg-6*yg) - 1   (0.19)

diff(f2,xi) = 0   (0.20)

diff(f2,yi) = 1   (0.21)

diff(f2,A0) = pyg - yg   (0.22)

diff(f2,A1) = ((pxg-xg)^2 + (pyg-yg)^2)*(pyg-yg)   (0.23)

diff(f2,A2) = ((pxg-xg)^2 + (pyg-yg)^2)^2*(pyg-yg)   (0.24)

diff(f2,A3) = ((pxg-xg)^2 + (pyg-yg)^2)^3*(pyg-yg)   (0.25)

diff(f2,P1) = -2*(pxg-xg)*(pyg-yg)   (0.26)

diff(f2,P2) = -(pxg-xg)^2 - 3*(pyg-yg)^2   (0.27)
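As a sketch of how these expressions can be validated (the numeric test point is an assumption, not thesis data), the following evaluates f1 of (0.1) and verifies one representative analytic partial, (0.11) with respect to A1, by a central difference:

```python
# Condition f1 of the distortion model (0.1) and a central-difference
# check of the analytic partial d f1 / d A1, equation (0.11).
# Names follow the appendix notation; numeric values are assumed.

def f1(pxg, pyg, xg, yg, xi, yi, A0, A1, A2, A3, P1, P2):
    xr, yr = xg - pxg, yg - pyg        # reduced image coordinates
    r2 = xr**2 + yr**2                 # squared radial distance
    radial = 1 + A0 + A1*r2 + A2*r2**2 + A3*r2**3
    return xi - pxg - xr*radial - P1*(r2 + 2*xr**2) - 2*P2*xr*yr

def df1_dA1(pxg, pyg, xg, yg, *_):
    # Analytic partial (0.11): ((pxg-xg)^2 + (pyg-yg)^2)*(pxg-xg)
    return ((pxg - xg)**2 + (pyg - yg)**2) * (pxg - xg)

# Test point: (pxg, pyg, xg, yg, xi, yi, A0, A1, A2, A3, P1, P2)
p = [0.2, -0.3, 1.4, 0.9, 1.1, 0.8, 1e-3, 2e-4, -1e-5, 3e-7, 5e-5, -2e-5]
h = 1e-6
pp, pm = list(p), list(p)
pp[7] += h                             # perturb A1 (index 7)
pm[7] -= h
num = (f1(*pp) - f1(*pm)) / (2*h)
assert abs(num - df1_dA1(*p)) < 1e-9
```

Because f1 is linear in A1, the central difference reproduces (0.11) up to rounding; the remaining partials (0.4) through (0.27) can be checked the same way by perturbing the corresponding parameter index.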
